Ah yes, I know this, and if my memory serves me correctly, some configurations make MySQL think it’s the only service running on the server, and thus consume whatever is available.
I don’t believe that’s the case here, though; that configuration shouldn’t be the default one.
I don’t know what your standard is here, but the final test results showed me that over 750 MB were transferred per second.
Well, I guess I have to agree with this one. Unfortunately, this is how those shady Chinese panels have been advertising, and many people did believe them.
I still use a control panel whenever I need to manage full servers, or just to handle my local environment, but it never hurts to have some commands in mind.
A standard MySQL installation uses fixed limits, which are pretty conservative with memory usage. If you want it to use all of the memory available on the server, you will have to tune it yourself. Which settings should be changed, and how, depends on your dataset, configuration, and query behavior.
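As a rough sketch only (not a recommendation; the right values depend entirely on your data and workload), the InnoDB buffer pool size is usually the single most important knob on a server dedicated to MySQL:

```
# Illustrative only. A common rule of thumb on a dedicated DB host is to give
# the InnoDB buffer pool roughly 50-75% of RAM, set in my.cnf:
#   [mysqld]
#   innodb_buffer_pool_size = 4G
# Check what is currently configured:
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
```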
How much data was transferred is not important. How much data is stored (or rather accessed) is what affects memory usage.
If you give MySQL a lot of memory to use, it becomes possible to load your entire dataset into memory, which should be faster than having to read all the data from disk.
But if you have a fairly big server and can give MySQL 8 GB of memory to store cached data, yet you only have 500 MB of data, then that cache will never hold more than 500 MB. If the entire dataset is already in memory, what else could MySQL use the rest of that memory for?
So copying a small dataset from one piece of memory (the MySQL buffers) to another piece of memory (the database client’s process) will give you amazing throughput, but doesn’t tell you anything about the stability of a system with a larger dataset.
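If you want to see this for yourself, one way to check (assuming InnoDB) is to compare how many buffer pool pages actually hold data against the total pool size; once the whole dataset is cached, the “data” figure stops growing no matter how large the pool is:

```
# Pages actually holding cached data vs. total pages in the buffer pool.
# Multiply page counts by innodb_page_size (16 KB by default) to get bytes.
mysql -e "SHOW STATUS LIKE 'Innodb_buffer_pool_pages%';"
```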
A few weeks ago @Repetza let me buy his VPS with 3 allocated EPYC Milan cores, 6 GB RAM, and 90 GB storage. I previously hosted Mastodon on it, but realized my internet is kind of expensive for loading a few megabytes of pictures, so I host Discourse instead.
I somehow think the forum has been getting slower and slower since the admin decided it would use the beta version rather than stable.
I have no idea what those new updates do, but it’s seriously slower than before, especially with the popups.
Fun fact: by default, new Discourse installations use the tests-passed branch, where new commits must pass certain tests before the code/feature is promoted from main to tests-passed.
Honestly I don’t know why they did this, but in the future, if someone wants to install Discourse on a fresh server pinned to stable, they need to copy samples/standalone.yml to containers/app.yml, change the branch to stable in that file, adjust other settings as needed, and then use ./launcher rebuild app instead of discourse-setup.
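A minimal sketch of what that looks like, assuming the standard /var/discourse checkout of discourse_docker (adjust paths and settings to your own setup):

```
# Clone the official docker setup (skip if it is already present).
git clone https://github.com/discourse/discourse_docker.git /var/discourse
cd /var/discourse

# Start from the sample config and edit it instead of running discourse-setup.
cp samples/standalone.yml containers/app.yml
# In containers/app.yml set, under params:
#   version: stable
# and fill in hostname, admin email, SMTP settings, etc.

./launcher rebuild app
```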
I made the switch to tests-passed after a bug appeared in the stable version and the Discourse devs were VERY reluctant to fix it because “we really don’t want to ever push anything to stable”. They recommend tests-passed unless you really don’t want to have to update, which is mostly enterprise clients I think.
Whatever performance problems were introduced in Discourse would have made it into stable eventually.
Oh, and for some reason, my first login attempt on a new device always results in a Cloudflare 502 Bad Gateway error. Only by trying again can I log in. Why is that? Does anyone have the same issue?
I’ve heard many people say that connecting a Windows XP computer to the internet is one of the most dangerous things you can do, so I was just joking with you; I’m sure it’s fine.
I’ve seen it myself a few times. But the issue is very hard to reproduce, so I’m not quite sure.
As far as I can tell, it has something to do with HTTP headers being too big. I’ve increased the limits already, but for some reason it still doesn’t always work.
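For anyone wanting to check their own setup, the usual nginx knob for oversized request headers is large_client_header_buffers (this assumes nginx is what terminates the request behind Cloudflare; your config may differ):

```
# Assuming nginx is the origin server behind Cloudflare:
grep -R "large_client_header_buffers" /etc/nginx/
# If it is missing or small, raising it in the http {} block may help, e.g.:
#   large_client_header_buffers 4 32k;
# then test and reload:
nginx -t && nginx -s reload
```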