Discussion about free hosting limits

Hi Admin,

This is the part where I don’t get the “how”. I wonder how this is even possible, and whether the factor is controllable by the web admin. As far as I know, sites can get “sudden fame” and receive so much traffic that they hit the daily limits without the web admin being able to do anything about it, but that’s still a daily limit.

I see. Is upgrading to premium an option, though?


The temp suspensions don’t always trigger immediately, so a flood can cause you to go way over the limit (it’s happened to me once, and I argued the point).

Not always. Depends on how far over you went. They don’t want really high traffic and stuff like that on premium shared hosting either. I suppose they’d be OK with it if you went VPS or dedicated, though.


So if a site gets a sudden burst of traffic, has a bug that causes high server load (infinite loops, memory leaks, etc.), or gets attacked, who is responsible for footing the costs?

Should the hosting provider have to accept that this happens and will cause problems for other, unrelated websites? Or dedicate server capacity and migrate the site somewhere it can receive this traffic without impact to the site itself and other sites? And invest a lot of time, money and effort into a site whose owner doesn’t pay a cent extra? Or should other people suffer for your site?

Because the load is there, the cost is there, and someone needs to pay for it.

Hardly anyone purposely overloads the servers. But just because the high load isn’t the result of a deliberate action from the website owner doesn’t mean that the owner should be shielded from what’s happening on their website at all costs.

If you want to reduce the risk that this will happen to you, then you need to account for that in capacity planning. But you won’t be able to get that for free. It’s up to the website owner to weigh the risk and costs.


Hi Admin,

I think there are different cases and different scenarios there and thus should be handled differently.

My inquiry didn’t assume that the website is full of bugs or has security issues in the first place (I wouldn’t get that simply from the term “overloaded”; maybe it’s mentioned in the tickets, but at least not in this post). I agree that insecure websites, or websites that endanger others, should be suspended, with a chance for the web admin to rectify things. As far as I know, no host in the industry would allow insecure websites on its infrastructure.

A burst of traffic, if legitimate, should be accommodated, but since free hosting has a daily limit, I don’t think this would be an issue for discussion; a limit is a limit, it’s a numbers talk. As for infinite loops, we have max_execution_time in place, so that should in theory be taken care of as well.
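For reference, the cap mentioned above is a single PHP directive; here is a sketch of how it is typically set (the values below are illustrative defaults, not the host’s actual, undisclosed settings):

```ini
; php.ini (or a per-directory override, where the host allows one)
; Abort any request whose script runs longer than 30 seconds, so an
; accidental infinite loop cannot occupy a worker indefinitely.
max_execution_time = 30

; Note: this caps script execution time only. Runaway memory is a
; separate limit (memory_limit), and neither setting stops a flood
; of many short requests arriving at once.
memory_limit = 128M
```

The caveat is in the comments: max_execution_time stops one long-running request, but does nothing against high request volume.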

Memory leaks and other factors are the answer to my initial inquiry, given that the source code caused those leaks, e.g. by not closing connections or freeing MySQL query results.
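The “didn’t close connections” pattern is language-agnostic; here is a minimal Python sketch of it (sqlite3 is used purely because it is self-contained; the sites in question would be PHP/MySQL, so that mapping is an assumption):

```python
import sqlite3
from contextlib import closing

def leaky(db_path):
    # Anti-pattern: the connection is never closed. In a long-running
    # worker process, every call leaves another handle open.
    conn = sqlite3.connect(db_path)
    return conn.execute("SELECT 1").fetchone()[0]

def tidy(db_path):
    # closing() guarantees conn.close() even if the query raises.
    # (A bare `with sqlite3.connect(...)` only commits or rolls back
    # the transaction on exit; it does NOT close the connection.)
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute("SELECT 1").fetchone()[0]

print(tidy(":memory:"))  # 1
```

In PHP terms the equivalent discipline would be closing connections and freeing result sets, but exactly what the suspended sites were doing is not known from the thread.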

No, simple as that. Why should others bear the burden? I don’t see how that’s related to my inquiry, though, to be honest.

I’m not sure how this free hosting is set up, and I have no initiative to go too deep into this aspect, but I do expect stable performance from a normal, secure website that runs within the set limits. How one website can affect another on the same machine depends entirely on the server infrastructure, which I believe is a business secret, so I won’t dive deep into that subject. I’ll therefore skip the follow-up money-talk questions; I’m only interested in the technical possibility and the reasoning.

Maybe the suspension message should say something like “security issues found on the current website are causing overload”, so web admins can understand what they have done wrong and not be confused.

Understandably, it’s not feasible to have staff look through code to say something like “this part has something that goes wrong”, but some hints or directions can help a long way. Even something as simple as “abnormally high traffic detected, please implement security measures” would be much more helpful than “overloading the servers”, especially when the target audience is mostly newbies trying out hosting technology.

In case web admins do want to figure it out, this community is definitely the place for them to seek help.

I agree; this is something the programmer should consider: whether their feature could be abused in certain ways, and whether their current setup is strong enough to withstand the expected high traffic.

As logs are not available on free hosting, it would be quite difficult for web admins to know their website is receiving high traffic. A notification after the fact is not something they can respond to while the event is in progress, but at least they’d know something happened, and they could implement a fix the day after (if they get the chance).



I see I misread a thing you said before. I thought you said that sites should be able to get spikes of traffic and should not be punished for it, but that doesn’t seem to be what you actually said. Sorry about the confusion.

As for this particular type of “overload” suspension, I don’t know exactly how it works, but I do know that under the hood it works a bit different from a normal suspension. I also don’t know exactly what triggers it, but I do know that accounts that were suspended in this way hit the daily limits before, and often by quite a big margin from what I could tell.

We’ve also seen cases where people’s sites get so much use that they basically hit the daily limits again just 10 minutes after reactivation. Which, despite being “fair”, doesn’t create much opportunity to fix things.

I should also note that these suspensions were only done at scale during a fairly short period of time (a few months) and are only used very rarely now.


Hi Admin,

While I’m not sure about the metrics for this type of suspension, nor the reason behind it, a clear metric for this specific type would be good to have (if applicable).

Maybe in the reactivation message, or as part of the process, reactivate with an IP restriction that defaults to a registered IP address for the web admin; this way they have a much larger window to fix things before opening to the public again. That being said, if the problem-maker is the web admin themselves, there’s not much else one could do to help them.


The metrics for daily limits are deliberately vague too. So it wouldn’t make sense to have clearer metrics now.

The rule is still that you should stay within the limits, not try to skirt them.

This would be quite complicated to fit into the process. And it assumes that:

  • All development is happening from a single IP address, meaning there is only one person working on the site, from one location, with a relatively consistent IP address.
  • The issue actually can be fixed if given the opportunity, and that the website owner has the required skills to do so.

So it sounds like a lot of work for something that may not actually help solve the problem in many situations.


As a programmer, what I can do is stay below the limits that are known to me. For the vague ones I’m not so sure; those might be aspects that even an experienced coder could accidentally hit without knowing, which as a user I would want to avoid despite my existing low usage.

It depends on perspective. As a user, I would expect the given limits to be what I’m entitled to use without concern; while the metrics can change at any time, the current values are what users go by. It might look as if someone using 9999/10000 inodes is skirting the limit, but to the user it might be strategically planned to keep their data intact, just within the limit and without abusing the system. Who knows what kind of development is going on there, but from your perspective this could be classified as skirting the limit. (And yes, I’m aware the TOS mentions something about not allowing abusive use of the database to circumvent this limitation.) The user might argue if their site was suspended at this point, since they never actually reached the limit (presumably).
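As an aside, a user planning around an inode cap can at least measure their own count: every file and every directory is one inode. A self-contained sketch (the demo directory is a throwaway; a host’s control panel may count slightly differently):

```python
import os
import tempfile

def count_inodes(root):
    """Count root itself plus every file and directory below it."""
    total = 1  # the root directory
    for _path, dirnames, filenames in os.walk(root):
        total += len(dirnames) + len(filenames)
    return total

# Throwaway demo tree: demo/, demo/sub/, demo/a.txt, demo/sub/b.txt
demo = tempfile.mkdtemp()
os.mkdir(os.path.join(demo, "sub"))
open(os.path.join(demo, "a.txt"), "w").close()
open(os.path.join(demo, "sub", "b.txt"), "w").close()
print(count_inodes(demo))  # 4
```

Run against the real document root, this at least tells the user how close to the cap they actually are.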

This assumption is more like a precautionary measure, to prevent others from hitting the website before it’s fixed. An IP restriction is more than enough for any web dev to fix their site. Web devs should have a way of changing this IP (be it via the control panel or simply a .htaccess file) in case they have dynamic IPs. It’s the responsibility of the web dev to maintain this setting until the site is completely fixed. Alternatively, impose access control using the existing protected-directory feature, where only the web dev can access and fix things.
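A sketch of the .htaccess variant of that idea (Apache 2.4 syntax; 203.0.113.7 is a placeholder from the documentation address range, not anyone’s real IP):

```apache
# Hypothetical .htaccess in the site root while repairs are underway:
# only the developer's registered IP can reach the site; every other
# visitor gets a 403 instead of generating server load.
<IfModule mod_authz_core.c>
    Require ip 203.0.113.7
</IfModule>
```

A developer on a dynamic IP would edit that one line whenever their address changes, which matches the maintenance responsibility described above.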

I thought about replying with something about targeting the audience and whether IF promotes learning to code (because, as far as we can see, this community has a lot of novice users who aren’t that good at code), but after a while I’d just say the support team can refer them here and prompt them to ask about the issue here, and solve it here before re-attempting an unlock request (if that’s even allowed after such a long time).

However, their question has to be very specific as to why limit X was hit. In this case, the said limit X is unknown to the web dev, so I also find it hard for them to ask anything technically helpful for us to solve, besides questions like “why is my stuff suspended”. If limit X is inodes, we can then tell them that they have a script creating a huge number of files that has to be handled differently, or that there’s a security issue somewhere in their code that we can advise on case by case.

Well, if there is evidence of intentional breaches by the web dev themselves, block it. There’s nothing to challenge there.


I think you’re really overestimating how actionable the metrics would be. Would knowing that your account is limited to 1 MB/s of IO usage and 50% of a CPU core really help you plan your website? Or, if you were getting close to the limit, would it tell you what you need to do to reduce your usage?

Besides a few known culprits for high load (like backup plugins leading to high IO usage), most of these daily limit suspensions are not actionable. The answer usually is “yes, your usage is high, we don’t know why either”, and it ends with the user either throwing stuff at the wall to see what sticks, or being forced to upgrade to get higher limits.

And in this particular case, the high usage was due to hits. And if those hits are just because of more traffic to your site, there is not much to “fix” in the first place.

The idea that “people should have the chance to fix this” does assume that (most) people can fix it if given the chance. And I just don’t think that’s true.

Most people just build their website, accept the traffic, and see about server load afterwards.

I don’t know how you would even be able to skirt the limits if you wanted to. But I do know that the limits were set to allow for burst usage, not continuously trying to use as much server capacity as possible.

In the past, Softaculous used to have IP-locked sessions. This meant that many people were unable to access Softaculous because basically every request their device made was routed through a different IP address.

Talking about inodes here is not relevant, because nobody is ever suspended due to disk usage or inode usage. Also, unlike most limits, disk usage and inode usage are quite easy to plan for, because you know how big the site is that you’re setting up.

If the limit is not known, it means they are not paying attention, because their account was suspended for hitting a daily limit before. It shouldn’t be hard to guess that if their account is suspended for high usage first, and then suspended for high usage again, it’s probably for the same reason.


Hi Admin,

Well, I would say metrics that are useful, of course; IO and CPU % aren’t what I’d consider helpful either.

Well, then here we have it: the reason for a suspension other than overload. The error message I see directs me to consider a lot more than something that’s fixable. If it’s hits, then there’s nothing that can be done except Cloudflare rate limiting with a custom domain, preventing the hits from reaching the server in the first place. As far as I know, IF counts hits somewhat unusually, including 403 responses as hits, as we discussed before in another post.

As a user, I wouldn’t associate the two suspensions, as it could be something else, unless there is a second clear clue. But if that site is drawing that much traffic, upgrading is definitely the way to go, if they can get a backup.

P.S. Nothing in the post reveals that they had been hitting limits before because of hits, so nobody can tell.


This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.