At any time, web servers can be overloaded because of:
- Excess legitimate web traffic: thousands or even millions of clients connecting to the web site within a short interval (e.g., the Slashdot effect);
- Distributed denial-of-service attacks: a denial-of-service (DoS) or distributed denial-of-service (DDoS) attack is an attempt to make a computer or network resource unavailable to its intended users;
- Computer worms, which sometimes cause abnormal traffic from millions of infected computers acting without coordination;
- XSS viruses, which can cause high traffic from millions of infected browsers and/or web servers;
- Internet bots, whose traffic is not filtered or limited on large web sites with very few resources (bandwidth, etc.);
- Internet (network) slowdowns, which cause client requests to be served more slowly, so the number of open connections grows until server limits are reached;
- Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, hardware or software failures, back-end (e.g., database) failures, etc.; in these cases the remaining web servers receive too much traffic and become overloaded.
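The bot-traffic point above mentions traffic that is not filtered or limited. One common way to impose such a limit is a token-bucket rate limiter; the following is a minimal sketch (the class name and the rate/capacity parameters are illustrative, not from any particular server):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second on average,
    with short bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # bucket starts full
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to the time elapsed, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        # Serve the request only if a whole token is available.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Usage: a bucket allowing bursts of 2 and 1 request/second on average.
bucket = TokenBucket(rate=1.0, capacity=2.0)
```

Each client (e.g., each source IP address) would typically get its own bucket, so abusive bots are throttled without affecting other visitors.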
 Symptoms of overload
The symptoms of an overloaded web server are:
- Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
- The web server returns an HTTP error code, such as 500, 502, 503, 504, or 408, or even 404, which is inappropriate for an overload condition.
- The web server refuses or resets (interrupts) TCP connections before it returns any content.
- In very rare cases, the web server returns only a part of the requested content. This behavior can be considered a bug, even if it usually arises as a symptom of overload.
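To illustrate the error-code symptom above: a saturated server should answer 503 (Service Unavailable), ideally with a Retry-After hint, rather than an unrelated code such as 404. A minimal sketch of that decision (the function name and the connection threshold are hypothetical):

```python
def respond(active_connections: int, max_connections: int) -> tuple:
    """Choose an overload-appropriate HTTP response.

    Returns (status_code, extra_headers). A saturated server should
    return 503 Service Unavailable with a Retry-After hint, telling
    clients the condition is temporary; 404 would wrongly suggest
    the requested resource does not exist.
    """
    if active_connections >= max_connections:
        return 503, {"Retry-After": "30"}  # ask clients to retry in 30 s
    return 200, {}

# Usage: 100 active connections against a limit of 50 -> overloaded.
status, headers = respond(100, 50)
```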
 Anti-overload techniques
To partially overcome the load limits described above and to prevent overload, most popular web sites use common techniques like:
- managing network traffic, by using:
- firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
- HTTP traffic managers to drop, redirect, or rewrite requests having bad HTTP patterns;
- bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
- deploying Web cache techniques;
- using different domain names to serve different (static and dynamic) content by separate web servers, i.e.:
- using different domain names and/or computers to separate big files from small and medium-sized files; the idea is to fully cache small and medium-sized files and to efficiently serve big or huge files (over 10–1,000 MB) by using different settings;
- using many web servers (programs) per computer, each one bound to its own network card and IP address;
- using many web servers (computers) that are grouped together so that they act or are seen as one big web server (see also Load balancer);
- adding more hardware resources (e.g., RAM, disks) to each computer;
- tuning OS parameters for hardware capabilities and usage;
- using more efficient computer programs for web servers, etc.;
- using other workarounds, especially if dynamic content is involved.
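One of the techniques above groups many web servers (computers) so they appear as one big web server behind a load balancer. A minimal round-robin sketch, assuming a hypothetical pool of back-end addresses:

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests across a pool of back-end web
    servers in rotation, so the group is seen as one big server."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)  # endless rotation

    def pick(self) -> str:
        """Return the back end that should handle the next request."""
        return next(self._cycle)

# Usage: three hypothetical back-end servers; successive requests
# rotate through the pool in order.
pool = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
```

Real load balancers refine this with health checks (skipping failed back ends) and weighting (sending more traffic to bigger machines), but the rotation shown here is the core idea.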