How do website servers handle a large amount of traffic per day?

When a website receives thousands of requests every day, the underlying infrastructure must be equipped to handle such heavy traffic. Server-level optimization for heavy network traffic can be done at two levels:

  1. Physical Server

  2. Web Application Server

There are two main approaches to handling a large traffic load on physical servers:

  1. Linear Scaling

Invest in a single machine with plenty of processing power, memory, disk space and redundancy. This is apt for a small website with a number of static web pages. For example, a machine with a 500 MHz processor, running an operating system that loads a web server like Apache, connected to the internet through a reliable link such as an E1 (about 2 Mbps) or E3 (about 34 Mbps). Such a physical server can handle thousands of visitors every day.

  2. Lateral scaling of servers and load-balancing

Every server has a hard capacity limit, no matter how large it is, and the same applies to the software that runs on it (for example, Apache). A virtual private server or dedicated server hosting plan is best suited for websites with a huge number of daily users.

Load balancing refers to the efficient distribution of incoming network traffic across a group of backend servers, called the server pool.

Adding more servers and balancing the load across them is the best solution for managing peak loads.

A load balancer performs the following functions:

  • Distributes client requests or network load efficiently across multiple servers.

  • Ensures high availability and reliability by sending requests only to servers that are online.

  • Provides the flexibility to add or remove servers as demand dictates.

Let us look in more detail at how web servers handle high loads.

  1. Handling requests to a web domain

  • The Domain Name System (DNS) can distribute the load. (DNS is the internet service that translates domain names to IP addresses.) Each time a lookup arrives, the DNS rotates through the available IP addresses in a circular manner to share the load; each server then serves the same website content. A zone-file sketch of this round-robin setup follows this list.

  • The load can be distributed through switches. Requests for the website arrive at a single machine, which passes them on to one of the available servers. The main advantage of this approach is redundancy: if one server fails, the other machines continue to work and the website remains accessible. Another advantage is that capacity can be increased incrementally.
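
As an illustration of the round-robin DNS approach above, a BIND-style zone file can simply list several A records for the same hostname. This is only a sketch; the domain and IP addresses are placeholders.

    ; Round-robin DNS sketch: three A records for the same name.
    ; Resolvers rotate through these addresses, spreading requests
    ; across three web servers that serve identical content.
    www.example.com.    300    IN    A    192.0.2.10
    www.example.com.    300    IN    A    192.0.2.11
    www.example.com.    300    IN    A    192.0.2.12

The low TTL (300 seconds) lets a failed address be taken out of rotation relatively quickly.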

  2. Autoscaling

Autoscaling is a cloud-computing mechanism that ensures the correct number of instances is available to handle the current application load. It automatically increases the number of instances during high traffic to maintain server performance, and decreases them again as the traffic subsides.

This feature keeps the website available and lets capacity scale up and down automatically with network traffic. It is best suited for websites that experience hourly, daily or weekly variability in usage.
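
As a concrete sketch, AWS Auto Scaling is one common implementation (used here purely as an example; the group, template and subnet names are placeholders). A group is given a minimum and maximum instance count, and a target-tracking policy adds or removes instances to hold average CPU near a chosen value:

    # Sketch: keep between 2 and 10 instances running (placeholder names).
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name web-asg \
        --launch-template LaunchTemplateName=web-template \
        --min-size 2 --max-size 10 \
        --vpc-zone-identifier "subnet-aaaa,subnet-bbbb"

    # Scale out and in automatically to hold average CPU around 60%.
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name web-asg \
        --policy-name cpu-target-60 \
        --policy-type TargetTrackingScaling \
        --target-tracking-configuration '{
            "TargetValue": 60.0,
            "PredefinedMetricSpecification":
                {"PredefinedMetricType": "ASGAverageCPUUtilization"}
        }'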

  3. Optimize database server settings

Every day, users post new comments, website owners add new pages, modify or remove older pages, and add or remove listed products. Such activities create ‘holes’ in database tables: gaps where data entries were deleted but never filled in. These gaps cause fragmentation and longer fetch times. If more than 5% of a database’s space consists of such holes, it should be defragmented.
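
In MySQL, for example, the wasted space shows up as data_free in information_schema, and OPTIMIZE TABLE rebuilds a table to reclaim it. A sketch (the schema and table names are placeholders):

    -- Estimate fragmentation: data_free is the unused space per table.
    SELECT table_name,
           data_free,
           ROUND(data_free / (data_length + index_length + data_free) * 100, 1)
               AS pct_free
    FROM   information_schema.tables
    WHERE  table_schema = 'mydb'          -- placeholder schema name
    ORDER  BY data_free DESC;

    -- Rebuild a fragmented table to close the 'holes'.
    OPTIMIZE TABLE mydb.comments;         -- placeholder table name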

Multiple table joins, slow queries and other inefficient calls often have a significant impact on application server performance and need to be optimized periodically.

A few commonly modified database settings, shown together in the my.cnf sketch after this list, are:

  • max_connections – This setting caps the number of simultaneous connections and is normally used to prevent a single user from monopolizing the server in a multi-user environment. On heavily loaded shared servers, this value can be set as low as 10; on dedicated servers, it can be as high as 250.

  • innodb_buffer_pool_size – In MySQL databases using InnoDB, table data and indexes are cached in a memory area called the “buffer pool” for fast access. This value is typically set to 50-70% of the RAM available to MySQL.

  • key_buffer_size – This determines the index cache size for MyISAM tables. It is typically set to around 20% of the memory available to MySQL.

  • query_cache_size – This option is usually enabled only on servers hosting a single website, and its value is set to 10 MB or less, depending on how slow the queries currently are. (Note that the query cache was removed in MySQL 8.0.)
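
Put together, these settings might appear in my.cnf roughly as follows; the values are illustrative starting points, not universal recommendations:

    # my.cnf sketch (illustrative values; tune to your workload and RAM)
    [mysqld]
    max_connections         = 250     # dedicated server; use ~10 on busy shared hosts
    innodb_buffer_pool_size = 4G      # ~50-70% of the RAM available to MySQL
    key_buffer_size         = 512M    # MyISAM index cache, ~20% of available memory
    query_cache_size        = 10M     # single-site servers only; removed in MySQL 8.0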

  4. Monitor the performance and fine-tune the web server

Keep track of the KPIs (including anecdotal and qualitative customer feedback through help desks, customer service, etc.) to ensure optimum performance of the website. For a high-traffic website, it is best to have the web servers audited on a regular basis. A few settings, combined in the httpd.conf sketch after this list, are mentioned below:

  • Timeout: This setting determines how long the web server will wait for a user to send a request. The value is set based on server traffic; on busy servers it may be set as high as 120 seconds, but it is normally recommended to keep it as low as possible to avoid wasting resources.

  • KeepAlive: If KeepAlive is set to On, the web server uses a single connection to transfer all the files needed to load a page, avoiding the overhead of establishing a new connection for each file. This saves a huge amount of time on a busy day.

  • MaxKeepAliveRequests – This setting determines how many requests can be served over a single KeepAlive connection. Set it high (in Apache, 0 means unlimited) except in situations with resource constraints.

  • MaxClients – This indicates how many visitors the web server can serve simultaneously (in Apache 2.4 the directive is named MaxRequestWorkers). A very high value wastes resources, while setting it too low turns visitors away. Set it to an appropriate value based on your visitor base.

  • MinSpareServers and MaxSpareServers – The web server keeps a few idle “workers” on standby to handle a sudden surge of requests. If your site is prone to traffic spikes, configure these variables to a suitable level.

  • HostnameLookups – The web server might try to resolve the hostname of every IP that connects to it, which wastes resources. To prevent this, set HostnameLookups to “Off”.
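
For Apache with the prefork MPM, the directives above might be combined as in the sketch below. The values are illustrative, not recommendations:

    # httpd.conf sketch (illustrative values; tune to your traffic)
    Timeout              60       # how long to wait for a request
    KeepAlive            On       # reuse one connection for all page assets
    MaxKeepAliveRequests 0        # 0 = unlimited requests per connection
    KeepAliveTimeout     5
    HostnameLookups      Off      # skip reverse DNS on every request

    <IfModule mpm_prefork_module>
        MinSpareServers   10      # idle workers kept ready for surges
        MaxSpareServers   20
        MaxRequestWorkers 250     # called MaxClients before Apache 2.4
    </IfModule>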

  5. Turn on HTTP/2

HTTP/2 is a major revision of the HTTP protocol and contains a lot of performance improvements. It improves server response time by:

  • Using a single connection instead of time-consuming parallel connections to transfer files.

  • Transferring crucial files first to complete a page.

  • Using compression for faster header transfer.

  • Using binary data instead of bulky text data transfer.

  • PUSHing the files required to load a page before the browser requests them. This saves precious seconds on sites using multiple CSS, JS and image files (which is virtually all modern sites).

In practice, browsers support HTTP/2 only over SSL/TLS, so enabling it makes the website secure by default.
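
On Apache 2.4.17 and later, for instance, HTTP/2 is enabled with the Protocols directive inside an SSL virtual host (mod_http2 and mod_ssl must be loaded; the certificate paths are placeholders):

    # Prefer HTTP/2, fall back to HTTP/1.1
    Protocols h2 http/1.1

    <VirtualHost *:443>
        ServerName www.example.com
        SSLEngine on
        SSLCertificateFile    /path/to/cert.pem       # placeholder path
        SSLCertificateKeyFile /path/to/privkey.pem    # placeholder path
    </VirtualHost>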

  6. Web server caching

Tools such as NGINX give websites the ability to scale for high traffic. To handle both dynamic and static requests, NGINX is configured in front of the main server, acting as a reverse proxy. With its event-driven architecture, it handles numerous connections with limited resources; since each worker process is single-threaded, memory and CPU usage remain relatively stable even at times of high traffic. Using Engintron (an addon for cPanel) to set up NGINX as the reverse proxy further enhances server performance at peak traffic.

NGINX is open source software for web serving, reverse proxying, caching, load balancing and media streaming. It can also function as a proxy server for email (IMAP, POP3) and as a reverse proxy and load balancer for HTTP, TCP and UDP servers.

With caching there is always a trade-off between site speed and content freshness, and a good balance between the two needs to be found.
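
A minimal sketch of NGINX as a caching reverse proxy in front of the main web server (here assumed to be Apache on port 8080; the cache path and lifetimes are illustrative):

    # nginx.conf sketch: cache backend responses for repeat visitors
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=site_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen      80;
        server_name www.example.com;

        location / {
            proxy_pass        http://127.0.0.1:8080;  # main web server
            proxy_cache       site_cache;
            proxy_cache_valid 200 10m;   # the speed-vs-freshness dial
            proxy_set_header  Host      $host;
            proxy_set_header  X-Real-IP $remote_addr;
        }
    }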

  7. More horsepower for the database server/web server

Certain applications need more capacity for their database servers. In such cases, allocating more CPU and RAM helps the website run smoothly under vastly increased traffic. Some database servers, MS SQL Server for example, do best on their own bare metal (servers with dedicated hardware, built for one single client), used in conjunction with lighter virtual web servers.

Similarly, a high-traffic website gains speed and performance from additional computation and memory resources on the web servers as well.

  8. Use a CDN for static objects

Videos and images on a website are often described as static objects. Serving them through a CDN reduces the strain on the primary web servers and also improves performance. For high-traffic websites, a CDN caches static files across multiple servers and delivers them from the location nearest to each visitor, eliminating as much lag as possible. A CDN is normally combined with a standard hosting platform and is one of the key solutions for maximizing performance.
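
One common ingredient on the origin side, sketched here for NGINX inside a server block, is sending long cache lifetimes on static objects so the CDN edge can keep serving them without returning to the origin (the extensions and lifetime are illustrative):

    # Long-lived caching headers for static objects behind a CDN
    location ~* \.(css|js|png|jpe?g|gif|svg|woff2?|mp4)$ {
        expires    30d;                      # illustrative lifetime
        add_header Cache-Control "public";
    }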

  9. Database caching

One of the main reasons for poor website performance is the database. E-commerce websites are a classic example of this problem: a lot of content remains static or uniform across the site and need not be fetched from the database every single time. Caching such content eliminates a large number of requests between the web servers and the database.

Caching tools such as Memcached and Redis provide significant performance gains and can help high-traffic websites perform under heavy loads. They speed up database-driven websites by caching data and objects in RAM, minimizing how often an external data source must be read.
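
The usual pattern is cache-aside: check RAM first, fall back to the database on a miss, then store the result with an expiry. A minimal Python sketch using the redis-py client (fetch_product_from_db is a hypothetical database helper):

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def get_product(product_id):
        """Cache-aside lookup: try Redis first, then the database."""
        key = f"product:{product_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)            # cache hit: no DB round trip
        product = fetch_product_from_db(product_id)  # hypothetical DB call
        r.setex(key, 300, json.dumps(product))   # cache for 5 minutes
        return product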

A few more optimization techniques that help website servers handle a large amount of traffic by controlling the load on the web server are listed below:

1. Availability of adequate bandwidth and disabling of resource-intensive services

Ample bandwidth must be ensured for high-traffic websites. Traffic bottlenecks can be diagnosed with the help of internet exchanges such as LINX, LoNAP and INEX.

Monitor all enabled services on the server and disable those that are not used, since unused services consume CPU and memory. For resource-heavy tasks such as backups, night time, when traffic is minimal, is the ideal window.

Upgrading the hard disk to SSD storage (at least for the database partition) can cut the load time by approximately 10%.

2. Move content closer to the audience

The closer the data is to the user, the faster the user can make use of the website; and the faster users are served, the more users the website can handle. For fast-growing websites, splitting the site across multiple nodes can bring rapid performance and capacity gains. High-traffic websites improve significantly with multiple delivery sites for the key Internet markets.

3. Network basics should be intact

Network basics should always be kept intact to ensure smooth performance as a website grows fast. A well-designed website can still fail if a switch has been set to 100 Mbps rather than 1 Gbps, or if a firewall is unnecessarily inspecting traffic and causing critical performance issues. Maintaining the network basics plays a vital role in serving high-performance websites.

4. Sharding

Sharding is a method of storing data records across multiple machines to meet the demands of data growth. Sharding across multiple nodes/clusters increases the performance of a website; MongoDB, for example, supports it natively.
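
In MongoDB, for instance, sharding a collection comes down to enabling it for the database and choosing a shard key (the database, collection and key names below are placeholders):

    // mongosh, connected to the cluster's mongos router
    sh.enableSharding("shop")                                    // placeholder database
    sh.shardCollection("shop.orders", { customerId: "hashed" })  // hashed shard key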

5. Streamline site design

Web pages with a large number of scripts or media files are likely to load slowly. So, for websites that require images, scripts, videos and dynamic content, additional measures need to be taken to ensure good performance (a combined markup sketch follows this list):

  • Minify the scripts to remove unnecessary characters.

  • Run media files through an image optimizer before uploading.

  • Specify image display dimensions, and use srcset for responsive images.

  • Merge all CSS into one file.

  • Use lazy loading to load images on demand rather than all at once.

  • Minimize plugins, or use lightweight plugins and CMS platforms.

  • Avoid using redirects as much as possible.

  • Detect and fix broken or dead links.
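
Several of the image-related points above combine into markup like this sketch, where explicit dimensions prevent layout shifts, srcset lets the browser pick a suitably sized file, and loading="lazy" defers off-screen images (file names and sizes are placeholders):

    <!-- Sketch: dimensions + responsive sources + lazy loading -->
    <img src="product-800.jpg"
         srcset="product-400.jpg 400w, product-800.jpg 800w"
         sizes="(max-width: 600px) 400px, 800px"
         width="800" height="600"
         loading="lazy"
         alt="Product photo">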
