Optimizing a Web Server
A web server is the first link in the operation of any website: it receives a request from the client, generates a response, and sends it back. As the number of such requests grows, the web server slows down.
Basics of optimization
There are several simple ways to increase the efficiency of a web server. They all rest on three principles:
- Response compression
- Reducing the number of requests
- Increasing or limiting resources
Response Compression
All modern browsers support gzip compression. The web server compresses the contents of the response before sending it to the client, and the browser decompresses it on receipt. This saves up to 70% of the file size, which for visitors means a faster website.
Compression only makes sense for text formats (HTML/XML, CSS, JavaScript). It does add some load on the web server, but an insignificant amount.
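To see how much compression actually saves, you can compare the transferred size of a text resource with and without gzip. A quick sketch using curl (the URL and file name are placeholders):
# Downloaded size without compression
curl -s -o /dev/null -w '%{size_download}\n' http://example.com/style.css
# Downloaded size with gzip requested; for text files it should be noticeably smaller
curl -s -H 'Accept-Encoding: gzip' -o /dev/null -w '%{size_download}\n' http://example.com/style.css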
Reducing the number of requests
Each image or script on a web page is a separate request to the web server. A page with 10 images and 3 JavaScript files means the web server receives 14 requests from each visitor:
1 base request + 10 images + 3 JS files = 14
The fewer requests each client sends to the server, the better. There are several simple ways to achieve this:
- Minimize the number of external requests made by the page (combining CSS/JS files, CSS sprites).
- Use client-side caching so that the browser does not repeat requests it has already made. For this, the web server must send a Cache-Control header to the browser (see the example after this list).
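A minimal sketch of client-side caching in Nginx: the expires directive adds Expires and Cache-Control headers to responses for static files. The location pattern and lifetime here are illustrative; adjust them to your site:
location ~* \.(css|js|png|jpg|jpeg|gif|svg|ico)$ {
    expires 7d;
    # Nginx will add "Cache-Control: max-age=604800" (7 days) to these responses
}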
Setting up resources
A web server left with its default configuration most likely does not use all of the resources available on the platform. Tuning its parameters can increase its efficiency several times over.
The most important points
- Compressing responses and reducing the number of requests to the web server gives about 90% of the effect your visitors will notice
- Tuning the server settings increases the number of visitors it can serve without degradation
Optimal Nginx Configuration
Even in its standard configuration, Nginx can handle very high loads. Nevertheless, its efficiency can be significantly improved by adjusting its parameters, a process usually called tuning.
How to tune Nginx
Usually the configuration file is called nginx.conf. You can find it here:
Debian
/etc/nginx/nginx.conf
FreeBSD
/usr/local/etc/nginx/nginx.conf
The settings file usually looks like this:
user www-data;
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    ...
}
Optimization of parameters
Connection processing
The maximum number of connections that Nginx can maintain simultaneously is determined by the product of two parameters:
Total connections = worker_processes x worker_connections
worker_processes auto;
# Sets the number of worker processes. In newer versions it is best to set this to auto.
worker_connections 1024;
# Sets the maximum number of connections per worker process. Values between 1024 and 4096 are reasonable.
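For example, on a hypothetical 4-core server, worker_processes auto starts 4 worker processes, so with worker_connections 1024 the total capacity is roughly:
Total connections = 4 x 1024 = 4096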
The use directive sets the connection processing method. Different operating systems need different methods:
Linux
use epoll;
FreeBSD
use kqueue;
By default, Nginx will try to select the most efficient method by itself.
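Put together, the connection settings above live at the top level and in the events block. A sketch for a Linux host (adjust worker_connections to your hardware):
worker_processes auto;
events {
    worker_connections 4096;
    use epoll;    # on FreeBSD this would be use kqueue
}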
Request processing
multi_accept on;
# A worker process will accept all new connections at once instead of one at a time
sendfile on;
# The sendfile system call is more efficient than the standard read + write combination
tcp_nodelay on;
tcp_nopush on;
# Together with sendfile, tcp_nopush sends the response headers and the beginning of the file in a single packet
File information
Nginx can cache information about the files it serves (for example, CSS styles or images): open file descriptors, sizes, and modification times, not the file contents themselves. When such files are requested frequently, this cache can speed up processing considerably.
open_file_cache max=200000 inactive=20s;
# Store information for up to 200,000 files; entries not accessed for 20 seconds are removed
open_file_cache_valid 30s;
# Re-check the validity of cached entries after 30 seconds
open_file_cache_min_uses 2;
# Only cache information about files that were accessed at least 2 times
open_file_cache_errors on;
# Also cache file lookup errors (for example, missing files)
Logging
It is better to disable the access log to save disk operations and to switch the error log to recording only critical conditions.
access_log off;
error_log /var/log/nginx/error.log crit;
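If the access log still has to be kept, a buffered log reduces disk operations without disabling it completely. A sketch (the path, format, buffer size, and flush interval are illustrative):
access_log /var/log/nginx/access.log combined buffer=32k flush=5m;
# Entries are buffered in memory (32 KB) and written when the buffer fills, or after 5 minutes at the latest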
Gzip Compression
Compression should definitely be enabled, since it significantly reduces traffic. To check whether compression is enabled, you can use a Gzip checker.
gzip on;
gzip_disable "msie6";
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
# Will compress all files with the listed types
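Compression can also be verified from the command line. A quick check with curl (the URL is a placeholder) that requests gzip and prints the Content-Encoding header of the response:
# Fetch the page with gzip allowed and show only the Content-Encoding response header
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://example.com/ | grep -i content-encoding
# If compression is active, the output contains a Content-Encoding: gzip line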
Client Processing
Keepalive connections save the client and the server from having to establish a new connection for every request.
keepalive_timeout 30;
# Keep an idle keepalive connection open for 30 seconds before closing it
keepalive_requests 100;
# Maximum number of requests that can be served over one keepalive connection
Slow clients can create many problems: a slow transfer of the request body and unexpected closing of connections by the client can leave a large number of useless connections hanging on the server.
reset_timedout_connection on;
# If the client stops responding, Nginx will discard the connection with it
client_body_timeout 10;
# Wait for 10 seconds for the request body from the client, and then drop the connection
send_timeout 2;
# If the client stops reading the response, Nginx will wait 2 seconds and discard the connection
Limit the size of requests the server will accept (for example, file uploads) if the site does not require large ones.
client_max_body_size 1m;
# The server will not accept request bodies larger than 1 MB; larger requests receive a 413 error
After editing the settings, reload the Nginx configuration:
nginx -s reload
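Before reloading, it is worth testing the configuration for syntax errors; nginx -t is a standard option for this:
nginx -t
# Reports whether the configuration file syntax is valid before you apply it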