NGINX already performs noticeably better out of the box than Apache 2, the most commonly used web server, but as always there are a few things we can optimize about it.
If you are following the guides on JvdC.me, you now have a server that has gone through the initial server setup to create a new user with root capabilities and has the latest version of NGINX successfully installed. In this article we will optimize the NGINX web server configuration; in the next article we will discuss how to ensure GZIP compression and caching are working properly.
→ A Linux (Ubuntu) server that has gone through the initial setup.
→ The open source NGINX web server installed.
→ A way to connect to your server via SSH (Terminal, PuTTY,…) or via a client that supports SFTP.
→ Basic knowledge of how to edit and save a file over SSH.
Step 1: Open up nginx.conf
You can find and edit the NGINX configuration file via the terminal like this:
sudo nano /etc/nginx/nginx.conf
Some may find it easier to edit files using SFTP. You can of course find the file in the same location and the end result is the same.
Throughout the time I have been using NGINX, I have built up a default set of settings that I use on almost all of my websites. As always you will have to experiment and find the perfect settings for your situation, but these should work for the majority. I will go over them one by one.
Step 2: Workers settings
worker_processes tells NGINX how many worker processes to spawn once it is bound to the proper IP and port(s). It should be set equal to the number of CPU cores on your server. If you do not know how many cores your server has, simply run this command:
grep processor /proc/cpuinfo | wc -l
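If your distribution ships GNU coreutils (Ubuntu does), the `nproc` command gives you the same number with less typing:

```shell
# Prints the number of processing units available; equivalent to
# counting "processor" lines in /proc/cpuinfo on most systems.
nproc
```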
worker_connections sets how many simultaneous connections each worker process can serve. The default value is 768. You can check your system's open-file limit with the ulimit -n command; on a smaller machine this will probably output 1024, which is a good starting number.
worker_processes 1;
worker_connections 1024;
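Note that these two directives live in different parts of nginx.conf: worker_processes sits at the top level (main context), while worker_connections belongs inside the events block. A minimal sketch of the placement:

```nginx
# /etc/nginx/nginx.conf (fragment)
worker_processes 1;            # main context: one worker per CPU core

events {
    worker_connections 1024;   # per-worker connection limit
}
```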
Step 3: Buffer settings
Buffer sizes are the next important settings to tweak. If a buffer is set too small, NGINX writes the data to a temporary file, causing constant disk reads and writes. Fast SSDs mitigate this problem somewhat, but it can still hurt performance.
- client_body_buffer_size sets the buffer for POST bodies, which are typically form submissions, sent to NGINX.
- client_header_buffer_size is similar to the previous directive, but it handles the client header size instead. For almost all intents and purposes, 1k is a decent size.
- client_max_body_size is the maximum allowed size for a client request. If this value is exceeded, NGINX returns a 413 error (i.e. Request Entity Too Large).
- large_client_header_buffers takes two values: the first sets the number of buffers for large client headers and the second sets their maximum size.
client_body_buffer_size 10k;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
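These buffer directives belong in the http context of nginx.conf. A sketch of where they go, using the values above:

```nginx
http {
    client_body_buffer_size   10k;    # buffer for POST bodies before spilling to disk
    client_header_buffer_size 1k;     # typical request headers fit in 1k
    client_max_body_size      8m;     # larger request bodies get a 413 response
    large_client_header_buffers 2 1k; # count and size for oversized headers

    # ... rest of your http configuration ...
}
```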
Step 4: Timeout settings
Besides editing the worker and buffer settings, tuning timeouts can also improve performance considerably. In general you want to keep these as low as possible, but not so low that visitors are cut off before your website has loaded.
- Both client_body_timeout and client_header_timeout set how long the server will wait for a client body or client header to be sent after the request starts. If nothing is sent in time, the server returns a 408 error (i.e. Request Timeout).
- The keepalive_timeout directive sets how long NGINX keeps an idle keep-alive connection to the client open. It should be higher than client_body_timeout and client_header_timeout.
- send_timeout limits the time between two successive write operations to the client; if the client receives nothing within that period, NGINX shuts down the connection.
client_body_timeout 15;
client_header_timeout 15;
keepalive_timeout 20;
send_timeout 10;
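Like the buffer settings, these timeout directives go in the http context; values are in seconds unless another unit is given:

```nginx
http {
    client_body_timeout   15;  # max wait between two successive body reads
    client_header_timeout 15;  # max time to receive the full request header
    keepalive_timeout     20;  # how long an idle keep-alive connection stays open
    send_timeout          10;  # max wait between two writes to a slow client
}
```

After editing, `sudo nginx -t` checks the configuration syntax and `sudo systemctl reload nginx` applies the changes without dropping active connections.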
Super Performance Activated! What’s next?
You now have a properly installed NGINX web server that has also gone through some basic optimization. What is next? You can further increase the usefulness of NGINX by enabling and optimizing GZIP compression and static file caching.