Configuration File Structure

Nginx’s configuration file (usually /etc/nginx/nginx.conf) uses a block structure, mainly consisting of three parts:

  • Global block
  • events block
  • http block

├── Global block (main block)
├── events block
└── http block
     ├── server block
     │    └── location block

Global Block

The global block covers everything from the beginning of the configuration file to the events block. Settings here affect the Nginx server’s overall operation, such as the number of worker processes and the error log location.

  • Location: Top of configuration file
  • Function: Configure global parameters affecting the entire Nginx service

Configuration Example:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

Main parameter descriptions:

  • user: The system user the Nginx worker processes run as
  • worker_processes: Number of worker processes; auto (or the CPU core count) is recommended
  • error_log: Error log path and minimum log level
  • pid: Location of the master process ID file

Events Block

The events block controls how Nginx handles network connections with clients. For example, worker_connections 1024 means each worker process supports at most 1024 simultaneous connections.

  • Location: Immediately after global block
  • Function: Control Nginx’s connection processing model, closely related to high concurrency performance

Configuration Example:

events {
    worker_connections 10240;
    use epoll;
}

Main parameter descriptions:

  • worker_connections: Maximum simultaneous connections per worker process (total capacity is roughly worker_processes × worker_connections)
  • use epoll: Specify the event-driven model (epoll is the usual choice on Linux)

HTTP Block

The http block is the part of the configuration that changes most frequently. It contains virtual host configuration, listening ports, request forwarding, reverse proxying, load balancing, etc.

  • Location: After the events block; the most heavily used part of the configuration
  • Function: Handle all HTTP protocol-based requests, including server configuration, reverse proxying, load balancing, etc.

Configuration Example:

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
}

Main includes:

  • server block: Define virtual host (listening port, domain name)
  • location block: Define URI matching rules and processing methods
  • upstream block: Define upstream server group (load balancing)
  • log_format/access_log: Define log format

Reverse Proxy

Requirement 1

Deploy Tomcat on its default port 8080, then modify the Nginx configuration and reload.

Requirement 2

Deploy a second Tomcat instance on port 8081, then modify the Nginx configuration and reload.
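
Requirement 1 can be met with a single location that proxies all requests to the Tomcat instance; a minimal sketch (the server_name is a placeholder):

server {
    listen      80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:8080;
    }
}

After editing, the configuration can be checked with nginx -t and reloaded with nginx -s reload.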

This part mainly uses multiple location blocks. The syntax is as follows:

location [=|~|~*|^~] /uri/ { … }

In an Nginx configuration file, location takes these forms:

  • 1 Regex match: location ~ /xxx {}
  • 2 Case-insensitive regex match: location ~* /xxx {}
  • 3 Prefix match that skips regex checks: location ^~ /xxx {}
  • 4 Exact match: location = /xxx {}
  • 5 Plain prefix match: location /xxx {}

Priority: 4 > 3 > 1/2 > 5 (forms 1 and 2 have equal priority: regex locations are checked in the order they appear in the file, and the first match wins)
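
For Requirement 2, two plain prefix locations can route traffic to the two Tomcat instances (the /app1 and /app2 paths are illustrative assumptions):

location /app1/ {
    proxy_pass http://localhost:8080/;
}

location /app2/ {
    proxy_pass http://localhost:8081/;
}

Note that when proxy_pass ends with a URI (here the trailing slash), the matched location prefix is replaced by that URI before the request is forwarded.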


Load Balancing

Round Robin

Default strategy. Requests are distributed to the servers in turn, in the order they arrive. If a server goes down, it is automatically removed from rotation.

upstream wzk {
    server localhost:8080;
    server localhost:8082;
}

location /abc {
    proxy_pass http://wzk/;
}

weight

weight sets a server’s weight; the default for each load-balanced server is 1. A server with a higher weight receives proportionally more requests (useful when server capacities differ).

upstream wzk {
    server localhost:8080 weight=1;
    server localhost:8082 weight=2;
}

ip_hash

Requests are distributed based on a hash of the client IP, so each client is consistently routed to the same backend server. This can solve session-affinity problems.

upstream wzk {
    ip_hash;
    server localhost:8080 weight=1;
    server localhost:8082 weight=2;
}

Dynamic-Static Separation

Dynamic-static separation means routing requests for dynamic and static resources to different servers. A classic combination is Nginx + Tomcat: Nginx serves static resources while Tomcat handles dynamic ones. In the earlier examples, Nginx reverse proxied to the Tomcat target server, and the index.jsp in Tomcat’s ROOT project is an instance of Tomcat handling a dynamic resource.
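
A minimal sketch of such a split (the static root path and extension list are illustrative assumptions):

# Static resources are served directly by Nginx
location ~* \.(html|css|js|png|jpg|gif)$ {
    root    /data/static;
    expires 7d;
}

# Everything else is proxied to Tomcat for dynamic processing
location / {
    proxy_pass http://localhost:8080;
}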