Basic Introduction
Nginx is a high-performance HTTP server and reverse proxy. Its core strengths are a low memory footprint and strong concurrent-connection handling.
Origins and Development Milestones
- Birth: Nginx (“engine-x”) was first publicly released by Russian engineer Igor Sysoev in 2004. Its event-driven, non-blocking I/O model was designed to solve the “C10K” problem that process-per-connection servers such as Apache faced under high concurrency.
- Ecosystem Growth: The commercial company Nginx, Inc. was founded in 2011 and acquired by F5 for $670 million in 2019. The open-source version continues to be maintained by the community, while the commercial edition, NGINX Plus, adds enterprise features such as a WAF, active health checks, and graphical monitoring.
- Latest Updates: Stable version 1.28.0 was released on 2025-04-23, bringing QUIC performance improvements, automatic re-resolution of upstream hostnames, OCSP stapling for the Stream module, and more. It is the currently recommended production version.
Core Architecture Analysis
Nginx uses a “Master + Worker” multi-process model:
- Master: Loads configuration, binds listening ports, and manages Worker processes.
- Worker: Uses efficient event-polling mechanisms such as epoll (Linux) and kqueue (BSD/macOS). Workers are independent processes that handle connections without shared locks, making full use of multiple cores.
- Hot Reloading: nginx -s reload applies a new configuration in milliseconds without dropping in-flight connections.
- Resource Efficiency: 10,000 inactive keep-alive connections consume only about 2.5 MB of memory, so tens of thousands of concurrent connections are easily supported.
This architecture brings four advantages: low memory footprint, graceful reloading, high connection concurrency, and pluggable modularity (static/dynamic modules).
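The process-model characteristics above map directly onto configuration directives. A minimal sketch (the values shown are illustrative, not tuned recommendations):

```nginx
worker_processes auto;          # spawn one worker per CPU core
worker_cpu_affinity auto;       # pin each worker to a core (Linux)
worker_rlimit_nofile 65535;     # raise the per-process file-descriptor limit

events {
    use epoll;                  # event method; nginx auto-detects this on Linux
    worker_connections 10240;   # max simultaneous connections per worker
}
```

After editing, `nginx -t` validates the configuration and `nginx -s reload` applies it without dropping connections.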
Core Features
- HTTP Server: Static files, index/directory listing, Gzip, Range, XSLT, SSI, HTTP/2, HTTP/3/QUIC
- Reverse Proxy & Caching: proxy_pass, multi-level caching, proxy_cache, bandwidth limiting, backend health checks
- Load Balancing: upstream supports round-robin, IP hash, and least-connections; slow start, session persistence, and active health checks are available in NGINX Plus, while the open-source version provides passive health checks
- Traffic Management: Rate limiting (limit_req/conn), gray release (split_clients), A/B testing, mirrored requests
- Multi-protocol Proxy: TCP/UDP, FastCGI, uwsgi, SCGI, gRPC, WebSocket
- Script Extensions: njs (JavaScript) and Perl modules enable dynamic logic on demand
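To make the traffic-management bullet concrete, here is a hedged sketch combining request rate limiting with a split_clients-based canary split. Zone names, addresses, rates, and percentages are illustrative:

```nginx
http {
    # Rate limiting: track clients by IP; allow ~10 requests/second per client
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    # Canary split: hash the client IP and send ~10% of clients to a new pool
    split_clients "${remote_addr}" $backend_pool {
        10%     canary;
        *       stable;
    }

    upstream stable { server 10.0.0.1:8080; }
    upstream canary { server 10.0.0.2:8080; }

    server {
        listen 80;
        location / {
            limit_req zone=per_ip burst=20 nodelay;   # absorb short bursts
            proxy_pass http://$backend_pool;          # resolves to "canary" or "stable"
        }
    }
}
```

Because `$backend_pool` expands to the name of a defined upstream group, no resolver is needed for the variable-based proxy_pass.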
Common Deployment Scenarios
- Static resource server: Blogs, frontend SPA, download mirror distribution
- Reverse proxy + caching layer: Accelerates backend Java/Spring, Go, and Python services and smooths traffic spikes
- API Gateway/BFF: Combined with Lua/OpenResty or Kong to implement authentication, rate limiting, and service orchestration
- Kubernetes Ingress: Official NGINX Ingress Controller; can also use NGINX Gateway Fabric to implement Gateway API
- Multi-protocol service hub: A unified L4/L7 entry point for MQTT, WebSocket, gRPC, database tunnels, and more
- Live streaming/on-demand: HLS, DASH, FLV, MP4 slicing and time shift, edge caching
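The L4 "service hub" scenario relies on the Stream module (compiled in via --with-stream, enabled by default in most distribution packages). A minimal sketch proxying raw TCP, with illustrative addresses and an MQTT backend as the example:

```nginx
# stream {} lives at the top level of nginx.conf, alongside http {}
stream {
    upstream mqtt_backend {
        server 10.0.0.5:1883;      # hypothetical MQTT broker
    }
    server {
        listen 1883;               # accept raw TCP on the MQTT port
        proxy_pass mqtt_backend;   # relay bytes without HTTP parsing
    }
}
```

The same pattern (with `listen ... udp;`) covers UDP-based protocols.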
Application Scenarios
HTTP Server
Very high performance and efficiency-focused, with support for heavy load: a single instance can reportedly handle up to 50,000 concurrent connections while keeping CPU and memory usage very low.
Reverse Proxy Server
- Forward Proxy: The client (e.g., a browser) is configured to send requests to a proxy server, which accesses the target website on the client's behalf and relays the response back. The target site sees the proxy, not the client.
- Reverse Proxy: The client sends its request to the reverse proxy server (such as Nginx), which selects an origin server to handle it, collects the response, and returns it to the client. The client sees only the proxy, not the backend.
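For contrast with the reverse-proxy configuration shown later, a hedged sketch of a forward proxy in Nginx. Note that stock Nginx has no CONNECT support, so this relays plain-HTTP traffic only; the listen port and resolver address are illustrative:

```nginx
server {
    listen 3128;                           # conventional proxy port (illustrative)
    resolver 8.8.8.8;                      # required to resolve arbitrary target hosts

    location / {
        # $host and $request_uri come from the client's absolute-form request line
        proxy_pass http://$host$request_uri;
    }
}
```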
Basic Configuration Example
worker_processes auto;
events { worker_connections 10240; }
http {
    include mime.types;
    server {
        listen 80 default_server;
        server_name _;
        root /var/www/html;
        location / { try_files $uri $uri/ =404; }
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log warn;
    }
}
Reverse Proxy Configuration
http {
    upstream backend {
        server 10.0.0.1 max_fails=3 fail_timeout=10s;
        server 10.0.0.2 max_fails=3 fail_timeout=10s;
        keepalive 32;                     # pool of idle keep-alive connections to the backend
    }
    server {
        listen 443 ssl;
        http2 on;                         # "listen ... http2" is deprecated since 1.25.1
        server_name example.com;
        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;
        location /api/ {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # enable keep-alive to the upstream
        }
    }
}
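The caching layer mentioned among the core features can be added to the same proxy. A hedged sketch; the cache path, zone name, sizes, and validity times are illustrative:

```nginx
http {
    # Disk-backed cache keyed in a 10 MB shared-memory zone (roughly 80k keys)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                     max_size=1g inactive=60m use_temp_path=off;

    server {
        listen 80;
        location /api/ {
            proxy_cache api_cache;
            proxy_cache_valid 200 302 10m;                 # cache successes for 10 minutes
            proxy_cache_valid 404      1m;
            proxy_cache_use_stale error timeout updating;  # serve stale if the backend struggles
            add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS/STALE for debugging
            proxy_pass http://10.0.0.1:8080;               # hypothetical backend
        }
    }
}
```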
Operations Optimization Points
- Resource Limits: Raise worker_rlimit_nofile and the system ulimit together; reuseport (Linux 3.9+) improves multi-core accept distribution
- TLS Tuning: Prefer TLS 1.3; configure ssl_session_cache, ssl_buffer_size, and OCSP stapling appropriately (1.28 adds stapling support in the Stream module)
- Logging and Monitoring: Emit JSON-formatted logs to Fluent Bit/Loki, or collect metrics with NGINX Amplify or Grafana Agent
- Security Hardening: Set HSTS/CSP headers via add_header (or the third-party headers_more module); deploy ModSecurity or a commercial WAF
- Containerization/K8s: Expose only necessary ports; pair readinessProbe with the stub_status module; handle log collection and certificate renewal in sidecars
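As a concrete starting point for the TLS and hardening bullets above, a hedged sketch; the protocol list and header values are common defaults, not a universal recommendation:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;       # drop legacy protocols
    ssl_session_cache shared:SSL:10m;    # roughly 40k sessions per 10 MB
    ssl_session_timeout 1d;
    ssl_stapling on;                     # OCSP stapling; may require a resolver
    ssl_stapling_verify on;              # and the full certificate chain

    # "always" ensures the headers are sent on error responses too
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header Content-Security-Policy "default-src 'self'" always;
    add_header X-Content-Type-Options "nosniff" always;
}
```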