Nginx 101: Zero to Hero - Red Hat System Engineer Edition

Last Updated: 2025-11-03
Target Audience: Red Hat/CentOS System Engineers
Focus: Open Source, Production-Ready Configurations

Table of Contents

  1. Introduction
  2. Installation on Red Hat
  3. Core Concepts
  4. Basic Configuration
  5. IP Whitelist & Map Functions
  6. Virtual Hosts (Server Blocks)
  7. Reverse Proxy & Load Balancing
  8. SSL/TLS Configuration
  9. Performance Tuning
  10. Comparison Tables
  11. Troubleshooting
  12. Red Hat Specific
  13. Best Practices

Introduction

Nginx is a lightweight, high-performance, event-driven web server and reverse proxy optimized for:

  • Concurrency: Handles 1000s of concurrent connections with minimal resources
  • Performance: Non-blocking I/O model
  • Flexibility: Modular architecture for customization
  • Reliability: Used by major companies (Netflix, Slack, Dropbox)

Why Nginx for Red Hat Engineers?

  • Minimal resource consumption
  • Perfect for containerized environments
  • Excellent reverse proxy capabilities
  • Community-driven open source
  • Native integration with systemd

Installation on Red Hat

System Requirements

# Check system info
cat /etc/redhat-release
uname -m
uname -r

# Minimum requirements
# - CPU: 1 core (2+ recommended)
# - RAM: 512 MB minimum (2GB+ recommended)
# - Disk: 1 GB free space

Method 1: Install from Distribution Repositories (Recommended)

# Step 1: Update system
sudo dnf update -y

# Step 2: Enable EPEL repository (optional on RHEL 8/9, where nginx
# ships in the AppStream repository; EPEL adds extra modules)
sudo dnf install epel-release -y

# Step 3: Install Nginx
sudo dnf install nginx -y

# Step 4: Verify installation (nginx -v/-V write to stderr)
nginx -v
nginx -V 2>&1 | grep -o 'with-[^[:space:]]*' | head -10

# Step 5: Enable and start service
sudo systemctl enable nginx
sudo systemctl start nginx

# Step 6: Verify running
sudo systemctl status nginx

Method 2: Install from Official Nginx Repository (Latest)

# Step 1: Create repository file
sudo tee /etc/yum.repos.d/nginx.repo > /dev/null <<'EOF'
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/rhel/$releasever/$basearch/
gpgcheck=1
gpgkey=https://nginx.org/keys/nginx_signing.key
enabled=1

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/rhel/$releasever/$basearch/
gpgcheck=1
gpgkey=https://nginx.org/keys/nginx_signing.key
enabled=0
EOF

# Step 2: Install Nginx
sudo dnf install nginx -y

# Step 3: Enable and start
sudo systemctl enable nginx
sudo systemctl start nginx

Method 3: Compile from Source (Advanced)

# Step 1: Install build dependencies
sudo dnf groupinstall "Development Tools" -y
sudo dnf install pcre-devel zlib-devel openssl-devel libxslt-devel \
gd-devel geoip-devel -y

# Step 2: Create nginx user
sudo useradd -r -M -s /sbin/nologin nginx

# Step 3: Download and extract source
cd /tmp
wget http://nginx.org/download/nginx-1.26.0.tar.gz
tar -xzf nginx-1.26.0.tar.gz
cd nginx-1.26.0

# Step 4: Configure with common modules
./configure \
--prefix=/etc/nginx \
--sbin-path=/usr/sbin/nginx \
--modules-path=/usr/lib64/nginx/modules \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_gzip_static_module \
--with-http_stub_status_module \
--with-stream \
--with-stream_ssl_module \
--with-stream_realip_module

# Step 5: Compile
make -j$(nproc)

# Step 6: Install
sudo make install

# Step 7: Create systemd service file
sudo tee /etc/systemd/system/nginx.service > /dev/null <<'EOF'
[Unit]
Description=Nginx HTTP and reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

# Step 8: Enable and start
sudo systemctl daemon-reload
sudo systemctl enable nginx
sudo systemctl start nginx

Post-Installation Verification

# Check service status
sudo systemctl status nginx

# Check if listening on ports
sudo ss -tulpn | grep nginx

# Check nginx processes
ps aux | grep nginx

# Verify configuration
sudo nginx -t

# Check log files
ls -la /var/log/nginx/

Core Concepts

Nginx Architecture

REQUEST FLOW
──────────────────────────────────────────────────────

               Client Request
                     │
                     ▼
      ┌─────────────────────────┐
      │     Master Process      │   (1 per instance)
      │  - Reads config         │   (root privileges)
      │  - Validates syntax     │   (UID: 0)
      │  - Manages workers      │
      └────────────┬────────────┘
                   │
        ┌──────────┴──────────┐
        ▼                     ▼
  ┌──────────┐          ┌──────────┐
  │ Worker 1 │   ...    │ Worker N │   (nginx user,
  │          │          │          │    non-root)
  └────┬─────┘          └────┬─────┘
       │                     │
   Event Loop            Event Loop
    (epoll)               (epoll)
       │   each worker handles 1000s of connections
       ▼
             Response to Client

Request Processing Pipeline

1. REQUEST RECEIVED
└─ Client connects to port 80/443

2. SERVER BLOCK MATCHING
└─ Match server_name and listen directive

3. LOCATION MATCHING
└─ Order: = (exact) → ^~ (prefix, suppresses regex) → ~ / ~* (regex, in file order) → longest prefix

4. PHASE PROCESSING
├─ post-read: Headers processing
├─ server-rewrite: Rewrite rules
├─ find-config: Location selection
├─ rewrite: URL rewriting
├─ post-rewrite: Verification
├─ preaccess: Rate limiting, auth
├─ access: Access control
├─ post-access: Processing
├─ precontent: Content generation
├─ content: Response generation
└─ log: Logging

5. RESPONSE SENT
└─ Headers + Body to Client

Worker Processes Explained

# Master process (PID X)
nginx: master process /usr/sbin/nginx

# Worker processes (PPID X)
nginx: worker process
nginx: worker process
nginx: worker process
...

Why multiple workers?

  • Each worker accepts and serves connections independently
  • Non-blocking I/O lets a single worker handle 1000+ concurrent connections
  • With async I/O, workers become CPU-bound, so one worker per core is optimal
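A quick way to see the concurrency ceiling these defaults imply: the theoretical maximum number of simultaneous clients is roughly worker_processes × worker_connections (halve it when proxying, since each proxied request consumes two file descriptors). A small sketch, assuming worker_processes auto resolves to the CPU count:

```shell
# Estimate the theoretical connection ceiling for this host
cores=$(nproc)                  # what worker_processes auto resolves to
worker_connections=1024         # default from the events block
max_clients=$((cores * worker_connections))
echo "workers=${cores} max_clients=${max_clients}"
echo "proxied_max=$((max_clients / 2))"   # each proxied request uses 2 fds
```

Real limits are usually lower: worker_rlimit_nofile and kernel file-descriptor limits cap this ceiling in practice.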

Basic Configuration

File Locations on Red Hat

/etc/nginx/                      # Main config directory
├── nginx.conf                   # Primary config file
├── conf.d/                      # Additional configs (*.conf)
├── sites-available/             # Available server blocks (create manually;
├── sites-enabled/               #   not shipped by the RHEL packages)
├── mime.types                   # MIME type mappings
└── fastcgi_params               # FastCGI parameters

/var/log/nginx/
├── access.log                   # Request log
└── error.log                    # Error log

/var/cache/nginx/                # Cache directory
/var/run/nginx.pid               # Runtime PID file

Main Configuration File Structure

# /etc/nginx/nginx.conf

# === GLOBAL CONTEXT ===
user nginx;                      # Run as nginx user
worker_processes auto;           # Match CPU cores
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

# === EVENTS CONTEXT ===
events {
    worker_connections 1024;     # Per worker limit
    use epoll;                   # Linux kernel event model
    multi_accept on;             # Accept multiple connections
}

# === HTTP CONTEXT ===
http {
    # Basic settings
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
    access_log /var/log/nginx/access.log main;

    # Performance
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_types text/plain text/css application/json
               application/javascript text/xml application/xml;

    # Include all config files
    include /etc/nginx/conf.d/*.conf;
}

Configuration Directive Scope

Directive            Main   Events   HTTP   Server   Location
user                  ✓
worker_processes      ✓
worker_connections            ✓
error_log             ✓               ✓      ✓        ✓
access_log                            ✓      ✓        ✓
proxy_pass                                            ✓
ssl_certificate                       ✓      ✓
gzip                                  ✓      ✓        ✓
rewrite                                      ✓        ✓

IP Whitelist & Map Functions

Map functions are powerful for creating conditional logic in Nginx. The examples below are alternatives — enable only one at a time, since variable names must be unique across the http context. Here's how to implement IP whitelisting:

Basic IP Whitelist with Geo

# /etc/nginx/conf.d/whitelist.conf

# Define an IP whitelist. CIDR ranges require the geo directive
# (ngx_http_geo_module, built in); map matches strings, not subnets.
geo $ip_whitelist {
    default 0;

    # Internal networks
    10.0.0.0/8 1;
    172.16.0.0/12 1;
    192.168.0.0/16 1;

    # Specific IPs
    203.0.113.10 1;     # Example admin IP
    203.0.113.11 1;     # Example admin IP
    198.51.100.20 1;    # Example partner IP

    # Localhost
    127.0.0.1 1;
    ::1 1;
}

# Use in server block
server {
    listen 80;
    server_name admin.example.com;

    location / {
        # Deny if not in whitelist
        if ($ip_whitelist = 0) {
            return 403 "Access Denied - IP not whitelisted\n";
        }

        proxy_pass http://backend;
    }
}

Advanced IP Whitelist with Variables

# /etc/nginx/conf.d/advanced-whitelist.conf

# Whitelist map with description (map supports regex matching)
map $remote_addr $ip_whitelist {
    default "denied";

    # Internal networks
    "~^10\."                          "internal-office";
    "~^172\.(1[6-9]|2[0-9]|3[01])\."  "internal-office";
    "~^192\.168\."                    "internal-office";

    # VPN networks
    "~^203\.0\.113\."                 "vpn-users";

    # Specific partner
    "~^198\.51\.100\.20$"             "trusted-partner";

    # Localhost
    "127.0.0.1"                       "localhost";
    "::1"                             "localhost";
}

server {
    listen 80;
    server_name admin.example.com;

    location / {
        if ($ip_whitelist = "denied") {
            return 403 "Access Denied\nYour IP: $remote_addr is not authorized\n";
        }

        # Add header showing whitelist status
        add_header X-Whitelist-Status $ip_whitelist;

        proxy_pass http://backend;
    }
}

IP Whitelist with CIDR Ranges (Geo Module)

# /etc/nginx/conf.d/geo-whitelist.conf

# The geo directive (ngx_http_geo_module, built in) matches CIDR ranges;
# it is distinct from the third-party GeoIP modules.
geo $ip_whitelist {
    default 0;

    # CIDR ranges
    10.0.0.0/8 1;
    172.16.0.0/12 1;
    192.168.0.0/16 1;
    203.0.113.0/24 1;
    198.51.100.0/24 1;
}

server {
    listen 80;
    server_name api.example.com;

    location /admin {
        if ($ip_whitelist = 0) {
            return 403;
        }

        proxy_pass http://backend;
    }
}

Multi-Level Whitelist (API Access Control)

# /etc/nginx/conf.d/api-whitelist.conf

# Public endpoints - no IP restriction (map shown for illustration;
# the locations below match these paths directly)
map $request_uri $public_endpoint {
    default 0;
    "~^/api/v1/status" 1;
    "~^/api/v1/health" 1;
    "~^/api/v1/docs" 1;
}

# Admin IP whitelist (geo supports CIDR; map does not)
geo $admin_whitelist {
    default 0;
    10.0.0.0/8 1;
    127.0.0.1 1;
}

# Partner API IP whitelist
geo $partner_whitelist {
    default 0;
    203.0.113.0/24 1;
    198.51.100.0/24 1;
}

server {
    listen 80;
    server_name api.example.com;

    # Public API endpoints
    location ~ ^/api/v1/(status|health|docs) {
        proxy_pass http://backend;
    }

    # Admin API endpoints
    location ~ ^/api/v1/admin {
        if ($admin_whitelist = 0) {
            return 403 "Admin access denied\n";
        }
        proxy_pass http://backend;
    }

    # Partner API endpoints
    location ~ ^/api/v1/partner {
        if ($partner_whitelist = 0) {
            return 403 "Partner access denied\n";
        }
        proxy_pass http://backend;
    }

    # Default deny all other endpoints
    location /api/ {
        return 404;
    }
}

Dynamic IP Whitelist with X-Forwarded-For (Behind Load Balancer)

# /etc/nginx/conf.d/x-forwarded-for-whitelist.conf

# Trust the load balancer and restore the original client address
# (with real_ip_header set, $remote_addr becomes the client IP)
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 172.16.0.0/12;
real_ip_header X-Forwarded-For;

# Extract the first address from X-Forwarded-For as a fallback
map $http_x_forwarded_for $real_client_ip {
    default $remote_addr;
    "~^(?P<IP>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})" $IP;
}

# Whitelist the real client IP (geo accepts a source variable)
geo $real_client_ip $real_ip_whitelist {
    default 0;
    203.0.113.0/24 1;    # Trusted partners
    198.51.100.0/24 1;   # Trusted partners
}

server {
    listen 80;
    server_name api.example.com;

    location /secure {
        if ($real_ip_whitelist = 0) {
            return 403 "Your IP ($real_client_ip) is not whitelisted\n";
        }

        add_header X-Real-Client-IP $real_client_ip;
        proxy_pass http://backend;
    }
}

Whitelist with Rate Limiting

# /etc/nginx/conf.d/whitelist-ratelimit.conf

# IP whitelist (geo for CIDR support)
geo $ip_whitelist {
    default 0;
    10.0.0.0/8 1;
    127.0.0.1 1;
}

# limit_req is not allowed inside if blocks, so select the zone key via
# map instead: an empty key exempts a request from that zone entirely.
map $ip_whitelist $limit_public {
    0 $binary_remote_addr;
    default "";
}

map $ip_whitelist $limit_whitelisted {
    1 $binary_remote_addr;
    default "";
}

# Rate limiting zones
limit_req_zone $limit_public zone=public:10m rate=10r/s;
limit_req_zone $limit_whitelisted zone=whitelisted:10m rate=100r/s;

server {
    listen 80;
    server_name api.example.com;

    location / {
        # Each request is counted in exactly one of the two zones
        limit_req zone=public burst=20 nodelay;
        limit_req zone=whitelisted burst=200 nodelay;

        proxy_pass http://backend;
    }
}

Testing IP Whitelist Configuration

# Test from a whitelisted IP (a spoofed X-Forwarded-For header only
# works when nginx is configured with real_ip_header, as in the LB example)
curl -H "X-Forwarded-For: 203.0.113.10" http://api.example.com/admin

# Test from a non-whitelisted IP
curl -H "X-Forwarded-For: 203.0.114.99" http://api.example.com/admin

# View real IP in logs
sudo tail -f /var/log/nginx/access.log | grep admin

# Check current client IP
curl https://icanhazip.com/
curl https://ifconfig.me/

# Test a batch of IPs from the command line
for ip in 203.0.113.10 203.0.113.11 203.0.114.99; do
    echo "Testing IP: $ip"
    curl -s -o /dev/null -w "HTTP %{http_code}\n" -H "X-Forwarded-For: $ip" \
        http://api.example.com/admin
done
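Before committing ranges to a geo block, it can help to sanity-check CIDR membership offline. A small bash sketch using pure integer arithmetic (IPv4 only, no external tools):

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip2int() { IFS=. read -r a b c d <<<"$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

# Succeed if $1 (an IP) falls inside $2 (a CIDR network)
in_cidr() {
  local ip net bits mask
  ip=$(ip2int "$1"); net=$(ip2int "${2%/*}"); bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 203.0.113.10 203.0.113.0/24 && echo "203.0.113.10: allowed" || echo "203.0.113.10: denied"
in_cidr 203.0.114.99 203.0.113.0/24 && echo "203.0.114.99: allowed" || echo "203.0.114.99: denied"
```

The first address falls inside the /24 and prints allowed; the second does not and prints denied.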

Virtual Hosts (Server Blocks)

Basic Server Block Structure

server {
    # Listening directives
    listen 80;
    listen [::]:80;                  # IPv6
    server_name example.com www.example.com;

    # Root directory
    root /var/www/example.com;
    index index.html index.htm;

    # Logging
    access_log /var/log/nginx/example.com_access.log;
    error_log /var/log/nginx/example.com_error.log;

    # Location blocks
    location / {
        try_files $uri $uri/ =404;
    }
}

Multiple Server Blocks Pattern

# /etc/nginx/sites-available/example.com.conf

# HTTP Server - Redirect to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # $host preserves whichever name the client actually requested
    return 301 https://$host$request_uri;
}

# HTTPS Server
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # SSL configuration
    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Root
    root /var/www/example.com;
    index index.html;

    # Locations
    location / {
        try_files $uri $uri/ =404;
    }
}

Organization Best Practice

# Directory structure
/etc/nginx/
├── nginx.conf                   # Main config
├── conf.d/
│   ├── default.conf
│   ├── security.conf            # Security settings
│   └── maps.conf                # Map functions
├── sites-available/
│   ├── example.com.conf
│   ├── api.example.com.conf
│   └── admin.example.com.conf
└── sites-enabled/               # Symlinks to active sites
    ├── example.com.conf -> ../sites-available/example.com.conf
    ├── api.example.com.conf -> ../sites-available/api.example.com.conf
    └── admin.example.com.conf -> ../sites-available/admin.example.com.conf

# Enable a site (make sure nginx.conf includes /etc/nginx/sites-enabled/*.conf)
sudo ln -s /etc/nginx/sites-available/newsite.conf \
    /etc/nginx/sites-enabled/

# Test and reload
sudo nginx -t
sudo systemctl reload nginx

Reverse Proxy & Load Balancing

Simple Reverse Proxy

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_server:8080;

        # Pass original request info
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

Upstream Configuration

# Define backend pool
upstream backend_pool {
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    server backend3.example.com:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Load Balancing Algorithms

Algorithm           Use Case                          Configuration
Round-robin         General purpose, equal spread     (default)
Least connections   Long-lived connections            least_conn;
IP Hash             Session persistence               ip_hash;
Least time          Performance optimization          least_time header; (Nginx Plus)
Random              Load testing, specialty cases     random;
Weighted            Unequal server capacity           weight=5;
# The blocks below are alternatives; define only one upstream per name.

# Round-robin (default)
upstream backend {
    server backend1:8080;
    server backend2:8080;
}

# Least connections
upstream backend {
    least_conn;
    server backend1:8080;
    server backend2:8080;
}

# IP hash (sticky sessions)
upstream backend {
    ip_hash;
    server backend1:8080;
    server backend2:8080;
}

# Weighted round-robin
upstream backend {
    server backend1:8080 weight=5;   # 5x more traffic
    server backend2:8080 weight=3;
    server backend3:8080 weight=1;
}

# With health check parameters
upstream backend {
    server backend1:8080 max_fails=3 fail_timeout=30s;
    server backend2:8080 max_fails=3 fail_timeout=30s;
    server backup:8080 backup;       # Only used when the others are down
}

Advanced Proxy Configuration

upstream backend {
    server backend1:8080;
    server backend2:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name api.example.com;

    location /api {
        proxy_pass http://backend;

        # Protocol (upstream keepalive requires HTTP/1.1 and a
        # cleared Connection header)
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Headers
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-ID $request_id;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Upstream retry
        proxy_next_upstream error timeout invalid_header http_500 http_503;
        proxy_next_upstream_tries 2;
    }
}

SSL/TLS Configuration

Generate Self-Signed Certificate

# Create certificate directory
sudo mkdir -p /etc/nginx/ssl
sudo chmod 700 /etc/nginx/ssl

# Generate private key (4096-bit RSA)
sudo openssl genrsa -out /etc/nginx/ssl/private.key 4096

# Generate certificate signing request
sudo openssl req -new \
    -key /etc/nginx/ssl/private.key \
    -out /etc/nginx/ssl/cert.csr \
    -subj "/C=US/ST=California/L=SanFrancisco/O=MyCompany/CN=example.com"

# Self-sign the certificate (valid 365 days)
sudo openssl x509 -req -days 365 \
    -in /etc/nginx/ssl/cert.csr \
    -signkey /etc/nginx/ssl/private.key \
    -out /etc/nginx/ssl/cert.crt

# Set permissions
sudo chown -R root:nginx /etc/nginx/ssl
sudo chmod 600 /etc/nginx/ssl/*

# Verify certificate
openssl x509 -in /etc/nginx/ssl/cert.crt -text -noout
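If you want to rehearse these openssl steps without touching /etc/nginx, the same flow works with throwaway files (the /tmp paths and demo.example.com name below are illustrative):

```shell
# Generate a throwaway key + self-signed cert in one step
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /tmp/demo.key -out /tmp/demo.crt \
    -subj "/CN=demo.example.com" 2>/dev/null

# Inspect the subject and confirm the cert has not expired
openssl x509 -in /tmp/demo.crt -noout -subject
openssl x509 -in /tmp/demo.crt -noout -checkend 0 && echo "certificate is valid"
```

The -checkend N flag exits non-zero if the certificate expires within N seconds, which makes it handy in monitoring scripts.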

HTTPS Server Block

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # SSL certificates
    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;

    # HSTS (HTTP Strict Transport Security)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Root and index
    root /var/www/example.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

# HTTP to HTTPS redirect
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    return 301 https://$host$request_uri;
}

Let's Encrypt with Certbot

# Install certbot
sudo dnf install certbot python3-certbot-nginx -y

# Obtain certificate
sudo certbot certonly --nginx \
    -d example.com \
    -d www.example.com \
    --email admin@example.com \
    --agree-tos \
    --no-eff-email

# Auto-renewal (enabled by default)
sudo systemctl enable certbot-renew.timer

# Check renewal status
sudo certbot renew --dry-run

# View certificates
sudo certbot certificates

# Renew specific certificate
sudo certbot renew --cert-name example.com

Nginx Configuration for Let's Encrypt

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    root /var/www/example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 80;
    server_name example.com;

    # ACME challenge location for renewal
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # Redirect all other traffic to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

Performance Tuning

Worker Process Configuration

# /etc/nginx/nginx.conf

# Auto-detect CPU cores (RECOMMENDED)
worker_processes auto;

# Manual alternative - match the CPU count (use this or auto, not both)
worker_processes 8;

# Worker priority (nice value, -20 to 19)
worker_priority -5;

# Maximum file descriptors per worker
worker_rlimit_nofile 65535;

# Connection limits
events {
    worker_connections 2048;
    use epoll;           # Linux kernel event model
    multi_accept on;
}

Caching Strategy

http {
    # location blocks are only valid inside a server block
    server {
        listen 80;
        server_name example.com;
        root /var/www/example.com;

        # Static assets - long expiry (30 days)
        location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf)$ {
            expires 30d;
            add_header Cache-Control "public, immutable";
            access_log off;
        }

        # HTML - must revalidate
        location ~* \.html$ {
            expires 1d;
            add_header Cache-Control "public, must-revalidate";
        }

        # Dynamic content - no cache
        location ~* \.(php|jsp|aspx)$ {
            expires -1;
            add_header Cache-Control "private, must-revalidate";
        }
    }
}

Gzip Compression

http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;       # 1-9 (6 is a good default)
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/atom+xml
        image/svg+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/x-icon;
    gzip_min_length 1000;
    gzip_buffers 4 4k;
    gzip_disable "msie6";
}
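The size/CPU trade-off behind gzip_comp_level can be seen with the gzip CLI itself, since nginx uses the same levels. A quick comparison on a repetitive sample payload:

```shell
# Compare gzip levels 1, 6, and 9 on a compressible sample; higher
# levels shrink output at the cost of more CPU per response
sample=$(yes "The quick brown fox jumps over the lazy dog." | head -2000)
for level in 1 6 9; do
  bytes=$(printf '%s' "$sample" | gzip -"$level" | wc -c)
  echo "level=${level} compressed_bytes=${bytes}"
done
```

Level 9 rarely saves much over level 6 on typical web payloads, which is why 6 is the common default.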

Buffer Optimization

http {
    # Client-side buffers
    client_body_buffer_size 128k;
    client_max_body_size 20m;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;

    # Proxy buffers
    proxy_buffer_size 4k;
    proxy_buffers 8 4k;
    proxy_busy_buffers_size 8k;

    # Connection handling
    keepalive_timeout 65;
    keepalive_requests 100;
    tcp_nopush on;
    tcp_nodelay on;

    # Reset timed out connections
    reset_timedout_connection on;
    client_body_timeout 10;
    client_header_timeout 10;
    send_timeout 10;
}

Connection Optimization

events {
    worker_connections 4096;     # Increase for high traffic
    use epoll;                   # Optimal for Linux
    multi_accept on;             # Accept multiple connections
}

http {
    # Keep-alive settings
    keepalive_timeout 65;
    keepalive_requests 100;

    # Connection timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    server_tokens off;           # Don't expose version

    # TCP tuning
    tcp_nopush on;               # Send headers in one packet
    tcp_nodelay on;              # No delay for small packets
}
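Worker settings only help up to the limits the kernel imposes. A quick read-only check of the values that most often cap real-world concurrency (raise net.core.somaxconn and worker_rlimit_nofile together when tuning):

```shell
# Kernel and process limits that bound nginx concurrency
echo "somaxconn=$(cat /proc/sys/net/core/somaxconn)"   # listen backlog cap
echo "file-max=$(cat /proc/sys/fs/file-max)"           # system-wide fd limit
echo "open-files=$(ulimit -n)"                         # per-process fd limit
```

If worker_connections exceeds the per-process open-file limit, nginx logs a warning at startup and the extra capacity is never usable.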

Performance Benchmarking

# Benchmark with Apache Bench (10,000 requests, 100 concurrent)
ab -n 10000 -c 100 http://example.com/

# Benchmark with wrk (modern HTTP benchmarking tool; packaged in EPEL)
sudo dnf install wrk -y
wrk -t12 -c400 -d30s --latency http://example.com/

# Monitor during test
watch -n1 'netstat -an | grep ESTABLISHED | grep :80 | wc -l'

# Check system resources
top
free -h
df -h

# Monitor Nginx request timings (requires a log_format that writes
# rt=$request_time uct=$upstream_connect_time urt=$upstream_response_time)
sudo tail -f /var/log/nginx/access.log | grep -oP '(rt=[\d.]+|uct=[\d.]+|urt=[\d.]+)'

Comparison Tables

Nginx vs Apache

Feature          Nginx                          Apache
Architecture     Event-driven, async            Thread/process-based
Memory Usage     Low (typical: 2-5 MB/worker)   Higher (typical: 10-20 MB/process)
Concurrency      Handles 1000s easily           Degrades at high concurrency (prefork)
Configuration    Simple, minimal                Complex, verbose
Performance      Excellent                      Good
Modules          Static + dynamic (1.9.11+)     Extensive, dynamic
.htaccess        Not supported                  Supported
Use Case         Static, proxy, high-traffic    PHP apps, legacy systems
Learning Curve   Moderate                       Steep
Community        Growing, modern                Mature, large

HTTP Status Codes Reference

Code   Description             Meaning
200    OK                      Request successful
301    Moved Permanently       Permanent redirect
302    Found                   Temporary redirect
304    Not Modified            Use cached version
400    Bad Request             Invalid request
401    Unauthorized            Authentication required
403    Forbidden               Access denied
404    Not Found               Resource not found
429    Too Many Requests       Rate limit exceeded
500    Internal Server Error   Server error
502    Bad Gateway             Backend unreachable
503    Service Unavailable     Server overloaded
504    Gateway Timeout         Backend timeout

Nginx Modules Comparison

Module                    Purpose                     Status
http_ssl_module           HTTPS/TLS support           Built-in
http_v2_module            HTTP/2 support              Built-in
http_gzip_module          Compression                 Built-in
http_proxy_module         Reverse proxy               Built-in
http_rewrite_module       URL rewriting               Built-in
http_map_module           Variable mapping            Built-in
http_upstream_module      Load balancing              Built-in
http_limit_conn_module    Connection limiting         Built-in
http_limit_req_module     Rate limiting               Built-in
http_geo_module           IP-based variable mapping   Built-in
http_realip_module        Real IP extraction          Built-in
stream_module             TCP/UDP proxy               Optional (--with-stream, 1.9+)
njs_module                JavaScript execution        Optional
http_geoip2_module        MaxMind GeoIP2              3rd party
ngx_cache_purge           Cache management            3rd party

Location Matching Priority

Priority   Pattern         Example                Behavior
1          = (exact)       location = /path       Exact match only
2          ^~ (priority)   location ^~ /path      Longest prefix; suppresses regex
3          ~ (regex)       location ~ \.(php)$    Case-sensitive regex (file order)
4          ~* (regex)      location ~* \.(js)$    Case-insensitive regex (file order)
5          (prefix)        location /path         Longest prefix match
6          / (default)     location /             Catch-all prefix
# Testing priority
location = /exact { return 200 "1: Exact\n"; }
location ^~ /path { return 200 "2: Priority\n"; }
location ~ \.(php)$ { return 200 "3: Case-sensitive regex\n"; }
location ~* \.(js)$ { return 200 "4: Case-insensitive regex\n"; }
location /path { return 200 "5: Prefix\n"; }
location / { return 200 "6: Default\n"; }

# Requests:
# GET /exact → 1: Exact
# GET /path → 2: Priority
# GET /path/file.php → 2: Priority (^~ on the longest prefix suppresses regex)
# GET /other.php → 3: Case-sensitive regex
# GET /file.js → 4: Case-insensitive regex
# GET /other → 6: Default

SSL/TLS Protocol Support

Protocol   Version   Release   Security
SSL 3.0    Legacy    1996      ❌ Deprecated (POODLE)
TLS 1.0    Legacy    1999      ❌ Deprecated (BEAST)
TLS 1.1    Legacy    2006      ⚠️ Avoid (weak)
TLS 1.2    Current   2008      ✅ Secure
TLS 1.3    Latest    2018      ✅✅ Most secure

Recommendation:

ssl_protocols TLSv1.2 TLSv1.3;  # Modern only

Load Balancing Method Comparison

Method              Session Persistence   Performance           Use Case
Round-robin         No                    Good                  General purpose
Least connections   No                    Best for persistent   Long-lived connections
IP Hash             Yes                   Good                  Session stickiness
Least time          No                    Best                  Performance critical*
Weighted            No                    Good                  Mixed capacity servers
Random              No                    Good                  Testing, distributed

*Nginx Plus feature only

Log Format Variables

Variable                   Value
$remote_addr               Client IP address
$remote_user               Authenticated username
$time_local                Local time in log format
$request                   Full request line (method, URI, protocol)
$status                    HTTP response status code
$body_bytes_sent           Response body size
$http_referer              HTTP Referer header
$http_user_agent           HTTP User-Agent header
$http_x_forwarded_for      X-Forwarded-For header
$request_time              Request processing time (seconds)
$upstream_addr             Upstream backend address
$upstream_status           Upstream response status
$upstream_response_time    Upstream response time (seconds)
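Most of the upstream and timing variables only appear in logs if the log_format includes them. A timing-aware format (declared in the http context) might look like this; the format name timing and log path are illustrative:

```nginx
log_format timing '$remote_addr - [$time_local] "$request" $status '
                  'rt=$request_time uct=$upstream_connect_time '
                  'urt=$upstream_response_time ua="$upstream_addr"';
access_log /var/log/nginx/timing.log timing;
```

With this format in place, the rt=/uct=/urt= grep shown in the Performance Benchmarking section has real fields to match against.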

Troubleshooting

Configuration Testing

# Check syntax
sudo nginx -t

# Show full configuration (debug)
sudo nginx -T

# Show configuration with line numbers
sudo nginx -T | nl

# Test an alternate configuration file (must be a complete config with
# events/http contexts, not a standalone server block)
sudo nginx -t -c /etc/nginx/nginx.conf

Common Errors and Solutions

Error                     Cause                    Solution
Address already in use    Port in use              sudo lsof -i :80, then stop the process
Permission denied         Wrong file ownership     sudo chown -R nginx:nginx /var/www
502 Bad Gateway           Backend down             Check backend server, firewall rules
504 Gateway Timeout       Backend slow             Increase proxy_read_timeout
413 Payload Too Large     File upload limit        Increase client_max_body_size
Connection refused        Port not listening       Check listen directive, firewall
SSL certificate problem   Cert not found/invalid   Check cert paths; openssl x509 -text -in file.crt
Too many redirects        Infinite redirect loop   Check return/rewrite rules

Debugging Logs

# Real-time error log
sudo tail -f /var/log/nginx/error.log

# Real-time access log
sudo tail -f /var/log/nginx/access.log

# Search for errors
sudo grep "error" /var/log/nginx/error.log | tail -20

# Count status codes
sudo awk '{print $9}' /var/log/nginx/access.log | \
sort | uniq -c | sort -rn

# Top 10 IPs
sudo awk '{print $1}' /var/log/nginx/access.log | \
sort | uniq -c | sort -rn | head -10

# Top 10 URLs
sudo awk '{print $7}' /var/log/nginx/access.log | \
sort | uniq -c | sort -rn | head -10

# URLs with 404 errors
sudo awk '$9 == 404 {print $7}' /var/log/nginx/access.log | sort | uniq -c

# URLs with 502/503 errors
sudo awk '$9 ~ /502|503/ {print $7}' /var/log/nginx/access.log | sort | uniq -c

# Response time analysis (assumes $request_time is logged as the last
# field; values are in seconds, not milliseconds)
sudo awk '{sum+=$NF; n++} END {if (n) print "avg:", sum/n, "s"}' \
    /var/log/nginx/access.log
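These one-liners can be rehearsed safely against a synthetic log before pointing them at production. The sample below uses the default combined format, where the status code is field 9:

```shell
# Build a three-line sample access log and count status codes
cat > /tmp/sample_access.log <<'EOF'
127.0.0.1 - - [03/Nov/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.0"
127.0.0.1 - - [03/Nov/2025:10:00:01 +0000] "GET /missing HTTP/1.1" 404 153 "-" "curl/8.0"
127.0.0.1 - - [03/Nov/2025:10:00:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.0"
EOF
awk '{print $9}' /tmp/sample_access.log | sort | uniq -c | sort -rn
```

This reports a count of 2 for status 200 and 1 for status 404, confirming that $9 is the status field before you trust the same command on real logs.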

Performance Diagnosis

# Check active connections
netstat -tulpn | grep nginx

# Count established connections on port 80
netstat -an | grep ESTABLISHED | grep :80 | wc -l

# Monitor connections in real-time
watch -n1 'netstat -an | grep ESTABLISHED | grep :80 | wc -l'

# Check memory usage
ps aux | grep nginx | grep -v grep

# Monitor worker processes with top
top -p "$(pgrep -f 'nginx: worker' | tr '\n' ',' | sed 's/,$//')"

# Check CPU usage
top -b -n1 -u nginx

# Monitor system resources during test
vmstat 1
iostat -x 1

Network Troubleshooting

# Check if port is listening
sudo ss -tulpn | grep nginx

# Check DNS resolution
nslookup example.com
dig example.com

# Test connectivity
curl -v http://example.com

# Follow redirects
curl -L http://example.com

# Test with specific IP
curl -H "Host: example.com" http://127.0.0.1

# Check SSL certificate
openssl s_client -connect example.com:443

# Verify certificate chain
openssl s_client -connect example.com:443 -showcerts

Red Hat Specific

SELinux Configuration

# Check SELinux status
getenforce
sestatus

# View Nginx contexts
semanage fcontext -l | grep nginx

# Set the Nginx domain to permissive mode (testing only; EPEL nginx
# runs in the httpd_t domain)
sudo semanage permissive -a httpd_t

# Add context for web root
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/myapp(/.*)?"

# Apply contexts
sudo restorecon -Rv /var/www/myapp

# Check file contexts
ls -Z /var/www/myapp

# Allow Nginx network connections
sudo setsebool -P httpd_can_network_connect on

# Allow Nginx to access NFS
sudo setsebool -P httpd_use_nfs on

# View SELinux denials
sudo tail -f /var/log/audit/audit.log | grep nginx

Firewall Configuration (firewalld)

# Start firewall
sudo systemctl start firewalld
sudo systemctl enable firewalld

# Add HTTP service
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https

# Or add specific ports
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp

# Add port from specific source
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" port protocol="tcp" port="8080" accept'

# Reload firewall
sudo firewall-cmd --reload

# List active rules
sudo firewall-cmd --list-all

Systemd Service Management

# View service file
cat /usr/lib/systemd/system/nginx.service

# Create custom service override
sudo mkdir -p /etc/systemd/system/nginx.service.d/
sudo tee /etc/systemd/system/nginx.service.d/custom.conf > /dev/null <<EOF
[Service]
# Custom settings
LimitNOFILE=65536
PrivateTmp=true
ProtectSystem=strict
EOF

# Reload systemd
sudo systemctl daemon-reload

# View service status
sudo systemctl status nginx

# View service logs
sudo journalctl -u nginx -n 50 -f

Log Rotation

# View logrotate configuration
cat /etc/logrotate.d/nginx

# Create custom rotation policy (first remove or edit the stock
# /etc/logrotate.d/nginx, since logrotate rejects duplicate log entries)
sudo tee /etc/logrotate.d/nginx-custom > /dev/null <<'EOF'
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 nginx nginx
    sharedscripts
    postrotate
        systemctl reload nginx > /dev/null 2>&1 || true
    endscript
}
EOF

# Test rotation
sudo logrotate -f /etc/logrotate.d/nginx-custom

# Check rotation state
sudo grep nginx /var/lib/logrotate/logrotate.status

Package Management

# List installed Nginx packages
dnf list installed | grep nginx

# Check Nginx dependencies
dnf deplist nginx

# Install Nginx with specific modules
sudo dnf install nginx-modules-* -y

# List available modules
dnf list | grep nginx-module

# View installed modules
nginx -V

# Update Nginx
sudo dnf update nginx -y

# Downgrade Nginx
sudo dnf downgrade nginx

# Remove Nginx
sudo dnf remove nginx -y

Best Practices

Configuration Management

# 1. Use version control (git config/add/commit must run inside the repo)
sudo git init /etc/nginx/
sudo git -C /etc/nginx config user.email "admin@example.com"
sudo git -C /etc/nginx config user.name "Nginx Admin"
sudo git -C /etc/nginx add -A
sudo git -C /etc/nginx commit -m "Initial Nginx configuration"

# 2. Create backups before changes
sudo cp -r /etc/nginx /etc/nginx.backup-$(date +%Y%m%d-%H%M%S)

# 3. Test configuration
sudo nginx -t

# 4. Reload gracefully
sudo systemctl reload nginx

# 5. Verify changes
sudo tail -f /var/log/nginx/error.log

# 6. Commit successful changes (run from /etc/nginx)
sudo git add -A
sudo git commit -m "Updated configuration for feature X"

# 7. Tag releases
sudo git tag -a v1.0 -m "Production release v1.0"
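The test-then-reload steps above lend themselves to a small wrapper, so a broken config can never be reloaded by accident. This is a sketch: `safe_reload` is a hypothetical name, and `NGINX_BIN`/`SYSTEMCTL` are parameterized only so the logic can be exercised without a live Nginx.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around steps 3-4: reload only if the config tests clean.
NGINX_BIN=${NGINX_BIN:-nginx}
SYSTEMCTL=${SYSTEMCTL:-systemctl}

safe_reload() {
  if ! "$NGINX_BIN" -t; then
    echo "config test failed; NOT reloading" >&2
    return 1
  fi
  "$SYSTEMCTL" reload nginx && echo "reloaded"
}

# Typical usage (as root), chaining the commit from step 6:
#   safe_reload && git -C /etc/nginx add -A && git -C /etc/nginx commit -m "deploy"
```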

Security Hardening

# /etc/nginx/conf.d/security.conf
# Note: files in conf.d are included at the http level. The directives below
# are valid there; the location block at the end must be copied into your
# server block (two separate "location /" blocks in one server is an error,
# so the method restriction and rate limit are combined).

# Hide Nginx version
server_tokens off;

# Rate limiting zone (must be defined at the http level)
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

# Implement HSTS
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

# Prevent clickjacking
add_header X-Frame-Options "SAMEORIGIN" always;

# Prevent MIME type sniffing
add_header X-Content-Type-Options "nosniff" always;

# Legacy XSS filter header (deprecated in modern browsers, but harmless)
add_header X-XSS-Protection "1; mode=block" always;

# Set Content Security Policy
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline';" always;

# Short timeouts to mitigate slow-client attacks (e.g. Slowloris)
client_body_timeout 10s;
client_header_timeout 10s;
send_timeout 10s;

# --- Place inside a server block: restrict methods and apply the rate limit ---
location / {
    limit_except GET POST HEAD {
        deny all;
    }
    limit_req zone=general burst=20 nodelay;
}
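To confirm the headers are actually emitted, capture a response with `curl -sI` and inspect it. The function below only parses header text, so it can run against any saved output; `check_security_headers` is an illustrative name, not an Nginx or curl feature.

```shell
#!/usr/bin/env bash
# Hypothetical checker: report which of the configured security headers
# are missing from an HTTP response header dump.
check_security_headers() {
  local headers="$1" missing=0 h
  for h in Strict-Transport-Security X-Frame-Options X-Content-Type-Options; do
    if ! grep -qi "^$h:" <<<"$headers"; then
      echo "MISSING: $h"
      missing=1
    fi
  done
  return $missing
}

# Typical usage:
#   check_security_headers "$(curl -sI https://example.com/)" && echo "headers OK"
```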

Monitoring and Maintenance

#!/bin/bash
# /usr/local/bin/nginx-health-check.sh

# Check if Nginx is running
if ! pgrep -x "nginx" > /dev/null; then
    echo "ERROR: Nginx is not running"
    sudo systemctl start nginx
    echo "Nginx started"
fi

# Check configuration
if ! sudo nginx -t 2>&1 | grep -q "successful"; then
    echo "ERROR: Nginx configuration invalid"
    exit 1
fi

# Check certificate expiry
openssl x509 -in /etc/letsencrypt/live/example.com/cert.pem \
    -noout -dates | grep notAfter

# Check disk space
df -h /var/log/nginx

# Check error log
sudo tail -20 /var/log/nginx/error.log | grep -i error

echo "Nginx health check completed"
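The raw `notAfter` line from the script above is awkward to alert on; converting it to days-remaining makes thresholds trivial. `days_until_expiry` is an illustrative helper, assuming GNU `date` (the default on Red Hat) for date parsing:

```shell
#!/usr/bin/env bash
# Hypothetical helper: days until the certificate in a PEM file expires.
# Assumes GNU date (standard on RHEL) for the -d parsing.
days_until_expiry() {
  local end epoch_end epoch_now
  end=$(openssl x509 -in "$1" -noout -enddate | cut -d= -f2)
  epoch_end=$(date -d "$end" +%s)
  epoch_now=$(date +%s)
  echo $(( (epoch_end - epoch_now) / 86400 ))
}

# Typical usage in the health check:
#   if [ "$(days_until_expiry /etc/letsencrypt/live/example.com/cert.pem)" -lt 14 ]; then
#       echo "WARNING: certificate expires in under two weeks"
#   fi
```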

Documentation

# /etc/nginx/conf.d/example.conf

# ============================================================
# Example.com Virtual Host Configuration
# ============================================================
# Purpose: Web server for example.com
# Owner: DevOps Team
# Last Updated: 2025-11-03
# ============================================================

upstream backend {
    # Application servers
    server app1.internal:8080 max_fails=2 fail_timeout=30s;
    server app2.internal:8080 max_fails=2 fail_timeout=30s;

    # Keep-alive connections
    keepalive 32;
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # Allow Let's Encrypt validation
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # Redirect all other traffic ($host preserves whichever name was requested)
    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server block
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Security Headers
    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Root directory
    root /var/www/example.com;
    index index.html index.htm;

    # Main location (HTTP/1.1 + empty Connection header are required for
    # upstream keepalive to take effect)
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
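After deploying this vhost, verify that the HTTP block really issues the 301. The helper below only inspects a captured response, so it can be tested against saved output; `is_https_redirect` is an illustrative name:

```shell
#!/usr/bin/env bash
# Hypothetical check: does a captured HTTP response 301-redirect to HTTPS?
is_https_redirect() {
  local resp="$1"
  grep -qE '^HTTP/[0-9.]+ 301' <<<"$resp" && grep -qiE '^location: https://' <<<"$resp"
}

# Typical usage:
#   is_https_redirect "$(curl -sI http://example.com/)" && echo "redirect OK"
```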

Production Checklist

  • Configuration tested with nginx -t
  • SSL certificates valid and renewed
  • Firewall rules configured
  • SELinux contexts set correctly
  • Log rotation configured
  • Monitoring and alerting enabled
  • Rate limiting configured
  • Security headers implemented
  • Backup procedures documented
  • Disaster recovery plan tested
  • Performance baseline established
  • Documentation updated
  • Team trained on deployment
  • Graceful reload tested
  • Health checks operational

Quick Reference

# Installation
sudo dnf install nginx -y
sudo systemctl enable --now nginx

# Testing
sudo nginx -t
sudo nginx -T

# Service Management
sudo systemctl {start|stop|restart|reload} nginx
sudo systemctl status nginx

# Configuration Locations
/etc/nginx/nginx.conf
/etc/nginx/conf.d/*.conf
# (sites-available/ and sites-enabled/ are a Debian convention;
#  create them manually if you want that layout on Red Hat)
/etc/nginx/sites-available/
/etc/nginx/sites-enabled/

# Logging
/var/log/nginx/access.log
/var/log/nginx/error.log

# Monitoring
ps aux | grep nginx
sudo ss -tulpn | grep nginx   # netstat is deprecated on RHEL 8+
sudo tail -f /var/log/nginx/{access,error}.log

# SELinux
sudo getenforce
sudo setsebool -P httpd_can_network_connect on
sudo semanage fcontext -a -t httpd_sys_content_t "/path(/.*)?"
sudo restorecon -Rv /path   # apply the new context to existing files

# Firewall
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

# Performance
ab -n 10000 -c 100 http://example.com/
wrk -t12 -c400 -d30s http://example.com/

Conclusion

You now have a comprehensive guide to Nginx for Red Hat system engineers. Key takeaways:

✅ Installation and configuration on Red Hat/CentOS
✅ Core architecture and request processing
✅ IP whitelisting with map functions
✅ Virtual hosting and SSL/TLS setup
✅ Load balancing and reverse proxy
✅ Performance optimization techniques
✅ SELinux and firewall integration
✅ Production-ready configurations
✅ Troubleshooting and monitoring

Next Steps:

  1. Deploy a test instance
  2. Implement IP whitelist for your use case
  3. Configure SSL certificates
  4. Set up monitoring and alerting
  5. Document your infrastructure
  6. Contribute to open source projects

Happy Nginx journey! 🚀

