Most slow PHP websites running on a Linux VPS are not slow because the server is too small. They are slow because the default configuration was never changed from the values that ship with the software — values optimized for compatibility across all hardware, not performance for your specific server.
Nginx and PHP-FPM together handle the majority of dynamic PHP workloads (WordPress, Laravel, Symfony, Magento) on Linux VPS environments. Both are highly configurable. And both ship with conservative defaults that leave real performance on the table.
This guide gives you the specific settings that matter, the values to start with based on your server's RAM, and a way to measure before and after so you can see the actual improvement.
Before you change anything: measure your baseline
Changes without a baseline measurement are guesswork. Before touching any configuration, record your current performance:
```bash
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s | Total: %{time_total}s\n" https://yourdomain.com
```
Run this 3–5 times and note the average. A typical untuned VPS with a standard WordPress site will show TTFB between 800ms and 2 seconds. After tuning, this often drops to under 300ms without any changes to the application code.
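To avoid averaging by hand, you can wrap the sampling in a small helper. `ttfb_avg` is a hypothetical function name, not a curl built-in; it takes five samples and averages them with awk:

```shell
# Hypothetical helper: take five TTFB samples from one URL and print
# their average. Replace the URL with your own site when calling it.
ttfb_avg() {
  for i in 1 2 3 4 5; do
    curl -o /dev/null -s -w "%{time_starttransfer}\n" "$1"
  done | awk '{sum+=$1} END {printf "Average TTFB: %.3fs\n", sum/NR}'
}
# Usage: ttfb_avg https://yourdomain.com
```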
Also check your current server resources so you can tune appropriately:
```bash
free -h    # available RAM
nproc      # CPU core count
```
Step 1: Tune Nginx worker settings
Open the main Nginx configuration:
```bash
sudo nano /etc/nginx/nginx.conf
```
Find and update the worker settings:
```nginx
worker_processes auto;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}
```
What each setting does:
- `worker_processes auto` — Nginx starts one worker process per CPU core. On a 4-core VPS, this creates 4 workers, each handling connections independently.
- `worker_connections 2048` — each worker can handle up to 2048 simultaneous connections. Total capacity = workers × connections.
- `multi_accept on` — each worker accepts all pending new connections at once instead of one at a time, improving throughput under load.
- `use epoll` — Linux's efficient I/O event notification interface, much faster than older mechanisms like `select` at high connection counts. Nginx normally picks epoll automatically on modern Linux, but setting it explicitly makes the intent clear.
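The capacity arithmetic from the list above can be checked for your own box. `nginx_capacity` is a hypothetical helper, assuming the `worker_connections 2048` value set earlier:

```shell
# Hypothetical sizing helper: with worker_processes auto, total
# connection capacity is cores × worker_connections (2048 here).
nginx_capacity() {
  echo $(( $1 * 2048 ))
}

nginx_capacity "$(nproc)"   # capacity for this machine's core count
```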
Also add these in the http {} block if not present:
```nginx
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
}
```
server_tokens off removes the Nginx version from response headers — a minor security improvement with zero cost.
Step 2: Enable gzip compression
Gzip compresses text-based responses before sending them to the browser. For a typical HTML page, this reduces transfer size by 60–80%, directly improving page load time on any connection speed.
Add to your http {} block:
```nginx
gzip on;
gzip_vary on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_types text/plain text/css text/javascript application/javascript
           application/json application/xml application/rss+xml
           image/svg+xml font/woff2;
```
gzip_comp_level 5 is the sweet spot — above this, CPU cost increases faster than compression benefit. Level 9 uses roughly 5× the CPU of level 5 for only marginal size reduction.
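You can see this tradeoff yourself with the `gzip` command-line tool, which uses the same compression levels. This sketch generates a repetitive sample page and compares compressed sizes at levels 1, 5, and 9:

```shell
# Local demo of the compression-level tradeoff: generate a repetitive
# sample page, then compare compressed sizes at levels 1, 5, and 9.
yes '<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>' \
  | head -n 500 > sample.html

for level in 1 5 9; do
  printf "level %s: %s bytes\n" "$level" "$(gzip -c -"$level" sample.html | wc -c)"
done
```

Level 9 typically shaves only a few percent off level 5's output while burning far more CPU per request.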
Step 3: Add browser cache headers
Static assets — CSS, JavaScript, images, fonts — should be cached aggressively in the browser. Once a visitor loads them once, subsequent page visits load from cache with zero server requests.
In your server block:
```nginx
location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|css|js|woff|woff2|ttf|eot)$ {
    expires 30d;
    add_header Cache-Control "public, no-transform";
    access_log off;
}
```
access_log off for static assets reduces disk I/O from logging thousands of cached asset requests that carry no useful diagnostic information.
Step 4: Tune the PHP-FPM pool
PHP-FPM manages a pool of PHP worker processes. Too few workers means requests queue up during traffic spikes. Too many workers consume more RAM than your server has available, pushing it into swap, which is dramatically slower than RAM.
The correct value depends on your available RAM and how much RAM each PHP process uses.
First, check average PHP process memory usage:
```bash
# Process name varies by distro and PHP version; php-fpm8.3 is the
# Debian/Ubuntu name for PHP 8.3 — adjust to match your install.
ps -o rss= -C php-fpm8.3 | awk '{sum+=$1} END {printf "Average: %.0f MB\n", sum/NR/1024}'
```
Then calculate pm.max_children:
```
max_children = (available RAM for PHP) / (average PHP process size)
```
For a 2GB VPS running only your web app, leaving 512MB for the OS and Nginx, you have roughly 1.5GB for PHP:
- If each PHP process uses ~50MB: max_children = 30
- If each process uses ~80MB: max_children = 18
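The same arithmetic can be scripted so you can rerun it after a RAM upgrade. `calc_max_children` is a hypothetical helper, taking the RAM you intend to give PHP (MB) and the average process size (MB):

```shell
# Hypothetical sizing helper matching the formula above.
# $1 = RAM reserved for PHP in MB, $2 = average process size in MB.
calc_max_children() {
  echo $(( $1 / $2 ))
}

calc_max_children 1500 50   # ~50MB processes on a 2GB VPS → 30
calc_max_children 1500 80   # ~80MB processes → 18
```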
Edit the pool configuration:
```bash
sudo nano /etc/php/8.3/fpm/pool.d/www.conf
```
Example for a 2GB VPS with ~50MB per process (max_children is set slightly below the calculated 30 to leave headroom):

```ini
pm = dynamic
pm.max_children = 28
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 8
pm.max_requests = 500
```
What these settings control:
- `pm = dynamic` — worker count scales between min and max based on demand
- `pm.start_servers` — initial number of workers on startup
- `pm.min_spare_servers` / `pm.max_spare_servers` — keep this many workers idle and ready
- `pm.max_requests = 500` — recycle a worker after 500 requests, preventing memory leaks from accumulating indefinitely
After editing, restart PHP-FPM:
```bash
sudo systemctl restart php8.3-fpm
```
Step 5: Apply changes and verify
Test Nginx configuration before reloading:
```bash
sudo nginx -t
```
If it passes:
```bash
sudo systemctl reload nginx
```
Now run your TTFB measurement again:
```bash
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s | Total: %{time_total}s\n" https://yourdomain.com
```
Compare to your baseline. You should see a meaningful improvement, particularly on first-time page loads. For WordPress sites with a page cache plugin (WP Rocket, W3 Total Cache, or Nginx FastCGI cache), TTFB under 100ms is achievable.
Step 6: Add FastCGI cache for WordPress (optional but high-impact)
If you are running WordPress, Nginx FastCGI cache is one of the highest-impact performance improvements available. It caches PHP-generated HTML at the Nginx level, bypassing PHP-FPM entirely for cached pages.
Add to your http {} block:
```nginx
fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
```
And in your server block PHP location:
```nginx
location ~ \.php$ {
    fastcgi_cache WORDPRESS;
    fastcgi_cache_valid 200 60m;
    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache $skip_cache;
    add_header X-FastCGI-Cache $upstream_cache_status;
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
}
```
Define $skip_cache to exclude logged-in users and cart pages:
```nginx
set $skip_cache 0;

if ($request_method = POST) {
    set $skip_cache 1;
}
if ($query_string != "") {
    set $skip_cache 1;
}
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*\.php|/feed/|index.php|sitemap") {
    set $skip_cache 1;
}
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $skip_cache 1;
}
```
With FastCGI cache active, your server handles the same traffic with a fraction of the PHP-FPM load. The X-FastCGI-Cache header tells you whether a response was served from cache (HIT) or generated fresh (MISS).
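Checking the header by hand means scanning curl output for one line. `cache_status` is a hypothetical helper that extracts just the X-FastCGI-Cache value:

```shell
# Hypothetical helper: print the X-FastCGI-Cache header for a URL.
# Request the same page twice: expect MISS, then HIT.
cache_status() {
  curl -sI "$1" | tr -d '\r' | awk -F': ' 'tolower($1)=="x-fastcgi-cache" {print $2}'
}
# Usage: cache_status https://yourdomain.com
```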
Practical tuning approach
Do not change everything at once. Change one setting, measure, verify, then move to the next. This way you know which change produced which improvement and can roll back confidently if something does not behave as expected.
Keep notes. A text file with "before" and "after" TTFB values and the change that produced them is invaluable when you need to replicate a setup on another server or explain the configuration to a new team member.
Final recommendation
The configuration changes in this guide typically take under an hour on a standard Ubuntu VPS with Nginx and PHP-FPM already running. The performance improvement — often a 50–70% reduction in TTFB for uncached pages and near-zero TTFB for cached pages — is one of the best returns per hour of infrastructure work available.
For growing sites, revisit this configuration whenever you upgrade your VPS plan. More RAM means more PHP workers. More cores means more Nginx workers. The configuration needs to grow with the hardware.