Ecommerce hosting failures tend to happen at the worst possible moment. You send an email campaign to 20,000 subscribers, traffic spikes 10x in 30 minutes, and your checkout page starts timing out. Sales stop. Customers don't retry — they go somewhere else.
This guide covers what separates hosting that survives traffic spikes from hosting that buckles under them, and how to build an ecommerce infrastructure that holds up when you need it most.
Why ecommerce is harder on hosting than regular websites
A standard blog or business site is mostly read operations — visitors browse pages, the server retrieves content, done. Ecommerce is different:
- Product pages require database joins across product tables, pricing, stock levels, and variation data
- Search and filtering runs complex database queries with every keystroke
- Cart and checkout require write operations — creating sessions, updating stock, processing payments
- Concurrent users multiply all of the above
A shared hosting server that handles 1,000 daily visitors on a blog starts struggling at 200 concurrent visitors on a WooCommerce store. The workload profile is completely different.
The hosting metrics that directly affect store revenue
Time to First Byte (TTFB). This is the server's response time before any content reaches the browser. For ecommerce, a TTFB above 800ms on product pages measurably increases mobile bounce. Google's research found that as page load time goes from 1 second to 3 seconds, the probability of a bounce increases by 32%. For mobile shoppers on LTE connections, TTFB is often the dominant speed factor.
Checkout reliability under load. Checkout is a write-heavy operation. When your server is under load, write operations fail before read operations do. Test your checkout with simulated concurrent users (k6 or Locust are free tools) before your next big campaign. A checkout that works fine for 5 concurrent users may fail for 50.
Database query performance. WooCommerce is notoriously database-heavy. Product listing pages with filters and sorting run multiple queries per page load. On a large catalog, this scales badly unless you have proper indexing, Redis object caching, and a database server with adequate buffer pool.
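The "adequate buffer pool" point translates directly into database configuration. As a rough sketch — the values below are illustrative assumptions for an 8 GB VPS running MySQL/MariaDB alongside PHP, not tuned recommendations:

```ini
# /etc/mysql/conf.d/woocommerce.cnf — illustrative starting values for an 8 GB VPS
[mysqld]
innodb_buffer_pool_size = 2G    ; keep hot product/order data in memory
innodb_log_file_size    = 512M
max_connections         = 150
slow_query_log          = 1     ; surface the queries your filters generate
long_query_time         = 1     ; log anything slower than 1 second
```

The slow query log is the cheapest diagnostic here: run it during normal traffic for a day and the worst filter/sort queries on your catalog will show up by themselves.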
Uptime during promotional windows. If your site goes down during a Black Friday campaign, you're not just losing those sales — you're potentially losing customers permanently. Uptime guarantees matter more for ecommerce than almost any other site type.
Shared hosting vs VPS vs cloud for ecommerce
Shared hosting for ecommerce works for small stores with under ~100 orders/month and no significant traffic spikes. WooCommerce with 50 products and a basic checkout works on quality shared hosting. Add 500 products, variable pricing, and a sale campaign and it starts struggling.
VPS for ecommerce is the right tier for most growing stores. Dedicated CPU and RAM means your site's performance isn't affected by other sites on the same hardware. A 4–8GB VPS with Redis, properly tuned PHP-FPM, and Nginx handles WooCommerce well up to significant traffic levels.
Cloud hosting for ecommerce adds horizontal scaling and higher availability. When traffic spikes, cloud infrastructure can provision additional resources automatically. This is the right choice when unpredictable traffic spikes are a business reality — flash sales, viral moments, influencer promotions.
What to configure on your ecommerce VPS or cloud server
Full-page cache for product pages
Most caching plugins bypass cache for logged-in users and cart/checkout pages — which is correct behavior. But product listing pages, category pages, and individual product pages for logged-out visitors should be cached aggressively.
Nginx FastCGI cache configuration with WooCommerce bypass:
```nginx
set $skip_cache 0;

# Don't cache logged-in users
if ($http_cookie ~* "wordpress_logged_in") {
    set $skip_cache 1;
}

# Don't cache cart, checkout, account pages
if ($request_uri ~* "/cart|/checkout|/my-account|/wc-api") {
    set $skip_cache 1;
}

# Don't cache when items are in cart
if ($cookie_woocommerce_items_in_cart) {
    set $skip_cache 1;
}
```
This setup caches product pages for anonymous visitors (the majority of traffic) while ensuring cart and checkout always get fresh responses.
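Note that `$skip_cache` only takes effect once it's wired into the actual cache directives. A sketch of the pieces that go with it — zone name, path, and lifetimes here are placeholders to adapt:

```nginx
# In the http block: define the cache zone (path and zone name are placeholders)
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WOOCACHE:100m inactive=60m;

# In the PHP location block: use the zone and honor $skip_cache
fastcgi_cache WOOCACHE;
fastcgi_cache_valid 200 301 60m;
fastcgi_cache_bypass $skip_cache;   # don't serve cached pages to these requests
fastcgi_no_cache $skip_cache;       # don't store responses for these requests
```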
Redis for WooCommerce object caching
WooCommerce makes heavy use of WordPress's object cache for product data, session data, and transients. Without Redis, this data is re-fetched from the database on every page load.
```bash
sudo apt install redis-server php8.2-redis -y
sudo systemctl enable --now redis-server
```
Then install and activate the Redis Object Cache plugin in WordPress and enable the object cache drop-in. The reduction in database queries per page load on an active WooCommerce store is dramatic — typically 40–60%.
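If Redis isn't running on the defaults, the Redis Object Cache plugin reads its connection settings from constants in `wp-config.php` — for example (the host/port values below are assumptions for a same-server setup, and the key salt is a hypothetical prefix):

```php
// wp-config.php — connection settings read by the Redis Object Cache plugin
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_DATABASE', 0 );          // use a separate DB index per site if you host several
define( 'WP_CACHE_KEY_SALT', 'mystore_' ); // hypothetical prefix to avoid key collisions
```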
PHP-FPM worker sizing for concurrent orders
During a promotion, concurrent checkout sessions multiply PHP-FPM worker demand. Size your workers based on expected peak concurrent users, not average users:
```ini
; /etc/php/8.2/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 30     ; Adjust based on RAM
pm.start_servers = 8
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 200    ; Restart workers periodically to prevent memory leaks
```
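A quick way to sanity-check `pm.max_children`: divide the RAM you can spare for PHP by the average worker footprint. Measure your own worker size with `ps`; the numbers below are assumptions for an 8 GB VPS:

```shell
# Rough pm.max_children estimate — both values are assumptions, measure your own
avail_mb=6144    # RAM left for PHP after Nginx, MySQL, Redis (assumed 8 GB VPS)
worker_mb=80     # typical WooCommerce PHP-FPM worker RSS (check with: ps -o rss -C php-fpm8.2)
echo $(( avail_mb / worker_mb ))   # prints 76
```

If the result is far above what you've configured, you have headroom; if it's below, your current `pm.max_children` can push the server into swap under load.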
Session storage for cart reliability
By default, WooCommerce stores sessions in the database. Under high concurrent load, this creates database write contention. Move sessions to Redis:
WooCommerce has no built-in Redis session toggle — by default sessions live in the `wp_woocommerce_sessions` table. Moving them to Redis requires a plugin (for example, WP Redis with session support, or a dedicated WooCommerce session-storage plugin).
Capacity planning for ecommerce events
Don't discover your server's limits during a sale. Test beforehand:
```bash
# Install k6 for load testing
sudo snap install k6

# Simple load test script
cat > loadtest.js << 'EOF'
import http from 'k6/http';

export default function () {
  http.get('https://yourdomain.com/shop/');
}
EOF

k6 run --vus 50 --duration 30s loadtest.js
```
Run with 50 virtual users for 30 seconds. Watch your server's CPU, RAM, and PHP-FPM worker count during the test. If workers max out or response times spike above 2 seconds, you need more capacity before the real event.
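To make the worker count visible while the test runs, PHP-FPM's built-in status page helps — enable it in the pool config (the path below is a placeholder; expose it only to localhost via your web server):

```ini
; /etc/php/8.2/fpm/pool.d/www.conf — enable the status endpoint
pm.status_path = /fpm-status   ; reports active/idle workers and queue depth
```

During the test, watch `active processes`: if it approaches `pm.max_children`, requests are queuing and you're at capacity.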
WooCommerce store sizing guide
| Store size | Orders/month | vCPU | RAM | Storage |
|------------|--------------|------|-----|---------|
| Small store | Under 200 | 2 | 4 GB | 50 GB NVMe |
| Medium store | 200–1,000 | 4 | 8 GB | 100 GB NVMe |
| Growing store | 1,000–5,000 | 6–8 | 16 GB | 200 GB NVMe |
| High volume | 5,000+ | 8+ | 32 GB | 400+ GB NVMe |
These are starting points. Add Redis to all configurations. Size up if you run regular promotions that spike traffic significantly beyond baseline.
CDN for product images
Product images are typically 60–80% of ecommerce page weight. A CDN that caches product images at edge locations close to your customers dramatically reduces load on your origin server and improves image load speed globally.
Cloudflare's free tier handles this adequately for most stores. For stores with large catalogs or international audiences, Cloudflare Pro or a dedicated image CDN (Cloudinary, Imgix) provides better image optimization.
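Whichever CDN sits in front, it can only hold images as long as the origin allows — long cache lifetimes on image responses let the edge (and browsers) do their job. A hedged nginx sketch, with an illustrative extension list:

```nginx
# Long-lived caching headers for product images (extensions are illustrative)
location ~* \.(jpg|jpeg|png|gif|webp|avif|svg)$ {
    expires 30d;
    add_header Cache-Control "public";
    access_log off;   # cut log noise from asset requests
}
```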
Ecommerce hosting migration checklist
When moving a live store to new hosting:
- [ ] Export full product database + orders separately (not just a file backup)
- [ ] Verify payment gateway settings in new environment
- [ ] Test checkout end-to-end in staging before DNS switch
- [ ] Test order confirmation emails — SMTP settings differ between hosts
- [ ] Lower DNS TTL 24h before cutover
- [ ] Schedule migration during lowest-traffic window (usually 2–4am on a Tuesday)
- [ ] Keep old hosting active for 48h after switch
- [ ] Monitor order processing for 24h after cutover
The order confirmation email test is one people forget. Payment gateway credentials and SMTP settings often need updating in the new environment.
Bottom line
For ecommerce, hosting isn't infrastructure — it's sales infrastructure. A server that buckles during your busiest campaign isn't just a technical problem. It's a revenue problem.
Get the basics right: dedicated resources (VPS or cloud), Redis object caching, properly sized PHP-FPM workers, and a load test before your next big event. That combination handles most growth scenarios without drama.
HostAccent cloud and VPS hosting is built for stores that need to stay live when it matters most — with the performance and support to back it up.









