Microcaching WordPress in Nginx to Improve Server Requests by 2,400%

We’ve talked a lot about WordPress performance and hosting WordPress here at Delicious Brains. A common theme amongst those articles is the importance of page caching and how it’s arguably the best way to improve the performance of your WordPress site:

…if you’ve opted to self-host or have no alternative but to use shared hosting, page caching is without a doubt the single biggest thing you can enable to make your site fly.

However, we’ve also alluded to the fact that page caching is difficult to implement on highly dynamic sites:

Performance optimization is a lot more difficult for highly dynamic sites where the content updates frequently, such as those that use bbPress or BuddyPress. In these situations, it’s often required to disable page caching on the dynamic sections of the site (the forums for example).

In these circumstances page caching still has its place but the duration of the cache has to be significantly reduced. This is known as microcaching. Microcaching is a technique where content is cached for a very short period of time, usually in the range of 1-10 seconds.

In this article, I’m going to demonstrate how to configure WordPress and bbPress with Nginx FastCGI caching. The forums will use a cache duration of 1 second and everything else will be cached for a duration of 1 hour. Let’s see if such a short cache duration can have a positive impact on performance.

Initial Benchmarks

Before we start configuring the microcache let’s see how much traffic the forums can handle without a page cache enabled. To do this I’m going to use Blitz.io, a load testing tool. For the entirety of this article, the tests will be performed against a clean WordPress install on a 1GB Digital Ocean droplet, which has been configured as per Hosting WordPress Yourself. The only difference is that PHP 7 is installed as opposed to PHP 7.1. Object caching has been configured using Redis. The variable feature in Blitz will be used to send traffic to 10 different forum topics.

Blitz configuration

Let’s run the first test using a basic Nginx config and see how it handles 100 concurrent users over a 60 second period:

Initial Blitz benchmarks

Not good! You’ll see that only 16.08% of the requests were successful; the rest resulted in a timeout or a 502 error. Based on these results, the forums can only handle around 40 concurrent users before errors start to occur. That’s only 4 users per forum topic!

Page Caching

To dramatically increase the number of requests the server can handle we need to take PHP and MySQL out of the equation for the majority of requests. To do this we will enable page caching using this improved Nginx config. This will cache all requests from non-logged-in users for 60 minutes.
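The full config is linked above, but the relevant FastCGI caching directives look roughly like this. Note that the cache path, zone name, cookie pattern, and PHP-FPM socket path below are illustrative and may differ from the linked config and your own setup:

```nginx
# Define where cached responses are stored and a shared memory zone for keys.
# Path, zone name and sizes are illustrative.
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    # ...

    set $skip_cache 0;

    # Never cache POST requests.
    if ($request_method = POST) {
        set $skip_cache 1;
    }

    # Bypass the cache for logged-in users and commenters.
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
        set $skip_cache 1;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock; # adjust to your PHP-FPM socket
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 60m; # cache successful responses for 60 minutes
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        add_header Fastcgi-Cache $upstream_cache_status;
    }
}
```

The `add_header Fastcgi-Cache $upstream_cache_status;` line is what exposes the cache status (HIT, MISS, EXPIRED, and so on) in the response headers, which makes the cache behaviour easy to inspect with cURL.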

This time I’m going to up the concurrent users to 1,000 over a 60 second period:

Page cache Blitz benchmarks

Much better! The forums were able to handle a total of 1,000 concurrent users, which is roughly 100 visitors per forum topic per second. That’s a huge increase in performance, but the problem is that the forums are no longer dynamic. New topics and replies will take up to 60 minutes to appear to visitors.

Microcaching

At this point, you could simply lower the fastcgi_cache_valid directive in Nginx to 1s and be done. However, if you’re running a mostly static site alongside a forum, you’d be unnecessarily regenerating the cache for those static pages. This increases CPU usage and decreases the number of overall requests your server can handle.

A better approach would be to use microcaching for the forums only and fall back to the default cache duration for all other requests. Luckily, you can do this in Nginx using the X-Accel-Expires header. When a request is passed from PHP with an X-Accel-Expires header it will override the value set in the fastcgi_cache_valid directive, because the header has a higher priority.

Let’s use this mechanism to our advantage. Add the following code to an MU plugin, which instructs WordPress to add the X-Accel-Expires header to any request to the forums. The ‘forums’ needle should match the ‘Forum Root’ configured in bbPress.

function add_expires_header( $headers, $wp ) {
    // Microcache any request under the forum root for 1 second.
    if ( 0 === strpos( $wp->request, 'forums' ) ) {
        $headers['X-Accel-Expires'] = 1;
    }

    return $headers;
}
add_filter( 'wp_headers', 'add_expires_header', 10, 2 );

This will cache all pages under the forum root for 1 second. You can check that the filter is working using cURL: if you issue a request to a forum topic, the Fastcgi-Cache header should have a value of EXPIRED.

curl -I http://138.68.155.236/forums/topic/test-10/
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 10 Apr 2017 12:05:02 GMT
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Pingback: http://138.68.155.236/xmlrpc.php
Link: <http://138.68.155.236/wp-json/>; rel="https://api.w.org/"
Link: <http://138.68.155.236/?p=25>; rel=shortlink
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
Fastcgi-Cache: EXPIRED
Connection: keep-alive

But if you cURL the same URL again in quick succession you should receive a HIT:

curl -I http://138.68.155.236/forums/topic/test-10/
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 10 Apr 2017 12:05:39 GMT
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Pingback: http://138.68.155.236/xmlrpc.php
Link: <http://138.68.155.236/wp-json/>; rel="https://api.w.org/"
Link: <http://138.68.155.236/?p=25>; rel=shortlink
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
Fastcgi-Cache: HIT
Connection: keep-alive

Let’s see how this impacts our benchmarks. Again, I’m going to send 1,000 concurrent users over a 60 second period:

Microcaching Blitz benchmarks

That’s much better than the initial benchmarks, but there are a lot more timeouts than desired.

Tweaking the Page Cache

To understand why so many timeouts are occurring it helps to know the various cache statuses in Nginx. The ones that we are interested in are:

  • HIT – The request is being served from the cache.
  • MISS – A cache key doesn’t exist for the current request, so it has been forwarded to PHP. The result of this request will be cached and subsequent requests will result in a HIT.
  • EXPIRED – A cache key exists, but the cache duration has elapsed. The request is forwarded to PHP which will regenerate the cache.
  • UPDATING – The cache is currently being regenerated for the current request.

Under normal circumstances EXPIRED and UPDATING don’t occur that often, but when microcaching they occur every second (based on the X-Accel-Expires value). The problem with the current Nginx configuration is that when a request is marked as UPDATING, Nginx doesn’t serve a cached version of the page and instead forwards the request to PHP. This results in potentially multiple requests hitting PHP every second while the cache is being regenerated, which is inefficient because only one request is needed to refresh the cache.

Instead, when a request is UPDATING, a cached version should be served to subsequent requests. It would also help if cache regeneration was locked down to a single request per cache key. This can be achieved by updating the fastcgi_cache_use_stale directive to include updating. You can also add the fastcgi_cache_lock directive to ensure only a single request will populate the cache. Once the changes have been made, remember to reload Nginx before testing the new config.
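Assuming your PHP location block already contains the fastcgi_cache directives, the two additions might look like this (the extra arguments to fastcgi_cache_use_stale beyond updating are optional, but commonly included):

```nginx
location ~ \.php$ {
    # ... existing fastcgi_cache directives ...

    # Serve a stale copy while the cache is being regenerated,
    # instead of forwarding every request to PHP.
    fastcgi_cache_use_stale updating error timeout invalid_header http_500;

    # Allow only a single request per cache key to populate the cache;
    # concurrent requests wait or are served the stale copy.
    fastcgi_cache_lock on;
}
```

After editing, test the config and reload Nginx with `sudo nginx -t && sudo service nginx reload`.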

Let’s run the previous test again and see how it performs:

Final Blitz benchmarks

Job done! The forums are now handling 1,000 concurrent users just like when we first enabled the page cache. The difference this time, however, is that new topics and replies appear almost instantly.

Caveats

As with most solutions, there are a few caveats:

For microcaching to be useful, you need to already have a high traffic site. Microcaching will have no positive impact if you’re not receiving multiple visits to the same endpoint within the cache duration. In this situation, it could degrade performance because you’re using CPU cycles to generate cached pages that will never be used. In these circumstances, you may want to increase the microcache duration. For example, if you’re receiving an average of 1 request per second per endpoint a cache duration of 3-5 seconds might be more appropriate.

The solution demonstrated in this article won’t help if the majority of your visitors are logged in, because the cache is bypassed for those users. It’s certainly possible to cache requests for logged-in users with some adjustments to your Nginx config, but that’s a topic for another article. You may also need to change your WordPress theme if it generates personalized content, like navigation bars with the user’s avatar. Personalized content will need to be handled via JavaScript.

Conclusion

If used correctly, microcaching can significantly increase the number of requests your server can handle without scaling hardware. In this example, we’ve gone from being able to handle 40 concurrent users to 1,000, which is a 2,400% increase! So cache everything, even if it is just for 1 second. Do you use microcaching? Do you want to learn more about microcaching for logged-in users? Let us know in the comments below.

About the Author

Ashley Rich

Ashley is a PHP and JavaScript developer with a fondness for solving complex problems with simple, elegant solutions. He also has a love affair with WordPress and learning new technologies.

  • Jamie Oastler

    Nice write-up Ashley!

    “It’s certainly possible to cache requests for logged-in users with some
    adjustments to your Nginx config, but that’s a topic for another
    article. You may also need to change your WordPress theme if it
    generates personalized content, like navigation bars with the user’s
    avatar. Personalized content will need to be handled via JavaScript.”

    That is an article I really look forward to reading!

  • Hi,

    I would really like to know more about micro caching for logged in users. I have a BuddyPress / BBPress / Woocommerce community where the whole community is logged in.

    Either an article or can you do this work for us?

    Dale.
    https://mydisabilitymatters.club

    • Dan Knauss
    • Hi Dale, I recently helped a client speed up their BuddyPress site considerably. Switched them to nginx with php7 and used Gator cache https://wordpress.org/plugins/gator-cache/ which supports BuddyPress. Also set up a custom nginx configuration to bypass PHP and serve directly from cache for non-logged in users, the logged in users get processed by PHP and get cache from Gator too.

      • Bowe Frankema

        We’ve built a near identical setup for Dale at WeFoster, which was based on your recommendations Mike 🙂

    • Hi Dale,

      An article is in the works, stay tuned!

  • Lee Peterson

    Oy, this article could not be more timely. So many issues with fine-tuning caching on a huge site with a ton of custom functionality. Thanks for sharing, Ashley!

  • Incredible read man, will be digging deep in your repo this weekend.

  • Bowe Frankema

    Great tutorial and very insightful for those who want to get the most out of NGINX. Especially the quest to improving performance for logged-in users has been something our team has been working on for our WordPress Community Hosting. From per-user cached files to micro caching to just object caching, there’s many techniques to explore. I’d love to read more about your ideas Ashley! For now we’ve settled on full page cache for visitors & Redis + PHP7 for logged in users!

  • Michael Møller

    Since blitz.io is closing and are not taking in any new customers, could you recommend another load testing tool?