Hosting WordPress Yourself Part 4 – Server Monitoring and Caching

Updated on Feb 22, 2017.

In the previous installment of this series, I walked you through how to quickly set up sites using WP-CLI. However, no consideration was given to how well the server would handle traffic under heavy load. As a system administrator, it is also imperative that you can quickly determine the current utilisation of system resources. This will help you gauge how many sites your server can potentially handle, and when you may need to plan the scaling of hardware or cloud resources. In this post I will guide you through the process of tackling all of those issues, as well as setting up server monitoring and alerts.

Server Monitoring with New Relic

Monitoring server performance is a relatively simple process using New Relic, which allows you to see the server’s current health status from a web-based dashboard. There’s also a smartphone app available, so you can monitor server performance on the go, if you’re into that sort of thing. On top of that, server monitoring is a free service provided by New Relic, so go ahead and sign up now!

Once signed up, you’ll be presented with the following screen.

New Relic

Click the ‘Servers’ option and then select Ubuntu as the platform, which will reveal the installation instructions.

New Relic

It’s time to SSH to your server.

ssh ashley@pluto.ashleyrich.com

You must run the installation as the root user. To do so, issue the su root command and enter the root user’s password. Follow the installation instructions listed on the New Relic site, which should result in the following message.
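The exact commands are listed on the New Relic site and may have changed since this post was written, but at the time the Ubuntu install looked roughly like this (run as root; YOUR_LICENSE_KEY is a placeholder for the key New Relic displays):

# Add the New Relic apt repository and signing key, then install the server monitor agent
echo 'deb http://apt.newrelic.com/debian/ newrelic non-free' > /etc/apt/sources.list.d/newrelic.list
wget -O- https://download.newrelic.com/548C16BF.gpg | apt-key add -
apt-get update
apt-get install newrelic-sysmond

# Set your license key and start the agent
nrsysmond-config --set license_key=YOUR_LICENSE_KEY
/etc/init.d/newrelic-sysmond start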

New Relic - Installation Complete

After a few minutes your server will show up on the New Relic dashboard.

New Relic - Server Overview

Clicking on the server will reveal more detailed statistics.

New Relic - Server Statistics

Server Alerts

Server alerts are extremely useful, as they remove the need to manually check the server’s health. By default New Relic will notify you when the server CPU, disk I/O, memory or disk space reaches a certain threshold. You can tweak these thresholds, enable server downtime alerts and configure how you wish to be notified.

Click the gear icon followed by ‘Manage alert policy’.

New Relic Alerts

From here you can see the current settings and adjust them if necessary.

New Relic Alerts

The default values are a good starting point; however, I like to enable downtime alerts, which notify you when the server stops responding for more than 5 minutes.

New Relic Alerts

That’s all there is to server monitoring from a resource utilisation perspective. With alerts enabled, you don’t even need to check the dashboard to know that the server isn’t running out of system resources.

Initial Benchmarks

Although it isn’t necessary for you to perform this step, I want to show you how this setup handles traffic prior to any performance optimizations. It’s difficult to simulate real web traffic; however, it is possible to send a large number of concurrent requests to a server and track the responses. This gives you a rough indication of the amount of traffic a server can handle, and also allows you to measure the performance gains once you’ve implemented the optimizations.

The server I have set up for this series is a 512MB Digital Ocean Droplet. All of the tests in this post will be performed using Blitz, which is a cloud-based performance and load testing tool. Blitz makes it super easy to send a large number of virtual users to your server in incremental stages and from various geographic regions. It also provides you with useful analytics, which can help to determine where bottlenecks are occurring.

The test I will perform sends an increasing number of concurrent users to the server over a 60 second period, starting with 1 concurrent user and scaling to 200 concurrent users by the end of the test.

Here goes…

Full results here.

Based on the results the server can theoretically handle 2,885,760 requests a day with an average response time of 411ms. However, issues arose at around 142 concurrent users, which resulted in over 44% of visitors within the 60 second time period receiving a timeout or error. Not good at all!
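If you don’t have access to Blitz, you can get a rough (and less realistic) approximation from another machine using ApacheBench. This is purely an illustrative sketch, not the exact test used above; swap in your own URL:

# Install ApacheBench and fire 5,000 requests with up to 200 concurrent connections
sudo apt-get install apache2-utils
ab -n 5000 -c 200 http://ashleyrich.com/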

With that out of the way, it’s time to optimize!

Object Cache

An object cache stores potentially computationally expensive data such as database query results and serves them from memory. This greatly improves the performance of WordPress as there is no longer a need to query the database on every page load for information already stored within the object cache.

Redis is the latest and greatest when it comes to object caching. However, popular alternatives include Memcache and Memcached.

To install Redis, issue the following commands.

sudo apt-get install redis-server
sudo apt-get install php-redis

It’s also a good idea to set a maximum memory usage. As I’m only using a 512MB server, I set mine to 64MB.

sudo nano /etc/redis/redis.conf

Uncomment the line # maxmemory and set the desired value.

maxmemory 64mb

Save the configuration and restart both Redis and PHP-FPM.

sudo service redis-server restart
sudo service php7.1-fpm restart
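Before moving on, it’s worth a quick sanity check that Redis is running, that the memory limit stuck and that the PHP extension is loaded. These commands aren’t part of the setup, just optional verification:

redis-cli ping                   # should return PONG
redis-cli config get maxmemory   # should return the value set in redis.conf
php -m | grep redis              # confirms the php-redis extension is loaded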

In order for WordPress to make use of Redis as an object cache you need to install the Redis Object Cache plugin by Till Krüss.

Object Cache - Plugins Screen

Once installed and activated, go to Tools > Redis to enable the object cache.

Object Cache - Enable

This is also the screen where you can flush the cache if required.

Object Cache - Flush

I’m not going to run the benchmarks again as the results won’t dramatically change. Although object caching reduces the average number of database queries on the front page from 22 to 2, the database server is still being hit. Establishing a database connection on every page request is one of the biggest bottlenecks within WordPress.

The benefit of object caching can be seen when you look at the average database query time, which has decreased from 2.1ms to 0.3ms. The average query times were measured using Query Monitor.
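If you’re curious whether the object cache is actually being used, Redis exposes hit and miss counters. This is a rough sanity check rather than a proper benchmark:

# Counters should climb as WordPress reads from the object cache
redis-cli info stats | grep -E 'keyspace_hits|keyspace_misses'

# Watch commands arrive in real time (Ctrl+C to stop; best avoided on busy production servers)
redis-cli monitor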

If you check New Relic you’ll see how the server performs under heavy load. The spikes correspond to each test performed using Blitz. Clicking through to the processes shows that PHP is responsible for the high CPU usage.


New Relic

In order to further improve performance and decrease server resource usage you need to bypass PHP altogether. Enter page caching…

Page Cache

Although an object cache can go a long way to improving your WordPress site’s performance, there is still a lot of unnecessary overhead in serving a page request. For many sites, content is rarely updated. It’s therefore inefficient to load WordPress, query the database and build the desired page on every single request. Instead, you should serve a static HTML version of the requested page.

Nginx allows you to automatically cache a static HTML version of a page using the FastCGI module. Any subsequent calls to the requested page will receive the cached HTML version without ever hitting PHP.

Setup requires a few changes to your Nginx server block, so open your virtual host file.

sudo nano /etc/nginx/sites-available/ashleyrich.com

Add the following line before the server block, ensuring that you change the fastcgi_cache_path and keys_zone values to match your own site. You’ll notice that I store my cache within the site’s directory, on the same level as the logs and public directories.

fastcgi_cache_path /home/ashley/ashleyrich.com/cache levels=1:2 keys_zone=ashleyrich.com:100m inactive=60m;

You need to instruct Nginx not to cache certain pages. The following rules ensure that admin screens, pages for logged-in users and a few others are not cached. They should go above the first location block.

set $skip_cache 0;

# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
    set $skip_cache 1;
}   
if ($query_string != "") {
    set $skip_cache 1;
}   

# Don’t cache uris containing the following segments
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
    set $skip_cache 1;
}   

# Don’t use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $skip_cache 1;
}

Next, within the PHP location block add the following directives.

fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache ashleyrich.com;
fastcgi_cache_valid 60m;

Notice how the fastcgi_cache directive matches the keys_zone set before the server block. In addition to changing the cache location, you can also specify the cache duration by replacing 60m with the desired duration in minutes. The default of 60 minutes is a good starting point for most people. Once happy, save the configuration.

Next you need to add the following directives to your nginx.conf file. The first instructs the FastCGI module on how to generate key names and the second adds an extra header to server responses so that you can easily determine whether a request is being served from the cache.

sudo nano /etc/nginx/nginx.conf

Add the following below the Gzip settings.

##
# Cache Settings
##

fastcgi_cache_key "$scheme$request_method$host$request_uri";
add_header Fastcgi-Cache $upstream_cache_status;

Save the configuration and restart Nginx.

sudo service nginx restart
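If Nginx refuses to restart, the built-in configuration test will usually pinpoint the offending directive:

sudo nginx -t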

Now when you visit the site and view the response headers, you should see an extra header.

Nginx - Response Headers

The possible return values are:

  • HIT – Page cached
  • MISS – Page not cached (refreshing should cause a HIT or BYPASS)
  • BYPASS – Page cached but not served (admin screens or when logged in)
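You can also check the header from the command line, which is handy when debugging the skip cache rules. Adjust the URL and scheme to suit your own site:

# The first request should return MISS, the second HIT (logged in users will see BYPASS)
curl -sI http://ashleyrich.com/ | grep -i fastcgi-cache
curl -sI http://ashleyrich.com/ | grep -i fastcgi-cache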

The final step is to install the Nginx Cache plugin, again by Till Krüss. This will automatically purge the FastCGI cache whenever your WordPress content changes. You can also manually purge the cache from the WordPress dashboard.

Once installed, navigate to Tools > Nginx and define your cache zone path. This should match the path you specified in the fastcgi_cache_path directive within your Nginx virtual host file.

Final Benchmarks

With the optimizations out of the way it’s time to perform a final benchmark. This time I’m going to up the maximum concurrent users to 1,000.

Full results here.

Not bad at all! The average response time has dropped to 44ms, with a new theoretical limit of 40,632,480 requests per day, which is pretty phenomenal for a $5/mo server.

Now when you take a look at New Relic you will notice that the CPU spike is much less severe. You’ll also notice that PHP is no longer causing the spike as the heavy lifting is handled by Nginx.


New Relic

Performance optimization is a lot more difficult on highly dynamic sites where the content updates frequently, such as those that use bbPress or BuddyPress. In these situations it’s often necessary to disable page caching for the dynamic sections of the site (the forums, for example). This is achieved by adding additional rules to the skip cache section within the Nginx server block, which forces those requests to always hit PHP and generate the page on the fly. Doing so will often mean you have to scale hardware sooner, thus increasing server costs.

Caching Plugins

At this point you may be wondering why I chose this route instead of installing a plugin such as W3 Total Cache or WP Super Cache. Firstly, not all plugins include an object cache and for those that do you will often need to install additional software on the server (Redis for example) in order to take full advantage of the feature. Secondly, the static pages generated by these plugins are often more computationally expensive to generate as the processing is done by PHP. Using Nginx and the FastCGI module will often lower CPU and memory usage, which is ideal on servers with less system resources. Thirdly, have you seen the settings screens within these plugins recently!?

Finally, if you prefer to run WordPress with fewer plugins you can actually replace the Redis Object Cache plugin with a drop-in file. You can also remove the Nginx Cache plugin; however, the page cache will no longer automatically purge when new posts or comments are published. This isn’t a problem for completely static sites, as you can always delete the contents of the cache folder (which you created earlier) if you need to flush the cache.
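For reference, flushing the page cache by hand is simply a case of emptying the directory defined in the fastcgi_cache_path directive. Adjust the path to suit your own site:

# Remove all cached pages; Nginx will rebuild the cache on subsequent requests
rm -rf /home/ashley/ashleyrich.com/cache/*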

That concludes this post. Next time I’ll guide you through the process of handling outgoing email, configuring cron and scheduling automatic backups. Until then!

About the Author

Ashley Rich

Ashley is a PHP and JavaScript developer with a fondness for solving complex problems with simple, elegant solutions. He also has a love affair with WordPress and learning new technologies.

  • Renato Alves

    A fantastic post! Really helpful! =D

    Could you provide some suggestions in dealing with sites that use BuddyPress or bbPress? What possibly one can do to improve the speed in such dynamic sites?

    Thanks again! Very fond of the series! =)

    • Opcode and object caching are going to be your biggest assets. Although I mention that page caching would probably need to be disabled on highly dynamic sites, this isn’t always true. While caching for 60 minutes isn’t feasible, you should be able to safely cache most pages for a few minutes. If you do go this route, you’ll have to pay close attention to which pages are bypassed and expect the skip cache rules to grow quite large.

      On really high traffic BuddyPress/bbPress sites you’re probably going to need to look at modifying your theme or adding custom plugins. This step will vary from site to site, but Mark Jaquith wrote a plugin which demonstrates some of the concepts to make WordPress more cacheable. https://github.com/markjaquith/cache-buddy

      • Renato Alves

        Thanks! Caching for a few minutes seems ideal in this case. I’m gonna try it and see how the server behaves.

        Thanks a lot for the tips. =)

      • Hi Ashley, FANTASTIC series!
        We are in the midst of adding a buddypress community to a blog site that has a pretty big facebook following. Very excited but I am scared to death of the hit our servers will take now that we cannot rely on Varnish to insulate (currently using managed wp hosting) traffic once a good portion of our users are now logged in (members). I understand all the usual optimizations (like what you demonstrated here), but it seems that while you can’t really truly cache dynamic pages, you can cache static elements of those pages and try to reduce delivery latency when sending the extra dynamic data needed at the network edge for those pages. I am looking very closely at cloudflare’s Railgun and to a lesser extent edge side includes (esi) as options.

        You mention in the above comment that for high traffic BP sites, theme modifications and adding custom plugins may be needed. For theme modifications are you referring to ESI type markups? If not, if you could shed a little more light or point me to some resources that do, that would be great.
        I’ve also checked out the cache buddy plugin you linked to and discovered the same guy also has a very interesting plugin for using APC to add more persistence to WP object cache. We are leaning toward using Redis/memcache in place of APC, but may look into it if his method is better for WP.
        Any other plugins or resources you can share buddypress optimization/scaling would be greatly appreciated.
        Thanks.

  • In the beginning of this series I felt interested but now you have my attention. Awesome Piece 🙂

  • Joaquín González

    Why not use a reverse proxy cache like varnish? I use on a small DO server and works beautifully.

    • There’s a couple of reasons I don’t use Varnish (that’s not to say it’s a bad choice):

      1. It’s an extra moving part that needs installing and configuring vs adding a few additional directives to your existing Nginx config. Why complicate the stack by adding more software that takes up valuable server resources?

      2. Varnish doesn’t support SSL termination out of the box. You can certainly add support, but that requires you to add yet another layer in front of Varnish to perform the termination. So your stack ends up looking like:

      Pound/Nginx
      Varnish
      Nginx
      PHP5-FPM

      vs

      Nginx
      PHP5-FPM

      • Lu

        Great article Ashley, really useful. I think it is important to note that since “most” servers run on Apache, and not NGINX – a very brief mention of page caching options would be really useful. Which for WordPress often means a plugin or for those are more daring to use a reverse proxy which opens up a world of possibility with a learning curve to it.

        • Cheers Lu! You are correct, but as you’ve probably gathered this series is very opinionated and I didn’t really want to get sidetracked by discussing alternative software.

          But, I do plan on writing a post in the future which will compare various caching options including Varnish, caching plugins and Nginx Fast-CGI.

  • Daniel Tara

    Thank you for the in-depth documentation. I’ve never used Nginx’s page cache before and was wondering whether the pages are served from memory or from disk. If the disk’s I/O is hit on every request then that’s almost as bad as hitting the MySQL server, unless the server uses an SSD.

    • Thanks Daniel. All Linode and Digital Ocean servers use SSD, which is how the cache is served in this post. It’s simple enough to serve the cache from memory, but in my testing (on this setup) there wasn’t any noticeable difference. I believe all Ubuntu installs on Digital Ocean come with a ramdisk partition by default, usually found under `/var/run`. Saving the cache to that location will serve it from memory, so it may be worth doing if you have ample RAM available. Just change the following directive and update the `RT_WP_NGINX_HELPER_CACHE_PATH` constant within your wp-config.php file.

      `fastcgi_cache_path /var/run/ashleyrich.com/cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;`

  • Kalen Johnson

    Great article, I like how you start with the most base of cache, opcode, and work your way up, along with benchmarks along the way.

    I’ve used Varnish as well as PHP page caching solutions in the past, but it seems like Nginx can handle this mighty fine.

  • Nice post!
    Would this setup work with SSL? Also would be nice to see how you manage the W3 Total cache/Super cache, maybe even WP rocket plugin and which settings you use that go well with your setup. Maybe part 5? 🙂

    It would be nice to have a gist of the final Nginx file just so we don’t break anything 😉

    • Thanks Darjan!

      This works perfectly with SSL, which will be covered in a future post along with SPDY.

      FastCGI caching is an alternative to using plugins such as W3 Total Cache, WP Super Cache and WP Rocket. There should be no need to install those plugins with this setup. W3 Total Cache and WP Rocket also allow you to configure a CDN, but I would use something more lightweight such as https://wordpress.org/plugins/amazon-s3-and-cloudfront/

      And remember to minify your CSS and JS!

  • channeleaton

    Thanks for the excellent series, Ashley. It’s been a huge help for me to fill in my server admin knowledge gaps.

    I know the series is focused on single-user servers but I was wondering if you had any insight into running this type of caching with different FPM pools. Each pool is run by a different server user. The cache never seems to get hit and the cache directory that was created is owned by the www-data user. Recursively changing the owner of the cache directory seems to result in a 500 error.

    Any ideas?

    • Remember that the cache is created and served at a web server level (Nginx), therefore utilising multiple PHP pools shouldn’t have any effect. The cache directories will be owned by the www-data user if you haven’t changed the user that Nginx runs as, as www-data is the default. I’m guessing the 500 errors are due to chowning the directories, resulting in the www-data user no longer having read permissions (possibly?).

      Are you certain that the cache isn’t being hit? If not, this is more than likely an issue with the Nginx configuration, as opposed to problems with PHP pools.

      The only issue that you may have is if you need to clear the cache from within PHP. As the directories are owned by the www-data user, the PHP pools probably won’t be able to delete the cache contents. You can overcome this by adding all of the PHP pools to a common group and then assigning group write permissions to the cache directory.
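      Something along these lines should do it (the group name, pool users and path here are purely illustrative):

      # Hypothetical example: give every PHP pool user group write access to the cache directory
      sudo groupadd cache-purge
      sudo usermod -a -G cache-purge pool-user-one
      sudo usermod -a -G cache-purge pool-user-two
      sudo chgrp -R cache-purge /home/ashley/ashleyrich.com/cache
      sudo chmod -R g+w /home/ashley/ashleyrich.com/cache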

  • Paul Vincent

    Ashley this tutorial series is amazing, thanks so much!

    I got only cache MISSES until I added

    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

    to my nginx file in sites-available.

    • Thanks Paul.

      Are you using a plugin or something to add those cache headers?

      • Paul Vincent

        Ashley you are right, it has to be a plugin or something in the theme (opening a test “raw” php file in the directory does not have these headers) but I could not figure out what it is exactly…

    • Popsantiago

      Hello @disqus_qhYs1CxDrU,
      Same for me =/ Did you find the solution? Thx
      @A5hleyRich: I use Hide My WP to change wp-admin to dashboard, do I need to change the cache line of wp-admin? Thx

      • If your dashboard isn’t mapped to ‘wp-admin’ you will need to update the exclude rules with the new location.

        • Popsantiago

          ok Thx.

  • webdeme

    Thanks Ashley for this excellent resource.

    However, when I tried to create multiple server blocks, I got this error.

    nginx: [emerg] duplicate zone “WORDPRESS” in /etc/nginx/sites-enabled/obooks.com.au:1
    nginx: configuration file /etc/nginx/nginx.conf test failed

    Where am I going wrong. Thanks in advance

    • If you are hosting multiple sites on the server each key will need to be unique. I usually base them on the domain name:

      `fastcgi_cache_path /home/ashley/ashleyrich.com/cache levels=1:2 keys_zone=ashleyrich.com:100m inactive=60m;`

      Blog post updated to reflect this.

      • webdeme

        Thanks Ashley,

        I now get this error.

        nginx: [emerg] “fastcgi_cache_key” directive is duplicate in /etc/nginx/sites-enabled/obooks.com.au:2
        nginx: configuration file /etc/nginx/nginx.conf test failed

        This is the first 2 rows of the file.

        fastcgi_cache_path /home/charaka/obooks.com.au/cache levels=1:2 keys_zone=OBOOKS:100m inactive=60m;
        fastcgi_cache_key “$scheme$request_method$host$request_uri”;

        • Apologies, I forgot to mention that when multiple sites are using the FastCGI cache you need to move the fastcgi_cache_key directive to the global nginx.conf file (/etc/nginx/nginx.conf) and remove all other occurrences.

          Take a look at this updated Gist https://gist.github.com/A5hleyRich/6713da272b5b33f62b7c#file-nginx-conf-L55

          Post updated. Cheers for the heads up!

          • webdeme

            Thank you
            It works.
            Your guide is better than the DO help guide.

  • Hello;

    I guess there is no need for cache plugin then?

    • The only other thing you may want to install is a CDN. There’s plenty of options: MaxCDN, Amazon CloudFront or CloudFlare. We have WP Offload S3 available, which allows you to serve your assets from Amazon CloudFront.

  • “Add the following lines before the server block” am confuse here, do you mean before server {
    server_name or location ~ .php$ {

  • or can you publish the full ashleyrich.com conf file on Github

  • [emerg] “fastcgi_cache_key” directive is duplicate in /etc/nginx/sites-enabled/healthable.org:1

  • seems everything is working fine now, so i will leave it for few days before migrating the other sites to the account

  • LEMON

    These posts have been super helpful, Thank you!!

    I’m hoping you could suggest something for me to try. I’m stuck trying to get the page cache to work. When I test/try to restart nginx I get:

    invalid “levels” “levels=1:2 keys_zone=awebsite.us:32m inactive=60m” in /etc/nginx/sites-enabled/awebsite.us:1
    nginx: configuration file /etc/nginx/nginx.conf test failed

    I’m running 3 sites on one VPS and have altered the keys_zone and moved the cache_key to the nginx.conf etc. Any suggestions would be super appreciated!

  • Alex

    Great article and very helpful! Needed a good solution to add caching on my VPS and your tutorials really helped big time since i didn’t want to install caching plugins on my wordpress sites. Great stuff. Thank you!

  • John

    Any reason why you’re not using the Pagespeed module for Nginx? It’s already included in the nginx-custom install.

    • No reason. It’s not actually a module I’ve played with yet, although it does look promising.

  • Ashley, Thanks for the useful information. Well, I have a question. An error occurred with nginx: [emerg] unknown directive “fastcgi_cache_purge” in /etc/nginx/sites-enabled/seototo:71 when trying to apply your writing. Let me know what should I do. Thank you.

  • kumaraswami

    Thank you for the in-depth article about hosting WP & caching. Whenever I set up multiple websites:

    The first website works fine with the Nginx Helper & Redis Object Cache plugins.
    Whenever I install the Redis Object Cache plugin on the second website, the website redirects to the first website.

    What is the solution for this?

  • Jesse

    For anyone else looking to install Redis on PHP 7 (i.e. Ubuntu 16.04) it’s now just:

    sudo apt-get install redis-server
    sudo apt-get install php-redis

  • Andrew Jonathan

    Hi, I am using Ubuntu 16, Nginx, PHP-fpm 7 .. will this tutorial still work or would I mess up my site ?!

  • Ryan Prentiss

    Ideas for installing fastcgi_cache_purge on CentOS?

  • Steven

    nice one, although I had to insert add_header Fastcgi-Cache $upstream_cache_status; into my php location server block within sites-available in order for it to show response headers…

    I noticed that my page speed went from 400ms to 700ms after adding these, is that expected?

    One last thing I notice that nginx helper plugin gives you the option of redis OR nginx fastcgi cache, should they no longer be used in tandem?

  • That was really the most in-depth post and tutorial on WordPress caching and installation I have seen online.
    What would determine whether to store FastCGI cached pages on disk or in memory?

  • efecto cocuyo

    I have a big problem with nginx cache. Site is not getting updated when I post a new post.
    When I flush nginx cache and redis cache, the site is not getting updated to show last posts.
    What could be the reason for that?

  • Daniel Kerkow

    Thanks for the detailed write up. I managed to implement the fastcgi_cache for the high load WP blog that I run, but the only page that is being cached (as shown by the header X-Cache: HIT) is the index. All other pages get a BYPASS, which indicates that one of the “set $no_cache” rules applied, but I can’t find the relevant strings that are defined in these rules. What am I missing?

  • George K

    It would be nice if it’s possible not to redirect the static files (JPEG, CSS, JS) and serve them directly from Nginx. If someone has the config for this, please share.

  • Lulla Tone

    Hello Ashley, thanks for this guide.

    I’m having a problem regarding the Nginx Cache: the path gets deleted automatically each time I make any changes or switch themes, and I’m getting “Cache Zone Path” does not exist.

    This is my nginx.conf http://pastebin.com/GVPJPbbU
    This is my Virtual Host conf http://pastebin.com/7p7qAu4L

    Both nginx & php-fpm run under the same user (userchanged).
    Any help would be appreciated, and sorry for my bad English

    • Does the Nginx Cache plugin have the correct path set? It should be /home/userchanged/cache

      • Lulla Tone

        It does.
        I just tried in another domain/vps and still getting the same error, not really sure why the path gets deleted, even after I create the cache directory 2 times at least.

        • Hi, I have the exact same problem. Every time the content changes the path-to-cache/cache folder gets deleted. No idea why this happens

  • Lary Stucker

    Really appreciate this series. I’ve updated my nginx site to use memcached and nginx fastcgi_cache, disabled W3 Total Cache, and confirmed they’re working. However, in my testing time to first byte seems to perform much faster with W3 Total Cache. Any suggestion on how to troubleshoot it?

  • Ashley, I went to New Relic website, looks like it does not offer free service any more, although I did see the free trial offer there.

  • davidbitton

    Do you have any perf suggestions for php-fpm?

  • nilblank

    FYI – New Relic no longer seems to offer a free tier (at least not obviously) and Blitz is shuttering early in 2018 and no longer accepting new accounts.

  • So this may not be a concern for you with ^^ all this but how would you handle Minify, browser caching, etc..? Would you still recommend a plugin like w3tc or WP Super Cache?

  • jacky.krakauer

    Hi Ashley and thanks for the tutorial.
    I have to use Cloudflare, because it is required by the client. Would I need to make any changes to the fastcgi setup from your article? Or should I rather use a plugin like S3TC?

  • Carlos Augusto dos Santos

    I configured my WordPress website following this tutorial a while ago and I am satisfied. I noticed that the tutorial is being updated and this is very good. I received an email from New Relic informing me that free accounts will be closed in November. Can you tell me about a free monitoring service that works with this configuration? Thank you!

    • Ruy Fialho

      I have a related question. The Servers menu option does not exist anymore. It was replaced by Infrastructure. They say that Infrastructure is better. It would be nice to have an update here. Thanks anyway. I did my configuration using the principles applied in Servers. It has been an invaluable resource.

  • Thanks for the informative series Ashley. I am a newbie and your articles have been immensely helpful.

    I have followed your instructions to the t, yet I am getting the following message on activating the Nginx cache plugin:
    “Filesystem API could not be initialized.”

    Could you please help me resolve this?

    • That error usually occurs when the PHP user doesn’t have write permissions for the cache directory. Are both Nginx and PHP running under the same user?

      • Thanks Ashley. I think they are, but is there a way to check that?

      • I checked the default pool configuration file and nginx conf file and both are running under the same user.

  • I have this message every time: “Cache could not be purged. ‘Cache Zone Path’ does not exist.” The cache folder is deleted every time the cache is purged… I can’t create the folder every time!

  • Chris Leonard

    This is a great post. Got this all done in just under one hour flawlessly. One thing. Do you use any minify plugins? If so which one please. Also I guess no cdn?

  • Chris Leonard

    Hi, thanks for the posts, they are great and very helpful. One issue I have is that the cache directory keeps getting deleted (I think when I make changes) and I have to recreate it. When I check the nginx plugin I get this:

    “Cache Zone Path” does not exist.

    I then recreate the directory and everything seems ok until a few days and its gone again.

    Any ideas?

    • The cache directory should automatically get recreated, presuming Nginx has write access on the parent directory.

  • Jhonis

    For anyone following this tutorial using an Ubuntu 16.04 Droplet who is stuck on the Server Monitoring part, like I was:

    You can use Digital Ocean’s built-in agent (do-agent) and set up alerts.

    For new Droplets, the Agent can be installed during the creation process by selecting Monitoring under the Select additional options section of the create page.

    For existing Droplets, the Agent can be installed by logging into the Droplet and typing:


    curl -sSL https://agent.digitalocean.com/install.sh | sh

    more info here:
    https://www.digitalocean.com/products/monitoring/