Automating Server Setup with Ansible


There’s no doubt that building a web server from scratch is a great learning experience, and one that I recommend all WordPress developers undertake. Doing so will give you a greater understanding of the various components required to serve a website, not just the code you write. It can also broaden your knowledge of wider security and performance topics, which often get overlooked when you’re bogged down in code.

However, once you are familiar with the process, building a new server can be a time-consuming task. Wouldn’t it be great if you could automate the entire process and provision new servers in a matter of minutes with little intervention? Thankfully you can, using a tool called Ansible.

Why Ansible?

Ansible is an automation tool for provisioning, application deployment, and configuration management. Gone are the days of SSH’ing into your server to run a command or hacking together bash scripts to semi-automate laborious tasks. Whether you’re managing a single server or an entire fleet, Ansible can not only simplify the process, but save you time. So what makes Ansible so great?

Ansible is completely agentless, meaning you don’t have to install any software on your managed hosts. All commands are run through Ansible via SSH, and if Ansible needs updating you only need to update your single control machine.

Commands you execute via Ansible are idempotent, allowing you to safely run them multiple times without anything being changed, unless required. Need to ensure Nginx is installed on all hosts? Just run the command and Ansible will ensure only those that are missing the software will install it. All other hosts will remain untouched.

That’s enough of an introduction, let’s see Ansible in action.


Installation

We need to set up a single control machine which we will use to execute our commands. I’m going to install Ansible locally on OS X, but any platform with Python installed should work (excluding Windows).

First ensure that pip is installed.

sudo easy_install pip

Then install Ansible.

sudo pip install ansible

Once the installation has completed you can verify that everything installed correctly by issuing:

ansible --version

Now that Ansible is set up we need a few servers to work with. For the purpose of this article I’m going to fire up three small Digital Ocean droplets with Ubuntu 14.04.4 x64 installed. I’ve also added my public key so that it will be copied to each host during the droplet creation. This will ensure we can SSH in via Ansible using the root user without providing a password later on.


Once they’ve finished provisioning you’ll be presented with the IP addresses.


Inventory Setup

Ansible uses a simple inventory system to manage your hosts. This allows you to organise hosts into logical groups and negates the need to remember individual IP addresses or domain names. Want to run a command only on your staging servers? No problem, pass the group name to the CLI command and Ansible will handle the rest.

Let’s create our inventory, but before doing so we need to create a new directory to house our Ansible logic. Anywhere is fine, but I use my home directory.

mkdir ~/wordpress-ansible

Create a new plain text file called hosts in the new directory, with the following contents:
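A minimal inventory for our three droplets might look like the following (the IP addresses shown are placeholders from the documentation range — substitute the addresses assigned to your droplets):

```ini
[production]
192.0.2.10
192.0.2.11
192.0.2.12
```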


The first line indicates the group name and the lines that follow are our hosts, which we provisioned in Digital Ocean. Multiple groups can be created using the [group name] syntax and hosts can belong to multiple groups. For example:
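As a hypothetical illustration, an inventory with production and staging groups could look like this (addresses are placeholders, and the web host appears in two groups):

```ini
[production]
192.0.2.10
192.0.2.11

[staging]
192.0.2.20

[webservers]
192.0.2.10
192.0.2.20
```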




Now we need to instruct Ansible where our inventory file is located. Create a new file called ansible.cfg with the following contents.

[defaults]
inventory = hosts

Running Commands

With our inventory file populated we can start running basic commands on the hosts, but first let’s briefly look at modules. Modules are small plugins that are executed on the host and allow you to interact with the remote system, as if you were logged in via SSH. Common modules include: apt, service, file and lineinfile, but Ansible ships with hundreds of core modules, all of which are maintained by the core development team. Modules greatly simplify the process of running commands on your remote systems, and cut down the need to manually write shell or bash scripts. Generally, most unix commands have an associated module and if not, someone else has probably created one.

Let’s take a look at the ping module, which ensures we can connect to our hosts:

ansible production -m ping -u root

The syntax is simple: we provide the group, followed by the module we wish to execute. We also need to provide the remote SSH user (by default Ansible will attempt to connect using your local user). Assuming everything is set up correctly you should receive three success responses.


You can also run any arbitrary command on the remote hosts using the -a flag. For example, to view the available memory on each host:

ansible all -a "free -m" -u root

This time I haven’t provided a group, but instead passed all which will run the command across every host in your inventory file.


Already you should start to see how much time Ansible can save you, but running single commands on your hosts will only get you so far. Often, you will want to perform a series of sequential actions to fully automate the process of provisioning, deploying and maintaining your servers. Let’s take a look at playbooks.


Playbooks

Playbooks allow you to chain commands together, essentially creating a blueprint or set of procedural instructions. Ansible will execute the playbook in sequence and ensure the state of each command is as desired before moving onto the next. This is what makes Ansible idempotent. If you cancel the playbook execution partway through and restart it later, only the commands that haven’t completed previously will execute. The rest will be skipped.
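As a sketch, a playbook is just a YAML file listing tasks to run against a group of hosts. A minimal example might look like this (the Fail2Ban tasks are illustrative):

```yaml
- hosts: production
  user: root
  tasks:
    - name: Install Fail2Ban
      apt:
        name: fail2ban
        state: present

    - name: Ensure Fail2Ban is running
      service:
        name: fail2ban
        state: started
```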

Playbooks allow you to create truly complex instructions, but if you’re not careful they can quickly become unwieldy (think of god classes in OOP), which brings us onto roles.

Roles add organisation to playbooks. They allow you to split your complex build instructions into smaller reusable chunks, very much like a function in programming terms. This makes it possible to share your roles across different playbooks, without duplicating code. For example, you may have a role for installing Nginx and configuring sensible defaults, which can be used across multiple hosting environments.

Provisioning a Modern Hosting Environment

For the remainder of this article I’m going to show you how to put together a playbook based on Hosting WordPress Yourself. The provisioning process will take care of the following:

  • User setup
  • SSH hardening
  • Firewall setup

It will also install the following software:

  • Nginx
  • PHP
  • MariaDB
  • WP-CLI
  • Fail2Ban

You can view the completed playbook on GitHub, but I will explain how it works below.


Let’s take a look at how our playbook is organized.

├── ansible.cfg
├── hosts
├── provision.yml
└── roles
    └── nginx
        ├── handlers
        │   └── main.yml
        └── tasks
            └── main.yml

The hosts and ansible.cfg files should be familiar, but let’s take a look at the provision.yml file.

- hosts: production
  user: root
  vars:
    username: ashley
    password: $6$rlLdG6wd1CT8v7i$7psP8l26lmaPhT3cigoYYXhjG28CtD1ifILq9KzvA0W0TH2Hj4.iO43RkPWgJGIi60Mz0CsxWbRVBSQkAY95W0
    public_key: ~/.ssh/
  roles:
    - common
    - ufw
    - user
    - nginx
    - php
    - mariadb
    - wp-cli
    - ssh

It’s relatively simple. We set the group of hosts from our inventory file and the user to run the commands, specify a few variables used by our roles, and then list the roles to execute. The variables instruct Ansible which user to create on the remote hosts. We provide the username, the hashed sudo password and the path to our public key. You’ll notice that I’ve included the password here, but in reality you should look into Ansible Vault. Once each server has been provisioned you will need to SSH in with the user specified here, as the root user will be disabled.

The roles mostly map to the tasks we need to perform and the software that needs to be installed. The common role performs simple actions that don’t require a full blown role, for example installing Fail2Ban, which needs no additional configuration.

Let’s break down the Nginx role to see how roles are put together, as it contains the majority of modules used throughout the remainder of the playbook.


Handlers

Handlers contain logic that should be performed after a module has finished executing, and they work very similarly to notifications or events. For example, when the Nginx configuration has changed, run service nginx reload. It’s important to note that these events only fire when the module state has changed. If the configuration file didn’t require any update, Nginx will not be reloaded. Let’s take a look at the Nginx handler file:

- name: restart nginx
  service:
    name: nginx
    state: restarted

- name: reload nginx
  service:
    name: nginx
    state: reloaded

You will see we have two events. One to restart Nginx and the other for reloading the configuration files.


Tasks

Tasks contain the actual instructions which are to be carried out by the role. The Nginx role consists of the following steps:

- name: Add Nginx repo
  apt_repository:
    repo: ppa:nginx/development

The first command adds the development Nginx repository. Ubuntu by default tracks the stable version, which doesn’t currently support HTTP/2. The format for each command is simple, provide a name, the module we wish to execute and any additional parameters. In the case of apt_repository we just pass the repo we wish to add.

Next we need to install Nginx.

- name: Install Nginx
  apt:
    name: nginx
    state: present
    force: yes
    update_cache: yes

The command is fairly self-explanatory, but state and update_cache are worth touching upon. The state parameter indicates the desired package state; in our case we want to ensure Nginx is installed, but you could pass latest to ensure that the most current version is installed. Because we added a new repo in the prior command we also need to run apt-get update, which the update_cache parameter handles. This ensures the repo caches are updated, so that Nginx is pulled from the development PPA and not the stable one.

Next, we need to clone down the Nginx configuration files if we haven’t already.

- name: Check Nginx configs exist
  stat: path=/etc/nginx/.git
  register: git_exists

This checks for the existence of the .git directory. If it exists we can assume we’ve already performed the clone.

- name: Remove default Nginx configs
  file:
    path: /etc/nginx
    state: absent
  when: not git_exists.stat.exists

If the .git directory doesn’t exist, remove the default configuration files.

- name: Clone Nginx configs
  git:
    dest: /etc/nginx
    version: master
    force: yes
  when: not git_exists.stat.exists

Next, clone the repo if .git doesn’t exist.

- name: Symlink default site
  file:
    src: /etc/nginx/sites-available/default
    dest: /etc/nginx/sites-enabled/default
    state: link
  notify: reload nginx

The file module allows us to symlink the default site into the sites-enabled directory, which configures a catch-all virtual host and ensures we only respond to enabled sites. You will also see that we notify the reload nginx handler for the changes to take effect.

- name: Set Nginx user
  lineinfile:
    dest: /etc/nginx/nginx.conf
    regexp: "^user"
    line: "user {{ username }};"
    state: present
  notify: restart nginx

Next we use the lineinfile module to update our Nginx config. We search the /etc/nginx/nginx.conf file for a line beginning with user and replace it with user {{ username }};. The {{ username }} portion refers to the variable in our main provision.yml file. Finally we restart Nginx to ensure the new user is used for spawning processes.

That’s all there is to the Nginx role. Check out the other roles on the repo to get a feel for how they work.

Running the Playbook

To run the playbook, issue the following command:

ansible-playbook provision.yml

Assuming your hosts file is populated and the hosts are accessible, your servers should begin to provision.

The process should take roughly five minutes to complete across all three servers, which is insane when compared to the time it would take to provision them manually. Once complete, the servers are ready to house your individual sites and should provide a good level of performance and security out of the box. In a future article we may look at creating an additional playbook, which will allow you to quickly create new WordPress sites. The following will need to be handled, but it should be fairly straightforward using WP-CLI:

  • MySQL user and database creation
  • Nginx virtual host config
  • Download WordPress core files
  • Create a wp-config.php file
  • Run the WordPress installation
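Those steps could be sketched as Ansible tasks along these lines (the database name, path, domain and variables below are placeholders, not part of the completed playbook):

```yaml
- name: Create MySQL database
  mysql_db:
    name: example
    state: present

- name: Create MySQL user
  mysql_user:
    name: example
    password: "{{ db_password }}"
    priv: "example.*:ALL"
    state: present

- name: Add Nginx virtual host config
  template:
    src: site.conf.j2
    dest: /etc/nginx/sites-available/example.com
  notify: reload nginx

- name: Download WordPress core files
  command: wp core download --path=/var/www/example.com --allow-root

- name: Create wp-config.php
  command: wp core config --path=/var/www/example.com --dbname=example --dbuser=example --dbpass={{ db_password }} --allow-root

- name: Run the WordPress installation
  command: wp core install --path=/var/www/example.com --url=example.com --title="Example" --admin_user=admin --admin_password={{ admin_password }} --admin_email=admin@example.com --allow-root
```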

As I’m sure you can appreciate, Ansible is a very powerful tool and one which can save you a considerable amount of time. If you would like to learn more, check out Trellis by the Roots team.

Do you use Ansible for provisioning? What about other tools such as Puppet, Chef or Salt? Let us know in the comments below.

About the Author

Ashley Rich

Ashley is a PHP and JavaScript developer with a fondness for solving complex problems with simple, elegant solutions. He also has a love affair with WordPress and learning new technologies.