Automating Server Setup with Ansible

There’s no doubt that building a web server from scratch is a great learning experience, and one that I recommend all WordPress developers undertake. Doing so will give you a greater understanding of the various components required to serve a website, not just the code you write. It can also broaden your knowledge on wider security and performance topics, which often get overlooked when you’re bogged down in code.

However, once you are familiar with the process, building a new server can be a time-consuming task. Wouldn’t it be great if you could automate the entire process and provision new servers in a matter of minutes with little intervention? Thankfully you can, using a tool called Ansible.

Why Ansible?

Ansible is an automation tool for provisioning, application deployment, and configuration management. Gone are the days of SSH’ing into your server to run a command or hacking together bash scripts to semi-automate laborious tasks. Whether you’re managing a single server or an entire fleet, Ansible can not only simplify the process, but save you time. So what makes Ansible so great?

Ansible is completely agentless, meaning you don’t have to install any software on your managed hosts. All commands are run through Ansible via SSH, and if Ansible needs updating, you only need to update your single control machine.

Commands you execute via Ansible are idempotent, allowing you to safely run them multiple times without anything being changed, unless required. Need to ensure Nginx is installed on all hosts? Just run the command and Ansible will ensure only those that are missing the software will install it. All other hosts will remain untouched.
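For example, an ad-hoc command like the following (a minimal sketch; it assumes the production inventory group we’ll set up later in this article) will only install Nginx on hosts where it’s missing and leave the rest untouched:

ansible production -m apt -a "name=nginx state=present" -u root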

That’s enough of an introduction, let’s see Ansible in action.

Installation

We need to set up a single control machine which we will use to execute our commands. I’m going to install Ansible locally on OS X, but any platform with Python installed should work (excluding Windows).

First ensure that pip is installed.

sudo easy_install pip

Then install Ansible.

sudo pip install ansible

Once the installation has completed you can verify that everything installed correctly by issuing:

ansible --version

Now that Ansible is set up, we need a few servers to work with. For the purpose of this article I’m going to fire up three small Digital Ocean droplets with Ubuntu 14.04.4 x64 installed. I’ve also added my public key so that it will be copied to each host during droplet creation. This will ensure we can SSH in via Ansible using the root user without providing a password later on.

[Image: ansible-droplets-provision]

Once they’ve finished provisioning you’ll be presented with the IP addresses.

[Image: ansible-droplets]

Inventory Setup

Ansible uses a simple inventory system to manage your hosts. This allows you to organise hosts into logical groups and negates the need to remember individual IP addresses or domain names. Want to run a command only on your staging servers? No problem, pass the group name to the CLI command and Ansible will handle the rest.

Let’s create our inventory, but before doing so we need to create a new directory to house our Ansible logic. Anywhere is fine, but I use my home directory.

mkdir ~/wordpress-ansible

Create a new plain text file called hosts in the new directory, with the following contents:

[production]
139.59.170.69
139.59.170.70
139.59.170.79

The first line indicates the group name and the lines that follow are our hosts, which we provisioned in Digital Ocean. Multiple groups can be created using the [group name] syntax and hosts can belong to multiple groups. For example:

[staging]
139.59.170.69

[production]
139.59.170.70
139.59.170.79

[wordpress]
139.59.170.69
139.59.170.70
139.59.170.79

Now we need to instruct Ansible where our inventory file is located. Create a new file called ansible.cfg with the following contents:

[defaults]
inventory = hosts

Running Commands

With our inventory file populated we can start running basic commands on the hosts, but first let’s briefly look at modules. Modules are small plugins that are executed on the host and allow you to interact with the remote system as if you were logged in via SSH. Common modules include apt, service, file and lineinfile, but Ansible ships with hundreds of core modules, all of which are maintained by the core development team. Modules greatly simplify the process of running commands on your remote systems and cut down the need to manually write shell scripts. Generally, most Unix commands have an associated module, and if not, someone else has probably created one.

Let’s take a look at the ping module, which ensures we can connect to our hosts:

ansible production -m ping -u root

The syntax is simple: we provide the group, followed by the module we wish to execute. We also need to provide the remote SSH user (by default Ansible will attempt to connect using your local user). Assuming everything is set up correctly, you should receive three success responses.

[Image: ansible-ping]
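If you’d rather not pass the remote user with every command, you can optionally set a default in ansible.cfg (the examples in this article pass -u root explicitly):

[defaults]
inventory = hosts
remote_user = root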

You can also run any arbitrary command on the remote hosts using the -a flag. For example, to view the available memory on each host:

ansible all -a "free -m" -u root

This time I haven’t provided a group, but instead passed all, which will run the command across every host in your inventory file.

[Image: ansible-memory]
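Host patterns can also be combined. For example, assuming the staging, production and wordpress groups from earlier, the following targets only hosts that belong to both the production and wordpress groups:

ansible 'production:&wordpress' -m ping -u root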

Already you should start to see how much time Ansible can save you, but running single commands on your hosts will only get you so far. Often, you will want to perform a series of sequential actions to fully automate the process of provisioning, deploying and maintaining your servers. Let’s take a look at playbooks.

Playbooks

Playbooks allow you to chain commands together, essentially creating a blueprint or set of procedural instructions. Ansible will execute the playbook in sequence and ensure the state of each command is as desired before moving on to the next. This is what makes Ansible idempotent. If you cancel the playbook execution partway through and restart it later, only the commands that haven’t completed previously will execute. The rest will be skipped.

Playbooks allow you to create truly complex instructions, but if you’re not careful they can quickly become unwieldy (think of god classes in OOP), which brings us onto roles.

Roles add organisation to playbooks. They allow you to split your complex build instructions into smaller reusable chunks, very much like a function in programming terms. This makes it possible to share your roles across different playbooks, without duplicating code. For example, you may have a role for installing Nginx and configuring sensible defaults, which can be used across multiple hosting environments.

Provisioning a Modern Hosting Environment

For the remainder of this article I’m going to show you how to put together a playbook based on Hosting WordPress Yourself. The provisioning process will take care of the following:

  • User setup
  • SSH hardening
  • Firewall setup

It will also install the following software:

  • Nginx
  • PHP
  • MariaDB
  • WP-CLI
  • Fail2Ban

You can view the completed playbook on GitHub, but I will explain how it works below.

Organization

Let’s take a look at how our playbook is organized.

├── ansible.cfg
├── hosts
├── provision.yml
└── roles
    └── nginx
        ├── handlers
        │   └── main.yml
        └── tasks
            └── main.yml

The hosts and ansible.cfg files should be familiar, but let’s take a look at the provision.yml file.

---
- hosts: production
  user: root
  vars:
    username: ashley
    password: $6$rlLdG6wd1CT8v7i$7psP8l26lmaPhT3cigoYYXhjG28CtD1ifILq9KzvA0W0TH2Hj4.iO43RkPWgJGIi60Mz0CsxWbRVBSQkAY95W0
    public_key: ~/.ssh/id_rsa.pub
  roles: 
   - common
   - ufw
   - user
   - nginx
   - php
   - mariadb
   - wp-cli
   - ssh

It’s relatively simple. We set the group of hosts from our inventory file and the user to run the commands, specify a few variables used by our roles, and then list the roles to execute. The variables instruct Ansible which user to create on the remote hosts. We provide the username, the hashed sudo password and the path to our public key. You’ll notice that I’ve included the hashed password here, but in reality you should look into Ansible Vault. Once each server has been provisioned you will need to SSH in with the user specified here, as the root user will be disabled.
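If you need to generate a compatible hashed password, one approach (a minimal sketch, assuming a recent version of the passlib library is installed on your control machine via pip install passlib) is:

python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.hash(getpass.getpass()))"

To avoid committing sensitive values in plain text, ansible-vault encrypt provision.yml will encrypt the entire file.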

The roles mostly map to the tasks we need to perform and the software that needs to be installed. The common role performs simple actions that don’t require a full-blown role, for example installing Fail2Ban, which needs no additional configuration.
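As a rough sketch, the Fail2Ban portion of the common role could be as simple as the following task (the actual role in the repo may differ slightly):

- name: Install Fail2Ban
  apt:
    name: fail2ban
    state: present
    update_cache: yes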

Let’s break down the Nginx role to see how roles are put together, as it contains the majority of modules used throughout the remainder of the playbook.

Handlers

Handlers contain logic that should be performed after a module has finished executing, and they work very similarly to notifications or events. For example, when the Nginx configuration files have changed, run service nginx reload. It’s important to note that these events are only fired when the module state has changed. If the configuration file didn’t require any update, Nginx will not be reloaded. Let’s take a look at the Nginx handler file:

---
- name: restart nginx
  service: 
    name: nginx
    state: restarted

- name: reload nginx
  service: 
    name: nginx
    state: reloaded

You will see we have two events: one to restart Nginx and the other to reload the configuration files.

Tasks

Tasks contain the actual instructions that are carried out by the role. The Nginx role consists of the following steps:

---
- name: Add Nginx repo
  apt_repository:
    repo: ppa:nginx/development

The first command adds the development Nginx repository. Ubuntu by default tracks the stable version, which doesn’t currently support HTTP/2. The format for each command is simple: provide a name, the module we wish to execute and any additional parameters. In the case of apt_repository we just pass the repo we wish to add.

Next we need to install Nginx.

- name: Install Nginx
  apt:
    name: nginx
    state: present
    force: yes
    update_cache: yes

The command is fairly self-explanatory, but state and update_cache are worth touching upon. The state parameter indicates the desired package state; in our case we want to ensure Nginx is installed, but you could pass latest to ensure that the most current version is installed. Because we added a new repo in the prior command, we also need to run apt-get update, which the update_cache parameter handles. This ensures the package caches are refreshed, so that Nginx is pulled from the development PPA and not the stable one.

Next, we need to clone the Nginx configuration files if we haven’t already.

- name: Check Nginx configs exist
  stat: path=/etc/nginx/.git
  register: git_exists

This checks for the existence of the .git directory. If it exists, we can assume we’ve already performed the clone.
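If you’re ever unsure what a registered variable contains, the debug module will print it out. This task isn’t part of the role; it’s purely a troubleshooting aid you could drop in temporarily:

- name: Show registered result
  debug:
    var: git_exists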

- name: Remove default Nginx configs
  file:
    path: /etc/nginx
    state: absent
  when: not git_exists.stat.exists

If the .git directory doesn’t exist, remove the default configuration files.

- name: Clone Nginx configs
  git:
    repo: https://github.com/A5hleyRich/wordpress-nginx.git
    dest: /etc/nginx
    version: master
    force: yes
  when: not git_exists.stat.exists

Next, clone the repo if .git doesn’t exist.

- name: Symlink default site
  file: 
    src: /etc/nginx/sites-available/default
    dest: /etc/nginx/sites-enabled/default
    state: link
  notify: reload nginx 

The file module allows us to symlink the default site into the sites-enabled directory, which configures a catch-all virtual host and ensures we only respond to enabled sites. You will also see that we notify the reload nginx handler for the changes to take effect.

- name: Set Nginx user
  lineinfile: 
    dest: /etc/nginx/nginx.conf
    regexp: "^user"
    line: "user {{ username }};"
    state: present
  notify: restart nginx 

Next we use the lineinfile module to update our Nginx config. We search the /etc/nginx/nginx.conf file for a line beginning with user and replace it with user {{ username }};. The {{ username }} portion refers to the variable in our main provision.yml file. Finally we restart Nginx to ensure the new user is used for spawning processes.

That’s all there is to the Nginx role. Check out the other roles on the repo to get a feel for how they work.

Running the Playbook

To run the playbook, issue the following command:

ansible-playbook provision.yml

Assuming your hosts file is populated and the hosts are accessible, your servers should begin to provision.
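Two flags worth knowing at this point: --check performs a dry run without changing anything (though tasks that rely on registered results from earlier steps may not report accurately), and --limit restricts the run to a subset of your inventory. For example:

ansible-playbook provision.yml --check
ansible-playbook provision.yml --limit staging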

The process should take roughly 5 minutes to complete for all three servers, which is insane when compared to the time it would take to provision them manually. Once complete, the servers are ready to house your individual sites and should provide a good level of performance and security out of the box. In a future article we may look at creating an additional playbook, which will allow you to quickly create new WordPress sites. The following will need to be handled, but it should be fairly straightforward using WP-CLI (there’s a rough sketch after the list):

  • MySQL user and database creation
  • Nginx virtual host config
  • Download WordPress core files
  • Create a wp-config.php file
  • Run the WordPress installation
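As a rough sketch of the WP-CLI portion (the paths and credentials here are placeholders, and the database user and Nginx virtual host would still be handled by Ansible modules such as mysql_user and template):

cd /home/ashley/example.com
wp core download
wp config create --dbname=example --dbuser=example --dbpass=secret
wp core install --url=example.com --title="Example" --admin_user=admin --admin_password=secret --admin_email=admin@example.com

Note that wp config create is wp core config on older WP-CLI releases.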

As I’m sure you can appreciate, Ansible is a very powerful tool and one which can save you a considerable amount of time. If you would like to learn more, check out Trellis by the Roots team.

Do you use Ansible for provisioning? What about other tools such as Puppet, Chef or Salt? Let us know in the comments below.

    I’m eyeing Google Cloud Engine + Bitnami and/or Docker as a potential next move. I’ve already worked Pressmatic into my default workflow, so a 100% Docker-based workflow is beginning to make a lot of sense. And might I say, I love Pressmatic. Since reading Delicious Brains’ earlier post about building an add-on for Pressmatic, I tried it out immediately and decided to do it some justice and nuke my whole OS one weekend so it’d have a nice, clean space to work from.