Scaling Laravel Using AWS Elastic Beanstalk Part 3: Setting up Elastic Beanstalk

In my last article we set up the supporting services we would require for our Laravel app once we deploy it to the Elastic Beanstalk architecture. We created a VPC to keep our infrastructure secure, we created a MySQL database in RDS, and we set up ElastiCache for our Redis cache. So now that our Laravel app is decoupled and our supporting services are in place, it’s finally time to deploy our app to Elastic Beanstalk.

As with all AWS services, there are multiple ways to interact with Elastic Beanstalk depending on what you are doing. I'm a visual person, so I tend to use a GUI when I can; however, in this article we will be using the Elastic Beanstalk Command Line Interface (EB CLI), so go ahead and install the EB CLI now if you don't already have it.

Elastic Beanstalk Environments

The first thing to know about Elastic Beanstalk is that each application can have multiple environments. There are two types of EB environments: a web environment (default) for serving web pages and a worker environment for running cron and queue jobs.

If you recall from my first article, I explained a bit about the worker environment and the fact that it runs its own queue daemon that interacts with SQS. We installed a package to help Laravel interface with the Amazon queue daemon, so we're already set up to work with EB worker environments.

For demonstration purposes our application is going to have one web environment and one worker environment so that Laravel can use a queue if it needs to. However, if your app doesn’t use queues then you can ignore the worker environment.

Create the Application

Let’s get started by creating the application and our initial environment in Elastic Beanstalk. As I said above we’re going to use the EB CLI for this part. In the root of your Laravel app run:

eb init

Select the region you want to deploy EB in. If you are asked for your AWS credentials at this point, make sure you have an IAM user set up with the correct permissions (AWSElasticBeanstalkFullAccess). Continue following the prompts, answering the questions based on your preferences.

Once this process is complete we can check that our application has been created by navigating to the AWS EB console and making sure our application exists.

It’s worth noting at this point that you should see a new .elasticbeanstalk directory in your app folder. The EB CLI uses this directory to store some config values but it should not be added to your project’s Git repository. The EB CLI should add the following lines to your .gitignore for you:

    # Elastic Beanstalk Files
    .elasticbeanstalk/*
    !.elasticbeanstalk/*.cfg.yml
    !.elasticbeanstalk/*.global.yml
Before we move on to creating our environments, we need to look at how we tell EB to provision them. We could configure the environments manually through the console or the EB CLI; however, it's much easier to manage environments by creating configuration files that EB uses to provision them. You may have heard of other configuration frameworks (or "IT automation" frameworks, to give them their proper title) such as Puppet or Ansible, but EB has its own framework known as .ebextensions.

.ebextensions is simply a directory you create in your app that contains YAML configuration files that set options or run commands in your EB environment. The .ebextensions docs give you an idea of what these config files can be used for; suffice it to say they can do almost anything, from setting environment variables to creating files and installing packages.
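As a tiny illustration of the file format, here's a sketch of a config file that installs an OS package on the instance (the package name is just an example; any yum package works the same way):

```yaml
# .ebextensions/example.config
packages:
  yum:
    htop: []
```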

One thing that isn’t overly clear from the docs is that almost everything in .ebextensions is executed in alphabetical order so it’s common practice to prefix config files and commands with an integer to specify the order it should be run in. Let’s get started by creating a config file to hold our environment variables in .ebextensions/01-environment.config:

    option_settings:
      aws:elasticbeanstalk:container:php:phpini:
        document_root: /public
        memory_limit: 512M
      aws:elasticbeanstalk:sqsd:
        HttpPath: /worker/queue
      aws:elasticbeanstalk:application:environment:
        APP_ENV: production
        APP_KEY: base64:44cyzPQ+pYFpDz6VLgH3G9jRGXOmTvQe7mUq/PAqDWU=
        DB_HOST: {DB_HOST}
        DB_DATABASE: testdb
        DB_USERNAME: testdb
        DB_PASSWORD: {DB_PASSWORD}
        REDIS_HOST: {REDIS_HOST}
        AWS_ACCESS_KEY_ID: {AWS_KEY}
        AWS_SECRET_ACCESS_KEY: {AWS_SECRET}
        AWS_S3_REGION: us-east-1
        AWS_S3_BUCKET: eb-example
        AWS_SQS_REGION: us-east-1
        AWS_SQS_PREFIX: {AWS_SQS_PREFIX}
        AWS_SQS_QUEUE: {AWS_SQS_QUEUE}

We’re doing a few things here: specifying some config values for the PHP environment and queue worker, and setting our global environment variables. These environment variables are the values you would normally put in your .env file for this environment.

The DB_HOST and REDIS_HOST values come from the services we created in the last article. Fill in the rest of the values in {} with your own. The AWS key/secret can come from the IAM user we created earlier (make sure to give the IAM user permissions for S3 and SQS so they work properly). We don't yet have values for AWS_SQS_PREFIX and AWS_SQS_QUEUE, as the queue will be created for us automatically when we create our worker environment, so we can leave these for now and fill them in later.
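Note that the APP_KEY shown above is a throwaway example; don't reuse it. Laravel's key is 32 random bytes, base64-encoded, so besides running `php artisan key:generate --show` you can produce an equivalent value with OpenSSL:

```shell
# Generate a value suitable for APP_KEY: 32 random bytes, base64-encoded
# (the same shape `php artisan key:generate` produces).
echo "base64:$(openssl rand -base64 32)"
```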

Next we need to create a .ebextensions/02-deploy.config to specify some commands we need to run during deployment:

    container_commands:
      01-migrations:
        command: "php artisan migrate --force"

    files:
      "/opt/elasticbeanstalk/hooks/appdeploy/post/91_deploy.sh":
        mode: "000755"
        owner: root
        group: root
        content: |
          #!/usr/bin/env bash
          echo "Making /storage writeable..."
          chmod -R 777 /var/app/current/storage

          if [ ! -f /var/app/current/storage/logs/laravel.log ]; then
              echo "Creating /storage/logs/laravel.log..."
              touch /var/app/current/storage/logs/laravel.log
              chown webapp:webapp /var/app/current/storage/logs/laravel.log
          fi

          if [ ! -d /var/app/current/public/storage ]; then
              echo "Creating /public/storage symlink..."
              ln -s /var/app/current/storage/app/public /var/app/current/public/storage
          fi

      "/opt/elasticbeanstalk/tasks/publishlogs.d/laravel-logs.conf":
        mode: "000755"
        owner: root
        group: root
        content: |
          /var/app/current/storage/logs/*.log

      "/etc/httpd/conf.d/ssl_rewrite.conf":
        mode: "000644"
        owner: root
        group: root
        content: |
          RewriteEngine on
          RewriteCond %{HTTP:X-Forwarded-Proto} ^http$
          RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R=307,L]

First we specify some container_commands to run once our files have been deployed (note these are different from normal commands, which run before the files are deployed). We don't need to run composer install, as EB installs Composer dependencies for us automatically. Note that we prefix commands with an integer (01-) to control the order in which they run, since commands also execute in alphabetical order.

Next we create a file containing a simple bash script that will be executed as an EB deployment hook. Sadly there aren't really any proper docs for EB deployment hooks, but this article explains them pretty well. Essentially we're telling EB to create a script with the given content in /opt/elasticbeanstalk/hooks/appdeploy/post/, which runs after the app has been deployed. The script makes the storage directory writeable, creates the storage/logs/laravel.log file with the correct ownership, and symlinks the public/storage directory.
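You can sanity-check the hook's logic locally before deploying by pointing it at a scratch directory instead of /var/app/current. This is a dry run under assumed paths (swapped for /tmp), with the chown skipped since the webapp user only exists on the EB instance:

```shell
#!/usr/bin/env bash
# Dry run of the deploy hook logic against a scratch directory.
APP=/tmp/app-demo
mkdir -p "$APP/storage/logs" "$APP/storage/app/public" "$APP/public"

chmod -R 777 "$APP/storage"

if [ ! -f "$APP/storage/logs/laravel.log" ]; then
    echo "Creating storage/logs/laravel.log..."
    touch "$APP/storage/logs/laravel.log"
    # (chown webapp:webapp skipped; that user only exists on the EB instance)
fi

if [ ! -d "$APP/public/storage" ]; then
    echo "Creating public/storage symlink..."
    ln -s "$APP/storage/app/public" "$APP/public/storage"
fi

ls -ld "$APP/public/storage"
```

Both `if` guards make the script safe to run repeatedly, which matters because the hook fires on every deployment.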

Next we’re creating a config file that enables our Laravel logs to be rotated and published to S3. This can be handy when debugging issues later and also stops your server space filling up with logs.

Finally we’re creating an Apache config file to redirect any non-HTTPS traffic to HTTPS based on the X-Forwarded-Proto header. If you remember from my first article we configured Laravel to use these headers as EB is going to be setting up a load balancer for us. If you’re following along with this tutorial and not planning on adding a custom domain with HTTPS (more on that later) then you can remove this section from the config.

Create the Environments

Once we have our .ebextensions config files in place it’s time to actually create our environments. Back in the EB CLI run the following command:

eb create --vpc.id {VPCID} --vpc.elbpublic --vpc.elbsubnets {VPCELBSUBNETS} --vpc.ec2subnets {VPCEC2SUBNETS} --vpc.securitygroups {VPCSG}

Here we’re specifying which VPC to launch our EB environment into. The info is from the VPC we created in the last article. The vpc.elbsubnets should be the comma separated public subnets from the VPC and the vpc.ec2subnets should be the comma separated private subnets from the VPC.

Again follow the instructions selecting the relevant options. For our purposes a classic load balancer is fine.

Once complete, the EB CLI will package up our application and send it to Elastic Beanstalk. The CLI will output event info as the deployment progresses. You can also head over to the console to monitor the progress. If anything goes wrong with the deployment you will be notified in the events and you may have to tweak some settings and run eb deploy again until you get it up and running.

Next we run the same command again, this time adding -t worker to set up our worker environment. Apart from the name, the environment settings should be identical:

eb create -t worker --vpc.id {VPCID} --vpc.elbpublic --vpc.elbsubnets {VPCELBSUBNETS} --vpc.ec2subnets {VPCEC2SUBNETS} --vpc.securitygroups {VPCSG}

All being well, both environments should be green in the EB console and if you visit the web environment URL you should see your Laravel site.

Configuration Tweaks

At this point I find it helpful to go into each environment in the EB console and make any changes you feel are relevant. For example, you can set up when auto scaling is triggered, change the size of your EC2 instances, enable health checking, enable HTTPS, etc. One change I recommend making manually is changing the APP_ENV environment variable in the worker environment to distinguish it from the web environment's APP_ENV. This way you can check the app environment in your code if you want something to only run on the worker servers.
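As a trivial sketch of the idea (shown in shell here; in Laravel itself you'd check `app()->environment()`), code can branch on the variable:

```shell
# Pretend this script runs on a worker instance; on EB the variable is
# injected for real via the environment configuration.
APP_ENV=worker

if [ "$APP_ENV" = "worker" ]; then
    echo "Running worker-only task"
else
    echo "Skipping: not a worker instance"
fi
```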

Another thing we need to do at this point is go back to our .ebextensions/01-environment.config and fill in the AWS_SQS_PREFIX and AWS_SQS_QUEUE values that we left earlier. You can find these values by going to the worker environment in the EB console and clicking “View Queue” beside the environment name. Once you have updated your config file run eb deploy to deploy the update.
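Once you have the queue details, the two entries in .ebextensions/01-environment.config end up looking something like this (the account ID and queue name below are placeholders; the prefix is the standard SQS URL format for your region and account):

```yaml
    AWS_SQS_PREFIX: https://sqs.us-east-1.amazonaws.com/{ACCOUNT_ID}
    AWS_SQS_QUEUE: {QUEUE_NAME}
```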

Enabling HTTPS

Note: This step is optional and requires that you are setting up a custom domain for your EB environment. I’m not going to explain here how to set up a custom domain but it’s a fairly simple process of setting up a CNAME to your EB web environment URL.

To get HTTPS working on the load balancer, first you need to generate a certificate using the AWS Certificate Manager. I’m not going to go through the steps of generating a certificate here but again it’s pretty straightforward.

Once your certificate has been generated, head back to your web environment in Elastic Beanstalk and under Configuration > Load Balancing update the settings to set the “Secure listener port” to 443 and select your new certificate as the “SSL certificate ID”.


Hopefully at this point you should now have a Laravel app running on Elastic Beanstalk with queue jobs in SQS, a MySQL database in RDS and a Redis cache in ElastiCache. Feel free to start tweaking the setup and config files to meet your needs. There is also plenty of information in the Elastic Beanstalk docs on potential next steps such as monitoring your environment and integrating with other AWS services.

I hope you’ve enjoyed this short series on scaling Laravel using Elastic Beanstalk. Do you have any hints or tips for using Elastic Beanstalk? Got any questions about anything we’ve touched on in this series? Feel free to ask in the comments.

About the Author

Gilbert Pellegrom

Gilbert loves to build software. From jQuery scripts to WordPress plugins to full blown SaaS apps, Gilbert has been creating elegant software his whole career. Probably most famous for creating the Nivo Slider.

  • Thanks. Found it a very interesting read.
    Would love to experiment with these things too, but I’m scared I will not be able to keep up with all the extra security-stuff (settings, apps and updates) needed for a decent production environment.
    How do you guys deal with that kind of stuff?

  • mike503

    I’d really like to see the following things covered or expanded upon.

    First, deploying an app to EB but enabling the local filesystem is counterproductive. The local “storage” directory is only useful for single-request scratch files or temporary logs until they get pushed to S3 or CloudWatch or another aggregator. The filesystem isn’t something to rely upon at all in EB, and especially if you’re operating on more than one node. Even the same node isn’t guaranteed to run forever (third comment below is related) – EB is immutable infrastructure.

    Second – how do you suggest managing cron jobs in EB?

    Third – commands such as “php artisan down” don’t actually work unless you’re on a single node EB app, and due to the fact that the filesystem will not persist, it will not maintain that “setting” since it’s just a local file touch. I had expected Laravel to manage this better (like using a feature flag) but it doesn’t. Maybe by leaving this comment, some exposure will be received on these leftover non-cloud-friendly assumptions that were made.

  • Hey mate. Nice tutorial. Just one thing: it’s very important to run `php artisan config:cache` on production, since it improves the app performance.

    I use this script to run it post deployment:

    Cheers 🙂

  • jrdn

    Great read, really helped my understanding of deployment on AWS Beanstalk. Thanks!

  • GCeng

    Thank you for this great tutorial.

    Just wondering how you handle the deployment process? Is there any direct channel that the AWS can auto pull the latest build from Bitbucket? Do I need to use their code commit service also?


  • prola

Sorry but why are you giving 777 permissions to the storage folder? This sets a bad precedent and allows hackers to execute any file in that directory.

    chmod -R 777 /var/app/current/storage should be
    chmod -R 755 /var/app/current/storage

  • Ido

Thanks for the tutorial! It is very useful!

Can you share the best place to run optimize commands like config:cache, route:cache and php artisan optimize?

  • Alex Carstens

    Hello, first of all thanks a lot for this tutorial, I found it to be great.

    I was wondering if you could please point me out in the right direction concerning how to get the worker to make the post requests to the /worker/queue, I read everything in the dusterio blog and github repo but I can’t figure it out. I’m working on a lumen project. I can dispatch the jobs to the queue, but when AWS makes the post request to my /queue/worker I get a 404 error. Any help or reference would be great.

    Thanks again and have a nice day.