Laravel on AWS: a reference architecture

Written by getlionel | Published 2017/12/05

A guide to networking, security, autoscaling and high-availability

It’s not an easy task to set up a durable architecture for your web application. And if you try to build it as you go, you’ll soon get tired of clicking around the AWS console. What if you had one go-to architecture and repeatable process for all your projects, while ensuring maximum security, performance and availability? Here is how you should deploy your Laravel application on AWS.

How we will enforce security:

- Create VPC subnets to deploy our application into. A VPC is your own virtual network within AWS and lets you design private subnets where instances can’t be accessed directly from outside the VPC. This is where we will deploy our web and database instances.
- Use temporary bastions (also called jump boxes), deployed in our public subnets only when we need to connect to the web and database instances, reducing the attack surface.
- Enforce firewall rules by whitelisting which servers can talk to each other, using VPC security groups (SGs). SGs are default-deny stateful firewalls applied at the instance level.
- Simplify secret management by avoiding passwords where possible and instead assigning IAM roles to control access to our resources. Using IAM roles for EC2 removes the need to store AWS credentials in a configuration file. Roles use temporary security tokens under the hood, which AWS rotates automatically so we don’t have to worry about updating passwords.

How we will enforce high availability:

- Span our application instances across Availability Zones (AZs below). An AZ is one or more data centres within a region, designed to be isolated from failures in other AZs. By placing resources in separate AZs, organisations can protect their application from a service disruption impacting a single location.
- Serve our application from an Elastic Load Balancer. ELB is a highly available (distributed) service that spreads traffic across a group of EC2 instances in one or more AZs. ELB supports health checks to ensure traffic is not routed to unhealthy or failing instances.
- Host our application on ECS, describing through ECS services what minimum number of healthy application containers should be running at any given time. ECS services start new containers if one ever crashes.
- Distribute our database as a cluster across multiple AZs. RDS lets you place a secondary copy of your database in another AZ for disaster recovery purposes. You are assigned a database endpoint in the form of a DNS name that AWS takes responsibility for resolving to a specific IP address, and RDS fails over to the standby instance without user intervention. Preferably we will use Amazon Aurora, which maintains a read replica of our database in a separate AZ and which Amazon promotes to primary should our main instance (or its AZ) fail.
- Finally, rely on as many distributed services as possible to delegate failure management to AWS: services like S3, SQS, ELB/ALB, ECR and CloudWatch are designed for maximum resiliency without us having to care for the instances they run on.

Laravel, made highly available with almost a one-click deploy!

How we will build ourselves a repeatable process: we will be deploying an empty Laravel application on a fresh domain name using Docker, CloudFormation and the AWS CLI.

CloudFormation defines a templating language that can be used to describe all the AWS resources necessary for a workload. Templates are submitted to CloudFormation and the service provisions and configures those resources in the appropriate order.

Docker container images are stand-alone, executable packages of a piece of software that include everything needed to run it.

With the AWS CLI, you can control all AWS services from the command line and automate them through scripts.

By combining all three, both our infrastructure and our application configuration can be written as code and, as such, be versioned, branched and documented.

This is the procedure I use to deploy my clients’ Laravel applications on AWS. I hope this can be helpful to deploy yours. If your use case is more complex, I provide ongoing support packages ranging from mentoring your developers up to hands-on building your application on AWS. Ping me at hi@getlionel.com

Let’s do it step by step.

**1. Set up your AWS credentials**

Start by authenticating your command line: download the API key and secret for a new user in the IAM section of your AWS console. This user will need permissions to create resources for all the services we use below. Then follow the prompts from the aws configure command.
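If the AWS CLI is installed, the session looks something like this; the key, region and output format below are placeholders:

```bash
# Configure the default CLI profile with the new IAM user's credentials
aws configure
# AWS Access Key ID [None]: AKIA................
# AWS Secret Access Key [None]: ....................
# Default region name [None]: eu-west-1
# Default output format [None]: json
```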

**2. Order SSL certificates**

We need two certificates: one for the web application itself and another for our custom domain on CloudFront. The certificate for your web application needs to be created in the AWS region you deploy the application into, whereas CloudFront will only accept certificates generated in region us-east-1.

AWS SSL/TLS certificates are free, automatically provisioned and renewed, even if you did not buy your domain in Route53. They seamlessly integrate with AWS load balancers, CloudFront distributions and API Gateway endpoints, so you can just set them and forget them.
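A sketch of the two requests from the CLI, assuming DNS validation and placeholder domain names:

```bash
# Certificate for the application, in the region we deploy into
aws acm request-certificate --domain-name yourdomain.com \
    --validation-method DNS --region eu-west-1

# Certificate for the CloudFront custom domain, always in us-east-1
aws acm request-certificate --domain-name files.yourdomain.com \
    --validation-method DNS --region us-east-1
```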

**3. Create a key pair to be used by your EC2 instances**

It is recommended to create a new SSH key pair for all EC2 instances of this new project, still using the CLI:
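For example, where the key pair name is an assumption:

```bash
# Create the key pair and save the private key locally, read-only
aws ec2 create-key-pair --key-name laravelaws \
    --query 'KeyMaterial' --output text > laravelaws.pem
chmod 400 laravelaws.pem
```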

Remember that AWS won’t store SSH keys for you and you are responsible for storing and sharing them securely.

**4. Launch our CloudFormation stacks**

Here comes the infrastructure-as-code! Our whole deployment will be described in one master YAML template, itself referencing nested stack YAML templates to make it more readable and reusable.

This is the directory structure of our templates:

```
├── master.yaml                # the root template
└── infrastructure
    ├── vpc.yaml               # our VPC and security groups
    ├── storage.yaml           # our database cluster and S3 bucket
    ├── web.yaml               # our ECS cluster
    └── services.yaml          # our ECS Task Definitions & Services
```

And the complete code can be downloaded from GitHub here:

li0nel/laravelaws - A reference architecture to deploy Laravel on a highly available ECS cluster using CloudFormation (github.com)

The vpc.yaml template defines our VPC subnets and route tables:
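The full template is in the repo; below is a trimmed sketch of one public and one private subnet, with illustrative CIDRs and logical names:

```yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16

  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [0, !GetAZs '']
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true   # instances here get public IPs

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [0, !GetAZs '']
      CidrBlock: 10.0.10.0/24
      MapPublicIpOnLaunch: false  # web and database instances live here
```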

This is quite verbose and is everything it takes to set up public and private subnets spanning two AZs. You can see why you wouldn’t want to implement this in the AWS console!

We also need three SGs. The first one secures our EC2 instances and only allows inbound traffic coming from the load balancer, plus SSH inbound traffic (remember our instances will be in a private subnet and won’t be able to receive traffic from the internet anyway).

The load balancer’s SG allows any traffic from the internet (while only listening on HTTP and HTTPS).

Finally, the database SG only allows ingress traffic on the MySQL port coming from our EC2 instances, and nothing from the internet. Our database will also be hosted inside our private subnets so it can’t receive any traffic from outside the VPC.
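To illustrate the pattern, here is roughly what the database SG looks like; the logical names are assumptions, the key point being the SourceSecurityGroupId reference instead of an IP range:

```yaml
DatabaseSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    VpcId: !Ref VPC
    GroupDescription: Access to the database from the ECS hosts only
    SecurityGroupIngress:
      # MySQL/Aurora port, only from instances carrying the ECS hosts SG
      - IpProtocol: tcp
        FromPort: 3306
        ToPort: 3306
        SourceSecurityGroupId: !Ref ECSHostSecurityGroup
```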

Let’s now launch our storage.yaml stack, which declares our database cluster plus one public-read S3 bucket:
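A condensed sketch of what this stack declares; the instance class and logical names are illustrative:

```yaml
DatabaseCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora
    MasterUsername: !Ref DatabaseUsername
    MasterUserPassword: !Ref DatabasePassword
    DBSubnetGroupName: !Ref DatabaseSubnetGroup   # our private subnets
    VpcSecurityGroupIds:
      - !Ref DatabaseSecurityGroup

DatabaseInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: aurora
    DBClusterIdentifier: !Ref DatabaseCluster
    DBInstanceClass: db.t2.small

FilesBucket:
  Type: AWS::S3::Bucket
  Properties:
    AccessControl: PublicRead
```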

The web.yaml stack is composed of one ECS cluster and a Launch Configuration for our instances. The LC defines the bootstrap code to execute on each new instance at launch; this is called the User Data. We use a third-party Docker credential helper here that authenticates the Docker client to our ECR registry by turning the instance’s IAM role into security tokens.
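A simplified sketch of the relevant part of the Launch Configuration, with parameter names as assumptions and the credential helper setup omitted:

```yaml
ECSLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: !Ref ECSAMI              # the ECS-optimised Amazon Linux AMI
    InstanceType: !Ref ECSInstanceType
    KeyName: laravelaws
    SecurityGroups:
      - !Ref ECSHostSecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        # register this instance with our ECS cluster at boot
        echo ECS_CLUSTER=${ECSCluster} >> /etc/ecs/ecs.config
```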

In more complex setups, we could have our freshly created load balancer register itself with Route53 so that the service is always available at the same DNS address. This design pattern is called service discovery and is not possible out of the box in CloudFormation. Instead, we will manually point our domain name to our load balancer in Route53 in step 7 below.

In the meantime, our load balancer responds with an HTTP 503 error since it can’t find a single healthy instance returning a correct HTTP status code in our cluster pool. Of course, this will change as soon as we deploy our application in our cluster.

Our load balancer responding but with no healthy container instances behind it

5. Build and push your Laravel Docker image

In the previous step, we created one ECR registry to store both the Docker image of our Laravel application and that of our Nginx server. ECRs are standard Docker registries which you authenticate to using tokens, which the AWS CLI can generate for us:
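At the time of writing, get-login prints a ready-made docker login command that you can evaluate directly (region is a placeholder):

```bash
# get-login prints a pre-authenticated "docker login" command; eval-ing
# it authenticates the Docker client to the registry for 12 hours
$(aws ecr get-login --no-include-email --region eu-west-1)
```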

Below are the two Dockerfiles we use to build our Docker images:
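The real files are in the GitHub repo; here is a minimal sketch of the Laravel (php-fpm) one, where the base image, extensions and paths are assumptions:

```dockerfile
FROM php:7.1-fpm

# system dependencies and the PHP extension Laravel needs for MySQL
RUN apt-get update && apt-get install -y zip unzip \
    && docker-php-ext-install pdo_mysql

# copy the application and install its Composer dependencies
COPY . /var/www
WORKDIR /var/www
RUN curl -sS https://getcomposer.org/installer | php -- \
        --install-dir=/usr/local/bin --filename=composer \
    && composer install --no-dev --optimize-autoloader

CMD ["php-fpm"]
```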

And the command to build them:
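Something like this, with the account ID and region as placeholders (the Nginx Dockerfile name is an assumption):

```bash
# build, tag and push both images to their ECR repositories
docker build -t <account-id>.dkr.ecr.eu-west-1.amazonaws.com/laravel:latest .
docker build -t <account-id>.dkr.ecr.eu-west-1.amazonaws.com/nginx:latest \
    -f Dockerfile.nginx .
docker push <account-id>.dkr.ecr.eu-west-1.amazonaws.com/laravel:latest
docker push <account-id>.dkr.ecr.eu-west-1.amazonaws.com/nginx:latest
```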

Finally, we launch our web service with ECS. At the core level, task definitions describe which Docker images should be used to create containers, how containers should be linked together and which environment variables to run them with. At a higher level, an ECS service maintains a specified number of instances of a task definition simultaneously in an ECS cluster. The cluster is the pool of EC2 instances, i.e. the infrastructure on which the tasks are hosted.
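A condensed sketch of what our services.yaml might declare; names, sizes and counts are purely illustrative, and the Nginx container and load-balancer wiring are omitted:

```yaml
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: laravel
        Image: !Ref LaravelImageUrl
        Memory: 256
        Essential: true

Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref ECSCluster
    DesiredCount: 2                    # keep two copies running at all times
    TaskDefinition: !Ref TaskDefinition
```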

It will take a few seconds for our containers to be considered healthy by the ELB, at which point it starts directing traffic to them. This is what we see then:

At least this is a Laravel page, though displaying the default HTTP 500 error message. By checking the Laravel logs, which are streamed to CloudWatch, we see that we’re missing the session table in the database. So how can we now connect to one of our instances in the private subnets, across the internet, to run our database migrations?

6. Launch a bastion & run database migrations

A bastion (also called jump box) is a temporary EC2 instance that we place in a public subnet of our VPC. It enables us to SSH into it from outside the VPC and, from there, still access our instances (including database instances) in the private subnets. When creating the bastion, make sure to associate to it the SG allowing access to the database.
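One way to run the migrations, assuming SSH agent forwarding so the private key never leaves your machine; the IPs and container ID are placeholders:

```bash
# load the key into your agent so -A can forward it through the bastion
ssh-add laravelaws.pem
ssh -A ec2-user@<bastion-public-ip>

# then, from the bastion, hop onto an ECS host in the private subnet
ssh ec2-user@<ecs-host-private-ip>

# find the Laravel container and run the migrations inside it
docker ps
docker exec -it <container-id> php artisan migrate --force
```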

The bastion can also host an SSH tunnel between our machine and our public subnet, so we can connect a local mysql/pgsql client to our remote database. Below is an example for PostgreSQL:
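The endpoint, user and key names below are placeholders:

```bash
# forward local port 5432 to the RDS endpoint through the bastion
ssh -i laravelaws.pem -N -L 5432:<rds-cluster-endpoint>:5432 \
    ec2-user@<bastion-public-ip>

# in another terminal, connect as if the database were local
psql -h 127.0.0.1 -p 5432 -U <username> <dbname>
```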

Back to our database migrations that we just ran. Here’s how it looks now when connecting to the load balancer:

Laravel served through our load balancer URL

Yay! Our application is now served through our load balancer and our EC2 and database instances are running from the safety of a private subnet. The next step is to point our domain name to our load balancer.

**7. Migrate DNS service to AWS Route53**

If you have bought your domain name outside of AWS, you usually don’t need to migrate either the registration or the DNS service to your AWS account. There is an edge case though, if you want your root domain (also known as APEX) to point to your load balancer: this would need a CNAME record, which is not allowed for APEXs, but AWS Route53 offers a special type of ALIAS record that lets you do just that.

First we will migrate your DNS service to AWS:
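The output of create-hosted-zone lists the four Route53 name servers to copy over to your registrar; the domain is a placeholder:

```bash
# create a hosted zone for the domain; the caller reference just needs
# to be a unique string, a timestamp works fine
aws route53 create-hosted-zone --name yourdomain.com \
    --caller-reference $(date +%s)
```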

Once the DNS service is handled by Route53, we can create an ALIAS record pointing to our ELB URL.
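A sketch of the upsert, where both zone IDs and the ELB DNS name are placeholders; note the HostedZoneId inside AliasTarget is the ELB’s canonical zone ID, not your own zone’s:

```bash
aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "yourdomain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<elb-canonical-zone-id>",
          "DNSName": "<your-elb-dns-name>",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```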

All done!

Domain name pointing to the load balancer, SSL certificate working

You are potentially done at this point. You can also improve your stack and deployment systems by following the steps below.

8. Speed up your application by using CloudFront

Add a CloudFront distribution in your CloudFormation template and update your stack:
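A trimmed sketch of the distribution resource; the OAIId and certificate parameters, bucket name and alias are assumptions:

```yaml
Distribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      Aliases:
        - files.yourdomain.com
      ViewerCertificate:
        AcmCertificateArn: !Ref CloudFrontCertificateArn  # the us-east-1 cert
        SslSupportMethod: sni-only
      Origins:
        - Id: s3-files
          DomainName: !GetAtt FilesBucket.DomainName
          S3OriginConfig:
            OriginAccessIdentity: !Sub origin-access-identity/cloudfront/${OAIId}
      DefaultCacheBehavior:
        TargetOriginId: s3-files
        ViewerProtocolPolicy: redirect-to-https
        ForwardedValues:
          QueryString: false
```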

You will need to create beforehand a CloudFront Origin Access Identity, which is a special CloudFront user who will be able to query objects in your S3 bucket:
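From the CLI, for example; the caller reference and comment are arbitrary strings:

```bash
aws cloudfront create-cloud-front-origin-access-identity \
    --cloud-front-origin-access-identity-config \
    CallerReference=laravelaws,Comment=laravelaws
```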

Create another ALIAS record to point files.yourdomain.com to your CF distribution, just like we did for the load balancer in step 7.

Add a sub_filter directive to your Nginx configuration to rewrite all URLs pointing at your S3 bucket into links to your CF distribution instead:
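A sketch, assuming your Nginx build includes the sub module and with placeholder hostnames:

```nginx
# inside the server/location block serving the application:
# rewrite S3 URLs in HTML responses to the CloudFront domain instead
sub_filter 'yourbucket.s3.amazonaws.com' 'files.yourdomain.com';
sub_filter_once off;
```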

**9. (Optional) Publish your Laravel workers and crons**

Well done! Our Laravel application is now highly available in the cloud. This step shows how we can reuse the exact same Laravel Docker image to deploy our scheduled tasks and workers. They will run in their own containers and be managed by another ECS service, so we can scale them independently of the php-fpm containers. We also make sure we have only a single instance of cron running, even if we have multiple front-end containers.

For the worker jobs, we create an SQS queue using CloudFormation, for the front-end to dispatch jobs to our workers in the background:
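A minimal queue resource might look like this; the name and timeout are assumptions:

```yaml
DefaultQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: laravel-default
    VisibilityTimeout: 120   # should exceed your longest job runtime
```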

Finally we create two more task definitions in CloudFormation, starting from the same Laravel Docker image and the same environment variables, but overriding the Docker CMD (i.e. the command executed by Docker when the container starts):
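For instance, the worker task definition could override CMD to run Laravel’s queue listener; logical names are assumptions:

```yaml
# same image and env vars as the web task definition, different CMD
WorkerTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Name: laravel-worker
        Image: !Ref LaravelImageUrl
        Memory: 256
        Essential: true
        Command:
          - php
          - artisan
          - queue:work
          - sqs
```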

The crontab file we use to call the artisan scheduler must first load the container’s environment variables into the cron session; otherwise, Laravel won’t see your container’s env vars when invoked from cron.

That’s it! We now have in our cluster a mix of Laravel front-end containers (php-fpm with Nginx as a reverse proxy), Laravel workers and one cron.

**10. (Optional) Add an ElasticSearch domain**

Most web applications will need a search engine like ElasticSearch at some point. This is how you can create a managed ES cluster with CloudFormation:
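A minimal sketch; the version, instance type and sizes are illustrative:

```yaml
SearchDomain:
  Type: AWS::Elasticsearch::Domain
  Properties:
    ElasticsearchVersion: '5.5'
    ElasticsearchClusterConfig:
      InstanceType: t2.small.elasticsearch
      InstanceCount: 2
      ZoneAwarenessEnabled: true   # spread the nodes across two AZs
    EBSOptions:
      EBSEnabled: true
      VolumeSize: 10
```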

**11. (Optional) High availability for the storage tier**

As we discussed previously, we only have one database instance and no read replica in a separate AZ. You can add a replica in CloudFormation with the below template:
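With Aurora, a replica is simply a second DBInstance attached to the same cluster; a sketch reusing the logical names from the storage stack above:

```yaml
# a second instance in the cluster: Aurora automatically promotes it
# to primary if the main instance (or its AZ) fails
DatabaseReplica:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: aurora
    DBClusterIdentifier: !Ref DatabaseCluster
    DBInstanceClass: db.t2.small
```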

Note that Aurora PostgreSQL only supports instances starting at the db.r4.large size, whereas Aurora MySQL starts at db.t2.small instances.

12. CloudWatch alarms

Below we set up CPU, memory and replication alarms for our database:
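One representative alarm, assuming an SNS topic (AlertsTopic) subscribed to your email; the threshold and periods are illustrative:

```yaml
DatabaseCPUAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: AWS/RDS
    MetricName: CPUUtilization
    Dimensions:
      - Name: DBClusterIdentifier
        Value: !Ref DatabaseCluster
    Statistic: Average
    Period: 300
    EvaluationPeriods: 2
    Threshold: 80
    ComparisonOperator: GreaterThanThreshold
    AlarmActions:
      - !Ref AlertsTopic   # an SNS topic subscribed to your email
```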

And we do the same for the ECS instances, this time using the AWS/ECS namespace and the cluster’s CPU and memory reservation metrics.

13. (Optional) Updating your stack manually — vertical scaling or manual horizontal scaling

To create your CloudFormation stack the first time, use the below command:
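Something like the below, assuming the nested templates have already been uploaded to S3; the stack name and parameter values are examples:

```bash
aws cloudformation create-stack \
    --stack-name laravelaws \
    --template-body file://master.yaml \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameters ParameterKey=ECSInstanceType,ParameterValue=t2.small \
                 ParameterKey=ECSDesiredCount,ParameterValue=2
```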

If you later want to modify the number or size of the instances in your cluster, update the ECSInstanceType and ECSDesiredCount parameters in your command line and call the update-stack command instead. CloudFormation will decommission your previous instances and launch the new ones without further intervention needed from you.
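For example, to move to four t2.medium instances, keeping the parameter names from the create-stack example above:

```bash
aws cloudformation update-stack \
    --stack-name laravelaws \
    --template-body file://master.yaml \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameters ParameterKey=ECSInstanceType,ParameterValue=t2.medium \
                 ParameterKey=ECSDesiredCount,ParameterValue=4
```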

14. (Optional) Auto scaling

Here we will use a combination of CloudWatch alarms, ScalableTargets and ScalingPolicies to trigger scaling of both our ECS cluster size and the desired count of containers in our ECS services. Scaling will work both ways, so our infrastructure will typically be as light as possible at night and scale up for peak times!

Coming soon
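In the meantime, here is a minimal sketch of the service-scaling half, using a target-tracking policy; the ECSCluster, ServiceName and AutoScalingRole references are assumptions:

```yaml
ScalableTarget:
  Type: AWS::ApplicationAutoScaling::ScalableTarget
  Properties:
    ServiceNamespace: ecs
    ScalableDimension: ecs:service:DesiredCount
    ResourceId: !Sub service/${ECSCluster}/${ServiceName}
    MinCapacity: 2
    MaxCapacity: 10
    RoleARN: !GetAtt AutoScalingRole.Arn

ScalingPolicy:
  Type: AWS::ApplicationAutoScaling::ScalingPolicy
  Properties:
    PolicyName: keep-cpu-at-50
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref ScalableTarget
    TargetTrackingScalingPolicyConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ECSServiceAverageCPUUtilization
      TargetValue: 50.0
```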

15. (Optional) Set up Continuous Deployment with CodePipeline

This is where we’ll automate the building of our images from our GitHub repository. Once images are built and tested (using built-in Laravel unit and integration tests), they will be deployed to production without further clicking. Containers will be replaced in sequence using a deployment pattern called Blue-Green deployment, so we get absolutely no downtime.

I’ve written about how to setup CodePipeline for Laravel here!

16. (Optional) Set up SES and a mail server

If you’ve bought your domain name from Route53 instead of another domain name registrar, you don’t get a mail service, i.e. you can’t receive emails on your new domain name. AWS has no other solution for you than hosting a mail server on an EC2 instance and pointing your MX records at it, or setting up a custom Lambda function to redirect your incoming emails to Gmail, for example.

Coming soon

17. Cost containment

If you are running this architecture at scale, there are a couple of ways to contain your AWS bill. First, you could point your application at the Aurora read replicas for read-only queries, offloading your primary instance and avoiding scaling it vertically too much.
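Laravel supports this natively via read/write connection splitting in config/database.php; the hosts below are placeholders for your Aurora reader and writer endpoints:

```php
// config/database.php: send SELECTs to the reader endpoint and
// everything else to the cluster (writer) endpoint
'mysql' => [
    'driver' => 'mysql',
    'read' => [
        'host' => env('DB_READ_HOST'),   // e.g. the cluster reader endpoint
    ],
    'write' => [
        'host' => env('DB_HOST'),        // the cluster writer endpoint
    ],
    'sticky' => true,  // reuse the write connection after a write in a request
    'database' => env('DB_DATABASE', 'laravel'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),
],
```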

Then you could commit to EC2 Reserved instances and pay for some of your instances cost upfront. Doing so can reduce your EC2 bill by as much as 75%. If your traffic fluctuates a lot throughout the day, you could have reserved instances running continuously and scale up with On-Demand instances during peak times.

Finally, a more sophisticated approach would be to scale using EC2 Spot instances, but this is only recommended for your background workload as Spot instances can be terminated by AWS at short notice.

18. (Optional) Deleting your stack and freeing resources

Once you’re done experimenting, you can wind down all the resources created through CloudFormation with a single command. That way you can be sure you did not forget an instance or NAT gateway somewhere, silently adding to your AWS bill.
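Assuming the stack name used earlier:

```bash
# deletes every resource created by the stack, nested stacks included
aws cloudformation delete-stack --stack-name laravelaws
```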

I hope this was helpful and gets you to adopt infrastructure-as-code. If it did help, please comment, clap or share!

Lionel is Chief Technology Officer of London-based startup Wi5 and author of the Future-Proof Engineering Culture course. You can reach out to him on https://getlionel.com

