Terraform: Building out an Application Environment in AWS

Today we're going to use Terraform to build out a scalable, fault-tolerant infrastructure for our application.

Most of my experience is in CloudFormation and Ansible, so I figured I’d give Terraform a try as I have heard good things.

The environment we'll build looks like the diagram below, and has the following features:

- A VPC that creates an isolated environment
- An Internet Gateway to connect our VPC to the web
- An internet-facing Application Load Balancer to access our web servers
- Public subnets for the NAT GWs and Bastion ASG
- Private subnets for the Web Server ASGs
- NAT GWs so web servers in the private subnets can access the web
- ASGs running EC2 instances configured with nginx
- An ASG running an EC2 instance configured with Ubuntu (as our Bastion)

The end goal is to have a load balancer that we can hit in our browser and successfully load a web page served up by an EC2 instance running nginx.

Installing Terraform

My Dockerfile

I only wanted to mess with installation and configuration once, so I made a small Dockerfile that has everything I need to run terraform.

Yes, it’s fatter than it needs to be, but it works.

Onward to the stuff that matters.

Note below that I'm going to be constructing all of my terraform files in a local templates directory and adding them into the /usr/local/bin/templates directory with the Docker command ADD templates /usr/local/bin/templates.
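A minimal sketch of such a Dockerfile, assuming an Ubuntu base and a then-current Terraform release (the version pin and package list are my choices, not the exact file):

# Sketch of a Dockerfile for running terraform (base image and versions are assumptions)
FROM ubuntu:18.04

# curl/unzip to fetch terraform; awscli and vim to configure credentials inside the container
RUN apt-get update && apt-get install -y curl unzip vim awscli

# Install a 0.11-era terraform release (pin whatever version you prefer)
RUN curl -fsSL -o /tmp/terraform.zip \
      https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip \
 && unzip /tmp/terraform.zip -d /usr/local/bin \
 && rm /tmp/terraform.zip

# Copy the local templates directory into the image
ADD templates /usr/local/bin/templates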

My initial filesI’m going to start with 2 files inside of the templates folder.

The first, full.tf, is the template to construct AWS resources. The second will be a simple variables.tf file that defines variables to use in full.tf. This will allow us to make reusable templates and minimize duplicate code.

Constructing The AWS Environment

The Provider & Region

Terraform supports multiple clouds, so we need to specify that this environment is AWS, as well as the AWS Region in which to create these resources.

Let’s also make region a variable.

Here are the first entries for full.tf and variables.tf.
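A sketch of those entries, written in the interpolation-style HCL that was current when this post was written (the us-west-2 default comes from later in the post):

full.tf:

provider "aws" {
  region = "${var.region}"
}

variables.tf:

variable "region" {
  default = "us-west-2"
}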

The VPC

Next is to construct a VPC — what Amazon defines as a "logically isolated section of the AWS Cloud".

Here are the additions to full.tf and variables.tf.
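Something like the following, where the resource label terra_vpc and the 10.0.0.0/16 default are my guesses rather than values from the post:

full.tf:

resource "aws_vpc" "terra_vpc" {
  cidr_block = "${var.vpc_cidr}"
}

variables.tf:

variable "vpc_cidr" {
  default = "10.0.0.0/16"  # assumed default; override at creation if you like
}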

Public, Ingress Subnets

Next we'll create our ingress, or public, subnets.

These subnets are used for our NAT Gateways and Bastions.

I took the approach of defining 1 subnet per AZ — which gives me n blocks of subnet code for the n subnets I want to define. There are better ways of doing this with terraform, but for this simple demo, 2 blocks of code will suffice.

I also chose to parameterize the CIDR blocks for each subnet — the same has been done for the private subnets created later. I've put in default values, but these can simply be overridden at creation for greater flexibility.

Here are the additions to variables.tf and full.tf.
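Sketched with one subnet in each of two AZs (the labels, CIDR defaults, and AZ suffixes are mine):

variables.tf:

variable "ingress_subnet_cidr_az1" {
  default = "10.0.1.0/24"
}

variable "ingress_subnet_cidr_az2" {
  default = "10.0.2.0/24"
}

full.tf:

resource "aws_subnet" "ingress_az1" {
  vpc_id                  = "${aws_vpc.terra_vpc.id}"
  cidr_block              = "${var.ingress_subnet_cidr_az1}"
  availability_zone       = "${var.region}a"
  map_public_ip_on_launch = true  # public subnet
}

resource "aws_subnet" "ingress_az2" {
  vpc_id                  = "${aws_vpc.terra_vpc.id}"
  cidr_block              = "${var.ingress_subnet_cidr_az2}"
  availability_zone       = "${var.region}b"
  map_public_ip_on_launch = true
}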

Private, Web Server Subnets

Just more of the same here, except that we are creating private subnets for our web servers to live in later.

Here are the additions to variables.tf and full.tf.
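The same shape as the ingress subnets, minus the public IP mapping (labels and defaults again assumed):

variables.tf:

variable "web_subnet_cidr_az1" {
  default = "10.0.3.0/24"
}

variable "web_subnet_cidr_az2" {
  default = "10.0.4.0/24"
}

full.tf:

resource "aws_subnet" "web_az1" {
  vpc_id            = "${aws_vpc.terra_vpc.id}"
  cidr_block        = "${var.web_subnet_cidr_az1}"
  availability_zone = "${var.region}a"
}

resource "aws_subnet" "web_az2" {
  vpc_id            = "${aws_vpc.terra_vpc.id}"
  cidr_block        = "${var.web_subnet_cidr_az2}"
  availability_zone = "${var.region}b"
}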

Defining our Security Groups

Considering our architecture, I decided to create 3 security groups (sketched below):

- One for the Application Load Balancer (traffic in on 80, out on 80 — for the health check on the instances) called garrett_terra_alb_sg
- One for the Web Servers (in on port 80 from the public CIDR block for traffic, and on port 22 from the AZ 1 CIDR block where the Bastion lives) called terra_app_server_sg
- One for my Bastion (in on port 22 from my IP, out on 22 to the 2 Web Server subnets for personal access & debugging) called terra_bastion_sg
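Here is roughly what those three groups look like; the group names come from the post, while the CIDR wiring is my reading of the rules above:

resource "aws_security_group" "garrett_terra_alb_sg" {
  name   = "garrett_terra_alb_sg"
  vpc_id = "${aws_vpc.terra_vpc.id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    # out on 80 for the health checks against the web server subnets
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.web_subnet_cidr_az1}", "${var.web_subnet_cidr_az2}"]
  }
}

resource "aws_security_group" "terra_app_server_sg" {
  name   = "terra_app_server_sg"
  vpc_id = "${aws_vpc.terra_vpc.id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.ingress_subnet_cidr_az1}", "${var.ingress_subnet_cidr_az2}"]
  }

  ingress {
    # ssh from the AZ 1 ingress subnet, where the Bastion lives
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.ingress_subnet_cidr_az1}"]
  }

  egress {
    # assumed: outbound to the web via the NAT GWs (e.g. package updates)
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "terra_bastion_sg" {
  name   = "terra_bastion_sg"
  vpc_id = "${aws_vpc.terra_vpc.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.bastion_ssh_from}"]  # my IP, passed in at plan/apply time
  }

  egress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.web_subnet_cidr_az1}", "${var.web_subnet_cidr_az2}"]
  }
}

variable "bastion_ssh_from" {}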

Here I’ve just created 1 ASG in the Ingress AZ 1 subnet.

The following code uses the terra_bastion_sg created in the last step to run a single t2.micro instance in the public subnet.

Some disclaimers: I've also hard-coded the image id to an AMI that exists in the AWS us-west-2 region — so be cautious of that if you end up using a different region than the default us-west-2 defined in the variables.tf file.

The key_name variable is just the name of the AWS Key Pair you want to use to ssh onto the instance — I don’t define a new one, but simply use one of the existing ones I’ve saved.
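A sketch of the pair; the AMI id is a placeholder for the hard-coded Ubuntu image mentioned above, and the resource labels are mine:

resource "aws_launch_configuration" "bastion_lc" {
  name_prefix     = "terra-bastion-"
  image_id        = "ami-xxxxxxxx"  # placeholder: the post pins a specific us-west-2 Ubuntu AMI
  instance_type   = "t2.micro"
  key_name        = "${var.key_name}"
  security_groups = ["${aws_security_group.terra_bastion_sg.id}"]
}

resource "aws_autoscaling_group" "bastion_asg" {
  name                 = "terra-bastion-asg"
  launch_configuration = "${aws_launch_configuration.bastion_lc.name}"
  vpc_zone_identifier  = ["${aws_subnet.ingress_az1.id}"]  # Ingress AZ 1 only
  min_size             = 1
  max_size             = 1
  desired_capacity     = 1
}

variable "key_name" {}  # the name of an existing AWS Key Pair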

Our Target Group, Load Balancer, and Listener

Before defining the Launch Configuration and ASG for the web servers, let's set up our Application Load Balancer, Target Group, and Listener.

First, create a Target Group with HTTP protocol on Port 80.

There is no need to associate it with anything besides the VPC originally created.
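For instance (the label terra_tg is my guess):

resource "aws_lb_target_group" "terra_tg" {
  name     = "terra-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.terra_vpc.id}"
}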

Next, define the actual Application Load Balancer, terra_alb. Tack on the ALB-specific security group created in the last step so that it can accept traffic on port 80 and also reach the necessary private subnets where our web servers live.

Unlike the Target Group, it does need to specify the subnets where traffic is routed — which are our 2 private subnets.

Lastly, define a listener to tie the 2 together.

After assigning the ALB that's already created and specifying the port/protocol, define the default action that forwards traffic to the target group.
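Sketched out (the post places the ALB in the two web server subnets as described above; note that an internet-facing ALB is more commonly attached to public subnets):

resource "aws_lb" "terra_alb" {
  name               = "terra-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = ["${aws_security_group.garrett_terra_alb_sg.id}"]
  subnets            = ["${aws_subnet.web_az1.id}", "${aws_subnet.web_az2.id}"]
}

resource "aws_lb_listener" "terra_listener" {
  load_balancer_arn = "${aws_lb.terra_alb.arn}"
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.terra_tg.arn}"
  }
}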

An Internet Gateway

Not to be forgotten: connect our VPC to the web with an Internet Gateway.
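This one is a single resource (label assumed):

resource "aws_internet_gateway" "terra_igw" {
  vpc_id = "${aws_vpc.terra_vpc.id}"
}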

Route Tables & Associations for Ingress Subnets

Now for some route table work.

Our 2 ingress subnets will have the same route table logic, so we only have to create 1 route table.

Then create 2 separate associations to assign that route table to each ingress subnet.

If you’re unfamiliar with route tables, they’re used to direct network traffic.

For example, below is a route with a CIDR block of 0.0.0.0/0 and a gateway_id of our internet gateway. That translates to "any request going to an IP in CIDR block 0.0.0.0/0 is directed to the internet gateway, i.e. the web".

However, AWS automatically creates a local route for any traffic destined for a target within the VPC — so you will end up seeing 2 routes when this table is created.
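One table, two associations, something like:

resource "aws_route_table" "ingress_rt" {
  vpc_id = "${aws_vpc.terra_vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.terra_igw.id}"
  }
}

resource "aws_route_table_association" "ingress_az1_assoc" {
  subnet_id      = "${aws_subnet.ingress_az1.id}"
  route_table_id = "${aws_route_table.ingress_rt.id}"
}

resource "aws_route_table_association" "ingress_az2_assoc" {
  subnet_id      = "${aws_subnet.ingress_az2.id}"
  route_table_id = "${aws_route_table.ingress_rt.id}"
}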

NAT Gateway’s and their EIP’sHere we create 2 NAT Gateways — one for the use of instances in each of the private subnets.

NAT Gateways serve the purpose of allowing an instance in a private subnet to reach the internet, but keep the internet from creating a connection back to the instance.

Below is the creation of 2 EIPs, which are then associated with the NAT GWs as they are created.
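Note that the NAT Gateways themselves sit in the public ingress subnets (labels are mine):

resource "aws_eip" "nat_eip_az1" {
  vpc = true
}

resource "aws_eip" "nat_eip_az2" {
  vpc = true
}

resource "aws_nat_gateway" "nat_gw_az1" {
  allocation_id = "${aws_eip.nat_eip_az1.id}"
  subnet_id     = "${aws_subnet.ingress_az1.id}"
}

resource "aws_nat_gateway" "nat_gw_az2" {
  allocation_id = "${aws_eip.nat_eip_az2.id}"
  subnet_id     = "${aws_subnet.ingress_az2.id}"
}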

Route Tables & Associations for Web Server Subnets

More route tables! The same logic applies to these routes as mentioned earlier, but you'll notice that they point to the NAT GWs instead of the Internet Gateway.

So as not to be too repetitive, keep in mind what's mentioned in "NAT Gateways and their EIPs" above.

Below you’ll see 2 route tables being created and 2 route table associations.

This is because each AZ's private subnet will have its own NAT Gateway, so we need two unique route tables, as opposed to sharing one route table with the Internet Gateway route like the ingress subnets do.
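In sketch form:

resource "aws_route_table" "web_rt_az1" {
  vpc_id = "${aws_vpc.terra_vpc.id}"

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.nat_gw_az1.id}"
  }
}

resource "aws_route_table" "web_rt_az2" {
  vpc_id = "${aws_vpc.terra_vpc.id}"

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.nat_gw_az2.id}"
  }
}

resource "aws_route_table_association" "web_az1_assoc" {
  subnet_id      = "${aws_subnet.web_az1.id}"
  route_table_id = "${aws_route_table.web_rt_az1.id}"
}

resource "aws_route_table_association" "web_az2_assoc" {
  subnet_id      = "${aws_subnet.web_az2.id}"
  route_table_id = "${aws_route_table.web_rt_az2.id}"
}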

The Launch Config & ASG for our Nginx Web Servers

Finally, here is the meat of the application — the web servers.

I’ve chosen to do this demo with an Nginx Ubuntu AMI I found on the AWS Marketplace.

Below is the first use of a terraform data source in the templates. This use case is incredibly neat, as it will pull the most up-to-date AMI that matches our filters. It doesn't mean our deployed AMI will stay up to date, but when redeploying, it will find the newest AMI for us without our checking manually.

Further down you’ll see a Launch Configuration, which uses the nginx-ubuntu data to configure our AMI.

It also utilizes the terra_app_server_sg that was created earlier in the Security Group creation step.

Lastly, you’ll see 2 autoscaling groups for the 2 AZ’s supported in this environment, both tied back to the single Launch Configuration.

Note this is where the number of instances run is specified, as well as the Target Group for these instances.
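Putting that together (instance counts here are illustrative):

resource "aws_launch_configuration" "web_lc" {
  name_prefix     = "terra-web-"
  image_id        = "${data.aws_ami.nginx-ubuntu.id}"
  instance_type   = "t2.micro"
  key_name        = "${var.key_name}"
  security_groups = ["${aws_security_group.terra_app_server_sg.id}"]
}

resource "aws_autoscaling_group" "web_asg_az1" {
  name                 = "terra-web-asg-az1"
  launch_configuration = "${aws_launch_configuration.web_lc.name}"
  vpc_zone_identifier  = ["${aws_subnet.web_az1.id}"]
  min_size             = 1
  max_size             = 2
  desired_capacity     = 1
  target_group_arns    = ["${aws_lb_target_group.terra_tg.arn}"]
}

resource "aws_autoscaling_group" "web_asg_az2" {
  name                 = "terra-web-asg-az2"
  launch_configuration = "${aws_launch_configuration.web_lc.name}"
  vpc_zone_identifier  = ["${aws_subnet.web_az2.id}"]
  min_size             = 1
  max_size             = 2
  desired_capacity     = 1
  target_group_arns    = ["${aws_lb_target_group.terra_tg.arn}"]
}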

Create It!

The final step is for you to apply this terraform configuration in your own AWS environment!

Save your terraform files and back out of the templates directory until you see the Dockerfile.

Then try building your image:

docker build . -t terraform-example

After you find the image id, run the container:

docker run -it <image>

Once in the container, configure your AWS CLI with your account credentials:

vi ~/.aws/credentials

Then cd over to where your templates live (/usr/local/bin/templates) and run the command to initialize terraform (packages will download based on the provider you specified).

Be sure to specify your Key Pair and the CIDR block you want to allow to SSH to your bastion when you run plan and apply (init itself takes no -var flags):

terraform init

You can view the resources that terraform will create with plan:

terraform plan -var key_name="garrett-terraform" -var bastion_ssh_from="99.50.207.70/32"

Finally, use apply to apply the plan:

terraform apply -var key_name="garrett-terraform" -var bastion_ssh_from="99.50.207.70/32"

And that's it! Your resources will take some time to create, but when it's finished, find your load balancer and test it out to see if it routes to an nginx web server.

Questions/Comments/Improvements?

Leave me a comment.

Let me know what you liked and didn't!
