Infrastructure on AWS, Automated with Terraform, Ansible and GitLab CI

The Ansible provisioner, for instance, would fail if we do not have the /root/.ssh/id_rsa_terraform or /etc/ansible/become_pass files available locally.

Of course, for local usage you need to install the Ansible provisioner as well.

You can find the installation instructions here.
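In short, the installation boils down to placing the provisioner binary where Terraform can find it. A minimal sketch, assuming the Terraform 0.11-era plugin directory layout (see the provisioner's README for the exact steps and download location):

```shell
# Sketch: where Terraform looks for third-party plugin binaries locally
# (assumption: 0.11-era plugin layout).
PLUGIN_DIR="$HOME/.terraform.d/plugins"
mkdir -p "$PLUGIN_DIR"
# Download terraform-provisioner-ansible from the project's GitHub releases
# page, then place it here and make it executable, e.g.:
#   cp terraform-provisioner-ansible "$PLUGIN_DIR/"
#   chmod +x "$PLUGIN_DIR/terraform-provisioner-ansible"
```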

The AWS region is set directly in the provider configuration as mentioned earlier.
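For reference, a minimal provider block with the region set directly looks like this (the region value is illustrative, not necessarily the one used in this setup):

```hcl
# Example provider configuration; the region value is illustrative.
provider "aws" {
  region = "eu-central-1"
}
```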

If you do not want to use AWS credentials as environment variables locally, e.g. because they would collide with another access key already configured, you can run the following command to initialize the Terraform backend with AWS credentials instead of terraform init only, as used in the CI pipeline:

```shell
terraform init -backend-config="access_key=[…]" -backend-config="secret_key=[…]"
```

To ensure we don't store any secrets in the GitLab repository, our .gitignore contains the following:

```
# Terraform local state (including secrets in backend configuration)
.terraform
terraform.tfstate
*.backup

# Ansible retry-files
*.retry
```

Configuring the GitLab CI Pipeline

That brings us to the configuration of the GitLab CI pipeline. After creating the project, we first need to set the required secrets in our project settings:

[Image: The Project's CI/CD Secrets]

You can optionally protect these variables for specific branches and the like.

Please consult the GitLab CI Docs for further information.

The pipeline itself is configured in the .gitlab-ci.yml file. To keep it short at first, the code below shows only the basic pipeline for the Terraform dev environment. If you already know how to configure GitLab CI pipelines, you can jump to the end of the page for the full configuration!

```yaml
image:
  name: rflume/terraform-aws-ansible:latest

stages:
  # dev environment stages
  - validate dev
  - plan dev
  - apply dev

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY

# Create files w/ the required secrets
before_script:
  - echo "$ID_RSA_TERRAFORM" > /root/.ssh/id_rsa_terraform
  - chmod 600 /root/.ssh/id_rsa_terraform
  - echo "$ANSIBLE_VAULT_PASS" > /etc/ansible/vault_password_file
  - echo "$ANSIBLE_BECOME_PASS" > /etc/ansible/become_pass

# Apply Terraform on dev environment
validate:dev:
  stage: validate dev
  script:
    - cd environments/dev
    - terraform init
    - terraform validate

plan:dev:
  stage: plan dev
  script:
    - cd environments/dev
    - terraform init
    - terraform plan -out "planfile_dev"
  artifacts:
    paths:
      - environments/dev/planfile_dev

apply:dev:
  stage: apply dev
  script:
    - cd environments/dev
    - terraform init
    - terraform apply -input=false "planfile_dev"
  dependencies:
    - plan:dev
```

First, we define the Docker image to run the pipeline.

We set it to my custom image for the reasons named earlier.

Next, the stages to be executed by the GitLab runner are first only listed; they are configured in detail afterwards.

Then, we set the environment variables and create the files holding the required secrets from our GitLab CI project variables.

We also set the correct file permissions for the ssh key.

Finally, we do the actual stage configurations.

These are similar for every Terraform environment and/or the Terraform “global” project files.

We always perform three steps:

- terraform validate
- terraform plan, and
- terraform apply

The terraform init command is required to provide the AWS credentials to the Terraform backend.

Terraform will by default check for credentials in the environment variables.

As we have them set in the variables: section of the .gitlab-ci.yml file, Terraform will find these.
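For local runs, the same lookup applies to your shell environment. An example, with placeholder values rather than real credentials:

```shell
# Example only: the environment variables Terraform's AWS provider
# reads by default. The values below are placeholders.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
```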

The terraform plan command creates a plan file.

It is passed as an artifact to the next stage in order to make sure that only the changes shown in the output of the planning stage are actually applied with the terraform apply in the next stage.
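Stripped down to the mechanism itself, passing a file between jobs via artifacts looks like this (job and file names are illustrative, not taken from the pipeline above):

```yaml
# Minimal sketch of handing a file from one job to a later one.
plan:
  stage: plan
  script:
    - echo "example plan" > planfile
  artifacts:
    paths:
      - planfile   # uploaded when the job finishes

apply:
  stage: apply
  script:
    - cat planfile # available again, downloaded from the plan job
  dependencies:
    - plan
```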

At this point, we already have a fully automated pipeline!

Pipeline Improvements

However, we can improve the pipeline to optimize execution times and add some security mechanisms, so let's have a look at the following configuration snippet:

```yaml
  allow_failure: false
  only:
    changes:
      - environments/dev/**/*
      - modules/**/*
```

If we appended this to the stages, we would add the following features: If a single stage fails, the whole pipeline will fail too.

This is achieved by allow_failure: false. GitLab CI also allows one to limit the execution of stages to changes in specific files or directories (the /**/* notation).

In the example above, the stages would only be run if files in the environments/dev directory changed in a commit, or if any of our Terraform modules were added, updated, deleted, etc.

This improves the runtime of the pipeline, because not every stage is executed in every pipeline.

To add a security layer, we updated our apply stages with this:

```yaml
  only:
    refs:
      - master
    changes:
      - environments/dev/**/*
      - modules/**/*
  when: manual
```

The refs: - master statement ensures that changes are only applied when merged into the master branch.

Combined with the enforcement of a certain number of required merge request approvals, we ensure that the changes are reviewed before being applied.

A good thing about this is that changes to the infrastructure can be seen directly in the logs of the planning stage(s), so a review of the Terraform code itself is not required.

The Final Pipeline

Finally, we ended up with a different combination of all these pipeline features for the different stages, which brings us to our final pipeline configuration (without the staging stages, which are equal to dev):

```yaml
image:
  name: rflume/terraform-aws-ansible:latest

stages:
  # 'global' stages
  - validate global
  - plan global
  - apply global
  # Dev env stages
  - validate dev
  - plan dev
  - apply dev
  # [ ... STAGING ... ]
  # Prod env stages
  - validate prod
  - plan prod
  - apply prod

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY

# Create files w/ required secrets (so that they're not stored in the Docker image!)
before_script:
  - echo "$ID_RSA_TERRAFORM" > /root/.ssh/id_rsa_terraform
  - chmod 600 /root/.ssh/id_rsa_terraform
  - echo "$ANSIBLE_VAULT_PASS" > /etc/ansible/vault_password_file
  - echo "$ANSIBLE_BECOME_PASS" > /etc/ansible/become_pass

# Global
# ------
validate:global:
  stage: validate global
  script:
    - cd global
    - terraform init
    - terraform validate
  only:
    changes:
      # no modules are included in 'global', so we do not
      # need '- modules/**/*' here
      - global/**/*

plan:global:
  stage: plan global
  script:
    - cd global
    - terraform init
    - terraform plan -out "planfile_global"
  artifacts:
    paths:
      - global/planfile_global
  only:
    changes:
      - global/**/*

apply:global:
  stage: apply global
  script:
    - cd global
    - terraform init
    - terraform apply -input=false "planfile_global"
  dependencies:
    - plan:global
  when: manual
  allow_failure: false
  only:
    changes:
      - global/**/*

# DEV ENV
# -------
validate:dev:
  stage: validate dev
  script:
    - cd environments/dev
    - terraform init
    - terraform validate
  only:
    changes:
      - environments/dev/**/*
      - modules/**/*

plan:dev:
  stage: plan dev
  script:
    - cd environments/dev
    - terraform init
    - terraform plan -out "planfile_dev"
  artifacts:
    paths:
      - environments/dev/planfile_dev
  only:
    changes:
      - environments/dev/**/*
      - modules/**/*

apply:dev:
  stage: apply dev
  script:
    - cd environments/dev
    - terraform init
    - terraform apply -input=false "planfile_dev"
  dependencies:
    - plan:dev
  allow_failure: false
  only:
    refs:
      - master
    changes:
      - environments/dev/**/*
      - modules/**/*

# [ ... STAGING ... ]

# PROD ENV
# --------
validate:prod:
  stage: validate prod
  script:
    - cd environments/prod
    - terraform init
    - terraform validate
  only:
    changes:
      - environments/prod/**/*
      - modules/**/*

plan:prod:
  stage: plan prod
  script:
    - cd environments/prod
    - terraform init
    - terraform plan -out "planfile_prod"
    - echo "CHANGES WON'T BE APPLIED UNLESS MERGED INTO 'MASTER'!"
  artifacts:
    paths:
      - environments/prod/planfile_prod
  only:
    changes:
      - environments/prod/**/*
      - modules/**/*

apply:prod:
  stage: apply prod
  script:
    - cd environments/prod
    - terraform init
    - terraform apply -input=false "planfile_prod"
  dependencies:
    - plan:prod
  when: manual
  allow_failure: false
  only:
    refs:
      - master
    changes:
      - environments/prod/**/*
      - modules/**/*
```

Security Concerns

By creating the secrets in the Docker container only from within the pipeline, protecting them in the project settings, and using Terraform's file() function to read them only while executing Terraform instead of passing them as variables, secrets are not revealed in the output of the GitLab CI pipeline logs.
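The file() pattern mentioned here can be sketched as follows (syntax assumes Terraform 0.11-style interpolation; the local name is illustrative):

```hcl
# Read the become password from the file created in before_script at
# execution time, instead of passing it in as a Terraform variable.
locals {
  become_pass = "${file("/etc/ansible/become_pass")}"
}
```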

However, the Ansible become_pass (the terraform user’s password on the remote hosts) is read by Terraform and then passed to the provisioner.

It is therefore revealed in plaintext in the pipeline logs, and you should keep that in mind when using the provisioner.

I have created an issue regarding this concern that you can follow to be notified about updates.

Summary

We have created a custom Docker image which is smaller than the official "full"-tagged Terraform image, but extends the "light" image with the Terraform AWS provider and radekg's Ansible provisioner, enabling us to automate our Terraform workflow with GitLab CI pipelines.

Within Terraform, we defined an AWS EC2 instance which is not only created automatically by Terraform, but also provisioned automatically with Ansible.

I hope I was able to provide some useful information to help you improve your Terraform workflow by including the Ansible provisioner and creating automated pipelines in GitLab CI!
