Frontend Deployment Pipelines — The Easy Way

AWS CloudFront — the easiest way to deliver your frontend

If you ever deployed your frontend without deployment pipelines, you know how quickly the work can become tedious.

In this blog entry, I want to show you how easy it actually is to create pipelines for your frontend, so that your code is built automatically, your tests are executed, and the resulting files are pushed to CloudFront.

What technology will be used?

For our pipelines, we will keep things simple, but I'll cover everything you need to know to use the pipelines in a production environment.

If you don't use any of the technologies given below, this blog entry might still be valuable for you, since it should give you a basic understanding of deployment pipelines.

Regarding our technologies, we will use an NPM-based frontend project, in my case Vue.js, but you can of course use Angular, React, or something similar.

Furthermore, we will use Bitbucket in combination with Bitbucket Pipelines.

You get Pipelines out of the box if you are in the Atlassian cloud, and the integration is pretty flawless in my opinion.

While setting up the pipeline we will also look a little bit at Docker, but don't be afraid, we will not create a Docker container for now :).

Lastly, we will let the pipeline push the resources to CloudFront.

This doesn’t sound like a lot and it actually isn’t.

What will the pipeline look like?

As you can see, the pipeline itself is pretty straightforward.

Every time we push to our Bitbucket repository, we want to trigger the pipeline, which will then build and deploy our code to AWS.

The cool thing about the pipeline is that it is extremely small and any technology can be interchanged.

So let’s see how to do this :).

Setting up the Pipelines

I assume you have already created a frontend project of your choice and connected it to Bitbucket.

If so, you can create the bitbucket-pipelines.yml file in the root directory of your project.

image: node:10.15.0

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - apt-get update && apt-get install -y python-dev
            - curl -O https://bootstrap.pypa.io/get-pip.py
            - python get-pip.py
            - pip install awscli
            - npm install
            - npm run build
            - aws s3 sync dist s3://$PROD_S3_BUCKET
            - aws configure set preview.cloudfront true
            - aws cloudfront create-invalidation --distribution-id $PROD_CLOUDFRONT_DISTRIBUTION_ID --paths "/*"

Now let's digest this file line by line :).

image: node:10.15.0

The first line describes the Docker image that you want to use.

In my case, it is the latest version of Node available on Docker Hub, but you can use any version that your environment might depend on.

pipelines:
  branches:
    master:

In the next block, we define what actually happens for each branch.

These pipelines run as soon as your repository is updated by a push.

In our case, the pipeline will run whenever the master branch gets updated.

If you don't want that to happen, you can also define custom pipelines, which will only be run manually.

For more information about custom pipelines, have a look at the Bitbucket Pipelines documentation.
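A minimal sketch of such a custom pipeline, using a hypothetical name like manual-deploy, could look like this in bitbucket-pipelines.yml:

pipelines:
  custom:
    manual-deploy:            # only runs when triggered manually from the Bitbucket UI or API
      - step:
          script:
            - echo "deploying on demand"

Now back to the master pipeline.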

caches:
  - node

The next step defines a cache for node.

The dependencies that are downloaded via npm install will be cached, so that the build time decreases for subsequent builds.

- apt-get update && apt-get install -y python-dev
- curl -O https://bootstrap.pypa.io/get-pip.py
- python get-pip.py
- pip install awscli

These are just utility commands to get the AWS CLI that we need later on.

It might be reasonable to encapsulate these lines in a Docker image, but since Docker isn't in the scope of this blog entry, I will keep them as part of the script.

If you are interested in this topic, leave a comment and I'll show you how to work with Docker :).
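As a sketch of that Docker-based alternative: you could point the image key at a prebuilt image that already contains Node and the AWS CLI, which would let you drop the apt-get and pip lines from the script. The image name below is purely a placeholder you would have to build and publish yourself:

image: my-dockerhub-user/node-awscli:10.15.0   # hypothetical image with Node 10 and the AWS CLI preinstalled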

- npm install
- npm run build

These lines trigger the build process of your project.

I think you know what they do :).
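The intro also mentioned running tests, which the file above doesn't do yet. Assuming your package.json defines a non-interactive test script, you could add it right before the build, for example:

- npm install
- npm test        # assumes a "test" script that runs once and exits (no watch mode)
- npm run build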

And lastly:

- aws s3 sync dist s3://$PROD_S3_BUCKET
- aws cloudfront create-invalidation --distribution-id $PROD_CLOUDFRONT_DISTRIBUTION_ID --paths "/*"

These two lines automatically push your code to S3 and CloudFront.

If you are not familiar with CloudFront and S3 here is a quick summary.

S3 is the object storage service of AWS; you can pretty much think of it as directories on a computer with files in them.

CloudFront is responsible for serving the contents of a given S3 bucket as fast as possible to the end user.

This is why you should serve your frontend from there if you are using AWS.

Of course, this is a little bit oversimplified but again this is out of scope :).

Now let's take a look at each line of the AWS deployment.

aws s3 sync dist s3://$PROD_S3_BUCKET

This line syncs the dist directory with a given S3 bucket.

As you can see, I'm using a neat feature of Bitbucket here: repository variables.

You can define these variables within the settings of your project.

We will look at the different repository variables that are necessary in a couple of seconds.

aws cloudfront create-invalidation --distribution-id $PROD_CLOUDFRONT_DISTRIBUTION_ID --paths "/*"

This line will invalidate our current CloudFront distribution.

But what does this mean? Basically, CloudFront has different "Points of Presence", which are edge servers that serve the files of your frontend.

If you now update your files in S3, this doesn't mean that your CloudFront caches will also be refreshed.

So your S3 files and CloudFront might now differ.

To ensure this doesn't happen, you can invalidate your files.

With the --paths parameter "/*" you say that every file will be invalidated and re-cached by CloudFront.

You can of course change the --paths parameter if you only want certain files to be invalidated.
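For example, a sketch of an invalidation that only refreshes the entry point and the generated bundles (the paths are placeholders for whatever your build produces) could look like this:

aws cloudfront create-invalidation --distribution-id $PROD_CLOUDFRONT_DISTRIBUTION_ID --paths "/index.html" "/js/*"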

So this was actually the most important step of our build process.

The last thing we have to do is to connect everything.

To do this we will at first take a look at the repository variables.

Repository variables

As I've already said, you have the possibility to define repository variables for your deployment pipelines.

You can even hide values if you don't want anybody to see them.

These variables can be accessed like $PROD_S3_BUCKET within your bitbucket-pipelines.yml.

So let's see what variables I’ve defined.
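For this pipeline, the variables are:

- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- PROD_S3_BUCKET (the bucket the build output is synced to)
- PROD_CLOUDFRONT_DISTRIBUTION_ID (the distribution that gets invalidated)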

As you can see I’m already adding the AWS access credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).

These variables are picked up automatically by the AWS CLI.

If you don't already have a user, you can create one via IAM in the AWS console.

Please make sure to set the correct policies so that this user only has access to S3 and CloudFront.

If you are done creating the user you will get both variables.
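A sketch of what such a restricted IAM policy could look like; the bucket name is a placeholder and the CloudFront resource is kept as a wildcard for simplicity:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::my-frontend-bucket", "arn:aws:s3:::my-frontend-bucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "*"
    }
  ]
}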

The PROD variables will be used for these two lines

- aws s3 sync dist s3://$PROD_S3_BUCKET
- aws cloudfront create-invalidation --distribution-id $PROD_CLOUDFRONT_DISTRIBUTION_ID --paths "/*"

so that the S3 bucket actually gets the data and the CloudFront distribution gets invalidated.

And that's it.

You have now created your own frontend pipeline that automatically pushes the newest version of your repository to CloudFront.

If you don't know how to create a CloudFront distribution, here is a little help.

First, you create an S3 bucket.

You give this S3 bucket public read access — there is a menu within the bucket settings where you can set the bucket policy.

You can create these rules by using the policy generator.

The configuration just needs to allow public read access (s3:GetObject) on all objects in the bucket.
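The JSON that the policy generator produces could look roughly like this, with the bucket name as a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-frontend-bucket/*"
    }
  ]
}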

Afterwards, you open CloudFront and create a new distribution.

The Origin Domain Name will be your S3 bucket — you will get a dropdown where you can choose it.

Now you only have to wait for the status to be completed.

What to watch out for

A continuous deployment pipeline is really nice, but you have to keep in mind that you need really good test coverage to be confident that you won't break anything.

I would recommend not allowing anyone to push directly to master and using pull requests instead.

This way you ensure that hopefully only tested code will be included in your production environment :).

Any questions left? Feel free to write a comment :).
