Gitlab Continuous Deployment Pipeline to GKE with Helm

In an earlier post, I shared how to use the Google Cloud Build service from Gitlab Community Edition (CE) using the shell runner.

You can grab the idea from that post on building docker images and storing them on Google Container Registry (GCR).

In this post, we will walk through the pipeline from docker build to deployment on GKE (Google Kubernetes Engine), which applies to both the CE and EE (Enterprise Edition) of Gitlab.

So, the flow would be:

- Create a Kubernetes cluster on GKE
- Configure a service account credential for accessing the cluster and using the Cloud Build service
- Create a Gitlab project and add the gitlab-ci pipeline along with the cloudbuild file and Dockerfile
- Install a Helm chart to ease the deployment
- Get continuous deployment up and running
- Secure the GKE cluster

Let's start by creating the GKE cluster.

Head over to the cluster creation page of your Google Cloud project, choose your favorite zone, add nodes (you may prefer auto-scaling and preemptible nodes if this is for testing), preferably enable VPC-native networking, tune the rest of the settings as needed, and hit create.

Create GKE Cluster

In a few minutes, the cluster will be ready.

Now it's time to create a service account for accessing the GKE resources from Gitlab, as well as for triggering Cloud Build.

Head over to the service account page of IAM (Identity and Access Management) & admin to add a new service account.

Give it a suitable name, attach the required permissions, create a JSON key, and hit done.

Time to go to Gitlab!

Let's create a new project on gitlab.com and add a few files to it: Dockerfile, cloudbuild.yaml, .gitlab-ci.yml, and the project resource files.

Alongside, we will keep the service account credential file in a Gitlab environment variable. As multi-line values are not supported, let's base64-encode the file and store the encoded value in the environment variable, which we will decode whenever we use gcloud resources.

Encode the file:

base64 /path/to/credential.json | tr -d '\n'

Add the encoded value as the variable.

Here is our simple Dockerfile. I am using a multi-stage build, with the artifact running on a distroless base image, which is lightweight and contains only the application and its runtime dependencies:
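The Dockerfile itself was an embedded file in the original post; based on the description (multi-stage, Go, distroless), it would look something like the following sketch. The base image tags are assumptions:

```dockerfile
# Build stage: compile a static Go binary
FROM golang:1.12 AS build
WORKDIR /app
COPY main.go .
RUN CGO_ENABLED=0 go build -o server main.go

# Run stage: distroless image with only the binary and runtime deps
FROM gcr.io/distroless/base
COPY --from=build /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
```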

Language focused docker images, minus the operating system

Next comes cloudbuild.yaml, the Cloud Build configuration file with the tasks to run on the Google Cloud Build service.

We are pushing the docker image to Google Container Registry (GCR).
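The cloudbuild.yaml content isn't reproduced in this text either; a minimal configuration matching the described behavior (build with the Dockerfile, push to GCR, using the BRANCH_NAME substitution passed later from CI) might be:

```yaml
steps:
  # Build the image from the Dockerfile in the repo root
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/gitlab-gke:$BRANCH_NAME', '.']
# Images listed here are pushed to GCR after the build succeeds
images:
  - 'gcr.io/$PROJECT_ID/gitlab-gke:$BRANCH_NAME'
```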

And a simple Go http server, main.go.

We can give the Dockerfile a local test with a build and run:

$ docker build -t gitlab-gke .

$ docker run -d -p 8080:8080 --name gitlab-gke gitlab-gke

Send a few requests to the server from the browser: http://localhost:8080/raju

Response: Hello, raju!

Configuring Gitlab CI

We need two main stages in .gitlab-ci.yml, the de facto file for using the Gitlab pipeline service:

Publish: this stage uses the cloudbuild spec file to run a build in the Cloud Build environment, which uses the Dockerfile to build the image that is then pushed to GCR.

Deploy: this stage deploys the newly built image to the GKE cluster; we configure it to run when there is a new commit on the master branch.

You can create triggers based on your needs: on tag creation, manual triggers, etc.

The following stage publishes the image:

```yaml
publish-image:
  stage: publish
  image: dwdraju/alpine-gcloud
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud container builds submit . --config cloudbuild.yaml --substitutions BRANCH_NAME=$CI_COMMIT_REF_NAME
  only:
    - master
```

Here, we are using dwdraju/alpine-gcloud, a simple Alpine Linux based image that has the Google Cloud SDK and can access gcloud resources once the credential file is added.

If all is good and the code is merged to the master branch, it should trigger a new job and publish a new image to Google Container Registry.

Create Helm Chart

Helm is a package manager for Kubernetes which eases the creation, versioning, and general management of k8s resources. It came under the CNCF umbrella a few months ago.

If Helm is new to you, here is a guide to installing helm with RBAC.

Then, create a new chart for our application:

$ helm create gitlabgke

It adds a few Kubernetes manifest files.

Get the Kubernetes cluster credential for accessing GKE and installing the helm chart:

$ gcloud container clusters get-credentials [cluster-name] --zone [cluster-zone] --project [project-name]

Response:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for gitlab-cluster.

Apply the tiller RBAC config and install helm:

$ kubectl apply -f tiller-rbac-config.yaml
$ helm init --service-account tiller

Response: Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
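The tiller-rbac-config.yaml used above is typically the standard example from the Helm RBAC guide — a tiller service account in kube-system bound to cluster-admin (a sketch, not necessarily this post's exact file):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```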

Now, we need to adjust a few configs in the default helm chart:

- Service type: NodePort
- internalPort: 8080 (or your application's port)
- Add a default route if you don't have a domain name and have to access the service with the IP address of the global load balancer
- Image repository and tag (we are using gcr.io/[project-name]/gitlab-gke:master for now)

Here is the commit for the changes: https://gitlab.com/dwdraju/gitlab-gke/commit/9f02cf83f39be71b62a8ed07589bfc538bc43349

Time to install the Helm chart:

$ helm install --name gitlabgke .
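Those chart tweaks correspond roughly to this values.yaml fragment (key names follow the default `helm create` scaffold of the time; treat it as a sketch, the linked commit has the exact changes):

```yaml
image:
  repository: gcr.io/[project-name]/gitlab-gke
  tag: master
  pullPolicy: Always

service:
  type: NodePort
  internalPort: 8080
```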

In a few minutes, the pod will be running with the health check passing, and a new ingress IP will be available, which can be obtained with kubectl get ing.

You can now access that IP to get a hello :) http://[ingress-ip]/myname

Back to Gitlab CI for Continuous Deployment

We need to add a new deploy stage, since we already have the publish stage which sends the new image to the container registry.

```yaml
deploy-image:
  stage: deploy
  image: dwdraju/gke-kubectl-docker
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl set env deployment/$K8S_DEPLOYMENT CI_COMMIT_SHA=$CI_COMMIT_SHA
    - kubectl set image deployment/$K8S_DEPLOYMENT $K8S_IMAGE=gcr.io/$GCP_PROJECT_ID/$IMAGE_NAME:$CI_COMMIT_REF_NAME
  only:
    - master
```

So, our final gitlab-ci.yml file looks like this:

If everything is set up correctly, we will see the gitlab jobs succeeding.
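Assembled from the publish and deploy snippets above (the original post embedded the full file; this is a reconstruction from those snippets):

```yaml
stages:
  - publish
  - deploy

publish-image:
  stage: publish
  image: dwdraju/alpine-gcloud
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud container builds submit . --config cloudbuild.yaml --substitutions BRANCH_NAME=$CI_COMMIT_REF_NAME
  only:
    - master

deploy-image:
  stage: deploy
  image: dwdraju/gke-kubectl-docker
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl set env deployment/$K8S_DEPLOYMENT CI_COMMIT_SHA=$CI_COMMIT_SHA
    - kubectl set image deployment/$K8S_DEPLOYMENT $K8S_IMAGE=gcr.io/$GCP_PROJECT_ID/$IMAGE_NAME:$CI_COMMIT_REF_NAME
  only:
    - master
```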

We simply changed the image here after setting the environment variable CI_COMMIT_SHA, but we can also use helm to upgrade the release by changing the tag value in the chart's values.yaml file:

$ helm upgrade gitlabgke .

You can try the helm upgrade in the pipeline as well. For that, you can use my helm-docker image; see this github repo for a usage example.
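Such a job could take roughly the following shape. The job name, chart path, and the `image.tag` value key are assumptions, and the gcloud auth steps from the jobs above are omitted for brevity:

```yaml
deploy-helm:
  stage: deploy
  image: dwdraju/helm-docker
  script:
    # authenticate to GCP first, as in the jobs above
    - gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE --project $GCP_PROJECT_ID
    - helm upgrade gitlabgke ./gitlabgke --set image.tag=$CI_COMMIT_REF_NAME
  only:
    - master
```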

Securing Cluster

If you are using self-hosted CE gitlab, enable master authorized networks on the GKE cluster and whitelist the gitlab IP address.
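Assuming the cluster is named gitlab-cluster, this can be done with the following gcloud command; the zone and IP here are placeholders for your own values:

```shell
gcloud container clusters update gitlab-cluster \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32
```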

For giving access to the pods, it's better to create a specific service account for that purpose.

Here is the commit for configuring that.

We are using a distroless base image in this example, which might not fit all cases, but it's better to use a minimal docker image like alpine to reduce the attack surface and minimize the docker image size.

GKE has started offering Istio-enabled clusters, which provide security features with strong identity, powerful policy, transparent TLS encryption, and authentication, authorization and audit (AAA).

Give it a try and you will love Istio.

That's all for now. If you have a better way to make this more robust, feel free to drop a comment.

You can find me on LinkedIn and Twitter.
