Deploy a Kubernetes Cluster on OpenStack using Kubespray

Photo by Albin Berlin from Pexels

Kubernetes has quickly become the open-source standard solution for deployment, scaling and management of container applications.

It offers a high degree of flexibility and versatility.

This flexibility leads to extensive and powerful documentation, which can be overwhelming when you try to find the sections relevant to your own installation. That is one reason why Kubernetes has a steep learning curve.

After planning the cluster comes the installation, which also has its pitfalls.

For this reason, there are deployment tools such as Kubespray that make this work easier.

This story is about the automated deployment of a Kubernetes cluster using Kubespray on an OpenStack cloud (the Open Telekom Cloud).

For the automated deployment of Kubernetes, Kubespray uses Ansible, a tool for provisioning, configuration and application deployment.

Kubespray also provides a library for provisioning resources on different cloud platforms. For this purpose, the Infrastructure as Code tool Terraform is used.

The Kubespray project currently offers Terraform support for the cloud providers AWS, OpenStack and Packet.

This tool, in conjunction with the OpenStack library, is used to provision the infrastructure in this story.

Requirements

First, we take a look at the prerequisites for the deployment. These are divided into the requirements of Kubespray and those of the provider library.

Kubespray requires the following components:

- Python 2.7 (or newer)
- Ansible 2.7 (or newer)
- Jinja 2.9 (or newer)

OpenStack provider library requirements:

- Terraform 0.11 (or newer)

To install Terraform it is necessary to download a suitable package from the HashiCorp website and unpack it.

Then the path to the unpacked binary has to be added to the PATH variable. With the terraform command you can test whether the installation was successful.

Additional information can be found under the following link.
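A minimal sketch of such an installation on a 64-bit Linux machine (the version number and download URL below are examples, not part of the original article; pick the package that matches your system):

# Download a Terraform release package (version and platform are examples)
wget https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip

# Unpack it and put the binary into a directory that is already on the PATH
unzip terraform_0.11.14_linux_amd64.zip
sudo mv terraform /usr/local/bin/

# Verify the installation
terraform version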

Depending on the operating system, Ansible can be installed with only a few commands.

Please refer to the following Ansible documentation.

In this story I use Ubuntu, where the installation of Ansible is done as follows.

sudo apt update
sudo apt install ansible

Afterwards, the dependencies of Kubespray have to be installed.

This is done with the following commands; the repository needs to be cloned first.

git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
sudo pip install -r requirements.txt

To use the Open Telekom Cloud it is necessary to set your access data in the .ostackrc file in your home directory and load the environment variables.
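Loading the variables is a matter of sourcing the file; a small sketch (the quick check at the end is an assumption, not from the original article):

# Load the Open Telekom Cloud credentials into the current shell
source ~/.ostackrc

# Check that the OS_* variables Terraform and Ansible rely on are set
env | grep ^OS_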

Cluster Planning

Due to its high flexibility, Kubernetes offers many possibilities to adapt the cluster to your own needs.

Considering all of these possibilities is not part of this story, but they are covered in the Kubernetes documentation under "Creating a Custom Cluster from Scratch".

For the exemplary deployment we will create a cluster consisting of one master, on which etcd also runs, and two worker nodes. The cluster will not get a floating IP and will therefore not be accessible from the Internet.

Another choice to make is that of the CNI (Container Network Interface).

There are several options (Cilium, Calico, Flannel, Weave Net, …). For our example we use Flannel, which works out-of-the-box. Calico would also be a possibility, but then the OpenStack Neutron ports have to be configured to allow the service and pod subnets, as sketched below.
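A hedged sketch of that Neutron configuration (the port ID placeholder is an assumption; 10.233.0.0/18 and 10.233.64.0/18 are Kubespray's default service and pod networks, so adjust them if you change those variables):

# Allow the Kubernetes service and pod subnets on a node's Neutron port
# (repeat for every node port; adjust the ranges to your configuration)
openstack port set <port-id> \
  --allowed-address ip-address=10.233.0.0/18 \
  --allowed-address ip-address=10.233.64.0/18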

In order to control the cluster with the Kubernetes Dashboard after deployment, we will also have it installed.

Setup Cluster Configuration

The following commands have to be executed in the repository directory, so it is important to set the variable $CLUSTER to a meaningful name first.

cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/openstack/hosts
ln -s ../../contrib

After running the commands it is necessary to edit the inventory/$CLUSTER/cluster.tf file.

Gist by Robert Neumann

The description of the variables can be found under the following link. For this example, we will create a cluster with one Kubernetes master and two worker nodes. They will be based on the latest CentOS 7 image and the "s2.xlarge.4" flavor. etcd will also be installed on the master.
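If you are unsure which image name or flavor to put into cluster.tf, you can look them up first; a small sketch, assuming the openstack CLI is installed and your credentials are loaded:

# List the CentOS images and the s2 flavors available in your project
openstack image list | grep -i centos
openstack flavor list | grep -i s2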

Infrastructure Deployment

Now we are ready to deploy our cluster infrastructure with Terraform. For a better overview, the following diagram shows what it looks like after deployment. It will be extended during the story.

Image by Robert Neumann

To start the Terraform deployment, change to the inventory/$CLUSTER/ directory and run the following commands. First we need to install the required plugins, which is done with the init argument and the path to the plugins.

terraform init ../../contrib/terraform/openstack

This finishes really fast.

At this stage Terraform is ready to deploy the infrastructure, which can be done with the apply argument.

terraform apply -var-file=cluster.tf ../../contrib/terraform/openstack

After some seconds Terraform should show a result like the following, and the instances are reachable.

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
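If you want to cross-check on the OpenStack side, the created instances can be listed as well; a small sketch, assuming the openstack CLI and loaded credentials:

# The instances created by Terraform should show up here
openstack server list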

To check whether the servers are reachable, the following Ansible command can be executed. Before that, we have to change back to the root folder of the repository.

$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-etcd-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Kubernetes Cluster Deployment

The infrastructure is deployed, and the next step is the installation of the Kubernetes cluster.

First, we need to configure the cluster variables.

One of these files is inventory/$CLUSTER/group_vars/all/all.yml. Important in this file is to set cloud_provider to "openstack" and bin_dir to the path where the binaries will be installed. For the example cluster we use the following config.

Gist by Robert Neumann

Next, we need to configure the inventory/$CLUSTER/group_vars/k8s-cluster/k8s-cluster.yml file. Edit the kube_network_plugin variable to flannel or calico (which requires configuring the OpenStack Neutron ports). In our example we use flannel, which works out-of-the-box. We also have to configure the variable resolvconf_mode; we will use "docker_dns". With this value Kubespray will set up the Docker daemon flags. The example configuration for our cluster is shown below.

Gist by Robert Neumann

Last, we need to edit inventory/$CLUSTER/group_vars/k8s-cluster/addons.yml to enable the dashboard installation by setting the variable dashboard_enabled to "true". You can use the example configuration below.

Gist by Robert Neumann

After editing the configuration, we need to run the Ansible playbook against our inventory with the following command.

ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml

Ansible goes through several steps; if all of them are successful, your cluster looks like the following diagram.

Image by Robert Neumann

Testing

For testing your cluster you have to log in to the Kubernetes master, switch to the root user and use the kubectl tool to get the cluster information with the kubectl cluster-info command.

It will show the endpoint information of the master and the services in the cluster.
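A short sketch of this first check (the output shown by your cluster will differ):

# Show the API server and cluster service endpoints
kubectl cluster-info

# Verify that the master and both worker nodes are registered and Ready
kubectl get nodes -o wide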

If your cluster looks good, you need to create a Kubernetes dashboard user with the following commands.

# Create service account
kubectl create serviceaccount cluster-admin-dashboard-sa

# Bind the cluster-admin role to the service account
kubectl create clusterrolebinding cluster-admin-dashboard-sa \
  --clusterrole=cluster-admin \
  --serviceaccount=default:cluster-admin-dashboard-sa

# Parse the token of the service account (its secret lives in the default namespace)
kubectl describe secret $(kubectl get secret | awk '/^cluster-admin-dashboard-sa-token-/{print $1}') | awk '$1=="token:"{print $2}'

With the token it is now possible to log in to the dashboard.

But first you need to create a tunnel to your Kubernetes master, because the dashboard is only reachable on localhost at port 8001.

After that you can reach the dashboard under the URL localhost:8001.
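A hedged sketch of such a tunnel (the user name and master address are assumptions about your environment, and you may need to hop over a bastion host since the nodes have no floating IP):

# On your workstation: forward local port 8001 to the master node
ssh -L 8001:localhost:8001 centos@<master-ip>

# On the master: serve the Kubernetes API and dashboard on localhost:8001
kubectl proxy

Depending on the dashboard version, the full path is typically http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.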

Now you can use your token to log in by selecting "Token" and entering it.

Image by Robert Neumann

Now we are ready to work with the Kubernetes cluster.

This tutorial shows how easy it is to deploy a Kubernetes cluster on an OpenStack cloud and how to take care of it.
