Docker’s Voting App on Swarm, Kubernetes and Nomad

That is all it takes to have a Swarm cluster: a single-node cluster, but still a Swarm cluster with all its associated processes.

Deployment of the application

Among the Compose files available in the Voting App's GitHub repository, docker-stack.yml is the one that needs to be used to deploy the application on a Swarm.

$ docker stack deploy -c docker-stack.yml app
Creating network app_backend
Creating network app_default
Creating network app_frontend
Creating service app_visualizer
Creating service app_redis
Creating service app_db
Creating service app_vote
Creating service app_result
Creating service app_worker

As I run the stack on Docker for Mac, I have access to the application directly from localhost.

It’s possible to select CATS or DOGS from the vote interface (port 5000) and to see the result on port 5001.
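Before opening the interfaces, a quick sanity check from the CLI can confirm that all the services of the stack are up (these are standard Docker commands; their output is omitted here):

$ docker stack services app   # lists each service of the stack with its replica count
$ docker stack ps app         # lists the individual tasks (containers) and their current state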

I won't go into the details here; I just wanted to show you how easily this application can be deployed on a Swarm.

In case you want a more in-depth guide on how to deploy this same application on a multi-node Swarm, you can check out the following article: Deploy the Voting App on a Docker Swarm using Compose version 3 (medium.com).

Kubernetes

Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications.

Kubernetes concepts

A Kubernetes cluster is composed of one or several Masters and Nodes.

The Masters handle the cluster's control plane (managing the cluster's state, scheduling tasks, reacting to cluster events).

The Nodes (previously called Minions) provide the runtime to execute the application containers (through Pods).

Architecture of a Kubernetes cluster

In order to run commands against a Kubernetes cluster, the kubectl command line tool is used.

We will see several examples of its usage below.

There are several high-level Kubernetes objects we need to know to understand how to deploy an application:

- A Pod is the smallest unit that can be deployed on a Node. It's a group of containers which must run together. Quite often, however, a Pod only contains one container.
- A ReplicaSet ensures that a specified number of Pod replicas are running at any given time.
- A Deployment manages ReplicaSets and allows the handling of rolling updates, blue/green deployments, canary testing, etc.
- A Service defines a logical set of Pods and a policy by which to access them.

In this chapter, we will use a Deployment and a Service object for each service of the Voting App.

Installing kubectl

kubectl is the command line tool used to deploy and manage applications on Kubernetes. It can be easily installed following the official documentation (Install and Set Up kubectl, kubernetes.io). For instance, to install it on macOS, the following commands need to be run.

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

Installing Minikube

Minikube is an all-in-one setup of Kubernetes.

It creates a local VM, for instance on VirtualBox, and runs a single-node cluster in which all the Kubernetes processes run. It's obviously not a tool that should be used to set up a production cluster, but it's really convenient for development and testing purposes.

kubernetes/minikube: Run Kubernetes locally (github.com)

Creation of a single-node cluster

Once Minikube is installed, we just need to issue the start command to set up our single-node Kubernetes cluster.

$ minikube start
Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Downloading Minikube ISO
 97.80 MB / 97.80 MB [==============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

Kubernetes descriptors

On Kubernetes, containers are not run directly, but through a ReplicaSet managed by a Deployment. Below is an example of a .yml file describing a Deployment: a ReplicaSet will ensure that two replicas of a Pod using Nginx are running.
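A minimal sketch of such a Deployment could look like the following; the nginx-deployment name and the app: nginx label are illustrative, and the apiVersion matches the Kubernetes 1.7 cluster used here.

apiVersion: apps/v1beta1      # API group available on the Kubernetes 1.7 cluster used here
kind: Deployment
metadata:
  name: nginx-deployment      # illustrative name
spec:
  replicas: 2                 # the underlying ReplicaSet keeps two Pods running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx            # label used by the ReplicaSet selector (and by Services)
    spec:
      containers:
      - name: nginx
        image: nginx:1.13     # any Nginx image tag would do
        ports:
        - containerPort: 80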

As we will see below, in order to create a Deployment we need to use the kubectl command line tool.

To define a whole micro-services application in Kubernetes we need to create a Deployment file for each service.

We can do this manually or we can use Kompose to help us in this task.

Using Kompose to Create Deployments and Services

Kompose is a great tool which converts Docker Compose files into descriptor files (for Deployments and Services) used by Kubernetes.

It is very convenient and it really accelerates the process of migration.

Kubernetes + Compose = Kompose (kompose.io)

Notes:

- Kompose does not have to be used, as descriptor files can be written manually, but it sure speeds up the deployment when it is used.
- Kompose does not take into account all the options used in a Docker Compose file.

The following commands install Kompose version 1.0.0 on Linux or macOS.

# Linux
$ curl -L https://github.com/kubernetes/kompose/releases/download/v1.0.0/kompose-linux-amd64 -o kompose

# macOS
$ curl -L https://github.com/kubernetes/kompose/releases/download/v1.0.0/kompose-darwin-amd64 -o kompose

$ chmod +x kompose
$ sudo mv ./kompose /usr/local/bin/kompose

Before applying Kompose to the original docker-stack.yml file, we will modify that file and remove the deploy key of each service.

This key is not taken into account and can raise errors when generating descriptor files.

We can also remove the information regarding the networks.

We will then feed the resulting file, renamed docker-stack-k8s.yml, to Kompose.
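As an abridged sketch of what the stripped-down file looks like (only the vote service is shown in full; its image and port mapping are the ones referenced elsewhere in this article, and the remaining services keep the same shape):

version: "3"

services:
  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - "5000:80"        # vote interface published on port 5000
    # deploy: ...        <- removed: Kompose ignores this key and it can raise errors
    # networks: ...      <- network information removed as well
  # redis, db, result, worker and visualizer follow the same pattern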

From the docker-stack-k8s.yml file, we can generate the descriptors of the Voting App using the following command.

$ kompose convert --file docker-stack-k8s.yml
WARN Volume mount on the host "/var/run/docker.sock" isn't supported - ignoring path on the host
INFO Kubernetes file "db-service.yaml" created
INFO Kubernetes file "redis-service.yaml" created
INFO Kubernetes file "result-service.yaml" created
INFO Kubernetes file "visualizer-service.yaml" created
INFO Kubernetes file "vote-service.yaml" created
INFO Kubernetes file "worker-service.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
INFO Kubernetes file "db-data-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
INFO Kubernetes file "result-deployment.yaml" created
INFO Kubernetes file "visualizer-deployment.yaml" created
INFO Kubernetes file "visualizer-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "vote-deployment.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created

We can see that for each service, a deployment and a service file are created.

We only got one warning linked to the visualizer service, as the Docker socket cannot be mounted.

However, we will not try to run this service and will focus on the other ones.

Deployment of the application

Using kubectl, we will create all the components defined in the descriptor files. We indicate that the files are located in the current folder.

$ kubectl create -f .
persistentvolumeclaim "db-data" created
deployment "db" created
service "db" created
deployment "redis" created
service "redis" created
deployment "result" created
service "result" created
persistentvolumeclaim "visualizer-claim0" created
deployment "visualizer" created
service "visualizer" created
deployment "vote" created
service "vote" created
deployment "worker" created
service "worker" created
unable to decode "docker-stack-k8s.yml": ...

Note: as we left the modified Compose file in the current folder, we get an error because it cannot be parsed. This error can safely be ignored.

The commands below show the services and deployments created.

$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
db           None         <none>        55555/TCP   3m
kubernetes   10.0.0.1     <none>        443/TCP     4m
redis        10.0.0.64    <none>        6379/TCP    3m
result       10.0.0.121   <none>        5001/TCP    3m
visualizer   10.0.0.110   <none>        8080/TCP    3m
vote         10.0.0.142   <none>        5000/TCP    3m
worker       None         <none>        55555/TCP   3m

$ kubectl get deployment
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
db           1         1         1            1           3m
redis        1         1         1            1           3m
result       1         1         1            1           3m
visualizer   1         1         1            1           3m
vote         1         1         1            1           3m
worker       1         1         1            1           3m

Expose the application to the outside world

In order to access the vote and result interfaces, we need to slightly modify the services created for them.

The file below is the descriptor generated for vote.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: vote
  name: vote
spec:
  ports:
  - name: "5000"
    port: 5000
    targetPort: 80
  selector:
    io.kompose.service: vote
status:
  loadBalancer: {}

We will change the service's default type, ClusterIP, to NodePort instead.

While ClusterIP only makes a service accessible internally, NodePort publishes a port on each node of the cluster and makes the service available to the outside world.

We will do the same for result as we want both vote and result to be accessible from the outside.

apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: vote
  name: vote
spec:
  type: NodePort
  ports:
  - name: "5000"
    port: 5000
    targetPort: 80
  selector:
    io.kompose.service: vote

Once the modification is done for both services (vote and result), we can recreate them.

$ kubectl delete svc vote
$ kubectl delete svc result
$ kubectl create -f vote-service.yaml
service "vote" created
$ kubectl create -f result-service.yaml
service "result" created

Access the application

Let's now get the details of the vote and result services and retrieve the port each one exposes.

$ kubectl get svc vote result
NAME     CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
vote     10.0.0.215   <nodes>       5000:30069/TCP   15m
result   10.0.0.49    <nodes>       5001:31873/TCP   8m

Vote is available on port 30069 and result on port 31873.

We can now vote and see the result.
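Since the cluster runs inside the Minikube VM, the NodePorts are reached through the VM's IP rather than localhost. One way to open the interfaces, assuming the NodePorts shown above and a macOS host where open launches the browser:

$ minikube ip                        # prints the IP of the Minikube VM
$ open http://$(minikube ip):30069   # vote interface
$ open http://$(minikube ip):31873   # result interface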

With some basic understanding of Kubernetes’ components, we managed to deploy the Voting App very easily.

Kompose really helped us in the process.

Hashicorp's Nomad

Nomad is a tool for managing a cluster of machines and running applications on them. Nomad abstracts away machines and the locations of applications, and instead enables users to declare what they want to run.

Nomad then handles where they should run, and how to run them.

Nomad concepts

A Nomad cluster is composed of agents, which can run in Server or Client mode.

Servers take on the responsibility of being part of the consensus protocol, which allows the servers to perform leader election and state replication.

Client nodes are very lightweight, as they interface with the server nodes and maintain very little state of their own.

Client nodes are where tasks are run.

Architecture of a Nomad cluster

Several types of tasks can run on a Nomad cluster. Docker workloads run using the docker driver.

This is the driver we will use to run the Voting App.

There are several concepts (stanzas, in Nomad vocabulary) we need to understand first in order to deploy an application on Nomad:

- A job is a declarative specification of tasks that Nomad should run. It is defined in a job file (a text file in HCL, the Hashicorp Configuration Language). A job can have one or many groups of tasks. Jobs are submitted by users and represent a desired state.
- A group contains a set of tasks that are co-located on a machine.
- A task is a running process; a Docker container in our example.
- The mapping of tasks in a job to clients is done using allocations. An allocation is used to declare that a set of tasks in a job should be run on a particular node.

There are many more stanzas described in Nomad's documentation.

The setup

In this example, we will run the application on a Docker host created with Docker Machine. Its local IP is 192.168.1.100. We will start by running Consul, used for service registration and discovery. We'll then start Nomad and deploy the services of the Voting App as Nomad jobs.

Getting Consul for service registration and discovery

For service registration and discovery, it is recommended to use a tool such as Consul, which does not run as a Nomad job. Consul can be downloaded from www.consul.io. The following command launches a Consul server locally.

$ consul agent -dev -client=0.0.0.0 -dns-port=53 -recursor=8.8.8.8

Let's get some more details on the options used:

- -dev is a convenient flag which sets up a Consul cluster with a server and a client. This option must not be used except for dev and testing purposes.
- -client=0.0.0.0 allows us to reach the Consul services (API and DNS server) from any interface of the host. This is needed because Nomad will connect to Consul on the localhost interface, while containers will connect through the Docker bridge (often something like 172.17.x.x).
- -dns-port=53 specifies the port used by Consul's DNS server (it defaults to 8600). We've set it to the standard port 53, so Consul DNS can be used from within the containers.
- -recursor=8.8.8.8 specifies another DNS server which will serve requests that cannot be handled by Consul.

Getting Nomad

Nomad is a single binary, written in Go, which can be downloaded from www.nomadproject.io.

Creation of a single-node cluster

Once Nomad is downloaded, we can run an agent with the following configuration.

// nomad.hcl
bind_addr = "0.0.0.0"
data_dir  = "/var/lib/nomad"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled       = true
  network_speed = 100
}

The agent will run both as a server and a client. We specify bind_addr so that it listens on all interfaces and tasks can be accessed from the outside.

Let's start a Nomad agent with this configuration:

$ nomad agent -config=nomad.hcl
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
    Loaded configuration from nomad-v2.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:
        Client: true
     Log Level: INFO
        Region: global (DC: dc1)
        Server: true
       Version: 0.6.0
==> Nomad agent started! Log data will stream in below:

Note: by default, Nomad connects to the local Consul instance.

We have just set up a single-node cluster. The information on the unique member is listed below.

$ nomad server-members
Name                  Address        Port  Status  Leader  Protocol  Build  Datacenter  Region
neptune.local.global  192.168.1.100  4648  alive   true    2         0.6.0  dc1         global

Deployment of the application

From the previous examples, we saw that, in order to deploy the Voting App on a Swarm, the Compose file can be used directly.

When deploying the application on Kubernetes, descriptor files can be created from this same Compose file.

Let’s see now how our Voting App can be deployed on Nomad.

First, there is no tool like Kompose in the Hashicorp world that can smooth the migration of a Docker Compose application to Nomad.

This might be an idea for future open source projects.

Files describing jobs, groups, tasks (and other Nomad stanzas) need to be written manually.

We will go into the details of defining jobs for the Redis and the vote services of our application.

The process will be quite similar for the other services.

Definition of the Redis job

The following file defines the Redis part of the application.

// redis.nomad
job "redis-nomad" {
  datacenters = ["dc1"]
  type = "service"

  group "redis-group" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
        port_map {
          db = 6379
        }
      }

      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256MB
        network {
          mbits = 10
          port "db" {}
        }
      }

      service {
        name = "redis"
        address_mode = "driver"
        port = "db"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

Let's explain this a little bit more.

- The name of the job is redis-nomad.
- The job is of type service (meaning a long-running task).
- A group, with an arbitrary name, is defined; it contains a single task.
- The task, named redis, uses the docker driver, meaning it will run in a container.
- The redis task is configured to use the redis:3.2 Docker image and to expose port 6379, labeled db, within the cluster.
- Within the resources block, some cpu and memory constraints are defined. In the network block, we specify that the port db should be dynamically allocated.
- The service block defines how the registration will be handled in Consul: the service name, the IP address which should be registered (the IP of the container), and the definition of the health check.

To check if this job can be run correctly, we first use the plan command.

$ nomad plan redis.nomad
+ Job: "nomad-redis"
+ Task Group: "cache" (1 create)
  + Task: "redis" (forces create)

Scheduler dry-run:
- All tasks successfully allocated.

Job Modify Index: 0
To submit the job with version verification run:

nomad run -check-index 0 redis.nomad

When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.

Everything seems fine; let's now see if the job can deploy the task.

$ nomad run redis.nomad
==> Monitoring evaluation "1e729627"
    Evaluation triggered by job "nomad-redis"
    Allocation "bf3fc4b2" created: node "b0d927cd", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "1e729627" finished with status "complete"

From this output, we can see that an allocation is created. Let's check its status.

$ nomad alloc-status bf3fc4b2
ID                  = bf3fc4b2
Eval ID             = 1e729627
Name                = nomad-redis.cache[0]
Node ID             = b0d927cd
Job ID              = nomad-redis
Job Version         = 0
Client Status       = running
Client Description  = <none>
Desired Status      = run
Desired Description = <none>
Created At          = 08/23/17 21:52:03 CEST

Task "redis" is "running"

Task Resources
CPU        Memory           Disk     IOPS  Addresses
1/500 MHz  6.3 MiB/256 MiB  300 MiB  0     db: 192.168.1.100:21886

Task Events:
Started At     = 08/23/17 19:52:03 UTC
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                    Type        Description
08/23/17 21:52:03 CEST  Started     Task started by client
08/23/17 21:52:03 CEST  Task Setup  Building Task Directory
08/23/17 21:52:03 CEST  Received    Task received by client

The Redis task (the container) seems to run correctly.

Let’s check the Consul DNS server and make sure the service is correctly registered.

$ dig @localhost SRV redis.service.consul

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost SRV redis.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35884
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;redis.service.consul. IN SRV

;; ANSWER SECTION:
redis.service.consul. 0 IN SRV 1 1 6379 ac110002.addr.dc1.consul.

;; ADDITIONAL SECTION:
ac110002.addr.dc1.consul. 0 IN A 172.17.0.2

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Aug 23 23:08:36 CEST 2017
;; MSG SIZE  rcvd: 103

We can see that the task was allocated the IP 172.17.0.2 (on Docker's bridge) and that its port is 6379, as we defined.

Definition of the vote job

Let's now define the job for the vote service. We will use the following job file.

// job.nomad
job "vote-nomad" {
  datacenters = ["dc1"]
  type = "service"

  group "vote-group" {
    task "vote" {
      driver = "docker"

      config {
        image = "dockersamples/examplevotingapp_vote:before"
        dns_search_domains = ["service.dc1.consul"]
        dns_servers = ["172.17.0.1", "8.8.8.8"]
        port_map {
          http = 80
        }
      }

      service {
        name = "vote"
        port = "http"
        check {
          name     = "vote interface running on 80"
          interval = "10s"
          timeout  = "5s"
          type     = "http"
          protocol = "http"
          path     = "/"
        }
      }

      resources {
        cpu    = 500 # 500 MHz
        memory = 256 # 256MB
        network {
          port "http" {
            static = 5000
          }
        }
      }
    }
  }
}

There are a couple of differences from the job file we used for Redis.

The vote task connects to Redis using only the name of the task. The example below is an excerpt of the app.py file used in the vote service.

# app.py
def get_redis():
    if not hasattr(g, 'redis'):
        g.redis = Redis(host="redis", db=0, socket_timeout=5)
    return g.redis

In this case, the vote container needs to use the Consul DNS to get the IP of the Redis container.

DNS requests from a container are handled through the Docker bridge (172.17.0.1). The dns_search_domains option is also specified, as a service X is registered as X.service.dc1.consul within Consul.

We defined a static port so that the vote service can be accessed on port 5000 from outside the cluster.

We can pretty much use the same configuration for the other services: worker, postgres and result.
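As an illustration (this is not the original job file from the article), a job for postgres could mirror the Redis one, swapping the image, port label and Consul service name; the postgres:9.4 image and the db service name are assumptions based on how the Voting App wires its services:

// postgres.nomad (illustrative sketch)
job "nomad-postgres" {
  datacenters = ["dc1"]
  type = "service"

  group "pg-group" {
    task "postgres" {
      driver = "docker"

      config {
        image = "postgres:9.4"   # assumed tag; use the one referenced in docker-stack.yml
        port_map {
          pg = 5432
        }
      }

      resources {
        cpu    = 500
        memory = 256
        network {
          mbits = 10
          port "pg" {}           # dynamically allocated, like the Redis db port
        }
      }

      service {
        name = "db"              # name the worker and result services use to reach Postgres
        address_mode = "driver"
        port = "pg"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}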

Access the application

Once all the jobs have been launched, we can check the status and should see all of them running.

$ nomad status
ID              Type     Priority  Status   Submit Date
nomad-postgres  service  50        running  08/23/17 22:12:04 CEST
nomad-redis     service  50        running  08/23/17 22:11:46 CEST
result-nomad    service  50        running  08/23/17 22:12:10 CEST
vote-nomad      service  50        running  08/23/17 22:11:54 CEST
worker-nomad    service  50        running  08/23/17 22:13:19 CEST

We can also see all the services registered and healthy in Consul's interface. From the node IP (192.168.1.100 in this example), we can access the vote and result interfaces.
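To double-check from the command line (port 5000 is the static port defined in the vote job above; port 5001 for result is an assumption, mirroring the Swarm and Kubernetes setups):

$ curl -I http://192.168.1.100:5000   # vote interface
$ curl -I http://192.168.1.100:5001   # result interface (assuming static = 5001 in its job file)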

Summary

Docker's Voting App is a great application for demo purposes. I was curious to see if it could be deployed, without changes to the code, on some of the main orchestration tools. The answer is yes, and without too many tweaks. I hope this article helped in understanding some of the basics of Swarm, Kubernetes and Nomad. I'd love to hear about how you run Docker workloads and which orchestration tool you are using.
