The basics of deploying Logstash pipelines to Kubernetes

The quickest way to tell whether the pipeline is running correctly is by tailing the logs of the Pod.
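If you don't have the Pod's name to hand, you can look it up first. A quick sketch, assuming the Deployment labels its Pods with app=apache-log-pipeline (the label is an assumption, not taken from the article's manifests):

```
# List the Pods created for the pipeline Deployment; the label is assumed
kubectl get pods -l app=apache-log-pipeline
```

The k in the next command is just a common shell alias for kubectl.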

```
k logs -f pod/apache-log-pipeline-5cbbc5b879-kbkmb
```

If the pipeline is running correctly, the last log line you should see says that the Logstash API endpoint has been started successfully.

```
[2019-01-20T11:12:03,409][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
```

When I initially built this pipeline I came across two errors.

The first was when the configuration file wasn't in a readable format due to its formatting; the error was printed and visible when tailing the Pod logs.

The second was when the ConfigMap wasn't mounted correctly: the pipeline would run, stop, then restart, and again the error was printed and visible when tailing the logs.
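For reference, a minimal sketch of how the pipeline ConfigMap might be mounted into the Logstash container; the resource names and the ConfigMap name are assumptions for illustration, not taken from the article's manifests:

```yaml
# Sketch of the Deployment's Pod spec; names here are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-log-pipeline
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache-log-pipeline
  template:
    metadata:
      labels:
        app: apache-log-pipeline
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:6.5.4
          ports:
            - containerPort: 9600   # Logstash monitoring API
          volumeMounts:
            # Logstash picks up *.conf pipeline files from this directory
            - name: pipeline-config
              mountPath: /usr/share/logstash/pipeline/
      volumes:
        - name: pipeline-config
          configMap:
            name: apache-log-pipeline-config   # assumed ConfigMap name
```

If the mount is wrong, kubectl describe pod will usually surface the volume error alongside the container restarts.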

So, we have a fully functioning Logstash pipeline running in Kubernetes.

But it's not actually doing anything.

What we need to do now is run Filebeat.

To make sure this worked correctly I had two Terminal windows open, one tailing the logs of the Pod and the other for the Filebeat command I was about to run.

```
sudo ./filebeat -e -c filebeat.yml -d "publish" -strict.perms=false
```

When this command is run, Filebeat will come to life and read the log file specified in the filebeat.yml configuration file. The other flags are covered in the tutorial mentioned at the beginning of the article.
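For context, a minimal filebeat.yml along the lines this setup implies might look like the sketch below; the log path and the Logstash host/port are assumptions, so adjust them to wherever your Kubernetes Service exposes the pipeline's beats input:

```yaml
# Minimal Filebeat 6.x config sketch; the path and host are assumptions.
filebeat.inputs:
  - type: log
    paths:
      - ./logstash-tutorial.log   # the Apache sample log from the Elastic tutorial

output.logstash:
  hosts: ["localhost:5044"]   # assumed address of the pipeline's beats input
```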

In the pipeline configuration file we included the stdout plugin so messages received are printed to the console.
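The output section of the pipeline configuration implied here might look something like this sketch; the Elasticsearch host is an assumption, while the index pattern matches the metadata-based Index discussed later in the article:

```
output {
  # Print each event to the console for debugging
  stdout {
    codec => rubydebug
  }
  # Ship events to Elasticsearch, building the index name from Beats metadata
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```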

With this in mind, we should see messages printed out in the Terminal window tailing the Pod's logs, and something similar in the window running the Filebeat command.

{ "@timestamp" => 2019-01-20T11:35:36.

042Z, "request" => "/style2.

css", "prospector" => { "type" => "log" }, "response" => "200", "httpversion" => "1.

1", "offset" => 18005, "bytes" => "4877", "tags" => [ [0] "beats_input_codec_plain_applied" ], "timestamp" => "04/Jan/2015:05:24:57 +0000", "clientip" => "81.

220.

24.

207", "referrer" => ""http://www.

semicomplete.

com/blog/geekery/ssl-latency.

html" "Mozilla/5.

0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.

73.

11 (KHTML, like Gecko) Version/7.

0.

1 Safari/537.

73.

11"", "ident" => "-", "agent" => ""Mozilla/5.

0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.

73.

11 (KHTML, like Gecko) Version/7.

0.

1 Safari/537.

73.

11"", "beat" => { "version" => "6.

5.

4", "name" => "local", "hostname" => "local" }, "auth" => "-", "@version" => "1", "host" => { "name" => "local" }, "verb" => "GET", "input" => { "type" => "log" }, "source" => "/filebeat-6.

5.

4-darwin-x86_64/logstash-tutorial.

log"}If we’ve seen the messages printed in the console we can almost guarantee that the message have been delivered into ElasticSearch.

There are two ways to check this: call the ElasticSearch API with some parameters, or use Kibana.
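For the API route, a quick sketch, assuming ElasticSearch is reachable on localhost:9200 (the host and port are assumptions):

```
# List the Filebeat indices, then pull back one document to eyeball
curl "http://localhost:9200/_cat/indices/filebeat-*?v"
curl "http://localhost:9200/filebeat-*/_search?size=1&pretty"
```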

I’m a big fan of Kibana, so that’s the route we’re heading down.

Fire up Kibana and head to the Discover section.

In our pipeline configuration, more specifically the ElasticSearch output, we specify that the Index to be created follows a pattern built from metadata, which includes the Filebeat version and the date.

We use this index pattern to retrieve the data from ElasticSearch.

In this example the Index that I defined was called filebeat-6.5.4-2019.01.20, as this was the Index that was created by Logstash.

Next, we configure the Time Filter field.

This field is used when we want to filter our data by time.

Some logs will have multiple time fields, which is why we have to specify it.

Once these steps have been carried out we should be able to view the logs.

If we go back into the Discover section once we have defined the Index, the logs should be visible.

If the logs are visible, give yourself a pat on the back! Well done and good effort! Have a celebratory dab if you want :)

So, a couple of cool Kibana-related things before I wrap up.

We can monitor our Logstash pipelines from the Monitoring section of Kibana.

We can get insights into event rates, such as events emitted and received.

We also get Node information such as CPU utilization and JVM metrics.
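If the Monitoring section shows nothing, monitoring usually has to be switched on in logstash.yml first. A sketch for Logstash 6.x, where the ElasticSearch URL is an assumption:

```yaml
# logstash.yml: ship Logstash metrics to Elasticsearch for Kibana Monitoring
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["http://localhost:9200"]
```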

Last but not least is the Logs section of Kibana.

I think this is my favorite section of Kibana at the moment.

It allows you to view streaming logs in near-real time and look back at historical logs.

To see the Logs section in action, head into the Filebeat directory and run sudo rm data/registry; this resets the registry Filebeat uses to track how much of each log it has read, so our log file will be shipped again from the beginning.

Once this has been done we can start Filebeat up again.

```
sudo ./filebeat -e -c filebeat.yml -d "publish" -strict.perms=false
```

If you place the Terminal you're running Filebeat in next to the browser you have Kibana in, you'll see the logs streaming in near-real time. Cool, eh?

As the title says, this is a basic article.

It guides you on how to get something up and working quickly, so there are bound to be improvements and changes that could be made to make this better on all fronts.

Many thanks as always for reading my articles, it’s really appreciated.

Any thoughts, comments or questions, drop me a tweet.

Cheers!

Danny
https://twitter.com/danieljameskay
