How to monitor OpenShift ElasticSearch logging with ElasticHQ

Not quite.

Understanding ElasticSearch Deployment

In the standard installation, the internal logging ES doesn’t expose the health API directly, so the only way to query the API is to curl the localhost URL inside the ES container (as demonstrated in the previous snippet).
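For reference, that query looks roughly like the following; the pod name is a placeholder and the certificate paths inside the ES container may differ in your installation:

# run the health query from inside the ES container (pod name is illustrative)
oc -n logging exec <es-pod-name> -- \
  curl -s \
    --key /etc/elasticsearch/secret/admin-key \
    --cert /etc/elasticsearch/secret/admin-cert \
    --cacert /etc/elasticsearch/secret/admin-ca \
    "https://localhost:9200/_cat/health?v"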

This approach is necessary because only the ES admin has access to the health API.

Even if you expose the ES API via a new route, the health endpoints won’t be accessible even to a user with the cluster-admin role.

The only way to access this API is by using the ES administrator certificate and key.

This key material is available in the logging namespace, in a secret named logging-elasticsearch.

By mounting this secret into a container inside the logging namespace, it becomes possible to access the health API via the ES service (svc):

curl -s --key admin-key --cert admin-cert --cacert admin-ca https://logging-es.logging.svc.cluster.local:9200/_cat/health?v

Unfortunately (at the time of this writing), ElasticHQ doesn’t support client certificate authentication.

This limitation makes it harder to connect ElasticHQ to the OpenShift logging ES.

Deploying ElasticHQ on OpenShift

Since ElasticHQ cannot connect to an endpoint protected by client certificates, and to avoid exposing the ES API outside the cluster for security reasons, our approach was to add a reverse proxy between the ElasticHQ container and the ES service.

This proxy is responsible for using the ES administrator keys to authenticate and forward requests to the ES container:

[Figure: ElasticHQ deployment on OpenShift]

The proxy implementation used was NGINX, for which Red Hat provides a supported container image.
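One way to get the secret into the proxy’s filesystem is with oc set volume; a minimal sketch, assuming the pod is managed by a DeploymentConfig named elastichq and the proxy container is named proxy (both placeholders):

# mount the ES admin secret into the proxy container (names are illustrative)
oc -n logging set volume dc/elastichq --add \
  --name=es-admin-secret \
  --type=secret \
  --secret-name=logging-elasticsearch \
  --containers=proxy \
  --mount-path=/opt/app-root/src/elasticsearch/secret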

NGINX configuration was pretty straightforward once the secrets had been mounted into the container’s filesystem:

location / {
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass https://logging-es.logging.svc.cluster.local:9200;
    proxy_ssl_trusted_certificate /opt/app-root/src/elasticsearch/secret/admin-ca;
    proxy_ssl_certificate /opt/app-root/src/elasticsearch/secret/admin-cert;
    proxy_ssl_certificate_key /opt/app-root/src/elasticsearch/secret/admin-key;
}

The root location proxies every request to the ES service using the administrator keys.

This way, the monitoring tool can connect to the proxy internally and make the API calls it needs to retrieve the ES metrics.

To deploy this container along with the ElasticHQ one, we implemented the sidecar container pattern:

“Deploy components of an application into a separate process or container to provide isolation and encapsulation.”

In this use case, that means a pod with two containers: the proxy, responsible for interfacing with the secured ES service, and the monitoring container, which delivers the desired functionality.

To read more about how multiple containers are managed together within a pod, go to the Kubernetes documentation.
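Once the pod is up, a quick way to confirm that both containers are running side by side (the pod and container names below are placeholders):

# list the containers defined in the ElasticHQ pod
oc -n logging get pod elastichq-1-abcde \
  -o jsonpath='{.spec.containers[*].name}'
# prints something like: elastichq proxy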

The proxy container exposes port 8080 only internally, which means that only the containers inside the pod can connect to it.

This avoids unnecessary exposure of the ElasticSearch API.

Only the ElasticHQ backend services would be able to connect to it.
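You can sanity-check this wiring from inside the pod, assuming curl is available in the proxy image (pod and container names are again placeholders):

# call ES through the local proxy from within the pod
oc -n logging exec elastichq-1-abcde -c proxy -- \
  curl -s "http://localhost:8080/_cat/health?v"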

Connecting the ElasticHQ container to the proxy was pretty simple: just a matter of adding the http://localhost:8080 URL in the dashboard:

[Figure: ElasticHQ connection]

Voilà! You have your OpenShift logging ES monitored by ElasticHQ.

Play around with the dashboard to discover the nice functionality exposed by it.

[Figure: ElasticHQ on OpenShift: metrics view]

Even with Prometheus in place, ElasticHQ can add value to your daily operations, saving you the time of building your own monitoring views, or even letting you aggregate other ElasticSearch clusters into the same dashboard.

A fully functional ElasticHQ deployment on OpenShift, with all the technical details, is available in this GitHub repository (star us!).

If you have any issues, please let me know by opening an issue there or leaving a comment on this article.

See you next time!
