Centralized logging in Kubernetes

You can use it for this tutorial. You can deploy the whole setup by executing the shell script <Centralized_logging_resources>/deployment/deploy.sh, but I'm going to explain certain parts of the deployment below.

The following YAML file (<Centralized_logging_resources>/deployment/centralized-logging-deployment.yaml) contains the Kubernetes Deployments for Elasticsearch and Kibana.

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wso2-elastic-search
spec:
  replicas: 1
  minReadySeconds: 30
  template:
    metadata:
      labels:
        deployment: wso2-elastic-search
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.27.2
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: wso2-elastic-search
          image: docker.elastic.co/elasticsearch/elasticsearch:6.5.3
          livenessProbe:
            tcpSocket:
              port: 9200
            initialDelaySeconds: 30
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /_cluster/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 5
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 0.25
            limits:
              cpu: 1
          ports:
            - containerPort: 9200
              protocol: "TCP"
              name: http
            - containerPort: 9300
              protocol: "TCP"
          env:
            - name: discovery.type
              value: "single-node"
            - name: ES_JAVA_OPTS
              value: -Xms256m -Xmx256m
            - name: network.host
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: PROCESSORS
              valueFrom:
                resourceFieldRef:
                  resource: limits.cpu
---
apiVersion: v1
kind: Service
metadata:
  name: wso2-elasticsearch-service
spec:
  selector:
    deployment: wso2-elastic-search
  ports:
    - name: http-1
      protocol: TCP
      port: 9200
    - name: http-2
      protocol: TCP
      port: 9300
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wso2-kibana
spec:
  replicas: 1
  minReadySeconds: 30
  template:
    metadata:
      labels:
        deployment: wso2-kibana
    spec:
      initContainers:
        - name: init-wso2-elasticsearch-service
          image: busybox
          command: ['sh', '-c', 'until nslookup wso2-elasticsearch-service; do echo waiting for wso2-elasticsearch-service; sleep 2; done;']
      containers:
        - name: wso2-kibana
          image: docker.elastic.co/kibana/kibana:6.5.3
          livenessProbe:
            tcpSocket:
              port: 5601
            initialDelaySeconds: 20
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /api/status
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
          imagePullPolicy: Always
          ports:
            - containerPort: 5601
              protocol: "TCP"
              name: http
          volumeMounts:
            - name: kibana-yml
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
          env:
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
      volumes:
        - name: kibana-yml
          configMap:
            name: kibana-yml
---
apiVersion: v1
kind: Service
metadata:
  name: wso2-kibana-service
spec:
  selector:
    deployment: wso2-kibana
  ports:
    - name: http
      protocol: TCP
      port: 5601
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wso2-kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: wso2-kibana
      http:
        paths:
          - path: /
            backend:
              serviceName: wso2-kibana-service
              servicePort: 5601
```
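Before moving on, it is worth checking that Elasticsearch actually came up healthy. Here is a minimal sanity check, assuming the resources above were applied to the wso2 namespace used later in this post; if X-Pack security happens to be enabled on your cluster, add -u elastic:changeme to the curl call:

```sh
# Forward the Elasticsearch Service to localhost.
kubectl port-forward svc/wso2-elasticsearch-service 9200:9200 -n wso2 &

# Hit the same endpoint the readinessProbe uses; status should be green or yellow.
curl 'http://localhost:9200/_cluster/health?pretty'
```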
Next comes the WSO2 APIM and Logstash deployment. Please refer to the YAML file <Centralized_logging_resources>/deployment/wso2apim-deployment.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wso2apim
spec:
  replicas: 1
  minReadySeconds: 30
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        deployment: wso2apim
    spec:
      initContainers:
        - name: init-wso2-elasticsearch-service
          image: busybox
          command: ['sh', '-c', 'until nc -z wso2-elasticsearch-service 9200; do echo waiting for wso2-elasticsearch-service; sleep 2; done;']
      containers:
        - name: wso2apim
          image: wso2/wso2am:2.6.0
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - nc -z localhost 9443
            initialDelaySeconds: 150
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - nc -z localhost 9443
            initialDelaySeconds: 150
            periodSeconds: 10
          imagePullPolicy: Always
          ports:
            - containerPort: 8280
              protocol: "TCP"
            - containerPort: 8243
              protocol: "TCP"
            - containerPort: 9763
              protocol: "TCP"
            - containerPort: 9443
              protocol: "TCP"
            - containerPort: 5672
              protocol: "TCP"
            - containerPort: 9711
              protocol: "TCP"
            - containerPort: 9611
              protocol: "TCP"
            - containerPort: 7711
              protocol: "TCP"
            - containerPort: 7611
              protocol: "TCP"
          volumeMounts:
            - name: shared-logs
              mountPath: /home/wso2carbon/wso2am-2.6.0/repository/logs/
        - name: logstash
          image: maanadev/logstash:6.5.3-custom
          volumeMounts:
            - name: shared-logs
              mountPath: /usr/share/logstash/mylogs/
            - name: logstash-yml
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
            - name: logstash-conf
              mountPath: /usr/share/logstash/pipeline/logstash.conf
              subPath: logstash.conf
          env:
            - name: NODE_ID
              value: "wso2-apim"
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
      volumes:
        - name: shared-logs
          emptyDir: {}
        - name: logstash-yml
          configMap:
            name: logstash-yml
        - name: logstash-conf
          configMap:
            name: logstash-conf
---
apiVersion: v1
kind: Service
metadata:
  name: wso2apim-service
spec:
  # label keys and values that must match in order to receive traffic for this service
  selector:
    deployment: wso2apim
  # ports that this service should serve on
  ports:
    - name: pass-through-http
      protocol: TCP
      port: 8280
    - name: pass-through-https
      protocol: TCP
      port: 8243
    - name: servlet-http
      protocol: TCP
      port: 9763
    - name: servlet-https
      protocol: TCP
      port: 9443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wso2apim-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  tls:
    - hosts:
        - wso2apim
        - wso2apim-gateway
  rules:
    - host: wso2apim
      http:
        paths:
          - path: /
            backend:
              serviceName: wso2apim-service
              servicePort: 9443
    - host: wso2apim-gateway
      http:
        paths:
          - path: /
            backend:
              serviceName: wso2apim-service
              servicePort: 8243
```

If you pay attention to the ConfigMaps mounted into the Logstash container, you will see that I have mounted the logstash.conf file:

```
input {
  file {
    add_field => { instance_name => "${NODE_ID}-${NODE_IP}" }
    type => "wso2"
    path => [ '/usr/share/logstash/mylogs/wso2carbon.log' ]
    codec => multiline {
      pattern => "^TID"
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "wso2" {
    grok {
      match => [ "message", "TID:%{SPACE}\[%{INT:tenant_id}\]%{SPACE}\[\]%{SPACE}\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}{%{JAVACLASS:java_class}}%{SPACE}-%{SPACE}%{JAVALOGMESSAGE:log_message}%{SPACE}{%{JAVACLASS:java_class_duplicate}}%{GREEDYDATA:stacktrace}" ]
      match => [ "message", "TID:%{SPACE}\[%{INT:tenant_id}\]%{SPACE}\[\]%{SPACE}\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}{%{JAVACLASS:java_class}}%{SPACE}-%{SPACE}%{JAVALOGMESSAGE:log_message}%{SPACE}{%{JAVACLASS:java_class_duplicate}}" ]
    }
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ['wso2-elasticsearch-service']
    user => "elastic"
    password => "changeme"
    index => "${NODE_ID}-${NODE_IP}-%{+YYYY.MM.dd}"
  }
}
```

With this configuration, I'm telling Logstash:

- how to detect a multiline log event (see the sample log entry after this list)
- how to parse a log event, whether it is a plain entry (INFO or ERROR) or one followed by a stack trace
- how to name the index (${NODE_ID}-${NODE_IP}-%{+YYYY.MM.dd})
- which Elasticsearch endpoint to ship the events to
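To make the multiline rule concrete, here is what a typical wso2carbon.log entry looks like. This is an illustrative line, not captured output, so the exact class name and message will vary:

```
TID: [-1234] [] [2018-12-28 10:15:32,745]  INFO {org.wso2.carbon.core.ServerManagement} -  Started server  {org.wso2.carbon.core.ServerManagement}
```

Every line that starts with TID begins a new event; any line that does not match ^TID, such as a Java stack trace frame under an ERROR entry, is folded into the previous event because of negate => true and what => "previous".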
Next, let's talk a bit about the shared-logs volume. The WSO2 APIM and Logstash containers run in the same Pod, and the log files are shared between the two containers via the volume named shared-logs, of type emptyDir:

```yaml
volumes:
  - name: shared-logs
    emptyDir: {}
```

Browsing the logs

If everything went well, you can get the Kibana and APIM Ingress info by running the following command:

```sh
kubectl get ing -n wso2
```

Get the IP for the wso2-kibana hostname and add a new entry to the /etc/hosts file. Note that sudo echo "..." >> /etc/hosts does not work, because the redirection runs as the unprivileged user, so use tee instead:

```sh
echo "<IP> wso2-kibana" | sudo tee -a /etc/hosts
```

Now go to the Kibana dashboard at http://wso2-kibana. Go to the Management dashboard and then Kibana > Index Patterns. You should be able to see the existing index as wso2-apim-<IP>-<Date>. Add an index pattern wso2-apim-* to capture all of them and click Next step. Select timestamp as the time filter field from the drop-down menu and click Create index pattern.

Now go to the Discover tab, where you can see the log events. If you cannot see any events, try adjusting the time range.
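If the Discover tab stays empty even after widening the time range, it can help to confirm that Logstash is actually shipping documents. Here is a quick check; the pod name below is a placeholder you should replace with your own:

```sh
# List the indices Logstash has created; expect names like wso2-apim-<pod IP>-<date>.
kubectl port-forward svc/wso2-elasticsearch-service 9200:9200 -n wso2 &
curl 'http://localhost:9200/_cat/indices?v'

# If no index appears, tail the Logstash sidecar for pipeline errors.
kubectl logs <wso2apim-pod-name> -c logstash -n wso2
```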
