Hello!
In this post, I want to show you how to monitor Kibana with the OpenTelemetry Collector. We will gather two types of data: logs and metrics. Both require changes on the Kibana side: for logs, it is enough to edit the Kibana config file, while for metrics, we need to install a Prometheus exporter plugin.
Kibana
Logs Configuration
Let’s start with logs. First, we need to update the Kibana config to output logs in JSON format. Here is the config I am using: logs are written to the /var/log/kibana/ folder in JSON format. I have also set a policy that rolls over to a new log file once the current one exceeds 50MB. By default, Kibana keeps the last 7 rolled files and deletes older ones.
logging:
  appenders:
    file:
      type: rolling-file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
      policy:
        type: size-limit
        size: 50mb
  root:
    appenders: [file]
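With this config in place, every line in kibana.log is a standalone JSON document. For reference, an entry looks roughly like this (illustrative values; the exact field set varies by Kibana version):

{"@timestamp":"2023-06-14T09:41:21.571+00:00","message":"Kibana is now available","log":{"level":"INFO","logger":"status"},"process":{"pid":817},"ecs":{"version":"8.0.0"}}

The @timestamp and the nested JSON fields are exactly what we will extract later with the filelog receiver.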
Metrics Configuration
To expose Prometheus metrics, we just need to install one plugin, which takes a single command. Make sure to replace package_version with your Kibana version (e.g., 8.8.1 in my case).
/usr/share/kibana/bin/kibana-plugin install https://github.com/pjhampton/kibana-prometheus-exporter/releases/download/{{ package_version }}/kibanaPrometheusExporter-{{ package_version }}.zip
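Once the plugin is installed (a Kibana restart may be needed), you can sanity-check the endpoint directly; adjust the credentials, and the scheme if Kibana is behind TLS:

curl -s -u USERNAME:PASSWORD http://localhost:5601/_prometheus/metrics | head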
Kibana is now ready. Let’s move on to the OTEL configuration.
OTEL
I am using OTEL 0.79.0 because it supports dynamic index names in the Elasticsearch exporter.
Filelog Receiver
For logs, we are going to use the filelog receiver. In the config, we need to specify which files to read and where to get the timestamp. I am also going to parse the log messages into separate fields in Elasticsearch.
receivers:
  filelog/kibana:
    include:
      - /var/log/kibana/*.log
    include_file_name: false
    include_file_path: true
    operators:
      - type: json_parser
        timestamp:
          layout: '2006-01-02T15:04:05.999-07:00'
          parse_from: attributes["@timestamp"]
          layout_type: gotime
      - field: attributes["elasticsearch.index.suffix"]
        type: add
        value: kibana
      - parse_from: body
        parse_to: attributes
        type: json_parser
In this config, I first specify which files to read. Then come the operators. The first operator extracts the correct timestamp from the log entry: I specify the Go-style layout and the JSON field that holds the timestamp. Next, I add an extra field named elasticsearch.index.suffix with the value kibana; the Elasticsearch exporter will read this field and use the value in the index name. Finally, the last operator parses the JSON message and creates an attribute for every field in the JSON.
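To make the operator chain concrete, here is roughly what happens to the sample entry from earlier (a sketch; exact attribute names depend on the collector version and parser settings):

body as read from kibana.log:
  {"@timestamp":"2023-06-14T09:41:21.571+00:00","message":"Kibana is now available","log":{"level":"INFO"}}

resulting log record (roughly):
  timestamp:  2023-06-14T09:41:21.571+00:00      <- set by the first json_parser
  attributes:
    message: Kibana is now available
    log: {level: INFO}
    elasticsearch.index.suffix: kibana           <- added by the add operator
    log.file.path: /var/log/kibana/kibana.log    <- from include_file_path: true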
Metrics Receiver
Logs are ready, so let’s move to metrics. We are going to use the Prometheus receiver to scrape the Kibana endpoint.
prometheus/kibana:
  config:
    scrape_configs:
      - job_name: 'kibana'
        scrape_interval: '15s'
        metrics_path: '_prometheus/metrics'
        static_configs:
          - targets: ['localhost:5601']
        basic_auth:
          username: USERNAME
          password: PASSWORD
The config for this receiver is very simple. As the target, I use localhost because Kibana runs on the same host as the collector. The metrics path comes from the plugin documentation. If Kibana requires authentication, set your username and password under basic_auth.
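Since the prometheus receiver accepts standard Prometheus scrape configuration, you can also filter what gets stored. A sketch using metric_relabel_configs under the same scrape job (the regex is an illustrative placeholder, not part of my original setup):

      - job_name: 'kibana'
        # ... same settings as above ...
        metric_relabel_configs:
          - source_labels: [__name__]
            regex: 'kibana_.*'   # keep only metrics whose name matches; adjust to taste
            action: keep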
Before moving on to the exporters, there is one more receiver I want to add.
Hostmetrics Receiver
I am also using the hostmetrics receiver to collect instance-level metrics such as CPU and memory usage. Here is my config, which enables the scrapers I am interested in.
hostmetrics:
  collection_interval: 15s
  scrapers:
    cpu:
    disk:
    filesystem:
    load:
    memory:
    network:
    paging:
    process:
      mute_process_exe_error: true
      mute_process_io_error: true
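Each scraper emits its own set of metrics. To give a feel for what ends up in Prometheus, here are a few typical hostmetrics metric names (not an exhaustive list):

system.cpu.time          <- cpu scraper
system.memory.usage      <- memory scraper
system.filesystem.usage  <- filesystem scraper
system.network.io        <- network scraper
process.cpu.time         <- process scraper

After remote write, the dots are typically rendered as underscores on the Prometheus side (e.g., system_cpu_time).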
Exporters
Here are my exporters; I use two of them: elasticsearch for logs and prometheusremotewrite for metrics. Replace the endpoints and credentials with your own. As for logs_index, I am using the value logs-. The trailing hyphen is there because logs_dynamic_index is enabled: OTEL takes the value of the elasticsearch.index.suffix field from each log entry and appends it to the data stream name, so my final data stream is logs-kibana.
exporters:
  elasticsearch:
    endpoints:
      - ELASTIC_ENDPOINT
    logs_dynamic_index:
      enabled: true
    logs_index: logs-
    password: ELASTIC_PASSWORD
    user: ELASTIC_USER
  prometheusremotewrite:
    auth:
      authenticator: sigv4auth
    endpoint: PROMETHEUS_ENDPOINT
    resource_to_telemetry_conversion:
      enabled: true
    timeout: 30s
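One caveat: the sigv4auth authenticator referenced above is provided by a collector extension, so it has to be declared and enabled under service.extensions, otherwise the collector will fail to start. A minimal sketch (by default the region and credentials are resolved from the AWS environment; set them explicitly if they cannot be inferred):

extensions:
  sigv4auth:
    # region: us-east-1   <- uncomment and set if the region is not in your environment

service:
  extensions: [sigv4auth]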
Pipeline
Everything is ready, and we can assemble the service section: the sigv4auth extension from the note above plus the OTEL pipelines. A pipeline describes where the data comes from, which processors to apply, and which exporters to send it to.
service:
  extensions:
    - sigv4auth
  pipelines:
    metrics:
      receivers:
        - prometheus/kibana
        - hostmetrics
      processors:
        - batch
      exporters:
        - prometheusremotewrite
    logs/kibana:
      exporters:
        - elasticsearch
      processors:
        - batch
      receivers:
        - filelog/kibana
Complete OTEL Config
For reference, here is the complete OTEL config:
receivers:
  filelog/kibana:
    include:
      - /var/log/kibana/*.log
    include_file_name: false
    include_file_path: true
    operators:
      - type: json_parser
        timestamp:
          layout: "2006-01-02T15:04:05.999-07:00"
          parse_from: attributes["@timestamp"]
          layout_type: gotime
      - field: attributes["elasticsearch.index.suffix"]
        type: add
        value: kibana
      - parse_from: body
        parse_to: attributes
        type: json_parser
  prometheus/kibana:
    config:
      scrape_configs:
        - job_name: "kibana"
          scrape_interval: "15s"
          metrics_path: "_prometheus/metrics"
          static_configs:
            - targets: ["localhost:5601"]
          basic_auth:
            username: USERNAME
            password: PASSWORD
  hostmetrics:
    collection_interval: 15s
    scrapers:
      cpu:
      disk:
      filesystem:
      load:
      memory:
      network:
      paging:
      process:
        mute_process_exe_error: true
        mute_process_io_error: true
processors:
  batch:
exporters:
  elasticsearch:
    endpoints:
      - ELASTIC_ENDPOINT
    logs_dynamic_index:
      enabled: true
    logs_index: logs-
    password: ELASTIC_PASSWORD
    user: ELASTIC_USER
  prometheusremotewrite:
    auth:
      authenticator: sigv4auth
    endpoint: PROMETHEUS_ENDPOINT
    resource_to_telemetry_conversion:
      enabled: true
    timeout: 30s
extensions:
  sigv4auth:   # region/credentials come from the AWS environment by default
service:
  extensions:
    - sigv4auth
  pipelines:
    metrics:
      receivers:
        - prometheus/kibana
        - hostmetrics
      processors:
        - batch
      exporters:
        - prometheusremotewrite
    logs/kibana:
      exporters:
        - elasticsearch
      processors:
        - batch
      receivers:
        - filelog/kibana
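Save the config and point the collector at it. Assuming the contrib distribution (which ships the filelog receiver, the exporters used here, and the sigv4auth extension) is installed as otelcol-contrib with the default package layout:

otelcol-contrib --config /etc/otelcol-contrib/config.yaml

Once it is running, Kibana logs should land in the logs-kibana data stream in Elasticsearch, and the Kibana and host metrics should show up in Prometheus.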