As a support engineer or a developer helping somebody debug a Kafka cluster, you may receive Kafka logs from somewhere or somebody and want to analyze what happened. Instead of grepping across multiple files and thousands of lines of logs, wouldn't it be nice to have a tool that puts it all into Elasticsearch, with a Kibana instance to search through the data or slice and dice it in visualizations?
This project is one step towards that goal: it cuts down on the setup required to get there.
You will need the following installed:
- Docker Desktop
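If Docker Desktop is installed correctly, both commands below should print a version (Docker Compose ships with Docker Desktop):

```sh
docker --version
docker-compose --version
```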
A step-by-step series of commands to get the environment running:
- Clone this repo:

  ```sh
  git clone ...
  ```

- Remove the existing log files from the input folder:

  ```sh
  rm input/*
  ```

- Copy your log files into the input folder:

  ```sh
  cp <path to log files>/*.log input/
  ```

- Run docker-compose with setup.yml:

  ```sh
  docker-compose -f setup.yml up
  ```

- Run docker-compose:

  ```sh
  docker-compose up
  ```
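Once both compose files are up, a quick sanity check can confirm the stack is reachable. The ports below are the Elasticsearch and Kibana defaults; that this setup publishes them on localhost is an assumption:

```sh
# Elasticsearch should answer with cluster metadata (default port 9200)
curl -s http://localhost:9200
# Kibana should return an HTTP status code on its default port 5601
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601
```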
These steps set up everything needed to:
- Ingest the logs in input/ into the Elasticsearch instance
- Set up the ingest node pipeline to parse the Kafka logs correctly (timestamp, level, component, etc.); a sketch of what that parsing does is shown below
- Set up Kibana with the index patterns and a sample dashboard for the ingested logs
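To get a feel for what that parsing does, here is a minimal sketch using Elasticsearch's simulate-pipeline API. The grok pattern and field names below are illustrative assumptions, similar in spirit to what the Filebeat Kafka module does, not the module's actual pipeline:

```sh
# Simulate parsing a sample Kafka broker log line (pattern is an assumption)
curl -s -X POST "http://localhost:9200/_ingest/pipeline/_simulate" \
  -H 'Content-Type: application/json' -d'
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["\\[%{TIMESTAMP_ISO8601:timestamp}\\] %{LOGLEVEL:level} %{GREEDYDATA:msg} \\(%{JAVACLASS:component}\\)"]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "[2021-03-01 10:15:30,123] INFO Kafka version: 2.7.0 (kafka.server.KafkaServer)"
      }
    }
  ]
}'
```

The response shows the extracted timestamp, level, and component fields, which is what makes the logs searchable by those attributes in Kibana.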
To explore the data, go to Kibana's Discover tab (the icon that looks like a compass). The dashboards are available from the Dashboard tab (the icon that looks like a set of rectangles) by searching for the word "Kafka".
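For a quick check from the command line instead of Kibana, you can query Elasticsearch directly. The `filebeat-*` index pattern is Filebeat's default, and both it and the `log.level` field name are assumptions about this setup:

```sh
# Return up to 5 error-level entries (index and field names assumed)
curl -s "http://localhost:9200/filebeat-*/_search?q=log.level:ERROR&size=5&pretty"
```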
To tear everything down, including the data volumes, run:

```sh
docker-compose -f docker-compose.yml -f setup.yml down -v
```
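Afterwards, nothing from this setup should be left running and the data volumes should be gone; a quick way to verify:

```sh
# Neither listing should still show this project's containers or volumes
docker-compose -f docker-compose.yml -f setup.yml ps
docker volume ls
```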
This setup is built with:

- Filebeat (with the Kafka module)
- Elasticsearch
- Kibana
- Docker
This project is licensed under the MIT License - see the LICENSE.md file for details.
Acknowledgments: the Elastic Stack repo.