
filebeat, elasticsearch, docker logging loop #760

Closed
superdump opened this issue Jan 18, 2016 · 5 comments
Labels
Filebeat

Comments

@superdump

How can one avoid an infinite logging loop when using filebeat in a docker container to watch docker container logs and ship them off to elasticsearch? An example configuration:

filebeat:
  prospectors:
    -
      paths:
        - /var/lib/docker/containers/*/*.log
      encoding: utf-8
      input_type: log

output:
  elasticsearch:
    hosts: ["http://100.100.100.100:9200"]

filebeat and elasticsearch are both running in docker containers, and both also write their output to files that would be matched by filebeat. How can this be avoided?

If it helps to find a solution, there is a file called config.json in the same directory as each log file that contains information identifying whether that log file pertains to filebeat or elasticsearch.

@ruflin
Contributor

ruflin commented Jan 18, 2016

The upcoming filebeat release has an exclude_files feature, so you can specify files which should not be added: #563 This helps if you know the names of the log files. In case you don't, another option could be to write the filebeat log file somewhere else.
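A minimal sketch of what that could look like, extending the configuration from the issue description (the exclusion pattern here is hypothetical and would need to match the actual filebeat/elasticsearch container IDs):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/lib/docker/containers/*/*.log
      encoding: utf-8
      input_type: log
      # Hypothetical: regexes matching the log files of the shipping
      # containers themselves; only works if their container IDs are known.
      exclude_files: ["3221f3eec85a[0-9a-f]*.*\\.log"]
```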

It would be nice if we could move this discussion to https://discuss.elastic.co/c/beats/filebeat as we try to keep the github issues for bugs etc.

@superdump
Author

I don't think that will work unless I can look at config.json in the same directory, or talk to the docker API, to identify which containers are part of some blacklist.

The log file names are not known as they are like: /var/lib/docker/containers/3221f3eec85af46e9360d777eac2db6c9610c11b60d6bf342ad61650c4333c2b/3221f3eec85af46e9360d777eac2db6c9610c11b60d6bf342ad61650c4333c2b-json.log where that hash is the container ID.

I can't write the log to another file either, as docker doesn't have such an option, and anything involving detecting the containers and writing their log files to another directory with another process would be a rather clunky solution, I think.

A less efficient solution, which I see is also going to be in 1.1.0, is exclude_lines: #430 With that I can apply a regex that filters out all log lines matching a specific pattern, and there is information in each log line that allows me to do that. However, running a regex on each log line is rather inefficient.
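For reference, the prospector-level setting being described might look like this (the pattern is a placeholder; it assumes each line produced by the shipping components carries some identifying marker):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/lib/docker/containers/*/*.log
      input_type: log
      # Hypothetical pattern: drop any line that mentions the components
      # in the shipping chain, at the cost of a regex match per line.
      exclude_lines: ["filebeat|logstash|elasticsearch"]
```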

Any better ideas?

@tsg
Contributor

tsg commented Jan 19, 2016

How about, instead of letting Filebeat log to stdout, you configure it to write to rotating files? See the logging section; you can then choose the filename and such, and you might be able to write to the same place that the docker daemon would.
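A sketch of the logging section being referred to (the values are illustrative; check the filebeat reference for the exact option names in your version):

```yaml
logging:
  to_syslog: false
  # Write filebeat's own log to rotating files instead of stdout,
  # so it never appears under /var/lib/docker/containers/.
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760  # rotate after 10 MB
    keepfiles: 7
```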

@superdump
Author

I have done so, actually, but all of the other logging containers in the chain (filebeat -> logstash -> elasticsearch) that print out something when they receive a message also need to be handled.

@ruflin
Contributor

ruflin commented Apr 27, 2016

I think the best current solution to this is using the upcoming filtering to exclude files / lines which are coming from a beat: #451 A further improvement here could be if the beat writes its logs as JSON and the JSON input can be used, so filtering gets even easier.
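As an illustration only: the kind of event filtering discussed in #451 eventually surfaced as processors in later beats releases. A hedged sketch, where the field name and condition are assumptions rather than the syntax of any particular release:

```yaml
processors:
  # Hypothetical: drop events whose source path points at the shipping
  # components' own log files, which would break the feedback loop.
  - drop_event:
      when:
        contains:
          source: "filebeat"
```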

We discussed in the past some special cases where additional information about log files should be loaded from the path or from an additional file, like config.json above. Perhaps in the long run we can find a generic solution to make this possible.

I'm going to close this issue as I think the problem can potentially be solved with filtering, and the necessary steps beyond that are not necessarily issues which we will tackle in the near term in filebeat.

@superdump Thank you for bringing this up. It gives us deeper insights into the potential issues with filebeat and docker containers. We will take this into account in our further development.

@ruflin ruflin closed this as completed Apr 27, 2016