filebeat, elasticsearch, docker logging loop #760
The upcoming filebeat release has an exclude_files feature, so you can specify files which should not be added: #563. This helps if you know the names of the log files. In case you don't, another option could be to write the filebeat log file somewhere else. It would be nice if we could move this discussion to https://discuss.elastic.co/c/beats/filebeat as we try to keep the github issues for bugs etc.
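For the case where the file names are known, an exclude_files entry might look roughly like this (the paths and name patterns are illustrative, not taken from the issue):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/lib/docker/containers/*/*.log
      # regex patterns; any file whose path matches one of them is skipped
      # (the patterns below are illustrative placeholders)
      exclude_files: ["filebeat.*\\.log$", "elasticsearch.*\\.log$"]
```

As noted above, this only helps when the log file names are predictable, which is not the case for docker's per-container log directories.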
I don't think that will work unless I can look at config.json in the same directory or talk to the docker API to identify which containers are part of some blacklist. The log file names are not known in advance, as they are like: I can't write the log to another file either, as docker doesn't have such an option, and anything involving detecting and writing the log files to another directory with another process would be a rather clunky solution, I think. A less efficient solution, which I see is also going to be in 1.1.0, is Any better ideas?
How about, instead of letting Filebeat log to stdout, you configure it to write to rotating files? See the logging section; you can then choose the filename and such, and you might be able to write to the same place that the docker daemon would.
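A sketch of the logging section described above, redirecting the beat's own output to rotating files instead of stdout (the path and rotation values are illustrative):

```yaml
logging:
  # write filebeat's own log to files instead of stdout
  to_files: true
  to_syslog: false
  files:
    path: /var/log/filebeat   # illustrative path, outside the watched docker dirs
    name: filebeat.log
    rotateeverybytes: 10485760  # rotate after roughly 10 MB
    keepfiles: 7
  level: info
```

With this, filebeat's container no longer emits the shipped log lines to stdout, so docker never writes them into a watched json-log file.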
I have done so, actually, but all of the other logging containers in the chain (filebeat -> logstash -> elasticsearch) that print out something when they receive a message also need to be handled.
I think the best current solution to this is using the upcoming filtering to exclude files / lines which are coming from the beat: #451. A further improvement here could be if the beat writes its logs as JSON and the JSON input can be used, so filtering gets even easier.

We discussed in the past some special cases where additional information about log files should be loaded from the path or from an additional file like the config.json above. Perhaps we can find a generic solution here in the long run to make this possible.

I'm going to close this issue, as I think the problem can potentially be solved with filtering, and the necessary steps beyond that are not necessarily issues which we will tackle in the near term in filebeat.

@superdump Thank you for bringing this up. It gives us deeper insights into the potential issues with filebeat and docker containers. We will take this into account in our further development.
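Once line filtering is available, dropping the beat's own log lines could be sketched like this (the exclude pattern is a hypothetical guess at a string that would appear in filebeat's own log output, not a confirmed format):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/lib/docker/containers/*/*.log
      # drop any line that looks like it was emitted by the beat itself
      # ("filebeat" as a marker string is a hypothetical assumption)
      exclude_lines: ["filebeat"]
```

This trades efficiency for robustness: every line is still read and matched, but file names no longer need to be known in advance.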
How can one avoid an infinite logging loop when using filebeat in a docker container to watch docker container logs and ship them off to elasticsearch? An example configuration:
filebeat and elasticsearch are both running in docker containers and writing their output also to some file that would be matched by filebeat. How can this be avoided?
If it helps to find a solution, there is a file called config.json in the same directory as the log file that contains information identifying that the log file pertains to filebeat or elasticsearch.
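A minimal sketch of the kind of setup described in this question (the glob and hosts are hypothetical, not the poster's actual configuration): filebeat watches every container's json log, including the log files that filebeat's and elasticsearch's own containers produce, which closes the loop.

```yaml
filebeat:
  prospectors:
    -
      paths:
        # matches ALL container logs, including those of the
        # filebeat and elasticsearch containers themselves
        - /var/lib/docker/containers/*/*-json.log
      input_type: log

output:
  elasticsearch:
    hosts: ["elasticsearch:9200"]  # illustrative host
```

Every line filebeat or elasticsearch writes to stdout is captured by docker into a matched log file, re-read by filebeat, shipped, logged again, and so on.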