When we generate more log events than we can send #170
Thanks for creating this! Overall, all the sinks drop the oldest logs, whether the buffer is in memory or on file, once the maximum limit is reached. Now there are two use cases that I have come across when using the http sink.

In both of these scenarios, depending on the sink settings, the http sink can fall behind in pushing the logs to the destination because of the size and number of logs. In that case, the older logs get dropped and are lost. Is there something that can be done here to improve the throughput of sending logs over the network? Maybe using multiple threads to send in parallel?

I believe you previously mentioned that the above two use cases seem to be pretty rare, that it seems unlikely for applications to generate logs faster than they are sent to the destination, and that if they do, it's probably something the app teams are okay with. However, in my case, logs are starting to be viewed as a potential source for data analytics, so being able to prevent log loss is becoming more important.

Also, I'd like to make sure that I understand the behavior of period. The period is the time waited between the HTTP requests, right? So, if the period is 10 ms, then this would be the behavior:

| Time | Action |
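For reference, this is roughly how I picture those settings being passed (just a sketch; the parameter names here are my assumption and may differ between versions of the sink, and the endpoint is a placeholder):

```csharp
using System;
using Serilog;

// Sketch of the in-memory http sink configuration under discussion.
// Assumed parameter names (check the overloads your version exposes):
//   batchPostingLimit - max number of events sent per HTTP request
//   period            - how long the sink waits between sending batches
var log = new LoggerConfiguration()
    .WriteTo.Http(
        requestUri: "https://logs.example.com",  // placeholder endpoint
        batchPostingLimit: 1000,
        period: TimeSpan.FromMilliseconds(10))
    .CreateLogger();
```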
@vaibhavepatel, we'll continue the discussion here and keep it focused on what happens when we generate more log events than we can send over the network.
We can create the sink in three ways. The first one is `Http`, which creates a sink that holds the log events in memory until they are sent over the network. We can set a limit on the number of events we hold in memory, and if we reach this limit, new log events will be dropped. This configuration favours old log events.
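Something like this, as a minimal sketch (the parameter names are assumptions that have changed between major versions of the package; the endpoint is a placeholder):

```csharp
using Serilog;

// In-memory queue: once queueLimit is reached, *new* log events are
// dropped, so old events are favoured.
var log = new LoggerConfiguration()
    .WriteTo.Http(
        requestUri: "https://logs.example.com",  // placeholder endpoint
        queueLimit: 10000)                       // max events held in memory
    .CreateLogger();
```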
The second one is creating the sink using `DurableHttpUsingTimeRolledBuffers`. This one stores log events on disk and rotates the buffer files based on time. Each time slot has a buffer with a max size, and if that max size is reached, new log events within the same slot are dropped.
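Roughly like this (again a sketch; the path format token and parameter names are assumptions that may vary by version):

```csharp
using Serilog;

// Time-rolled buffer on disk: the {Hour} token gives one buffer file
// per hour; when a slot's file reaches the size limit, *new* events
// in that slot are dropped.
var log = new LoggerConfiguration()
    .WriteTo.DurableHttpUsingTimeRolledBuffers(
        requestUri: "https://logs.example.com",        // placeholder endpoint
        bufferPathFormat: "Buffer-{Hour}.json",
        bufferFileSizeLimitBytes: 100L * 1024 * 1024)  // max size per time slot
    .CreateLogger();
```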
The third and final one is created using `DurableHttpUsingFileSizeRolledBuffers`. This one also stores log events on disk but rotates the buffer files based on file size. If the buffer overflows, log events in old buffer files are dropped, not overwritten, in favour of new ones (see the sketch below).

Just wanted to clarify the behaviour of the different sink configurations.
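For that third configuration, roughly like this (same caveat on parameter names):

```csharp
using Serilog;

// Size-rolled buffer on disk: a new file is started at the size limit;
// when the retained file count is exceeded, the *oldest* files are
// deleted, so new events are favoured.
var log = new LoggerConfiguration()
    .WriteTo.DurableHttpUsingFileSizeRolledBuffers(
        requestUri: "https://logs.example.com",       // placeholder endpoint
        bufferBaseFileName: "Buffer",
        bufferFileSizeLimitBytes: 50L * 1024 * 1024,  // roll at ~50 MB
        retainedBufferFileCountLimit: 31)             // buffer files kept
    .CreateLogger();
```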