OutOfMemoryException #201
Are you using the non-durable sink? If yes, have you configured a queue limit?
@FantasticFiasco Yes, we are using non-durable mode without the queue limit.
There you have it 😄 If the log server is down, or simply cannot receive log events at the pace the producer expects, the number of log events that have been created but not yet sent to the log server will grow. At that point you'll have to ask yourself whether it is OK to drop log events. If yes, set a queue limit. If no, either increase the memory or, preferably, use the durable version of the sink, which persists log events on disk, and make sure you have sufficient disk space to store the log events during bursts or a log server outage.
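For illustration, a minimal sketch of both options. This assumes a v7-era serilog-sinks-http API; parameter names have changed between major versions, and the endpoint URL and queue limit value are placeholders:

```csharp
using Serilog;

// Option 1: non-durable sink with a queue limit. Events are buffered in
// memory and dropped once the queue is full, so a slow or unreachable
// log server cannot exhaust memory.
var inMemoryLogger = new LoggerConfiguration()
    .WriteTo.Http(
        requestUri: "http://logstash.example.com:8080", // placeholder endpoint
        queueLimit: 10_000)                             // illustrative value; tune to your memory budget
    .CreateLogger();

// Option 2: durable sink. Events are persisted to rolling buffer files on
// disk and survive bursts and log server outages, provided there is
// enough disk space.
var durableLogger = new LoggerConfiguration()
    .WriteTo.DurableHttpUsingFileSizeRolledBuffers(
        requestUri: "http://logstash.example.com:8080",
        bufferBaseFileName: "Buffer")
    .CreateLogger();
```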
@FantasticFiasco Just wondering, with durability on, is it possible to find the queue size (events logged to the file but not yet successfully sent to the HTTP server)?
No, nothing like that exists. What we have is serilog-sinks-http-diagnostics, and you might also be able to observe the number of buffer files on disk. Once all log events from a buffer file have been sent, the file is deleted.
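As a rough stand-in for a queue-size metric, you could count the buffer files yourself. A sketch only: the location and naming follow whatever `bufferBaseFileName` you configured, and the exact file pattern depends on the sink version:

```csharp
using System;
using System.IO;

// Approximate the send backlog by counting the buffer files the durable
// sink has not yet deleted. "Buffer" matches the bufferBaseFileName used
// when configuring the sink; adjust the path and pattern to your setup.
var pendingBufferFiles = Directory.GetFiles(Environment.CurrentDirectory, "Buffer*");
Console.WriteLine($"Buffer files awaiting shipment: {pendingBufferFiles.Length}");
```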
Another indicator of catastrophic events is the Serilog SelfLog.
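Enabling it is a one-liner using the standard Serilog API:

```csharp
using System;
using Serilog.Debugging;

// Route Serilog's internal diagnostics to stderr. Failed HTTP shipments
// and other sink errors that would otherwise be swallowed show up here.
SelfLog.Enable(Console.Error);
```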
Even though it's well documented, I think it's worth changing the default.
Yes, a limited default value would probably fix the out-of-memory problem, assuming that the default value is low enough to respect the memory constraints placed on the environment the application is running in. But what would a sane value be? Too low and we would drop log events when we really shouldn't. Too high and we risk causing the same out-of-memory problem. Do you have a proposal, backed with sound reasoning?
Provided that this is the non-durable implementation and logs are not supposed to survive Logstash downtime, I would set it quite low. I understand that this, for example, can suit a microservice but not a huge monolith. But that's why it is tunable, right?
The unit of the queue limit is the number of log events, not bytes. I'll add a task that we change the non-durable sink to respect a total memory size instead of a number of events. That would make great sense. As v8 isn't official yet, we should aim to include this breaking change in that release. Thanks for the feedback!
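A byte-based limit would make the memory ceiling explicit. A sketch of how such a configuration might look; this shape was hypothetical at the time of this thread (see #203 for what was actually implemented):

```csharp
using Serilog;
using Serilog.Sinks.Http;

// Hypothetical byte-based queue limit: cap the in-memory queue at a total
// payload size instead of an event count, so the memory ceiling no longer
// depends on how large individual log events happen to be.
Log.Logger = new LoggerConfiguration()
    .WriteTo.Http(
        requestUri: "http://logstash.example.com:8080", // placeholder endpoint
        queueLimitBytes: 50 * ByteSize.MB)              // illustrative 50 MB cap
    .CreateLogger();
```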
Closed in favour of #203 |
Describe the bug
We are using this sink to push data into Logstash; however, we get an OutOfMemoryException. I want to understand whether this exception can be raised when Logstash is slow in processing data; we push data from many PCs. Also, this exception ends up crashing the application.
Is there anything in version 8.0 that could help?
Expected behaviour
It should not crash; rather, the error should be logged to SelfLog.