
OutOfMemoryException #201

Closed
harvinders opened this issue May 19, 2021 · 11 comments
Labels
bug Something isn't working

Comments

@harvinders

Describe the bug
We are using this sink to push data into Logstash, but we get an out-of-memory exception. I want to understand whether this exception can be raised when Logstash is slow to process data; we push data from many PCs. This exception also ends up crashing the application.

Is there anything in version 8.0 that could help?

2021-05-18T11:01:25.5710337Z Exception while emitting periodic batch from Serilog.Sinks.Http.Private.Sinks.HttpSink: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Text.StringBuilder.ToString()
   at System.IO.StringWriter.ToString()
   at Serilog.Sinks.Http.Private.Sinks.HttpSink.FormatPayload(IEnumerable`1 logEvents)
   at Serilog.Sinks.Http.Private.Sinks.HttpSink.<EmitBatchAsync>d__7.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Serilog.Sinks.PeriodicBatching.PeriodicBatchingSink.<OnTick>d__21.MoveNext()
2021-05-18T11:01:28.2768308Z Exception while emitting periodic batch from Serilog.Sinks.Http.Private.Sinks.HttpSink: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Text.StringBuilder.ToString()
   at System.IO.StringWriter.ToString()
   at Serilog.Sinks.Http.Private.Sinks.HttpSink.FormatPayload(IEnumerable`1 logEvents)
   at Serilog.Sinks.Http.Private.Sinks.HttpSink.<EmitBatchAsync>d__7.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Serilog.Sinks.PeriodicBatching.PeriodicBatchingSink.<OnTick>d__21.MoveNext()
2021-05-18T11:01:38.9878217Z Exception while emitting periodic batch from Serilog.Sinks.Http.Private.Sinks.HttpSink: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.Text.StringBuilder.ToString()
   at System.IO.StringWriter.ToString()
   at Serilog.Sinks.Http.Private.Sinks.HttpSink.FormatPayload(IEnumerable`1 logEvents)
   at Serilog.Sinks.Http.Private.Sinks.HttpSink.<EmitBatchAsync>d__7.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Serilog.Sinks.PeriodicBatching.PeriodicBatchingSink.<OnTick>d__21.MoveNext()

Desktop (please complete the following information):

  • OS: Windows 10 1909
  • Version: 7.2.0

Expected behaviour

It should not crash; instead, the error should be logged to SelfLog.

harvinders added the bug (Something isn't working) label May 19, 2021
@FantasticFiasco
Owner

Are you using the non-durable sink? If yes, have you configured a queue limit?

@harvinders
Author

@FantasticFiasco Yes, we are using the non-durable mode without a queue limit.

@FantasticFiasco
Owner

FantasticFiasco commented May 19, 2021

There you have it 😄

If the log server is down, or simply cannot receive log events at the pace expected by the producer, the number of log events that have been created but not yet sent to the log server will grow. At that point you'll have to ask yourself whether it is OK to drop log events. If yes, set a queue limit. If no, either increase the memory or, preferably, use the durable version of the sink, which persists log events on disk, and make sure you have sufficient disk space to store the log events during bursts or a log server outage.
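Both remedies can be sketched in logger configuration, roughly as below. This is a hedged sketch: the parameter names (`queueLimit`, `bufferBaseFileName`) and the extension method `DurableHttpUsingFileSizeRolledBuffers` assume the v7-era API of serilog-sinks-http, and the Logstash URL is a placeholder.

```csharp
using Serilog;

// Option 1: non-durable sink with a bounded in-memory queue. Events beyond
// the limit are dropped instead of accumulating until the process runs out
// of memory. (queueLimit is assumed to be the v7-era parameter name.)
var boundedLogger = new LoggerConfiguration()
    .WriteTo.Http(
        requestUri: "http://logstash.example.com:8080", // placeholder
        queueLimit: 1000)
    .CreateLogger();

// Option 2: durable sink that buffers events on disk, so they survive a
// log server outage at the cost of disk space instead of memory.
var durableLogger = new LoggerConfiguration()
    .WriteTo.DurableHttpUsingFileSizeRolledBuffers(
        requestUri: "http://logstash.example.com:8080", // placeholder
        bufferBaseFileName: "Buffer")
    .CreateLogger();
```

The trade-off mirrors the advice above: the bounded queue drops events under back-pressure, while the durable variant trades memory pressure for disk usage.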

@harvinders
Author

@FantasticFiasco Just wondering: with durability on, is it possible to find the queue size (events logged to the file but not yet successfully sent to the HTTP server)?
This would help me fine-tune our setup of Logstash and Elastic.

@FantasticFiasco
Owner

FantasticFiasco commented May 31, 2021

No, nothing like that exists. What we have is serilog-sinks-http-diagnostics, and you might also be able to observe the number of buffer files on disk. Once all log events from a buffer file have been sent, the file is deleted.
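A rough way to observe that backlog from the outside is to count and size the buffer files. This sketch assumes the buffer files live in the working directory with the base name `Buffer` and a `.json` extension; adjust the directory and search pattern to match your actual configuration.

```csharp
using System;
using System.IO;
using System.Linq;

// Rough backlog indicator for the durable sink: buffer files that still
// exist on disk hold events not yet fully shipped to the log server.
var bufferFiles = Directory.EnumerateFiles(".", "Buffer*.json").ToList();
long pendingBytes = bufferFiles.Sum(f => new FileInfo(f).Length);
Console.WriteLine($"{bufferFiles.Count} buffer file(s), {pendingBytes} bytes pending");
```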

@FantasticFiasco
Owner

Another indicator of catastrophic events is the Serilog SelfLog.
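Enabling the SelfLog is a one-liner; `Serilog.Debugging.SelfLog.Enable` is Serilog's standard hook for surfacing internal sink errors that would otherwise be swallowed.

```csharp
using System;
using Serilog.Debugging;

// Route Serilog's internal diagnostics (failed batches, HTTP errors,
// and similar) to stderr so sink problems become visible.
SelfLog.Enable(Console.Error);
```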

@AntonSmolkov
Contributor

Even though it's well documented, I think it's worth changing the queueLimit default value from infinite to some sane value, to avoid such situations in the future.

@FantasticFiasco
Owner

Yes, a limited default value would probably fix the out-of-memory problem, assuming the default is low enough to respect the memory constraints of the environment the application runs in. But what would a sane value be? Too low and we would drop log events when we really shouldn't; high enough and we risk causing the same out-of-memory problem.

Do you have a proposal, backed with sound reasoning?

@AntonSmolkov
Contributor

An infinite queue can cause an OOM kill; the application itself, and its logs still held in memory, will be lost. So I think any limit would be better.

Given that this is the non-durable implementation, and logs are not supposed to survive Logstash downtime, I would set
queueLimit = batchPostingLimit as the default, which is at most 256 MiB of memory (with the default eventBodyLimitBytes).

I understand that this might, for example, suit a microservice but not a huge monolith. But that's why it's tunable, right?
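For reference, the arithmetic behind that figure, assuming sink defaults of batchPostingLimit = 1000 events and eventBodyLimitBytes = 256 KiB (both defaults are assumptions, not stated in this thread):

```csharp
// Worst case for the proposed default queueLimit = batchPostingLimit:
// every queued event is at the per-event body limit.
const int batchPostingLimit = 1000;          // assumed default: events per batch
const long eventBodyLimitBytes = 256 * 1024; // assumed default: 256 KiB per event
long worstCaseBytes = batchPostingLimit * eventBodyLimitBytes;
// 1000 * 262,144 = 262,144,000 bytes, i.e. roughly 250 MiB
```

which is in the ballpark of the "at most 256 MiB" estimate.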

@FantasticFiasco
Owner

The unit of queueLimit is currently the number of events rather than the total size in bytes, which would be a lot better and easier to reason about.

I'll add a task to change the non-durable sink to respect a total memory size instead of a number of events; that would make great sense. As v8 isn't official yet, we should aim to include this breaking change in that release.

Thanks for the feedback!

@FantasticFiasco
Owner

Closed in favour of #203
