AsyncProducer: runtime: out of memory(OOM) #1372

Closed
qiangmzsx opened this issue May 12, 2019 · 2 comments
qiangmzsx commented May 12, 2019

Versions

Please specify real version numbers or git SHAs, not just "Latest" since that changes fairly regularly.
Sarama Version: 1.22.0
Kafka Version: 1.1.0
Go Version: 1.10

Configuration

What configuration values are you using for Sarama and Kafka?

conf := sarama.NewConfig()
conf.Version = sarama.V1_1_0_0
conf.Net.MaxOpenRequests = 5
conf.Net.ReadTimeout = 5 * time.Second
conf.Net.DialTimeout = 5 * time.Second
conf.Net.WriteTimeout = 5 * time.Second
conf.Metadata.Retry.Max = 0
conf.Producer.RequiredAcks = sarama.WaitForAll // -1, wait for all in-sync replicas
conf.Producer.Timeout = 5 * time.Second
conf.Producer.Flush.Bytes = 16 * 1024 * 1024 // 16 MB
conf.Producer.Flush.Frequency = 5 * time.Second
conf.Producer.Return.Errors = true
conf.Producer.Return.Successes = true
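
For context, with Return.Errors and Return.Successes both enabled, the application has to keep draining both channels; below is a minimal sketch of how a config like this is typically wired into an AsyncProducer (not the reporter's actual code — the broker address and topic are placeholders):

package main

import (
	"log"
	"time"

	"github.com/Shopify/sarama"
)

func main() {
	conf := sarama.NewConfig()
	conf.Version = sarama.V1_1_0_0
	conf.Producer.RequiredAcks = sarama.WaitForAll
	conf.Producer.Flush.Bytes = 16 * 1024 * 1024
	conf.Producer.Flush.Frequency = 5 * time.Second
	conf.Producer.Return.Errors = true
	conf.Producer.Return.Successes = true

	producer, err := sarama.NewAsyncProducer([]string{"localhost:9092"}, conf)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.AsyncClose()

	// Both channels must be read, or the producer eventually stalls.
	go func() {
		for range producer.Successes() {
		}
	}()
	go func() {
		for err := range producer.Errors() {
			log.Printf("produce error: %v", err)
		}
	}()

	producer.Input() <- &sarama.ProducerMessage{
		Topic: "example-topic",
		Value: sarama.StringEncoder("hello"),
	}
}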
Logs
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x7de4b2, 0x16)
        /export/go/src/runtime/panic.go:616 +0x81
runtime.sysMap(0xc459710000, 0x100000, 0x7f8ab7ffec00, 0xbd0b98)
        /export/go/src/runtime/mem_linux.go:216 +0x20a
runtime.(*mheap).sysAlloc(0xbb7540, 0x100000, 0x7f8a9e1c61d0)
        /export/go/src/runtime/malloc.go:470 +0xd4
runtime.(*mheap).grow(0xbb7540, 0x1, 0x0)
        /export/go/src/runtime/mheap.go:907 +0x60
runtime.(*mheap).allocSpanLocked(0xbb7540, 0x1, 0xbd0ba8, 0x7f8a9e1c61d0)
        /export/go/src/runtime/mheap.go:820 +0x301
runtime.(*mheap).alloc_m(0xbb7540, 0x1, 0x7f8ab7ff0056, 0x7f8a9e1c61d0)
        /export/go/src/runtime/mheap.go:686 +0x118
runtime.(*mheap).alloc.func1()
        /export/go/src/runtime/mheap.go:753 +0x4d
runtime.(*mheap).alloc(0xbb7540, 0x1, 0xc420010056, 0x7f8ab7ffedf0)
        /export/go/src/runtime/mheap.go:752 +0x8a
runtime.(*mcentral).grow(0xbb9e10, 0x0)
        /export/go/src/runtime/mcentral.go:232 +0x94
runtime.(*mcentral).cacheSpan(0xbb9e10, 0x1ff)
        /export/go/src/runtime/mcentral.go:106 +0x2e4
runtime.(*mcache).refill(0x7f8b215ec000, 0xc420022056)
        /export/go/src/runtime/mcache.go:123 +0x9c
runtime.(*mcache).nextFree.func1()
        /export/go/src/runtime/malloc.go:556 +0x32
runtime.systemstack(0x0)
        /export/go/src/runtime/asm_amd64.s:409 +0x79
runtime.mstart()
        /export/go/src/runtime/proc.go:1175

goroutine 58 [running]:
runtime.systemstack_switch()
        /export/go/src/runtime/asm_amd64.s:363 fp=0xc42022cb48 sp=0xc42022cb40 pc=0x45ed70
runtime.(*mcache).nextFree(0x7f8b215ec000, 0xc459708756, 0x7f8a9e92093f, 0x775202, 0x79)
        /export/go/src/runtime/malloc.go:555 +0xa9 fp=0xc42022cba0 sp=0xc42022cb48 pc=0x41a929
runtime.mallocgc(0x1000, 0x77c500, 0xc458c31f01, 0xc459441000)
        /export/go/src/runtime/malloc.go:710 +0x79f fp=0xc42022cc40 sp=0xc42022cba0 pc=0x41b27f
runtime.growslice(0x77c500, 0xc4596b8000, 0x100, 0x100, 0x101, 0xa, 0x0, 0x0)
        /export/go/src/runtime/slice.go:179 +0x14a fp=0xc42022cca8 sp=0xc42022cc40 pc=0x44b15a
gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama.(*MessageSet).addMessage(...)
        /export/data/gopath/src/gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama/message_set.go:107
gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama.(*produceSet).add(0xc4595cc440, 0xc4596e10e0, 0x0, 0xc420148188)
        /export/data/gopath/src/gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama/produce_set.go:108 +0x851 fp=0xc42022cde0 sp=0xc42022cca8 pc=0x685501
gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama.(*brokerProducer).run(0xc42020a720)
        /export/data/gopath/src/gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama/async_producer.go:744 +0x653 fp=0xc42022cfa0 sp=0xc42022cde0 pc=0x644ed3
gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama.(*brokerProducer).(gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama.run)-fm()
        /export/data/gopath/src/gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama/async_producer.go:651 +0x2a fp=0xc42022cfb8 sp=0xc42022cfa0 pc=0x69652a
gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama.withRecover(0xc4201408e0)
        /export/data/gopath/src/gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama/utils.go:45 +0x43 fp=0xc42022cfd8 sp=0xc42022cfb8 pc=0x68fda3
runtime.goexit()
        /export/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc42022cfe0 sp=0xc42022cfd8 pc=0x4618c1
created by gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama.(*asyncProducer).newBrokerProducer
        /export/data/gopath/src/gitlab.xxxxxxx.com/zeroteam/ddkafka/vendor/github.com/Shopify/sarama/async_producer.go:651 +0x1b8
Problem Description

d1egoaz commented Aug 22, 2019

Is this reproducible? Can you take a CPU/memory profile to see what's going on?

I'm also noticing that you're using Go 1.10; I think you can use the latest 1.12.x release without problems.
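
For anyone needing to capture such a profile, one common approach is to expose net/http/pprof from the producing service and pull heap/CPU profiles over HTTP (a sketch under the assumption that the service can bind an extra local port; 6060 is arbitrary):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	go func() {
		// Profiles can then be fetched with, for example:
		//   go tool pprof http://localhost:6060/debug/pprof/heap
		//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	select {} // stand-in for the real application's work
}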


d1egoaz commented Aug 22, 2019

Closing as a duplicate of #1358.

d1egoaz closed this as completed Aug 22, 2019
wanwenli added commits to wanwenli/sarama that referenced this issue on Dec 5 and Dec 6, 2024, with the same commit message as the change merged below.

dnwe pushed a commit that referenced this issue Dec 19, 2024
This commit adds an optional configuration to Sarama's retry mechanism to limit the size of the retry buffer.
The change addresses issues #1358 and #1372 by preventing unbounded memory growth when retries are backlogged or brokers are unresponsive.

Key updates:
- Added `Producer.Retry.MaxBufferLength` configuration to control the maximum number of messages stored in the retry buffer.
- Implemented logic to handle overflow scenarios, ensuring non-flagged messages are either retried or sent to the errors channel, while flagged messages are re-queued.

This enhancement provides a safeguard against OOM errors in high-throughput or unstable environments while maintaining backward compatibility (unlimited buffer by default).

Signed-off-by: Wenli Wan <[email protected]>
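
For reference, opting into the new cap on a Sarama release that includes this change looks roughly like the sketch below (the field name comes from the commit message above; the module path and the concrete limit are assumptions made for illustration):

package kafkaconf

import "github.com/IBM/sarama"

func newProducerConfig() *sarama.Config {
	conf := sarama.NewConfig()
	conf.Producer.Return.Errors = true

	// Cap how many messages the async producer may hold in its retry
	// buffer; 0 (the default) keeps the previous unbounded behaviour.
	// 100_000 is purely illustrative, not a recommendation.
	conf.Producer.Retry.MaxBufferLength = 100_000

	return conf
}

Per the commit description, messages that overflow the cap are sent to the Errors channel rather than accumulating in memory, so that channel should be drained.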