dot/network: create blockResponseBuf pool #1913
Conversation
```diff
@@ -46,7 +46,7 @@ const (
 	blockAnnounceID = "/block-announces/1"
 	transactionsID  = "/transactions/1"

-	maxMessageSize = 1024 * 63 // 63kb for now
+	maxMessageSize = 1024 * 1024 * 4 // 4mb
```
can you change this back? BlockResponseMessages are significantly larger than all other messages, so other messages won't use this much space, and this would increase memory usage unnecessarily, I think.
Okay, I appreciate the feedback; I wasn't quite sure of the repercussions of increasing the size that much. I will see if I can keep one buffer pool handler but with different sizes. Thanks
```diff
@@ -66,9 +66,11 @@ func (s *Service) receiveBlockResponse(stream libp2pnetwork.Stream) (*BlockRespo
 	s.blockResponseBufMu.Lock()
 	defer s.blockResponseBufMu.Unlock()

 	buf := s.blockResponseBuf
+	// buf := s.blockResponseBufPool
```
remove
I'd rather leave maxMessageSize as it is and create another buffer pool that deals with [maxBlockResponseSize]byte buffers. Unfortunately this will involve copy-pasting the existing buffer pool code and changing the size, AFAIK; maybe there's a nicer way to do that, but I'm not sure.
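For example, one way to avoid the copy-paste could be to parameterise the pool by buffer size and hand out []byte slices instead of fixed-size arrays, so one implementation can back both maxMessageSize and maxBlockResponseSize buffers. This is only a rough sketch; byteSlicePool / newByteSlicePool and their signatures are made up here, not the existing sizedBufferPool API:

```go
package network

// byteSlicePool is a hypothetical size-parameterised variant of a
// channel-based buffer pool: one implementation, multiple buffer sizes.
type byteSlicePool struct {
	size int         // length of every buffer handed out by this pool
	c    chan []byte // bounded free list of reusable buffers
}

// newByteSlicePool builds a pool of `capacity` slots, optionally
// pre-allocating `preAlloc` buffers of `size` bytes each.
func newByteSlicePool(size, preAlloc, capacity int) *byteSlicePool {
	p := &byteSlicePool{
		size: size,
		c:    make(chan []byte, capacity),
	}
	for i := 0; i < preAlloc && i < capacity; i++ {
		p.c <- make([]byte, size)
	}
	return p
}

func (p *byteSlicePool) get() []byte {
	select {
	case buf := <-p.c:
		return buf
	default:
		// pool is empty: allocate a fresh buffer of the configured size
		return make([]byte, p.size)
	}
}

func (p *byteSlicePool) put(buf []byte) {
	select {
	case p.c <- buf:
	default:
		// pool is full: drop the buffer and let it be garbage collected
	}
}
```

The trade-off compared to the current pool is that callers get a slice rather than a *[maxMessageSize]byte, but the bounded-channel and pre-allocation behaviour stays the same.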
@noot why do we have this sizedBufferPool? Can't we just use a sync.Pool? Lock-contention wise, sync.Pool scales much better than using channels or mutexes.
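Roughly what I have in mind (illustrative only — getResponseBuf / putResponseBuf are made-up names, and the constant value below is just a placeholder for the existing maxBlockResponseSize):

```go
package network

import "sync"

// maxBlockResponseSize is assumed here; in the real codebase it is an
// existing constant, and this value is only a placeholder for the sketch.
const maxBlockResponseSize = 1024 * 1024 * 4

var blockResponseBufPool = sync.Pool{
	// Storing *[]byte (a pointer) avoids an extra allocation when the
	// slice header would otherwise be copied into an interface{}.
	New: func() interface{} {
		buf := make([]byte, maxBlockResponseSize)
		return &buf
	},
}

func getResponseBuf() *[]byte {
	return blockResponseBufPool.Get().(*[]byte)
}

func putResponseBuf(buf *[]byte) {
	blockResponseBufPool.Put(buf)
}
```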
@qdm12 cause stuff gets garbage collected from the sync.Pool
@noot @zjhuber is there a point in doing this? We were using a single shared byte slice as a buffer and it was working fine, right? Unless the implementation was wrong before and multiple threads were accessing that buffer erroneously, then yes, a sizedBufferPool makes sense. What do you think?
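For reference, the single-shared-buffer pattern being discussed looks roughly like the sketch below (simplified; not the actual receiveBlockResponse implementation, which reads from the libp2p stream):

```go
package network

import (
	"io"
	"sync"
)

// Service is trimmed down to the two fields relevant to the sketch: every
// block response is read into the same slice, and the mutex ensures only
// one goroutine uses it at a time.
type Service struct {
	blockResponseBuf   []byte
	blockResponseBufMu sync.Mutex
}

func (s *Service) readBlockResponse(r io.Reader) ([]byte, error) {
	s.blockResponseBufMu.Lock()
	defer s.blockResponseBufMu.Unlock()

	n, err := r.Read(s.blockResponseBuf)
	if err != nil {
		return nil, err
	}

	// Copy the bytes out before the lock is released, since the shared
	// buffer will be overwritten by the next response.
	out := make([]byte, n)
	copy(out, s.blockResponseBuf[:n])
	return out, nil
}
```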
```go
var blockResponseBufPool *sizedBufferPool
if cfg.noPreAllocate {
	blockResponseBufPool = &sizedBufferPool{
		c: make(chan *[maxMessageSize]byte, cfg.MinPeers*3),
	}
} else {
	blockResponseBufPool = newSizedBufferPool(cfg.MinPeers*3, cfg.MaxPeers*3)
}
```
nit just some small simplifications 😉
```diff
-var blockResponseBufPool *sizedBufferPool
-if cfg.noPreAllocate {
-	blockResponseBufPool = &sizedBufferPool{
-		c: make(chan *[maxMessageSize]byte, cfg.MinPeers*3),
-	}
-} else {
-	blockResponseBufPool = newSizedBufferPool(cfg.MinPeers*3, cfg.MaxPeers*3)
-}
+poolSize := 3 * cfg.MaxPeers
+var preAlloc int
+if !cfg.noPreAllocate {
+	preAlloc = 3 * cfg.MaxPeers
+}
+blockResponseBufPool := newSizedBufferPool(preAlloc, poolSize)
```
@qdm12 honestly the code works fine as it is with the one buffer used for block responses, so I don't know if this is necessary. It would only be needed when we're syncing at the tip and have multiple requests going and want to receive them all concurrently, but I don't think that's even needed, as they aren't processed concurrently.
gonna close this for now as I don't actually think it's needed. sorry about that!
Changes
Tests
Issues
- Fixes #1858
Primary Reviewer