Replication task processor shutdown improvements and start/stop unit tests #5996
Conversation
Force-pushed from f15b6f0 to 577782d.
case p.requestChan <- &request{
	token: &types.ReplicationToken{
		ShardID:                int32(p.shard.GetShardID()),
		LastRetrievedMessageID: p.lastRetrievedMessageID,
		LastProcessedMessageID: p.lastProcessedMessageID,
	},
	respChan: respChan,
}:
Previously this send was done inside sendFetchMessageRequest, which did not check p.done and therefore blocked shutdowns.
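A minimal, self-contained sketch of the pattern (the function and type names here are illustrative, not the PR's actual identifiers): the send on the request channel is raced against the done channel, so a stopped processor returns immediately instead of blocking on a channel nobody is reading.

```go
package main

import "fmt"

// fetchRequest stands in for the processor's request type (hypothetical).
type fetchRequest struct {
	shardID int32
}

// trySendFetchRequest races the channel send against done, so shutdown is
// never blocked by a pending fetch request.
func trySendFetchRequest(done <-chan struct{}, requestChan chan<- *fetchRequest, req *fetchRequest) bool {
	select {
	case <-done:
		return false // shutting down; drop the request instead of blocking
	case requestChan <- req:
		return true
	}
}

func main() {
	done := make(chan struct{})
	requestChan := make(chan *fetchRequest) // unbuffered: a plain send would block forever here

	close(done) // simulate Stop() while nothing is reading requestChan
	fmt.Println(trySendFetchRequest(done, requestChan, &fetchRequest{shardID: 1})) // prints "false" immediately
}
```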
What changed?
The replication task processor was not waiting for its underlying goroutines when stopped. Added a waitgroup to fix that and validated the fix with a unit test using goleak.VerifyNone. Also added a similar leak check to the history engine's start/stop test.
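A rough sketch of what the waitgroup-based shutdown can look like, under assumed names (taskProcessor, processorLoop, cleanupLoop are illustrative, not the PR's actual identifiers): Start registers each background goroutine with the waitgroup, and Stop closes done and then waits for all of them to exit.

```go
package replication

import "sync"

// taskProcessor is an illustrative stand-in for the replication task processor.
type taskProcessor struct {
	done chan struct{}
	wg   sync.WaitGroup
}

func (p *taskProcessor) Start() {
	p.done = make(chan struct{})
	p.wg.Add(2)
	go func() { defer p.wg.Done(); p.processorLoop() }()
	go func() { defer p.wg.Done(); p.cleanupLoop() }()
}

// Stop signals the loops via done and waits for them, so no goroutine
// outlives the processor once Stop returns.
func (p *taskProcessor) Stop() {
	close(p.done)
	p.wg.Wait()
}

func (p *taskProcessor) processorLoop() { <-p.done } // placeholder loop body
func (p *taskProcessor) cleanupLoop()   { <-p.done } // placeholder loop body
```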
Why?
Improve shutdown behavior and code coverage
How did you test it?
Unit tests (start/stop tests with a goroutine leak check)
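One way the goleak check can be wired into a start/stop test, reusing the hypothetical taskProcessor sketched above: goleak.VerifyNone fails the test if any goroutine started by Start is still running after Stop.

```go
package replication

import (
	"testing"

	"go.uber.org/goleak"
)

// TestTaskProcessorStartStop is an illustrative start/stop test: if Stop
// forgot to signal or wait for a goroutine, goleak reports the leak.
func TestTaskProcessorStartStop(t *testing.T) {
	defer goleak.VerifyNone(t)

	p := &taskProcessor{}
	p.Start()
	p.Stop()
}
```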