Huge increase in memory allocation #5712
Comments
Yeah, that is the idea. At least on our part, because we don't plan to change the …
If users wish to reduce the bucket size, they wouldn't be penalized. And for those who want a higher bucket size, are there alternatives that would allow defining the bucket size at run time?
I don't think so. Personally, I only see the …

On our end we are going to patch our fork with the …

The question is what could be the maximum smart …
This sounds good! I am not aware of any live Kademlia system using a value larger than …
Summary
We have a server dedicated to stress-testing our network, and since the recent rebase of our fork onto the libp2p master branch, this server gets OOM-killed a lot.
While investigating, our team discovered that making the Kademlia bucket size configurable (#5414) introduced a huge increase in memory allocation, which in turn increases memory fragmentation.
More precisely, it seems to be related to this code change: 417968e#diff-25d9554b592e6280d215bfc8b0bdb5a61eea3a6ca795849e6c7400414cb00c1cR288.
Previously there was no heap allocation; now there is one per request.
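For illustration, here is a minimal sketch of the pattern described above. This is not the actual libp2p code: the struct names are hypothetical and `u64` stands in for the real `PeerId` type. The point is that a buffer whose capacity is a compile-time constant can live inline, while a runtime-sized bucket forces a heap allocation every time a query starts.

```rust
// Illustration only; not the actual libp2p code. The struct names are
// hypothetical and `u64` stands in for the real PeerId type.
const K_VALUE: usize = 20; // libp2p's default Kademlia replication factor

type PeerId = u64; // placeholder

// Before #5414: the capacity is part of the type, so the buffer lives
// inline and starting a query performs no heap allocation.
struct ClosestPeersBefore {
    peers: [Option<PeerId>; K_VALUE],
}

impl ClosestPeersBefore {
    fn new() -> Self {
        Self { peers: [None; K_VALUE] }
    }
}

// After #5414: the bucket size is only known at runtime, so every query
// heap-allocates a Vec; this is the regression the issue reports.
struct ClosestPeersAfter {
    peers: Vec<PeerId>,
}

impl ClosestPeersAfter {
    fn new(bucket_size: usize) -> Self {
        // One heap allocation per get_closest_peers request.
        Self { peers: Vec::with_capacity(bucket_size) }
    }
}

fn main() {
    let _inline = ClosestPeersBefore::new(); // no allocation
    let _heap = ClosestPeersAfter::new(K_VALUE); // allocates
}
```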
Expected behavior
No heap allocation at every `get_closest_peers` request.
Actual behavior
An allocation happens at every `get_closest_peers` request.
Relevant log output
No response
Possible Solution
Maybe use a
SmallVec
with the defaultK_VALUE
in order to have the same behaviour than before ?Or maybe use const generic ? It would not allow users to define the
K_VALUE
at runtime but it would allow them to define it in there code base.Version
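A minimal sketch of the `SmallVec` idea, assuming the `smallvec` crate; the helper function is hypothetical and `u64` again stands in for `PeerId`. With inline capacity `K_VALUE`, the default configuration stays allocation-free, while a larger runtime-configured bucket size still works by spilling to the heap:

```rust
use smallvec::SmallVec;

const K_VALUE: usize = 20; // libp2p's default Kademlia bucket size

// Hypothetical helper; `u64` stands in for the real PeerId type.
fn collect_closest(bucket_size: usize) -> SmallVec<[u64; K_VALUE]> {
    let mut peers: SmallVec<[u64; K_VALUE]> = SmallVec::new();
    for id in 0..bucket_size as u64 {
        // Up to K_VALUE elements fit in the inline buffer; pushing more
        // triggers a single spill to the heap, so larger runtime bucket
        // sizes remain possible.
        peers.push(id);
    }
    peers
}

fn main() {
    let default = collect_closest(K_VALUE);
    assert!(!default.spilled()); // no heap allocation for the default size

    let bigger = collect_closest(K_VALUE * 2);
    assert!(bigger.spilled()); // larger sizes cost one allocation
}
```

The const-generics alternative would instead make the bucket size a type parameter (e.g. a hypothetical `ClosestPeersIter<const K: usize>`), which is allocation-free for any size but fixes the choice at compile time, as noted above.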
Version
No response
Would you like to work on fixing this bug?
Maybe