p2p/discover: persistent node database #793
Conversation
This is almost what I had in mind, but not quite. I didn't explain it well enough before you started. My idea going forward is as follows:
I suggest that the implementation of the DB should store metadata items as separate keys prefixed with the node ID. Example: for a single node with ID
With this scheme, deleting a node simply means deleting everything with the ID as prefix. API-wise, it could look like this:
I updated the comment above with stricter name-spacing of keys.
The database structure and details are imho perfectly reasonable and fine. One issue I'm seeing with the API proposal, however, is circular dependencies the moment you move this. Edit: One potential solution I can imagine is to have the base nodedb database for storing, querying and expiring items according to some schema, and then have client packages (e.g.
We will address that when we get there. My guess is that
I've updated the design to use the fancier db layout/schema. However, entry expiration is not yet done, nor have I spent time to even marginally test it besides passing the system tests. If you have time @fjl, take a glance to make sure it's going in the right direction. PS: Since leveldb doesn't have any querying mechanism other than iterating over the entire database, the current seed query is very sub-optimal. Ideas? Edit: I have to run, so I won't have time until Monday to finish up this new version.
return time.Time{}
}
var unix int64
if err := rlp.DecodeBytes(blob, &unix); err != nil {
Integer values don't need to use RLP. Note also that package rlp will refuse to encode or decode int64. Let's use binary.BigEndian.Uint64, or go fancy and use binary.Varint (as for the version number).
Maybe we should forgo having {fetch,store}Time and only have {fetch,store}Int64 instead. It's easy to do time.Unix(db.fetchInt64(key(...)), 0) and db.storeInt64(key(...), t.Unix()).
This looks good. 🎉
It doesn't matter how efficient the query is. If it becomes a problem, we can track the most recent nodes by maintaining an index (with a different key prefix). We could also roll the seed query into the initial expiration because it needs to scan the database anyway. Discovery startup can take up to 2 seconds because it waits for package nat to figure out the external IP address. We can run the query concurrently at that time. I doubt it'll take more than a second to scan all nodes.
leveldb supports prefixes and range sets if you need them
field = string(item[len(id):])
return id, field
}
These two don't need to be methods. They can be plain functions.
Dunno why GitHub doesn't close this diff, it's been updated.
👍
Ah, good catch with the lockup. Didn't know about the blocking behavior. Will update in 3 mins.
PTAL
This PR introduces a seed cache database containing all the nodes that passed the discovery ping-pong procedure. Whenever ethereum starts up and there are no known nodes, the first 10 seeds are retrieved (and deleted) from the cache, and are used alongside the bootstrap servers for connecting to the network.
The reason for the immediate deletion of the seed nodes is self-cleanup: all seeds are evicted when probed, but live ones get added back after the ping-pong, so stale data gradually disappears.
It might make sense to put an additional upper bound on the total number of peers we'd like to cache and always drop the oldest ones, but I'd vote to see how this mechanism behaves and polish it afterwards.
@fjl Please check if this is what you had in mind :)