From 5028075b7ff57cf6c2058ef014e01eca58ec941c Mon Sep 17 00:00:00 2001
From: Quorum Bot <46820074+quorumbot@users.noreply.github.com>
Date: Tue, 23 Aug 2022 15:23:26 +0100
Subject: [PATCH] [Upgrade] Go-Ethereum release v1.10.3 (#1469)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* eth: implement eth66 (#22241)
* eth/protocols/eth: split up the eth protocol handlers
* eth/protocols/eth: define eth-66 protocol messages
* eth/protocols/eth: poc implement getblockheaders on eth/66
* eth/protocols/eth: implement remaining eth-66 handlers
* eth/protocols: define handler map for eth 66
* eth/downloader: use protocol constants from eth package
* eth/protocols/eth: add ETH66 capability
* eth/downloader: tests for eth66
* eth/downloader: fix error in tests
* eth/protocols/eth: use eth66 for outgoing requests
* eth/protocols/eth: remove unused error type
* eth/protocols/eth: define protocol length
* eth/protocols/eth: fix pooled tx over eth66
* protocols/eth/handlers: revert behavioural change which caused tests to fail
* eth/downloader: fix failing test
* eth/protocols/eth: add testcases + fix flaw with header requests
* eth/protocols: change comments
* eth/protocols/eth: review fixes + fixed flaw in RequestOneHeader
* eth/protocols: documentation
* eth/protocols/eth: review concerns about types
* p2p/dnsdisc: fix hot-spin when all trees are empty (#22313)
In the random sync algorithm used by the DNS node iterator, we first pick a random tree and then perform one sync action on that tree. This happens in a loop until any node is found. If no trees contain any nodes, the iterator will enter a hot loop spinning at 100% CPU.
The fix is complicated. The iterator now checks if a meaningful sync action can be performed on any tree. If there is nothing to do, it waits for the next root record recheck time to arrive and then tries again.
Fixes #22306
* les: renamed lespay to vflux (#22347)
* cmd/utils: disable caching preimages by default
* travis, appveyor, build: bump Go to 1.16
* les: fix balance expiration (#22343)
* les/lespay/server: fix balance expiration and add test
* les: move client balances to a new db
* les: rename lespayDb to lesDb
* tests/fuzzers/les: add fuzzer for les server handler (#22282)
* les: refactored server handler
* tests/fuzzers/les: add fuzzer for les server handler
* tests, les: update les fuzzer
tests: update les fuzzer
tests/fuzzer/les: release resources
tests/fuzzer/les: pre-initialize all resources
* les: refactored server handler and fuzzer
Co-authored-by: rjl493456442
* les: clean up server handler (#22357)
* cmd/geth: add db commands stats, compact, put, get, delete (#22014)
This PR introduces:
- db.put to put a value into the database
- db.get to read a value from the database
- db.delete to delete a value from the database
- db.stats to check compaction info from the database
- db.compact to trigger a db compaction
It also moves inspectdb to db.inspect.
* internal/ethapi: reject non-replay-protected txs over RPC (#22339)
This PR prevents users from submitting transactions without EIP-155 enabled. This behaviour can be overridden by specifying the flag --rpc.allow-unprotected-txs=true.
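To make the eth66 entries above concrete: eth/66's change over eth/65 is that every request and response carries a request id, so responses can be matched to in-flight requests. A minimal standalone sketch of the wrapping idea, using illustrative stand-in types rather than geth's actual packet definitions:

    package main

    import "fmt"

    // Hypothetical stand-in for an eth/65 request payload.
    type getBlockHeaders struct {
        Origin  uint64 // block number to start from
        Amount  uint64 // maximum number of headers to return
        Skip    uint64 // blocks to skip between consecutive headers
        Reverse bool   // walk the chain backwards if true
    }

    // The eth/66 variant embeds the same payload and prefixes a request id.
    type getBlockHeaders66 struct {
        RequestId uint64
        getBlockHeaders
    }

    func main() {
        req := getBlockHeaders66{
            RequestId:       42,
            getBlockHeaders: getBlockHeaders{Origin: 4_000_000, Amount: 128},
        }
        // The matching response would carry the same RequestId back.
        fmt.Printf("req %d: %d headers from #%d\n", req.RequestId, req.Amount, req.Origin)
    }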
* accounts/abi/bind: fix up Go mod files for Go 1.16
* Dockerfile: bump to Go 1.16 base images
* travis: bump Android NDK version
* travis: bump builders to Bionic
* travis: manually install Android since Travis is stale (#22373)
* cmd/utils: remove deprecated command line flags (#22263)
This removes support for all deprecated flags except --rpc*.
* eth/protocols/snap: lower abortion and resumption logs to debug
* cmd, eth, les: enable serving light clients when non-synced (#22250)
This PR adds one more CLI flag, so that the les server can serve light clients even if the local node is not synced yet. This functionality is needed in some testing environments (e.g. hive). After launching the les server, no more blocks will be imported, so the node is always marked as "non-synced".
* les, light: improve txstatus retrieval (#22349)
Transaction unindexing will be enabled by default as of 1.10, which would cause tx status retrieval to be broken without this PR. This PR introduces a retry mechanism in TxStatus retrieval.
* all: add support for EIP-2718, EIP-2930 transactions (#21502)
This adds support for EIP-2718 typed transactions as well as EIP-2930 access list transactions (tx type 1). These EIPs are scheduled for the Berlin fork. There are very few changes to existing APIs in core/types, and several new APIs to deal with access list transactions. In particular, there are two new constructor functions for transactions: types.NewTx and types.SignNewTx. Since the canonical encoding of typed transactions is not RLP-compatible, Transaction now has new methods for encoding and decoding: MarshalBinary and UnmarshalBinary.
The existing EIP-155 signer does not support the new transaction types. All code dealing with transaction signatures should be updated to use the newer EIP-2930 signer. To make this easier for future updates, we have added new constructor functions for types.Signer: types.LatestSigner and types.LatestSignerForChainID. (A sketch of the new constructors follows this group of entries.)
This change also adds support for the YoloV3 testnet.
Co-authored-by: Martin Holst Swende
Co-authored-by: Felix Lange
Co-authored-by: Ryan Schneider
* cmd/devp2p: add eth66 test suite (#22363)
Co-authored-by: Martin Holst Swende
* les: move server pool to les/vflux/client (#22377)
* les: move serverPool to les/vflux/client
* les: add metrics
* les: moved ValueTracker inside ServerPool
* les: protect against node registration before server pool is started
* les/vflux/client: fixed tests
* les: make peer registration safe
* all: define Berlin hard fork spec
* rpc: add separate size limit for websocket (#22385)
This makes the WebSocket message size limit independent of the limit used for HTTP requests. The new limit for WebSocket messages is 15MB.
* accounts/keystore: use github.com/google/uuid (#22217)
This replaces the github.com/pborman/uuid dependency with github.com/google/uuid because the former is only a wrapper for the latter (since v1.0.0).
Co-authored-by: Felix Lange
* core/state: fix eta calculation on pruning (#22386)
* les: UDP pre-negotiation of available server capacity (#22183)
This PR implements the first one of the "lespay" UDP queries which is already useful in itself: the capacity query. The server pool is making use of this query by doing a cheap UDP query to determine whether it is worth starting the more expensive TCP connection process.
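To illustrate the typed-transaction API described in the EIP-2718/EIP-2930 entry above, here is a minimal sketch, assuming the go-ethereum core/types package from this release (values are placeholders):

    package txdemo

    import (
        "crypto/ecdsa"
        "math/big"

        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/core/types"
    )

    // signAccessListTx builds and signs a type-1 (EIP-2930) transaction with
    // the new generic constructors instead of the legacy helpers.
    func signAccessListTx(key *ecdsa.PrivateKey, chainID *big.Int, to common.Address) (*types.Transaction, error) {
        inner := &types.AccessListTx{
            ChainID:  chainID,
            Nonce:    0,
            GasPrice: big.NewInt(1_000_000_000),
            Gas:      30_000,
            To:       &to,
            Value:    big.NewInt(1),
            AccessList: types.AccessList{
                {Address: to, StorageKeys: []common.Hash{{}}},
            },
        }
        // LatestSignerForChainID returns a signer that understands typed
        // transactions; the plain EIP-155 signer would reject them.
        signer := types.LatestSignerForChainID(chainID)
        tx, err := types.SignNewTx(key, signer, inner)
        if err != nil {
            return nil, err
        }
        // Typed transactions are not plain RLP: use MarshalBinary (and
        // UnmarshalBinary) for the canonical encoding.
        if _, err := tx.MarshalBinary(); err != nil {
            return nil, err
        }
        return tx, nil
    }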
* core/rawdb: fix the transaction indexer (#22395)
* core, eth: unship EIP 2315
* core/vm/runtime: more unshipping
* cmd/geth: put allowUnsecureTx flag in RPC section (#22412)
* params: update chts (#22418)
* cmd/utils: fix txlookuplimit for archive node (#22419)
* cmd/utils: fix exclusive check for archive node
* cmd/utils: set the txlookuplimit to 0
* core/forkid, params: unset Berlin fork number (#22413)
* les: fix nodiscover option on the client side (#22422)
* cmd: retire whisper flags (#22421)
* cmd: retire whisper flags
* cmd/geth: remove whisper configs
* tests: update to latest tests (#22290)
This updates the consensus tests to commit 31d6630 and adds support for access list transactions in the test runner.
Co-authored-by: Martin Holst Swende
* params: release geth 1.10.0 stable
* params: begin v1.10.1 release cycle
* Revert "core/forkid, params: unset Berlin fork number (#22413)"
This reverts commit ba999105ef89473cfe39e5e53354f7099e67a290.
* build: fix PPA failure due to updated debsrc
* build: add support for Ubuntu Hirsute Hippo
* tests: update reference tests with 2315 removed from Berlin
* params: release Geth v1.10.1
* params: begin v1.10.2 release cycle
* core/types: reduce allocations in GasPriceCmp (#22456)
* les: fix errors in metric namespace (#22459)
* les: add trailing slash to metric namespace
* les: omit '.' in metric namespace
* cmd: extend dumpgenesis to support network flags on the cmd (#22406)
* p2p/enr: fix decoding of incomplete lists (#22484)
Given a list of fewer than two elements, DecodeRLP returned rlp.EOL, leading to issues in outer decoders.
* cmd/geth: add ancient datadir flag to snapshot subcommands (#22486)
* cmd/devp2p: better testcase failure output for ethtests (#22482)
* eth, les: properly init statedb accesslist during tracing (#22480)
* eth/state, les/state: properly init statedb accesslist when tracing, fixes #22475
* eth: review comments
* eth/tracers: fix compilation err
* eth/tracers: apply @karalabe's suggested fix
* cmd/geth, eth/downloader: remove copydb command (#22501)
* cmd/geth: remove copydb command
* eth/downloader: remove fakepeer
* tests/fuzzers: fix goroutine leak in les fuzzer (#22455)
The oss-fuzz fuzzer has been reporting some failing testcases for les. They're all spurious, and cannot reliably be reproduced. However, running them showed that there was a goroutine leak: the tests created a lot of new clients, which started an exec queue that was never torn down. This PR fixes the goroutine leak, and also a log message which was erroneously formatted.
* core/types: improve comments in new EIP-2718 code (#22402)
Responding to these comments:
https://github.com/ethereum/go-ethereum/pull/21502/files#r579010962
https://github.com/ethereum/go-ethereum/pull/21502/files#r579021565
https://github.com/ethereum/go-ethereum/pull/21502/files#r579023510
https://github.com/ethereum/go-ethereum/pull/21502/files#r578983734
* cmd/clef: docs - link to ethereum org repo (#22400)
* cmd/clef (docs): fix image background (#22399)
Flatten the image so we do not have dark text on a dark background.
* core/rawdb: fix transaction indexing/unindexing hashing error (#22457)
* core/rawdb: more verbose error logs + better hashing
* core/rawdb: add failing testcase
* core/rawdb: properly hash transactions while indexing/unindexing
* core/rawdb: exit on error + better log msg
* les: fix UDP connection query (#22451)
This PR fixes multiple issues with the UDP connection pre-negotiation feature:
- the enable condition was wrong (it checked the existence of the DiscV5 struct where it wasn't initialized yet, disabling the feature even if discv5 was enabled)
- the server pool queried already connected nodes when the discovery iterators returned them again
- servers responded positively before they were synced and really willing to accept connections
Metrics are also added on the server side that count the positive and negative replies to served connection queries.
* les: allow either full enode strings or raw hex ids in the API (#22423)
* eth/protocols/snap, eth/downloader: don't use bloom filter in snap sync
* eth/protocols/snap: fix typo (#22530)
* cmd/devp2p/internal/ethtest: return request ID in BlockHeaders response (#22508)
This PR fixes an issue with the eth66 test suite: when the test manually responds to GetBlockHeader requests during a readAndServe, it now responds with a BlockHeaders eth66 packet that includes the inbound request ID.
* ethclient: fix error handling for header test (#22514)
The wantErr field was unused, and the error returned by HeaderByNumber was not properly tested. This simplifies the error checking using errors.Is and asserts that getting an expected missing header returns ethereum.NotFound. Also adds a nil check condition for header.Number before using big.Int's Sign method.
* accounts/abi/bind: add NoSend transact option (#22446)
This adds a new option to avoid sending the transaction which is created by calling a bound contract method (see the sketch after this group of entries).
* go.mod: upgrade goleveldb to commit 64b5b1c (#22436)
This pulls in a fix for a corruption issue when the process crashes while a new manifest file is being added.
* go.mod: upgrade goupnp to commit 0ca76305 (#22479)
This pulls in a fix to skip the broadcast on interfaces which are down.
* consensus/ethash: remove unnecessary variable definition (#22512)
* cmd/devp2p: use AWS-SDK v2 (#22360)
This updates the DNS deployer to use AWS SDK v2. Migration is relatively seamless, although there were two locations that required a slightly different approach to achieve the same results. In particular, waiting for DNS change propagation is very different with SDK v2. This change also optimizes DNS updates by publishing all changes before waiting for propagation.
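A usage sketch for the new NoSend option (#22446) mentioned above: with NoSend set, calling a bound contract method signs and returns the transaction without broadcasting it, so it can be inspected or submitted later. The contract binding named here is hypothetical:

    package binddemo

    import (
        "crypto/ecdsa"
        "math/big"

        "github.com/ethereum/go-ethereum/accounts/abi/bind"
    )

    // newOfflineOpts returns transact options that create and sign the
    // transaction but skip the SendTransaction step.
    func newOfflineOpts(key *ecdsa.PrivateKey, chainID *big.Int) (*bind.TransactOpts, error) {
        opts, err := bind.NewKeyedTransactorWithChainID(key, chainID)
        if err != nil {
            return nil, err
        }
        opts.NoSend = true // sign only; do not broadcast
        return opts, nil
    }

    // With an abigen-generated binding (hypothetical `token`), the returned
    // *types.Transaction is signed but unsent:
    //
    //     tx, err := token.Transfer(opts, recipient, amount)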
* p2p/dnsdisc: fix flaw in dns size calculation (#22533)
This fixes the calculation of the tree branch factor. With the new formula, we now create at most 13 children instead of 30, ensuring the TXT record size will be below 370 bytes.
* core: fix potential race in chainIndexerTest (#22346)
* cmd/devp2p/internal/ethtest: skip eth/66 tests when v66 not supported (#22460)
* cmd/devp2p: add flag for AWS region (#22537)
* cmd/devp2p: fix error in updating the cursor when collecting records for route53 (#22538)
This PR fixes a regression introduced in #22360, when we updated to v2 of the AWS SDK, which caused the crawler to get the same first 100 results over and over and get stuck in a loop.
* cmd/devp2p: add old block announcement test to eth test suite (#22474)
Adds an old block announcement test to the eth test suite, which checks that an old block announcement isn't propagated.
* cmd/utils: fix compilation issue on openbsd (#22511)
* core: fix method comment for `txpool.requestReset` (#22543)
* accounts: eip-712 signing for ledger (#22378)
* accounts: eip-712 signing for ledger
* address review comments
* all: add read-only option to database (#22407)
* all: add read-only option to database
* all: fixes tests
* cmd/geth: migrate flags
* cmd/geth: fix the compact
* cmd/geth: fix the format
* cmd/geth: fix log
* cmd: add chain-readonly
* core: add readonly notion to freezer
* core/rawdb: add log
* core/rawdb: fix freezer close
* cmd: fix
* cmd, core: construct db
* core: update tests
* cmd/geth: check block range against chain head in export cmd (#22387)
Check the input parameters against the actual head block, and exit on error.
* core/state/snapshot: fix panic on missing parent
* internal/web3ext, node: migrate node admin API (Start|Stop)RPC->HTTP (#22461)
* internal/web3ext,node: migrate node admin API (Start|Stop)RPC->HTTP
Corresponding CLI flags --rpc have been moved to --http. This moves the admin module HTTP RPC start/stop methods to an equivalent namespace. Rel https://github.com/ethereum/go-ethereum/pull/22263
Date: 2021-03-08 08:00:11-06:00
Signed-off-by: meows
* internal/web3ext: fix startRPC/HTTP param count (4->5)
Date: 2021-03-16 06:13:23-05:00
Signed-off-by: meows
* cmd/devp2p: skip ENR field tails properly in nodeset filter (#22565)
In Geth v1.10, we changed the structure of the "les" ENR entry. As a result, the DHT crawler that creates the DNS lists no longer recognizes the les nodes, which is fixed in this commit.
* cmd/devp2p: skip ENR field tails properly in nodeset filter
* cmd/devp2p: fix tail decoder for snap as well
* les: fix tail decoding in "eth" ENR entry
* p2p: fix minor typo and remove fd parameter in checkInboundConn (#22547)
* p2p/dnsdisc: rate limit resolving before checking cache (#22566)
This makes the rate limit apply regardless of whether the node is already cached.
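The dnsdisc rate-limit change above is about ordering: the limiter is consulted before the cache, so even cached entries count against the resolve budget. A standalone sketch of that ordering using golang.org/x/time/rate (not geth's actual resolver code):

    package dnsdemo

    import (
        "context"

        "golang.org/x/time/rate"
    )

    type resolver struct {
        limiter *rate.Limiter
        cache   map[string]string
    }

    func newResolver() *resolver {
        return &resolver{
            limiter: rate.NewLimiter(rate.Limit(3), 10), // ~3 lookups/s, burst 10
            cache:   make(map[string]string),
        }
    }

    // resolve takes a limiter token *before* consulting the cache, so cached
    // entries are rate-limited too -- the ordering this fix establishes.
    func (r *resolver) resolve(ctx context.Context, name string) (string, error) {
        if err := r.limiter.Wait(ctx); err != nil {
            return "", err
        }
        if v, ok := r.cache[name]; ok {
            return v, nil
        }
        v := "TXT record for " + name // stands in for the real DNS query
        r.cache[name] = v
        return v, nil
    }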
* eth/protocols/snap: fix the flaws in the snap sync (#22553)
* eth/protocols/snap: fix snap sync
* eth/protocols/snap: fix tests
* eth: fix tiny
* eth: update tests
* eth: update tests
* core/state/snapshot: testcase for #22534
* eth/protocols/snap: fix boundary loss on full-but-proven range
* core/state/snapshot: lintfix
* eth: address comment
* eth: fix handler
Co-authored-by: Martin Holst Swende
Co-authored-by: Péter Szilágyi
* eth/tracers, core: use scopecontext in tracers, provide statedb in capturestart (#22333)
Fixes the CaptureStart API to include the EVM, thus being able to set the statedb early on. This PR also exposes the struct we used internally in the interpreter to encapsulate the contract, mem, stack, rstack, so we pass it as a single struct to the tracer, and removes the error returns on the capture methods. (A sketch of the resulting tracer shape follows this group of entries.)
* core: fix condition on header verification
* cmd/devp2p: fix comparison of TXT record value (#22572)
* cmd/devp2p: fix comparison of TXT record value
The AWS API returns quoted DNS strings, so we must encode the new value before comparing it against the existing record content.
* cmd/devp2p: add test
* cmd/devp2p: fix typo and rename val -> newValue
* eth: dump rpc gas cap and tx fee cap (#22574)
* eth/protocols, metrics, p2p: add handler performance metrics
* eth/protocols, metrics: use resetting histograms for rare packets
* eth: fix corner case in sync head determination (#21695)
This avoids synchronisation failures when the local header is ahead of the local full block.
* cmd/geth, consensus/ethash: add support for --miner.notify.full flag (#22558)
The PR implements the --miner.notify.full flag that enables full pending block notifications. When this flag is used, the block notifications sent to mining endpoints contain the complete block header JSON instead of a work package array.
Co-authored-by: AlexSSD7
Co-authored-by: Martin Holst Swende
* metrics/influxdb: don't push empty histograms, no measurement != 0
* eth/protocols/snap: add peer id and req id to the timeout logs
* cmd/devp2p: update to newer cloudflare API client (#22588)
This upgrades the cloudflare client dependency to v0.14.0. The new version changes the API because all methods now require a context parameter. This change also reduces the log level of the 'Skipping...' message to debug, following a similar change in the AWS deployer.
* core/state/pruner: move the compaction out of the pruning procedure (#22579)
The main idea behind it is that the range compaction is very expensive and can take a few hours to finish. During this long procedure, a lot of exceptions can occur, e.g.
- Geth is killed manually
- Geth is killed because of a machine crash
- etc.
In order to minimize the effect of such exceptions, the compaction is moved out of the pruning, so that even if the compaction has not finished, the pruning is regarded as done.
* eth/protocols/snap: try to prevent requests timing out
* core: add BlockGen.GetBalance method (#22589)
* cmd/puppeth: specify working directory for nodejs 15 (#22549)
* cmd/geth: add db dumptrie command (#22563)
Adds the command "geth db dumptrie", to better help investigate the trie data.
* core/vm: fix Byzantium address list (#22603)
* ethstats: avoid creating subscriptions on background goroutine (#22587)
This fixes an issue where the ethstats service could crash if geth was started and then immediately stopped due to an internal error. The cause of the crash was a nil subscription being returned by the backend, because the background goroutine creating them was scheduled after the backend had already shut down. Moving the creation of subscriptions into the Start method, which runs synchronously during startup of the node, means the returned subscriptions can never be 'nil'.
Co-authored-by: Felix Lange
Co-authored-by: Martin Holst Swende
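For the tracer change in #22333 above, the hooks now receive the EVM in CaptureStart and a single scope object bundling contract, memory and stack. A minimal op-counting tracer sketch; the signatures are paraphrased from the commit description and should be checked against this release's core/vm package:

    package tracerdemo

    import (
        "math/big"
        "time"

        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/core/vm"
    )

    // opCounter tallies opcodes. It shows the post-change hook shape: the EVM
    // is available from CaptureStart on, and contract/memory/stack arrive
    // bundled in a single *vm.ScopeContext instead of separate arguments.
    type opCounter struct {
        counts map[vm.OpCode]int
    }

    func (t *opCounter) CaptureStart(env *vm.EVM, from, to common.Address, create bool, input []byte, gas uint64, value *big.Int) {
        t.counts = make(map[vm.OpCode]int)
    }

    func (t *opCounter) CaptureState(env *vm.EVM, pc uint64, op vm.OpCode, gas, cost uint64, scope *vm.ScopeContext, rData []byte, depth int, err error) {
        // scope.Contract, scope.Memory and scope.Stack are available here too.
        t.counts[op]++
    }

    func (t *opCounter) CaptureFault(env *vm.EVM, pc uint64, op vm.OpCode, gas, cost uint64, scope *vm.ScopeContext, depth int, err error) {
    }

    func (t *opCounter) CaptureEnd(output []byte, gasUsed uint64, d time.Duration, err error) {}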
* core/state/snapshot, ethdb: track deletions more accurately (#22582)
* core/state/snapshot, ethdb: track deletions more accurately
* core/state/snapshot: don't reset the iterator, leveldb's screwy
* ethdb: don't mess with the insert batches for now
* rpc: tighter shutdown synchronization in client subscription (#22597)
This fixes a rare issue where the client subscription forwarding loop would attempt to send on the subscription's channel after Unsubscribe has returned, leading to a panic if the subscription channel was already closed by the user. Example:

    sub, _ := client.Subscribe(..., channel, ...)
    sub.Unsubscribe()
    close(channel)

The race occurred because Unsubscribe called quitWithServer to tell the forwarding loop to stop sending on sub.channel, but did not wait for the loop to actually shut down. This is fixed by adding an additional channel to track the shutdown, on which Unsubscribe now waits. (A usage sketch of the now-safe teardown order follows this group of entries.)
Fixes #22322
* all: fix miner hashRate -> hashrate on API calls
* core/state/snapshot: fix data race in diff layer (#22540)
* internal/ethapi: fix eth_chainId method (#22243)
This removes the duplicated definition of eth_chainID in package eth and updates the definition in internal/ethapi to treat chain ID as a bigint.
Co-authored-by: Felix Lange
* graphql: add support for tx types and tx access lists (#22491)
This adds support for EIP-2718 access list transactions in the GraphQL API.
Co-authored-by: Amit Shah
Co-authored-by: Felix Lange
* internal/debug: add JSON log format and rename logging flags (#22341)
This change adds support for logging JSON records when the --log.json flag is given. The --debug and --backtrace flags are deprecated and replaced by --log.debug and --log.backtrace. While changing this, it was noticed that the --memprofilerate and --blockprofilerate flags were ineffective (they were always overridden even if --pprof.memprofilerate was not set). This is also fixed.
Co-authored-by: Felix Lange
* cmd/utils: move cache sanity check to SetEthConfig (#22510)
Move the cache sanity check to the SetEthConfig function to allow the config file to load.
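With the subscription shutdown fix (#22597) above, the teardown order from the example becomes safe: once Unsubscribe returns, the forwarding loop is down and the channel can be closed. A sketch against the rpc client (method names as in the go-ethereum rpc package):

    package rpcdemo

    import (
        "context"
        "log"

        "github.com/ethereum/go-ethereum/core/types"
        "github.com/ethereum/go-ethereum/rpc"
    )

    func watchHeads(ctx context.Context, client *rpc.Client) error {
        heads := make(chan *types.Header)
        sub, err := client.EthSubscribe(ctx, heads, "newHeads")
        if err != nil {
            return err
        }
        // With this fix, Unsubscribe does not return until the forwarding
        // loop has stopped, so closing the channel afterwards is safe.
        defer func() {
            sub.Unsubscribe()
            close(heads)
        }()
        for {
            select {
            case h := <-heads:
                log.Println("new head", h.Number)
            case err := <-sub.Err():
                return err
            case <-ctx.Done():
                return ctx.Err()
            }
        }
    }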
* consensus/ethash: replace a magic number with its constant (#22618)
* les: move client pool to les/vflux/server (#22495)
* les: move client pool to les/vflux/server
* les/vflux/server: un-expose NodeBalance, remove unused fn, fix bugs
* tests/fuzzers/vflux: add ClientPool fuzzer
* les/vflux/server: fixed balance tests
* les: rebase fix
* les/vflux/server: fixed more bugs
* les/vflux/server: unexported NodeStateMachine fields and flags
* les/vflux/server: unexport all internal components and functions
* les/vflux/server: fixed priorityPool test
* les/vflux/server: polish balance
* les/vflux/server: fixed mutex locking error
* les/vflux/server: priorityPool bug fixed
* common/prque: make Prque wrap-around priority handling optional
* les/vflux/server: rename funcs, small optimizations
* les/vflux/server: fixed timeUntil
* les/vflux/server: separated balance.posValue and negValue
* les/vflux/server: polish setup
* les/vflux/server: enforce capacity curve monotonicity
* les/vflux/server: simplified requestCapacity
* les/vflux/server: requestCapacity with target range, no iterations in SetCapacity
* les/vflux/server: minor changes
* les/vflux/server: moved default factors to balanceTracker
* les/vflux/server: set inactiveFlag in priorityPool
* les/vflux/server: moved related metrics to vfs package
* les/vflux/client: make priorityPool temp state logic cleaner
* les/vflux/server: changed log.Crit to log.Error
* add vflux fuzzer to oss-fuzz
Co-authored-by: rjl493456442
* eth, les: fix tracers (#22473)
* eth, les: fix tracer
* eth: isolate live trie database in tracer
* eth: fix nil
* eth: fix
* eth, les: add checkLive param
* eth/tracer: fix
* core, eth, internal/ethapi: create access list RPC API (#22550)
(A sketch of calling the new endpoint over JSON-RPC follows this group of entries.)
* core/vm: implement AccessListTracer
* eth: implement debug.createAccessList
* core/vm: fixed nil panics in accessListTracer
* eth: better error messages for createAccessList
* eth: some fixes on CreateAccessList
* eth: allow for provided accesslists
* eth: pass accesslist by value
* eth: remove created account from accesslist
* core/vm: simplify access list tracer
* core/vm: unexport accessListTracer
* eth: return best guess if al iteration times out
* eth: return best guess if al iteration times out
* core: docstring, unexport methods
* eth: typo
* internal/ethapi: move createAccessList to eth package
* internal/ethapi: remove reexec from createAccessList
* internal/ethapi: break if al is equal to last run, not if gas is equal
* internal/web3ext: fixed arguments
* core/types: fixed equality check for accesslist
* core/types: no hardcoded vals
* core, internal: simplify access list generation, make it precise
* core/vm: fix typo
Co-authored-by: Martin Holst Swende
Co-authored-by: Péter Szilágyi
* eth: fix tracing state retrieval if requesting the non-dirty genesis
* params: update CHTs for v1.10.2
* params: release go-ethereum v1.10.2 stable
* params: begin v1.10.3 release cycle
* eth, les: drop support for eth/64, fix eth/66 tests
* accounts: documentation fixes (#22645)
* replaces `an chance` with `a chance`
* replaces `SignHashWithPassphrase` with `SignTextWithPassphrase` as there was no SignHashWithPassphrase function in the file
* cmd/geth: add db-command to inspect freezer index (#22633)
This PR makes it easier to inspect the freezer index, which could be useful to investigate things like #22111.
* cmd/faucet: support testnet flags in the faucet (#22545)
Co-authored-by: Felix Lange
Co-authored-by: Martin Holst Swende
* eth/fetcher: avoid spurious timer events at startup (#22652)
Co-authored-by: Felix Lange
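A sketch of calling the new access-list endpoint (#22550) over raw JSON-RPC; the result struct here mirrors the documented response shape and should be treated as an illustration, not an authoritative schema:

    package aldemo

    import (
        "context"

        "github.com/ethereum/go-ethereum/rpc"
    )

    // accessListResult approximates the eth_createAccessList response
    // ({"accessList": [...], "gasUsed": "0x..."}); the field set here is
    // an assumption for illustration.
    type accessListResult struct {
        AccessList []struct {
            Address     string   `json:"address"`
            StorageKeys []string `json:"storageKeys"`
        } `json:"accessList"`
        GasUsed string `json:"gasUsed"`
    }

    func createAccessList(ctx context.Context, client *rpc.Client) (*accessListResult, error) {
        call := map[string]interface{}{
            "from":  "0x00000000000000000000000000000000000000aa", // dummy addresses
            "to":    "0x00000000000000000000000000000000000000bb",
            "value": "0x1",
        }
        var result accessListResult
        // Exposed as debug.createAccessList in the console and, after the
        // move to the eth package, as eth_createAccessList over JSON-RPC.
        if err := client.CallContext(ctx, &result, "eth_createAccessList", call); err != nil {
            return nil, err
        }
        return &result, nil
    }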
* core, eth: faster snapshot generation (#22504)
* eth/protocols: persist received state segments
* core: initial implementation
* core/state/snapshot: add tests
* core, eth: updates
* eth/protocols/snapshot: count flat state size
* core/state: add metrics
* core/state/snapshot: skip unnecessary deletion
* core/state/snapshot: rename
* core/state/snapshot: use the global batch
* core/state/snapshot: add logs and fix wiping
* core/state/snapshot: fix
* core/state/snapshot: save generation progress even if the batch is empty
* core/state/snapshot: fixes
* core/state/snapshot: fix initial account range length
* core/state/snapshot: fix initial account range
* eth/protocols/snap: store flat states during the healing
* eth/protocols/snap: print logs
* core/state/snapshot: refactor (#4)
* core/state/snapshot: refactor
* core/state/snapshot: tiny fix and polish
Co-authored-by: rjl493456442
* core, eth: fixes
* core, eth: fix healing writer
* core, trie, eth: fix paths
* eth/protocols/snap: fix encoding
* eth, core: add debug log
* core/state/generate: release iterator asap (#5)
core/state/snapshot: less copy
core/state/snapshot: revert split loop
core/state/snapshot: handle storage becoming empty, improve test robustness
core/state: test modified codehash
core/state/snapshot: polish
* core/state/snapshot: optimize stats counter
* core, eth: add metric
* core/state/snapshot: update comments
* core/state/snapshot: improve tests
* core/state/snapshot: replace secure trie with standard trie
* core/state/snapshot: wrap return as the struct
* core/state/snapshot: skip wiping correct states
* core/state/snapshot: updates
* core/state/snapshot: fixes
* core/state/snapshot: fix panic due to reference flaw in closure
* core/state/snapshot: fix errors in state generation logic + fix log output
* core/state/snapshot: remove an error case
* core/state/snapshot: fix condition-check for exhausted snap state
* core/state/snapshot: use stackTrie for small tries
* core/state/snapshot: don't resolve small storage tries in vain
* core/state/snapshot: properly clean up storage of deleted accounts
* core/state/snapshot: avoid RLP-encoding in some cases + minor nitpicks
* core/state/snapshot: fix error (+testcase)
* core/state/snapshot: clean up tests a bit
* core/state/snapshot: work in progress on better tests
* core/state/snapshot: polish code
* core/state/snapshot: fix trie iteration abortion trigger
* core/state/snapshot: fixes flaws
* core/state/snapshot: remove panic
* core/state/snapshot: fix abort
* core/state/snapshot: more tests (plus failing testcase)
* core/state/snapshot: more testcases + fix for failing test
* core/state/snapshot: testcase for malformed data
* core/state/snapshot: some test nitpicks
* core/state/snapshot: improvements to logging
* core/state/snapshot: testcase to demo error in abortion
* core/state/snapshot: fix abortion
* cmd/geth: make verify-state report the root
* trie: fix failing test
* core/state/snapshot: add timer metrics
* core/state/snapshot: fix metrics
* core/state/snapshot: update tests
* eth/protocols/snap: write snapshot account even if code or state is needed
* core/state/snapshot: fix diskmore check
* core/state/snapshot: review fixes
* core/state/snapshot: improve error message
* cmd/geth: rename 'error' to 'err' in logs
* core/state/snapshot: fix some review concerns
* core/state/snapshot, eth/protocols/snap: clear snapshot marker when starting/resuming snap sync
* core: add error log
* core/state/snapshot: use proper timers for metrics collection
* core/state/snapshot: address some review concerns
* eth/protocols/snap: improved log message
* eth/protocols/snap: fix heal logs to condense infos
* core/state/snapshot: wait for generator termination before restarting
* core/state/snapshot: revert timers to counters to track total time
Co-authored-by: Martin Holst Swende
Co-authored-by: Péter Szilágyi
* core/types: drop some relic data types
* all: make logs a bit easier on the eye to digest (#22665)
* all: add thousandths separators for big numbers on log messages
* p2p/sentry: drop accidental file
* common, log: add fast number formatter
* common, eth/protocols/snap: simplify fancy num types
* log: handle nil big ints
* eth/protocols/snap: use ephemeral channels to avoid cross-sync deliveries
* core: add TestGenesisHashes and fix YoloV3 (#22559)
This adds a simple unit test checking if the hard-coded genesis hash values in package params match the actual genesis block hashes.
* log: fix formatting of big.Int (#22679)
* log: fix formatting of big.Int
The implementation of formatLogfmtBigInt had two issues: it crashed when the number was actually large enough to hit the big integer case, and it modified the big.Int while formatting it. (A standalone sketch of grouping digits from the back follows this group of entries.)
* log: don't call FormatLogfmtInt64 for int16
* log: separate from decimals back, not front
Co-authored-by: Péter Szilágyi
* les/vflux/server: fix priority cornercase causing fuzzer timeout (#22650)
* les/vflux/server: fix estimatePriority corner case
* les/vflux/server: simplify inactiveAllowance == 0 case
* trie: make stacktrie not mutate input values (#22673)
The stacktrie is a bit unintuitive, API-wise, since it mutates input values. Such behaviour is dangerous, and easy to get wrong if the calling code 'forgets' this quirk. The behaviour is fixed by this PR, so that the input values are not modified by the stacktrie. Note: just as with the Trie, the stacktrie still references the live input objects, so it's still _not_ safe to mutate the values from the call site.
* core/state/snapshot: avoid copybytes for stacktrie
* eth/catalyst: add catalyst API prototype (#22641)
This change adds the --catalyst flag, enabling an RPC API for eth2 integration. In this initial version, catalyst mode also disables all peer-to-peer networking.
Co-authored-by: Mikhail Kalinin
Co-authored-by: Felix Lange
* cmd/devp2p: add support for -limit option in nodeset filter command (#22694)
The new -limit option makes the filter operate on the top N nodes by score. This also adds ENR attribute stats in the nodeset info command. Node set commands are now documented in the README.
* cmd/devp2p: add dns nuke-route53 command (#22695)
* core: nuke legacy snapshot supporting (#22663)
* ethash: no block reward in catalyst mode (#22697)
* trie: make stacktrie support binary marshal/unmarshal (#22685)
* go.mod: upgrade gopsutil to v3.21.4 (#22693)
This fixes the OpenBSD/arm64 build.
* tests: disable blockchain tests based on general state tests (#22704)
* eth, internal: extend the TraceCall API (#22245)
Adds an optional parameter `overrides *map[common.Address]account` to the `eth_call` API in order for the caller to customize the state.
* eth/tracers, internal/ethapi: fix typos causing lint issue (#22711)
* les: fix goroutine leaks in tests (#22707)
* trie: improve the node iterator seek operation (#22470)
This change improves the efficiency of the nodeIterator seek operation. Previously, seek essentially ran the iterator forward until it found the matching node. With this change, it skips over fullnode children and avoids resolving them from the database.
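The 'separate from decimals back, not front' fix in the log entries above is about grouping digits from the least significant end. A standalone sketch of the idea (not geth's actual log.FormatLogfmt* code; it also ignores the math.MinInt64 edge case for brevity):

    package main

    import "fmt"

    // withSeparators inserts thousands separators counting from the back of
    // the digit string, so groups align with the least significant digit.
    func withSeparators(n int64) string {
        neg := n < 0
        if neg {
            n = -n // simplification: overflows for math.MinInt64
        }
        s := fmt.Sprintf("%d", n)
        for i := len(s) - 3; i > 0; i -= 3 {
            s = s[:i] + "," + s[i:]
        }
        if neg {
            s = "-" + s
        }
        return s
    }

    func main() {
        fmt.Println(withSeparators(-1234567)) // -1,234,567
    }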
* accounts/external, signer/core: add support for EIP-2930 transactions (#22585)
This adds support for signing EIP-2930 transactions with clef.
* rpc: add HTTPError type for HTTP error responses (#22677)
The new error type, returned by client operations, contains details of the response error code and response body.
Co-authored-by: Felix Lange
* eth/protocols, p2p/tracker: add support for req/rep rtt tracking (#22608)
* eth/protocols, p2p/tracker: add support for req/rep rtt tracking
* p2p/tracker: sanity cap the number of pending requests
* p2p/tracker: linter <3
* p2p/tracker: disable entire tracker if no metrics are enabled
* cmd/devp2p/internal/ethtest: run test suite as Go unit test (#22698)
This change adds a Go unit test that runs the protocol test suite against the go-ethereum implementation of the eth protocol.
* core/state/snapshot, trie: reuse dirty data instead of hitting disk when generating (#22667)
* core/state/snapshot: reuse memory data instead of hitting disk when generating
* trie: minor nitpicks wrt the resolver optimization
* core/state/snapshot, trie: use key/value store for resolver
* trie: fix linter
Co-authored-by: Péter Szilágyi
* cmd/devp2p/internal/ethtest: add more tx propagation tests (#22630)
This adds a test for large tx announcement messages, as well as a test to check that announced tx hashes are requested by the node.
* p2p/discover: improve discv5 handling of IPv4-in-IPv6 addresses (#22703)
When receiving a PING from an IPv4 address over IPv6, the implementation sent back an IPv4-in-IPv6 address. This change makes it reflect the IPv4 address.
* core: remove old conversion to shuffle leveldb blocks into ancients
* core/rawdb: fix datarace in freezer (#22728)
The Append/truncate operations were racy. When a datafile reaches 2GB, a new file is needed. For this operation, we require a writelock, which is not needed in the 99.99% of all cases where the data does fit in the current head file. This transition from readlock to writelock was incorrect, and as the readlock was released, a truncate operation could slip in between and truncate the data. This would have been fine; however, the Append operation continued writing as if no truncation had occurred, e.g. writing item 5 where item 0 should reside. This PR changes the behaviour so that when we run into the situation that a new file is needed, the operation aborts and retries, this time with a writelock. The outcome of the situation described above, running on this PR, would instead be that the Append operation exits with a failure. (A standalone sketch of this abort-and-retry locking scheme follows this group of entries.)
* les: polish code (#22625)
* les: polish code
* les/vflux/server: fixes
* les: fix lint
* build: upgrade to golangci-lint v1.39.0 (#22696)
* build: upgrade to golangci-lint v1.39.0
* consensus/ethash: fix go vet warning regarding reflect.SliceHeader
* eth/catalyst: fix lint issue
* consensus/ethash: fix bug in memoryMapFile
* cmd/puppeth: add support for authentication via ssh agent (#22634)
* build: upgrade -dlgo version to Go 1.16.3 (#22746)
* core/vm: make gas cost reporting to tracers correct (#22702)
Previously, the makeCallVariantGasCallEIP2929 function charged the cold account access cost directly, leading to an incorrect gas cost passed to the tracer from the main execution loop. This change still temporarily charges the cost (to allow for an accurate calculation of the available gas for the call), but then afterwards refunds it and instead returns the correct total gas cost to be then properly charged in the main loop.
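A standalone sketch of the abort-and-retry locking scheme from the freezer fix (#22728) above; this is illustrative, not geth's actual freezer_table code:

    package freezerdemo

    import (
        "sync"
        "sync/atomic"
    )

    // table shows the scheme the fix adopts: appends run under the read lock
    // (the write offset advances atomically), and when the head file is full
    // the append aborts, retakes the mutex as a writer, re-checks, rolls the
    // file over, and retries.
    type table struct {
        mu      sync.RWMutex
        offset  int64 // write position in the head file, advanced atomically
        maxSize int64 // size at which a new head file is started
        files   int   // rollover count; stands in for real file handling
    }

    func (t *table) append(size int64) {
        for {
            t.mu.RLock()
            if end := atomic.AddInt64(&t.offset, size); end <= t.maxSize {
                // Fast path: the item fits in the current head file; the
                // real code would write the payload at end-size here.
                t.mu.RUnlock()
                return
            }
            // Doesn't fit: undo the reservation and upgrade to a writelock.
            atomic.AddInt64(&t.offset, -size)
            t.mu.RUnlock()

            t.mu.Lock()
            // Re-check under the writelock: another writer may already have
            // rolled the file over (or a truncate may have intervened) while
            // no lock was held.
            if t.offset+size > t.maxSize {
                t.files++    // start a new head file
                t.offset = 0 // real code would open the next data file here
            }
            t.mu.Unlock()
            // Retry the append from the top.
        }
    }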
* eth/protocols/snap: generate storage trie from full dirty snap data (#22668)
* eth/protocols/snap: generate storage trie from full dirty snap data
* eth/protocols/snap: get rid of some more dead code
* eth/protocols/snap: less frequent logs, also log during trie generation
* eth/protocols/snap: implement dirty account range stack-hashing
* eth/protocols/snap: don't loop on account trie generation
* eth/protocols/snap: fix account format in trie
* core, eth, ethdb: glue snap packets together, but not chunks
* eth/protocols/snap: print completion log for snap phase
* eth/protocols/snap: extended tests
* eth/protocols/snap: make testcase pass
* eth/protocols/snap: fix account stacktrie commit without defer
* ethdb: fix key counts on reset
* eth/protocols: fix typos
* eth/protocols/snap: make better use of delivered data (#44)
* eth/protocols/snap: make better use of delivered data
* squashme
* eth/protocols/snap: reduce chunking
* squashme
* eth/protocols/snap: reduce chunking further
* eth/protocols/snap: break out hash range calculations (a standalone sketch of the idea follows this group of entries)
* eth/protocols/snap: use sort.Search instead of looping
* eth/protocols/snap: prevent crash on storage response with no keys
* eth/protocols/snap: nitpicks all around
* eth/protocols/snap: clear heal need on 1-chunk storage completion
* eth/protocols/snap: fix range chunker, add tests
Co-authored-by: Péter Szilágyi
* trie: fix test API error
* eth/protocols/snap: fix some further linter issues
* eth/protocols/snap: fix accidental batch reuse
Co-authored-by: Martin Holst Swende
* p2p/tracker: properly clean up fulfilled requests
* p2p/tracker: only reschedule wake if previous didn't run
* cmd/devp2p, eth/protocols/eth: fix tests + make sanity checks earlier (#22749)
* eth/gasprice: improve stability of estimated price (#22722)
This PR makes the gas price oracle ignore transactions priced at `<=1 wei`.
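For the 'break out hash range calculations' item above, the core idea is dividing the 256-bit keyspace into even chunks so account and storage ranges can be requested piecewise (geth's own helper lives in eth/protocols/snap/range.go, added by this patch). A standalone sketch:

    package main

    import (
        "fmt"
        "math/big"
    )

    // splitHashRange divides the 256-bit hash space into n roughly equal
    // chunks and returns each chunk's starting point. Illustrative only.
    func splitHashRange(n int64) []*big.Int {
        space := new(big.Int).Lsh(big.NewInt(1), 256) // 2^256 hashes in total
        step := new(big.Int).Div(space, big.NewInt(n))
        starts := make([]*big.Int, n)
        for i := int64(0); i < n; i++ {
            starts[i] = new(big.Int).Mul(step, big.NewInt(i))
        }
        return starts
    }

    func main() {
        for i, s := range splitHashRange(4) {
            fmt.Printf("chunk %d starts at %064x\n", i, s)
        }
    }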
* tests/fuzzers: crypto/bn256 and crypto/bls12381 tests against gnark-crypto (#22755)
Add more cross-fuzzers to fuzz bls with gnark versus geth's own bls12-381 library.
* les, tests: fix les clientpool (#22756)
* les, tests: fix les clientpool
* tests: disable debug mode
* les: polish code
* eth/protocols/snap: lower the packet size to avoid overloading link
* cmd/devp2p: fix flaky SameRequestID test (#22754)
* trie: remove redundant returns + use stacktrie where applicable (#22760)
* trie: add benchmark for proofless range
* trie: remove unused returns + use stacktrie
* core, eth, ethdb, trie: simplify range proofs
* eth: restore eth_hashrate API endpoint
* catalyst: check if block exists in assemble-block call with unknown parent-hash (#22770)
* add myself as code owner for catalyst (#22778)
* docs: fix docstring on read head block (#22776)
* evm: remove unused errors left after EIP-2315 removal (#22767)
* github: add note about screenshots in issue template (#22764)
* core/types: add license header (#22781)
* core/vm: fix typo in comment (#22785)
* core: remove unused else branch in reorg (#22783)
* core/vm: replace repeated string with variable in tests (#22774)
* core: fix typo in comment (#22773)
* README.md: update commands table, add note about web3.js version (#22748)
* eth/filters: fix comment on PublicFilterAPI timeoutLoop (#22782)
* core/state: remove toAddr helper in tests (#22772)
* core, eth: abort snapshot generation on snap sync and resume later
* eth/protocols/snap: use storage batch, not account batch in st task
* cmd/devp2p: fix flaky tests in CI (#22757)
This PR fixes a couple of issues in the eth test suite that caused flakiness when run in the CI.
* core/vm: clean up contract creation error handling (#22766)
Do not keep a separate flag for the "max code size exceeded" case, but instead assign the appropriate error for it sooner.
* core/vm: fix interpreter comments (#22797)
* Fix interpreter comment
* Fix comment
* params: remove dependency on crypto (#22788)
* params: remove dependency on crypto
Package params should not depend on package crypto because building crypto requires cgo. Since build/ci.go needs package params to get the go-ethereum version number, C code must be compiled in order to run the build tool, which is annoying for certain cross-compilation setups.
* params: add SectionHead
* cmd/utils: don't crash on nonexistent datadir (#22738)
* eth: don't print db upgrade warning on db init
* cmd/utils: use eth DNS tree for snap discovery (#22808)
This removes auto-configuration of the snap.*.ethdisco.net DNS discovery tree. Since measurements have shown that > 75% of nodes in all.*.ethdisco.net support snap, we have decided to retire the dedicated index for snap and just use the eth tree instead. The dial iterators of eth and snap now use the same DNS tree in the default configuration, so both iterators should use the same DNS discovery client instance. This ensures that the record cache and rate limit are shared. Records will not be requested multiple times.
While testing the change, I noticed that duplicate DNS requests do happen even when the client instance is shared. This is because the two iterators request the tree root, link tree root, and first levels of the tree in lockstep. To avoid this problem, the change also adds a singleflight.Group instance in the client. When one iterator attempts to resolve an entry which is already being resolved, the singleflight object waits for the existing resolve call to finish and returns the entry to both places.
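The singleflight behaviour described above, in isolation: concurrent lookups for the same key share one execution. A sketch with golang.org/x/sync/singleflight, the dependency the 'go mod tidy' entry below adds:

    package main

    import (
        "fmt"

        "golang.org/x/sync/singleflight"
    )

    func main() {
        var group singleflight.Group
        lookups := 0

        resolve := func(name string) (string, error) {
            // Concurrent calls with the same key share one execution of fn;
            // every caller gets the same result once it finishes.
            v, err, _ := group.Do(name, func() (interface{}, error) {
                lookups++ // the real client would query DNS here
                return "TXT record for " + name, nil
            })
            if err != nil {
                return "", err
            }
            return v.(string), nil
        }

        rec, _ := resolve("le.example.org")
        fmt.Println(rec, "| lookups:", lookups)
    }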
* build: improve cross compilation setup (#22804)
This PR cleans up the CI build system and fixes a couple of issues.
- The go tool launcher code has been moved to internal/build. With the new toolchain functions, the environment of the host Go (i.e. the one that built ci.go) and the target Go (i.e. the toolchain downloaded by -dlgo) are isolated more strictly. This is important to make cross compilation and -dlgo work correctly in more cases.
- The -dlgo option now skips the download and uses the host Go if the running Go version matches dlgoVersion exactly.
- The 'test' command now supports -dlgo, -cc and -arch. Running unit tests with foreign GOARCH is occasionally useful. For example, it can be used to run 32-bit tests on Windows. It can also be used to run darwin/amd64 tests on darwin/arm64 using Rosetta 2.
- The 'aar', 'xcode' and 'xgo' commands now use a slightly different method to install external tools. They previously used `go get`, but this comes with the annoying side effect of modifying go.mod. They now use `go install` instead, which is the recommended way of installing tools without modifying the local module.
- The old build warning about outdated Go version has been removed because we're much better at keeping backwards compatibility now.
* go.mod: go mod tidy (#22814)
This updates go.mod for the addition of golang.org/x/sync.
* build: fix iOS framework build (#22813)
This fixes a regression introduced in #22804.
* appveyor.yml: upgrade to VisualStudio 2019 image (#22811)
* build: fix windows installer build for NSIS v3.05 (#22821)
With the update to a newer AppVeyor build image, creating the Windows installer no longer worked because of a string quoting error in EnvVarUpdate.nsh. This applies the fix recommended in https://stackoverflow.com/questions/62081765.
* cmd/devp2p/internal/ethtest: send simultaneous requests on one connection (#22801)
This changes the SimultaneousRequests test to send the requests from the same connection, as it doesn't really make sense to test whether a node can respond to two requests with different request IDs from separate connections.
* params: go-ethereum v1.10.3 stable * Merge branch 'master' into upgrade/go-ethereum/v1.10.3-2022805130510 * fix merge * restore geth 1.10.3 modifications * fix argument * fix catalyst flag * fix diff * fix diff * fix diff * Update cmd/geth/chaincmd.go * fix github action with deprecated ubuntu 18 to 20 Signed-off-by: meows Co-authored-by: Martin Holst Swende Co-authored-by: Felix Lange Co-authored-by: Felföldi Zsolt Co-authored-by: Péter Szilágyi Co-authored-by: rjl493456442 Co-authored-by: Marius van der Wijden Co-authored-by: rene <41963722+renaynay@users.noreply.github.com> Co-authored-by: lightclient <14004106+lightclient@users.noreply.github.com> Co-authored-by: Ryan Schneider Co-authored-by: michael1011 Co-authored-by: ligi Co-authored-by: wuff1996 <33193253+wuff1996@users.noreply.github.com> Co-authored-by: meowsbits Co-authored-by: Martin Redmond <21436+reds@users.noreply.github.com> Co-authored-by: ucwong Co-authored-by: jacksoom Co-authored-by: Quest Henkart Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com> Co-authored-by: Tobias Hildebrandt <79341166+tobias-hildebrandt@users.noreply.github.com> Co-authored-by: Derek Chiang Co-authored-by: MrChico Co-authored-by: Chen Quan Co-authored-by: Zou Guangxian Co-authored-by: AlexSSD7 Co-authored-by: nebojsa94 Co-authored-by: Edgar Aroutiounian Co-authored-by: piersy Co-authored-by: AmitBRD <60668103+AmitBRD@users.noreply.github.com> Co-authored-by: Amit Shah Co-authored-by: Peter Simard Co-authored-by: Evolution404 <35091674+Evolution404@users.noreply.github.com> Co-authored-by: Balaji Shetty Pachai <32358081+balajipachai@users.noreply.github.com> Co-authored-by: Mudit Gupta Co-authored-by: xD AKA Rapper King Of cn background diablo & revelations <33193253+r1cs@users.noreply.github.com> Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com> Co-authored-by: Mikhail Kalinin Co-authored-by: ryanc414 Co-authored-by: Nishant Das Co-authored-by: Gregory Markou <16929357+GregTheGreek@users.noreply.github.com> Co-authored-by: Gautam Botrel Co-authored-by: Diederik Loerakker Co-authored-by: aaronbuchwald Co-authored-by: Paweł Bylica Co-authored-by: baptiste-b-pegasys <85155432+baptiste-b-pegasys@users.noreply.github.com> Co-authored-by: Baptiste Boussemart --- .github/CODEOWNERS | 1 + .github/ISSUE_TEMPLATE/bug.md | 2 + Makefile | 11 +- accounts/accounts.go | 4 +- accounts/external/backend.go | 14 + accounts/url.go | 2 +- appveyor.yml | 40 +- build/checksums.txt | 58 +- build/ci.go | 222 ++---- build/nsis.envvarupdate.nsh | 2 +- cmd/devp2p/README.md | 27 +- cmd/devp2p/dns_route53.go | 55 +- cmd/devp2p/dnscmd.go | 35 +- cmd/devp2p/internal/ethtest/chain.go | 29 +- cmd/devp2p/internal/ethtest/eth66_suite.go | 225 ++++-- .../internal/ethtest/eth66_suiteHelpers.go | 147 ++-- cmd/devp2p/internal/ethtest/large.go | 2 +- cmd/devp2p/internal/ethtest/suite.go | 128 ++-- cmd/devp2p/internal/ethtest/suite_test.go | 107 +++ cmd/devp2p/internal/ethtest/transaction.go | 158 +++- cmd/devp2p/internal/ethtest/types.go | 28 +- cmd/devp2p/nodeset.go | 28 +- cmd/devp2p/nodesetcmd.go | 71 +- cmd/evm/README.md | 6 +- cmd/evm/testdata/8/readme.md | 6 +- cmd/faucet/faucet.go | 30 +- cmd/geth/config.go | 20 +- cmd/geth/config_test.go | 2 +- cmd/geth/dbcmd.go | 58 ++ cmd/geth/main.go | 1 + cmd/geth/snapshot.go | 45 +- cmd/geth/usage.go | 1 + cmd/puppeth/ssh.go | 62 +- cmd/utils/flags.go | 18 +- common/types.go | 2 +- consensus/ethash/consensus.go | 32 +- consensus/ethash/ethash.go | 13 +- consensus/istanbul/ibft/core/events.go | 
2 +- consensus/istanbul/types.go | 4 +- core/blockchain.go | 110 +-- core/blockchain_snapshot_test.go | 320 +------- core/blockchain_test.go | 4 +- core/chain_indexer.go | 2 +- core/genesis_test.go | 35 + core/headerchain.go | 2 +- core/mps/multiple_psr.go | 4 +- core/rawdb/accessors_chain.go | 7 +- core/rawdb/accessors_chain_quorum.go | 11 + core/rawdb/accessors_snapshot.go | 20 + core/rawdb/database.go | 6 +- core/rawdb/database_test.go | 17 + core/rawdb/freezer.go | 2 +- core/rawdb/freezer_table.go | 111 +-- core/rawdb/freezer_table_test.go | 62 +- core/rawdb/schema.go | 7 +- core/state/snapshot/conversion.go | 2 +- core/state/snapshot/generate.go | 686 +++++++++++++---- core/state/snapshot/generate_test.go | 649 +++++++++++++++- core/state/snapshot/journal.go | 152 +--- core/state/snapshot/snapshot.go | 105 ++- core/state/snapshot/wipe.go | 32 +- core/state/state_test.go | 30 +- core/state/statedb.go | 2 +- core/state/statedb_test.go | 6 +- core/state/sync.go | 24 +- core/state/sync_test.go | 12 +- core/types/block.go | 32 - core/types/receipt_test.go | 19 +- core/types/transaction.go | 19 +- core/types/transaction_marshalling.go | 16 + core/vm/errors.go | 10 +- core/vm/evm.go | 21 +- core/vm/instructions.go | 14 +- core/vm/instructions_test.go | 69 +- core/vm/interpreter.go | 4 +- core/vm/operations_acl.go | 24 +- core/vm/runtime/runtime_test.go | 80 ++ eth/api.go | 5 + eth/backend.go | 14 +- eth/catalyst/api.go | 322 ++++++++ eth/catalyst/api_test.go | 241 ++++++ eth/catalyst/api_types.go | 70 ++ eth/catalyst/gen_blockparams.go | 46 ++ eth/catalyst/gen_ed.go | 117 +++ eth/discovery.go | 11 - eth/downloader/downloader.go | 15 +- eth/downloader/downloader_test.go | 333 ++++---- eth/downloader/peer.go | 8 +- eth/downloader/statesync.go | 2 +- eth/fetcher/block_fetcher.go | 8 +- eth/filters/api.go | 8 +- eth/filters/filter_system_test.go | 9 +- eth/gasprice/gasprice.go | 3 + eth/handler.go | 11 +- eth/handler_eth_test.go | 54 +- eth/protocols/eth/handler.go | 53 +- eth/protocols/eth/handler_test.go | 115 ++- eth/protocols/eth/handlers.go | 16 +- eth/protocols/eth/handshake_test.go | 4 +- eth/protocols/eth/peer.go | 57 +- eth/protocols/eth/protocol.go | 5 +- eth/protocols/eth/tracker.go | 26 + eth/protocols/snap/handler.go | 16 +- eth/protocols/snap/peer.go | 7 + eth/protocols/snap/range.go | 80 ++ eth/protocols/snap/range_test.go | 143 ++++ eth/protocols/snap/sync.go | 716 ++++++++++-------- eth/protocols/snap/sync_test.go | 113 ++- eth/protocols/snap/tracker.go | 26 + eth/state_accessor.go | 5 - eth/sync.go | 4 +- eth/sync_test.go | 4 +- eth/tracers/api.go | 40 +- eth/tracers/api_test.go | 179 ++++- ethdb/batch.go | 25 + go.mod | 15 +- go.sum | 236 ++---- internal/build/env.go | 3 + internal/build/gotool.go | 149 ++++ internal/build/util.go | 18 +- internal/ethapi/api.go | 65 +- internal/web3ext/web3ext.go | 6 - les/client.go | 2 +- les/client_handler.go | 3 +- les/flowcontrol/control.go | 3 + les/metrics.go | 1 - les/peer.go | 1 - les/server.go | 16 +- les/server_handler.go | 22 +- les/test_helper.go | 18 +- les/vflux/server/balance.go | 10 +- les/vflux/server/balance_test.go | 6 +- les/vflux/server/clientpool.go | 16 +- les/vflux/server/clientpool_test.go | 10 +- les/vflux/server/prioritypool.go | 123 +-- log/format.go | 118 ++- log/format_test.go | 95 +++ node/node.go | 8 + oss-fuzz.sh | 5 + p2p/discover/v5_udp.go | 9 +- p2p/discover/v5_udp_test.go | 32 + p2p/dnsdisc/client.go | 51 +- p2p/metrics.go | 3 - p2p/tracker/tracker.go | 205 +++++ params/config.go | 48 +- params/version.go | 
2 +- rpc/errors.go | 29 + rpc/http.go | 21 +- rpc/http_test.go | 39 + rpc/types.go | 12 - signer/core/api.go | 8 + signer/core/cliui.go | 12 + signer/core/types.go | 35 +- tests/block_test.go | 2 +- tests/fuzzers/bls12381/bls12381_fuzz.go | 244 ++++++ .../{bls_fuzzer.go => precompile_fuzzer.go} | 0 tests/fuzzers/bn256/bn256_fuzz.go | 48 +- tests/fuzzers/rangeproof/rangeproof-fuzzer.go | 21 +- tests/fuzzers/vflux/clientpool-fuzzer.go | 76 +- tests/fuzzers/vflux/debug/main.go | 3 + trie/committer.go | 4 +- trie/database.go | 3 - trie/iterator.go | 191 ++++- trie/iterator_test.go | 81 ++ trie/notary.go | 57 -- trie/proof.go | 99 +-- trie/proof_test.go | 96 ++- trie/stacktrie.go | 110 ++- trie/stacktrie_test.go | 96 +++ trie/sync.go | 9 +- trie/trie.go | 17 +- trie/trie_test.go | 6 +- 172 files changed, 7203 insertions(+), 2752 deletions(-) create mode 100644 cmd/devp2p/internal/ethtest/suite_test.go create mode 100644 core/rawdb/accessors_chain_quorum.go create mode 100644 core/rawdb/database_test.go create mode 100644 eth/catalyst/api.go create mode 100644 eth/catalyst/api_test.go create mode 100644 eth/catalyst/api_types.go create mode 100644 eth/catalyst/gen_blockparams.go create mode 100644 eth/catalyst/gen_ed.go create mode 100644 eth/protocols/eth/tracker.go create mode 100644 eth/protocols/snap/range.go create mode 100644 eth/protocols/snap/range_test.go create mode 100644 eth/protocols/snap/tracker.go create mode 100644 internal/build/gotool.go create mode 100644 log/format_test.go create mode 100644 p2p/tracker/tracker.go create mode 100644 tests/fuzzers/bls12381/bls12381_fuzz.go rename tests/fuzzers/bls12381/{bls_fuzzer.go => precompile_fuzzer.go} (100%) delete mode 100644 trie/notary.go diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 58c1a4a62e..2015604e64 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -9,6 +9,7 @@ cmd/puppeth @karalabe consensus @karalabe core/ @karalabe @holiman @rjl493456442 eth/ @karalabe @holiman @rjl493456442 +eth/catalyst/ @gballet graphql/ @gballet les/ @zsfelfoldi @rjl493456442 light/ @zsfelfoldi @rjl493456442 diff --git a/.github/ISSUE_TEMPLATE/bug.md b/.github/ISSUE_TEMPLATE/bug.md index c5a3654bde..2aa2c48a60 100644 --- a/.github/ISSUE_TEMPLATE/bug.md +++ b/.github/ISSUE_TEMPLATE/bug.md @@ -26,3 +26,5 @@ Commit hash : (if `develop`) ```` [backtrace] ```` + +When submitting logs: please submit them as text and not screenshots. \ No newline at end of file diff --git a/Makefile b/Makefile index 66401f747a..fe5f2a5c79 100644 --- a/Makefile +++ b/Makefile @@ -31,7 +31,7 @@ android: @echo "Import \"$(GOBIN)/geth.aar\" to use the library." @echo "Import \"$(GOBIN)/geth-sources.jar\" to add javadocs" @echo "For more info see https://stackoverflow.com/questions/20994336/android-studio-how-to-attach-javadoc" - + ios: $(GORUN) build/ci.go xcode --local @echo "Done building." @@ -51,12 +51,11 @@ clean: # You need to put $GOBIN (or $GOPATH/bin) in your PATH to use 'go generate'. 
devtools: - env GOBIN= go get -u golang.org/x/tools/cmd/stringer - env GOBIN= go get -u github.com/kevinburke/go-bindata/go-bindata - env GOBIN= go get -u github.com/fjl/gencodec - env GOBIN= go get -u github.com/golang/protobuf/protoc-gen-go + env GOBIN= go install golang.org/x/tools/cmd/stringer@latest + env GOBIN= go install github.com/kevinburke/go-bindata/go-bindata@latest + env GOBIN= go install github.com/fjl/gencodec@latest + env GOBIN= go install github.com/golang/protobuf/protoc-gen-go@latest env GOBIN= go install ./cmd/abigen - @type "npm" 2> /dev/null || echo 'Please install node.js and npm' @type "solc" 2> /dev/null || echo 'Please install solc' @type "protoc" 2> /dev/null || echo 'Please install protoc' diff --git a/accounts/accounts.go b/accounts/accounts.go index 08a1f0f2b1..7178578091 100644 --- a/accounts/accounts.go +++ b/accounts/accounts.go @@ -113,7 +113,7 @@ type Wallet interface { SignData(account Account, mimeType string, data []byte) ([]byte, error) // SignDataWithPassphrase is identical to SignData, but also takes a password - // NOTE: there's an chance that an erroneous call might mistake the two strings, and + // NOTE: there's a chance that an erroneous call might mistake the two strings, and // supply password in the mimetype field, or vice versa. Thus, an implementation // should never echo the mimetype or return the mimetype in the error-response SignDataWithPassphrase(account Account, passphrase, mimeType string, data []byte) ([]byte, error) @@ -127,7 +127,7 @@ type Wallet interface { // a password to decrypt the account, or a PIN code o verify the transaction), // an AuthNeededError instance will be returned, containing infos for the user // about which fields or actions are needed. The user may retry by providing - // the needed details via SignHashWithPassphrase, or by other means (e.g. unlock + // the needed details via SignTextWithPassphrase, or by other means (e.g. unlock // the account in a keystore). // // This method should return the signature in 'canonical' format, with v 0 or 1 diff --git a/accounts/external/backend.go b/accounts/external/backend.go index ae6ae6ba78..23c9ad3886 100644 --- a/accounts/external/backend.go +++ b/accounts/external/backend.go @@ -214,6 +214,20 @@ func (api *ExternalSigner) SignTx(account accounts.Account, tx *types.Transactio // Quorum IsPrivate: tx.IsPrivate(), } + // We should request the default chain id that we're operating with + // (the chain we're executing on) + if chainID != nil { + args.ChainID = (*hexutil.Big)(chainID) + } + // However, if the user asked for a particular chain id, then we should + // use that instead. + if tx.Type() != types.LegacyTxType && tx.ChainId() != nil { + args.ChainID = (*hexutil.Big)(tx.ChainId()) + } + if tx.Type() == types.AccessListTxType { + accessList := tx.AccessList() + args.AccessList = &accessList + } var res signTransactionResult if err := api.client.Call(&res, "account_signTransaction", args); err != nil { return nil, err diff --git a/accounts/url.go b/accounts/url.go index a5add10216..12a84414a0 100644 --- a/accounts/url.go +++ b/accounts/url.go @@ -64,7 +64,7 @@ func (u URL) String() string { func (u URL) TerminalString() string { url := u.String() if len(url) > 32 { - return url[:31] + "…" + return url[:31] + ".." } return url } diff --git a/appveyor.yml b/appveyor.yml index 052280be15..a72163382a 100644 --- a/appveyor.yml +++ b/appveyor.yml @@ -1,41 +1,29 @@ -os: Visual Studio 2015 - -# Clone directly into GOPATH. 
-clone_folder: C:\gopath\src\github.com\ethereum\go-ethereum +os: Visual Studio 2019 clone_depth: 5 version: "{branch}.{build}" environment: - global: - GO111MODULE: on - GOPATH: C:\gopath - CC: gcc.exe matrix: + # We use gcc from MSYS2 because it is the most recent compiler version available on + # AppVeyor. Note: gcc.exe only works properly if the corresponding bin/ directory is + # contained in PATH. - GETH_ARCH: amd64 - MSYS2_ARCH: x86_64 - MSYS2_BITS: 64 - MSYSTEM: MINGW64 - PATH: C:\msys64\mingw64\bin\;C:\Program Files (x86)\NSIS\;%PATH% + GETH_CC: C:\msys64\mingw64\bin\gcc.exe + PATH: C:\msys64\mingw64\bin;C:\Program Files (x86)\NSIS\;%PATH% - GETH_ARCH: 386 - MSYS2_ARCH: i686 - MSYS2_BITS: 32 - MSYSTEM: MINGW32 - PATH: C:\msys64\mingw32\bin\;C:\Program Files (x86)\NSIS\;%PATH% + GETH_CC: C:\msys64\mingw32\bin\gcc.exe + PATH: C:\msys64\mingw32\bin;C:\Program Files (x86)\NSIS\;%PATH% install: - - git submodule update --init - - rmdir C:\go /s /q - - appveyor DownloadFile https://dl.google.com/go/go1.16.windows-%GETH_ARCH%.zip - - 7z x go1.16.windows-%GETH_ARCH%.zip -y -oC:\ > NUL + - git submodule update --init --depth 1 - go version - - gcc --version + - "%GETH_CC% --version" build_script: - - go run build\ci.go install -dlgo + - go run build\ci.go install -dlgo -arch %GETH_ARCH% -cc %GETH_CC% after_build: - - go run build\ci.go archive -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds - - go run build\ci.go nsis -signer WINDOWS_SIGNING_KEY -upload gethstore/builds + - go run build\ci.go archive -arch %GETH_ARCH% -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds + - go run build\ci.go nsis -arch %GETH_ARCH% -signer WINDOWS_SIGNING_KEY -upload gethstore/builds test_script: - - set CGO_ENABLED=1 - - go run build\ci.go test -coverage + - go run build\ci.go test -dlgo -arch %GETH_ARCH% -cc %GETH_CC% -coverage diff --git a/build/checksums.txt b/build/checksums.txt index d5bd4d0cd3..0e33e07f2c 100644 --- a/build/checksums.txt +++ b/build/checksums.txt @@ -1,31 +1,33 @@ # This file contains sha256 checksums of optional build dependencies. 
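The sha256 entries that follow are what `build.MustLoadChecksums("build/checksums.txt")` feeds into download verification in the `build/ci.go` hunks later in this patch. A rough standalone sketch of that kind of check, using a hypothetical `verifySHA256` helper rather than go-ethereum's actual `internal/build` code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifySHA256 hashes the file at path and compares the digest against
// the expected hex string from a checksums.txt-style entry.
func verifySHA256(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Digest taken from the go1.16.3.src.tar.gz entry below.
	err := verifySHA256("go1.16.3.src.tar.gz",
		"b298d29de9236ca47a023e382313bcc2d2eed31dfa706b60a04103ce83a71a25")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum OK")
}
```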
-7688063d55656105898f323d90a79a39c378d86fe89ae192eb3b7fc46347c95a go1.16.src.tar.gz -6000a9522975d116bf76044967d7e69e04e982e9625330d9a539a8b45395f9a8 go1.16.darwin-amd64.tar.gz -ea435a1ac6d497b03e367fdfb74b33e961d813883468080f6e239b3b03bea6aa go1.16.linux-386.tar.gz -013a489ebb3e24ef3d915abe5b94c3286c070dfe0818d5bca8108f1d6e8440d2 go1.16.linux-amd64.tar.gz -3770f7eb22d05e25fbee8fb53c2a4e897da043eb83c69b9a14f8d98562cd8098 go1.16.linux-arm64.tar.gz -d1d9404b1dbd77afa2bdc70934e10fbfcf7d785c372efc29462bb7d83d0a32fd go1.16.linux-armv6l.tar.gz -481492a17d42193d471b93b7a06da3555331bd833b76336afc87be820c48933f go1.16.windows-386.zip -5cc88fa506b3d5c453c54c3ea218fc8dd05d7362ae1de15bb67986b72089ce93 go1.16.windows-amd64.zip -d7d6c70b05a7c2f68b48aab5ab8cb5116b8444c9ddad131673b152e7cff7c726 go1.16.freebsd-386.tar.gz -40b03216f6945fb6883a50604fc7f409a83f62171607229a9c598e701e684f8a go1.16.freebsd-amd64.tar.gz -27a1aaa988e930b7932ce459c8a63ad5b3333b3a06b016d87ff289f2a11aacd6 go1.16.linux-ppc64le.tar.gz -be4c9e4e2cf058efc4e3eb013a760cb989ddc4362f111950c990d1c63b27ccbe go1.16.linux-s390x.tar.gz +b298d29de9236ca47a023e382313bcc2d2eed31dfa706b60a04103ce83a71a25 go1.16.3.src.tar.gz +6bb1cf421f8abc2a9a4e39140b7397cdae6aca3e8d36dcff39a1a77f4f1170ac go1.16.3.darwin-amd64.tar.gz +f4e96bbcd5d2d1942f5b55d9e4ab19564da4fad192012f6d7b0b9b055ba4208f go1.16.3.darwin-arm64.tar.gz +48b2d1481db756c88c18b1f064dbfc3e265ce4a775a23177ca17e25d13a24c5d go1.16.3.linux-386.tar.gz +951a3c7c6ce4e56ad883f97d9db74d3d6d80d5fec77455c6ada6c1f7ac4776d2 go1.16.3.linux-amd64.tar.gz +566b1d6f17d2bc4ad5f81486f0df44f3088c3ed47a3bec4099d8ed9939e90d5d go1.16.3.linux-arm64.tar.gz +0dae30385e3564a557dac7f12a63eedc73543e6da0f6017990e214ce8cc8797c go1.16.3.linux-armv6l.tar.gz +a3c16e1531bf9726f47911c4a9ed7cb665a6207a51c44f10ebad4db63b4bcc5a go1.16.3.windows-386.zip +a4400345135b36cb7942e52bbaf978b66814738b855eeff8de879a09fd99de7f go1.16.3.windows-amd64.zip +31ecd11d497684fa8b0f01ba784590c4c760943665fdc4fe0adaa1405c71736c go1.16.3.freebsd-386.tar.gz +ffbd920b309e62e807457b11d80e8c17fefe3ef6de423aaba4b1e270b2ca4c3d go1.16.3.freebsd-amd64.tar.gz +5eb046bbbbc7fe2591846a4303884cb5a01abb903e3e61e33459affe7874e811 go1.16.3.linux-ppc64le.tar.gz +3e8bd7bde533a73fd6fa75b5288678ef397e76c198cfb26b8ae086035383b1cf go1.16.3.linux-s390x.tar.gz -d998a84eea42f2271aca792a7b027ca5c1edfcba229e8e5a844c9ac3f336df35 golangci-lint-1.27.0-linux-armv7.tar.gz -bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint.exe-1.27.0-windows-amd64.zip -bf781f05b0d393b4bf0a327d9e62926949a4f14d7774d950c4e009fc766ed1d4 golangci-lint-1.27.0-windows-amd64.zip -0e2a57d6ba709440d3ed018ef1037465fa010ed02595829092860e5cf863042e golangci-lint-1.27.0-freebsd-386.tar.gz -90205fc42ab5ed0096413e790d88ac9b4ed60f4c47e576d13dc0660f7ed4b013 golangci-lint-1.27.0-linux-arm64.tar.gz -8d345e4e88520e21c113d81978e89ad77fc5b13bfdf20e5bca86b83fc4261272 golangci-lint-1.27.0-linux-amd64.tar.gz -cc619634a77f18dc73df2a0725be13116d64328dc35131ca1737a850d6f76a59 golangci-lint-1.27.0-freebsd-armv7.tar.gz -fe683583cfc9eeec83e498c0d6159d87b5e1919dbe4b6c3b3913089642906069 golangci-lint-1.27.0-linux-s390x.tar.gz -058f5579bee75bdaacbaf75b75e1369f7ad877fd8b3b145aed17a17545de913e golangci-lint-1.27.0-freebsd-armv6.tar.gz -38e1e3dadbe3f56ab62b4de82ee0b88e8fad966d8dfd740a26ef94c2edef9818 golangci-lint-1.27.0-linux-armv6.tar.gz -071b34af5516f4e1ddcaea6011e18208f4f043e1af8ba21eeccad4585cb3d095 golangci-lint.exe-1.27.0-windows-386.zip -071b34af5516f4e1ddcaea6011e18208f4f043e1af8ba21eeccad4585cb3d095 
golangci-lint-1.27.0-windows-386.zip -5f37e2b33914ecddb7cad38186ef4ec61d88172fc04f930fa0267c91151ff306 golangci-lint-1.27.0-linux-386.tar.gz -4d94cfb51fdebeb205f1d5a349ac2b683c30591c5150708073c1c329e15965f0 golangci-lint-1.27.0-freebsd-amd64.tar.gz -52572ba8ff07d5169c2365d3de3fec26dc55a97522094d13d1596199580fa281 golangci-lint-1.27.0-linux-ppc64le.tar.gz -3fb1a1683a29c6c0a8cd76135f62b606fbdd538d5a7aeab94af1af70ffdc2fd4 golangci-lint-1.27.0-darwin-amd64.tar.gz +7e9a47ab540aa3e8472fbf8120d28bed3b9d9cf625b955818e8bc69628d7187c golangci-lint-1.39.0-darwin-amd64.tar.gz +574daa2c9c299b01672a6daeb1873b5f12e413cdb6dc0e30f2ff163956778064 golangci-lint-1.39.0-darwin-arm64.tar.gz +6225f7014987324ab78e9b511f294e3f25be013728283c33918c67c8576d543e golangci-lint-1.39.0-freebsd-386.tar.gz +6b3e76e1e5eaf0159411c8e2727f8d533989d3bb19f10e9caa6e0b9619ee267d golangci-lint-1.39.0-freebsd-amd64.tar.gz +a301cacfff87ed9b00313d95278533c25a4527a06b040a17d969b4b7e1b8a90d golangci-lint-1.39.0-freebsd-armv7.tar.gz +25bfd96a29c3112f508d5e4fc860dbad7afce657233c343acfa20715717d51e7 golangci-lint-1.39.0-freebsd-armv6.tar.gz +9687e4ff15545cfc722b0e46107a94195166a505023b48a316579af25ad09505 golangci-lint-1.39.0-linux-armv7.tar.gz +a7fa7ab2bfc99cbe5e5bcbf5684f5a997f920afbbe2f253d2feb1001d5e3c8b3 golangci-lint-1.39.0-linux-armv6.tar.gz +c8f9634115beddb4ed9129c1f7ecd4c97c99d07aeef33e3707234097eeb51b7b golangci-lint-1.39.0-linux-mips64le.tar.gz +d1234c213b74751f1af413302dde0e9a6d4d29aecef034af7abb07dc1b6e887f golangci-lint-1.39.0-linux-arm64.tar.gz +df25d9267168323b163147acb823ab0215a8a3bb6898a4a9320afdfedde66817 golangci-lint-1.39.0-linux-386.tar.gz +1767e75fba357b7651b1a796d38453558f371c60af805505ec99e166908c04b5 golangci-lint-1.39.0-linux-ppc64le.tar.gz +25fd75bf3186b3d930ecae10185689968fd18fd8fa6f9f555d6beb04348c20f6 golangci-lint-1.39.0-linux-s390x.tar.gz +3a73aa7468087caa62673c8adea99b4e4dff846dc72707222db85f8679b40cbf golangci-lint-1.39.0-linux-amd64.tar.gz +578caceccf81739bda67dbfec52816709d03608c6878888ecdc0e186a094a41b golangci-lint-1.39.0-linux-mips64.tar.gz +494b66ba0e32c8ddf6c4f6b1d05729b110900f6017eda943057e43598c17d7a8 golangci-lint-1.39.0-windows-386.zip +52ec2e13a3cbb47147244dff8cfc35103563deb76e0459133058086fc35fb2c7 golangci-lint-1.39.0-windows-amd64.zip diff --git a/build/ci.go b/build/ci.go index 69d1e92b9d..3acd3691f1 100644 --- a/build/ci.go +++ b/build/ci.go @@ -152,7 +152,7 @@ var ( // This is the version of go that will be downloaded by // // go run ci.go install -dlgo - dlgoVersion = "1.16" + dlgoVersion = "1.16.3" ) var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin")) @@ -208,58 +208,25 @@ func doInstall(cmdline []string) { cc = flag.String("cc", "", "C compiler to cross build with") ) flag.CommandLine.Parse(cmdline) - env := build.Env() - - // Check local Go version. People regularly open issues about compilation - // failure with outdated Go. This should save them the trouble. - if !strings.Contains(runtime.Version(), "devel") { - // Figure out the minor version number since we can't textually compare (1.10 < 1.9) - var minor int - fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor) - if minor < 13 { - log.Println("You have Go version", runtime.Version()) - log.Println("go-ethereum requires at least Go version 1.13 and cannot") - log.Println("be compiled with an earlier version. Please upgrade your Go installation.") - os.Exit(1) - } - } - - // Choose which go command we're going to use. 
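For reference, the minimum-version check deleted in this hunk parsed the minor number out of `runtime.Version()` with `fmt.Sscanf`, because version strings cannot be compared textually (1.10 sorts before 1.9). Roughly, as a standalone program:

```go
package main

import (
	"fmt"
	"log"
	"runtime"
	"strings"
)

func main() {
	// Development builds report versions like "devel +abc123"; skip those.
	if strings.Contains(runtime.Version(), "devel") {
		return
	}
	// Extract the minor version number, e.g. 16 from "go1.16.3".
	var minor int
	fmt.Sscanf(strings.TrimPrefix(runtime.Version(), "go1."), "%d", &minor)
	if minor < 13 {
		log.Fatalf("Go %s is too old; go-ethereum requires at least Go 1.13", runtime.Version())
	}
}
```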
- var gobuild *exec.Cmd - if !*dlgo { - // Default behavior: use the go version which runs ci.go right now. - gobuild = goTool("build") - } else { - // Download of Go requested. This is for build environments where the - // installed version is too old and cannot be upgraded easily. - cachedir := filepath.Join("build", "cache") - goroot := downloadGo(runtime.GOARCH, runtime.GOOS, cachedir) - gobuild = localGoTool(goroot, "build") - } - // Configure environment for cross build. - if *arch != "" || *arch != runtime.GOARCH { - gobuild.Env = append(gobuild.Env, "CGO_ENABLED=1") - gobuild.Env = append(gobuild.Env, "GOARCH="+*arch) + // Configure the toolchain. + tc := build.GoToolchain{GOARCH: *arch, CC: *cc} + if *dlgo { + csdb := build.MustLoadChecksums("build/checksums.txt") + tc.Root = build.DownloadGo(csdb, dlgoVersion) } - // Configure C compiler. - if *cc != "" { - gobuild.Env = append(gobuild.Env, "CC="+*cc) - } else if os.Getenv("CC") != "" { - gobuild.Env = append(gobuild.Env, "CC="+os.Getenv("CC")) - } + // Configure the build. + env := build.Env() + gobuild := tc.Go("build", buildFlags(env)...) // arm64 CI builders are memory-constrained and can't handle concurrent builds, // better disable it. This check isn't the best, it should probably // check for something in env instead. - if runtime.GOARCH == "arm64" { + if env.CI && runtime.GOARCH == "arm64" { gobuild.Args = append(gobuild.Args, "-p", "1") } - // Put the default settings in. - gobuild.Args = append(gobuild.Args, buildFlags(env)...) - // We use -trimpath to avoid leaking local paths into the built executables. gobuild.Args = append(gobuild.Args, "-trimpath") @@ -301,57 +268,30 @@ func buildFlags(env build.Environment) (flags []string) { return flags } -// goTool returns the go tool. This uses the Go version which runs ci.go. -func goTool(subcmd string, args ...string) *exec.Cmd { - cmd := build.GoTool(subcmd, args...) - goToolSetEnv(cmd) - return cmd -} - -// localGoTool returns the go tool from the given GOROOT. -func localGoTool(goroot string, subcmd string, args ...string) *exec.Cmd { - gotool := filepath.Join(goroot, "bin", "go") - cmd := exec.Command(gotool, subcmd) - goToolSetEnv(cmd) - cmd.Env = append(cmd.Env, "GOROOT="+goroot) - cmd.Args = append(cmd.Args, args...) - return cmd -} - -// goToolSetEnv forwards the build environment to the go tool. -func goToolSetEnv(cmd *exec.Cmd) { - cmd.Env = append(cmd.Env, "GOBIN="+GOBIN) - for _, e := range os.Environ() { - if strings.HasPrefix(e, "GOBIN=") || strings.HasPrefix(e, "CC=") { - continue - } - cmd.Env = append(cmd.Env, e) - } -} - // Running The Tests // // "tests" also includes static analysis tools such as vet. func doTest(cmdline []string) { - coverage := flag.Bool("coverage", false, "Whether to record code coverage") - verbose := flag.Bool("v", false, "Whether to log verbosely") + var ( + dlgo = flag.Bool("dlgo", false, "Download Go and build with it") + arch = flag.String("arch", "", "Run tests for given architecture") + cc = flag.String("cc", "", "Sets C compiler binary") + coverage = flag.Bool("coverage", false, "Whether to record code coverage") + verbose = flag.Bool("v", false, "Whether to log verbosely") + ) flag.CommandLine.Parse(cmdline) - env := build.Env() - packages := []string{"./..."} - if len(flag.CommandLine.Args()) > 0 { - packages = flag.CommandLine.Args() + // Configure the toolchain. 
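The `build.GoToolchain` type that replaces the hand-rolled environment juggling above is added by this patch in `internal/build/gotool.go` (see the diffstat). Its exported surface (`Root`, `GOARCH`, `CC`, plus the `Go` and `Install` methods) is visible in these hunks; the body below is only a guess at minimal internals, showing how GOARCH/CC selection collapses into one place:

```go
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

// toolchain mirrors the shape of build.GoToolchain as used in ci.go:
// an optional GOROOT plus cross-build settings that become environment
// variables on every `go` invocation. (Hypothetical sketch, not the
// real internal/build implementation.)
type toolchain struct {
	Root   string // non-empty when a downloaded Go should be used
	GOARCH string
	CC     string
}

// Go builds an *exec.Cmd for `go <args...>` under the toolchain's settings.
func (tc *toolchain) Go(args ...string) *exec.Cmd {
	gobin := "go"
	if tc.Root != "" {
		gobin = filepath.Join(tc.Root, "bin", "go")
	}
	cmd := exec.Command(gobin, args...)
	cmd.Env = os.Environ()
	if tc.GOARCH != "" {
		cmd.Env = append(cmd.Env, "CGO_ENABLED=1", "GOARCH="+tc.GOARCH)
	}
	if tc.CC != "" {
		cmd.Env = append(cmd.Env, "CC="+tc.CC)
	}
	return cmd
}

func main() {
	tc := &toolchain{GOARCH: "arm64", CC: "gcc"}
	cmd := tc.Go("build", "-trimpath", "./...")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // error handling elided in this sketch
}
```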
+ tc := &build.GoToolchain{GOARCH: *arch, CC: *cc} + if *dlgo { + csdb := build.MustLoadChecksums("build/checksums.txt") + tc.Root = build.DownloadGo(csdb, dlgoVersion) } - // Quorum - // Ignore not Quorum related packages to accelerate build - packages = build.ExpandPackagesNoVendor(packages) - packages = build.IgnorePackages(packages) + gotest := tc.Go("test") - // Run the actual tests. // Test a single package at a time. CI builders are slow // and some tests run into timeouts under load. - gotest := goTool("test", buildFlags(env)...) gotest.Args = append(gotest.Args, "-p", "1", "--short") if *coverage { gotest.Args = append(gotest.Args, "-covermode=atomic", "-cover") @@ -360,6 +300,14 @@ func doTest(cmdline []string) { gotest.Args = append(gotest.Args, "-v") } + packages := []string{"./..."} + if len(flag.CommandLine.Args()) > 0 { + packages = flag.CommandLine.Args() + } + // Quorum + // Ignore not Quorum related packages to accelerate build + packages = build.ExpandPackagesNoVendor(tc, packages) + packages = build.IgnorePackages(packages) gotest.Args = append(gotest.Args, packages...) build.MustRun(gotest) } @@ -383,7 +331,7 @@ func doLint(cmdline []string) { // downloadLinter downloads and unpacks golangci-lint. func downloadLinter(cachedir string) string { - const version = "1.27.0" + const version = "1.39.0" csdb := build.MustLoadChecksums("build/checksums.txt") base := fmt.Sprintf("golangci-lint-%s-%s-%s", version, runtime.GOOS, runtime.GOARCH) @@ -419,8 +367,7 @@ func doArchive(cmdline []string) { } var ( - env = build.Env() - + env = build.Env() basegeth = archiveBasename(*arch, params.ArchiveVersion(env.Commit)) geth = "geth-" + basegeth + ext alltools = "geth-alltools-" + basegeth + ext @@ -489,6 +436,11 @@ func archiveUpload(archive string, blobstore string, signer string, signifyVar s return err } } + if signifyVar != "" { + if err := build.AzureBlobstoreUpload(archive+".sig", filepath.Base(archive+".sig"), auth); err != nil { + return err + } + } } return nil } @@ -496,15 +448,15 @@ func archiveUpload(archive string, blobstore string, signer string, signifyVar s // skips archiving for some build configurations. func maybeSkipArchive(env build.Environment) { if env.IsPullRequest { - log.Printf("skipping because this is a PR build") + log.Printf("skipping archive creation because this is a PR build") os.Exit(0) } if env.IsCronJob { - log.Printf("skipping because this is a cron job") + log.Printf("skipping archive creation because this is a cron job") os.Exit(0) } if env.Branch != "master" && !strings.HasPrefix(env.Tag, "v1.") { - log.Printf("skipping because branch %q, tag %q is not on the whitelist", env.Branch, env.Tag) + log.Printf("skipping archive creation because branch %q, tag %q is not on the whitelist", env.Branch, env.Tag) os.Exit(0) } } @@ -522,6 +474,7 @@ func doDebianSource(cmdline []string) { flag.CommandLine.Parse(cmdline) *workdir = makeWorkdir(*workdir) env := build.Env() + tc := new(build.GoToolchain) maybeSkipArchive(env) // Import the signing key. 
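doTest now funnels everything through `tc.Go("test")`, running one package at a time because CI builders are slow and some tests hit timeouts under load. Stripped of the toolchain plumbing, the invocation it assembles amounts to something like:

```go
package main

import (
	"os"
	"os/exec"
)

// runTests mirrors the test invocation assembled above: one package at a
// time (-p 1) and short mode. (Sketch only; ci.go builds this through
// the GoToolchain rather than calling `go` directly.)
func runTests(packages ...string) error {
	args := append([]string{"test", "-p", "1", "--short"}, packages...)
	cmd := exec.Command("go", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := runTests("./..."); err != nil {
		os.Exit(1)
	}
}
```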
@@ -535,12 +488,12 @@ func doDebianSource(cmdline []string) { gobundle := downloadGoSources(*cachedir) // Download all the dependencies needed to build the sources and run the ci script - srcdepfetch := goTool("mod", "download") - srcdepfetch.Env = append(os.Environ(), "GOPATH="+filepath.Join(*workdir, "modgopath")) + srcdepfetch := tc.Go("mod", "download") + srcdepfetch.Env = append(srcdepfetch.Env, "GOPATH="+filepath.Join(*workdir, "modgopath")) build.MustRun(srcdepfetch) - cidepfetch := goTool("run", "./build/ci.go") - cidepfetch.Env = append(os.Environ(), "GOPATH="+filepath.Join(*workdir, "modgopath")) + cidepfetch := tc.Go("run", "./build/ci.go") + cidepfetch.Env = append(cidepfetch.Env, "GOPATH="+filepath.Join(*workdir, "modgopath")) cidepfetch.Run() // Command fails, don't care, we only need the deps to start it // Create Debian packages and upload them. @@ -596,41 +549,6 @@ func downloadGoSources(cachedir string) string { return dst } -// downloadGo downloads the Go binary distribution and unpacks it into a temporary -// directory. It returns the GOROOT of the unpacked toolchain. -func downloadGo(goarch, goos, cachedir string) string { - if goarch == "arm" { - goarch = "armv6l" - } - - csdb := build.MustLoadChecksums("build/checksums.txt") - file := fmt.Sprintf("go%s.%s-%s", dlgoVersion, goos, goarch) - if goos == "windows" { - file += ".zip" - } else { - file += ".tar.gz" - } - url := "https://golang.org/dl/" + file - dst := filepath.Join(cachedir, file) - if err := csdb.DownloadFile(url, dst); err != nil { - log.Fatal(err) - } - - ucache, err := os.UserCacheDir() - if err != nil { - log.Fatal(err) - } - godir := filepath.Join(ucache, fmt.Sprintf("geth-go-%s-%s-%s", dlgoVersion, goos, goarch)) - if err := build.ExtractArchive(dst, godir); err != nil { - log.Fatal(err) - } - goroot, err := filepath.Abs(filepath.Join(godir, "go")) - if err != nil { - log.Fatal(err) - } - return goroot -} - func ppaUpload(workdir, ppa, sshUser string, files []string) { p := strings.Split(ppa, "/") if len(p) != 2 { @@ -905,13 +823,23 @@ func doAndroidArchive(cmdline []string) { ) flag.CommandLine.Parse(cmdline) env := build.Env() + tc := new(build.GoToolchain) // Sanity check that the SDK and NDK are installed and set if os.Getenv("ANDROID_HOME") == "" { log.Fatal("Please ensure ANDROID_HOME points to your Android SDK") } + + // Build gomobile. + install := tc.Install(GOBIN, "golang.org/x/mobile/cmd/gomobile@latest", "golang.org/x/mobile/cmd/gobind@latest") + install.Env = append(install.Env) + build.MustRun(install) + + // Ensure all dependencies are available. This is required to make + // gomobile bind work because it expects go.sum to contain all checksums. + build.MustRun(tc.Go("mod", "download")) + // Build the Android archive and Maven resources - build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind")) build.MustRun(gomobileTool("bind", "-ldflags", "-s -w", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile")) if *local { @@ -1031,10 +959,16 @@ func doXCodeFramework(cmdline []string) { ) flag.CommandLine.Parse(cmdline) env := build.Env() + tc := new(build.GoToolchain) + + // Build gomobile. + build.MustRun(tc.Install(GOBIN, "golang.org/x/mobile/cmd/gomobile@latest", "golang.org/x/mobile/cmd/gobind@latest")) + + // Ensure all dependencies are available. This is required to make + // gomobile bind work because it expects go.sum to contain all checksums. 
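Both mobile archive jobs now install gomobile and gobind with `tc.Install(GOBIN, "pkg@latest")` instead of `go get`. Under Go 1.16's module-aware mode that helper plausibly reduces to a `go install` with GOBIN pinned at the tool directory, along these lines (a sketch; the real helper lives in `internal/build`):

```go
package main

import (
	"os"
	"os/exec"
)

// install runs `go install pkg@version` with GOBIN pointed at a private
// tool directory, which is roughly what the tc.Install(GOBIN, ...) calls
// in these hunks boil down to.
func install(gobin string, pkgs ...string) error {
	args := append([]string{"install"}, pkgs...)
	cmd := exec.Command("go", args...)
	cmd.Env = append(os.Environ(), "GOBIN="+gobin)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// ci.go uses an absolute build/bin path for GOBIN; relative here for brevity.
	if err := install("build/bin",
		"golang.org/x/mobile/cmd/gomobile@latest",
		"golang.org/x/mobile/cmd/gobind@latest"); err != nil {
		os.Exit(1)
	}
}
```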
+ build.MustRun(tc.Go("mod", "download")) // Build the iOS XCode framework - build.MustRun(goTool("get", "golang.org/x/mobile/cmd/gomobile", "golang.org/x/mobile/cmd/gobind")) - build.MustRun(gomobileTool("init")) bind := gomobileTool("bind", "-ldflags", "-s -w", "--target", "ios", "-v", "github.com/ethereum/go-ethereum/mobile") if *local { @@ -1043,17 +977,17 @@ func doXCodeFramework(cmdline []string) { build.MustRun(bind) return } + + // Create the archive. + maybeSkipArchive(env) archive := "geth-" + archiveBasename("ios", params.ArchiveVersion(env.Commit)) - if err := os.Mkdir(archive, os.ModePerm); err != nil { + if err := os.MkdirAll(archive, 0755); err != nil { log.Fatal(err) } bind.Dir, _ = filepath.Abs(archive) build.MustRun(bind) build.MustRunCommand("tar", "-zcvf", archive+".tar.gz", archive) - // Skip CocoaPods deploy and Azure upload for PR builds - maybeSkipArchive(env) - // Sign and upload the framework to Azure if err := archiveUpload(archive+".tar.gz", *upload, *signer, *signify); err != nil { log.Fatal(err) @@ -1119,10 +1053,10 @@ func doXgo(cmdline []string) { ) flag.CommandLine.Parse(cmdline) env := build.Env() + var tc build.GoToolchain // Make sure xgo is available for cross compilation - gogetxgo := goTool("get", "github.com/karalabe/xgo") - build.MustRun(gogetxgo) + build.MustRun(tc.Install(GOBIN, "github.com/karalabe/xgo@latest")) // If all tools building is requested, build everything the builder wants args := append(buildFlags(env), flag.Args()...) @@ -1133,27 +1067,23 @@ func doXgo(cmdline []string) { if strings.HasPrefix(res, GOBIN) { // Binary tool found, cross build it explicitly args = append(args, "./"+filepath.Join("cmd", filepath.Base(res))) - xgo := xgoTool(args) - build.MustRun(xgo) + build.MustRun(xgoTool(args)) args = args[:len(args)-1] } } return } - // Otherwise xxecute the explicit cross compilation + + // Otherwise execute the explicit cross compilation path := args[len(args)-1] args = append(args[:len(args)-1], []string{"--dest", GOBIN, path}...) - - xgo := xgoTool(args) - build.MustRun(xgo) + build.MustRun(xgoTool(args)) } func xgoTool(args []string) *exec.Cmd { cmd := exec.Command(filepath.Join(GOBIN, "xgo"), args...) cmd.Env = os.Environ() - cmd.Env = append(cmd.Env, []string{ - "GOBIN=" + GOBIN, - }...) + cmd.Env = append(cmd.Env, []string{"GOBIN=" + GOBIN}...) return cmd } diff --git a/build/nsis.envvarupdate.nsh b/build/nsis.envvarupdate.nsh index 9c3ecbe337..95c2f1f639 100644 --- a/build/nsis.envvarupdate.nsh +++ b/build/nsis.envvarupdate.nsh @@ -43,7 +43,7 @@ !ifndef Un${StrFuncName}_INCLUDED ${Un${StrFuncName}} !endif - !define un.${StrFuncName} "${Un${StrFuncName}}" + !define un.${StrFuncName} '${Un${StrFuncName}}' !macroend !insertmacro _IncludeStrFunction StrTok diff --git a/cmd/devp2p/README.md b/cmd/devp2p/README.md index e934ee25c9..7f816b602e 100644 --- a/cmd/devp2p/README.md +++ b/cmd/devp2p/README.md @@ -30,6 +30,29 @@ Run `devp2p dns to-route53 ` to publish a tree to Amazon Route53. You can find more information about these commands in the [DNS Discovery Setup Guide][dns-tutorial]. +### Node Set Utilities + +There are several commands for working with JSON node set files. These files are generated +by the discovery crawlers and DNS client commands. Node sets also used as the input of the +DNS deployer commands. + +Run `devp2p nodeset info ` to display statistics of a node set. + +Run `devp2p nodeset filter ` to write a new, filtered node +set to standard output. 
The following filters are supported: + +- `-limit ` limits the output set to N entries, taking the top N nodes by score +- `-ip ` filters nodes by IP subnet +- `-min-age ` filters nodes by 'first seen' time +- `-eth-network ` filters nodes by "eth" ENR entry +- `-les-server` filters nodes by LES server support +- `-snap` filters nodes by snap protocol support + +For example, given a node set in `nodes.json`, you could create a filtered set containing +up to 20 eth mainnet nodes which also support snap sync using this command: + + devp2p nodeset filter nodes.json -eth-network mainnet -snap -limit 20 + ### Discovery v4 Utilities The `devp2p discv4 ...` command family deals with the [Node Discovery v4][discv4] @@ -94,7 +117,7 @@ To run the eth protocol test suite against your implementation, the node needs t geth --datadir --nodiscover --nat=none --networkid 19763 --verbosity 5 ``` -Then, run the following command, replacing `` with the enode of the geth node: +Then, run the following command, replacing `` with the enode of the geth node: ``` devp2p rlpx eth-test cmd/devp2p/internal/ethtest/testdata/chain.rlp cmd/devp2p/internal/ethtest/testdata/genesis.json ``` @@ -103,7 +126,7 @@ Repeat the above process (re-initialising the node) in order to run the Eth Prot #### Eth66 Test Suite -The Eth66 test suite is also a conformance test suite for the eth 66 protocol version specifically. +The Eth66 test suite is also a conformance test suite for the eth 66 protocol version specifically. To run the eth66 protocol test suite, initialize a geth node as described above and run the following command, replacing `` with the enode of the geth node: diff --git a/cmd/devp2p/dns_route53.go b/cmd/devp2p/dns_route53.go index 2b6a30e60f..1d4f975dda 100644 --- a/cmd/devp2p/dns_route53.go +++ b/cmd/devp2p/dns_route53.go @@ -107,22 +107,48 @@ func (c *route53Client) deploy(name string, t *dnsdisc.Tree) error { return err } log.Info(fmt.Sprintf("Found %d TXT records", len(existing))) - records := t.ToTXT(name) changes := c.computeChanges(name, records, existing) + + // Submit to API. + comment := fmt.Sprintf("enrtree update of %s at seq %d", name, t.Seq()) + return c.submitChanges(changes, comment) +} + +// deleteDomain removes all TXT records of the given domain. +func (c *route53Client) deleteDomain(name string) error { + if err := c.checkZone(name); err != nil { + return err + } + + // Compute DNS changes. + existing, err := c.collectRecords(name) + if err != nil { + return err + } + log.Info(fmt.Sprintf("Found %d TXT records", len(existing))) + changes := makeDeletionChanges(existing, nil) + + // Submit to API. + comment := "enrtree delete of " + name + return c.submitChanges(changes, comment) +} + +// submitChanges submits the given DNS changes to Route53. +func (c *route53Client) submitChanges(changes []types.Change, comment string) error { if len(changes) == 0 { log.Info("No DNS changes needed") return nil } - // Submit all change batches. 
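submitChanges splits the change set with `splitChanges(changes, route53ChangeSizeLimit, route53ChangeCountLimit)` before posting each batch with its own `(i/n)` comment. A count-only sketch of that partitioning, with strings standing in for `types.Change` (the real function also tracks a per-batch byte-size limit, which this omits):

```go
package main

import "fmt"

// splitByCount partitions changes into batches of at most limit entries.
func splitByCount(changes []string, limit int) [][]string {
	var batches [][]string
	for len(changes) > 0 {
		n := limit
		if len(changes) < n {
			n = len(changes)
		}
		batches = append(batches, changes[:n])
		changes = changes[n:]
	}
	return batches
}

func main() {
	changes := []string{"a", "b", "c", "d", "e"}
	batches := splitByCount(changes, 2)
	for i, b := range batches {
		// Mirrors the "comment (i/n)" batch annotation used above.
		fmt.Printf("enrtree update (%d/%d): %v\n", i+1, len(batches), b)
	}
}
```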
+ var err error batches := splitChanges(changes, route53ChangeSizeLimit, route53ChangeCountLimit) changesToCheck := make([]*route53.ChangeResourceRecordSetsOutput, len(batches)) for i, changes := range batches { log.Info(fmt.Sprintf("Submitting %d changes to Route53", len(changes))) batch := &types.ChangeBatch{ Changes: changes, - Comment: aws.String(fmt.Sprintf("enrtree update %d/%d of %s at seq %d", i+1, len(batches), name, t.Seq())), + Comment: aws.String(fmt.Sprintf("%s (%d/%d)", comment, i+1, len(batches))), } req := &route53.ChangeResourceRecordSetsInput{HostedZoneId: &c.zoneID, ChangeBatch: batch} changesToCheck[i], err = c.api.ChangeResourceRecordSets(context.TODO(), req) @@ -151,7 +177,6 @@ func (c *route53Client) deploy(name string, t *dnsdisc.Tree) error { time.Sleep(30 * time.Second) } } - return nil } @@ -186,7 +211,8 @@ func (c *route53Client) findZoneID(name string) (string, error) { return "", errors.New("can't find zone ID for " + name) } -// computeChanges creates DNS changes for the given record. +// computeChanges creates DNS changes for the given set of DNS discovery records. +// The 'existing' arg is the set of records that already exist on Route53. func (c *route53Client) computeChanges(name string, records map[string]string, existing map[string]recordSet) []types.Change { // Convert all names to lowercase. lrecords := make(map[string]string, len(records)) @@ -223,16 +249,23 @@ func (c *route53Client) computeChanges(name string, records map[string]string, e } // Iterate over the old records and delete anything stale. - for path, set := range existing { - if _, ok := records[path]; ok { + changes = append(changes, makeDeletionChanges(existing, records)...) + + // Ensure changes are in the correct order. + sortChanges(changes) + return changes +} + +// makeDeletionChanges creates record changes which delete all records not contained in 'keep'. +func makeDeletionChanges(records map[string]recordSet, keep map[string]string) []types.Change { + var changes []types.Change + for path, set := range records { + if _, ok := keep[path]; ok { continue } - // Stale entry, nuke it. - log.Info(fmt.Sprintf("Deleting %s = %q", path, strings.Join(set.values, ""))) + log.Info(fmt.Sprintf("Deleting %s = %s", path, strings.Join(set.values, ""))) changes = append(changes, newTXTChange("DELETE", path, set.ttl, set.values...)) } - - sortChanges(changes) return changes } diff --git a/cmd/devp2p/dnscmd.go b/cmd/devp2p/dnscmd.go index 50ab7bf983..66deef56ea 100644 --- a/cmd/devp2p/dnscmd.go +++ b/cmd/devp2p/dnscmd.go @@ -43,6 +43,7 @@ var ( dnsTXTCommand, dnsCloudflareCommand, dnsRoute53Command, + dnsRoute53NukeCommand, }, } dnsSyncCommand = cli.Command{ @@ -84,6 +85,18 @@ var ( route53RegionFlag, }, } + dnsRoute53NukeCommand = cli.Command{ + Name: "nuke-route53", + Usage: "Deletes DNS TXT records of a subdomain on Amazon Route53", + ArgsUsage: "", + Action: dnsNukeRoute53, + Flags: []cli.Flag{ + route53AccessKeyFlag, + route53AccessSecretFlag, + route53ZoneIDFlag, + route53RegionFlag, + }, + } ) var ( @@ -174,6 +187,9 @@ func dnsSign(ctx *cli.Context) error { return nil } +// directoryName returns the directory name of the given path. +// For example, when dir is "foo/bar", it returns "bar". +// When dir is ".", and the working directory is "example/foo", it returns "foo". func directoryName(dir string) string { abs, err := filepath.Abs(dir) if err != nil { @@ -182,7 +198,7 @@ func directoryName(dir string) string { return filepath.Base(abs) } -// dnsToTXT peforms dnsTXTCommand. 
+// dnsToTXT performs dnsTXTCommand. func dnsToTXT(ctx *cli.Context) error { if ctx.NArg() < 1 { return fmt.Errorf("need tree definition directory as argument") @@ -199,9 +215,9 @@ func dnsToTXT(ctx *cli.Context) error { return nil } -// dnsToCloudflare peforms dnsCloudflareCommand. +// dnsToCloudflare performs dnsCloudflareCommand. func dnsToCloudflare(ctx *cli.Context) error { - if ctx.NArg() < 1 { + if ctx.NArg() != 1 { return fmt.Errorf("need tree definition directory as argument") } domain, t, err := loadTreeDefinitionForExport(ctx.Args().Get(0)) @@ -212,9 +228,9 @@ func dnsToCloudflare(ctx *cli.Context) error { return client.deploy(domain, t) } -// dnsToRoute53 peforms dnsRoute53Command. +// dnsToRoute53 performs dnsRoute53Command. func dnsToRoute53(ctx *cli.Context) error { - if ctx.NArg() < 1 { + if ctx.NArg() != 1 { return fmt.Errorf("need tree definition directory as argument") } domain, t, err := loadTreeDefinitionForExport(ctx.Args().Get(0)) @@ -225,6 +241,15 @@ func dnsToRoute53(ctx *cli.Context) error { return client.deploy(domain, t) } +// dnsNukeRoute53 performs dnsRoute53NukeCommand. +func dnsNukeRoute53(ctx *cli.Context) error { + if ctx.NArg() != 1 { + return fmt.Errorf("need domain name as argument") + } + client := newRoute53Client(ctx) + return client.deleteDomain(ctx.Args().First()) +} + // loadSigningKey loads a private key in Ethereum keystore format. func loadSigningKey(keyfile string) *ecdsa.PrivateKey { keyjson, err := ioutil.ReadFile(keyfile) diff --git a/cmd/devp2p/internal/ethtest/chain.go b/cmd/devp2p/internal/ethtest/chain.go index d67387e80b..83c55181ad 100644 --- a/cmd/devp2p/internal/ethtest/chain.go +++ b/cmd/devp2p/internal/ethtest/chain.go @@ -34,6 +34,7 @@ import ( ) type Chain struct { + genesis core.Genesis blocks []*types.Block chainConfig *params.ChainConfig } @@ -124,16 +125,34 @@ func (c *Chain) GetHeaders(req GetBlockHeaders) (BlockHeaders, error) { // loadChain takes the given chain.rlp file, and decodes and returns // the blocks from the file. func loadChain(chainfile string, genesis string) (*Chain, error) { - chainConfig, err := ioutil.ReadFile(genesis) + gen, err := loadGenesis(genesis) if err != nil { return nil, err } + gblock := gen.ToBlock(nil) + + blocks, err := blocksFromFile(chainfile, gblock) + if err != nil { + return nil, err + } + + c := &Chain{genesis: gen, blocks: blocks, chainConfig: gen.Config} + return c, nil +} + +func loadGenesis(genesisFile string) (core.Genesis, error) { + chainConfig, err := ioutil.ReadFile(genesisFile) + if err != nil { + return core.Genesis{}, err + } var gen core.Genesis if err := json.Unmarshal(chainConfig, &gen); err != nil { - return nil, err + return core.Genesis{}, err } - gblock := gen.ToBlock(nil) + return gen, nil +} +func blocksFromFile(chainfile string, gblock *types.Block) ([]*types.Block, error) { // Load chain.rlp. 
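The body of `blocksFromFile` is mostly elided by this hunk; it decodes consecutive RLP-encoded blocks from the open file until EOF. A minimal sketch of that loop (the production loader may additionally handle compressed input and other details not shown here):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// readBlocks decodes consecutive RLP-encoded blocks from a chain.rlp-style
// file until the stream is exhausted.
func readBlocks(path string) ([]*types.Block, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	stream := rlp.NewStream(f, 0)
	var blocks []*types.Block
	for {
		var b types.Block
		if err := stream.Decode(&b); err == io.EOF {
			break
		} else if err != nil {
			return nil, fmt.Errorf("decoding block %d: %v", len(blocks), err)
		}
		blocks = append(blocks, &b)
	}
	return blocks, nil
}

func main() {
	blocks, err := readBlocks("chain.rlp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("loaded", len(blocks), "blocks")
}
```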
fh, err := os.Open(chainfile) if err != nil { @@ -161,7 +180,5 @@ func loadChain(chainfile string, genesis string) (*Chain, error) { } blocks = append(blocks, &b) } - - c := &Chain{blocks: blocks, chainConfig: gen.Config} - return c, nil + return blocks, nil } diff --git a/cmd/devp2p/internal/ethtest/eth66_suite.go b/cmd/devp2p/internal/ethtest/eth66_suite.go index 0995dcb3e4..903a90c7eb 100644 --- a/cmd/devp2p/internal/ethtest/eth66_suite.go +++ b/cmd/devp2p/internal/ethtest/eth66_suite.go @@ -19,6 +19,7 @@ package ethtest import ( "time" + "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/eth/protocols/eth" @@ -41,6 +42,7 @@ func (s *Suite) Is_66(t *utesting.T) { // make sure the chain head is correct. func (s *Suite) TestStatus_66(t *utesting.T) { conn := s.dial66(t) + defer conn.Close() // get protoHandshake conn.handshake(t) // get status @@ -60,6 +62,7 @@ func (s *Suite) TestStatus_66(t *utesting.T) { // an eth66 `GetBlockHeaders` request and that the response is accurate. func (s *Suite) TestGetBlockHeaders_66(t *utesting.T) { conn := s.setupConnection66(t) + defer conn.Close() // get block headers req := ð.GetBlockHeadersPacket66{ RequestId: 3, @@ -73,9 +76,14 @@ func (s *Suite) TestGetBlockHeaders_66(t *utesting.T) { }, } // write message - headers := s.getBlockHeaders66(t, conn, req, req.RequestId) + headers, err := s.getBlockHeaders66(conn, req, req.RequestId) + if err != nil { + t.Fatalf("could not get block headers: %v", err) + } // check for correct headers - headersMatch(t, s.chain, headers) + if !headersMatch(t, s.chain, headers) { + t.Fatal("received wrong header(s)") + } } // TestSimultaneousRequests_66 sends two simultaneous `GetBlockHeader` requests @@ -83,7 +91,8 @@ func (s *Suite) TestGetBlockHeaders_66(t *utesting.T) { // headers per request. 
func (s *Suite) TestSimultaneousRequests_66(t *utesting.T) { // create two connections - conn1, conn2 := s.setupConnection66(t), s.setupConnection66(t) + conn := s.setupConnection66(t) + defer conn.Close() // create two requests req1 := ð.GetBlockHeadersPacket66{ RequestId: 111, @@ -107,33 +116,36 @@ func (s *Suite) TestSimultaneousRequests_66(t *utesting.T) { Reverse: false, }, } - // wait for headers for first request - headerChan := make(chan BlockHeaders, 1) - go func(headers chan BlockHeaders) { - headers <- s.getBlockHeaders66(t, conn1, req1, req1.RequestId) - }(headerChan) - // check headers of second request - headersMatch(t, s.chain, s.getBlockHeaders66(t, conn2, req2, req2.RequestId)) - // check headers of first request - headersMatch(t, s.chain, <-headerChan) + // write first request + if err := conn.write66(req1, GetBlockHeaders{}.Code()); err != nil { + t.Fatalf("failed to write to connection: %v", err) + } + // write second request + if err := conn.write66(req2, GetBlockHeaders{}.Code()); err != nil { + t.Fatalf("failed to write to connection: %v", err) + } + // wait for responses + headers1, err := s.waitForBlockHeadersResponse66(conn, req1.RequestId) + if err != nil { + t.Fatalf("error while waiting for block headers: %v", err) + } + headers2, err := s.waitForBlockHeadersResponse66(conn, req2.RequestId) + if err != nil { + t.Fatalf("error while waiting for block headers: %v", err) + } + // check headers of both responses + if !headersMatch(t, s.chain, headers1) { + t.Fatalf("wrong header(s) in response to req1: got %v", headers1) + } + if !headersMatch(t, s.chain, headers2) { + t.Fatalf("wrong header(s) in response to req2: got %v", headers2) + } } // TestBroadcast_66 tests whether a block announcement is correctly // propagated to the given node's peer(s) on the eth66 protocol. func (s *Suite) TestBroadcast_66(t *utesting.T) { - sendConn, receiveConn := s.setupConnection66(t), s.setupConnection66(t) - nextBlock := len(s.chain.blocks) - blockAnnouncement := &NewBlock{ - Block: s.fullChain.blocks[nextBlock], - TD: s.fullChain.TD(nextBlock + 1), - } - s.testAnnounce66(t, sendConn, receiveConn, blockAnnouncement) - // update test suite chain - s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock]) - // wait for client to update its chain - if err := receiveConn.waitForBlock66(s.chain.Head()); err != nil { - t.Fatal(err) - } + s.sendNextBlock66(t) } // TestGetBlockBodies_66 tests whether the given node can respond to @@ -141,6 +153,7 @@ func (s *Suite) TestBroadcast_66(t *utesting.T) { // the eth66 protocol. 
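The rewritten TestSimultaneousRequests_66 above leans on the defining eth/66 feature: both requests go out on one connection, and the responses are matched back by request ID rather than by arrival order. In schematic form, with toy types standing in for the real `eth` packets:

```go
package main

import "fmt"

// response66 mimics the eth/66 envelope: every message carries a RequestId,
// so concurrent exchanges on one connection can be told apart. (Toy type;
// the real packets live in eth/protocols/eth.)
type response66 struct {
	RequestId uint64
	Payload   string
}

// dispatch routes incoming responses to waiters by request ID, the same
// matching that waitForBlockHeadersResponse66 performs in the suite.
func dispatch(in <-chan response66, waiters map[uint64]chan string) {
	for resp := range in {
		if ch, ok := waiters[resp.RequestId]; ok {
			ch <- resp.Payload
		}
		// Responses with unknown IDs are simply dropped here.
	}
}

func main() {
	in := make(chan response66, 2)
	waiters := map[uint64]chan string{
		111: make(chan string, 1),
		222: make(chan string, 1),
	}
	go dispatch(in, waiters)

	// Responses may arrive in any order; the IDs restore the pairing.
	in <- response66{RequestId: 222, Payload: "headers for request 222"}
	in <- response66{RequestId: 111, Payload: "headers for request 111"}
	fmt.Println(<-waiters[111])
	fmt.Println(<-waiters[222])
	close(in)
}
```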
func (s *Suite) TestGetBlockBodies_66(t *utesting.T) { conn := s.setupConnection66(t) + defer conn.Close() // create block bodies request id := uint64(55) req := ð.GetBlockBodiesPacket66{ @@ -195,33 +208,31 @@ func (s *Suite) TestLargeAnnounce_66(t *utesting.T) { t.Fatalf("could not write to connection: %v", err) } // Invalid announcement, check that peer disconnected - switch msg := sendConn.ReadAndServe(s.chain, timeout).(type) { + switch msg := sendConn.ReadAndServe(s.chain, time.Second*8).(type) { case *Disconnect: case *Error: break default: t.Fatalf("unexpected: %s wanted disconnect", pretty.Sdump(msg)) } + sendConn.Close() } // Test the last block as a valid block - sendConn := s.setupConnection66(t) - receiveConn := s.setupConnection66(t) - s.testAnnounce66(t, sendConn, receiveConn, blocks[3]) - // update test suite chain - s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock]) - // wait for client to update its chain - if err := receiveConn.waitForBlock66(s.fullChain.blocks[nextBlock]); err != nil { - t.Fatal(err) - } + s.sendNextBlock66(t) } func (s *Suite) TestOldAnnounce_66(t *utesting.T) { - s.oldAnnounce(t, s.setupConnection66(t), s.setupConnection66(t)) + sendConn, recvConn := s.setupConnection66(t), s.setupConnection66(t) + defer sendConn.Close() + defer recvConn.Close() + + s.oldAnnounce(t, sendConn, recvConn) } // TestMaliciousHandshake_66 tries to send malicious data during the handshake. func (s *Suite) TestMaliciousHandshake_66(t *utesting.T) { conn := s.dial66(t) + defer conn.Close() // write hello to client pub0 := crypto.FromECDSAPub(&conn.ourKey.PublicKey)[1:] handshakes := []*Hello{ @@ -295,6 +306,7 @@ func (s *Suite) TestMaliciousHandshake_66(t *utesting.T) { // TestMaliciousStatus_66 sends a status package with a large total difficulty. func (s *Suite) TestMaliciousStatus_66(t *utesting.T) { conn := s.dial66(t) + defer conn.Close() // get protoHandshake conn.handshake(t) status := &Status{ @@ -334,23 +346,37 @@ func (s *Suite) TestTransaction_66(t *utesting.T) { } func (s *Suite) TestMaliciousTx_66(t *utesting.T) { - tests := []*types.Transaction{ + badTxs := []*types.Transaction{ getOldTxFromChain(t, s), invalidNonceTx(t, s), hugeAmount(t, s), hugeGasPrice(t, s), hugeData(t, s), } - for i, tx := range tests { + sendConn := s.setupConnection66(t) + defer sendConn.Close() + // set up receiving connection before sending txs to make sure + // no announcements are missed + recvConn := s.setupConnection66(t) + defer recvConn.Close() + + for i, tx := range badTxs { t.Logf("Testing malicious tx propagation: %v\n", i) - sendFailingTx66(t, s, tx) + if err := sendConn.Write(&Transactions{tx}); err != nil { + t.Fatalf("could not write to connection: %v", err) + } + } + // check to make sure bad txs aren't propagated + waitForTxPropagation(t, s, badTxs, recvConn) } // TestZeroRequestID_66 checks that a request ID of zero is still handled // by the node. 
func (s *Suite) TestZeroRequestID_66(t *utesting.T) { conn := s.setupConnection66(t) + defer conn.Close() + req := ð.GetBlockHeadersPacket66{ RequestId: 0, GetBlockHeadersPacket: ð.GetBlockHeadersPacket{ @@ -360,25 +386,31 @@ func (s *Suite) TestZeroRequestID_66(t *utesting.T) { Amount: 2, }, } - headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req, req.RequestId)) + headers, err := s.getBlockHeaders66(conn, req, req.RequestId) + if err != nil { + t.Fatalf("could not get block headers: %v", err) + } + if !headersMatch(t, s.chain, headers) { + t.Fatal("received wrong header(s)") + } } // TestSameRequestID_66 sends two requests with the same request ID // concurrently to a single node. func (s *Suite) TestSameRequestID_66(t *utesting.T) { conn := s.setupConnection66(t) - // create two separate requests with same ID + // create two requests with the same request ID reqID := uint64(1234) - req1 := ð.GetBlockHeadersPacket66{ + request1 := ð.GetBlockHeadersPacket66{ RequestId: reqID, GetBlockHeadersPacket: ð.GetBlockHeadersPacket{ Origin: eth.HashOrNumber{ - Number: 0, + Number: 1, }, Amount: 2, }, } - req2 := ð.GetBlockHeadersPacket66{ + request2 := ð.GetBlockHeadersPacket66{ RequestId: reqID, GetBlockHeadersPacket: ð.GetBlockHeadersPacket{ Origin: eth.HashOrNumber{ @@ -387,10 +419,103 @@ func (s *Suite) TestSameRequestID_66(t *utesting.T) { Amount: 2, }, } - // send requests concurrently - go func() { - headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req2, reqID)) - }() - // check response from first request - headersMatch(t, s.chain, s.getBlockHeaders66(t, conn, req1, reqID)) + // write the first request + err := conn.write66(request1, GetBlockHeaders{}.Code()) + if err != nil { + t.Fatalf("could not write to connection: %v", err) + } + // perform second request + headers2, err := s.getBlockHeaders66(conn, request2, reqID) + if err != nil { + t.Fatalf("could not get block headers: %v", err) + return + } + // wait for response to first request + headers1, err := s.waitForBlockHeadersResponse66(conn, reqID) + if err != nil { + t.Fatalf("could not get BlockHeaders response: %v", err) + } + // check if headers match + if !headersMatch(t, s.chain, headers1) || !headersMatch(t, s.chain, headers2) { + t.Fatal("received wrong header(s)") + } +} + +// TestLargeTxRequest_66 tests whether a node can fulfill a large GetPooledTransactions +// request. 
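`generateTxs`, used by the two tests below, is not part of this excerpt. A plausible minimal stand-in that signs simple value transfers and returns the same hashMap/txs pair the tests consume; the helper name, recipient address, and fee values are all assumptions:

```go
package main

import (
	"crypto/ecdsa"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

// generateTxs signs n minimal value transfers and returns them keyed by
// hash, mirroring the shape used by TestLargeTxRequest_66 above.
// (Hypothetical stand-in; the suite's real helper also tracks nonces
// against its test chain.)
func generateTxs(key *ecdsa.PrivateKey, chainID *big.Int, n uint64) (map[common.Hash]common.Hash, []*types.Transaction) {
	signer := types.LatestSignerForChainID(chainID)
	hashMap := make(map[common.Hash]common.Hash, n)
	txs := make([]*types.Transaction, 0, n)
	to := common.HexToAddress("0xdeadbeef00000000000000000000000000000000")
	for nonce := uint64(0); nonce < n; nonce++ {
		tx := types.NewTransaction(nonce, to, big.NewInt(1), 21000, big.NewInt(1_000_000_000), nil)
		signed, err := types.SignTx(tx, signer, key)
		if err != nil {
			log.Fatal(err)
		}
		hashMap[signed.Hash()] = signed.Hash()
		txs = append(txs, signed)
	}
	return hashMap, txs
}

func main() {
	key, _ := crypto.GenerateKey()
	hashes, txs := generateTxs(key, big.NewInt(1337), 5)
	fmt.Println(len(hashes), "hashes,", len(txs), "txs")
}
```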
+func (s *Suite) TestLargeTxRequest_66(t *utesting.T) { + // send the next block to ensure the node is no longer syncing and is able to accept + // txs + s.sendNextBlock66(t) + // send 2000 transactions to the node + hashMap, txs := generateTxs(t, s, 2000) + sendConn := s.setupConnection66(t) + defer sendConn.Close() + + sendMultipleSuccessfulTxs(t, s, sendConn, txs) + // set up connection to receive to ensure node is peered with the receiving connection + // before tx request is sent + recvConn := s.setupConnection66(t) + defer recvConn.Close() + // create and send pooled tx request + hashes := make([]common.Hash, 0) + for _, hash := range hashMap { + hashes = append(hashes, hash) + } + getTxReq := ð.GetPooledTransactionsPacket66{ + RequestId: 1234, + GetPooledTransactionsPacket: hashes, + } + if err := recvConn.write66(getTxReq, GetPooledTransactions{}.Code()); err != nil { + t.Fatalf("could not write to conn: %v", err) + } + // check that all received transactions match those that were sent to node + switch msg := recvConn.waitForResponse(s.chain, timeout, getTxReq.RequestId).(type) { + case PooledTransactions: + for _, gotTx := range msg { + if _, exists := hashMap[gotTx.Hash()]; !exists { + t.Fatalf("unexpected tx received: %v", gotTx.Hash()) + } + } + default: + t.Fatalf("unexpected %s", pretty.Sdump(msg)) + } +} + +// TestNewPooledTxs_66 tests whether a node will do a GetPooledTransactions +// request upon receiving a NewPooledTransactionHashes announcement. +func (s *Suite) TestNewPooledTxs_66(t *utesting.T) { + // send the next block to ensure the node is no longer syncing and is able to accept + // txs + s.sendNextBlock66(t) + // generate 50 txs + hashMap, _ := generateTxs(t, s, 50) + // create new pooled tx hashes announcement + hashes := make([]common.Hash, 0) + for _, hash := range hashMap { + hashes = append(hashes, hash) + } + announce := NewPooledTransactionHashes(hashes) + // send announcement + conn := s.setupConnection66(t) + defer conn.Close() + if err := conn.Write(announce); err != nil { + t.Fatalf("could not write to connection: %v", err) + } + // wait for GetPooledTxs request + for { + _, msg := conn.readAndServe66(s.chain, timeout) + switch msg := msg.(type) { + case GetPooledTransactions: + if len(msg) != len(hashes) { + t.Fatalf("unexpected number of txs requested: wanted %d, got %d", len(hashes), len(msg)) + } + return + case *NewPooledTransactionHashes, *NewBlock, *NewBlockHashes: + // ignore propagated txs and blocks from old tests + continue + default: + t.Fatalf("unexpected %s", pretty.Sdump(msg)) + } + } } diff --git a/cmd/devp2p/internal/ethtest/eth66_suiteHelpers.go b/cmd/devp2p/internal/ethtest/eth66_suiteHelpers.go index 4ef349740f..3c5b22f0b5 100644 --- a/cmd/devp2p/internal/ethtest/eth66_suiteHelpers.go +++ b/cmd/devp2p/internal/ethtest/eth66_suiteHelpers.go @@ -18,6 +18,7 @@ package ethtest import ( "fmt" + "reflect" "time" "github.com/ethereum/go-ethereum/core/types" @@ -111,6 +112,18 @@ func (c *Conn) read66() (uint64, Message) { msg = new(Transactions) case (NewPooledTransactionHashes{}).Code(): msg = new(NewPooledTransactionHashes) + case (GetPooledTransactions{}.Code()): + ethMsg := new(eth.GetPooledTransactionsPacket66) + if err := rlp.DecodeBytes(rawData, ethMsg); err != nil { + return 0, errorf("could not rlp decode message: %v", err) + } + return ethMsg.RequestId, GetPooledTransactions(ethMsg.GetPooledTransactionsPacket) + case (PooledTransactions{}.Code()): + ethMsg := new(eth.PooledTransactionsPacket66) + if err := rlp.DecodeBytes(rawData, 
ethMsg); err != nil { + return 0, errorf("could not rlp decode message: %v", err) + } + return ethMsg.RequestId, PooledTransactions(ethMsg.PooledTransactionsPacket) default: msg = errorf("invalid message code: %d", code) } @@ -124,13 +137,21 @@ func (c *Conn) read66() (uint64, Message) { return 0, errorf("invalid message: %s", string(rawData)) } +func (c *Conn) waitForResponse(chain *Chain, timeout time.Duration, requestID uint64) Message { + for { + id, msg := c.readAndServe66(chain, timeout) + if id == requestID { + return msg + } + } +} + // ReadAndServe serves GetBlockHeaders requests while waiting // on another message from the node. func (c *Conn) readAndServe66(chain *Chain, timeout time.Duration) (uint64, Message) { start := time.Now() for time.Since(start) < timeout { - timeout := time.Now().Add(10 * time.Second) - c.SetReadDeadline(timeout) + c.SetReadDeadline(time.Now().Add(10 * time.Second)) reqID, msg := c.read66() @@ -173,27 +194,33 @@ func (s *Suite) testAnnounce66(t *utesting.T, sendConn, receiveConn *Conn, block } func (s *Suite) waitAnnounce66(t *utesting.T, conn *Conn, blockAnnouncement *NewBlock) { - timeout := 20 * time.Second - _, msg := conn.readAndServe66(s.chain, timeout) - switch msg := msg.(type) { - case *NewBlock: - t.Logf("received NewBlock message: %s", pretty.Sdump(msg.Block)) - assert.Equal(t, - blockAnnouncement.Block.Header(), msg.Block.Header(), - "wrong block header in announcement", - ) - assert.Equal(t, - blockAnnouncement.TD, msg.TD, - "wrong TD in announcement", - ) - case *NewBlockHashes: - blockHashes := *msg - t.Logf("received NewBlockHashes message: %s", pretty.Sdump(blockHashes)) - assert.Equal(t, blockAnnouncement.Block.Hash(), blockHashes[0].Hash, - "wrong block hash in announcement", - ) - default: - t.Fatalf("unexpected: %s", pretty.Sdump(msg)) + for { + _, msg := conn.readAndServe66(s.chain, timeout) + switch msg := msg.(type) { + case *NewBlock: + t.Logf("received NewBlock message: %s", pretty.Sdump(msg.Block)) + assert.Equal(t, + blockAnnouncement.Block.Header(), msg.Block.Header(), + "wrong block header in announcement", + ) + assert.Equal(t, + blockAnnouncement.TD, msg.TD, + "wrong TD in announcement", + ) + return + case *NewBlockHashes: + blockHashes := *msg + t.Logf("received NewBlockHashes message: %s", pretty.Sdump(blockHashes)) + assert.Equal(t, blockAnnouncement.Block.Hash(), blockHashes[0].Hash, + "wrong block hash in announcement", + ) + return + case *NewPooledTransactionHashes: + // ignore old txs being propagated + continue + default: + t.Fatalf("unexpected: %s", pretty.Sdump(msg)) + } } } @@ -202,8 +229,11 @@ func (s *Suite) waitAnnounce66(t *utesting.T, conn *Conn, blockAnnouncement *New func (c *Conn) waitForBlock66(block *types.Block) error { defer c.SetReadDeadline(time.Time{}) - timeout := time.Now().Add(20 * time.Second) - c.SetReadDeadline(timeout) + c.SetReadDeadline(time.Now().Add(20 * time.Second)) + // note: if the node has not yet imported the block, it will respond + // to the GetBlockHeaders request with an empty BlockHeaders response, + // so the GetBlockHeaders request must be sent again until the BlockHeaders + // response contains the desired header. 
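The deadline fix in readAndServe66 above is subtle: the read deadline must be re-armed inside the loop rather than computed once before it, or every retry inherits the same stale instant. The same pattern on a plain net.Conn, with an overall budget wrapped around the per-read deadlines (hypothetical `readWithBudget` helper):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// readWithBudget mirrors the readAndServe66 shape: an overall wait budget,
// with a fresh per-read deadline armed before every attempt.
func readWithBudget(c net.Conn, budget time.Duration) ([]byte, error) {
	start := time.Now()
	buf := make([]byte, 4096)
	for time.Since(start) < budget {
		// Re-arm the deadline on each iteration, as the fixed code does,
		// instead of computing it once outside the loop.
		c.SetReadDeadline(time.Now().Add(10 * time.Second))
		n, err := c.Read(buf)
		if err != nil {
			if ne, ok := err.(net.Error); ok && ne.Timeout() {
				continue // deadline hit; retry within the budget
			}
			return nil, err
		}
		return buf[:n], nil
	}
	return nil, fmt.Errorf("no message within %v", budget)
}

func main() {
	client, server := net.Pipe()
	go func() {
		time.Sleep(50 * time.Millisecond)
		server.Write([]byte("BlockHeaders"))
		server.Close()
	}()
	msg, err := readWithBudget(client, 2*time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("received %q\n", msg)
}
```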
for { req := eth.GetBlockHeadersPacket66{ RequestId: 54, @@ -226,10 +256,15 @@ func (c *Conn) waitForBlock66(block *types.Block) error { if reqID != req.RequestId { return fmt.Errorf("request ID mismatch: wanted %d, got %d", req.RequestId, reqID) } - if len(msg) > 0 { - return nil + for _, header := range msg { + if header.Number.Uint64() == block.NumberU64() { + return nil + } } time.Sleep(100 * time.Millisecond) + case *NewPooledTransactionHashes: + // ignore old announcements + continue default: return fmt.Errorf("invalid message: %s", pretty.Sdump(msg)) } @@ -238,37 +273,61 @@ func (c *Conn) waitForBlock66(block *types.Block) error { func sendSuccessfulTx66(t *utesting.T, s *Suite, tx *types.Transaction) { sendConn := s.setupConnection66(t) + defer sendConn.Close() sendSuccessfulTxWithConn(t, s, tx, sendConn) } -func sendFailingTx66(t *utesting.T, s *Suite, tx *types.Transaction) { - sendConn, recvConn := s.setupConnection66(t), s.setupConnection66(t) - sendFailingTxWithConns(t, s, tx, sendConn, recvConn) -} - -func (s *Suite) getBlockHeaders66(t *utesting.T, conn *Conn, req eth.Packet, expectedID uint64) BlockHeaders { - if err := conn.write66(req, GetBlockHeaders{}.Code()); err != nil { - t.Fatalf("could not write to connection: %v", err) - } - // check block headers response +// waitForBlockHeadersResponse66 waits for a BlockHeaders message with the given expected request ID +func (s *Suite) waitForBlockHeadersResponse66(conn *Conn, expectedID uint64) (BlockHeaders, error) { reqID, msg := conn.readAndServe66(s.chain, timeout) - switch msg := msg.(type) { case BlockHeaders: if reqID != expectedID { - t.Fatalf("request ID mismatch: wanted %d, got %d", expectedID, reqID) + return nil, fmt.Errorf("request ID mismatch: wanted %d, got %d", expectedID, reqID) } - return msg + return msg, nil default: - t.Fatalf("unexpected: %s", pretty.Sdump(msg)) - return nil + return nil, fmt.Errorf("unexpected: %s", pretty.Sdump(msg)) + } +} + +func (s *Suite) getBlockHeaders66(conn *Conn, req eth.Packet, expectedID uint64) (BlockHeaders, error) { + if err := conn.write66(req, GetBlockHeaders{}.Code()); err != nil { + return nil, fmt.Errorf("could not write to connection: %v", err) } + return s.waitForBlockHeadersResponse66(conn, expectedID) } -func headersMatch(t *utesting.T, chain *Chain, headers BlockHeaders) { +func headersMatch(t *utesting.T, chain *Chain, headers BlockHeaders) bool { + mismatched := 0 for _, header := range headers { num := header.Number.Uint64() t.Logf("received header (%d): %s", num, pretty.Sdump(header.Hash())) - assert.Equal(t, chain.blocks[int(num)].Header(), header) + if !reflect.DeepEqual(chain.blocks[int(num)].Header(), header) { + mismatched += 1 + t.Logf("received wrong header: %v", pretty.Sdump(header)) + } + } + return mismatched == 0 +} + +func (s *Suite) sendNextBlock66(t *utesting.T) { + sendConn, receiveConn := s.setupConnection66(t), s.setupConnection66(t) + defer sendConn.Close() + defer receiveConn.Close() + + // create new block announcement + nextBlock := len(s.chain.blocks) + blockAnnouncement := &NewBlock{ + Block: s.fullChain.blocks[nextBlock], + TD: s.fullChain.TD(nextBlock + 1), + } + // send announcement and wait for node to request the header + s.testAnnounce66(t, sendConn, receiveConn, blockAnnouncement) + // wait for client to update its chain + if err := receiveConn.waitForBlock66(s.fullChain.blocks[nextBlock]); err != nil { + t.Fatal(err) } + // update test suite chain + s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock]) } diff 
--git a/cmd/devp2p/internal/ethtest/large.go b/cmd/devp2p/internal/ethtest/large.go index deca00be53..22421355ab 100644 --- a/cmd/devp2p/internal/ethtest/large.go +++ b/cmd/devp2p/internal/ethtest/large.go @@ -70,7 +70,7 @@ func largeHeader() *types.Header { GasUsed: 0, Coinbase: common.Address{}, GasLimit: 0, - UncleHash: randHash(), + UncleHash: types.EmptyUncleHash, Time: 1337, ParentHash: randHash(), Root: randHash(), diff --git a/cmd/devp2p/internal/ethtest/suite.go b/cmd/devp2p/internal/ethtest/suite.go index 66fb8026a0..abc6bcddce 100644 --- a/cmd/devp2p/internal/ethtest/suite.go +++ b/cmd/devp2p/internal/ethtest/suite.go @@ -69,20 +69,20 @@ func NewSuite(dest *enode.Node, chainfile string, genesisfile string) (*Suite, e func (s *Suite) AllEthTests() []utesting.Test { return []utesting.Test{ // status - {Name: "Status", Fn: s.TestStatus}, - {Name: "Status_66", Fn: s.TestStatus_66}, + {Name: "TestStatus", Fn: s.TestStatus}, + {Name: "TestStatus_66", Fn: s.TestStatus_66}, // get block headers - {Name: "GetBlockHeaders", Fn: s.TestGetBlockHeaders}, - {Name: "GetBlockHeaders_66", Fn: s.TestGetBlockHeaders_66}, + {Name: "TestGetBlockHeaders", Fn: s.TestGetBlockHeaders}, + {Name: "TestGetBlockHeaders_66", Fn: s.TestGetBlockHeaders_66}, {Name: "TestSimultaneousRequests_66", Fn: s.TestSimultaneousRequests_66}, {Name: "TestSameRequestID_66", Fn: s.TestSameRequestID_66}, {Name: "TestZeroRequestID_66", Fn: s.TestZeroRequestID_66}, // get block bodies - {Name: "GetBlockBodies", Fn: s.TestGetBlockBodies}, - {Name: "GetBlockBodies_66", Fn: s.TestGetBlockBodies_66}, + {Name: "TestGetBlockBodies", Fn: s.TestGetBlockBodies}, + {Name: "TestGetBlockBodies_66", Fn: s.TestGetBlockBodies_66}, // broadcast - {Name: "Broadcast", Fn: s.TestBroadcast}, - {Name: "Broadcast_66", Fn: s.TestBroadcast_66}, + {Name: "TestBroadcast", Fn: s.TestBroadcast}, + {Name: "TestBroadcast_66", Fn: s.TestBroadcast_66}, {Name: "TestLargeAnnounce", Fn: s.TestLargeAnnounce}, {Name: "TestLargeAnnounce_66", Fn: s.TestLargeAnnounce_66}, {Name: "TestOldAnnounce", Fn: s.TestOldAnnounce}, @@ -91,44 +91,48 @@ func (s *Suite) AllEthTests() []utesting.Test { {Name: "TestMaliciousHandshake", Fn: s.TestMaliciousHandshake}, {Name: "TestMaliciousStatus", Fn: s.TestMaliciousStatus}, {Name: "TestMaliciousHandshake_66", Fn: s.TestMaliciousHandshake_66}, - {Name: "TestMaliciousStatus_66", Fn: s.TestMaliciousStatus}, + {Name: "TestMaliciousStatus_66", Fn: s.TestMaliciousStatus_66}, // test transactions - {Name: "TestTransactions", Fn: s.TestTransaction}, - {Name: "TestTransactions_66", Fn: s.TestTransaction_66}, - {Name: "TestMaliciousTransactions", Fn: s.TestMaliciousTx}, - {Name: "TestMaliciousTransactions_66", Fn: s.TestMaliciousTx_66}, + {Name: "TestTransaction", Fn: s.TestTransaction}, + {Name: "TestTransaction_66", Fn: s.TestTransaction_66}, + {Name: "TestMaliciousTx", Fn: s.TestMaliciousTx}, + {Name: "TestMaliciousTx_66", Fn: s.TestMaliciousTx_66}, + {Name: "TestLargeTxRequest_66", Fn: s.TestLargeTxRequest_66}, + {Name: "TestNewPooledTxs_66", Fn: s.TestNewPooledTxs_66}, } } func (s *Suite) EthTests() []utesting.Test { return []utesting.Test{ - {Name: "Status", Fn: s.TestStatus}, - {Name: "GetBlockHeaders", Fn: s.TestGetBlockHeaders}, - {Name: "GetBlockBodies", Fn: s.TestGetBlockBodies}, - {Name: "Broadcast", Fn: s.TestBroadcast}, + {Name: "TestStatus", Fn: s.TestStatus}, + {Name: "TestGetBlockHeaders", Fn: s.TestGetBlockHeaders}, + {Name: "TestGetBlockBodies", Fn: s.TestGetBlockBodies}, + {Name: "TestBroadcast", Fn: s.TestBroadcast}, 
{Name: "TestLargeAnnounce", Fn: s.TestLargeAnnounce}, {Name: "TestMaliciousHandshake", Fn: s.TestMaliciousHandshake}, {Name: "TestMaliciousStatus", Fn: s.TestMaliciousStatus}, - {Name: "TestMaliciousStatus_66", Fn: s.TestMaliciousStatus}, - {Name: "TestTransactions", Fn: s.TestTransaction}, - {Name: "TestMaliciousTransactions", Fn: s.TestMaliciousTx}, + {Name: "TestTransaction", Fn: s.TestTransaction}, + {Name: "TestMaliciousTx", Fn: s.TestMaliciousTx}, } } func (s *Suite) Eth66Tests() []utesting.Test { return []utesting.Test{ // only proceed with eth66 test suite if node supports eth 66 protocol - {Name: "Status_66", Fn: s.TestStatus_66}, - {Name: "GetBlockHeaders_66", Fn: s.TestGetBlockHeaders_66}, + {Name: "TestStatus_66", Fn: s.TestStatus_66}, + {Name: "TestGetBlockHeaders_66", Fn: s.TestGetBlockHeaders_66}, {Name: "TestSimultaneousRequests_66", Fn: s.TestSimultaneousRequests_66}, {Name: "TestSameRequestID_66", Fn: s.TestSameRequestID_66}, {Name: "TestZeroRequestID_66", Fn: s.TestZeroRequestID_66}, - {Name: "GetBlockBodies_66", Fn: s.TestGetBlockBodies_66}, - {Name: "Broadcast_66", Fn: s.TestBroadcast_66}, + {Name: "TestGetBlockBodies_66", Fn: s.TestGetBlockBodies_66}, + {Name: "TestBroadcast_66", Fn: s.TestBroadcast_66}, {Name: "TestLargeAnnounce_66", Fn: s.TestLargeAnnounce_66}, {Name: "TestMaliciousHandshake_66", Fn: s.TestMaliciousHandshake_66}, - {Name: "TestTransactions_66", Fn: s.TestTransaction_66}, - {Name: "TestMaliciousTransactions_66", Fn: s.TestMaliciousTx_66}, + {Name: "TestMaliciousStatus_66", Fn: s.TestMaliciousStatus_66}, + {Name: "TestTransaction_66", Fn: s.TestTransaction_66}, + {Name: "TestMaliciousTx_66", Fn: s.TestMaliciousTx_66}, + {Name: "TestLargeTxRequest_66", Fn: s.TestLargeTxRequest_66}, + {Name: "TestNewPooledTxs_66", Fn: s.TestNewPooledTxs_66}, } } @@ -140,6 +144,7 @@ func (s *Suite) TestStatus(t *utesting.T) { if err != nil { t.Fatalf("could not dial: %v", err) } + defer conn.Close() // get protoHandshake conn.handshake(t) // get status @@ -157,6 +162,7 @@ func (s *Suite) TestMaliciousStatus(t *utesting.T) { if err != nil { t.Fatalf("could not dial: %v", err) } + defer conn.Close() // get protoHandshake conn.handshake(t) status := &Status{ @@ -191,6 +197,7 @@ func (s *Suite) TestGetBlockHeaders(t *utesting.T) { if err != nil { t.Fatalf("could not dial: %v", err) } + defer conn.Close() conn.handshake(t) conn.statusExchange(t, s.chain, nil) @@ -229,6 +236,7 @@ func (s *Suite) TestGetBlockBodies(t *utesting.T) { if err != nil { t.Fatalf("could not dial: %v", err) } + defer conn.Close() conn.handshake(t) conn.statusExchange(t, s.chain, nil) @@ -252,19 +260,28 @@ func (s *Suite) TestGetBlockBodies(t *utesting.T) { // TestBroadcast tests whether a block announcement is correctly // propagated to the given node's peer(s). 
func (s *Suite) TestBroadcast(t *utesting.T) { + s.sendNextBlock(t) +} + +func (s *Suite) sendNextBlock(t *utesting.T) { sendConn, receiveConn := s.setupConnection(t), s.setupConnection(t) + defer sendConn.Close() + defer receiveConn.Close() + + // create new block announcement nextBlock := len(s.chain.blocks) blockAnnouncement := &NewBlock{ Block: s.fullChain.blocks[nextBlock], TD: s.fullChain.TD(nextBlock + 1), } + // send announcement and wait for node to request the header s.testAnnounce(t, sendConn, receiveConn, blockAnnouncement) - // update test suite chain - s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock]) // wait for client to update its chain - if err := receiveConn.waitForBlock(s.chain.Head()); err != nil { + if err := receiveConn.waitForBlock(s.fullChain.blocks[nextBlock]); err != nil { t.Fatal(err) } + // update test suite chain + s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock]) } // TestMaliciousHandshake tries to send malicious data during the handshake. @@ -273,6 +290,7 @@ func (s *Suite) TestMaliciousHandshake(t *utesting.T) { if err != nil { t.Fatalf("could not dial: %v", err) } + defer conn.Close() // write hello to client pub0 := crypto.FromECDSAPub(&conn.ourKey.PublicKey)[1:] handshakes := []*Hello{ @@ -372,28 +390,25 @@ func (s *Suite) TestLargeAnnounce(t *utesting.T) { t.Fatalf("could not write to connection: %v", err) } // Invalid announcement, check that peer disconnected - switch msg := sendConn.ReadAndServe(s.chain, timeout).(type) { + switch msg := sendConn.ReadAndServe(s.chain, time.Second*8).(type) { case *Disconnect: case *Error: break default: t.Fatalf("unexpected: %s wanted disconnect", pretty.Sdump(msg)) } + sendConn.Close() } // Test the last block as a valid block - sendConn := s.setupConnection(t) - receiveConn := s.setupConnection(t) - s.testAnnounce(t, sendConn, receiveConn, blocks[3]) - // update test suite chain - s.chain.blocks = append(s.chain.blocks, s.fullChain.blocks[nextBlock]) - // wait for client to update its chain - if err := receiveConn.waitForBlock(s.fullChain.blocks[nextBlock]); err != nil { - t.Fatal(err) - } + s.sendNextBlock(t) } func (s *Suite) TestOldAnnounce(t *utesting.T) { - s.oldAnnounce(t, s.setupConnection(t), s.setupConnection(t)) + sendConn, recvConn := s.setupConnection(t), s.setupConnection(t) + defer sendConn.Close() + defer recvConn.Close() + + s.oldAnnounce(t, sendConn, recvConn) } func (s *Suite) oldAnnounce(t *utesting.T, sendConn, receiveConn *Conn) { @@ -406,11 +421,19 @@ func (s *Suite) oldAnnounce(t *utesting.T, sendConn, receiveConn *Conn) { t.Fatalf("could not write to connection: %v", err) } - switch msg := receiveConn.ReadAndServe(s.chain, timeout*2).(type) { + switch msg := receiveConn.ReadAndServe(s.chain, time.Second*8).(type) { case *NewBlock: - t.Fatalf("unexpected: block propagated: %s", pretty.Sdump(msg)) + block := *msg + if block.Block.Hash() == oldBlockAnnounce.Block.Hash() { + t.Fatalf("unexpected: block propagated: %s", pretty.Sdump(msg)) + } case *NewBlockHashes: - t.Fatalf("unexpected: block announced: %s", pretty.Sdump(msg)) + hashes := *msg + for _, hash := range hashes { + if hash.Hash == oldBlockAnnounce.Block.Hash() { + t.Fatalf("unexpected: block announced: %s", pretty.Sdump(msg)) + } + } case *Error: errMsg := *msg // check to make sure error is timeout (propagation didn't come through == test successful) @@ -431,7 +454,6 @@ func (s *Suite) testAnnounce(t *utesting.T, sendConn, receiveConn *Conn, blockAn } func (s *Suite) waitAnnounce(t 
*utesting.T, conn *Conn, blockAnnouncement *NewBlock) { - timeout := 20 * time.Second switch msg := conn.ReadAndServe(s.chain, timeout).(type) { case *NewBlock: t.Logf("received NewBlock message: %s", pretty.Sdump(msg.Block)) @@ -502,15 +524,27 @@ func (s *Suite) TestTransaction(t *utesting.T) { } func (s *Suite) TestMaliciousTx(t *utesting.T) { - tests := []*types.Transaction{ + badTxs := []*types.Transaction{ getOldTxFromChain(t, s), invalidNonceTx(t, s), hugeAmount(t, s), hugeGasPrice(t, s), hugeData(t, s), } - for i, tx := range tests { + sendConn := s.setupConnection(t) + defer sendConn.Close() + // set up receiving connection before sending txs to make sure + // no announcements are missed + recvConn := s.setupConnection(t) + defer recvConn.Close() + + for i, tx := range badTxs { t.Logf("Testing malicious tx propagation: %v\n", i) - sendFailingTx(t, s, tx) + if err := sendConn.Write(&Transactions{tx}); err != nil { + t.Fatalf("could not write to connection: %v", err) + } + } + // check to make sure bad txs aren't propagated + waitForTxPropagation(t, s, badTxs, recvConn) } diff --git a/cmd/devp2p/internal/ethtest/suite_test.go b/cmd/devp2p/internal/ethtest/suite_test.go new file mode 100644 index 0000000000..6e3217151a --- /dev/null +++ b/cmd/devp2p/internal/ethtest/suite_test.go @@ -0,0 +1,107 @@ +// Copyright 2020 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
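// Hedged overview (editor's addition, not part of the patch) of how the new
// file below drives the suite from `go test`; all identifiers are taken from
// this file:
//
//	geth, _ := runGeth()                      // in-process node, discovery off
//	suite, _ := NewSuite(geth.Server().Self(), fullchainFile, genesisFile)
//	for _, test := range suite.AllEthTests() {
//		utesting.RunTAP([]utesting.Test{test}, os.Stdout) // TAP-style report
//	}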
+ +package ethtest + +import ( + "os" + "testing" + "time" + + "github.com/ethereum/go-ethereum/eth" + "github.com/ethereum/go-ethereum/eth/ethconfig" + "github.com/ethereum/go-ethereum/internal/utesting" + "github.com/ethereum/go-ethereum/node" + "github.com/ethereum/go-ethereum/p2p" +) + +var ( + genesisFile = "./testdata/genesis.json" + halfchainFile = "./testdata/halfchain.rlp" + fullchainFile = "./testdata/chain.rlp" +) + +func TestEthSuite(t *testing.T) { + geth, err := runGeth() + if err != nil { + t.Fatalf("could not run geth: %v", err) + } + defer geth.Close() + + suite, err := NewSuite(geth.Server().Self(), fullchainFile, genesisFile) + if err != nil { + t.Fatalf("could not create new test suite: %v", err) + } + for _, test := range suite.AllEthTests() { + t.Run(test.Name, func(t *testing.T) { + result := utesting.RunTAP([]utesting.Test{{Name: test.Name, Fn: test.Fn}}, os.Stdout) + if result[0].Failed { + t.Fatal() + } + }) + } +} + +// runGeth creates and starts a geth node +func runGeth() (*node.Node, error) { + stack, err := node.New(&node.Config{ + P2P: p2p.Config{ + ListenAddr: "127.0.0.1:0", + NoDiscovery: true, + MaxPeers: 10, // in case a test requires multiple connections, can be changed in the future + NoDial: true, + }, + }) + if err != nil { + return nil, err + } + + err = setupGeth(stack) + if err != nil { + stack.Close() + return nil, err + } + if err = stack.Start(); err != nil { + stack.Close() + return nil, err + } + return stack, nil +} + +func setupGeth(stack *node.Node) error { + chain, err := loadChain(halfchainFile, genesisFile) + if err != nil { + return err + } + + backend, err := eth.New(stack, &ethconfig.Config{ + Genesis: &chain.genesis, + NetworkId: chain.genesis.Config.ChainID.Uint64(), // 19763 + DatabaseCache: 10, + TrieCleanCache: 10, + TrieCleanCacheJournal: "", + TrieCleanCacheRejournal: 60 * time.Minute, + TrieDirtyCache: 16, + TrieTimeout: 60 * time.Minute, + SnapshotCache: 10, + }) + if err != nil { + return err + } + + _, err = backend.BlockChain().InsertChain(chain.blocks[1:]) + return err +} diff --git a/cmd/devp2p/internal/ethtest/transaction.go b/cmd/devp2p/internal/ethtest/transaction.go index 21aa221e8b..a6166bd2e3 100644 --- a/cmd/devp2p/internal/ethtest/transaction.go +++ b/cmd/devp2p/internal/ethtest/transaction.go @@ -17,12 +17,15 @@ package ethtest import ( + "math/big" + "strings" "time" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/internal/utesting" + "github.com/ethereum/go-ethereum/params" ) //var faucetAddr = common.HexToAddress("0x71562b71999873DB5b286dF957af199Ec94617F7") @@ -30,6 +33,7 @@ var faucetKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c666 func sendSuccessfulTx(t *utesting.T, s *Suite, tx *types.Transaction) { sendConn := s.setupConnection(t) + defer sendConn.Close() sendSuccessfulTxWithConn(t, s, tx, sendConn) } @@ -39,7 +43,9 @@ func sendSuccessfulTxWithConn(t *utesting.T, s *Suite, tx *types.Transaction, se if err := sendConn.Write(&Transactions{tx}); err != nil { t.Fatal(err) } - time.Sleep(100 * time.Millisecond) + // update last nonce seen + nonce = tx.Nonce() + recvConn := s.setupConnection(t) // Wait for the transaction announcement switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) { @@ -65,29 +71,84 @@ func sendSuccessfulTxWithConn(t *utesting.T, s *Suite, tx *types.Transaction, se } } -func sendFailingTx(t *utesting.T, s *Suite, tx *types.Transaction) { - sendConn,
recvConn := s.setupConnection(t), s.setupConnection(t) - sendFailingTxWithConns(t, s, tx, sendConn, recvConn) -} +var nonce = uint64(99) -func sendFailingTxWithConns(t *utesting.T, s *Suite, tx *types.Transaction, sendConn, recvConn *Conn) { - // Wait for a transaction announcement - switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) { - case *NewPooledTransactionHashes: - break - default: - t.Logf("unexpected message, logging: %v", pretty.Sdump(msg)) - } - // Send the transaction - if err := sendConn.Write(&Transactions{tx}); err != nil { +func sendMultipleSuccessfulTxs(t *utesting.T, s *Suite, sendConn *Conn, txs []*types.Transaction) { + txMsg := Transactions(txs) + t.Logf("sending %d txs\n", len(txs)) + + recvConn := s.setupConnection(t) + defer recvConn.Close() + + // Send the transactions + if err := sendConn.Write(&txMsg); err != nil { t.Fatal(err) } + // update nonce + nonce = txs[len(txs)-1].Nonce() + // Wait for the transaction announcement(s) and make sure all sent txs are being propagated + recvHashes := make([]common.Hash, 0) + // all txs should be announced within 3 announcements + for i := 0; i < 3; i++ { + switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) { + case *Transactions: + for _, tx := range *msg { + recvHashes = append(recvHashes, tx.Hash()) + } + case *NewPooledTransactionHashes: + recvHashes = append(recvHashes, *msg...) + default: + if !strings.Contains(pretty.Sdump(msg), "i/o timeout") { + t.Fatalf("unexpected message while waiting to receive txs: %s", pretty.Sdump(msg)) + } + } + // break once all 2000 txs have been received + if len(recvHashes) == 2000 { + break + } + if len(recvHashes) > 0 { + _, missingTxs := compareReceivedTxs(recvHashes, txs) + if len(missingTxs) > 0 { + continue + } else { + t.Logf("successfully received all %d txs", len(txs)) + return + } + } + } + _, missingTxs := compareReceivedTxs(recvHashes, txs) + if len(missingTxs) > 0 { + for _, missing := range missingTxs { + t.Logf("missing tx: %v", missing.Hash()) + } + t.Fatalf("missing %d txs", len(missingTxs)) + } +} + +func waitForTxPropagation(t *utesting.T, s *Suite, txs []*types.Transaction, recvConn *Conn) { // Wait for another transaction announcement - switch msg := recvConn.ReadAndServe(s.chain, timeout).(type) { + switch msg := recvConn.ReadAndServe(s.chain, time.Second*8).(type) { case *Transactions: - t.Fatalf("Received unexpected transaction announcement: %v", msg) + // check to see if any of the failing txs were in the announcement + recvTxs := make([]common.Hash, len(*msg)) + for i, recvTx := range *msg { + recvTxs[i] = recvTx.Hash() + } + badTxs, _ := compareReceivedTxs(recvTxs, txs) + if len(badTxs) > 0 { + for _, tx := range badTxs { + t.Logf("received bad tx: %v", tx) + } + t.Fatalf("received %d bad txs", len(badTxs)) + } case *NewPooledTransactionHashes: - t.Fatalf("Received unexpected pooledTx announcement: %v", msg) + badTxs, _ := compareReceivedTxs(*msg, txs) + if len(badTxs) > 0 { + for _, tx := range badTxs { + t.Logf("received bad tx: %v", tx) + } + t.Fatalf("received %d bad txs", len(badTxs)) + } case *Error: // Transaction should not be announced -> wait for timeout return @@ -96,6 +157,29 @@ func sendFailingTxWithConns(t *utesting.T, s *Suite, tx *types.Transaction, send } } +// compareReceivedTxs compares the received set of txs against the given set of txs, +// returning both the set of received txs that were present within the given txs, and +// the set of txs that were missing from the set of received txs +func compareReceivedTxs(recvTxs
[]common.Hash, txs []*types.Transaction) (present []*types.Transaction, missing []*types.Transaction) { + // create a map of the hashes received from node + recvHashes := make(map[common.Hash]common.Hash) + for _, hash := range recvTxs { + recvHashes[hash] = hash + } + + // collect present txs and missing txs separately + present = make([]*types.Transaction, 0) + missing = make([]*types.Transaction, 0) + for _, tx := range txs { + if _, exists := recvHashes[tx.Hash()]; exists { + present = append(present, tx) + } else { + missing = append(missing, tx) + } + } + return present, missing +} + func unknownTx(t *utesting.T, s *Suite) *types.Transaction { tx := getNextTxFromChain(t, s) var to common.Address @@ -103,7 +187,7 @@ func unknownTx(t *utesting.T, s *Suite) *types.Transaction { to = *tx.To() } txNew := types.NewTransaction(tx.Nonce()+1, to, tx.Value(), tx.Gas(), tx.GasPrice(), tx.Data()) - return signWithFaucet(t, txNew) + return signWithFaucet(t, s.chain.chainConfig, txNew) } func getNextTxFromChain(t *utesting.T, s *Suite) *types.Transaction { @@ -122,6 +206,30 @@ func getNextTxFromChain(t *utesting.T, s *Suite) *types.Transaction { return tx } +func generateTxs(t *utesting.T, s *Suite, numTxs int) (map[common.Hash]common.Hash, []*types.Transaction) { + txHashMap := make(map[common.Hash]common.Hash, numTxs) + txs := make([]*types.Transaction, numTxs) + + nextTx := getNextTxFromChain(t, s) + gas := nextTx.Gas() + + nonce = nonce + 1 + // generate txs + for i := 0; i < numTxs; i++ { + tx := generateTx(t, s.chain.chainConfig, nonce, gas) + txHashMap[tx.Hash()] = tx.Hash() + txs[i] = tx + nonce = nonce + 1 + } + return txHashMap, txs +} + +func generateTx(t *utesting.T, chainConfig *params.ChainConfig, nonce uint64, gas uint64) *types.Transaction { + var to common.Address + tx := types.NewTransaction(nonce, to, big.NewInt(1), gas, big.NewInt(1), []byte{}) + return signWithFaucet(t, chainConfig, tx) +} + func getOldTxFromChain(t *utesting.T, s *Suite) *types.Transaction { var tx *types.Transaction for _, blocks := range s.fullChain.blocks[:s.chain.Len()-1] { @@ -144,7 +252,7 @@ func invalidNonceTx(t *utesting.T, s *Suite) *types.Transaction { to = *tx.To() } txNew := types.NewTransaction(tx.Nonce()-2, to, tx.Value(), tx.Gas(), tx.GasPrice(), tx.Data()) - return signWithFaucet(t, txNew) + return signWithFaucet(t, s.chain.chainConfig, txNew) } func hugeAmount(t *utesting.T, s *Suite) *types.Transaction { @@ -155,7 +263,7 @@ func hugeAmount(t *utesting.T, s *Suite) *types.Transaction { to = *tx.To() } txNew := types.NewTransaction(tx.Nonce(), to, amount, tx.Gas(), tx.GasPrice(), tx.Data()) - return signWithFaucet(t, txNew) + return signWithFaucet(t, s.chain.chainConfig, txNew) } func hugeGasPrice(t *utesting.T, s *Suite) *types.Transaction { @@ -166,7 +274,7 @@ func hugeGasPrice(t *utesting.T, s *Suite) *types.Transaction { to = *tx.To() } txNew := types.NewTransaction(tx.Nonce(), to, tx.Value(), tx.Gas(), gasPrice, tx.Data()) - return signWithFaucet(t, txNew) + return signWithFaucet(t, s.chain.chainConfig, txNew) } func hugeData(t *utesting.T, s *Suite) *types.Transaction { @@ -176,11 +284,11 @@ func hugeData(t *utesting.T, s *Suite) *types.Transaction { to = *tx.To() } txNew := types.NewTransaction(tx.Nonce(), to, tx.Value(), tx.Gas(), tx.GasPrice(), largeBuffer(2)) - return signWithFaucet(t, txNew) + return signWithFaucet(t, s.chain.chainConfig, txNew) } -func signWithFaucet(t *utesting.T, tx *types.Transaction) *types.Transaction { - signer := types.HomesteadSigner{} +func 
signWithFaucet(t *utesting.T, chainConfig *params.ChainConfig, tx *types.Transaction) *types.Transaction { + signer := types.LatestSigner(chainConfig) signedTx, err := types.SignTx(tx, signer, faucetKey) if err != nil { t.Fatalf("could not sign tx: %v\n", err) diff --git a/cmd/devp2p/internal/ethtest/types.go b/cmd/devp2p/internal/ethtest/types.go index 1e2ae77965..50a69b9418 100644 --- a/cmd/devp2p/internal/ethtest/types.go +++ b/cmd/devp2p/internal/ethtest/types.go @@ -120,6 +120,14 @@ type NewPooledTransactionHashes eth.NewPooledTransactionHashesPacket func (nb NewPooledTransactionHashes) Code() int { return 24 } +type GetPooledTransactions eth.GetPooledTransactionsPacket + +func (gpt GetPooledTransactions) Code() int { return 25 } + +type PooledTransactions eth.PooledTransactionsPacket + +func (pt PooledTransactions) Code() int { return 26 } + // Conn represents an individual connection with a peer type Conn struct { *rlpx.Conn @@ -163,6 +171,10 @@ func (c *Conn) Read() Message { msg = new(Transactions) case (NewPooledTransactionHashes{}).Code(): msg = new(NewPooledTransactionHashes) + case (GetPooledTransactions{}.Code()): + msg = new(GetPooledTransactions) + case (PooledTransactions{}.Code()): + msg = new(PooledTransactions) default: return errorf("invalid message code: %d", code) } @@ -178,8 +190,7 @@ func (c *Conn) Read() Message { func (c *Conn) ReadAndServe(chain *Chain, timeout time.Duration) Message { start := time.Now() for time.Since(start) < timeout { - timeout := time.Now().Add(10 * time.Second) - c.SetReadDeadline(timeout) + c.SetReadDeadline(time.Now().Add(5 * time.Second)) switch msg := c.Read().(type) { case *Ping: c.Write(&Pong{}) @@ -323,8 +334,11 @@ loop: func (c *Conn) waitForBlock(block *types.Block) error { defer c.SetReadDeadline(time.Time{}) - timeout := time.Now().Add(20 * time.Second) - c.SetReadDeadline(timeout) + c.SetReadDeadline(time.Now().Add(20 * time.Second)) + // note: if the node has not yet imported the block, it will respond + // to the GetBlockHeaders request with an empty BlockHeaders response, + // so the GetBlockHeaders request must be sent again until the BlockHeaders + // response contains the desired header. for { req := &GetBlockHeaders{Origin: eth.HashOrNumber{Hash: block.Hash()}, Amount: 1} if err := c.Write(req); err != nil { @@ -332,8 +346,10 @@ func (c *Conn) waitForBlock(block *types.Block) error { } switch msg := c.Read().(type) { case *BlockHeaders: - if len(*msg) > 0 { - return nil + for _, header := range *msg { + if header.Number.Uint64() == block.NumberU64() { + return nil + } } time.Sleep(100 * time.Millisecond) default: diff --git a/cmd/devp2p/nodeset.go b/cmd/devp2p/nodeset.go index 2d86c3f65a..1d78e34c73 100644 --- a/cmd/devp2p/nodeset.go +++ b/cmd/devp2p/nodeset.go @@ -71,6 +71,7 @@ func writeNodesJSON(file string, nodes nodeSet) { } } +// nodes returns the node records contained in the set. func (ns nodeSet) nodes() []*enode.Node { result := make([]*enode.Node, 0, len(ns)) for _, n := range ns { @@ -83,12 +84,37 @@ func (ns nodeSet) nodes() []*enode.Node { return result } +// add ensures the given nodes are present in the set. func (ns nodeSet) add(nodes ...*enode.Node) { for _, n := range nodes { - ns[n.ID()] = nodeJSON{Seq: n.Seq(), N: n} + v := ns[n.ID()] + v.N = n + v.Seq = n.Seq() + ns[n.ID()] = v } } +// topN returns the top n nodes by score as a new set. 
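// Hypothetical usage (editor's addition, not part of the patch) of topN with
// the JSON helpers above; "nodes.json" is an assumed input path:
//
//	ns := loadNodesJSON("nodes.json")
//	writeNodesJSON("-", ns.topN(10)) // keep the 10 highest-scoring records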
+func (ns nodeSet) topN(n int) nodeSet { + if n >= len(ns) { + return ns + } + + byscore := make([]nodeJSON, 0, len(ns)) + for _, v := range ns { + byscore = append(byscore, v) + } + sort.Slice(byscore, func(i, j int) bool { + return byscore[i].Score >= byscore[j].Score + }) + result := make(nodeSet, n) + for _, v := range byscore[:n] { + result[v.N.ID()] = v + } + return result +} + +// verify performs integrity checks on the node set. func (ns nodeSet) verify() error { for id, n := range ns { if n.N.ID() != id { diff --git a/cmd/devp2p/nodesetcmd.go b/cmd/devp2p/nodesetcmd.go index 33de1fdf31..848288c9cf 100644 --- a/cmd/devp2p/nodesetcmd.go +++ b/cmd/devp2p/nodesetcmd.go @@ -17,8 +17,12 @@ package main import ( + "errors" "fmt" "net" + "sort" + "strconv" + "strings" "time" "github.com/ethereum/go-ethereum/core/forkid" @@ -60,25 +64,64 @@ func nodesetInfo(ctx *cli.Context) error { ns := loadNodesJSON(ctx.Args().First()) fmt.Printf("Set contains %d nodes.\n", len(ns)) + showAttributeCounts(ns) return nil } +// showAttributeCounts prints the distribution of ENR attributes in a node set. +func showAttributeCounts(ns nodeSet) { + attrcount := make(map[string]int) + var attrlist []interface{} + for _, n := range ns { + r := n.N.Record() + attrlist = r.AppendElements(attrlist[:0])[1:] + for i := 0; i < len(attrlist); i += 2 { + key := attrlist[i].(string) + attrcount[key]++ + } + } + + var keys []string + var maxlength int + for key := range attrcount { + keys = append(keys, key) + if len(key) > maxlength { + maxlength = len(key) + } + } + sort.Strings(keys) + fmt.Println("ENR attribute counts:") + for _, key := range keys { + fmt.Printf("%s%s: %d\n", strings.Repeat(" ", maxlength-len(key)+1), key, attrcount[key]) + } +} + func nodesetFilter(ctx *cli.Context) error { if ctx.NArg() < 1 { return fmt.Errorf("need nodes file as argument") } - ns := loadNodesJSON(ctx.Args().First()) + // Parse -limit. + limit, err := parseFilterLimit(ctx.Args().Tail()) + if err != nil { + return err + } + // Parse the filters. filter, err := andFilter(ctx.Args().Tail()) if err != nil { return err } + // Load nodes and apply filters. + ns := loadNodesJSON(ctx.Args().First()) result := make(nodeSet) for id, n := range ns { if filter(n) { result[id] = n } } + if limit >= 0 { + result = result.topN(limit) + } writeNodesJSON("-", result) return nil } @@ -91,6 +134,7 @@ type nodeFilterC struct { } var filterFlags = map[string]nodeFilterC{ + "-limit": {1, trueFilter}, // needed to skip over -limit "-ip": {1, ipFilter}, "-min-age": {1, minAgeFilter}, "-eth-network": {1, ethFilter}, @@ -98,6 +142,7 @@ "-snap": {0, snapFilter}, } +// parseFilters parses nodeFilters from args. func parseFilters(args []string) ([]nodeFilter, error) { var filters []nodeFilter for len(args) > 0 { @@ -118,6 +163,26 @@ return filters, nil } +// parseFilterLimit parses the -limit option in args. It returns -1 if there is no limit. +func parseFilterLimit(args []string) (int, error) { + limit := -1 + for i, arg := range args { + if arg == "-limit" { + if i == len(args)-1 { + return -1, errors.New("-limit requires an argument") + } + n, err := strconv.Atoi(args[i+1]) + if err != nil { + return -1, fmt.Errorf("invalid -limit %q", args[i+1]) + } + limit = n + } + } + return limit, nil +} + +// andFilter parses node filters in args and returns a single filter that requires all +// of them to match.
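// Hedged composition example (editor's addition, not part of the patch); the
// concrete filter arguments are assumptions picked from filterFlags above:
//
//	args := []string{"-limit", "10", "-eth-network", "mainnet"}
//	limit, _ := parseFilterLimit(args) // 10
//	f, _ := andFilter(args)            // -limit itself matches via trueFilter
//	// f(n) reports whether every parsed sub-filter accepts n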
func andFilter(args []string) (nodeFilter, error) { checks, err := parseFilters(args) if err != nil { @@ -134,6 +199,10 @@ func andFilter(args []string) (nodeFilter, error) { return f, nil } +func trueFilter(args []string) (nodeFilter, error) { + return func(n nodeJSON) bool { return true }, nil +} + func ipFilter(args []string) (nodeFilter, error) { _, cidr, err := net.ParseCIDR(args[0]) if err != nil { diff --git a/cmd/evm/README.md b/cmd/evm/README.md index 7742ccbbb7..cdff41f904 100644 --- a/cmd/evm/README.md +++ b/cmd/evm/README.md @@ -256,9 +256,9 @@ Error code: 4 Another thing that can be done, is to chain invocations: ``` ./evm t8n --input.alloc=./testdata/1/alloc.json --input.txs=./testdata/1/txs.json --input.env=./testdata/1/env.json --output.alloc=stdout | ./evm t8n --input.alloc=stdin --input.env=./testdata/1/env.json --input.txs=./testdata/1/txs.json -INFO [01-21|22:41:22.963] rejected tx index=1 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" -INFO [01-21|22:41:22.966] rejected tx index=0 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" -INFO [01-21|22:41:22.967] rejected tx index=1 hash="0557ba…18d673" from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" +INFO [01-21|22:41:22.963] rejected tx index=1 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" +INFO [01-21|22:41:22.966] rejected tx index=0 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" +INFO [01-21|22:41:22.967] rejected tx index=1 hash=0557ba..18d673 from=0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192 error="nonce too low: address 0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192, tx: 0 state: 1" ``` What happened here, is that we first applied two identical transactions, so the second one was rejected. diff --git a/cmd/evm/testdata/8/readme.md b/cmd/evm/testdata/8/readme.md index 778fc6151a..e021cd7e2e 100644 --- a/cmd/evm/testdata/8/readme.md +++ b/cmd/evm/testdata/8/readme.md @@ -56,8 +56,8 @@ dir=./testdata/8 \ If we try to execute it on older rules: ``` dir=./testdata/8 && ./evm t8n --state.fork=Istanbul --input.alloc=$dir/alloc.json --input.txs=$dir/txs.json --input.env=$dir/env.json -INFO [01-21|23:21:51.265] rejected tx index=0 hash="d2818d…6ab3da" error="tx type not supported" -INFO [01-21|23:21:51.265] rejected tx index=1 hash="26ea00…81c01b" from=0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B error="nonce too high: address 0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B, tx: 1 state: 0" -INFO [01-21|23:21:51.265] rejected tx index=2 hash="698d01…369cee" error="tx type not supported" +INFO [01-21|23:21:51.265] rejected tx index=0 hash=d2818d..6ab3da error="tx type not supported" +INFO [01-21|23:21:51.265] rejected tx index=1 hash=26ea00..81c01b from=0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B error="nonce too high: address 0xa94f5374Fce5edBC8E2a8697C15331677e6EbF0B, tx: 1 state: 0" +INFO [01-21|23:21:51.265] rejected tx index=2 hash=698d01..369cee error="tx type not supported" ``` Number `1` and `3` are not applicable, and therefore number `2` has wrong nonce. 
\ No newline at end of file diff --git a/cmd/faucet/faucet.go b/cmd/faucet/faucet.go index 094f5a498b..06171e9a68 100644 --- a/cmd/faucet/faucet.go +++ b/cmd/faucet/faucet.go @@ -87,6 +87,9 @@ var ( twitterTokenFlag = flag.String("twitter.token", "", "Bearer token to authenticate with the v2 Twitter API") twitterTokenV1Flag = flag.String("twitter.token.v1", "", "Bearer token to authenticate with the v1.1 Twitter API") + + goerliFlag = flag.Bool("goerli", false, "Initializes the faucet with Görli network config") + rinkebyFlag = flag.Bool("rinkeby", false, "Initializes the faucet with Rinkeby network config") ) var ( @@ -146,13 +149,9 @@ func main() { log.Crit("Failed to render the faucet template", "err", err) } // Load and parse the genesis block requested by the user - blob, err := ioutil.ReadFile(*genesisFlag) + genesis, err := getGenesis(genesisFlag, *goerliFlag, *rinkebyFlag) if err != nil { - log.Crit("Failed to read genesis block contents", "genesis", *genesisFlag, "err", err) - } - genesis := new(core.Genesis) - if err = json.Unmarshal(blob, genesis); err != nil { - log.Crit("Failed to parse genesis block json", "err", err) + log.Crit("Failed to parse genesis config", "err", err) } // Convert the bootnodes to internal enode representations var enodes []*enode.Node @@ -164,7 +163,8 @@ func main() { } } // Load up the account key and decrypt its password - if blob, err = ioutil.ReadFile(*accPassFlag); err != nil { + blob, err := ioutil.ReadFile(*accPassFlag) + if err != nil { log.Crit("Failed to read account password contents", "file", *accPassFlag, "err", err) } pass := strings.TrimSuffix(string(blob), "\n") @@ -886,3 +886,19 @@ func authNoAuth(url string) (string, string, common.Address, error) { } return address.Hex() + "@noauth", "", address, nil } + +// getGenesis returns a genesis based on input args +func getGenesis(genesisFlag *string, goerliFlag bool, rinkebyFlag bool) (*core.Genesis, error) { + switch { + case genesisFlag != nil: + var genesis core.Genesis + err := common.LoadJSON(*genesisFlag, &genesis) + return &genesis, err + case goerliFlag: + return core.DefaultGoerliGenesisBlock(), nil + case rinkebyFlag: + return core.DefaultRinkebyGenesisBlock(), nil + default: + return nil, fmt.Errorf("no genesis flag provided") + } +} diff --git a/cmd/geth/config.go b/cmd/geth/config.go index e15ff6b147..147b6e686e 100644 --- a/cmd/geth/config.go +++ b/cmd/geth/config.go @@ -30,6 +30,7 @@ import ( "github.com/ethereum/go-ethereum/cmd/utils" "github.com/ethereum/go-ethereum/common/http" "github.com/ethereum/go-ethereum/eth" + "github.com/ethereum/go-ethereum/eth/catalyst" "github.com/ethereum/go-ethereum/eth/ethconfig" "github.com/ethereum/go-ethereum/extension/privacyExtension" "github.com/ethereum/go-ethereum/internal/ethapi" @@ -188,8 +189,17 @@ func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) { utils.Fatalf("Error initialising Private Transaction Manager: %s", err.Error()) } - // Quorum - returning `ethService` too for the Raft and extension service - backend, ethService := utils.RegisterEthService(stack, &cfg.Eth) + backend, eth := utils.RegisterEthService(stack, &cfg.Eth) + + // Configure catalyst. 
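// Descriptive note (editor's addition, not part of the patch): in light-client
// mode RegisterEthService returns a nil *eth.Ethereum as its second value, so
// the nil check below must abort before catalyst.Register is attempted.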
+ if ctx.GlobalBool(utils.CatalystFlag.Name) { + if eth == nil { + utils.Fatalf("Catalyst does not work in light client mode.") + } + if err := catalyst.Register(stack, eth); err != nil { + utils.Fatalf("%v", err) + } + } // Quorum // plugin service must be after eth service so that eth service will be stopped gradually if any of the plugin @@ -197,7 +207,7 @@ func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) { if cfg.Node.Plugins != nil { utils.RegisterPluginService(stack, &cfg.Node, ctx.Bool(utils.PluginSkipVerifyFlag.Name), ctx.Bool(utils.PluginLocalVerifyFlag.Name), ctx.String(utils.PluginPublicKeyFlag.Name)) log.Debug("plugin manager", "value", stack.PluginManager()) - err := ethService.NotifyRegisteredPluginService(stack.PluginManager()) + err := eth.NotifyRegisteredPluginService(stack.PluginManager()) if err != nil { utils.Fatalf("Error initialising QLight Token Manager: %s", err.Error()) } @@ -208,11 +218,11 @@ func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) { } if ctx.GlobalBool(utils.RaftModeFlag.Name) && !cfg.Eth.QuorumLightClient.Enabled() { - utils.RegisterRaftService(stack, ctx, &cfg.Node, ethService) + utils.RegisterRaftService(stack, ctx, &cfg.Node, eth) } if private.IsQuorumPrivacyEnabled() { - utils.RegisterExtensionService(stack, ethService) + utils.RegisterExtensionService(stack, eth) } // End Quorum diff --git a/cmd/geth/config_test.go b/cmd/geth/config_test.go index 3afac230e1..8b520c9a34 100644 --- a/cmd/geth/config_test.go +++ b/cmd/geth/config_test.go @@ -372,7 +372,7 @@ func TestFlagsConfig(t *testing.T) { assert.Equal(t, bootNodes(t).Nodes, p2p.BootstrapNodes) //assert.Equal(t, bootNodesV5(t).Nodes, p2p.BootstrapNodesV5) - assert.Equal(t, ":0", p2p.ListenAddr) + assert.Equal(t, "", p2p.ListenAddr) assert.Equal(t, false, p2p.EnableMsgEvents) type NetRestrictType struct { diff --git a/cmd/geth/dbcmd.go b/cmd/geth/dbcmd.go index 4172362258..3ecdf06585 100644 --- a/cmd/geth/dbcmd.go +++ b/cmd/geth/dbcmd.go @@ -20,6 +20,7 @@ import ( "fmt" "os" "path/filepath" + "sort" "strconv" "time" @@ -60,6 +61,7 @@ Remove blockchain and state databases`, dbDeleteCmd, dbPutCmd, dbGetSlotsCmd, + dbDumpFreezerIndex, }, } dbInspectCmd = cli.Command{ @@ -177,6 +179,22 @@ WARNING: This is a low-level operation which may cause database corruption!`, }, Description: "This command looks up the specified database key from the database.", } + dbDumpFreezerIndex = cli.Command{ + Action: utils.MigrateFlags(freezerInspect), + Name: "freezer-index", + Usage: "Dump out the index of a given freezer type", + ArgsUsage: "<type> <start (int)> <end (int)>", + Flags: []cli.Flag{ + utils.DataDirFlag, + utils.SyncModeFlag, + utils.MainnetFlag, + utils.RopstenFlag, + utils.RinkebyFlag, + utils.GoerliFlag, + utils.YoloV3Flag, + }, + Description: "This command displays information about the freezer index.", + } ) func removeDB(ctx *cli.Context) error { @@ -449,3 +467,43 @@ func dbDumpTrie(ctx *cli.Context) error { } return it.Err } + +func freezerInspect(ctx *cli.Context) error { + var ( + start, end int64 + disableSnappy bool + err error + ) + if ctx.NArg() < 3 { + return fmt.Errorf("required arguments: %v", ctx.Command.ArgsUsage) + } + kind := ctx.Args().Get(0) + if noSnap, ok := rawdb.FreezerNoSnappy[kind]; !ok { + var options []string + for opt := range rawdb.FreezerNoSnappy { + options = append(options, opt) + } + sort.Strings(options) + return fmt.Errorf("could not read freezer-type '%v'.
Available options: %v", kind, options) + } else { + disableSnappy = noSnap + } + if start, err = strconv.ParseInt(ctx.Args().Get(1), 10, 64); err != nil { + log.Info("Could not read start-param", "error", err) + return err + } + if end, err = strconv.ParseInt(ctx.Args().Get(2), 10, 64); err != nil { + log.Info("Could not read count param", "error", err) + return err + } + stack, _ := makeConfigNode(ctx) + defer stack.Close() + path := filepath.Join(stack.ResolvePath("chaindata"), "ancient") + log.Info("Opening freezer", "location", path, "name", kind) + if f, err := rawdb.NewFreezerTable(path, kind, disableSnappy); err != nil { + return err + } else { + f.DumpIndex(start, end) + } + return nil +} diff --git a/cmd/geth/main.go b/cmd/geth/main.go index fb5904450c..f7a65e3e4e 100644 --- a/cmd/geth/main.go +++ b/cmd/geth/main.go @@ -154,6 +154,7 @@ var ( utils.EVMInterpreterFlag, utils.MinerNotifyFullFlag, configFileFlag, + utils.CatalystFlag, // Quorum utils.QuorumImmutabilityThreshold, utils.EnableNodePermissionFlag, diff --git a/cmd/geth/snapshot.go b/cmd/geth/snapshot.go index 62ea1a9db5..524fcb8aca 100644 --- a/cmd/geth/snapshot.go +++ b/cmd/geth/snapshot.go @@ -153,7 +153,6 @@ func pruneState(ctx *cli.Context) error { defer stack.Close() chaindb := utils.MakeChainDatabase(ctx, stack, false) - defer chaindb.Close() // Quorum if config.Eth.Genesis.Config.IsQuorum { @@ -163,7 +162,7 @@ func pruneState(ctx *cli.Context) error { pruner, err := pruner.NewPruner(chaindb, stack.ResolvePath(""), stack.ResolvePath(config.Eth.TrieCleanCacheJournal), ctx.GlobalUint64(utils.BloomFilterSizeFlag.Name)) if err != nil { - log.Error("Failed to open snapshot tree", "error", err) + log.Error("Failed to open snapshot tree", "err", err) return err } if ctx.NArg() > 1 { @@ -174,12 +173,12 @@ func pruneState(ctx *cli.Context) error { if ctx.NArg() == 1 { targetRoot, err = parseRoot(ctx.Args()[0]) if err != nil { - log.Error("Failed to resolve state root", "error", err) + log.Error("Failed to resolve state root", "err", err) return err } } if err = pruner.Prune(targetRoot); err != nil { - log.Error("Failed to prune state", "error", err) + log.Error("Failed to prune state", "err", err) return err } return nil @@ -190,8 +189,6 @@ func verifyState(ctx *cli.Context) error { defer stack.Close() chaindb := utils.MakeChainDatabase(ctx, stack, true) - defer chaindb.Close() - headBlock := rawdb.ReadHeadBlock(chaindb) if headBlock == nil { log.Error("Failed to load head block") @@ -199,7 +196,7 @@ func verifyState(ctx *cli.Context) error { } snaptree, err := snapshot.New(chaindb, trie.NewDatabase(chaindb), 256, headBlock.Root(), false, false, false) if err != nil { - log.Error("Failed to open snapshot tree", "error", err) + log.Error("Failed to open snapshot tree", "err", err) return err } if ctx.NArg() > 1 { @@ -210,15 +207,15 @@ func verifyState(ctx *cli.Context) error { if ctx.NArg() == 1 { root, err = parseRoot(ctx.Args()[0]) if err != nil { - log.Error("Failed to resolve state root", "error", err) + log.Error("Failed to resolve state root", "err", err) return err } } if err := snaptree.Verify(root); err != nil { - log.Error("Failed to verfiy state", "error", err) + log.Error("Failed to verify state", "root", root, "err", err) return err } - log.Info("Verified the state") + log.Info("Verified the state", "root", root) return nil } @@ -230,8 +227,6 @@ func traverseState(ctx *cli.Context) error { defer stack.Close() chaindb := utils.MakeChainDatabase(ctx, stack, true) - defer chaindb.Close() - headBlock :=
rawdb.ReadHeadBlock(chaindb) if headBlock == nil { log.Error("Failed to load head block") @@ -248,7 +243,7 @@ func traverseState(ctx *cli.Context) error { if ctx.NArg() == 1 { root, err = parseRoot(ctx.Args()[0]) if err != nil { - log.Error("Failed to resolve state root", "error", err) + log.Error("Failed to resolve state root", "err", err) return err } log.Info("Start traversing the state", "root", root) @@ -259,7 +254,7 @@ func traverseState(ctx *cli.Context) error { triedb := trie.NewDatabase(chaindb) t, err := trie.NewSecure(root, triedb) if err != nil { - log.Error("Failed to open trie", "root", root, "error", err) + log.Error("Failed to open trie", "root", root, "err", err) return err } var ( @@ -274,13 +269,13 @@ func traverseState(ctx *cli.Context) error { accounts += 1 var acc state.Account if err := rlp.DecodeBytes(accIter.Value, &acc); err != nil { - log.Error("Invalid account encountered during traversal", "error", err) + log.Error("Invalid account encountered during traversal", "err", err) return err } if acc.Root != emptyRoot { storageTrie, err := trie.NewSecure(acc.Root, triedb) if err != nil { - log.Error("Failed to open storage trie", "root", acc.Root, "error", err) + log.Error("Failed to open storage trie", "root", acc.Root, "err", err) return err } storageIter := trie.NewIterator(storageTrie.NodeIterator(nil)) @@ -288,7 +283,7 @@ func traverseState(ctx *cli.Context) error { slots += 1 } if storageIter.Err != nil { - log.Error("Failed to traverse storage trie", "root", acc.Root, "error", storageIter.Err) + log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Err) return storageIter.Err } } @@ -306,7 +301,7 @@ func traverseState(ctx *cli.Context) error { } } if accIter.Err != nil { - log.Error("Failed to traverse state trie", "root", root, "error", accIter.Err) + log.Error("Failed to traverse state trie", "root", root, "err", accIter.Err) return accIter.Err } log.Info("State is complete", "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start))) @@ -322,8 +317,6 @@ func traverseRawState(ctx *cli.Context) error { defer stack.Close() chaindb := utils.MakeChainDatabase(ctx, stack, true) - defer chaindb.Close() - headBlock := rawdb.ReadHeadBlock(chaindb) if headBlock == nil { log.Error("Failed to load head block") @@ -340,7 +333,7 @@ func traverseRawState(ctx *cli.Context) error { if ctx.NArg() == 1 { root, err = parseRoot(ctx.Args()[0]) if err != nil { - log.Error("Failed to resolve state root", "error", err) + log.Error("Failed to resolve state root", "err", err) return err } log.Info("Start traversing the state", "root", root) @@ -351,7 +344,7 @@ func traverseRawState(ctx *cli.Context) error { triedb := trie.NewDatabase(chaindb) t, err := trie.NewSecure(root, triedb) if err != nil { - log.Error("Failed to open trie", "root", root, "error", err) + log.Error("Failed to open trie", "root", root, "err", err) return err } var ( @@ -382,13 +375,13 @@ func traverseRawState(ctx *cli.Context) error { accounts += 1 var acc state.Account if err := rlp.DecodeBytes(accIter.LeafBlob(), &acc); err != nil { - log.Error("Invalid account encountered during traversal", "error", err) + log.Error("Invalid account encountered during traversal", "err", err) return errors.New("invalid account") } if acc.Root != emptyRoot { storageTrie, err := trie.NewSecure(acc.Root, triedb) if err != nil { - log.Error("Failed to open storage trie", "root", acc.Root, "error", err) + log.Error("Failed to open storage trie", "root", 
acc.Root, "err", err) return errors.New("missing storage trie") } storageIter := storageTrie.NodeIterator(nil) @@ -411,7 +404,7 @@ func traverseRawState(ctx *cli.Context) error { } } if storageIter.Error() != nil { - log.Error("Failed to traverse storage trie", "root", acc.Root, "error", storageIter.Error()) + log.Error("Failed to traverse storage trie", "root", acc.Root, "err", storageIter.Error()) return storageIter.Error() } } @@ -430,7 +423,7 @@ func traverseRawState(ctx *cli.Context) error { } } if accIter.Error() != nil { - log.Error("Failed to traverse state trie", "root", root, "error", accIter.Error()) + log.Error("Failed to traverse state trie", "root", root, "err", accIter.Error()) return accIter.Error() } log.Info("State is complete", "nodes", nodes, "accounts", accounts, "slots", slots, "codes", codes, "elapsed", common.PrettyDuration(time.Since(start))) diff --git a/cmd/geth/usage.go b/cmd/geth/usage.go index 2373119619..1b90c8ade4 100644 --- a/cmd/geth/usage.go +++ b/cmd/geth/usage.go @@ -335,6 +335,7 @@ var AppHelpFlagGroups = []flags.FlagGroup{ utils.SnapshotFlag, utils.BloomFilterSizeFlag, cli.HelpFlag, + utils.CatalystFlag, }, }, } diff --git a/cmd/puppeth/ssh.go b/cmd/puppeth/ssh.go index da2862db2f..039cb6cb45 100644 --- a/cmd/puppeth/ssh.go +++ b/cmd/puppeth/ssh.go @@ -30,6 +30,7 @@ import ( "github.com/ethereum/go-ethereum/log" "golang.org/x/crypto/ssh" + "golang.org/x/crypto/ssh/agent" "golang.org/x/crypto/ssh/terminal" ) @@ -43,6 +44,8 @@ type sshClient struct { logger log.Logger } +const EnvSSHAuthSock = "SSH_AUTH_SOCK" + // dial establishes an SSH connection to a remote node using the current user and // the user's configured private RSA key. If that fails, password authentication // is fallen back to. server can be a string like user:identity@server:port. @@ -79,38 +82,49 @@ func dial(server string, pubkey []byte) (*sshClient, error) { if username == "" { username = user.Username } - // Configure the supported authentication methods (private key and password) - var auths []ssh.AuthMethod - path := filepath.Join(user.HomeDir, ".ssh", identity) - if buf, err := ioutil.ReadFile(path); err != nil { - log.Warn("No SSH key, falling back to passwords", "path", path, "err", err) + // Configure the supported authentication methods (ssh agent, private key and password) + var ( + auths []ssh.AuthMethod + conn net.Conn + ) + if conn, err = net.Dial("unix", os.Getenv(EnvSSHAuthSock)); err != nil { + log.Warn("Unable to dial SSH agent, falling back to private keys", "err", err) } else { - key, err := ssh.ParsePrivateKey(buf) - if err != nil { - fmt.Printf("What's the decryption password for %s? (won't be echoed)\n>", path) - blob, err := terminal.ReadPassword(int(os.Stdin.Fd())) - fmt.Println() - if err != nil { - log.Warn("Couldn't read password", "err", err) - } - key, err := ssh.ParsePrivateKeyWithPassphrase(buf, blob) + client := agent.NewClient(conn) + auths = append(auths, ssh.PublicKeysCallback(client.Signers)) + } + if err != nil { + path := filepath.Join(user.HomeDir, ".ssh", identity) + if buf, err := ioutil.ReadFile(path); err != nil { + log.Warn("No SSH key, falling back to passwords", "path", path, "err", err) + } else { + key, err := ssh.ParsePrivateKey(buf) if err != nil { - log.Warn("Failed to decrypt SSH key, falling back to passwords", "path", path, "err", err) + fmt.Printf("What's the decryption password for %s? 
(won't be echoed)\n>", path) + blob, err := terminal.ReadPassword(int(os.Stdin.Fd())) + fmt.Println() + if err != nil { + log.Warn("Couldn't read password", "err", err) + } + key, err := ssh.ParsePrivateKeyWithPassphrase(buf, blob) + if err != nil { + log.Warn("Failed to decrypt SSH key, falling back to passwords", "path", path, "err", err) + } else { + auths = append(auths, ssh.PublicKeys(key)) + } } else { auths = append(auths, ssh.PublicKeys(key)) } - } else { - auths = append(auths, ssh.PublicKeys(key)) } - } - auths = append(auths, ssh.PasswordCallback(func() (string, error) { - fmt.Printf("What's the login password for %s at %s? (won't be echoed)\n> ", username, server) - blob, err := terminal.ReadPassword(int(os.Stdin.Fd())) + auths = append(auths, ssh.PasswordCallback(func() (string, error) { + fmt.Printf("What's the login password for %s at %s? (won't be echoed)\n> ", username, server) + blob, err := terminal.ReadPassword(int(os.Stdin.Fd())) - fmt.Println() - return string(blob), err - })) + fmt.Println() + return string(blob), err + })) + } // Resolve the IP address of the remote server addr, err := net.LookupHost(hostname) if err != nil { diff --git a/cmd/utils/flags.go b/cmd/utils/flags.go index 21c00ba153..7a97382b49 100644 --- a/cmd/utils/flags.go +++ b/cmd/utils/flags.go @@ -801,6 +801,10 @@ var ( Usage: "External EVM configuration (default = built-in interpreter)", Value: "", } + CatalystFlag = cli.BoolFlag{ + Name: "catalyst", + Usage: "Catalyst mode (eth2 integration testing)", + } // Quorum - added configurable call timeout for execution of calls EVMCallTimeOutFlag = cli.IntFlag{ @@ -1495,10 +1499,11 @@ func SetP2PConfig(ctx *cli.Context, cfg *p2p.Config) { cfg.NetRestrict = list } - if ctx.GlobalBool(DeveloperFlag.Name) { + if ctx.GlobalBool(DeveloperFlag.Name) || ctx.GlobalBool(CatalystFlag.Name) { // --dev mode can't use p2p networking. cfg.MaxPeers = 0 - cfg.ListenAddr = ":0" + cfg.ListenAddr = "" + cfg.NoDial = true cfg.NoDiscovery = true cfg.DiscoveryV5 = false } @@ -2242,16 +2247,14 @@ func SetDNSDiscoveryDefaults(cfg *ethconfig.Config, genesis common.Hash) { } if url := params.KnownDNSNetwork(genesis, protocol); url != "" { cfg.EthDiscoveryURLs = []string{url} - } - if cfg.SyncMode == downloader.SnapSync { - if url := params.KnownDNSNetwork(genesis, "snap"); url != "" { - cfg.SnapDiscoveryURLs = []string{url} - } + cfg.SnapDiscoveryURLs = cfg.EthDiscoveryURLs } } // RegisterEthService adds an Ethereum client to the stack. // Quorum => returns also the ethereum service which is used by the raft service +// The second return value is the full node instance, which may be nil if the +// node is running as a light client. func RegisterEthService(stack *node.Node, cfg *ethconfig.Config) (ethapi.Backend, *eth.Ethereum) { if cfg.SyncMode == downloader.LightSync { backend, err := les.New(stack, cfg) @@ -2474,7 +2477,6 @@ func MakeGenesis(ctx *cli.Context) *core.Genesis { return genesis } -// MakeChain creates a chain manager from set command line flags. func MakeChain(ctx *cli.Context, stack *node.Node, useExist bool) (chain *core.BlockChain, chainDb ethdb.Database) { var err error var config *params.ChainConfig diff --git a/common/types.go b/common/types.go index 0afaefa565..43d4e357dc 100644 --- a/common/types.go +++ b/common/types.go @@ -183,7 +183,7 @@ func (h Hash) Hex() string { return hexutil.Encode(h[:]) } // TerminalString implements log.TerminalStringer, formatting a string for console // output during logging. 
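// Descriptive note (editor's addition, not part of the patch): with the change
// below a hash is abbreviated using a plain ASCII separator, e.g.
// "123456..abcdef" instead of "123456…abcdef", which keeps terminal logs free
// of multi-byte characters and matches the log samples updated in the READMEs
// above.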
func (h Hash) TerminalString() string { - return fmt.Sprintf("%x…%x", h[:3], h[29:]) + return fmt.Sprintf("%x..%x", h[:3], h[29:]) } // String implements the stringer interface and is used also by the logger when diff --git a/consensus/ethash/consensus.go b/consensus/ethash/consensus.go index babbcc420f..2ee3638845 100644 --- a/consensus/ethash/consensus.go +++ b/consensus/ethash/consensus.go @@ -203,15 +203,23 @@ func (ethash *Ethash) VerifyUncles(chain consensus.ChainReader, block *types.Blo number, parent := block.NumberU64()-1, block.ParentHash() for i := 0; i < 7; i++ { - ancestor := chain.GetBlock(parent, number) - if ancestor == nil { + ancestorHeader := chain.GetHeader(parent, number) + if ancestorHeader == nil { break } - ancestors[ancestor.Hash()] = ancestor.Header() - for _, uncle := range ancestor.Uncles() { - uncles.Add(uncle.Hash()) + ancestors[parent] = ancestorHeader + // If the ancestor doesn't have any uncles, we don't have to iterate them + if ancestorHeader.UncleHash != types.EmptyUncleHash { + // Need to add those uncles to the blacklist too + ancestor := chain.GetBlock(parent, number) + if ancestor == nil { + break + } + for _, uncle := range ancestor.Uncles() { + uncles.Add(uncle.Hash()) + } } - parent, number = ancestor.ParentHash(), number-1 + parent, number = ancestorHeader.ParentHash, number-1 } ancestors[block.Hash()] = block.Header() uncles.Add(block.Hash()) @@ -319,6 +327,8 @@ func (ethash *Ethash) CalcDifficulty(chain consensus.ChainHeaderReader, time uin func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int { next := new(big.Int).Add(parent.Number, big1) switch { + case config.IsCatalyst(next): + return big.NewInt(1) case config.IsMuirGlacier(next): return calcDifficultyEip2384(time, parent) case config.IsConstantinople(next): @@ -624,6 +634,16 @@ var ( // reward. The total reward consists of the static block reward and rewards for // included uncles. The coinbase of each uncle block is also rewarded. func accumulateRewards(config *params.ChainConfig, state *state.StateDB, header *types.Header, uncles []*types.Header) { + // Skip block reward in catalyst mode + if config.IsCatalyst(header.Number) { + return + } + + // Quorum: Disable reward for Quorum if gas price is not enabled, + // otherwise static block reward will impact Raft (even though the gas price is zero) + if config.IsQuorum && !config.IsGasPriceEnabled(header.Number) { + return + } // Quorum: // Historically, quorum was adding (static) reward to account 0x0. diff --git a/consensus/ethash/ethash.go b/consensus/ethash/ethash.go index 995fb2038c..0922c2586a 100644 --- a/consensus/ethash/ethash.go +++ b/consensus/ethash/ethash.go @@ -112,12 +112,13 @@ func memoryMapFile(file *os.File, write bool) (mmap.MMap, []uint32, error) { if err != nil { return nil, nil, err } - // Yay, we managed to memory map the file, here be dragons - header := *(*reflect.SliceHeader)(unsafe.Pointer(&mem)) - header.Len /= 4 - header.Cap /= 4 - - return mem, *(*[]uint32)(unsafe.Pointer(&header)), nil + // The file is now memory-mapped. Create a []uint32 view of the file. 
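// Descriptive note (editor's addition, not part of the patch): the replacement
// below builds the []uint32 slice header by hand so that it aliases the
// mmap'd bytes without copying; len and cap become the byte length divided by
// four, and `mem` must stay reachable while `view` is in use, since both share
// the same backing memory.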
+ var view []uint32 + header := (*reflect.SliceHeader)(unsafe.Pointer(&view)) + header.Data = (*reflect.SliceHeader)(unsafe.Pointer(&mem)).Data + header.Cap = len(mem) / 4 + header.Len = header.Cap + return mem, view, nil } // memoryMapAndGenerate tries to memory map a temporary file of uint32s for write diff --git a/consensus/istanbul/ibft/core/events.go b/consensus/istanbul/ibft/core/events.go index f8f46facbc..fe617b96b5 100644 --- a/consensus/istanbul/ibft/core/events.go +++ b/consensus/istanbul/ibft/core/events.go @@ -1,4 +1,4 @@ -// Copyright 2017 The go-ethereum Authors +// Copyright 2021 The go-ethereum Authors // This file is part of the go-ethereum library. // // The go-ethereum library is free software: you can redistribute it and/or modify diff --git a/consensus/istanbul/types.go b/consensus/istanbul/types.go index 86b586a2d0..a560adc366 100644 --- a/consensus/istanbul/types.go +++ b/consensus/istanbul/types.go @@ -37,10 +37,10 @@ type Proposal interface { EncodeRLP(w io.Writer) error DecodeRLP(s *rlp.Stream) error - - String() string } +var _ Proposal = &types.Block{} + type Request struct { Proposal Proposal } diff --git a/core/blockchain.go b/core/blockchain.go index ae8adde311..a14c831487 100644 --- a/core/blockchain.go +++ b/core/blockchain.go @@ -211,9 +211,8 @@ type BlockChain struct { processor Processor // Block transaction processor interface vmConfig vm.Config - shouldPreserve func(*types.Block) bool // Function used to determine whether should preserve the given block. - terminateInsert func(common.Hash, uint64) bool // Testing hook used to terminate ancient receipt chain insertion. - writeLegacyJournal bool // Testing flag used to flush the snapshot journal in legacy format. + shouldPreserve func(*types.Block) bool // Function used to determine whether should preserve the given block. + terminateInsert func(common.Hash, uint64) bool // Testing hook used to terminate ancient receipt chain insertion. // Quorum quorumConfig *QuorumChainConfig // quorum chain config holds all the possible configuration fields for GoQuorum @@ -552,7 +551,7 @@ func (bc *BlockChain) SetHead(head uint64) error { // SetHeadBeyondRoot rewinds the local chain to a new head with the extra condition // that the rewind must pass the specified state root. This method is meant to be -// used when rewiding with snapshots enabled to ensure that we go back further than +// used when rewinding with snapshots enabled to ensure that we go back further than // persistent disk layer. Depending on whether the node was fast synced or full, and // in which state, the method will try to delete minimal data from disk whilst // retaining chain consistency. @@ -702,7 +701,7 @@ func (bc *BlockChain) FastSyncCommitHead(hash common.Hash) error { // Make sure that both the block as well at its state trie exists block := bc.GetBlockByHash(hash) if block == nil { - return fmt.Errorf("non existent block [%x…]", hash[:4]) + return fmt.Errorf("non existent block [%x..]", hash[:4]) } if _, err := trie.NewSecure(block.Root(), bc.stateCache.TrieDB()); err != nil { return err @@ -713,7 +712,8 @@ func (bc *BlockChain) FastSyncCommitHead(hash common.Hash) error { headBlockGauge.Update(int64(block.NumberU64())) bc.chainmu.Unlock() - // Destroy any existing state snapshot and regenerate it in the background + // Destroy any existing state snapshot and regenerate it in the background, + // also resuming the normal maintenance of any previously paused snapshot. 
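// Descriptive note (editor's addition; semantics assumed from the snapshot
// package, not stated in the patch): Rebuild discards layers that no longer
// match the new head root and restarts generation in the background, so the
// call below is also safe when no snapshot existed yet.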
if bc.snaps != nil { bc.snaps.Rebuild(block.Root()) } @@ -1149,14 +1149,8 @@ func (bc *BlockChain) Stop() { var snapBase common.Hash if bc.snaps != nil { var err error - if bc.writeLegacyJournal { - if snapBase, err = bc.snaps.LegacyJournal(bc.CurrentBlock().Root()); err != nil { - log.Error("Failed to journal state snapshot", "err", err) - } - } else { - if snapBase, err = bc.snaps.Journal(bc.CurrentBlock().Root()); err != nil { - log.Error("Failed to journal state snapshot", "err", err) - } + if snapBase, err = bc.snaps.Journal(bc.CurrentBlock().Root()); err != nil { + log.Error("Failed to journal state snapshot", "err", err) } } // Ensure the state of a recent block is also stored to disk before exiting. @@ -1294,7 +1288,7 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [ if blockChain[i].NumberU64() != blockChain[i-1].NumberU64()+1 || blockChain[i].ParentHash() != blockChain[i-1].Hash() { log.Error("Non contiguous receipt insert", "number", blockChain[i].Number(), "hash", blockChain[i].Hash(), "parent", blockChain[i].ParentHash(), "prevnumber", blockChain[i-1].Number(), "prevhash", blockChain[i-1].Hash()) - return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, blockChain[i-1].NumberU64(), + return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x..], item %d is #%d [%x..] (parent [%x..])", i-1, blockChain[i-1].NumberU64(), blockChain[i-1].Hash().Bytes()[:4], i, blockChain[i].NumberU64(), blockChain[i].Hash().Bytes()[:4], blockChain[i].ParentHash().Bytes()[:4]) } } @@ -1359,66 +1353,17 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [ } // Short circuit if the owner header is unknown if !bc.HasHeader(block.Hash(), block.NumberU64()) { - return i, fmt.Errorf("containing header #%d [%x…] unknown", block.Number(), block.Hash().Bytes()[:4]) + return i, fmt.Errorf("containing header #%d [%x..] unknown", block.Number(), block.Hash().Bytes()[:4]) } - var ( - start = time.Now() - logged = time.Now() - count int - ) - // Migrate all ancient blocks. This can happen if someone upgrades from Geth - // 1.8.x to 1.9.x mid-fast-sync. Perhaps we can get rid of this path in the - // long term. - for { - // We can ignore the error here since light client won't hit this code path. - frozen, _ := bc.db.Ancients() - if frozen >= block.NumberU64() { - break - } - h := rawdb.ReadCanonicalHash(bc.db, frozen) - b := rawdb.ReadBlock(bc.db, h, frozen) - size += rawdb.WriteAncientBlock(bc.db, b, rawdb.ReadReceipts(bc.db, h, frozen, bc.chainConfig), rawdb.ReadTd(bc.db, h, frozen)) - count += 1 - - // Always keep genesis block in active database. - if b.NumberU64() != 0 { - deleted = append(deleted, &numberHash{b.NumberU64(), b.Hash()}) - } - if time.Since(logged) > 8*time.Second { - log.Info("Migrating ancient blocks", "count", count, "elapsed", common.PrettyDuration(time.Since(start))) - logged = time.Now() - } - // Don't collect too much in-memory, write it out every 100K blocks - if len(deleted) > 100000 { - // Sync the ancient store explicitly to ensure all data has been flushed to disk. - if err := bc.db.Sync(); err != nil { - return 0, err - } - // Wipe out canonical block data. - for _, nh := range deleted { - rawdb.DeleteBlockWithoutNumber(batch, nh.hash, nh.number) - rawdb.DeleteCanonicalHash(batch, nh.number) - } - if err := batch.Write(); err != nil { - return 0, err - } - batch.Reset() - // Wipe out side chain too. 
- for _, nh := range deleted { - for _, hash := range rawdb.ReadAllHashes(bc.db, nh.number) { - rawdb.DeleteBlock(batch, hash, nh.number) - } - } - if err := batch.Write(); err != nil { - return 0, err - } - batch.Reset() - deleted = deleted[0:] + if block.NumberU64() == 1 { + // Make sure to write the genesis into the freezer + if frozen, _ := bc.db.Ancients(); frozen == 0 { + h := rawdb.ReadCanonicalHash(bc.db, 0) + b := rawdb.ReadBlock(bc.db, h, 0) + size += rawdb.WriteAncientBlock(bc.db, b, rawdb.ReadReceipts(bc.db, h, 0, bc.chainConfig), rawdb.ReadTd(bc.db, h, 0)) + log.Info("Wrote genesis to ancients") } } - if count > 0 { - log.Info("Migrated ancient blocks", "count", count, "elapsed", common.PrettyDuration(time.Since(start))) - } // Flush data into ancient database. size += rawdb.WriteAncientBlock(bc.db, block, receiptChain[i], bc.GetTd(block.Hash(), block.NumberU64())) @@ -1503,7 +1448,7 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [ } // Short circuit if the owner header is unknown if !bc.HasHeader(block.Hash(), block.NumberU64()) { - return i, fmt.Errorf("containing header #%d [%x…] unknown", block.Number(), block.Hash().Bytes()[:4]) + return i, fmt.Errorf("containing header #%d [%x..] unknown", block.Number(), block.Hash().Bytes()[:4]) } if !skipPresenceCheck { // Ignore if the entire data is already known @@ -1874,7 +1819,7 @@ func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) { log.Error("Non contiguous block insert", "number", block.Number(), "hash", block.Hash(), "parent", block.ParentHash(), "prevnumber", prev.Number(), "prevhash", prev.Hash()) - return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, prev.NumberU64(), + return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x..], item %d is #%d [%x..] (parent [%x..])", i-1, prev.NumberU64(), prev.Hash().Bytes()[:4], i, block.NumberU64(), block.Hash().Bytes()[:4], block.ParentHash().Bytes()[:4]) } } @@ -1888,6 +1833,22 @@ func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) { return n, err } +// InsertChainWithoutSealVerification works exactly the same +// except for seal verification, seal verification is omitted +func (bc *BlockChain) InsertChainWithoutSealVerification(block *types.Block) (int, error) { + bc.blockProcFeed.Send(true) + defer bc.blockProcFeed.Send(false) + + // Pre-checks passed, start the full block imports + bc.wg.Add(1) + bc.chainmu.Lock() + n, err := bc.insertChain(types.Blocks([]*types.Block{block}), false) + bc.chainmu.Unlock() + bc.wg.Done() + + return n, err +} + // insertChain is the internal implementation of InsertChain, which assumes that // 1) chains are contiguous, and 2) The chain mutex is held. // @@ -2387,7 +2348,6 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error { l := *log if removed { l.Removed = true - } else { } logs = append(logs, &l) } diff --git a/core/blockchain_snapshot_test.go b/core/blockchain_snapshot_test.go index 3dcdac6230..4718a5b19b 100644 --- a/core/blockchain_snapshot_test.go +++ b/core/blockchain_snapshot_test.go @@ -39,7 +39,6 @@ import ( // snapshotTestBasic wraps the common testing fields in the snapshot tests. 
type snapshotTestBasic struct { - legacy bool // Wether write the snapshot journal in legacy format chainBlocks int // Number of blocks to generate for the canonical chain snapshotBlock uint64 // Block number of the relevant snapshot disk layer commitBlock uint64 // Block number for which to commit the state to disk @@ -104,19 +103,13 @@ func (basic *snapshotTestBasic) prepare(t *testing.T) (*BlockChain, []*types.Blo chain.stateCache.TrieDB().Commit(blocks[point-1].Root(), true, nil) } if basic.snapshotBlock > 0 && basic.snapshotBlock == point { - if basic.legacy { - // Here we commit the snapshot disk root to simulate - // committing the legacy snapshot. - rawdb.WriteSnapshotRoot(db, blocks[point-1].Root()) - } else { - // Flushing the entire snap tree into the disk, the - // relavant (a) snapshot root and (b) snapshot generator - // will be persisted atomically. - chain.snaps.Cap(blocks[point-1].Root(), 0) - diskRoot, blockRoot := chain.snaps.DiskRoot(), blocks[point-1].Root() - if !bytes.Equal(diskRoot.Bytes(), blockRoot.Bytes()) { - t.Fatalf("Failed to flush disk layer change, want %x, got %x", blockRoot, diskRoot) - } + // Flushing the entire snap tree into the disk, the + // relavant (a) snapshot root and (b) snapshot generator + // will be persisted atomically. + chain.snaps.Cap(blocks[point-1].Root(), 0) + diskRoot, blockRoot := chain.snaps.DiskRoot(), blocks[point-1].Root() + if !bytes.Equal(diskRoot.Bytes(), blockRoot.Bytes()) { + t.Fatalf("Failed to flush disk layer change, want %x, got %x", blockRoot, diskRoot) } } } @@ -129,12 +122,6 @@ func (basic *snapshotTestBasic) prepare(t *testing.T) (*BlockChain, []*types.Blo basic.db = db basic.gendb = gendb basic.engine = engine - - // Ugly hack, notify the chain to flush the journal in legacy format - // if it's requested. - if basic.legacy { - chain.writeLegacyJournal = true - } return chain, blocks } @@ -484,46 +471,6 @@ func TestRestartWithNewSnapshot(t *testing.T) { // Expected snapshot disk : G test := &snapshotTest{ snapshotTestBasic{ - legacy: false, - chainBlocks: 8, - snapshotBlock: 0, - commitBlock: 0, - expCanonicalBlocks: 8, - expHeadHeader: 8, - expHeadFastBlock: 8, - expHeadBlock: 8, - expSnapshotBottom: 0, // Initial disk layer built from genesis - }, - } - test.test(t) - test.teardown() -} - -// Tests a Geth restart with valid but "legacy" snapshot. Before the shutdown, -// all snapshot journal will be persisted correctly. In this case no snapshot -// recovery is required. 
-func TestRestartWithLegacySnapshot(t *testing.T) { - // Chain: - // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) - // - // Commit: G - // Snapshot: G - // - // SetHead(0) - // - // ------------------------------ - // - // Expected in leveldb: - // G->C1->C2->C3->C4->C5->C6->C7->C8 - // - // Expected head header : C8 - // Expected head fast block: C8 - // Expected head block : C8 - // Expected snapshot disk : G - t.Skip("Legacy format testing is not supported") - test := &snapshotTest{ - snapshotTestBasic{ - legacy: true, chainBlocks: 8, snapshotBlock: 0, commitBlock: 0, @@ -563,7 +510,6 @@ func TestNoCommitCrashWithNewSnapshot(t *testing.T) { // Expected snapshot disk : C4 test := &crashSnapshotTest{ snapshotTestBasic{ - legacy: false, chainBlocks: 8, snapshotBlock: 4, commitBlock: 0, @@ -603,7 +549,6 @@ func TestLowCommitCrashWithNewSnapshot(t *testing.T) { // Expected snapshot disk : C4 test := &crashSnapshotTest{ snapshotTestBasic{ - legacy: false, chainBlocks: 8, snapshotBlock: 4, commitBlock: 2, @@ -643,7 +588,6 @@ func TestHighCommitCrashWithNewSnapshot(t *testing.T) { // Expected snapshot disk : C4 test := &crashSnapshotTest{ snapshotTestBasic{ - legacy: false, chainBlocks: 8, snapshotBlock: 4, commitBlock: 6, @@ -658,131 +602,6 @@ func TestHighCommitCrashWithNewSnapshot(t *testing.T) { test.teardown() } -// Tests a Geth was crashed and restarts with a broken and "legacy format" -// snapshot. In this case the entire legacy snapshot should be discared -// and rebuild from the new chain head. The new head here refers to the -// genesis because there is no committed point. -func TestNoCommitCrashWithLegacySnapshot(t *testing.T) { - // Chain: - // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) - // - // Commit: G - // Snapshot: G, C4 - // - // CRASH - // - // ------------------------------ - // - // Expected in leveldb: - // G->C1->C2->C3->C4->C5->C6->C7->C8 - // - // Expected head header : C8 - // Expected head fast block: C8 - // Expected head block : G - // Expected snapshot disk : G - t.Skip("Legacy format testing is not supported") - test := &crashSnapshotTest{ - snapshotTestBasic{ - legacy: true, - chainBlocks: 8, - snapshotBlock: 4, - commitBlock: 0, - expCanonicalBlocks: 8, - expHeadHeader: 8, - expHeadFastBlock: 8, - expHeadBlock: 0, - expSnapshotBottom: 0, // Rebuilt snapshot from the latest HEAD(genesis) - }, - } - test.test(t) - test.teardown() -} - -// Tests a Geth was crashed and restarts with a broken and "legacy format" -// snapshot. In this case the entire legacy snapshot should be discared -// and rebuild from the new chain head. The new head here refers to the -// block-2 because it's committed into the disk. -func TestLowCommitCrashWithLegacySnapshot(t *testing.T) { - // Chain: - // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) - // - // Commit: G, C2 - // Snapshot: G, C4 - // - // CRASH - // - // ------------------------------ - // - // Expected in leveldb: - // G->C1->C2->C3->C4->C5->C6->C7->C8 - // - // Expected head header : C8 - // Expected head fast block: C8 - // Expected head block : C2 - // Expected snapshot disk : C2 - t.Skip("Legacy format testing is not supported") - test := &crashSnapshotTest{ - snapshotTestBasic{ - legacy: true, - chainBlocks: 8, - snapshotBlock: 4, - commitBlock: 2, - expCanonicalBlocks: 8, - expHeadHeader: 8, - expHeadFastBlock: 8, - expHeadBlock: 2, - expSnapshotBottom: 2, // Rebuilt snapshot from the latest HEAD - }, - } - test.test(t) - test.teardown() -} - -// Tests a Geth was crashed and restarts with a broken and "legacy format" -// snapshot. 
In this case the entire legacy snapshot should be discared -// and rebuild from the new chain head. -// -// The new head here refers to the the genesis, the reason is: -// - the state of block-6 is committed into the disk -// - the legacy disk layer of block-4 is committed into the disk -// - the head is rewound the genesis in order to find an available -// state lower than disk layer -func TestHighCommitCrashWithLegacySnapshot(t *testing.T) { - // Chain: - // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) - // - // Commit: G, C6 - // Snapshot: G, C4 - // - // CRASH - // - // ------------------------------ - // - // Expected in leveldb: - // G->C1->C2->C3->C4->C5->C6->C7->C8 - // - // Expected head header : C8 - // Expected head fast block: C8 - // Expected head block : G - // Expected snapshot disk : G - t.Skip("Legacy format testing is not supported") - test := &crashSnapshotTest{ - snapshotTestBasic{ - legacy: true, - chainBlocks: 8, - snapshotBlock: 4, - commitBlock: 6, - expCanonicalBlocks: 8, - expHeadHeader: 8, - expHeadFastBlock: 8, - expHeadBlock: 0, - expSnapshotBottom: 0, // Rebuilt snapshot from the latest HEAD(genesis) - }, - } - test.test(t) - test.teardown() -} - // Tests a Geth was running with snapshot enabled. Then restarts without // enabling snapshot and after that re-enable the snapshot again. In this // case the snapshot should be rebuilt with latest chain head. @@ -806,47 +625,6 @@ func TestGappedNewSnapshot(t *testing.T) { // Expected snapshot disk : C10 test := &gappedSnapshotTest{ snapshotTestBasic: snapshotTestBasic{ - legacy: false, - chainBlocks: 8, - snapshotBlock: 0, - commitBlock: 0, - expCanonicalBlocks: 10, - expHeadHeader: 10, - expHeadFastBlock: 10, - expHeadBlock: 10, - expSnapshotBottom: 10, // Rebuilt snapshot from the latest HEAD - }, - gapped: 2, - } - test.test(t) - test.teardown() -} - -// Tests a Geth was running with leagcy snapshot enabled. Then restarts -// without enabling snapshot and after that re-enable the snapshot again. -// In this case the snapshot should be rebuilt with latest chain head. -func TestGappedLegacySnapshot(t *testing.T) { - // Chain: - // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) - // - // Commit: G - // Snapshot: G - // - // SetHead(0) - // - // ------------------------------ - // - // Expected in leveldb: - // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10 - // - // Expected head header : C10 - // Expected head fast block: C10 - // Expected head block : C10 - // Expected snapshot disk : C10 - t.Skip("Legacy format testing is not supported") - test := &gappedSnapshotTest{ - snapshotTestBasic: snapshotTestBasic{ - legacy: true, chainBlocks: 8, snapshotBlock: 0, commitBlock: 0, @@ -885,7 +663,6 @@ func TestSetHeadWithNewSnapshot(t *testing.T) { // Expected snapshot disk : G test := &setHeadSnapshotTest{ snapshotTestBasic: snapshotTestBasic{ - legacy: false, chainBlocks: 8, snapshotBlock: 0, commitBlock: 0, @@ -901,88 +678,6 @@ func TestSetHeadWithNewSnapshot(t *testing.T) { test.teardown() } -// Tests the Geth was running with snapshot(legacy-format) enabled and resetHead -// is applied. In this case the head is rewound to the target(with state available). -// After that the chain is restarted and the original disk layer is kept. 
-func TestSetHeadWithLegacySnapshot(t *testing.T) { - // Chain: - // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) - // - // Commit: G - // Snapshot: G - // - // SetHead(4) - // - // ------------------------------ - // - // Expected in leveldb: - // G->C1->C2->C3->C4 - // - // Expected head header : C4 - // Expected head fast block: C4 - // Expected head block : C4 - // Expected snapshot disk : G - t.Skip("Legacy format testing is not supported") - test := &setHeadSnapshotTest{ - snapshotTestBasic: snapshotTestBasic{ - legacy: true, - chainBlocks: 8, - snapshotBlock: 0, - commitBlock: 0, - expCanonicalBlocks: 4, - expHeadHeader: 4, - expHeadFastBlock: 4, - expHeadBlock: 4, - expSnapshotBottom: 0, // The initial disk layer is built from the genesis - }, - setHead: 4, - } - test.test(t) - test.teardown() -} - -// Tests the Geth was running with snapshot(legacy-format) enabled and upgrades -// the disk layer journal(journal generator) to latest format. After that the Geth -// is restarted from a crash. In this case Geth will find the new-format disk layer -// journal but with legacy-format diff journal(the new-format is never committed), -// and the invalid diff journal is expected to be dropped. -func TestRecoverSnapshotFromCrashWithLegacyDiffJournal(t *testing.T) { - // Chain: - // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) - // - // Commit: G - // Snapshot: G - // - // SetHead(0) - // - // ------------------------------ - // - // Expected in leveldb: - // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10 - // - // Expected head header : C10 - // Expected head fast block: C10 - // Expected head block : C8 - // Expected snapshot disk : C10 - t.Skip("Legacy format testing is not supported") - test := &restartCrashSnapshotTest{ - snapshotTestBasic: snapshotTestBasic{ - legacy: true, - chainBlocks: 8, - snapshotBlock: 0, - commitBlock: 0, - expCanonicalBlocks: 10, - expHeadHeader: 10, - expHeadFastBlock: 10, - expHeadBlock: 8, // The persisted state in the first running - expSnapshotBottom: 10, // The persisted disk layer in the second running - }, - newBlocks: 2, - } - test.test(t) - test.teardown() -} - // Tests the Geth was running with a complete snapshot and then imports a few // more new blocks on top without enabling the snapshot. After the restart, // crash happens. Check everything is ok after the restart. 
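All of these tests, the removed legacy variants included, are driven by the same declarative snapshotTestBasic expectation table: a few inputs describing how the chain is built, plus exp* fields asserting the post-restart state. As a rough, self-contained sketch of that table-driven pattern (every name below is an illustrative stand-in, not one of the file's actual helpers):

package snapshottest

import "testing"

// expectation mirrors the declarative shape of snapshotTestBasic: inputs
// describe how the chain is built, exp* fields assert the state after a
// restart or crash.
type expectation struct {
	chainBlocks  int    // blocks generated on the canonical chain
	commitBlock  uint64 // block whose state is committed to disk
	expHeadBlock uint64 // head block expected after recovery
}

// runAndRecover is a stand-in for the prepare/test/teardown cycle in
// blockchain_snapshot_test.go; it encodes the recovery rule the crash
// tests above exercise: the head rewinds to the last committed state.
func runAndRecover(chainBlocks int, commitBlock uint64) uint64 {
	return commitBlock
}

func TestCrashRecoveryMatrix(t *testing.T) {
	cases := []expectation{
		{chainBlocks: 8, commitBlock: 0, expHeadBlock: 0}, // nothing committed: back to genesis
		{chainBlocks: 8, commitBlock: 2, expHeadBlock: 2}, // low commit: back to the committed block
	}
	for i, c := range cases {
		if got := runAndRecover(c.chainBlocks, c.commitBlock); got != c.expHeadBlock {
			t.Errorf("case %d: head block mismatch: want %d, got %d", i, c.expHeadBlock, got)
		}
	}
}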
@@ -1006,7 +701,6 @@ func TestRecoverSnapshotFromWipingCrash(t *testing.T) { // Expected snapshot disk : C10 test := &wipeCrashSnapshotTest{ snapshotTestBasic: snapshotTestBasic{ - legacy: false, chainBlocks: 8, snapshotBlock: 4, commitBlock: 0, diff --git a/core/blockchain_test.go b/core/blockchain_test.go index c7d5d9ac61..398ce816c4 100644 --- a/core/blockchain_test.go +++ b/core/blockchain_test.go @@ -1465,13 +1465,13 @@ func TestBlockchainHeaderchainReorgConsistency(t *testing.T) { t.Fatalf("block %d: failed to insert into chain: %v", i, err) } if chain.CurrentBlock().Hash() != chain.CurrentHeader().Hash() { - t.Errorf("block %d: current block/header mismatch: block #%d [%x…], header #%d [%x…]", i, chain.CurrentBlock().Number(), chain.CurrentBlock().Hash().Bytes()[:4], chain.CurrentHeader().Number, chain.CurrentHeader().Hash().Bytes()[:4]) + t.Errorf("block %d: current block/header mismatch: block #%d [%x..], header #%d [%x..]", i, chain.CurrentBlock().Number(), chain.CurrentBlock().Hash().Bytes()[:4], chain.CurrentHeader().Number, chain.CurrentHeader().Hash().Bytes()[:4]) } if _, err := chain.InsertChain(forks[i : i+1]); err != nil { t.Fatalf(" fork %d: failed to insert into chain: %v", i, err) } if chain.CurrentBlock().Hash() != chain.CurrentHeader().Hash() { - t.Errorf(" fork %d: current block/header mismatch: block #%d [%x…], header #%d [%x…]", i, chain.CurrentBlock().Number(), chain.CurrentBlock().Hash().Bytes()[:4], chain.CurrentHeader().Number, chain.CurrentHeader().Hash().Bytes()[:4]) + t.Errorf(" fork %d: current block/header mismatch: block #%d [%x..], header #%d [%x..]", i, chain.CurrentBlock().Number(), chain.CurrentBlock().Hash().Bytes()[:4], chain.CurrentHeader().Number, chain.CurrentHeader().Hash().Bytes()[:4]) } } } diff --git a/core/chain_indexer.go b/core/chain_indexer.go index 4b326c970b..95901a0eaa 100644 --- a/core/chain_indexer.go +++ b/core/chain_indexer.go @@ -401,7 +401,7 @@ func (c *ChainIndexer) processSection(section uint64, lastHead common.Hash) (com } header := rawdb.ReadHeader(c.chainDb, hash, number) if header == nil { - return common.Hash{}, fmt.Errorf("block #%d [%x…] not found", number, hash[:4]) + return common.Hash{}, fmt.Errorf("block #%d [%x..] not found", number, hash[:4]) } else if header.ParentHash != lastHead { return common.Hash{}, fmt.Errorf("chain reorged during section processing") } diff --git a/core/genesis_test.go b/core/genesis_test.go index 21a029fce7..4dd2082d1b 100644 --- a/core/genesis_test.go +++ b/core/genesis_test.go @@ -131,3 +131,38 @@ func TestSetupGenesis(t *testing.T) { } } } + +// TestGenesisHashes checks the congruity of default genesis data to corresponding hardcoded genesis hash values. 
+func TestGenesisHashes(t *testing.T) { + cases := []struct { + genesis *Genesis + hash common.Hash + }{ + { + genesis: DefaultGenesisBlock(), + hash: params.MainnetGenesisHash, + }, + { + genesis: DefaultGoerliGenesisBlock(), + hash: params.GoerliGenesisHash, + }, + { + genesis: DefaultRopstenGenesisBlock(), + hash: params.RopstenGenesisHash, + }, + { + genesis: DefaultRinkebyGenesisBlock(), + hash: params.RinkebyGenesisHash, + }, + { + genesis: DefaultYoloV3GenesisBlock(), + hash: params.YoloV3GenesisHash, + }, + } + for i, c := range cases { + b := c.genesis.MustCommit(rawdb.NewMemoryDatabase()) + if got := b.Hash(); got != c.hash { + t.Errorf("case: %d, want: %s, got: %s", i, c.hash.Hex(), got.Hex()) + } + } +} diff --git a/core/headerchain.go b/core/headerchain.go index 9aee660eba..1dbf958786 100644 --- a/core/headerchain.go +++ b/core/headerchain.go @@ -306,7 +306,7 @@ func (hc *HeaderChain) ValidateHeaderChain(chain []*types.Header, checkFreq int) log.Error("Non contiguous header insert", "number", chain[i].Number, "hash", hash, "parent", chain[i].ParentHash, "prevnumber", chain[i-1].Number, "prevhash", parentHash) - return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, chain[i-1].Number, + return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x..], item %d is #%d [%x..] (parent [%x..])", i-1, chain[i-1].Number, parentHash.Bytes()[:4], i, chain[i].Number, hash.Bytes()[:4], chain[i].ParentHash[:4]) } // If the header is a banned one, straight out abort diff --git a/core/mps/multiple_psr.go b/core/mps/multiple_psr.go index b0d785c8bd..0c665ec10c 100644 --- a/core/mps/multiple_psr.go +++ b/core/mps/multiple_psr.go @@ -188,7 +188,7 @@ func (mpsr *MultiplePrivateStateRepository) CommitAndWrite(isEIP158 bool, block } } // commit the trie of states - mtRoot, err := mpsr.trie.Commit(func(path []byte, leaf []byte, parent common.Hash) error { + mtRoot, err := mpsr.trie.Commit(func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error { privateRoot := common.BytesToHash(leaf) if privateRoot != emptyRoot { mpsr.privateCacheProvider.Reference(privateRoot, parent) @@ -224,7 +224,7 @@ func (mpsr *MultiplePrivateStateRepository) Commit(isEIP158 bool, block *types.B } } // commit the trie of states - _, err := mpsr.trie.Commit(func(path []byte, leaf []byte, parent common.Hash) error { + _, err := mpsr.trie.Commit(func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error { privateRoot := common.BytesToHash(leaf) if privateRoot != emptyRoot { mpsr.privateCacheProvider.Reference(privateRoot, parent) diff --git a/core/rawdb/accessors_chain.go b/core/rawdb/accessors_chain.go index fbdf12ec04..014ace50da 100644 --- a/core/rawdb/accessors_chain.go +++ b/core/rawdb/accessors_chain.go @@ -865,11 +865,6 @@ func DeleteBadBlocks(db ethdb.KeyValueWriter) { } } -// HasBadBlock returns whether the block with the hash is a bad block. dep: Istanbul -func HasBadBlock(db ethdb.Reader, hash common.Hash) bool { - return ReadBadBlock(db, hash) != nil -} - // FindCommonAncestor returns the last common ancestor of two block headers func FindCommonAncestor(db ethdb.Reader, a, b *types.Header) *types.Header { for bn := b.Number.Uint64(); a.Number.Uint64() > bn; { @@ -910,7 +905,7 @@ func ReadHeadHeader(db ethdb.Reader) *types.Header { return ReadHeader(db, headHeaderHash, *headHeaderNumber) } -// ReadHeadHeader returns the current canonical head block. +// ReadHeadBlock returns the current canonical head block. 
func ReadHeadBlock(db ethdb.Reader) *types.Block { headBlockHash := ReadHeadBlockHash(db) if headBlockHash == (common.Hash{}) { diff --git a/core/rawdb/accessors_chain_quorum.go b/core/rawdb/accessors_chain_quorum.go new file mode 100644 index 0000000000..2969b0d5e4 --- /dev/null +++ b/core/rawdb/accessors_chain_quorum.go @@ -0,0 +1,11 @@ +package rawdb + +import ( + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/ethdb" +) + +// HasBadBlock returns whether the block with the hash is a bad block. dep: Istanbul +func HasBadBlock(db ethdb.Reader, hash common.Hash) bool { + return ReadBadBlock(db, hash) != nil +} diff --git a/core/rawdb/accessors_snapshot.go b/core/rawdb/accessors_snapshot.go index 8066c621c0..1efb8e7bda 100644 --- a/core/rawdb/accessors_snapshot.go +++ b/core/rawdb/accessors_snapshot.go @@ -24,6 +24,26 @@ import ( "github.com/ethereum/go-ethereum/log" ) +// ReadSnapshotDisabled retrieves if the snapshot maintenance is disabled. +func ReadSnapshotDisabled(db ethdb.KeyValueReader) bool { + disabled, _ := db.Has(snapshotDisabledKey) + return disabled +} + +// WriteSnapshotDisabled stores the snapshot pause flag. +func WriteSnapshotDisabled(db ethdb.KeyValueWriter) { + if err := db.Put(snapshotDisabledKey, []byte("42")); err != nil { + log.Crit("Failed to store snapshot disabled flag", "err", err) + } +} + +// DeleteSnapshotDisabled deletes the flag keeping the snapshot maintenance disabled. +func DeleteSnapshotDisabled(db ethdb.KeyValueWriter) { + if err := db.Delete(snapshotDisabledKey); err != nil { + log.Crit("Failed to remove snapshot disabled flag", "err", err) + } +} + // ReadSnapshotRoot retrieves the root of the block whose state is contained in // the persisted snapshot. func ReadSnapshotRoot(db ethdb.KeyValueReader) common.Hash { diff --git a/core/rawdb/database.go b/core/rawdb/database.go index 94759eb984..3a0a26c61d 100644 --- a/core/rawdb/database.go +++ b/core/rawdb/database.go @@ -371,9 +371,9 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error { var accounted bool for _, meta := range [][]byte{ databaseVersionKey, headHeaderKey, headBlockKey, headFastBlockKey, lastPivotKey, - fastTrieProgressKey, snapshotRootKey, snapshotJournalKey, snapshotGeneratorKey, - snapshotRecoveryKey, txIndexTailKey, fastTxLookupLimitKey, uncleanShutdownKey, - badBlockKey, + fastTrieProgressKey, snapshotDisabledKey, snapshotRootKey, snapshotJournalKey, + snapshotGeneratorKey, snapshotRecoveryKey, txIndexTailKey, fastTxLookupLimitKey, + uncleanShutdownKey, badBlockKey, } { if bytes.Equal(key, meta) { metadata.Add(size) diff --git a/core/rawdb/database_test.go b/core/rawdb/database_test.go new file mode 100644 index 0000000000..8bf06f97d8 --- /dev/null +++ b/core/rawdb/database_test.go @@ -0,0 +1,17 @@ +// Copyright 2019 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. 
+// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see . + +package rawdb diff --git a/core/rawdb/freezer.go b/core/rawdb/freezer.go index 0ab3cd6a2e..7a810d85e4 100644 --- a/core/rawdb/freezer.go +++ b/core/rawdb/freezer.go @@ -118,7 +118,7 @@ func newFreezer(datadir string, namespace string, readonly bool) (*freezer, erro trigger: make(chan chan struct{}), quit: make(chan struct{}), } - for name, disableSnappy := range freezerNoSnappy { + for name, disableSnappy := range FreezerNoSnappy { table, err := newTable(datadir, name, readMeter, writeMeter, sizeGauge, disableSnappy) if err != nil { for _, table := range freezer.tables { diff --git a/core/rawdb/freezer_table.go b/core/rawdb/freezer_table.go index cd273222b1..d7bfe18e02 100644 --- a/core/rawdb/freezer_table.go +++ b/core/rawdb/freezer_table.go @@ -465,35 +465,59 @@ func (t *freezerTable) releaseFilesAfter(num uint32, remove bool) { // Note, this method will *not* flush any data to disk so be sure to explicitly // fsync before irreversibly deleting data from the database. func (t *freezerTable) Append(item uint64, blob []byte) error { + // Encode the blob before the lock portion + if !t.noCompression { + blob = snappy.Encode(nil, blob) + } // Read lock prevents competition with truncate - t.lock.RLock() + retry, err := t.append(item, blob, false) + if err != nil { + return err + } + if retry { + // Read lock was insufficient, retry with a writelock + _, err = t.append(item, blob, true) + } + return err +} + +// append injects a binary blob at the end of the freezer table. +// Normally, inserts do not require holding the write-lock, so it should be invoked with 'wlock' set to +// false. +// However, if the data will grown the current file out of bounds, then this +// method will return 'true, nil', indicating that the caller should retry, this time +// with 'wlock' set to true. +func (t *freezerTable) append(item uint64, encodedBlob []byte, wlock bool) (bool, error) { + if wlock { + t.lock.Lock() + defer t.lock.Unlock() + } else { + t.lock.RLock() + defer t.lock.RUnlock() + } // Ensure the table is still accessible if t.index == nil || t.head == nil { - t.lock.RUnlock() - return errClosed + return false, errClosed } // Ensure only the next item can be written, nothing else if atomic.LoadUint64(&t.items) != item { - t.lock.RUnlock() - return fmt.Errorf("appending unexpected item: want %d, have %d", t.items, item) - } - // Encode the blob and write it into the data file - if !t.noCompression { - blob = snappy.Encode(nil, blob) + return false, fmt.Errorf("appending unexpected item: want %d, have %d", t.items, item) } - bLen := uint32(len(blob)) + bLen := uint32(len(encodedBlob)) if t.headBytes+bLen < bLen || t.headBytes+bLen > t.maxFileSize { - // we need a new file, writing would overflow - t.lock.RUnlock() - t.lock.Lock() + // Writing would overflow, so we need to open a new data file. + // If we don't already hold the writelock, abort and let the caller + // invoke this method a second time. 
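The comment just above closes out the fast path of the reworked Append: the blob is snappy-encoded before any lock is taken, the common case appends under the shared read lock (whose only job is to exclude a concurrent truncate), and the rare head-file rollover is retried under the full write lock, as the code continuing below shows. A reduced sketch of this retry-based lock upgrade, using toy types rather than the freezer's:

package main

import (
	"fmt"
	"sync"
)

// table is a toy stand-in for freezerTable. As in the freezer, only one
// goroutine appends; the read lock merely fences off concurrent truncation,
// while structural changes (growing the backing store) need the write lock.
type table struct {
	lock sync.RWMutex
	data []byte
	cap  int
}

// Append tries the fast path under the read lock first; if that reports
// that exclusive work is required, it retries once under the write lock.
func (t *table) Append(b byte) {
	if retry := t.append(b, false); retry {
		t.append(b, true)
	}
}

func (t *table) append(b byte, wlock bool) (retry bool) {
	if wlock {
		t.lock.Lock()
		defer t.lock.Unlock()
	} else {
		t.lock.RLock()
		defer t.lock.RUnlock()
	}
	if len(t.data) >= t.cap {
		if !wlock {
			return true // needs exclusive access: retry with the write lock
		}
		t.cap *= 2 // the "rollover": open a new head file in the real code
	}
	t.data = append(t.data, b)
	return false
}

func main() {
	t := &table{cap: 2}
	for i := 0; i < 5; i++ {
		t.Append(byte(i))
	}
	fmt.Println(len(t.data), t.cap) // 5 8
}

Retrying from scratch avoids the classic unlock-then-relock gap, where state could change between dropping the read lock and acquiring the write lock.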
+ if !wlock { + return true, nil + } nextID := atomic.LoadUint32(&t.headId) + 1 // We open the next file in truncated mode -- if this file already // exists, we need to start over from scratch on it newHead, err := t.openFile(nextID, openFreezerFileTruncated) if err != nil { - t.lock.Unlock() - return err + return false, err } // Close old file, and reopen in RDONLY mode t.releaseFile(t.headId) @@ -503,13 +527,9 @@ func (t *freezerTable) Append(item uint64, blob []byte) error { t.head = newHead atomic.StoreUint32(&t.headBytes, 0) atomic.StoreUint32(&t.headId, nextID) - t.lock.Unlock() - t.lock.RLock() } - - defer t.lock.RUnlock() - if _, err := t.head.Write(blob); err != nil { - return err + if _, err := t.head.Write(encodedBlob); err != nil { + return false, err } newOffset := atomic.AddUint32(&t.headBytes, bLen) idx := indexEntry{ @@ -523,7 +543,7 @@ func (t *freezerTable) Append(item uint64, blob []byte) error { t.sizeGauge.Inc(int64(bLen + indexEntrySize)) atomic.AddUint64(&t.items, 1) - return nil + return false, nil } // getBounds returns the indexes for the item @@ -562,44 +582,48 @@ func (t *freezerTable) getBounds(item uint64) (uint32, uint32, uint32, error) { // Retrieve looks up the data offset of an item with the given number and retrieves // the raw binary blob from the data file. func (t *freezerTable) Retrieve(item uint64) ([]byte, error) { + blob, err := t.retrieve(item) + if err != nil { + return nil, err + } + if t.noCompression { + return blob, nil + } + return snappy.Decode(nil, blob) +} + +// retrieve looks up the data offset of an item with the given number and retrieves +// the raw binary blob from the data file. OBS! This method does not decode +// compressed data. +func (t *freezerTable) retrieve(item uint64) ([]byte, error) { t.lock.RLock() + defer t.lock.RUnlock() // Ensure the table and the item is accessible if t.index == nil || t.head == nil { - t.lock.RUnlock() return nil, errClosed } if atomic.LoadUint64(&t.items) <= item { - t.lock.RUnlock() return nil, errOutOfBounds } // Ensure the item was not deleted from the tail either if uint64(t.itemOffset) > item { - t.lock.RUnlock() return nil, errOutOfBounds } startOffset, endOffset, filenum, err := t.getBounds(item - uint64(t.itemOffset)) if err != nil { - t.lock.RUnlock() return nil, err } dataFile, exist := t.files[filenum] if !exist { - t.lock.RUnlock() return nil, fmt.Errorf("missing data file %d", filenum) } // Retrieve the data itself, decompress and return blob := make([]byte, endOffset-startOffset) if _, err := dataFile.ReadAt(blob, int64(startOffset)); err != nil { - t.lock.RUnlock() return nil, err } - t.lock.RUnlock() t.readMeter.Mark(int64(len(blob) + 2*indexEntrySize)) - - if t.noCompression { - return blob, nil - } - return snappy.Decode(nil, blob) + return blob, nil } // has returns an indicator whether the specified number data @@ -636,25 +660,24 @@ func (t *freezerTable) Sync() error { return t.head.Sync() } -// printIndex is a debug print utility function for testing -func (t *freezerTable) printIndex() { +// DumpIndex is a debug print utility function, mainly for testing. It can also +// be used to analyse a live freezer table index. 
+func (t *freezerTable) DumpIndex(start, stop int64) { buf := make([]byte, indexEntrySize) - fmt.Printf("|-----------------|\n") - fmt.Printf("| fileno | offset |\n") - fmt.Printf("|--------+--------|\n") + fmt.Printf("| number | fileno | offset |\n") + fmt.Printf("|--------|--------|--------|\n") - for i := uint64(0); ; i++ { + for i := uint64(start); ; i++ { if _, err := t.index.ReadAt(buf, int64(i*indexEntrySize)); err != nil { break } var entry indexEntry entry.unmarshalBinary(buf) - fmt.Printf("| %03d | %03d | \n", entry.filenum, entry.offset) - if i > 100 { - fmt.Printf(" ... \n") + fmt.Printf("| %03d | %03d | %03d | \n", i, entry.filenum, entry.offset) + if stop > 0 && i >= uint64(stop) { break } } - fmt.Printf("|-----------------|\n") + fmt.Printf("|--------------------------|\n") } diff --git a/core/rawdb/freezer_table_test.go b/core/rawdb/freezer_table_test.go index b22c58e138..0df28f236d 100644 --- a/core/rawdb/freezer_table_test.go +++ b/core/rawdb/freezer_table_test.go @@ -18,10 +18,13 @@ package rawdb import ( "bytes" + "encoding/binary" "fmt" + "io/ioutil" "math/rand" "os" "path/filepath" + "sync" "testing" "time" @@ -525,7 +528,7 @@ func TestOffset(t *testing.T) { f.Append(4, getChunk(20, 0xbb)) f.Append(5, getChunk(20, 0xaa)) - f.printIndex() + f.DumpIndex(0, 100) f.Close() } // Now crop it. @@ -572,7 +575,7 @@ func TestOffset(t *testing.T) { if err != nil { t.Fatal(err) } - f.printIndex() + f.DumpIndex(0, 100) // It should allow writing item 6 f.Append(numDeleted+2, getChunk(20, 0x99)) @@ -637,6 +640,55 @@ func TestOffset(t *testing.T) { // 1. have data files d0, d1, d2, d3 // 2. remove d2,d3 // -// However, all 'normal' failure modes arising due to failing to sync() or save a file should be -// handled already, and the case described above can only (?) happen if an external process/user -// deletes files from the filesystem. +// However, all 'normal' failure modes arising due to failing to sync() or save a file +// should be handled already, and the case described above can only (?) happen if an +// external process/user deletes files from the filesystem. + +// TestAppendTruncateParallel is a test to check if the Append/truncate operations are +// racy. +// +// The reason why it's not a regular fuzzer, within tests/fuzzers, is that it is dependent +// on timing rather than 'clever' input -- there's no determinism. +func TestAppendTruncateParallel(t *testing.T) { + dir, err := ioutil.TempDir("", "freezer") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(dir) + + f, err := newCustomTable(dir, "tmp", metrics.NilMeter{}, metrics.NilMeter{}, metrics.NilGauge{}, 8, true) + if err != nil { + t.Fatal(err) + } + + fill := func(mark uint64) []byte { + data := make([]byte, 8) + binary.LittleEndian.PutUint64(data, mark) + return data + } + + for i := 0; i < 5000; i++ { + f.truncate(0) + data0 := fill(0) + f.Append(0, data0) + data1 := fill(1) + + var wg sync.WaitGroup + wg.Add(2) + go func() { + f.truncate(0) + wg.Done() + }() + go func() { + f.Append(1, data1) + wg.Done() + }() + wg.Wait() + + if have, err := f.Retrieve(0); err == nil { + if !bytes.Equal(have, data0) { + t.Fatalf("have %x want %x", have, data0) + } + } + } +} diff --git a/core/rawdb/schema.go b/core/rawdb/schema.go index 0b411057f8..2505ce90b9 100644 --- a/core/rawdb/schema.go +++ b/core/rawdb/schema.go @@ -45,6 +45,9 @@ var ( // fastTrieProgressKey tracks the number of trie entries imported during fast sync. 
 	fastTrieProgressKey = []byte("TrieSync")
 
+	// snapshotDisabledKey flags that the snapshot should not be maintained due to initial sync.
+	snapshotDisabledKey = []byte("SnapshotDisabled")
+
 	// snapshotRootKey tracks the hash of the last snapshot.
 	snapshotRootKey = []byte("SnapshotRoot")
 
@@ -114,9 +117,9 @@ const (
 	freezerDifficultyTable = "diffs"
 )
 
-// freezerNoSnappy configures whether compression is disabled for the ancient-tables.
+// FreezerNoSnappy configures whether compression is disabled for the ancient-tables.
 // Hashes and difficulties don't compress well.
-var freezerNoSnappy = map[string]bool{
+var FreezerNoSnappy = map[string]bool{
 	freezerHeaderTable: false,
 	freezerHashTable:   true,
 	freezerBodiesTable: false,
diff --git a/core/state/snapshot/conversion.go b/core/state/snapshot/conversion.go
index bb87ecddf1..f70cbf1e68 100644
--- a/core/state/snapshot/conversion.go
+++ b/core/state/snapshot/conversion.go
@@ -322,7 +322,7 @@ func generateTrieRoot(db ethdb.KeyValueWriter, it Iterator, account common.Hash,
 			return
 		}
 		if !bytes.Equal(account.Root, subroot.Bytes()) {
-			results <- fmt.Errorf("invalid subroot(%x), want %x, got %x", it.Hash(), account.Root, subroot)
+			results <- fmt.Errorf("invalid subroot(path %x), want %x, have %x", hash, account.Root, subroot)
 			return
 		}
 		results <- nil
diff --git a/core/state/snapshot/generate.go b/core/state/snapshot/generate.go
index 2b41dd5513..5dce011334 100644
--- a/core/state/snapshot/generate.go
+++ b/core/state/snapshot/generate.go
@@ -19,17 +19,21 @@ package snapshot
 import (
 	"bytes"
 	"encoding/binary"
+	"errors"
 	"fmt"
 	"math/big"
 	"time"
 
 	"github.com/VictoriaMetrics/fastcache"
 	"github.com/ethereum/go-ethereum/common"
+	"github.com/ethereum/go-ethereum/common/hexutil"
 	"github.com/ethereum/go-ethereum/common/math"
 	"github.com/ethereum/go-ethereum/core/rawdb"
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/ethdb"
+	"github.com/ethereum/go-ethereum/ethdb/memorydb"
 	"github.com/ethereum/go-ethereum/log"
+	"github.com/ethereum/go-ethereum/metrics"
 	"github.com/ethereum/go-ethereum/rlp"
 	"github.com/ethereum/go-ethereum/trie"
 )
@@ -40,17 +44,63 @@ var (
 	// emptyCode is the known hash of the empty EVM bytecode.
 	emptyCode = crypto.Keccak256Hash(nil)
+
+	// accountCheckRange is the upper limit of the number of accounts involved in
+	// each range check. This value is estimated based on experience. If it is too
+	// large, the failure rate of the range proof will increase; if it is too
+	// small, the efficiency of state recovery will decrease.
+	accountCheckRange = 128
+
+	// storageCheckRange is the upper limit of the number of storage slots involved
+	// in each range check. This value is estimated based on experience. If it is too
+	// large, the failure rate of the range proof will increase; if it is too
+	// small, the efficiency of state recovery will decrease.
+	storageCheckRange = 1024
+
+	// errMissingTrie is returned if the target trie is missing while the generation
+	// is running. In this case the generation is aborted and waits for the new signal.
+	errMissingTrie = errors.New("missing trie")
+)
+
+// Metrics in generation
+var (
+	snapGeneratedAccountMeter     = metrics.NewRegisteredMeter("state/snapshot/generation/account/generated", nil)
+	snapRecoveredAccountMeter     = metrics.NewRegisteredMeter("state/snapshot/generation/account/recovered", nil)
+	snapWipedAccountMeter         = metrics.NewRegisteredMeter("state/snapshot/generation/account/wiped", nil)
+	snapMissallAccountMeter       = metrics.NewRegisteredMeter("state/snapshot/generation/account/missall", nil)
+	snapGeneratedStorageMeter     = metrics.NewRegisteredMeter("state/snapshot/generation/storage/generated", nil)
+	snapRecoveredStorageMeter     = metrics.NewRegisteredMeter("state/snapshot/generation/storage/recovered", nil)
+	snapWipedStorageMeter         = metrics.NewRegisteredMeter("state/snapshot/generation/storage/wiped", nil)
+	snapMissallStorageMeter       = metrics.NewRegisteredMeter("state/snapshot/generation/storage/missall", nil)
+	snapSuccessfulRangeProofMeter = metrics.NewRegisteredMeter("state/snapshot/generation/proof/success", nil)
+	snapFailedRangeProofMeter     = metrics.NewRegisteredMeter("state/snapshot/generation/proof/failure", nil)
+
+	// snapAccountProveCounter measures time spent on the account proving
+	snapAccountProveCounter = metrics.NewRegisteredCounter("state/snapshot/generation/duration/account/prove", nil)
+	// snapAccountTrieReadCounter measures time spent on the account trie iteration
+	snapAccountTrieReadCounter = metrics.NewRegisteredCounter("state/snapshot/generation/duration/account/trieread", nil)
+	// snapAccountSnapReadCounter measures time spent on the snapshot account iteration
+	snapAccountSnapReadCounter = metrics.NewRegisteredCounter("state/snapshot/generation/duration/account/snapread", nil)
+	// snapAccountWriteCounter measures time spent on writing/updating/deleting accounts
+	snapAccountWriteCounter = metrics.NewRegisteredCounter("state/snapshot/generation/duration/account/write", nil)
+	// snapStorageProveCounter measures time spent on storage proving
+	snapStorageProveCounter = metrics.NewRegisteredCounter("state/snapshot/generation/duration/storage/prove", nil)
+	// snapStorageTrieReadCounter measures time spent on the storage trie iteration
+	snapStorageTrieReadCounter = metrics.NewRegisteredCounter("state/snapshot/generation/duration/storage/trieread", nil)
+	// snapStorageSnapReadCounter measures time spent on the snapshot storage iteration
+	snapStorageSnapReadCounter = metrics.NewRegisteredCounter("state/snapshot/generation/duration/storage/snapread", nil)
+	// snapStorageWriteCounter measures time spent on writing/updating/deleting storages
+	snapStorageWriteCounter = metrics.NewRegisteredCounter("state/snapshot/generation/duration/storage/write", nil)
 )
 
 // generatorStats is a collection of statistics gathered by the snapshot generator
 // for logging purposes.
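Before generatorStats below, a quick note on the metrics block above: the meters count discrete events (Mark), while the counters accumulate phase durations in nanoseconds (Inc). A small usage sketch of this go-ethereum metrics API; the demo* names are made up, the calls themselves are the ones this file uses:

package main

import (
	"time"

	"github.com/ethereum/go-ethereum/metrics"
)

var (
	// A meter counts discrete events; rates are derived automatically.
	demoProofMeter = metrics.NewRegisteredMeter("demo/proof/success", nil)
	// A counter here accumulates nanoseconds spent in one phase.
	demoProveTimer = metrics.NewRegisteredCounter("demo/duration/prove", nil)
)

func proveSomething() {
	start := time.Now()
	defer func() {
		// Same pattern as snapAccountProveCounter.Inc(...) above.
		demoProveTimer.Inc(time.Since(start).Nanoseconds())
	}()
	demoProofMeter.Mark(1)
}

func main() {
	proveSomething()
}

Accumulating nanoseconds in a plain counter keeps the instrumentation cheap in hot loops, which appears to be the motivation here.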
type generatorStats struct { - wiping chan struct{} // Notification channel if wiping is in progress origin uint64 // Origin prefix where generation started start time.Time // Timestamp when generation started - accounts uint64 // Number of accounts indexed - slots uint64 // Number of storage slots indexed - storage common.StorageSize // Account and storage slot size + accounts uint64 // Number of accounts indexed(generated or recovered) + slots uint64 // Number of storage slots indexed(generated or recovered) + storage common.StorageSize // Total account and storage slot size(generation or recovery) } // Log creates an contextual log with the given message and the context pulled @@ -94,22 +144,17 @@ func (gs *generatorStats) Log(msg string, root common.Hash, marker []byte) { // generateSnapshot regenerates a brand new snapshot based on an existing state // database and head block asynchronously. The snapshot is returned immediately // and generation is continued in the background until done. -func generateSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, root common.Hash, wiper chan struct{}) *diskLayer { - // Wipe any previously existing snapshot from the database if no wiper is - // currently in progress. - if wiper == nil { - wiper = wipeSnapshot(diskdb, true) - } +func generateSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, root common.Hash) *diskLayer { // Create a new disk layer with an initialized state marker at zero var ( - stats = &generatorStats{wiping: wiper, start: time.Now()} + stats = &generatorStats{start: time.Now()} batch = diskdb.NewBatch() genMarker = []byte{} // Initialized but empty! ) rawdb.WriteSnapshotRoot(batch, root) journalProgress(batch, genMarker, stats) if err := batch.Write(); err != nil { - log.Crit("Failed to write initialized state marker", "error", err) + log.Crit("Failed to write initialized state marker", "err", err) } base := &diskLayer{ diskdb: diskdb, @@ -135,7 +180,6 @@ func journalProgress(db ethdb.KeyValueWriter, marker []byte, stats *generatorSta Marker: marker, } if stats != nil { - entry.Wiping = (stats.wiping != nil) entry.Accounts = stats.accounts entry.Slots = stats.slots entry.Storage = uint64(stats.storage) @@ -159,170 +203,538 @@ func journalProgress(db ethdb.KeyValueWriter, marker []byte, stats *generatorSta rawdb.WriteSnapshotGenerator(db, blob) } -// generate is a background thread that iterates over the state and storage tries, -// constructing the state snapshot. All the arguments are purely for statistics -// gathering and logging, since the method surfs the blocks as they arrive, often -// being restarted. -func (dl *diskLayer) generate(stats *generatorStats) { - // If a database wipe is in operation, wait until it's done - if stats.wiping != nil { - stats.Log("Wiper running, state snapshotting paused", common.Hash{}, dl.genMarker) - select { - // If wiper is done, resume normal mode of operation - case <-stats.wiping: - stats.wiping = nil - stats.start = time.Now() +// proofResult contains the output of range proving which can be used +// for further processing regardless if it is successful or not. 
+type proofResult struct {
+	keys     [][]byte   // The key set of all elements being iterated, even if proving failed
+	vals     [][]byte   // The val set of all elements being iterated, even if proving failed
+	diskMore bool       // Set when the database has extra snapshot states since last iteration
+	trieMore bool       // Set when the trie has extra snapshot states (only meaningful for successful proving)
+	proofErr error      // Indicator whether the given state range is valid or not
+	tr       *trie.Trie // The trie, in case the trie was resolved by the prover (may be nil)
+}
-
+// valid reports whether the range proof was successful.
+func (result *proofResult) valid() bool {
+	return result.proofErr == nil
+}
+
+// last returns the last verified element key regardless of whether the range proof
+// was successful or not. Nil is returned if nothing was involved in the proving.
+func (result *proofResult) last() []byte {
+	var last []byte
+	if len(result.keys) > 0 {
+		last = result.keys[len(result.keys)-1]
+	}
+	return last
+}
+
+// forEach iterates all the visited elements and applies the given callback on them.
+// The iteration is aborted if the callback returns a non-nil error.
+func (result *proofResult) forEach(callback func(key []byte, val []byte) error) error {
+	for i := 0; i < len(result.keys); i++ {
+		key, val := result.keys[i], result.vals[i]
+		if err := callback(key, val); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// proveRange proves that the snapshot segment with a particular prefix is "valid".
+// The iteration start point will be assigned if the iterator is restored from
+// the last interruption. Max will be assigned in order to limit the maximum
+// amount of data involved in each iteration.
+//
+// The proof result will be returned if the range proving is finished, otherwise
+// the error will be returned to abort the entire procedure.
+func (dl *diskLayer) proveRange(stats *generatorStats, root common.Hash, prefix []byte, kind string, origin []byte, max int, valueConvertFn func([]byte) ([]byte, error)) (*proofResult, error) {
+	var (
+		keys     [][]byte
+		vals     [][]byte
+		proof    = rawdb.NewMemoryDatabase()
+		diskMore = false
+	)
+	iter := dl.diskdb.NewIterator(prefix, origin)
+	defer iter.Release()
+
+	var start = time.Now()
+	for iter.Next() {
+		key := iter.Key()
+		if len(key) != len(prefix)+common.HashLength {
+			continue
+		}
+		if len(keys) == max {
+			// Break if we've reached the max size, and signal that we're not
+			// done yet.
+			diskMore = true
+			break
+		}
+		keys = append(keys, common.CopyBytes(key[len(prefix):]))
+
+		if valueConvertFn == nil {
+			vals = append(vals, common.CopyBytes(iter.Value()))
+		} else {
+			val, err := valueConvertFn(iter.Value())
+			if err != nil {
+				// Special case: the state data is corrupted (invalid slim-format account).
+				// Don't abort the entire procedure directly; instead, let the fallback
+				// generation heal the invalid data.
+				//
+				// Append the original value here to ensure that the numbers of keys
+				// and values stay equal.
+ vals = append(vals, common.CopyBytes(iter.Value())) + log.Error("Failed to convert account state data", "err", err) + } else { + vals = append(vals, val) + } } } - // Create an account and state iterator pointing to the current generator marker - accTrie, err := trie.NewSecure(dl.root, dl.triedb) + // Update metrics for database iteration and merkle proving + if kind == "storage" { + snapStorageSnapReadCounter.Inc(time.Since(start).Nanoseconds()) + } else { + snapAccountSnapReadCounter.Inc(time.Since(start).Nanoseconds()) + } + defer func(start time.Time) { + if kind == "storage" { + snapStorageProveCounter.Inc(time.Since(start).Nanoseconds()) + } else { + snapAccountProveCounter.Inc(time.Since(start).Nanoseconds()) + } + }(time.Now()) + + // The snap state is exhausted, pass the entire key/val set for verification + if origin == nil && !diskMore { + stackTr := trie.NewStackTrie(nil) + for i, key := range keys { + stackTr.TryUpdate(key, vals[i]) + } + if gotRoot := stackTr.Hash(); gotRoot != root { + return &proofResult{ + keys: keys, + vals: vals, + proofErr: fmt.Errorf("wrong root: have %#x want %#x", gotRoot, root), + }, nil + } + return &proofResult{keys: keys, vals: vals}, nil + } + // Snap state is chunked, generate edge proofs for verification. + tr, err := trie.New(root, dl.triedb) if err != nil { - // The account trie is missing (GC), surf the chain until one becomes available stats.Log("Trie missing, state snapshotting paused", dl.root, dl.genMarker) + return nil, errMissingTrie + } + // Firstly find out the key of last iterated element. + var last []byte + if len(keys) > 0 { + last = keys[len(keys)-1] + } + // Generate the Merkle proofs for the first and last element + if origin == nil { + origin = common.Hash{}.Bytes() + } + if err := tr.Prove(origin, 0, proof); err != nil { + log.Debug("Failed to prove range", "kind", kind, "origin", origin, "err", err) + return &proofResult{ + keys: keys, + vals: vals, + diskMore: diskMore, + proofErr: err, + tr: tr, + }, nil + } + if last != nil { + if err := tr.Prove(last, 0, proof); err != nil { + log.Debug("Failed to prove range", "kind", kind, "last", last, "err", err) + return &proofResult{ + keys: keys, + vals: vals, + diskMore: diskMore, + proofErr: err, + tr: tr, + }, nil + } + } + // Verify the snapshot segment with range prover, ensure that all flat states + // in this range correspond to merkle trie. + cont, err := trie.VerifyRangeProof(root, origin, last, keys, vals, proof) + return &proofResult{ + keys: keys, + vals: vals, + diskMore: diskMore, + trieMore: cont, + proofErr: err, + tr: tr}, + nil +} - abort := <-dl.genAbort - abort <- stats - return +// onStateCallback is a function that is called by generateRange, when processing a range of +// accounts or storage slots. For each element, the callback is invoked. +// If 'delete' is true, then this element (and potential slots) needs to be deleted from the snapshot. +// If 'write' is true, then this element needs to be updated with the 'val'. +// If 'write' is false, then this element is already correct, and needs no update. However, +// for accounts, the storage trie of the account needs to be checked. +// The 'val' is the canonical encoding of the value (not the slim format for accounts) +type onStateCallback func(key []byte, val []byte, write bool, delete bool) error + +// generateRange generates the state segment with particular prefix. 
Generation can
+// either verify the correctness of the existing state through a range proof and
+// skip generation, or iterate the trie to regenerate the state on demand.
+func (dl *diskLayer) generateRange(root common.Hash, prefix []byte, kind string, origin []byte, max int, stats *generatorStats, onState onStateCallback, valueConvertFn func([]byte) ([]byte, error)) (bool, []byte, error) {
+	// Use the range prover to check the validity of the flat state in the range
+	result, err := dl.proveRange(stats, root, prefix, kind, origin, max, valueConvertFn)
+	if err != nil {
+		return false, nil, err
+	}
+	last := result.last()
+
+	// Construct contextual logger
+	logCtx := []interface{}{"kind", kind, "prefix", hexutil.Encode(prefix)}
+	if len(origin) > 0 {
+		logCtx = append(logCtx, "origin", hexutil.Encode(origin))
+	}
+	logger := log.New(logCtx...)
+
+	// The range prover says the range is correct, skip trie iteration
+	if result.valid() {
+		snapSuccessfulRangeProofMeter.Mark(1)
+		logger.Trace("Proved state range", "last", hexutil.Encode(last))
+
+		// The verification passed, process each state with the given
+		// callback function. If a state represents a contract, the
+		// corresponding storage check will be performed in the callback
+		if err := result.forEach(func(key []byte, val []byte) error { return onState(key, val, false, false) }); err != nil {
+			return false, nil, err
+		}
+		// Only abort the iteration when both database and trie are exhausted
+		return !result.diskMore && !result.trieMore, last, nil
+	}
+	logger.Trace("Detected outdated state range", "last", hexutil.Encode(last), "err", result.proofErr)
+	snapFailedRangeProofMeter.Mark(1)
+
+	// Special case: the entire trie is missing. In the original trie scheme all
+	// duplicated subtries are filtered out, so only one copy of the data is
+	// stored, while in the snapshot model the storage tries belonging to
+	// different contracts are all kept, even when they are duplicated. Track
+	// this to remove, to some extent, the noise from the statistics.
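Stepping back from the failure path for a moment (the special-case tracking continues right below): proveRange above feeds the first and last keys plus their Merkle edge proofs into trie.VerifyRangeProof, which checks that the flat key/value pairs are consistent with the trie root. A self-contained sketch of that call; the trie contents here are made up, but the API calls are the ones used in this patch:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb/memorydb"
	"github.com/ethereum/go-ethereum/trie"
)

func main() {
	// A toy trie standing in for an account/storage trie.
	tr, _ := trie.New(common.Hash{}, trie.NewDatabase(memorydb.New()))
	keys := [][]byte{
		common.HexToHash("0x01").Bytes(),
		common.HexToHash("0x02").Bytes(),
		common.HexToHash("0x03").Bytes(),
	}
	vals := [][]byte{[]byte("a"), []byte("b"), []byte("c")}
	for i := range keys {
		tr.Update(keys[i], vals[i])
	}
	root := tr.Hash()

	// Edge proofs for the first and last element, as proveRange generates.
	proof := rawdb.NewMemoryDatabase()
	if err := tr.Prove(keys[0], 0, proof); err != nil {
		panic(err)
	}
	if err := tr.Prove(keys[len(keys)-1], 0, proof); err != nil {
		panic(err)
	}
	// more reports whether the trie holds elements beyond the last key.
	more, err := trie.VerifyRangeProof(root, keys[0], keys[len(keys)-1], keys, vals, proof)
	fmt.Println(more, err) // expect: false <nil>
}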
+ if origin == nil && last == nil { + meter := snapMissallAccountMeter + if kind == "storage" { + meter = snapMissallStorageMeter + } + meter.Mark(1) + } + + // We use the snap data to build up a cache which can be used by the + // main account trie as a primary lookup when resolving hashes + var snapNodeCache ethdb.KeyValueStore + if len(result.keys) > 0 { + snapNodeCache = memorydb.New() + snapTrieDb := trie.NewDatabase(snapNodeCache) + snapTrie, _ := trie.New(common.Hash{}, snapTrieDb) + for i, key := range result.keys { + snapTrie.Update(key, result.vals[i]) + } + root, _ := snapTrie.Commit(nil) + snapTrieDb.Commit(root, false, nil) + } + tr := result.tr + if tr == nil { + tr, err = trie.New(root, dl.triedb) + if err != nil { + stats.Log("Trie missing, state snapshotting paused", dl.root, dl.genMarker) + return false, nil, errMissingTrie + } + } + + var ( + trieMore bool + nodeIt = tr.NodeIterator(origin) + iter = trie.NewIterator(nodeIt) + kvkeys, kvvals = result.keys, result.vals + + // counters + count = 0 // number of states delivered by iterator + created = 0 // states created from the trie + updated = 0 // states updated from the trie + deleted = 0 // states not in trie, but were in snapshot + untouched = 0 // states already correct + + // timers + start = time.Now() + internal time.Duration + ) + nodeIt.AddResolver(snapNodeCache) + for iter.Next() { + if last != nil && bytes.Compare(iter.Key, last) > 0 { + trieMore = true + break + } + count++ + write := true + created++ + for len(kvkeys) > 0 { + if cmp := bytes.Compare(kvkeys[0], iter.Key); cmp < 0 { + // delete the key + istart := time.Now() + if err := onState(kvkeys[0], nil, false, true); err != nil { + return false, nil, err + } + kvkeys = kvkeys[1:] + kvvals = kvvals[1:] + deleted++ + internal += time.Since(istart) + continue + } else if cmp == 0 { + // the snapshot key can be overwritten + created-- + if write = !bytes.Equal(kvvals[0], iter.Value); write { + updated++ + } else { + untouched++ + } + kvkeys = kvkeys[1:] + kvvals = kvvals[1:] + } + break + } + istart := time.Now() + if err := onState(iter.Key, iter.Value, write, false); err != nil { + return false, nil, err + } + internal += time.Since(istart) + } + if iter.Err != nil { + return false, nil, iter.Err + } + // Delete all stale snapshot states remaining + istart := time.Now() + for _, key := range kvkeys { + if err := onState(key, nil, false, true); err != nil { + return false, nil, err + } + deleted += 1 + } + internal += time.Since(istart) + + // Update metrics for counting trie iteration + if kind == "storage" { + snapStorageTrieReadCounter.Inc((time.Since(start) - internal).Nanoseconds()) + } else { + snapAccountTrieReadCounter.Inc((time.Since(start) - internal).Nanoseconds()) + } + logger.Debug("Regenerated state range", "root", root, "last", hexutil.Encode(last), + "count", count, "created", created, "updated", updated, "untouched", untouched, "deleted", deleted) - var accMarker []byte + // If there are either more trie items, or there are more snap items + // (in the next segment), then we need to keep working + return !trieMore && !result.diskMore, last, nil +} + +// generate is a background thread that iterates over the state and storage tries, +// constructing the state snapshot. All the arguments are purely for statistics +// gathering and logging, since the method surfs the blocks as they arrive, often +// being restarted. 
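One detail from the regeneration path above, before the generate function below: when the proof fails, the code rebuilds a throwaway in-memory trie from the flat snapshot data (snapNodeCache) and registers it as a resolver on the node iterator, so subtries that were already correct resolve from memory instead of disk. A standalone sketch of that construction using the same geth APIs; for brevity the "real" trie here is read back from the cache itself:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethdb/memorydb"
	"github.com/ethereum/go-ethereum/trie"
)

func main() {
	// Flat key/value data standing in for the recovered snapshot range.
	keys := [][]byte{common.HexToHash("0x01").Bytes(), common.HexToHash("0x02").Bytes()}
	vals := [][]byte{[]byte("a"), []byte("b")}

	// Build an in-memory trie out of the flat data...
	cache := memorydb.New()
	cacheDb := trie.NewDatabase(cache)
	snapTrie, _ := trie.New(common.Hash{}, cacheDb)
	for i, key := range keys {
		snapTrie.Update(key, vals[i])
	}
	root, _ := snapTrie.Commit(nil)
	cacheDb.Commit(root, false, nil)

	// ...and register it as a node resolver for the iteration.
	tr, _ := trie.New(root, cacheDb)
	nodeIt := tr.NodeIterator(nil)
	nodeIt.AddResolver(cache)
	it := trie.NewIterator(nodeIt)
	for it.Next() {
		fmt.Printf("%x => %s\n", it.Key, it.Value)
	}
}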
+func (dl *diskLayer) generate(stats *generatorStats) { + var ( + accMarker []byte + accountRange = accountCheckRange + ) if len(dl.genMarker) > 0 { // []byte{} is the start, use nil for that - accMarker = dl.genMarker[:common.HashLength] + // Always reset the initial account range as 1 + // whenever recover from the interruption. + accMarker, accountRange = dl.genMarker[:common.HashLength], 1 } - accIt := trie.NewIterator(accTrie.NodeIterator(accMarker)) - batch := dl.diskdb.NewBatch() + var ( + batch = dl.diskdb.NewBatch() + logged = time.Now() + accOrigin = common.CopyBytes(accMarker) + abort chan *generatorStats + ) + stats.Log("Resuming state snapshot generation", dl.root, dl.genMarker) - // Iterate from the previous marker and continue generating the state snapshot - logged := time.Now() - for accIt.Next() { - // Retrieve the current account and flatten it into the internal format - accountHash := common.BytesToHash(accIt.Key) + checkAndFlush := func(currentLocation []byte) error { + select { + case abort = <-dl.genAbort: + default: + } + if batch.ValueSize() > ethdb.IdealBatchSize || abort != nil { + // Flush out the batch anyway no matter it's empty or not. + // It's possible that all the states are recovered and the + // generation indeed makes progress. + journalProgress(batch, currentLocation, stats) + + if err := batch.Write(); err != nil { + return err + } + batch.Reset() + + dl.lock.Lock() + dl.genMarker = currentLocation + dl.lock.Unlock() + + if abort != nil { + stats.Log("Aborting state snapshot generation", dl.root, currentLocation) + return errors.New("aborted") + } + } + if time.Since(logged) > 8*time.Second { + stats.Log("Generating state snapshot", dl.root, currentLocation) + logged = time.Now() + } + return nil + } + + onAccount := func(key []byte, val []byte, write bool, delete bool) error { + var ( + start = time.Now() + accountHash = common.BytesToHash(key) + ) + if delete { + rawdb.DeleteAccountSnapshot(batch, accountHash) + snapWipedAccountMeter.Mark(1) + // Ensure that any previous snapshot storage values are cleared + prefix := append(rawdb.SnapshotStoragePrefix, accountHash.Bytes()...) 
+ keyLen := len(rawdb.SnapshotStoragePrefix) + 2*common.HashLength + if err := wipeKeyRange(dl.diskdb, "storage", prefix, nil, nil, keyLen, snapWipedStorageMeter, false); err != nil { + return err + } + snapAccountWriteCounter.Inc(time.Since(start).Nanoseconds()) + return nil + } + // Retrieve the current account and flatten it into the internal format var acc struct { Nonce uint64 Balance *big.Int Root common.Hash CodeHash []byte } - if err := rlp.DecodeBytes(accIt.Value, &acc); err != nil { + if err := rlp.DecodeBytes(val, &acc); err != nil { log.Crit("Invalid account encountered during snapshot creation", "err", err) } - data := SlimAccountRLP(acc.Nonce, acc.Balance, acc.Root, acc.CodeHash) - // If the account is not yet in-progress, write it out if accMarker == nil || !bytes.Equal(accountHash[:], accMarker) { - rawdb.WriteAccountSnapshot(batch, accountHash, data) - stats.storage += common.StorageSize(1 + common.HashLength + len(data)) + dataLen := len(val) // Approximate size, saves us a round of RLP-encoding + if !write { + if bytes.Equal(acc.CodeHash, emptyCode[:]) { + dataLen -= 32 + } + if acc.Root == emptyRoot { + dataLen -= 32 + } + snapRecoveredAccountMeter.Mark(1) + } else { + data := SlimAccountRLP(acc.Nonce, acc.Balance, acc.Root, acc.CodeHash) + dataLen = len(data) + rawdb.WriteAccountSnapshot(batch, accountHash, data) + snapGeneratedAccountMeter.Mark(1) + } + stats.storage += common.StorageSize(1 + common.HashLength + dataLen) stats.accounts++ } // If we've exceeded our batch allowance or termination was requested, flush to disk - var abort chan *generatorStats - select { - case abort = <-dl.genAbort: - default: - } - if batch.ValueSize() > ethdb.IdealBatchSize || abort != nil { - // Only write and set the marker if we actually did something useful - if batch.ValueSize() > 0 { - // Ensure the generator entry is in sync with the data - marker := accountHash[:] - journalProgress(batch, marker, stats) - - batch.Write() - batch.Reset() - - dl.lock.Lock() - dl.genMarker = marker - dl.lock.Unlock() - } - if abort != nil { - stats.Log("Aborting state snapshot generation", dl.root, accountHash[:]) - abort <- stats - return - } + if err := checkAndFlush(accountHash[:]); err != nil { + return err } - // If the account is in-progress, continue where we left off (otherwise iterate all) - if acc.Root != emptyRoot { - storeTrie, err := trie.NewSecure(acc.Root, dl.triedb) - if err != nil { - log.Error("Generator failed to access storage trie", "root", dl.root, "account", accountHash, "stroot", acc.Root, "err", err) - abort := <-dl.genAbort - abort <- stats - return + // If the iterated account is a contract, run a further loop to + // verify or regenerate the contract storage. + if acc.Root == emptyRoot { + // If the root is empty, we still need to ensure that any previous snapshot + // storage values are cleared + // TODO: investigate whether this can be avoided; it will be very costly since it + // affects every single EOA account + // - Perhaps we can avoid this where codeHash is emptyCode + prefix := append(rawdb.SnapshotStoragePrefix, accountHash.Bytes()...)
+ keyLen := len(rawdb.SnapshotStoragePrefix) + 2*common.HashLength + if err := wipeKeyRange(dl.diskdb, "storage", prefix, nil, nil, keyLen, snapWipedStorageMeter, false); err != nil { + return err } + snapAccountWriteCounter.Inc(time.Since(start).Nanoseconds()) + } else { + snapAccountWriteCounter.Inc(time.Since(start).Nanoseconds()) + var storeMarker []byte if accMarker != nil && bytes.Equal(accountHash[:], accMarker) && len(dl.genMarker) > common.HashLength { storeMarker = dl.genMarker[common.HashLength:] } - storeIt := trie.NewIterator(storeTrie.NodeIterator(storeMarker)) - for storeIt.Next() { - rawdb.WriteStorageSnapshot(batch, accountHash, common.BytesToHash(storeIt.Key), storeIt.Value) - stats.storage += common.StorageSize(1 + 2*common.HashLength + len(storeIt.Value)) + onStorage := func(key []byte, val []byte, write bool, delete bool) error { + defer func(start time.Time) { + snapStorageWriteCounter.Inc(time.Since(start).Nanoseconds()) + }(time.Now()) + + if delete { + rawdb.DeleteStorageSnapshot(batch, accountHash, common.BytesToHash(key)) + snapWipedStorageMeter.Mark(1) + return nil + } + if write { + rawdb.WriteStorageSnapshot(batch, accountHash, common.BytesToHash(key), val) + snapGeneratedStorageMeter.Mark(1) + } else { + snapRecoveredStorageMeter.Mark(1) + } + stats.storage += common.StorageSize(1 + 2*common.HashLength + len(val)) stats.slots++ // If we've exceeded our batch allowance or termination was requested, flush to disk - var abort chan *generatorStats - select { - case abort = <-dl.genAbort: - default: - } - if batch.ValueSize() > ethdb.IdealBatchSize || abort != nil { - // Only write and set the marker if we actually did something useful - if batch.ValueSize() > 0 { - // Ensure the generator entry is in sync with the data - marker := append(accountHash[:], storeIt.Key...) - journalProgress(batch, marker, stats) - - batch.Write() - batch.Reset() - - dl.lock.Lock() - dl.genMarker = marker - dl.lock.Unlock() - } - if abort != nil { - stats.Log("Aborting state snapshot generation", dl.root, append(accountHash[:], storeIt.Key...)) - abort <- stats - return - } - if time.Since(logged) > 8*time.Second { - stats.Log("Generating state snapshot", dl.root, append(accountHash[:], storeIt.Key...)) - logged = time.Now() - } + if err := checkAndFlush(append(accountHash[:], key...)); err != nil { + return err } + return nil } - if err := storeIt.Err; err != nil { - log.Error("Generator failed to iterate storage trie", "accroot", dl.root, "acchash", common.BytesToHash(accIt.Key), "stroot", acc.Root, "err", err) - abort := <-dl.genAbort - abort <- stats - return + var storeOrigin = common.CopyBytes(storeMarker) + for { + exhausted, last, err := dl.generateRange(acc.Root, append(rawdb.SnapshotStoragePrefix, accountHash.Bytes()...), "storage", storeOrigin, storageCheckRange, stats, onStorage, nil) + if err != nil { + return err + } + if exhausted { + break + } + if storeOrigin = increaseKey(last); storeOrigin == nil { + break // special case, the last is 0xffffffff...fff + } } } - if time.Since(logged) > 8*time.Second { - stats.Log("Generating state snapshot", dl.root, accIt.Key) - logged = time.Now() - } // Some account processed, unmark the marker accMarker = nil + return nil } - if err := accIt.Err; err != nil { - log.Error("Generator failed to iterate account trie", "root", dl.root, "err", err) - abort := <-dl.genAbort - abort <- stats - return + + // Global loop for regenerating the entire state trie + all layered storage tries.
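The global loop below repeatedly calls generateRange and advances the next origin with increaseKey (the helper appears a few hunks further down in this diff): the last processed key is bumped by one, and iteration stops when the addition overflows. A small, self-contained illustration of those semantics, mirroring the helper's body:

    package main

    import "fmt"

    // increaseKey mirrors the generator's helper: it increments the key by one,
    // returning nil when the addition overflows (the key was all 0xff bytes).
    func increaseKey(key []byte) []byte {
        for i := len(key) - 1; i >= 0; i-- {
            key[i]++
            if key[i] != 0x0 {
                return key
            }
        }
        return nil
    }

    func main() {
        fmt.Printf("%x\n", increaseKey([]byte{0x01, 0xff})) // 01ff -> 0200
        if increaseKey([]byte{0xff, 0xff}) == nil {
            fmt.Println("overflow: key space exhausted, stop iterating") // ffff -> nil
        }
    }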
+ for { + exhausted, last, err := dl.generateRange(dl.root, rawdb.SnapshotAccountPrefix, "account", accOrigin, accountRange, stats, onAccount, FullAccountRLP) + // The procedure is aborted, either by an external signal or an internal error + if err != nil { + if abort == nil { // aborted by internal error, wait for the signal + abort = <-dl.genAbort + } + abort <- stats + return + } + // Stop the procedure if the entire snapshot is generated + if exhausted { + break + } + if accOrigin = increaseKey(last); accOrigin == nil { + break // special case, the last is 0xffffffff...fff + } + accountRange = accountCheckRange } // Snapshot fully generated, set the marker to nil. // Note that even if there is nothing to commit, we persist the // generator anyway to mark the snapshot as complete. journalProgress(batch, nil, stats) - batch.Write() + if err := batch.Write(); err != nil { + log.Error("Failed to flush batch", "err", err) + abort = <-dl.genAbort + abort <- stats + return + } + batch.Reset() log.Info("Generated state snapshot", "accounts", stats.accounts, "slots", stats.slots, "storage", stats.storage, "elapsed", common.PrettyDuration(time.Since(stats.start))) @@ -332,6 +744,18 @@ func (dl *diskLayer) generate(stats *generatorStats) { dl.lock.Unlock() // Someone will be looking for us, wait it out - abort := <-dl.genAbort + abort = <-dl.genAbort abort <- nil } + +// increaseKey increases the input key by one. It returns nil if the entire +// addition operation overflows. +func increaseKey(key []byte) []byte { + for i := len(key) - 1; i >= 0; i-- { + key[i]++ + if key[i] != 0x0 { + return key + } + } + return nil +} diff --git a/core/state/snapshot/generate_test.go b/core/state/snapshot/generate_test.go index 03263f3976..3a669085f7 100644 --- a/core/state/snapshot/generate_test.go +++ b/core/state/snapshot/generate_test.go @@ -17,16 +17,361 @@ package snapshot import ( + "fmt" "math/big" + "os" "testing" "time" "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/core/rawdb" + "github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/ethdb/memorydb" + "github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/rlp" "github.com/ethereum/go-ethereum/trie" + "golang.org/x/crypto/sha3" ) +// Tests snapshot generation from an empty database. +func TestGeneration(t *testing.T) { + // We can't use statedb to make a test trie (circular dependency), so make + // a fake one manually. We're going with a small account trie of 3 accounts, + // two of which also have the same 3-slot storage trie attached.
+ var ( + diskdb = memorydb.New() + triedb = trie.NewDatabase(diskdb) + ) + stTrie, _ := trie.NewSecure(common.Hash{}, triedb) + stTrie.Update([]byte("key-1"), []byte("val-1")) // 0x1314700b81afc49f94db3623ef1df38f3ed18b73a1b7ea2f6c095118cf6118a0 + stTrie.Update([]byte("key-2"), []byte("val-2")) // 0x18a0f4d79cff4459642dd7604f303886ad9d77c30cf3d7d7cedb3a693ab6d371 + stTrie.Update([]byte("key-3"), []byte("val-3")) // 0x51c71a47af0695957647fb68766d0becee77e953df17c29b3c2f25436f055c78 + stTrie.Commit(nil) // Root: 0xddefcd9376dd029653ef384bd2f0a126bb755fe84fdcc9e7cf421ba454f2bc67 + + accTrie, _ := trie.NewSecure(common.Hash{}, triedb) + acc := &Account{Balance: big.NewInt(1), Root: stTrie.Hash().Bytes(), CodeHash: emptyCode.Bytes()} + val, _ := rlp.EncodeToBytes(acc) + accTrie.Update([]byte("acc-1"), val) // 0x9250573b9c18c664139f3b6a7a8081b7d8f8916a8fcc5d94feec6c29f5fd4e9e + + acc = &Account{Balance: big.NewInt(2), Root: emptyRoot.Bytes(), CodeHash: emptyCode.Bytes()} + val, _ = rlp.EncodeToBytes(acc) + accTrie.Update([]byte("acc-2"), val) // 0x65145f923027566669a1ae5ccac66f945b55ff6eaeb17d2ea8e048b7d381f2d7 + + acc = &Account{Balance: big.NewInt(3), Root: stTrie.Hash().Bytes(), CodeHash: emptyCode.Bytes()} + val, _ = rlp.EncodeToBytes(acc) + accTrie.Update([]byte("acc-3"), val) // 0x50815097425d000edfc8b3a4a13e175fc2bdcfee8bdfbf2d1ff61041d3c235b2 + root, _ := accTrie.Commit(nil) // Root: 0xe3712f1a226f3782caca78ca770ccc19ee000552813a9f59d479f8611db9b1fd + triedb.Commit(root, false, nil) + + if have, want := root, common.HexToHash("0xe3712f1a226f3782caca78ca770ccc19ee000552813a9f59d479f8611db9b1fd"); have != want { + t.Fatalf("have %#x want %#x", have, want) + } + snap := generateSnapshot(diskdb, triedb, 16, root) + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop +} + +func hashData(input []byte) common.Hash { + var hasher = sha3.NewLegacyKeccak256() + var hash common.Hash + hasher.Reset() + hasher.Write(input) + hasher.Sum(hash[:0]) + return hash +} + +// Tests snapshot generation with existing flat state. +func TestGenerateExistentState(t *testing.T) { + // We can't use statedb to make a test trie (circular dependency), so make + // a fake one manually. We're going with a small account trie of 3 accounts, + // two of which also have the same 3-slot storage trie attached.
+ var ( + diskdb = memorydb.New() + triedb = trie.NewDatabase(diskdb) + ) + stTrie, _ := trie.NewSecure(common.Hash{}, triedb) + stTrie.Update([]byte("key-1"), []byte("val-1")) // 0x1314700b81afc49f94db3623ef1df38f3ed18b73a1b7ea2f6c095118cf6118a0 + stTrie.Update([]byte("key-2"), []byte("val-2")) // 0x18a0f4d79cff4459642dd7604f303886ad9d77c30cf3d7d7cedb3a693ab6d371 + stTrie.Update([]byte("key-3"), []byte("val-3")) // 0x51c71a47af0695957647fb68766d0becee77e953df17c29b3c2f25436f055c78 + stTrie.Commit(nil) // Root: 0xddefcd9376dd029653ef384bd2f0a126bb755fe84fdcc9e7cf421ba454f2bc67 + + accTrie, _ := trie.NewSecure(common.Hash{}, triedb) + acc := &Account{Balance: big.NewInt(1), Root: stTrie.Hash().Bytes(), CodeHash: emptyCode.Bytes()} + val, _ := rlp.EncodeToBytes(acc) + accTrie.Update([]byte("acc-1"), val) // 0x9250573b9c18c664139f3b6a7a8081b7d8f8916a8fcc5d94feec6c29f5fd4e9e + rawdb.WriteAccountSnapshot(diskdb, hashData([]byte("acc-1")), val) + rawdb.WriteStorageSnapshot(diskdb, hashData([]byte("acc-1")), hashData([]byte("key-1")), []byte("val-1")) + rawdb.WriteStorageSnapshot(diskdb, hashData([]byte("acc-1")), hashData([]byte("key-2")), []byte("val-2")) + rawdb.WriteStorageSnapshot(diskdb, hashData([]byte("acc-1")), hashData([]byte("key-3")), []byte("val-3")) + + acc = &Account{Balance: big.NewInt(2), Root: emptyRoot.Bytes(), CodeHash: emptyCode.Bytes()} + val, _ = rlp.EncodeToBytes(acc) + accTrie.Update([]byte("acc-2"), val) // 0x65145f923027566669a1ae5ccac66f945b55ff6eaeb17d2ea8e048b7d381f2d7 + diskdb.Put(hashData([]byte("acc-2")).Bytes(), val) + rawdb.WriteAccountSnapshot(diskdb, hashData([]byte("acc-2")), val) + + acc = &Account{Balance: big.NewInt(3), Root: stTrie.Hash().Bytes(), CodeHash: emptyCode.Bytes()} + val, _ = rlp.EncodeToBytes(acc) + accTrie.Update([]byte("acc-3"), val) // 0x50815097425d000edfc8b3a4a13e175fc2bdcfee8bdfbf2d1ff61041d3c235b2 + rawdb.WriteAccountSnapshot(diskdb, hashData([]byte("acc-3")), val) + rawdb.WriteStorageSnapshot(diskdb, hashData([]byte("acc-3")), hashData([]byte("key-1")), []byte("val-1")) + rawdb.WriteStorageSnapshot(diskdb, hashData([]byte("acc-3")), hashData([]byte("key-2")), []byte("val-2")) + rawdb.WriteStorageSnapshot(diskdb, hashData([]byte("acc-3")), hashData([]byte("key-3")), []byte("val-3")) + + root, _ := accTrie.Commit(nil) // Root: 0xe3712f1a226f3782caca78ca770ccc19ee000552813a9f59d479f8611db9b1fd + triedb.Commit(root, false, nil) + + snap := generateSnapshot(diskdb, triedb, 16, root) + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop +} + +func checkSnapRoot(t *testing.T, snap *diskLayer, trieRoot common.Hash) { + t.Helper() + accIt := snap.AccountIterator(common.Hash{}) + defer accIt.Release() + snapRoot, err := generateTrieRoot(nil, accIt, common.Hash{}, stackTrieGenerate, + func(db ethdb.KeyValueWriter, accountHash, codeHash common.Hash, stat *generateStats) (common.Hash, error) { + storageIt, _ := snap.StorageIterator(accountHash, common.Hash{}) + defer storageIt.Release() + + hash, err := generateTrieRoot(nil, storageIt, accountHash, stackTrieGenerate, nil, stat, false) + if err != nil { + return common.Hash{}, err + } + return hash, nil + }, newGenerateStats(), true) + + if err != nil { + t.Fatal(err) + } + if snapRoot != trieRoot { + t.Fatalf("snaproot: %#x 
!= trieroot %#x", snapRoot, trieRoot) + } +} + +type testHelper struct { + diskdb *memorydb.Database + triedb *trie.Database + accTrie *trie.SecureTrie +} + +func newHelper() *testHelper { + diskdb := memorydb.New() + triedb := trie.NewDatabase(diskdb) + accTrie, _ := trie.NewSecure(common.Hash{}, triedb) + return &testHelper{ + diskdb: diskdb, + triedb: triedb, + accTrie: accTrie, + } +} + +func (t *testHelper) addTrieAccount(acckey string, acc *Account) { + val, _ := rlp.EncodeToBytes(acc) + t.accTrie.Update([]byte(acckey), val) +} + +func (t *testHelper) addSnapAccount(acckey string, acc *Account) { + val, _ := rlp.EncodeToBytes(acc) + key := hashData([]byte(acckey)) + rawdb.WriteAccountSnapshot(t.diskdb, key, val) +} + +func (t *testHelper) addAccount(acckey string, acc *Account) { + t.addTrieAccount(acckey, acc) + t.addSnapAccount(acckey, acc) +} + +func (t *testHelper) addSnapStorage(accKey string, keys []string, vals []string) { + accHash := hashData([]byte(accKey)) + for i, key := range keys { + rawdb.WriteStorageSnapshot(t.diskdb, accHash, hashData([]byte(key)), []byte(vals[i])) + } +} + +func (t *testHelper) makeStorageTrie(keys []string, vals []string) []byte { + stTrie, _ := trie.NewSecure(common.Hash{}, t.triedb) + for i, k := range keys { + stTrie.Update([]byte(k), []byte(vals[i])) + } + root, _ := stTrie.Commit(nil) + return root.Bytes() +} + +func (t *testHelper) Generate() (common.Hash, *diskLayer) { + root, _ := t.accTrie.Commit(nil) + t.triedb.Commit(root, false, nil) + snap := generateSnapshot(t.diskdb, t.triedb, 16, root) + return root, snap +} + +// Tests snapshot generation with existing flat state, where the flat state +// contains some errors: +// - a contract with an empty storage root but storage entries on disk +// - a contract with a non-empty storage root but no storage slots +// - a contract (non-empty storage) missing some storage slots +// - missing at the beginning +// - missing in the middle +// - missing at the end +// - a contract (non-empty storage) with wrong storage slots +// - wrong slots at the beginning +// - wrong slots in the middle +// - wrong slots at the end +// - a contract (non-empty storage) with extra storage slots +// - extra slots at the beginning +// - extra slots in the middle +// - extra slots at the end +func TestGenerateExistentStateWithWrongStorage(t *testing.T) { + helper := newHelper() + stRoot := helper.makeStorageTrie([]string{"key-1", "key-2", "key-3"}, []string{"val-1", "val-2", "val-3"}) + + // Account one, empty root but non-empty database + helper.addAccount("acc-1", &Account{Balance: big.NewInt(1), Root: emptyRoot.Bytes(), CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-1", []string{"key-1", "key-2", "key-3"}, []string{"val-1", "val-2", "val-3"}) + + // Account two, non-empty root but empty database + helper.addAccount("acc-2", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + + // Missing slots + { + // Account three, non-empty root but missing slots at the beginning + helper.addAccount("acc-3", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-3", []string{"key-2", "key-3"}, []string{"val-2", "val-3"}) + + // Account four, non-empty root but missing slots in the middle + helper.addAccount("acc-4", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-4", []string{"key-1", "key-3"}, []string{"val-1", "val-3"}) + + // Account five, non-empty root but missing slots at the end
+ helper.addAccount("acc-5", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-5", []string{"key-1", "key-2"}, []string{"val-1", "val-2"}) + } + + // Wrong storage slots + { + // Account six, non empty root but wrong slots in the beginning + helper.addAccount("acc-6", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-6", []string{"key-1", "key-2", "key-3"}, []string{"badval-1", "val-2", "val-3"}) + + // Account seven, non empty root but wrong slots in the middle + helper.addAccount("acc-7", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-7", []string{"key-1", "key-2", "key-3"}, []string{"val-1", "badval-2", "val-3"}) + + // Account eight, non empty root but wrong slots in the end + helper.addAccount("acc-8", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-8", []string{"key-1", "key-2", "key-3"}, []string{"val-1", "val-2", "badval-3"}) + + // Account 9, non empty root but rotated slots + helper.addAccount("acc-9", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-9", []string{"key-1", "key-2", "key-3"}, []string{"val-1", "val-3", "val-2"}) + } + + // Extra storage slots + { + // Account 10, non empty root but extra slots in the beginning + helper.addAccount("acc-10", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-10", []string{"key-0", "key-1", "key-2", "key-3"}, []string{"val-0", "val-1", "val-2", "val-3"}) + + // Account 11, non empty root but extra slots in the middle + helper.addAccount("acc-11", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-11", []string{"key-1", "key-2", "key-2-1", "key-3"}, []string{"val-1", "val-2", "val-2-1", "val-3"}) + + // Account 12, non empty root but extra slots in the end + helper.addAccount("acc-12", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapStorage("acc-12", []string{"key-1", "key-2", "key-3", "key-4"}, []string{"val-1", "val-2", "val-3", "val-4"}) + } + + root, snap := helper.Generate() + t.Logf("Root: %#x\n", root) // Root = 0x8746cce9fd9c658b2cfd639878ed6584b7a2b3e73bb40f607fcfa156002429a0 + + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop +} + +// Tests that snapshot generation with existent flat state, where the flat state +// contains some errors: +// - miss accounts +// - wrong accounts +// - extra accounts +func TestGenerateExistentStateWithWrongAccounts(t *testing.T) { + helper := newHelper() + stRoot := helper.makeStorageTrie([]string{"key-1", "key-2", "key-3"}, []string{"val-1", "val-2", "val-3"}) + + // Trie accounts [acc-1, acc-2, acc-3, acc-4, acc-6] + // Extra accounts [acc-0, acc-5, acc-7] + + // Missing accounts, only in the trie + { + helper.addTrieAccount("acc-1", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) // Beginning + helper.addTrieAccount("acc-4", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) // Middle + helper.addTrieAccount("acc-6", 
&Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) // End + } + + // Wrong accounts + { + helper.addTrieAccount("acc-2", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapAccount("acc-2", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: common.Hex2Bytes("0x1234")}) + + helper.addTrieAccount("acc-3", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyCode.Bytes()}) + helper.addSnapAccount("acc-3", &Account{Balance: big.NewInt(1), Root: emptyRoot.Bytes(), CodeHash: emptyCode.Bytes()}) + } + + // Extra accounts, only in the snap + { + helper.addSnapAccount("acc-0", &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash: emptyRoot.Bytes()}) // before the beginning + helper.addSnapAccount("acc-5", &Account{Balance: big.NewInt(1), Root: emptyRoot.Bytes(), CodeHash: common.Hex2Bytes("0x1234")}) // Middle + helper.addSnapAccount("acc-7", &Account{Balance: big.NewInt(1), Root: emptyRoot.Bytes(), CodeHash: emptyRoot.Bytes()}) // after the end + } + + root, snap := helper.Generate() + t.Logf("Root: %#x\n", root) // Root = 0x825891472281463511e7ebcc7f109e4f9200c20fa384754e11fd605cd98464e8 + + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop +} + // Tests that snapshot generation errors out correctly in case of a missing trie // node in the account trie. func TestGenerateCorruptAccountTrie(t *testing.T) { @@ -55,7 +400,7 @@ func TestGenerateCorruptAccountTrie(t *testing.T) { triedb.Commit(common.HexToHash("0xa04693ea110a31037fb5ee814308a6f1d76bdab0b11676bdf4541d2de55ba978"), false, nil) diskdb.Delete(common.HexToHash("0x65145f923027566669a1ae5ccac66f945b55ff6eaeb17d2ea8e048b7d381f2d7").Bytes()) - snap := generateSnapshot(diskdb, triedb, 16, common.HexToHash("0xa04693ea110a31037fb5ee814308a6f1d76bdab0b11676bdf4541d2de55ba978"), nil) + snap := generateSnapshot(diskdb, triedb, 16, common.HexToHash("0xa04693ea110a31037fb5ee814308a6f1d76bdab0b11676bdf4541d2de55ba978")) select { case <-snap.genPending: // Snapshot generation succeeded @@ -115,7 +460,7 @@ func TestGenerateMissingStorageTrie(t *testing.T) { // Delete a storage trie root and ensure the generator chokes diskdb.Delete(common.HexToHash("0xddefcd9376dd029653ef384bd2f0a126bb755fe84fdcc9e7cf421ba454f2bc67").Bytes()) - snap := generateSnapshot(diskdb, triedb, 16, common.HexToHash("0xe3712f1a226f3782caca78ca770ccc19ee000552813a9f59d479f8611db9b1fd"), nil) + snap := generateSnapshot(diskdb, triedb, 16, common.HexToHash("0xe3712f1a226f3782caca78ca770ccc19ee000552813a9f59d479f8611db9b1fd")) select { case <-snap.genPending: // Snapshot generation succeeded @@ -174,7 +519,7 @@ func TestGenerateCorruptStorageTrie(t *testing.T) { // Delete a storage trie leaf and ensure the generator chokes diskdb.Delete(common.HexToHash("0x18a0f4d79cff4459642dd7604f303886ad9d77c30cf3d7d7cedb3a693ab6d371").Bytes()) - snap := generateSnapshot(diskdb, triedb, 16, common.HexToHash("0xe3712f1a226f3782caca78ca770ccc19ee000552813a9f59d479f8611db9b1fd"), nil) + snap := generateSnapshot(diskdb, triedb, 16, common.HexToHash("0xe3712f1a226f3782caca78ca770ccc19ee000552813a9f59d479f8611db9b1fd")) select { case <-snap.genPending: // Snapshot generation succeeded @@ -188,3 +533,301 @@ func TestGenerateCorruptStorageTrie(t 
*testing.T) { snap.genAbort <- stop <-stop } + +func getStorageTrie(n int, triedb *trie.Database) *trie.SecureTrie { + stTrie, _ := trie.NewSecure(common.Hash{}, triedb) + for i := 0; i < n; i++ { + k := fmt.Sprintf("key-%d", i) + v := fmt.Sprintf("val-%d", i) + stTrie.Update([]byte(k), []byte(v)) + } + stTrie.Commit(nil) + return stTrie +} + +// Tests snapshot generation when an extra account with storage exists in the snap state. +func TestGenerateWithExtraAccounts(t *testing.T) { + var ( + diskdb = memorydb.New() + triedb = trie.NewDatabase(diskdb) + stTrie = getStorageTrie(5, triedb) + ) + accTrie, _ := trie.NewSecure(common.Hash{}, triedb) + { // Account one in the trie + acc := &Account{Balance: big.NewInt(1), Root: stTrie.Hash().Bytes(), CodeHash: emptyCode.Bytes()} + val, _ := rlp.EncodeToBytes(acc) + accTrie.Update([]byte("acc-1"), val) // 0x9250573b9c18c664139f3b6a7a8081b7d8f8916a8fcc5d94feec6c29f5fd4e9e + // Identical in the snap + key := hashData([]byte("acc-1")) + rawdb.WriteAccountSnapshot(diskdb, key, val) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("key-1")), []byte("val-1")) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("key-2")), []byte("val-2")) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("key-3")), []byte("val-3")) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("key-4")), []byte("val-4")) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("key-5")), []byte("val-5")) + } + { // Account two exists only in the snapshot + acc := &Account{Balance: big.NewInt(1), Root: stTrie.Hash().Bytes(), CodeHash: emptyCode.Bytes()} + val, _ := rlp.EncodeToBytes(acc) + key := hashData([]byte("acc-2")) + rawdb.WriteAccountSnapshot(diskdb, key, val) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("b-key-1")), []byte("b-val-1")) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("b-key-2")), []byte("b-val-2")) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("b-key-3")), []byte("b-val-3")) + } + root, _ := accTrie.Commit(nil) + t.Logf("root: %x", root) + triedb.Commit(root, false, nil) + // To verify the test: If we now inspect the snap db, there should exist extraneous storage items + if data := rawdb.ReadStorageSnapshot(diskdb, hashData([]byte("acc-2")), hashData([]byte("b-key-1"))); data == nil { + t.Fatalf("expected snap storage to exist") + } + + snap := generateSnapshot(diskdb, triedb, 16, root) + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop + // If we now inspect the snap db, there should exist no extraneous storage items + if data := rawdb.ReadStorageSnapshot(diskdb, hashData([]byte("acc-2")), hashData([]byte("b-key-1"))); data != nil { + t.Fatalf("expected slot to be removed, got %v", string(data)) + } +} + +func enableLogging() { + log.Root().SetHandler(log.LvlFilterHandler(log.LvlTrace, log.StreamHandler(os.Stderr, log.TerminalFormat(true)))) +} + +// Tests snapshot generation when an extra account with storage exists in the snap state.
+func TestGenerateWithManyExtraAccounts(t *testing.T) { + if false { + enableLogging() + } + var ( + diskdb = memorydb.New() + triedb = trie.NewDatabase(diskdb) + stTrie = getStorageTrie(3, triedb) + ) + accTrie, _ := trie.NewSecure(common.Hash{}, triedb) + { // Account one in the trie + acc := &Account{Balance: big.NewInt(1), Root: stTrie.Hash().Bytes(), CodeHash: emptyCode.Bytes()} + val, _ := rlp.EncodeToBytes(acc) + accTrie.Update([]byte("acc-1"), val) // 0x9250573b9c18c664139f3b6a7a8081b7d8f8916a8fcc5d94feec6c29f5fd4e9e + // Identical in the snap + key := hashData([]byte("acc-1")) + rawdb.WriteAccountSnapshot(diskdb, key, val) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("key-1")), []byte("val-1")) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("key-2")), []byte("val-2")) + rawdb.WriteStorageSnapshot(diskdb, key, hashData([]byte("key-3")), []byte("val-3")) + } + { // 1000 accounts exist only in the snapshot + for i := 0; i < 1000; i++ { + //acc := &Account{Balance: big.NewInt(int64(i)), Root: stTrie.Hash().Bytes(), CodeHash: emptyCode.Bytes()} + acc := &Account{Balance: big.NewInt(int64(i)), Root: emptyRoot.Bytes(), CodeHash: emptyCode.Bytes()} + val, _ := rlp.EncodeToBytes(acc) + key := hashData([]byte(fmt.Sprintf("acc-%d", i))) + rawdb.WriteAccountSnapshot(diskdb, key, val) + } + } + root, _ := accTrie.Commit(nil) + t.Logf("root: %x", root) + triedb.Commit(root, false, nil) + + snap := generateSnapshot(diskdb, triedb, 16, root) + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop +} + +// Tests this case +// maxAccountRange 3 +// snapshot-accounts: 01, 02, 03, 04, 05, 06, 07 +// trie-accounts: 03, 07 +// +// We iterate three snapshot accounts (max = 3) from the database. They are 0x01, 0x02, 0x03. +// The trie has a lot of deletions. +// So in the trie, we iterate 2 entries: 0x03 and 0x07. We create 0x07 in the database and finish the procedure, because the trie is exhausted. +// But the database still contains the stale accounts 0x04, 0x05. They have not been iterated yet, but the procedure is finished.
+func TestGenerateWithExtraBeforeAndAfter(t *testing.T) { + accountCheckRange = 3 + if false { + enableLogging() + } + var ( + diskdb = memorydb.New() + triedb = trie.NewDatabase(diskdb) + ) + accTrie, _ := trie.New(common.Hash{}, triedb) + { + acc := &Account{Balance: big.NewInt(1), Root: emptyRoot.Bytes(), CodeHash: emptyCode.Bytes()} + val, _ := rlp.EncodeToBytes(acc) + accTrie.Update(common.HexToHash("0x03").Bytes(), val) + accTrie.Update(common.HexToHash("0x07").Bytes(), val) + + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x01"), val) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x02"), val) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x03"), val) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x04"), val) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x05"), val) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x06"), val) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x07"), val) + } + + root, _ := accTrie.Commit(nil) + t.Logf("root: %x", root) + triedb.Commit(root, false, nil) + + snap := generateSnapshot(diskdb, triedb, 16, root) + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop +} + +// TestGenerateWithMalformedSnapdata tests what happens if we have some junk +// in the snapshot database, which cannot be parsed back to an account +func TestGenerateWithMalformedSnapdata(t *testing.T) { + accountCheckRange = 3 + if false { + enableLogging() + } + var ( + diskdb = memorydb.New() + triedb = trie.NewDatabase(diskdb) + ) + accTrie, _ := trie.New(common.Hash{}, triedb) + { + acc := &Account{Balance: big.NewInt(1), Root: emptyRoot.Bytes(), CodeHash: emptyCode.Bytes()} + val, _ := rlp.EncodeToBytes(acc) + accTrie.Update(common.HexToHash("0x03").Bytes(), val) + + junk := make([]byte, 100) + copy(junk, []byte{0xde, 0xad}) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x02"), junk) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x03"), junk) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x04"), junk) + rawdb.WriteAccountSnapshot(diskdb, common.HexToHash("0x05"), junk) + } + + root, _ := accTrie.Commit(nil) + t.Logf("root: %x", root) + triedb.Commit(root, false, nil) + + snap := generateSnapshot(diskdb, triedb, 16, root) + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop + // If we now inspect the snap db, there should exist no extraneous storage items + if data := rawdb.ReadStorageSnapshot(diskdb, hashData([]byte("acc-2")), hashData([]byte("b-key-1"))); data != nil { + t.Fatalf("expected slot to be removed, got %v", string(data)) + } +} + +func TestGenerateFromEmptySnap(t *testing.T) { + //enableLogging() + accountCheckRange = 10 + storageCheckRange = 20 + helper := newHelper() + stRoot := helper.makeStorageTrie([]string{"key-1", "key-2", "key-3"}, []string{"val-1", "val-2", "val-3"}) + // Add 400 accounts to the trie + for i := 0; i < 400; i++ { + helper.addTrieAccount(fmt.Sprintf("acc-%d", i), + &Account{Balance: big.NewInt(1), Root: stRoot, CodeHash:
emptyCode.Bytes()}) + } + root, snap := helper.Generate() + t.Logf("Root: %#x\n", root) // Root: 0x6f7af6d2e1a1bf2b84a3beb3f8b64388465fbc1e274ca5d5d3fc787ca78f59e4 + + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(1 * time.Second): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop +} + +// Tests snapshot generation with existing flat state, where the flat state +// storage is correct but incomplete. +// The incomplete part is in the second range +// snap: [ 0x01, 0x02, 0x03, 0x04] , [ 0x05, 0x06, 0x07, {missing}] (with storageCheck = 4) +// trie: 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 +// This hits a case where the snap verification passes, but there are more elements in the trie +// which we must also add. +func TestGenerateWithIncompleteStorage(t *testing.T) { + storageCheckRange = 4 + helper := newHelper() + stKeys := []string{"1", "2", "3", "4", "5", "6", "7", "8"} + stVals := []string{"v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8"} + stRoot := helper.makeStorageTrie(stKeys, stVals) + // We add 8 accounts, each one is missing exactly one of the storage slots. This means + // we don't have to order the keys and figure out exactly which hash-key winds up + // on the sensitive spots at the boundaries + for i := 0; i < 8; i++ { + accKey := fmt.Sprintf("acc-%d", i) + helper.addAccount(accKey, &Account{Balance: big.NewInt(int64(i)), Root: stRoot, CodeHash: emptyCode.Bytes()}) + var moddedKeys []string + var moddedVals []string + for ii := 0; ii < 8; ii++ { + if ii != i { + moddedKeys = append(moddedKeys, stKeys[ii]) + moddedVals = append(moddedVals, stVals[ii]) + } + } + helper.addSnapStorage(accKey, moddedKeys, moddedVals) + } + + root, snap := helper.Generate() + t.Logf("Root: %#x\n", root) // Root: 0xca73f6f05ba4ca3024ef340ef3dfca8fdabc1b677ff13f5a9571fd49c16e67ff + + select { + case <-snap.genPending: + // Snapshot generation succeeded + + case <-time.After(250 * time.Millisecond): + t.Errorf("Snapshot generation failed") + } + checkSnapRoot(t, snap, root) + // Signal abortion to the generator and wait for it to tear down + stop := make(chan *generatorStats) + snap.genAbort <- stop + <-stop +} diff --git a/core/state/snapshot/journal.go b/core/state/snapshot/journal.go index d7e454cceb..5cfb9a9f2a 100644 --- a/core/state/snapshot/journal.go +++ b/core/state/snapshot/journal.go @@ -37,7 +37,10 @@ const journalVersion uint64 = 0 // journalGenerator is a disk layer entry containing the generator progress marker. type journalGenerator struct { - Wiping bool // Whether the database was in progress of being wiped + // Indicates whether the database was in the process of being wiped. + // It's deprecated but kept here for backward compatibility. + Wiping bool + Done bool // Whether the generator finished creating the snapshot Marker []byte Accounts uint64 @@ -63,30 +66,6 @@ type journalStorage struct { Vals [][]byte } -// loadAndParseLegacyJournal tries to parse the snapshot journal in legacy format. -func loadAndParseLegacyJournal(db ethdb.KeyValueStore, base *diskLayer) (snapshot, journalGenerator, error) { - // Retrieve the journal, for legacy journal it must exist since even for - // 0 layer it stores whether we've already generated the snapshot or are - // in progress only.
- journal := rawdb.ReadSnapshotJournal(db) - if len(journal) == 0 { - return nil, journalGenerator{}, errors.New("missing or corrupted snapshot journal") - } - r := rlp.NewStream(bytes.NewReader(journal), 0) - - // Read the snapshot generation progress for the disk layer - var generator journalGenerator - if err := r.Decode(&generator); err != nil { - return nil, journalGenerator{}, fmt.Errorf("failed to load snapshot progress marker: %v", err) - } - // Load all the snapshot diffs from the journal - snapshot, err := loadDiffLayer(base, r) - if err != nil { - return nil, generator, err - } - return snapshot, generator, nil -} - // loadAndParseJournal tries to parse the snapshot journal in latest format. func loadAndParseJournal(db ethdb.KeyValueStore, base *diskLayer) (snapshot, journalGenerator, error) { // Retrieve the disk layer generator. It must exist, no matter the @@ -147,12 +126,17 @@ func loadAndParseJournal(db ethdb.KeyValueStore, base *diskLayer) (snapshot, jou } // loadSnapshot loads a pre-existing state snapshot backed by a key-value store. -func loadSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, root common.Hash, recovery bool) (snapshot, error) { +func loadSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, root common.Hash, recovery bool) (snapshot, bool, error) { + // If snapshotting is disabled (initial sync in progress), don't do anything, + // wait for the chain to permit us to do something meaningful + if rawdb.ReadSnapshotDisabled(diskdb) { + return nil, true, nil + } // Retrieve the block number and hash of the snapshot, failing if no snapshot // is present in the database (or crashed mid-update). baseRoot := rawdb.ReadSnapshotRoot(diskdb) if baseRoot == (common.Hash{}) { - return nil, errors.New("missing or corrupted snapshot") + return nil, false, errors.New("missing or corrupted snapshot") } base := &diskLayer{ diskdb: diskdb, @@ -160,15 +144,10 @@ func loadSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, cache: fastcache.New(cache * 1024 * 1024), root: baseRoot, } - var legacy bool snapshot, generator, err := loadAndParseJournal(diskdb, base) if err != nil { log.Warn("Failed to load new-format journal", "error", err) - snapshot, generator, err = loadAndParseLegacyJournal(diskdb, base) - legacy = true - } - if err != nil { - return nil, err + return nil, false, err } // Entire snapshot journal loaded, sanity check the head. If the loaded // snapshot is not matched with current state root, print a warning log @@ -182,8 +161,8 @@ func loadSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, // If it's legacy snapshot, or it's new-format snapshot but // it's not in recovery mode, returns the error here for // rebuilding the entire snapshot forcibly. - if legacy || !recovery { - return nil, fmt.Errorf("head doesn't match snapshot: have %#x, want %#x", head, root) + if !recovery { + return nil, false, fmt.Errorf("head doesn't match snapshot: have %#x, want %#x", head, root) } // It's in snapshot recovery, the assumption is held that // the disk layer is always higher than chain head. It can @@ -193,14 +172,6 @@ func loadSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, } // Everything loaded correctly, resume any suspended operations if !generator.Done { - // If the generator was still wiping, restart one from scratch (fine for - // now as it's rare and the wiper deletes the stuff it touches anyway, so - // restarting won't incur a lot of extra database hops. 
- var wiper chan struct{} - if generator.Wiping { - log.Info("Resuming previous snapshot wipe") - wiper = wipeSnapshot(diskdb, false) - } // Whether or not wiping was in progress, load any generator progress too base.genMarker = generator.Marker if base.genMarker == nil { @@ -214,7 +185,6 @@ func loadSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, origin = binary.BigEndian.Uint64(generator.Marker) } go base.generate(&generatorStats{ - wiping: wiper, origin: origin, start: time.Now(), accounts: generator.Accounts, @@ -222,7 +192,7 @@ func loadSnapshot(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, storage: common.StorageSize(generator.Storage), }) } - return snapshot, nil + return snapshot, false, nil } // loadDiffLayer reads the next sections of a snapshot journal, reconstructing a new @@ -352,95 +322,3 @@ func (dl *diffLayer) Journal(buffer *bytes.Buffer) (common.Hash, error) { log.Debug("Journalled diff layer", "root", dl.root, "parent", dl.parent.Root()) return base, nil } - -// LegacyJournal writes the persistent layer generator stats into a buffer -// to be stored in the database as the snapshot journal. -// -// Note it's the legacy version which is only used in testing right now. -func (dl *diskLayer) LegacyJournal(buffer *bytes.Buffer) (common.Hash, error) { - // If the snapshot is currently being generated, abort it - var stats *generatorStats - if dl.genAbort != nil { - abort := make(chan *generatorStats) - dl.genAbort <- abort - - if stats = <-abort; stats != nil { - stats.Log("Journalling in-progress snapshot", dl.root, dl.genMarker) - } - } - // Ensure the layer didn't get stale - dl.lock.RLock() - defer dl.lock.RUnlock() - - if dl.stale { - return common.Hash{}, ErrSnapshotStale - } - // Write out the generator marker - entry := journalGenerator{ - Done: dl.genMarker == nil, - Marker: dl.genMarker, - } - if stats != nil { - entry.Wiping = (stats.wiping != nil) - entry.Accounts = stats.accounts - entry.Slots = stats.slots - entry.Storage = uint64(stats.storage) - } - log.Debug("Legacy journalled disk layer", "root", dl.root) - if err := rlp.Encode(buffer, entry); err != nil { - return common.Hash{}, err - } - return dl.root, nil -} - -// Journal writes the memory layer contents into a buffer to be stored in the -// database as the snapshot journal. -// -// Note it's the legacy version which is only used in testing right now. 
-func (dl *diffLayer) LegacyJournal(buffer *bytes.Buffer) (common.Hash, error) { - // Journal the parent first - base, err := dl.parent.LegacyJournal(buffer) - if err != nil { - return common.Hash{}, err - } - // Ensure the layer didn't get stale - dl.lock.RLock() - defer dl.lock.RUnlock() - - if dl.Stale() { - return common.Hash{}, ErrSnapshotStale - } - // Everything below was journalled, persist this layer too - if err := rlp.Encode(buffer, dl.root); err != nil { - return common.Hash{}, err - } - destructs := make([]journalDestruct, 0, len(dl.destructSet)) - for hash := range dl.destructSet { - destructs = append(destructs, journalDestruct{Hash: hash}) - } - if err := rlp.Encode(buffer, destructs); err != nil { - return common.Hash{}, err - } - accounts := make([]journalAccount, 0, len(dl.accountData)) - for hash, blob := range dl.accountData { - accounts = append(accounts, journalAccount{Hash: hash, Blob: blob}) - } - if err := rlp.Encode(buffer, accounts); err != nil { - return common.Hash{}, err - } - storage := make([]journalStorage, 0, len(dl.storageData)) - for hash, slots := range dl.storageData { - keys := make([]common.Hash, 0, len(slots)) - vals := make([][]byte, 0, len(slots)) - for key, val := range slots { - keys = append(keys, key) - vals = append(vals, val) - } - storage = append(storage, journalStorage{Hash: hash, Keys: keys, Vals: vals}) - } - if err := rlp.Encode(buffer, storage); err != nil { - return common.Hash{}, err - } - log.Debug("Legacy journalled diff layer", "root", dl.root, "parent", dl.parent.Root()) - return base, nil -} diff --git a/core/state/snapshot/snapshot.go b/core/state/snapshot/snapshot.go index eccf377264..cb8ec7a707 100644 --- a/core/state/snapshot/snapshot.go +++ b/core/state/snapshot/snapshot.go @@ -137,10 +137,6 @@ type snapshot interface { // flattening everything down (bad for reorgs). Journal(buffer *bytes.Buffer) (common.Hash, error) - // LegacyJournal is basically identical to Journal. it's the legacy version for - // flushing legacy journal. Now the only purpose of this function is for testing. - LegacyJournal(buffer *bytes.Buffer) (common.Hash, error) - // Stale return whether this layer has become stale (was flattened across) or // if it's still live. Stale() bool @@ -152,11 +148,11 @@ type snapshot interface { StorageIterator(account common.Hash, seek common.Hash) (StorageIterator, bool) } -// SnapshotTree is an Ethereum state snapshot tree. It consists of one persistent -// base layer backed by a key-value store, on top of which arbitrarily many in- -// memory diff layers are topped. The memory diffs can form a tree with branching, -// but the disk layer is singleton and common to all. If a reorg goes deeper than -// the disk layer, everything needs to be deleted. +// Tree is an Ethereum state snapshot tree. It consists of one persistent base +// layer backed by a key-value store, on top of which arbitrarily many in-memory +// diff layers are topped. The memory diffs can form a tree with branching, but +// the disk layer is singleton and common to all. If a reorg goes deeper than the +// disk layer, everything needs to be deleted. 
// // The goal of a state snapshot is twofold: to allow direct access to account and // storage data to avoid expensive multi-level trie lookups; and to allow sorted, @@ -190,7 +186,11 @@ func New(diskdb ethdb.KeyValueStore, triedb *trie.Database, cache int, root comm defer snap.waitBuild() } // Attempt to load a previously persisted snapshot and rebuild one if failed - head, err := loadSnapshot(diskdb, triedb, cache, root, recovery) + head, disabled, err := loadSnapshot(diskdb, triedb, cache, root, recovery) + if disabled { + log.Warn("Snapshot maintenance disabled (syncing)") + return snap, nil + } if err != nil { if rebuild { log.Warn("Failed to load snapshot, regenerating", "err", err) @@ -228,6 +228,55 @@ func (t *Tree) waitBuild() { } } +// Disable interrupts any pending snapshot generator, deletes all the snapshot +// layers in memory and marks snapshots disabled globally. In order to resume +// the snapshot functionality, the caller must invoke Rebuild. +func (t *Tree) Disable() { + // Interrupt any live snapshot layers + t.lock.Lock() + defer t.lock.Unlock() + + for _, layer := range t.layers { + switch layer := layer.(type) { + case *diskLayer: + // If the base layer is generating, abort it + if layer.genAbort != nil { + abort := make(chan *generatorStats) + layer.genAbort <- abort + <-abort + } + // Layer should be inactive now, mark it as stale + layer.lock.Lock() + layer.stale = true + layer.lock.Unlock() + + case *diffLayer: + // If the layer is a simple diff, simply mark as stale + layer.lock.Lock() + atomic.StoreUint32(&layer.stale, 1) + layer.lock.Unlock() + + default: + panic(fmt.Sprintf("unknown layer type: %T", layer)) + } + } + t.layers = map[common.Hash]snapshot{} + + // Delete all snapshot liveness information from the database + batch := t.diskdb.NewBatch() + + rawdb.WriteSnapshotDisabled(batch) + rawdb.DeleteSnapshotRoot(batch) + rawdb.DeleteSnapshotJournal(batch) + rawdb.DeleteSnapshotGenerator(batch) + rawdb.DeleteSnapshotRecoveryNumber(batch) + // Note, we don't delete the sync progress + + if err := batch.Write(); err != nil { + log.Crit("Failed to disable snapshots", "err", err) + } +} + // Snapshot retrieves a snapshot belonging to the given block root, or nil if no // snapshot is maintained for that block. func (t *Tree) Snapshot(blockRoot common.Hash) Snapshot { @@ -622,29 +671,6 @@ func (t *Tree) Journal(root common.Hash) (common.Hash, error) { return base, nil } -// LegacyJournal is basically identical to Journal. it's the legacy -// version for flushing legacy journal. Now the only purpose of this -// function is for testing. -func (t *Tree) LegacyJournal(root common.Hash) (common.Hash, error) { - // Retrieve the head snapshot to journal from var snap snapshot - snap := t.Snapshot(root) - if snap == nil { - return common.Hash{}, fmt.Errorf("snapshot [%#x] missing", root) - } - // Run the journaling - t.lock.Lock() - defer t.lock.Unlock() - - journal := new(bytes.Buffer) - base, err := snap.(snapshot).LegacyJournal(journal) - if err != nil { - return common.Hash{}, err - } - // Store the journal into the database and return - rawdb.WriteSnapshotJournal(t.diskdb, journal.Bytes()) - return base, nil -} - // Rebuild wipes all available snapshot data from the persistent database and // discard all caches and diff layers. Afterwards, it starts a new snapshot // generator with the given root hash. @@ -653,11 +679,9 @@ func (t *Tree) Rebuild(root common.Hash) { defer t.lock.Unlock() // Firstly delete any recovery flag in the database. 
Because now we are - building a brand new snapshot. + building a brand new snapshot. Also re-enable the snapshot feature. rawdb.DeleteSnapshotRecoveryNumber(t.diskdb) - - // Track whether there's a wipe currently running and keep it alive if so - var wiper chan struct{} + rawdb.DeleteSnapshotDisabled(t.diskdb) // Iterate over and mark all layers stale for _, layer := range t.layers { @@ -667,10 +691,7 @@ func (t *Tree) Rebuild(root common.Hash) { if layer.genAbort != nil { abort := make(chan *generatorStats) layer.genAbort <- abort - - if stats := <-abort; stats != nil { - wiper = stats.wiping - } + <-abort } // Layer should be inactive now, mark it as stale layer.lock.Lock() @@ -691,7 +712,7 @@ func (t *Tree) Rebuild(root common.Hash) { // generator will run a wiper first if there's not one running right now. log.Info("Rebuilding state snapshot") t.layers = map[common.Hash]snapshot{ - root: generateSnapshot(t.diskdb, t.triedb, t.cache, root, wiper), + root: generateSnapshot(t.diskdb, t.triedb, t.cache, root), } } diff --git a/core/state/snapshot/wipe.go b/core/state/snapshot/wipe.go index 14b63031a5..2cab57393b 100644 --- a/core/state/snapshot/wipe.go +++ b/core/state/snapshot/wipe.go @@ -24,10 +24,11 @@ import ( "github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/log" + "github.com/ethereum/go-ethereum/metrics" ) // wipeSnapshot starts a goroutine to iterate over the entire key-value database -// and delete all the data associated with the snapshot (accounts, storage, +// and delete all the data associated with the snapshot (accounts, storage, // metadata). After all is done, the snapshot range of the database is compacted // to free up unused data blocks. func wipeSnapshot(db ethdb.KeyValueStore, full bool) chan struct{} { @@ -53,10 +54,10 @@ func wipeSnapshot(db ethdb.KeyValueStore, full bool) chan struct{} { // removed in sync to avoid data races. After all is done, the snapshot range of // the database is compacted to free up unused data blocks. func wipeContent(db ethdb.KeyValueStore) error { - if err := wipeKeyRange(db, "accounts", rawdb.SnapshotAccountPrefix, len(rawdb.SnapshotAccountPrefix)+common.HashLength); err != nil { + if err := wipeKeyRange(db, "accounts", rawdb.SnapshotAccountPrefix, nil, nil, len(rawdb.SnapshotAccountPrefix)+common.HashLength, snapWipedAccountMeter, true); err != nil { return err } - if err := wipeKeyRange(db, "storage", rawdb.SnapshotStoragePrefix, len(rawdb.SnapshotStoragePrefix)+2*common.HashLength); err != nil { + if err := wipeKeyRange(db, "storage", rawdb.SnapshotStoragePrefix, nil, nil, len(rawdb.SnapshotStoragePrefix)+2*common.HashLength, snapWipedStorageMeter, true); err != nil { return err } // Compact the snapshot section of the database to get rid of unused space @@ -82,8 +83,11 @@ func wipeContent(db ethdb.KeyValueStore) error { } // wipeKeyRange deletes a range of keys from the database starting with prefix -// and having a specific total key length. +// and having a specific total key length. The origin and limit are optional for +// specifying a particular key range for deletion. +// +// Origin is included in the wipe and limit is excluded, if they are specified.
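In other words, the new origin/limit convention is half-open: a key is wiped when origin <= key < limit, and a nil bound leaves that side of the range open. A minimal illustrative sketch of that predicate using plain byte comparison, independent of the ethdb iterator the real function drives:

    package main

    import (
        "bytes"
        "fmt"
    )

    // inWipeRange applies the documented convention: origin is included, limit
    // is excluded, and a nil bound leaves that side of the range unbounded.
    func inWipeRange(key, origin, limit []byte) bool {
        if origin != nil && bytes.Compare(key, origin) < 0 {
            return false
        }
        if limit != nil && bytes.Compare(key, limit) >= 0 {
            return false
        }
        return true
    }

    func main() {
        fmt.Println(inWipeRange([]byte{0x02}, []byte{0x02}, []byte{0x04})) // true: origin is inclusive
        fmt.Println(inWipeRange([]byte{0x04}, []byte{0x02}, []byte{0x04})) // false: limit is exclusive
    }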
+func wipeKeyRange(db ethdb.KeyValueStore, kind string, prefix []byte, origin []byte, limit []byte, keylen int, meter metrics.Meter, report bool) error { // Batch deletions together to avoid holding an iterator for too long var ( batch = db.NewBatch() @@ -92,7 +96,11 @@ func wipeKeyRange(db ethdb.KeyValueStore, kind string, prefix []byte, keylen int // Iterate over the key-range and delete all of them start, logged := time.Now(), time.Now() - it := db.NewIterator(prefix, nil) + it := db.NewIterator(prefix, origin) + var stop []byte + if limit != nil { + stop = append(prefix, limit...) + } for it.Next() { // Skip any keys with the correct prefix but wrong length (trie nodes) key := it.Key() @@ -102,6 +110,9 @@ func wipeKeyRange(db ethdb.KeyValueStore, kind string, prefix []byte, keylen int if len(key) != keylen { continue } + if stop != nil && bytes.Compare(key, stop) >= 0 { + break + } // Delete the key and periodically recreate the batch and iterator batch.Delete(key) items++ @@ -116,7 +127,7 @@ func wipeKeyRange(db ethdb.KeyValueStore, kind string, prefix []byte, keylen int seekPos := key[len(prefix):] it = db.NewIterator(prefix, seekPos) - if time.Since(logged) > 8*time.Second { + if time.Since(logged) > 8*time.Second && report { log.Info("Deleting state snapshot leftovers", "kind", kind, "wiped", items, "elapsed", common.PrettyDuration(time.Since(start))) logged = time.Now() } @@ -126,6 +137,11 @@ func wipeKeyRange(db ethdb.KeyValueStore, kind string, prefix []byte, keylen int if err := batch.Write(); err != nil { return err } - log.Info("Deleted state snapshot leftovers", "kind", kind, "wiped", items, "elapsed", common.PrettyDuration(time.Since(start))) + if meter != nil { + meter.Mark(int64(items)) + } + if report { + log.Info("Deleted state snapshot leftovers", "kind", kind, "wiped", items, "elapsed", common.PrettyDuration(time.Since(start))) + } return nil } diff --git a/core/state/state_test.go b/core/state/state_test.go index 9bd8730e36..487319b618 100644 --- a/core/state/state_test.go +++ b/core/state/state_test.go @@ -30,8 +30,6 @@ import ( checker "gopkg.in/check.v1" ) -var toAddr = common.BytesToAddress - type stateTest struct { db ethdb.Database state *StateDB @@ -49,11 +47,11 @@ func TestDump(t *testing.T) { s := &stateTest{db: db, state: sdb} // generate a few entries - obj1 := s.state.GetOrNewStateObject(toAddr([]byte{0x01})) + obj1 := s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x01})) obj1.AddBalance(big.NewInt(22)) - obj2 := s.state.GetOrNewStateObject(toAddr([]byte{0x01, 0x02})) + obj2 := s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x01, 0x02})) obj2.SetCode(crypto.Keccak256Hash([]byte{3, 3, 3, 3, 3, 3, 3}), []byte{3, 3, 3, 3, 3, 3, 3}) - obj3 := s.state.GetOrNewStateObject(toAddr([]byte{0x02})) + obj3 := s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x02})) obj3.SetBalance(big.NewInt(44)) // write some of them to the trie @@ -94,11 +92,11 @@ func TestDump(t *testing.T) { func (s *stateTest) TestDumpAddress(c *checker.C) { // generate a few entries - obj1 := s.state.GetOrNewStateObject(toAddr([]byte{0x01})) + obj1 := s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x01})) obj1.AddBalance(big.NewInt(22)) - obj2 := s.state.GetOrNewStateObject(toAddr([]byte{0x01, 0x02})) + obj2 := s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x01, 0x02})) obj2.SetCode(crypto.Keccak256Hash([]byte{3, 3, 3, 3, 3, 3, 3}), []byte{3, 3, 3, 3, 3, 3, 3}) - obj3 := s.state.GetOrNewStateObject(toAddr([]byte{0x02})) + obj3 := 
s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x02})) obj3.SetBalance(big.NewInt(44)) // write some of them to the trie @@ -106,7 +104,7 @@ func (s *stateTest) TestDumpAddress(c *checker.C) { s.state.updateStateObject(obj2) s.state.Commit(false) - addressToDump := toAddr([]byte{0x01}) + addressToDump := common.BytesToAddress([]byte{0x01}) // check that dump contains the state objects that are in trie got, _ := s.state.DumpAddress(addressToDump) out, _ := json.Marshal(got) @@ -120,11 +118,11 @@ func (s *stateTest) TestDumpAddress(c *checker.C) { func (s *stateTest) TestDumpAddressNotFound(c *checker.C) { // generate a few entries - obj1 := s.state.GetOrNewStateObject(toAddr([]byte{0x01})) + obj1 := s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x01})) obj1.AddBalance(big.NewInt(22)) - obj2 := s.state.GetOrNewStateObject(toAddr([]byte{0x01, 0x02})) + obj2 := s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x01, 0x02})) obj2.SetCode(crypto.Keccak256Hash([]byte{3, 3, 3, 3, 3, 3, 3}), []byte{3, 3, 3, 3, 3, 3, 3}) - obj3 := s.state.GetOrNewStateObject(toAddr([]byte{0x02})) + obj3 := s.state.GetOrNewStateObject(common.BytesToAddress([]byte{0x02})) obj3.SetBalance(big.NewInt(44)) // write some of them to the trie @@ -132,7 +130,7 @@ func (s *stateTest) TestDumpAddressNotFound(c *checker.C) { s.state.updateStateObject(obj2) s.state.Commit(false) - addressToDump := toAddr([]byte{0x09}) + addressToDump := common.BytesToAddress([]byte{0x09}) // check that dump contains the state objects that are in trie _, found := s.state.DumpAddress(addressToDump) @@ -166,7 +164,7 @@ func TestNull(t *testing.T) { } func TestSnapshot(t *testing.T) { - stateobjaddr := toAddr([]byte("aa")) + stateobjaddr := common.BytesToAddress([]byte("aa")) var storageaddr common.Hash data1 := common.BytesToHash([]byte{42}) data2 := common.BytesToHash([]byte{43}) @@ -208,8 +206,8 @@ func TestSnapshotEmpty(t *testing.T) { func TestSnapshot2(t *testing.T) { state, _ := New(common.Hash{}, NewDatabase(rawdb.NewMemoryDatabase()), nil) - stateobjaddr0 := toAddr([]byte("so0")) - stateobjaddr1 := toAddr([]byte("so1")) + stateobjaddr0 := common.BytesToAddress([]byte("so0")) + stateobjaddr1 := common.BytesToAddress([]byte("so1")) var storageaddr common.Hash data0 := common.BytesToHash([]byte{17}) diff --git a/core/state/statedb.go b/core/state/statedb.go index 5f51096ffa..b3ed43419f 100644 --- a/core/state/statedb.go +++ b/core/state/statedb.go @@ -1127,7 +1127,7 @@ func (s *StateDB) Commit(deleteEmptyObjects bool) (common.Hash, error) { // The onleaf func is called _serially_, so we can reuse the same account // for unmarshalling every time. 
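// The two ignored parameters in the callback below reflect trie.Commit's new
// leaf-callback shape, func(paths [][]byte, hexpath []byte, leaf []byte,
// parent common.Hash) error, matching the onAccount/onSlot callbacks in
// core/state/sync.go further down: decoding the account only needs the leaf
// blob, so the path arguments are discarded.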
var account Account - root, err := s.trie.Commit(func(path []byte, leaf []byte, parent common.Hash) error { + root, err := s.trie.Commit(func(_ [][]byte, _ []byte, leaf []byte, parent common.Hash) error { if err := rlp.DecodeBytes(leaf, &account); err != nil { return nil } diff --git a/core/state/statedb_test.go b/core/state/statedb_test.go index 159d0dd90a..5137c8076f 100644 --- a/core/state/statedb_test.go +++ b/core/state/statedb_test.go @@ -692,7 +692,7 @@ func TestDeleteCreateRevert(t *testing.T) { // Create an initial state with a single contract state, _ := New(common.Hash{}, NewDatabase(rawdb.NewMemoryDatabase()), nil) - addr := toAddr([]byte("so")) + addr := common.BytesToAddress([]byte("so")) state.SetBalance(addr, big.NewInt(1)) root, _ := state.Commit(false) @@ -725,11 +725,11 @@ func TestMissingTrieNodes(t *testing.T) { db := NewDatabase(memDb) var root common.Hash state, _ := New(common.Hash{}, db, nil) - addr := toAddr([]byte("so")) + addr := common.BytesToAddress([]byte("so")) { state.SetBalance(addr, big.NewInt(1)) state.SetCode(addr, []byte{1, 2, 3}) - a2 := toAddr([]byte("another")) + a2 := common.BytesToAddress([]byte("another")) state.SetBalance(a2, big.NewInt(100)) state.SetCode(a2, []byte{1, 2, 4}) root, _ = state.Commit(false) diff --git a/core/state/sync.go b/core/state/sync.go index 1018b78e5e..7a5852fb19 100644 --- a/core/state/sync.go +++ b/core/state/sync.go @@ -26,17 +26,31 @@ import ( ) // NewStateSync creates a new state trie download scheduler. -func NewStateSync(root common.Hash, database ethdb.KeyValueReader, bloom *trie.SyncBloom) *trie.Sync { +func NewStateSync(root common.Hash, database ethdb.KeyValueReader, bloom *trie.SyncBloom, onLeaf func(paths [][]byte, leaf []byte) error) *trie.Sync { + // Register the storage slot callback if the external callback is specified. + var onSlot func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error + if onLeaf != nil { + onSlot = func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error { + return onLeaf(paths, leaf) + } + } + // Register the account callback to connect the state trie and the storage + // tries that belong to the contract. var syncer *trie.Sync - callback := func(path []byte, leaf []byte, parent common.Hash) error { + onAccount := func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error { + if onLeaf != nil { + if err := onLeaf(paths, leaf); err != nil { + return err + } + } var obj Account if err := rlp.Decode(bytes.NewReader(leaf), &obj); err != nil { return err } - syncer.AddSubTrie(obj.Root, path, parent, nil) - syncer.AddCodeEntry(common.BytesToHash(obj.CodeHash), path, parent) + syncer.AddSubTrie(obj.Root, hexpath, parent, onSlot) + syncer.AddCodeEntry(common.BytesToHash(obj.CodeHash), hexpath, parent) return nil } - syncer = trie.NewSync(root, database, callback, bloom) + syncer = trie.NewSync(root, database, onAccount, bloom) return syncer } diff --git a/core/state/sync_test.go b/core/state/sync_test.go index 9c4867093d..a13fcf56a3 100644 --- a/core/state/sync_test.go +++ b/core/state/sync_test.go @@ -133,7 +133,7 @@ func checkStateConsistency(db ethdb.Database, root common.Hash) error { // Tests that an empty state is not scheduled for syncing.
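// A minimal sketch of the new hook (assumed usage, mirroring the signature
// introduced in core/state/sync.go above):
//
//	sched := NewStateSync(root, db, bloom, func(paths [][]byte, leaf []byte) error {
//		// invoked for every account and storage leaf as it is downloaded
//		return nil
//	})
//
// The tests below pass nil instead, which keeps the previous behaviour.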
func TestEmptyStateSync(t *testing.T) { empty := common.HexToHash("56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421") - sync := NewStateSync(empty, rawdb.NewMemoryDatabase(), trie.NewSyncBloom(1, memorydb.New())) + sync := NewStateSync(empty, rawdb.NewMemoryDatabase(), trie.NewSyncBloom(1, memorydb.New()), nil) if nodes, paths, codes := sync.Missing(1); len(nodes) != 0 || len(paths) != 0 || len(codes) != 0 { t.Errorf(" content requested for empty state: %v, %v, %v", nodes, paths, codes) } @@ -170,7 +170,7 @@ func testIterativeStateSync(t *testing.T, count int, commit bool, bypath bool) { // Create a destination state and sync with the scheduler dstDb := rawdb.NewMemoryDatabase() - sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb)) + sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) nodes, paths, codes := sched.Missing(count) var ( @@ -249,7 +249,7 @@ func TestIterativeDelayedStateSync(t *testing.T) { // Create a destination state and sync with the scheduler dstDb := rawdb.NewMemoryDatabase() - sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb)) + sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) nodes, _, codes := sched.Missing(0) queue := append(append([]common.Hash{}, nodes...), codes...) @@ -297,7 +297,7 @@ func testIterativeRandomStateSync(t *testing.T, count int) { // Create a destination state and sync with the scheduler dstDb := rawdb.NewMemoryDatabase() - sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb)) + sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) queue := make(map[common.Hash]struct{}) nodes, _, codes := sched.Missing(count) @@ -347,7 +347,7 @@ func TestIterativeRandomDelayedStateSync(t *testing.T) { // Create a destination state and sync with the scheduler dstDb := rawdb.NewMemoryDatabase() - sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb)) + sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) queue := make(map[common.Hash]struct{}) nodes, _, codes := sched.Missing(0) @@ -414,7 +414,7 @@ func TestIncompleteStateSync(t *testing.T) { // Create a destination state and sync with the scheduler dstDb := rawdb.NewMemoryDatabase() - sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb)) + sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) var added []common.Hash diff --git a/core/types/block.go b/core/types/block.go index 5cbf59fb29..d613cfc57e 100644 --- a/core/types/block.go +++ b/core/types/block.go @@ -183,19 +183,6 @@ func (b *Block) String() string { return fmt.Sprintf("{Header: %v}", b.header) } -// DeprecatedTd is an old relic for extracting the TD of a block. It is in the -// code solely to facilitate upgrading the database from the old format to the -// new, after which it should be deleted. Do not use! -func (b *Block) DeprecatedTd() *big.Int { - return b.td -} - -// [deprecated by eth/63] -// StorageBlock defines the RLP encoding of a Block stored in the -// state database. The StorageBlock encoding contains fields that -// would otherwise need to be recomputed. -type StorageBlock Block - // "external" block encoding. used for eth protocol, etc. type extblock struct { Header *Header @@ -203,15 +190,6 @@ type extblock struct { Uncles []*Header } -// [deprecated by eth/63] -// "storage" block encoding. used for database. -type storageblock struct { - Header *Header - Txs []*Transaction - Uncles []*Header - TD *big.Int -} - // NewBlock creates a new block. 
The input data is copied, // changes to header and to the field values will not affect the // block. @@ -296,16 +274,6 @@ func (b *Block) EncodeRLP(w io.Writer) error { }) } -// [deprecated by eth/63] -func (b *StorageBlock) DecodeRLP(s *rlp.Stream) error { - var sb storageblock - if err := s.Decode(&sb); err != nil { - return err - } - b.header, b.uncles, b.transactions, b.td = sb.Header, sb.Uncles, sb.Txs, sb.TD - return nil -} - // TODO: copies func (b *Block) Uncles() []*Header { return b.uncles } diff --git a/core/types/receipt_test.go b/core/types/receipt_test.go index 968843e7a6..c2dbcf5eca 100644 --- a/core/types/receipt_test.go +++ b/core/types/receipt_test.go @@ -560,11 +560,6 @@ func testReceiptFields(t *testing.T, receipt *Receipt, txs Transactions, txIndex if receipt.Logs[j].TxIndex != uint(txIndex) { t.Errorf("%s.Logs[%d].TransactionIndex = %d, want %d", receiptName, j, receipt.Logs[j].TxIndex, txIndex) } - // TODO: @achraf have a look how to include this - //if receipts[i].Logs[j].Index != logIndex { - // t.Errorf("receipts[%d].Logs[%d].Index = %d, want %d", i, j, receipts[i].Logs[j].Index, logIndex) - //} - //logIndex++ } } @@ -594,6 +589,20 @@ func TestTypedReceiptEncodingDecoding(t *testing.T) { } check(bundle) } + { + var bundle []*Receipt + rlp.DecodeBytes(payload, &bundle) + check(bundle) + } + { + var bundle []*Receipt + r := bytes.NewReader(payload) + s := rlp.NewStream(r, uint64(len(payload))) + if err := s.Decode(&bundle); err != nil { + t.Fatal(err) + } + check(bundle) + } } func clearComputedFieldsOnReceipts(t *testing.T, receipts Receipts) { diff --git a/core/types/transaction.go b/core/types/transaction.go index 85deb27c65..fdd24fc729 100644 --- a/core/types/transaction.go +++ b/core/types/transaction.go @@ -489,15 +489,16 @@ func (t *TransactionsByPriceAndNonce) Pop() { // // NOTE: In a future PR this will be removed. type Message struct { - to *common.Address - from common.Address - nonce uint64 - amount *big.Int - gasLimit uint64 - gasPrice *big.Int - data []byte - accessList AccessList - checkNonce bool + to *common.Address + from common.Address + nonce uint64 + amount *big.Int + gasLimit uint64 + gasPrice *big.Int + data []byte + accessList AccessList + checkNonce bool + // Quorum isPrivate bool isInnerPrivate bool } diff --git a/core/types/transaction_marshalling.go b/core/types/transaction_marshalling.go index 184a17d5b5..e561485556 100644 --- a/core/types/transaction_marshalling.go +++ b/core/types/transaction_marshalling.go @@ -1,3 +1,19 @@ +// Copyright 2021 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see . 
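The two new blocks in TestTypedReceiptEncodingDecoding above exercise list decoding of typed receipts both via rlp.DecodeBytes and via an explicit rlp.Stream. A self-contained sketch of the same round trip (package and function names are illustrative; payload is assumed to be an RLP-encoded receipt list):

package receipts

import (
	"bytes"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// decodeReceiptsBothWays mirrors the two decoding paths added to the test.
func decodeReceiptsBothWays(payload []byte) ([]*types.Receipt, []*types.Receipt, error) {
	// Path 1: one-shot decoding of the whole list.
	var viaBytes []*types.Receipt
	if err := rlp.DecodeBytes(payload, &viaBytes); err != nil {
		return nil, nil, err
	}
	// Path 2: streaming decoder with an explicit input limit.
	var viaStream []*types.Receipt
	s := rlp.NewStream(bytes.NewReader(payload), uint64(len(payload)))
	if err := s.Decode(&viaStream); err != nil {
		return nil, nil, err
	}
	return viaBytes, viaStream, nil
}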
+ package types import ( diff --git a/core/vm/errors.go b/core/vm/errors.go index a58b236248..3b185d131a 100644 --- a/core/vm/errors.go +++ b/core/vm/errors.go @@ -23,9 +23,6 @@ import ( // List evm execution errors var ( - // ErrInvalidSubroutineEntry means that a BEGINSUB was reached via iteration, - // as opposed to from a JUMPSUB instruction - ErrInvalidSubroutineEntry = errors.New("invalid subroutine entry") ErrOutOfGas = errors.New("out of gas") ErrCodeStoreOutOfGas = errors.New("contract creation code storage out of gas") ErrDepth = errors.New("max call depth exceeded") @@ -37,11 +34,8 @@ var ( ErrWriteProtection = errors.New("write protection") ErrReturnDataOutOfBounds = errors.New("return data out of bounds") ErrGasUintOverflow = errors.New("gas uint64 overflow") - ErrInvalidRetsub = errors.New("invalid retsub") - ErrReturnStackExceeded = errors.New("return stack limit reached") - - ErrReadOnlyValueTransfer = errors.New("VM in read-only mode. Value transfer prohibited.") - ErrNoCompatibleInterpreter = errors.New("no compatible interpreter") + ErrReadOnlyValueTransfer = errors.New("VM in read-only mode. Value transfer prohibited.") + ErrNoCompatibleInterpreter = errors.New("no compatible interpreter") ) // ErrStackUnderflow wraps an evm error when the items on the stack less diff --git a/core/vm/evm.go b/core/vm/evm.go index 6779279709..ffdd9043b5 100644 --- a/core/vm/evm.go +++ b/core/vm/evm.go @@ -627,13 +627,20 @@ func (evm *EVM) create(caller ContractRef, codeAndHash *codeAndHash, gas uint64, ret, err := run(evm, contract, nil, false) maxCodeSize := evm.ChainConfig().GetMaxCodeSize(evm.Context.BlockNumber) - // check whether the max code size has been exceeded, check maxcode size from chain config - maxCodeSizeExceeded := evm.chainRules.IsEIP158 && len(ret) > maxCodeSize + if maxCodeSize < params.MaxCodeSize { + maxCodeSize = params.MaxCodeSize + } + + // Check whether the max code size has been exceeded, assign err if the case. + if err == nil && evm.chainRules.IsEIP158 && len(ret) > maxCodeSize { + err = ErrMaxCodeSizeExceeded + } + // if the contract creation ran successfully and no errors were returned // calculate the gas required to store the code. If the code could not // be stored due to not enough gas set an error and let it be handled // by the error checking condition below. - if err == nil && !maxCodeSizeExceeded { + if err == nil { createDataGas := uint64(len(ret)) * params.CreateDataGas if contract.UseGas(createDataGas) { evm.StateDB.SetCode(address, ret) @@ -645,21 +652,17 @@ func (evm *EVM) create(caller ContractRef, codeAndHash *codeAndHash, gas uint64, // When an error was returned by the EVM or when setting the creation code // above we revert to the snapshot and consume any gas remaining. Additionally // when we're in homestead this also counts for code storage gas errors. - if maxCodeSizeExceeded || (err != nil && (evm.chainRules.IsHomestead || err != ErrCodeStoreOutOfGas)) { + if err != nil && (evm.chainRules.IsHomestead || err != ErrCodeStoreOutOfGas) { evm.StateDB.RevertToSnapshot(snapshot) if err != ErrExecutionReverted { contract.UseGas(contract.Gas) } } - // Assign err if contract code size exceeds the max while the err is still empty. - if maxCodeSizeExceeded && err == nil { - err = ErrMaxCodeSizeExceeded - } + if evm.vmConfig.Debug && evm.depth == 0 { evm.vmConfig.Tracer.CaptureEnd(ret, gas-contract.Gas, time.Since(start), err) } return ret, address, contract.Gas, err - } // Create creates a new contract using code as deployment code. 
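The restructured tail of create above folds the old maxCodeSizeExceeded flag into the single err variable. A condensed, self-contained sketch of the resulting decision logic (function and error names are illustrative, not from the patch):

package evmsketch

import "errors"

var (
	errMaxCodeSizeExceeded = errors.New("max code size exceeded")
	errCodeStoreOutOfGas   = errors.New("contract creation code storage out of gas")
)

// finalizeCreate condenses the reordered tail of EVM.create: a single error
// value now drives both the code-size check and the revert decision.
func finalizeCreate(runErr error, retLen, maxCodeSize int, isEIP158, isHomestead bool, chargeCodeGas func(int) bool) (err error, revert bool) {
	err = runErr
	// Surface the size violation as a regular error up front.
	if err == nil && isEIP158 && retLen > maxCodeSize {
		err = errMaxCodeSizeExceeded
	}
	// Charge code-storage gas only for otherwise-successful creations.
	if err == nil && !chargeCodeGas(retLen) {
		err = errCodeStoreOutOfGas
	}
	// Single revert site: everything except a pre-Homestead storage-gas
	// failure rolls the state back.
	revert = err != nil && (isHomestead || err != errCodeStoreOutOfGas)
	return err, revert
}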
diff --git a/core/vm/instructions.go b/core/vm/instructions.go index e8307cacb6..37ddf42599 100644 --- a/core/vm/instructions.go +++ b/core/vm/instructions.go @@ -258,10 +258,8 @@ func opAddress(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([] func opBalance(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) { slot := scope.Stack.peek() - addr := common.Address(slot.Bytes20()) - // Quorum: get public/private state db based on addr - balance := getDualState(interpreter.evm, addr).GetBalance(addr) - slot.SetFromBig(balance) + address := common.Address(slot.Bytes20()) + slot.SetFromBig(getDualState(interpreter.evm, address).GetBalance(address)) // Quorum: get public/private state db based on addr return nil, nil } @@ -345,7 +343,7 @@ func opExtCodeSize(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) slot := scope.Stack.peek() addr := slot.Bytes20() // Quorum: get public/private state db based on addr - slot.SetUint64(uint64(getDualState(interpreter.evm, addr).GetCodeSize(addr))) + slot.SetUint64(uint64(getDualState(interpreter.evm, addr).GetCodeSize(slot.Bytes20()))) // Quorum: get public/private state db based on addr return nil, nil } @@ -385,8 +383,7 @@ func opExtCodeCopy(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) uint64CodeOffset = 0xffffffffffffffff } addr := common.Address(a.Bytes20()) - codeCopy := getData(getDualState(interpreter.evm, addr).GetCode(addr), uint64CodeOffset, length.Uint64()) - // Quorum: get public/private state db based on addr + codeCopy := getData(getDualState(interpreter.evm, addr).GetCode(addr), uint64CodeOffset, length.Uint64()) // Quorum: get public/private state db based on addr scope.Memory.Set(memOffset.Uint64(), length.Uint64(), codeCopy) return nil, nil @@ -514,8 +511,7 @@ func opMstore8(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([] func opSload(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) { loc := scope.Stack.peek() hash := common.Hash(loc.Bytes32()) - // Quorum: get public/private state db based on addr - val := getDualState(interpreter.evm, scope.Contract.Address()).GetState(scope.Contract.Address(), hash) + val := getDualState(interpreter.evm, scope.Contract.Address()).GetState(scope.Contract.Address(), hash) // Quorum: get public/private state db based on addr loc.SetBytes(val.Bytes()) return nil, nil } diff --git a/core/vm/instructions_test.go b/core/vm/instructions_test.go index d30288951b..467ad5c6e0 100644 --- a/core/vm/instructions_test.go +++ b/core/vm/instructions_test.go @@ -40,6 +40,7 @@ type twoOperandParams struct { y string } +var alphabetSoup = "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" var commonParams []*twoOperandParams var twoOpMethods map[string]executionFunc @@ -347,8 +348,8 @@ func BenchmarkOpSub256(b *testing.B) { } func BenchmarkOpMul(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opMul, x, y) } @@ -379,64 +380,64 @@ func BenchmarkOpSdiv(b *testing.B) { } func BenchmarkOpMod(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opMod, x, y) } func BenchmarkOpSmod(b *testing.B) { - x := 
"ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opSmod, x, y) } func BenchmarkOpExp(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opExp, x, y) } func BenchmarkOpSignExtend(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opSignExtend, x, y) } func BenchmarkOpLt(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opLt, x, y) } func BenchmarkOpGt(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opGt, x, y) } func BenchmarkOpSlt(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opSlt, x, y) } func BenchmarkOpSgt(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opSgt, x, y) } func BenchmarkOpEq(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opEq, x, y) } @@ -446,45 +447,45 @@ func BenchmarkOpEq2(b *testing.B) { opBenchmark(b, opEq, x, y) } func BenchmarkOpAnd(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opAnd, x, y) } func BenchmarkOpOr(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opOr, x, y) } func BenchmarkOpXor(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opXor, x, y) } func BenchmarkOpByte(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup opBenchmark(b, opByte, x, y) } func BenchmarkOpAddmod(b *testing.B) { - x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - z := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup + z := alphabetSoup opBenchmark(b, opAddmod, x, y, z) } func BenchmarkOpMulmod(b *testing.B) { - x := 
"ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" - z := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff" + x := alphabetSoup + y := alphabetSoup + z := alphabetSoup opBenchmark(b, opMulmod, x, y, z) } diff --git a/core/vm/interpreter.go b/core/vm/interpreter.go index 68207e860b..6bb207acda 100644 --- a/core/vm/interpreter.go +++ b/core/vm/interpreter.go @@ -148,7 +148,7 @@ func (in *EVMInterpreter) Run(contract *Contract, input []byte, readOnly bool) ( defer func() { in.evm.depth-- }() // Make sure the readOnly is only set if we aren't in readOnly yet. - // This makes also sure that the readOnly flag isn't removed for child calls. + // This also makes sure that the readOnly flag isn't removed for child calls. if readOnly && !in.readOnly { in.readOnly = true defer func() { in.readOnly = false }() @@ -234,7 +234,7 @@ func (in *EVMInterpreter) Run(contract *Contract, input []byte, readOnly bool) ( if in.evm.quorumReadOnly && operation.writes { return nil, fmt.Errorf("VM in read-only mode. Mutating opcode prohibited") } - // If the operation is valid, enforce and write restrictions + // If the operation is valid, enforce write restrictions if in.readOnly && in.evm.chainRules.IsByzantium { // If the interpreter is operating in readonly mode, make sure no // state-modifying operation is performed. The 3rd stack item diff --git a/core/vm/operations_acl.go b/core/vm/operations_acl.go index 191953ce5e..c56941899e 100644 --- a/core/vm/operations_acl.go +++ b/core/vm/operations_acl.go @@ -30,7 +30,7 @@ const ( WarmStorageReadCostEIP2929 = uint64(100) // WARM_STORAGE_READ_COST ) -// gasSStoreEIP2929 implements gas cost for SSTORE according to EIP-2929" +// gasSStoreEIP2929 implements gas cost for SSTORE according to EIP-2929 // // When calling SSTORE, check if the (address, storage_key) pair is in accessed_storage_keys. // If it is not, charge an additional COLD_SLOAD_COST gas, and add the pair to accessed_storage_keys. @@ -177,10 +177,15 @@ func makeCallVariantGasCallEIP2929(oldCalculator gasFunc) gasFunc { return func(evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) { addr := common.Address(stack.Back(1).Bytes20()) // Check slot presence in the access list - if !evm.StateDB.AddressInAccessList(addr) { + warmAccess := evm.StateDB.AddressInAccessList(addr) + // The WarmStorageReadCostEIP2929 (100) is already deducted in the form of a constant cost, so + // the cost to charge for cold access, if any, is Cold - Warm + coldCost := ColdAccountAccessCostEIP2929 - WarmStorageReadCostEIP2929 + if !warmAccess { evm.StateDB.AddAddressToAccessList(addr) - // The WarmStorageReadCostEIP2929 (100) is already deducted in the form of a constant cost - if !contract.UseGas(ColdAccountAccessCostEIP2929 - WarmStorageReadCostEIP2929) { + // Charge the remaining difference here already, to correctly calculate available + // gas for call + if !contract.UseGas(coldCost) { return 0, ErrOutOfGas } } @@ -189,7 +194,16 @@ func makeCallVariantGasCallEIP2929(oldCalculator gasFunc) gasFunc { // - transfer value // - memory expansion // - 63/64ths rule - return oldCalculator(evm, contract, stack, mem, memorySize) + gas, err := oldCalculator(evm, contract, stack, mem, memorySize) + if warmAccess || err != nil { + return gas, err + } + // In case of a cold access, we temporarily add the cold charge back, and also + // add it to the returned gas. 
By adding it to the return, it will be charged + // outside of this function, as part of the dynamic gas, and that will make it + // also become correctly reported to tracers. + contract.Gas += coldCost + return gas + coldCost, nil } } diff --git a/core/vm/runtime/runtime_test.go b/core/vm/runtime/runtime_test.go index 9e9923b736..4e53802857 100644 --- a/core/vm/runtime/runtime_test.go +++ b/core/vm/runtime/runtime_test.go @@ -629,3 +629,83 @@ func TestEip2929Cases(t *testing.T) { "account (cheap)", code) } } + +// TestColdAccountAccessCost test that the cold account access cost is reported +// correctly +// see: https://github.com/ethereum/go-ethereum/issues/22649 +func TestColdAccountAccessCost(t *testing.T) { + for i, tc := range []struct { + code []byte + step int + want uint64 + }{ + { // EXTCODEHASH(0xff) + code: []byte{byte(vm.PUSH1), 0xFF, byte(vm.EXTCODEHASH), byte(vm.POP)}, + step: 1, + want: 2600, + }, + { // BALANCE(0xff) + code: []byte{byte(vm.PUSH1), 0xFF, byte(vm.BALANCE), byte(vm.POP)}, + step: 1, + want: 2600, + }, + { // CALL(0xff) + code: []byte{ + byte(vm.PUSH1), 0x0, + byte(vm.DUP1), byte(vm.DUP1), byte(vm.DUP1), byte(vm.DUP1), + byte(vm.PUSH1), 0xff, byte(vm.DUP1), byte(vm.CALL), byte(vm.POP), + }, + step: 7, + want: 2855, + }, + { // CALLCODE(0xff) + code: []byte{ + byte(vm.PUSH1), 0x0, + byte(vm.DUP1), byte(vm.DUP1), byte(vm.DUP1), byte(vm.DUP1), + byte(vm.PUSH1), 0xff, byte(vm.DUP1), byte(vm.CALLCODE), byte(vm.POP), + }, + step: 7, + want: 2855, + }, + { // DELEGATECALL(0xff) + code: []byte{ + byte(vm.PUSH1), 0x0, + byte(vm.DUP1), byte(vm.DUP1), byte(vm.DUP1), + byte(vm.PUSH1), 0xff, byte(vm.DUP1), byte(vm.DELEGATECALL), byte(vm.POP), + }, + step: 6, + want: 2855, + }, + { // STATICCALL(0xff) + code: []byte{ + byte(vm.PUSH1), 0x0, + byte(vm.DUP1), byte(vm.DUP1), byte(vm.DUP1), + byte(vm.PUSH1), 0xff, byte(vm.DUP1), byte(vm.STATICCALL), byte(vm.POP), + }, + step: 6, + want: 2855, + }, + { // SELFDESTRUCT(0xff) + code: []byte{ + byte(vm.PUSH1), 0xff, byte(vm.SELFDESTRUCT), + }, + step: 1, + want: 7600, + }, + } { + tracer := vm.NewStructLogger(nil) + Execute(tc.code, nil, &Config{ + EVMConfig: vm.Config{ + Debug: true, + Tracer: tracer, + }, + }) + have := tracer.StructLogs()[tc.step].GasCost + if want := tc.want; have != want { + for ii, op := range tracer.StructLogs() { + t.Logf("%d: %v %d", ii, op.OpName(), op.GasCost) + } + t.Fatalf("tescase %d, gas report wrong, step %d, have %d want %d", i, tc.step, have, want) + } + } +} diff --git a/eth/api.go b/eth/api.go index 103d8fd0fb..15a6527833 100644 --- a/eth/api.go +++ b/eth/api.go @@ -61,6 +61,11 @@ func (api *PublicEthereumAPI) Coinbase() (common.Address, error) { return api.Etherbase() } +// Hashrate returns the POW hashrate +func (api *PublicEthereumAPI) Hashrate() hexutil.Uint64 { + return hexutil.Uint64(api.e.Miner().Hashrate()) +} + // PublicMinerAPI provides an API to control the miner. // It offers only methods that operate on data that pose no security risk when it is publicly accessible. 
type PublicMinerAPI struct { diff --git a/eth/backend.go b/eth/backend.go index 7d8ba19ee7..2047815b5b 100644 --- a/eth/backend.go +++ b/eth/backend.go @@ -54,6 +54,7 @@ import ( "github.com/ethereum/go-ethereum/miner" "github.com/ethereum/go-ethereum/node" "github.com/ethereum/go-ethereum/p2p" + "github.com/ethereum/go-ethereum/p2p/dnsdisc" "github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/plugin" @@ -228,7 +229,9 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) { if bcVersion != nil && *bcVersion > core.BlockChainVersion { return nil, fmt.Errorf("database version is v%d, Geth %s only supports v%d", *bcVersion, params.VersionWithMeta, core.BlockChainVersion) } else if bcVersion == nil || *bcVersion < core.BlockChainVersion { - log.Warn("Upgrade blockchain database version", "from", dbVer, "to", core.BlockChainVersion) + if bcVersion != nil { // only print warning on upgrade, not on init + log.Warn("Upgrade blockchain database version", "from", dbVer, "to", core.BlockChainVersion) + } rawdb.WriteDatabaseVersion(chainDb, core.BlockChainVersion) } } @@ -382,6 +385,7 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) { } } } + eth.miner = miner.New(eth, &config.Miner, chainConfig, eth.EventMux(), eth.engine, eth.isLocalBlock) eth.miner.SetExtra(makeExtraData(config.Miner.ExtraData, eth.blockchain.Config().IsQuorum)) @@ -443,11 +447,13 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) { } eth.APIBackend.gpo = gasprice.NewOracle(eth.APIBackend, gpoParams) - eth.ethDialCandidates, err = setupDiscovery(eth.config.EthDiscoveryURLs) + // Setup DNS discovery iterators. + dnsclient := dnsdisc.NewClient(dnsdisc.Config{}) + eth.ethDialCandidates, err = dnsclient.NewIterator(eth.config.EthDiscoveryURLs...) if err != nil { return nil, err } - eth.snapDialCandidates, err = setupDiscovery(eth.config.SnapDiscoveryURLs) + eth.snapDialCandidates, err = dnsclient.NewIterator(eth.config.SnapDiscoveryURLs...) if err != nil { return nil, err } @@ -821,6 +827,8 @@ func (s *Ethereum) Stop() error { if s.qlightServerHandler != nil { s.qlightServerHandler.StopQLightServer() } + s.ethDialCandidates.Close() + s.snapDialCandidates.Close() s.handler.Stop() } diff --git a/eth/catalyst/api.go b/eth/catalyst/api.go new file mode 100644 index 0000000000..e440b2273f --- /dev/null +++ b/eth/catalyst/api.go @@ -0,0 +1,322 @@ +// Copyright 2020 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see . + +// Package catalyst implements the temporary eth1/eth2 RPC integration. 
+package catalyst + +import ( + "errors" + "fmt" + "math/big" + "time" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/core" + "github.com/ethereum/go-ethereum/core/mps" + "github.com/ethereum/go-ethereum/core/state" + "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/eth" + "github.com/ethereum/go-ethereum/log" + "github.com/ethereum/go-ethereum/node" + chainParams "github.com/ethereum/go-ethereum/params" + "github.com/ethereum/go-ethereum/rpc" + "github.com/ethereum/go-ethereum/trie" +) + +// Register adds catalyst APIs to the node. +func Register(stack *node.Node, backend *eth.Ethereum) error { + chainconfig := backend.BlockChain().Config() + if chainconfig.CatalystBlock == nil { + return errors.New("catalystBlock is not set in genesis config") + } else if chainconfig.CatalystBlock.Sign() != 0 { + return errors.New("catalystBlock of genesis config must be zero") + } + + log.Warn("Catalyst mode enabled") + stack.RegisterAPIs([]rpc.API{ + { + Namespace: "consensus", + Version: "1.0", + Service: newConsensusAPI(backend), + Public: true, + }, + }) + return nil +} + +type consensusAPI struct { + eth *eth.Ethereum +} + +func newConsensusAPI(eth *eth.Ethereum) *consensusAPI { + return &consensusAPI{eth: eth} +} + +// blockExecutionEnv gathers all the data required to execute +// a block, either when assembling it or when inserting it. +type blockExecutionEnv struct { + chain *core.BlockChain + state *state.StateDB + tcount int + gasPool *core.GasPool + + header *types.Header + txs []*types.Transaction + receipts []*types.Receipt + + // Quorum + privateStateRepo mps.PrivateStateRepository + privateState *state.StateDB + forceNonParty bool + isInnerPrivateTxn bool + privateReceipts []*types.Receipt +} + +func (env *blockExecutionEnv) commitTransaction(tx *types.Transaction, coinbase common.Address) error { + vmconfig := *env.chain.GetVMConfig() + receipt, privateReceipt, err := core.ApplyTransaction(env.chain.Config(), env.chain, &coinbase, env.gasPool, env.state, env.privateState, env.header, tx, &env.header.GasUsed, vmconfig, env.forceNonParty, env.privateStateRepo, env.isInnerPrivateTxn) + if err != nil { + return err + } + env.txs = append(env.txs, tx) + env.receipts = append(env.receipts, receipt) + env.privateReceipts = append(env.privateReceipts, privateReceipt) + return nil +} + +func (api *consensusAPI) makeEnv(parent *types.Block, header *types.Header) (*blockExecutionEnv, error) { + state, mpsr, err := api.eth.BlockChain().StateAt(parent.Root()) + if err != nil { + return nil, err + } + privateState, err := mpsr.DefaultState() // TODO merge add PSI? + if err != nil { + return nil, err + } + env := &blockExecutionEnv{ + chain: api.eth.BlockChain(), + state: state, + header: header, + gasPool: new(core.GasPool).AddGas(header.GasLimit), + // Quorum + privateState: privateState, + } + return env, nil +} + +// AssembleBlock creates a new block, inserts it into the chain, and returns the "execution +// data" required for eth2 clients to process the new block. 
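// An assumed JSON-RPC invocation of this method for illustration (the method
// name is derived from the "consensus" namespace registered above, with geth's
// usual lower-casing of the leading letter):
//
//	{"jsonrpc":"2.0","id":1,"method":"consensus_assembleBlock",
//	 "params":[{"parentHash":"0x…","timestamp":"0x5"}]}
//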
+func (api *consensusAPI) AssembleBlock(params assembleBlockParams) (*executableData, error) { + log.Info("Producing block", "parentHash", params.ParentHash) + + bc := api.eth.BlockChain() + parent := bc.GetBlockByHash(params.ParentHash) + if parent == nil { + log.Warn("Cannot assemble block with parent hash to unknown block", "parentHash", params.ParentHash) + return nil, fmt.Errorf("cannot assemble block with unknown parent %s", params.ParentHash) + } + + pool := api.eth.TxPool() + + if parent.Time() >= params.Timestamp { + return nil, fmt.Errorf("child timestamp lower than parent's: %d >= %d", parent.Time(), params.Timestamp) + } + if now := uint64(time.Now().Unix()); params.Timestamp > now+1 { + wait := time.Duration(params.Timestamp-now) * time.Second + log.Info("Producing block too far in the future", "wait", common.PrettyDuration(wait)) + time.Sleep(wait) + } + + pending, err := pool.Pending() + if err != nil { + return nil, err + } + + coinbase, err := api.eth.Etherbase() + if err != nil { + return nil, err + } + num := parent.Number() + header := &types.Header{ + ParentHash: parent.Hash(), + Number: num.Add(num, common.Big1), + Coinbase: coinbase, + GasLimit: parent.GasLimit(), // Keep the gas limit constant in this prototype + Extra: []byte{}, + Time: params.Timestamp, + } + err = api.eth.Engine().Prepare(bc, header) + if err != nil { + return nil, err + } + + env, err := api.makeEnv(parent, header) + if err != nil { + return nil, err + } + + var ( + signer = types.MakeSigner(bc.Config(), header.Number) + txHeap = types.NewTransactionsByPriceAndNonce(signer, pending) + transactions []*types.Transaction + ) + for { + if env.gasPool.Gas() < chainParams.TxGas { + log.Trace("Not enough gas for further transactions", "have", env.gasPool, "want", chainParams.TxGas) + break + } + tx := txHeap.Peek() + if tx == nil { + break + } + + // The sender is only for logging purposes, and it doesn't really matter if it's correct. + from, _ := types.Sender(signer, tx) + + // Execute the transaction + env.state.Prepare(tx.Hash(), common.Hash{}, env.tcount) + err = env.commitTransaction(tx, coinbase) + switch err { + case core.ErrGasLimitReached: + // Pop the current out-of-gas transaction without shifting in the next from the account + log.Trace("Gas limit exceeded for current block", "sender", from) + txHeap.Pop() + + case core.ErrNonceTooLow: + // New head notification data race between the transaction pool and miner, shift + log.Trace("Skipping transaction with low nonce", "sender", from, "nonce", tx.Nonce()) + txHeap.Shift() + + case core.ErrNonceTooHigh: + // Reorg notification data race between the transaction pool and miner, skip account = + log.Trace("Skipping account with high nonce", "sender", from, "nonce", tx.Nonce()) + txHeap.Pop() + + case nil: + // Everything ok, collect the logs and shift in the next transaction from the same account + env.tcount++ + txHeap.Shift() + transactions = append(transactions, tx) + + default: + // Strange error, discard the transaction and get the next in line (note, the + // nonce-too-high clause will prevent us from executing in vain). + log.Debug("Transaction failed, account skipped", "hash", tx.Hash(), "err", err) + txHeap.Shift() + } + } + + // Create the block. 
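// For reference, the selection loop above leans on the documented contract of
// TransactionsByPriceAndNonce: Peek returns the current best-priced
// transaction, Shift advances to the same sender's next nonce, and Pop
// discards the sender's remaining transactions entirely.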
+ block, err := api.eth.Engine().FinalizeAndAssemble(bc, header, env.state, transactions, nil /* uncles */, env.receipts) + if err != nil { + return nil, err + } + return &executableData{ + BlockHash: block.Hash(), + ParentHash: block.ParentHash(), + Miner: block.Coinbase(), + StateRoot: block.Root(), + Number: block.NumberU64(), + GasLimit: block.GasLimit(), + GasUsed: block.GasUsed(), + Timestamp: block.Time(), + ReceiptRoot: block.ReceiptHash(), + LogsBloom: block.Bloom().Bytes(), + Transactions: encodeTransactions(block.Transactions()), + }, nil +} + +func encodeTransactions(txs []*types.Transaction) [][]byte { + var enc = make([][]byte, len(txs)) + for i, tx := range txs { + enc[i], _ = tx.MarshalBinary() + } + return enc +} + +func decodeTransactions(enc [][]byte) ([]*types.Transaction, error) { + var txs = make([]*types.Transaction, len(enc)) + for i, encTx := range enc { + var tx types.Transaction + if err := tx.UnmarshalBinary(encTx); err != nil { + return nil, fmt.Errorf("invalid transaction %d: %v", i, err) + } + txs[i] = &tx + } + return txs, nil +} + +func insertBlockParamsToBlock(params executableData) (*types.Block, error) { + txs, err := decodeTransactions(params.Transactions) + if err != nil { + return nil, err + } + + number := big.NewInt(0) + number.SetUint64(params.Number) + header := &types.Header{ + ParentHash: params.ParentHash, + UncleHash: types.EmptyUncleHash, + Coinbase: params.Miner, + Root: params.StateRoot, + TxHash: types.DeriveSha(types.Transactions(txs), trie.NewStackTrie(nil)), + ReceiptHash: params.ReceiptRoot, + Bloom: types.BytesToBloom(params.LogsBloom), + Difficulty: big.NewInt(1), + Number: number, + GasLimit: params.GasLimit, + GasUsed: params.GasUsed, + Time: params.Timestamp, + } + block := types.NewBlockWithHeader(header).WithBody(txs, nil /* uncles */) + return block, nil +} + +// NewBlock creates an Eth1 block, inserts it in the chain, and either returns true, +// or false + an error. This is a bit redundant for Go, but simplifies things on the +// eth2 side. +func (api *consensusAPI) NewBlock(params executableData) (*newBlockResponse, error) { + parent := api.eth.BlockChain().GetBlockByHash(params.ParentHash) + if parent == nil { + return &newBlockResponse{false}, fmt.Errorf("could not find parent %x", params.ParentHash) + } + block, err := insertBlockParamsToBlock(params) + if err != nil { + return nil, err + } + + _, err = api.eth.BlockChain().InsertChainWithoutSealVerification(block) + return &newBlockResponse{err == nil}, err +} + +// Used in tests to add the list of transactions from a block to the tx pool. +func (api *consensusAPI) addBlockTxs(block *types.Block) error { + for _, tx := range block.Transactions() { + api.eth.TxPool().AddLocal(tx) + } + return nil +} + +// FinalizeBlock is called to mark a block as synchronized, so +// that data that is no longer needed can be removed. +func (api *consensusAPI) FinalizeBlock(blockHash common.Hash) (*genericResponse, error) { + return &genericResponse{true}, nil +} + +// SetHead is called to perform a force choice. +func (api *consensusAPI) SetHead(newHead common.Hash) (*genericResponse, error) { + return &genericResponse{true}, nil +} diff --git a/eth/catalyst/api_test.go b/eth/catalyst/api_test.go new file mode 100644 index 0000000000..b8a6e43fcd --- /dev/null +++ b/eth/catalyst/api_test.go @@ -0,0 +1,241 @@ +// Copyright 2020 The go-ethereum Authors +// This file is part of the go-ethereum library.
+// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>. + +package catalyst + +import ( + "math/big" + "testing" + + "github.com/ethereum/go-ethereum/consensus/ethash" + "github.com/ethereum/go-ethereum/core" + "github.com/ethereum/go-ethereum/core/rawdb" + "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/crypto" + "github.com/ethereum/go-ethereum/eth" + "github.com/ethereum/go-ethereum/eth/ethconfig" + "github.com/ethereum/go-ethereum/node" + "github.com/ethereum/go-ethereum/params" +) + +var ( + // testKey is a private key to use for funding a tester account. + testKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291") + + // testAddr is the Ethereum address of the tester account. + testAddr = crypto.PubkeyToAddress(testKey.PublicKey) + + testBalance = big.NewInt(2e10) +) + +func generateTestChain() (*core.Genesis, []*types.Block) { + db := rawdb.NewMemoryDatabase() + config := params.AllEthashProtocolChanges + genesis := &core.Genesis{ + Config: config, + Alloc: core.GenesisAlloc{testAddr: {Balance: testBalance}}, + ExtraData: []byte("test genesis"), + Timestamp: 9000, + } + generate := func(i int, g *core.BlockGen) { + g.OffsetTime(5) + g.SetExtra([]byte("test")) + } + gblock := genesis.ToBlock(db) + engine := ethash.NewFaker() + blocks, _ := core.GenerateChain(config, gblock, engine, db, 10, generate) + blocks = append([]*types.Block{gblock}, blocks...) + return genesis, blocks +} + +func generateTestChainWithFork(n int, fork int) (*core.Genesis, []*types.Block, []*types.Block) { + if fork >= n { + fork = n - 1 + } + db := rawdb.NewMemoryDatabase() + config := &params.ChainConfig{ + ChainID: big.NewInt(1337), + HomesteadBlock: big.NewInt(0), + EIP150Block: big.NewInt(0), + EIP155Block: big.NewInt(0), + EIP158Block: big.NewInt(0), + ByzantiumBlock: big.NewInt(0), + ConstantinopleBlock: big.NewInt(0), + PetersburgBlock: big.NewInt(0), + IstanbulBlock: big.NewInt(0), + MuirGlacierBlock: big.NewInt(0), + BerlinBlock: big.NewInt(0), + CatalystBlock: big.NewInt(0), + Ethash: new(params.EthashConfig), + } + genesis := &core.Genesis{ + Config: config, + Alloc: core.GenesisAlloc{testAddr: {Balance: testBalance}}, + ExtraData: []byte("test genesis"), + Timestamp: 9000, + } + generate := func(i int, g *core.BlockGen) { + g.OffsetTime(5) + g.SetExtra([]byte("test")) + } + generateFork := func(i int, g *core.BlockGen) { + g.OffsetTime(5) + g.SetExtra([]byte("testF")) + } + gblock := genesis.ToBlock(db) + engine := ethash.NewFaker() + blocks, _ := core.GenerateChain(config, gblock, engine, db, n, generate) + blocks = append([]*types.Block{gblock}, blocks...)
+ forkedBlocks, _ := core.GenerateChain(config, blocks[fork], engine, db, n-fork, generateFork) + return genesis, blocks, forkedBlocks +} + +func TestEth2AssembleBlock(t *testing.T) { + genesis, blocks := generateTestChain() + n, ethservice := startEthService(t, genesis, blocks[1:9]) + defer n.Close() + + api := newConsensusAPI(ethservice) + signer := types.NewEIP155Signer(ethservice.BlockChain().Config().ChainID) + tx, err := types.SignTx(types.NewTransaction(0, blocks[8].Coinbase(), big.NewInt(1000), params.TxGas, nil, nil), signer, testKey) + if err != nil { + t.Fatalf("error signing transaction, err=%v", err) + } + ethservice.TxPool().AddLocal(tx) + blockParams := assembleBlockParams{ + ParentHash: blocks[8].ParentHash(), + Timestamp: blocks[8].Time(), + } + execData, err := api.AssembleBlock(blockParams) + + if err != nil { + t.Fatalf("error producing block, err=%v", err) + } + + if len(execData.Transactions) != 1 { + t.Fatalf("invalid number of transactions %d != 1", len(execData.Transactions)) + } +} + +func TestEth2AssembleBlockWithAnotherBlocksTxs(t *testing.T) { + genesis, blocks := generateTestChain() + n, ethservice := startEthService(t, genesis, blocks[1:9]) + defer n.Close() + + api := newConsensusAPI(ethservice) + + // Put the 10th block's tx in the pool and produce a new block + api.addBlockTxs(blocks[9]) + blockParams := assembleBlockParams{ + ParentHash: blocks[9].ParentHash(), + Timestamp: blocks[9].Time(), + } + execData, err := api.AssembleBlock(blockParams) + if err != nil { + t.Fatalf("error producing block, err=%v", err) + } + + if len(execData.Transactions) != blocks[9].Transactions().Len() { + t.Fatalf("invalid number of transactions %d != 1", len(execData.Transactions)) + } +} + +func TestEth2NewBlock(t *testing.T) { + genesis, blocks, forkedBlocks := generateTestChainWithFork(10, 4) + n, ethservice := startEthService(t, genesis, blocks[1:5]) + defer n.Close() + + api := newConsensusAPI(ethservice) + for i := 5; i < 10; i++ { + p := executableData{ + ParentHash: ethservice.BlockChain().CurrentBlock().Hash(), + Miner: blocks[i].Coinbase(), + StateRoot: blocks[i].Root(), + GasLimit: blocks[i].GasLimit(), + GasUsed: blocks[i].GasUsed(), + Transactions: encodeTransactions(blocks[i].Transactions()), + ReceiptRoot: blocks[i].ReceiptHash(), + LogsBloom: blocks[i].Bloom().Bytes(), + BlockHash: blocks[i].Hash(), + Timestamp: blocks[i].Time(), + Number: uint64(i), + } + success, err := api.NewBlock(p) + if err != nil || !success.Valid { + t.Fatalf("Failed to insert block: %v", err) + } + } + + exp := ethservice.BlockChain().CurrentBlock().Hash() + + // Introduce the fork point. 
+ lastBlockNum := blocks[4].Number() + lastBlock := blocks[4] + for i := 0; i < 4; i++ { + lastBlockNum.Add(lastBlockNum, big.NewInt(1)) + p := executableData{ + ParentHash: lastBlock.Hash(), + Miner: forkedBlocks[i].Coinbase(), + StateRoot: forkedBlocks[i].Root(), + Number: lastBlockNum.Uint64(), + GasLimit: forkedBlocks[i].GasLimit(), + GasUsed: forkedBlocks[i].GasUsed(), + Transactions: encodeTransactions(blocks[i].Transactions()), + ReceiptRoot: forkedBlocks[i].ReceiptHash(), + LogsBloom: forkedBlocks[i].Bloom().Bytes(), + BlockHash: forkedBlocks[i].Hash(), + Timestamp: forkedBlocks[i].Time(), + } + success, err := api.NewBlock(p) + if err != nil || !success.Valid { + t.Fatalf("Failed to insert forked block #%d: %v", i, err) + } + lastBlock, err = insertBlockParamsToBlock(p) + if err != nil { + t.Fatal(err) + } + } + + if ethservice.BlockChain().CurrentBlock().Hash() != exp { + t.Fatalf("Wrong head after inserting fork %x != %x", exp, ethservice.BlockChain().CurrentBlock().Hash()) + } +} + +// startEthService creates a full node instance for testing. +func startEthService(t *testing.T, genesis *core.Genesis, blocks []*types.Block) (*node.Node, *eth.Ethereum) { + t.Helper() + + n, err := node.New(&node.Config{}) + if err != nil { + t.Fatal("can't create node:", err) + } + + ethcfg := ðconfig.Config{Genesis: genesis, Ethash: ethash.Config{PowMode: ethash.ModeFake}} + ethservice, err := eth.New(n, ethcfg) + if err != nil { + t.Fatal("can't create eth service:", err) + } + if err := n.Start(); err != nil { + t.Fatal("can't start node:", err) + } + if _, err := ethservice.BlockChain().InsertChain(blocks); err != nil { + n.Close() + t.Fatal("can't import test blocks:", err) + } + ethservice.SetEtherbase(testAddr) + + return n, ethservice +} diff --git a/eth/catalyst/api_types.go b/eth/catalyst/api_types.go new file mode 100644 index 0000000000..d5d351a991 --- /dev/null +++ b/eth/catalyst/api_types.go @@ -0,0 +1,70 @@ +// Copyright 2020 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see . + +package catalyst + +import ( + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/common/hexutil" +) + +//go:generate go run github.com/fjl/gencodec -type assembleBlockParams -field-override assembleBlockParamsMarshaling -out gen_blockparams.go + +// Structure described at https://hackmd.io/T9x2mMA4S7us8tJwEB3FDQ +type assembleBlockParams struct { + ParentHash common.Hash `json:"parentHash" gencodec:"required"` + Timestamp uint64 `json:"timestamp" gencodec:"required"` +} + +// JSON type overrides for assembleBlockParams. 
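// With these overrides a request body serializes roughly as follows
// (illustrative values; hexutil renders the uint64 as a 0x-prefixed quantity,
// 0x2328 being the 9000 used as genesis timestamp in the tests):
//
//	{"parentHash":"0x…","timestamp":"0x2328"}
//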
+type assembleBlockParamsMarshaling struct { + Timestamp hexutil.Uint64 +} + +//go:generate go run github.com/fjl/gencodec -type executableData -field-override executableDataMarshaling -out gen_ed.go + +// Structure described at https://notes.ethereum.org/@n0ble/rayonism-the-merge-spec#Parameters1 +type executableData struct { + BlockHash common.Hash `json:"blockHash" gencodec:"required"` + ParentHash common.Hash `json:"parentHash" gencodec:"required"` + Miner common.Address `json:"miner" gencodec:"required"` + StateRoot common.Hash `json:"stateRoot" gencodec:"required"` + Number uint64 `json:"number" gencodec:"required"` + GasLimit uint64 `json:"gasLimit" gencodec:"required"` + GasUsed uint64 `json:"gasUsed" gencodec:"required"` + Timestamp uint64 `json:"timestamp" gencodec:"required"` + ReceiptRoot common.Hash `json:"receiptsRoot" gencodec:"required"` + LogsBloom []byte `json:"logsBloom" gencodec:"required"` + Transactions [][]byte `json:"transactions" gencodec:"required"` +} + +// JSON type overrides for executableData. +type executableDataMarshaling struct { + Number hexutil.Uint64 + GasLimit hexutil.Uint64 + GasUsed hexutil.Uint64 + Timestamp hexutil.Uint64 + LogsBloom hexutil.Bytes + Transactions []hexutil.Bytes +} + +type newBlockResponse struct { + Valid bool `json:"valid"` +} + +type genericResponse struct { + Success bool `json:"success"` +} diff --git a/eth/catalyst/gen_blockparams.go b/eth/catalyst/gen_blockparams.go new file mode 100644 index 0000000000..a9a08ec3a8 --- /dev/null +++ b/eth/catalyst/gen_blockparams.go @@ -0,0 +1,46 @@ +// Code generated by github.com/fjl/gencodec. DO NOT EDIT. + +package catalyst + +import ( + "encoding/json" + "errors" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/common/hexutil" +) + +var _ = (*assembleBlockParamsMarshaling)(nil) + +// MarshalJSON marshals as JSON. +func (a assembleBlockParams) MarshalJSON() ([]byte, error) { + type assembleBlockParams struct { + ParentHash common.Hash `json:"parentHash" gencodec:"required"` + Timestamp hexutil.Uint64 `json:"timestamp" gencodec:"required"` + } + var enc assembleBlockParams + enc.ParentHash = a.ParentHash + enc.Timestamp = hexutil.Uint64(a.Timestamp) + return json.Marshal(&enc) +} + +// UnmarshalJSON unmarshals from JSON. +func (a *assembleBlockParams) UnmarshalJSON(input []byte) error { + type assembleBlockParams struct { + ParentHash *common.Hash `json:"parentHash" gencodec:"required"` + Timestamp *hexutil.Uint64 `json:"timestamp" gencodec:"required"` + } + var dec assembleBlockParams + if err := json.Unmarshal(input, &dec); err != nil { + return err + } + if dec.ParentHash == nil { + return errors.New("missing required field 'parentHash' for assembleBlockParams") + } + a.ParentHash = *dec.ParentHash + if dec.Timestamp == nil { + return errors.New("missing required field 'timestamp' for assembleBlockParams") + } + a.Timestamp = uint64(*dec.Timestamp) + return nil +} diff --git a/eth/catalyst/gen_ed.go b/eth/catalyst/gen_ed.go new file mode 100644 index 0000000000..4c2e4c8ead --- /dev/null +++ b/eth/catalyst/gen_ed.go @@ -0,0 +1,117 @@ +// Code generated by github.com/fjl/gencodec. DO NOT EDIT. + +package catalyst + +import ( + "encoding/json" + "errors" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/common/hexutil" +) + +var _ = (*executableDataMarshaling)(nil) + +// MarshalJSON marshals as JSON. 
+func (e executableData) MarshalJSON() ([]byte, error) { + type executableData struct { + BlockHash common.Hash `json:"blockHash" gencodec:"required"` + ParentHash common.Hash `json:"parentHash" gencodec:"required"` + Miner common.Address `json:"miner" gencodec:"required"` + StateRoot common.Hash `json:"stateRoot" gencodec:"required"` + Number hexutil.Uint64 `json:"number" gencodec:"required"` + GasLimit hexutil.Uint64 `json:"gasLimit" gencodec:"required"` + GasUsed hexutil.Uint64 `json:"gasUsed" gencodec:"required"` + Timestamp hexutil.Uint64 `json:"timestamp" gencodec:"required"` + ReceiptRoot common.Hash `json:"receiptsRoot" gencodec:"required"` + LogsBloom hexutil.Bytes `json:"logsBloom" gencodec:"required"` + Transactions []hexutil.Bytes `json:"transactions" gencodec:"required"` + } + var enc executableData + enc.BlockHash = e.BlockHash + enc.ParentHash = e.ParentHash + enc.Miner = e.Miner + enc.StateRoot = e.StateRoot + enc.Number = hexutil.Uint64(e.Number) + enc.GasLimit = hexutil.Uint64(e.GasLimit) + enc.GasUsed = hexutil.Uint64(e.GasUsed) + enc.Timestamp = hexutil.Uint64(e.Timestamp) + enc.ReceiptRoot = e.ReceiptRoot + enc.LogsBloom = e.LogsBloom + if e.Transactions != nil { + enc.Transactions = make([]hexutil.Bytes, len(e.Transactions)) + for k, v := range e.Transactions { + enc.Transactions[k] = v + } + } + return json.Marshal(&enc) +} + +// UnmarshalJSON unmarshals from JSON. +func (e *executableData) UnmarshalJSON(input []byte) error { + type executableData struct { + BlockHash *common.Hash `json:"blockHash" gencodec:"required"` + ParentHash *common.Hash `json:"parentHash" gencodec:"required"` + Miner *common.Address `json:"miner" gencodec:"required"` + StateRoot *common.Hash `json:"stateRoot" gencodec:"required"` + Number *hexutil.Uint64 `json:"number" gencodec:"required"` + GasLimit *hexutil.Uint64 `json:"gasLimit" gencodec:"required"` + GasUsed *hexutil.Uint64 `json:"gasUsed" gencodec:"required"` + Timestamp *hexutil.Uint64 `json:"timestamp" gencodec:"required"` + ReceiptRoot *common.Hash `json:"receiptsRoot" gencodec:"required"` + LogsBloom *hexutil.Bytes `json:"logsBloom" gencodec:"required"` + Transactions []hexutil.Bytes `json:"transactions" gencodec:"required"` + } + var dec executableData + if err := json.Unmarshal(input, &dec); err != nil { + return err + } + if dec.BlockHash == nil { + return errors.New("missing required field 'blockHash' for executableData") + } + e.BlockHash = *dec.BlockHash + if dec.ParentHash == nil { + return errors.New("missing required field 'parentHash' for executableData") + } + e.ParentHash = *dec.ParentHash + if dec.Miner == nil { + return errors.New("missing required field 'miner' for executableData") + } + e.Miner = *dec.Miner + if dec.StateRoot == nil { + return errors.New("missing required field 'stateRoot' for executableData") + } + e.StateRoot = *dec.StateRoot + if dec.Number == nil { + return errors.New("missing required field 'number' for executableData") + } + e.Number = uint64(*dec.Number) + if dec.GasLimit == nil { + return errors.New("missing required field 'gasLimit' for executableData") + } + e.GasLimit = uint64(*dec.GasLimit) + if dec.GasUsed == nil { + return errors.New("missing required field 'gasUsed' for executableData") + } + e.GasUsed = uint64(*dec.GasUsed) + if dec.Timestamp == nil { + return errors.New("missing required field 'timestamp' for executableData") + } + e.Timestamp = uint64(*dec.Timestamp) + if dec.ReceiptRoot == nil { + return errors.New("missing required field 'receiptsRoot' for executableData") + } + 
e.ReceiptRoot = *dec.ReceiptRoot + if dec.LogsBloom == nil { + return errors.New("missing required field 'logsBloom' for executableData") + } + e.LogsBloom = *dec.LogsBloom + if dec.Transactions == nil { + return errors.New("missing required field 'transactions' for executableData") + } + e.Transactions = make([][]byte, len(dec.Transactions)) + for k, v := range dec.Transactions { + e.Transactions[k] = v + } + return nil +} diff --git a/eth/discovery.go b/eth/discovery.go index 855ce3b0e1..70668b2b70 100644 --- a/eth/discovery.go +++ b/eth/discovery.go @@ -19,7 +19,6 @@ package eth import ( "github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core/forkid" - "github.com/ethereum/go-ethereum/p2p/dnsdisc" "github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/rlp" ) @@ -62,13 +61,3 @@ func (eth *Ethereum) currentEthEntry() *ethEntry { return &ethEntry{ForkID: forkid.NewID(eth.blockchain.Config(), eth.blockchain.Genesis().Hash(), eth.blockchain.CurrentHeader().Number.Uint64())} } - -// setupDiscovery creates the node discovery source for the `eth` and `snap` -// protocols. -func setupDiscovery(urls []string) (enode.Iterator, error) { - if len(urls) == 0 { - return nil, nil - } - client := dnsdisc.NewClient(dnsdisc.Config{}) - return client.NewIterator(urls...) -} diff --git a/eth/downloader/downloader.go b/eth/downloader/downloader.go index f384e4e05f..6e3fda62d2 100644 --- a/eth/downloader/downloader.go +++ b/eth/downloader/downloader.go @@ -28,7 +28,9 @@ import ( "github.com/ethereum/go-ethereum" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/core/rawdb" + "github.com/ethereum/go-ethereum/core/state/snapshot" "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/eth/protocols/eth" "github.com/ethereum/go-ethereum/eth/protocols/snap" "github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/event" @@ -216,6 +218,9 @@ type BlockChain interface { // InsertReceiptChain inserts a batch of receipts into the local chain. InsertReceiptChain(types.Blocks, []types.Receipts, uint64) (int, error) + + // Snapshots returns the blockchain snapshot tree to pause it during sync. + Snapshots() *snapshot.Tree } // New creates a new downloader to fetch hashes and blocks from remote peers. @@ -405,6 +410,12 @@ func (d *Downloader) synchronise(id string, hash common.Hash, td *big.Int, mode // but until snap becomes prevalent, we should support both. TODO(karalabe). if mode == SnapSync { if !d.snapSync { + // Snap sync uses the snapshot namespace to store potentially flaky data until + // sync completely heals and finishes. Pause snapshot maintenance in the + // meantime to prevent access.
+ if snapshots := d.blockchain.Snapshots(); snapshots != nil { // Only nil in tests + snapshots.Disable() + } log.Warn("Enabling snapshot sync prototype") d.snapSync = true } @@ -475,8 +486,8 @@ func (d *Downloader) syncWithPeer(p *peerConnection, hash common.Hash, td *big.I d.mux.Post(DoneEvent{latest}) } }() - if p.version < 64 { - return fmt.Errorf("%w: advertized %d < required %d", errTooOld, p.version, 64) + if p.version < eth.ETH65 { + return fmt.Errorf("%w: advertized %d < required %d", errTooOld, p.version, eth.ETH65) } mode := d.getMode() diff --git a/eth/downloader/downloader_test.go b/eth/downloader/downloader_test.go index ed91db8651..a2b0e80342 100644 --- a/eth/downloader/downloader_test.go +++ b/eth/downloader/downloader_test.go @@ -29,7 +29,9 @@ import ( "github.com/ethereum/go-ethereum" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/core/rawdb" + "github.com/ethereum/go-ethereum/core/state/snapshot" "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/eth/protocols/eth" "github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/event" "github.com/ethereum/go-ethereum/params" @@ -412,6 +414,11 @@ func (dl *downloadTester) dropPeer(id string) { dl.downloader.UnregisterPeer(id) } +// Snapshots implements the BlockChain interface for the downloader, but is a noop. +func (dl *downloadTester) Snapshots() *snapshot.Tree { + return nil +} + type downloadTesterPeer struct { dl *downloadTester id string @@ -519,16 +526,13 @@ func assertOwnForkedChain(t *testing.T, tester *downloadTester, common int, leng } } -func TestCanonicalSynchronisation64Full(t *testing.T) { testCanonSync(t, 64, FullSync) } -func TestCanonicalSynchronisation64Fast(t *testing.T) { testCanonSync(t, 64, FastSync) } +func TestCanonicalSynchronisation65Full(t *testing.T) { testCanonSync(t, eth.ETH65, FullSync) } +func TestCanonicalSynchronisation65Fast(t *testing.T) { testCanonSync(t, eth.ETH65, FastSync) } +func TestCanonicalSynchronisation65Light(t *testing.T) { testCanonSync(t, eth.ETH65, LightSync) } -func TestCanonicalSynchronisation65Full(t *testing.T) { testCanonSync(t, 65, FullSync) } -func TestCanonicalSynchronisation65Fast(t *testing.T) { testCanonSync(t, 65, FastSync) } -func TestCanonicalSynchronisation65Light(t *testing.T) { testCanonSync(t, 65, LightSync) } - -func TestCanonicalSynchronisation66Full(t *testing.T) { testCanonSync(t, 66, FullSync) } -func TestCanonicalSynchronisation66Fast(t *testing.T) { testCanonSync(t, 66, FastSync) } -func TestCanonicalSynchronisation66Light(t *testing.T) { testCanonSync(t, 66, LightSync) } +func TestCanonicalSynchronisation66Full(t *testing.T) { testCanonSync(t, eth.ETH66, FullSync) } +func TestCanonicalSynchronisation66Fast(t *testing.T) { testCanonSync(t, eth.ETH66, FastSync) } +func TestCanonicalSynchronisation66Light(t *testing.T) { testCanonSync(t, eth.ETH66, LightSync) } func testCanonSync(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -549,14 +553,11 @@ func testCanonSync(t *testing.T, protocol uint, mode SyncMode) { // Tests that if a large batch of blocks are being downloaded, it is throttled // until the cached blocks are retrieved. 
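The renames that follow (and the rest of this file's churn) all instantiate one table-driven pattern: thin Test* wrappers pin a protocol constant and a sync mode onto a single shared helper. A minimal sketch of that pattern, with hypothetical names standing in for the real helpers:

package sketch

import "testing"

type syncMode int

const (
	fullSync syncMode = iota
	fastSync
	lightSync
)

// One exported wrapper per protocol/mode pair, mirroring the renames below.
func TestSketchSync65Full(t *testing.T) { testSketchSync(t, 65, fullSync) }
func TestSketchSync65Fast(t *testing.T) { testSketchSync(t, 65, fastSync) }
func TestSketchSync66Full(t *testing.T) { testSketchSync(t, 66, fullSync) }

// testSketchSync holds the shared logic once, parameterized by both knobs.
func testSketchSync(t *testing.T, protocol uint, mode syncMode) {
	t.Parallel()
	_ = protocol // a real helper would build a peer advertising this version
	_ = mode     // ...and drive a sync in this mode
}

Keeping each wrapper to one line is what makes retiring a protocol version a pure deletion, as the eth/64 removals throughout these hunks show.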
-func TestThrottling64Full(t *testing.T) { testThrottling(t, 64, FullSync) } -func TestThrottling64Fast(t *testing.T) { testThrottling(t, 64, FastSync) } - -func TestThrottling65Full(t *testing.T) { testThrottling(t, 65, FullSync) } -func TestThrottling65Fast(t *testing.T) { testThrottling(t, 65, FastSync) } +func TestThrottling65Full(t *testing.T) { testThrottling(t, eth.ETH65, FullSync) } +func TestThrottling65Fast(t *testing.T) { testThrottling(t, eth.ETH65, FastSync) } -func TestThrottling66Full(t *testing.T) { testThrottling(t, 66, FullSync) } -func TestThrottling66Fast(t *testing.T) { testThrottling(t, 66, FastSync) } +func TestThrottling66Full(t *testing.T) { testThrottling(t, eth.ETH66, FullSync) } +func TestThrottling66Fast(t *testing.T) { testThrottling(t, eth.ETH66, FastSync) } func testThrottling(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -637,16 +638,13 @@ func testThrottling(t *testing.T, protocol uint, mode SyncMode) { // Tests that simple synchronization against a forked chain works correctly. In // this test common ancestor lookup should *not* be short circuited, and a full // binary search should be executed. -func TestForkedSync64Full(t *testing.T) { testForkedSync(t, 64, FullSync) } -func TestForkedSync64Fast(t *testing.T) { testForkedSync(t, 64, FastSync) } +func TestForkedSync65Full(t *testing.T) { testForkedSync(t, eth.ETH65, FullSync) } +func TestForkedSync65Fast(t *testing.T) { testForkedSync(t, eth.ETH65, FastSync) } +func TestForkedSync65Light(t *testing.T) { testForkedSync(t, eth.ETH65, LightSync) } -func TestForkedSync65Full(t *testing.T) { testForkedSync(t, 65, FullSync) } -func TestForkedSync65Fast(t *testing.T) { testForkedSync(t, 65, FastSync) } -func TestForkedSync65Light(t *testing.T) { testForkedSync(t, 65, LightSync) } - -func TestForkedSync66Full(t *testing.T) { testForkedSync(t, 66, FullSync) } -func TestForkedSync66Fast(t *testing.T) { testForkedSync(t, 66, FastSync) } -func TestForkedSync66Light(t *testing.T) { testForkedSync(t, 66, LightSync) } +func TestForkedSync66Full(t *testing.T) { testForkedSync(t, eth.ETH66, FullSync) } +func TestForkedSync66Fast(t *testing.T) { testForkedSync(t, eth.ETH66, FastSync) } +func TestForkedSync66Light(t *testing.T) { testForkedSync(t, eth.ETH66, LightSync) } func testForkedSync(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -673,16 +671,13 @@ func testForkedSync(t *testing.T, protocol uint, mode SyncMode) { // Tests that synchronising against a much shorter but much heavier fork works // correctly and is not dropped.
-func TestHeavyForkedSync64Full(t *testing.T) { testHeavyForkedSync(t, 64, FullSync) } -func TestHeavyForkedSync64Fast(t *testing.T) { testHeavyForkedSync(t, 64, FastSync) } - -func TestHeavyForkedSync65Full(t *testing.T) { testHeavyForkedSync(t, 65, FullSync) } -func TestHeavyForkedSync65Fast(t *testing.T) { testHeavyForkedSync(t, 65, FastSync) } -func TestHeavyForkedSync65Light(t *testing.T) { testHeavyForkedSync(t, 65, LightSync) } +func TestHeavyForkedSync65Full(t *testing.T) { testHeavyForkedSync(t, eth.ETH65, FullSync) } +func TestHeavyForkedSync65Fast(t *testing.T) { testHeavyForkedSync(t, eth.ETH65, FastSync) } +func TestHeavyForkedSync65Light(t *testing.T) { testHeavyForkedSync(t, eth.ETH65, LightSync) } -func TestHeavyForkedSync66Full(t *testing.T) { testHeavyForkedSync(t, 66, FullSync) } -func TestHeavyForkedSync66Fast(t *testing.T) { testHeavyForkedSync(t, 66, FastSync) } -func TestHeavyForkedSync66Light(t *testing.T) { testHeavyForkedSync(t, 66, LightSync) } +func TestHeavyForkedSync66Full(t *testing.T) { testHeavyForkedSync(t, eth.ETH66, FullSync) } +func TestHeavyForkedSync66Fast(t *testing.T) { testHeavyForkedSync(t, eth.ETH66, FastSync) } +func TestHeavyForkedSync66Light(t *testing.T) { testHeavyForkedSync(t, eth.ETH66, LightSync) } func testHeavyForkedSync(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -711,16 +706,13 @@ func testHeavyForkedSync(t *testing.T, protocol uint, mode SyncMode) { // Tests that chain forks are contained within a certain interval of the current // chain head, ensuring that malicious peers cannot waste resources by feeding // long dead chains. -func TestBoundedForkedSync64Full(t *testing.T) { testBoundedForkedSync(t, 64, FullSync) } -func TestBoundedForkedSync64Fast(t *testing.T) { testBoundedForkedSync(t, 64, FastSync) } +func TestBoundedForkedSync65Full(t *testing.T) { testBoundedForkedSync(t, eth.ETH65, FullSync) } +func TestBoundedForkedSync65Fast(t *testing.T) { testBoundedForkedSync(t, eth.ETH65, FastSync) } +func TestBoundedForkedSync65Light(t *testing.T) { testBoundedForkedSync(t, eth.ETH65, LightSync) } -func TestBoundedForkedSync65Full(t *testing.T) { testBoundedForkedSync(t, 65, FullSync) } -func TestBoundedForkedSync65Fast(t *testing.T) { testBoundedForkedSync(t, 65, FastSync) } -func TestBoundedForkedSync65Light(t *testing.T) { testBoundedForkedSync(t, 65, LightSync) } - -func TestBoundedForkedSync66Full(t *testing.T) { testBoundedForkedSync(t, 66, FullSync) } -func TestBoundedForkedSync66Fast(t *testing.T) { testBoundedForkedSync(t, 66, FastSync) } -func TestBoundedForkedSync66Light(t *testing.T) { testBoundedForkedSync(t, 66, LightSync) } +func TestBoundedForkedSync66Full(t *testing.T) { testBoundedForkedSync(t, eth.ETH66, FullSync) } +func TestBoundedForkedSync66Fast(t *testing.T) { testBoundedForkedSync(t, eth.ETH66, FastSync) } +func TestBoundedForkedSync66Light(t *testing.T) { testBoundedForkedSync(t, eth.ETH66, LightSync) } func testBoundedForkedSync(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -748,16 +740,25 @@ func testBoundedForkedSync(t *testing.T, protocol uint, mode SyncMode) { // Tests that chain forks are contained within a certain interval of the current // chain head for short but heavy forks too. These are a bit special because they // take different ancestor lookup paths. 
-func TestBoundedHeavyForkedSync64Full(t *testing.T) { testBoundedHeavyForkedSync(t, 64, FullSync) } -func TestBoundedHeavyForkedSync64Fast(t *testing.T) { testBoundedHeavyForkedSync(t, 64, FastSync) } - -func TestBoundedHeavyForkedSync65Full(t *testing.T) { testBoundedHeavyForkedSync(t, 65, FullSync) } -func TestBoundedHeavyForkedSync65Fast(t *testing.T) { testBoundedHeavyForkedSync(t, 65, FastSync) } -func TestBoundedHeavyForkedSync65Light(t *testing.T) { testBoundedHeavyForkedSync(t, 65, LightSync) } +func TestBoundedHeavyForkedSync65Full(t *testing.T) { + testBoundedHeavyForkedSync(t, eth.ETH65, FullSync) +} +func TestBoundedHeavyForkedSync65Fast(t *testing.T) { + testBoundedHeavyForkedSync(t, eth.ETH65, FastSync) +} +func TestBoundedHeavyForkedSync65Light(t *testing.T) { + testBoundedHeavyForkedSync(t, eth.ETH65, LightSync) +} -func TestBoundedHeavyForkedSync66Full(t *testing.T) { testBoundedHeavyForkedSync(t, 66, FullSync) } -func TestBoundedHeavyForkedSync66Fast(t *testing.T) { testBoundedHeavyForkedSync(t, 66, FastSync) } -func TestBoundedHeavyForkedSync66Light(t *testing.T) { testBoundedHeavyForkedSync(t, 66, LightSync) } +func TestBoundedHeavyForkedSync66Full(t *testing.T) { + testBoundedHeavyForkedSync(t, eth.ETH66, FullSync) +} +func TestBoundedHeavyForkedSync66Fast(t *testing.T) { + testBoundedHeavyForkedSync(t, eth.ETH66, FastSync) +} +func TestBoundedHeavyForkedSync66Light(t *testing.T) { + testBoundedHeavyForkedSync(t, eth.ETH66, LightSync) +} func testBoundedHeavyForkedSync(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -803,16 +804,13 @@ func TestInactiveDownloader63(t *testing.T) { } // Tests that a canceled download wipes all previously accumulated state. -func TestCancel64Full(t *testing.T) { testCancel(t, 64, FullSync) } -func TestCancel64Fast(t *testing.T) { testCancel(t, 64, FastSync) } - -func TestCancel65Full(t *testing.T) { testCancel(t, 65, FullSync) } -func TestCancel65Fast(t *testing.T) { testCancel(t, 65, FastSync) } -func TestCancel65Light(t *testing.T) { testCancel(t, 65, LightSync) } +func TestCancel65Full(t *testing.T) { testCancel(t, eth.ETH65, FullSync) } +func TestCancel65Fast(t *testing.T) { testCancel(t, eth.ETH65, FastSync) } +func TestCancel65Light(t *testing.T) { testCancel(t, eth.ETH65, LightSync) } -func TestCancel66Full(t *testing.T) { testCancel(t, 66, FullSync) } -func TestCancel66Fast(t *testing.T) { testCancel(t, 66, FastSync) } -func TestCancel66Light(t *testing.T) { testCancel(t, 66, LightSync) } +func TestCancel66Full(t *testing.T) { testCancel(t, eth.ETH66, FullSync) } +func TestCancel66Fast(t *testing.T) { testCancel(t, eth.ETH66, FastSync) } +func TestCancel66Light(t *testing.T) { testCancel(t, eth.ETH66, LightSync) } func testCancel(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -839,16 +837,13 @@ func testCancel(t *testing.T, protocol uint, mode SyncMode) { } // Tests that synchronisation from multiple peers works as intended (multi thread sanity test). 
-func TestMultiSynchronisation64Full(t *testing.T) { testMultiSynchronisation(t, 64, FullSync) } -func TestMultiSynchronisation64Fast(t *testing.T) { testMultiSynchronisation(t, 64, FastSync) } +func TestMultiSynchronisation65Full(t *testing.T) { testMultiSynchronisation(t, eth.ETH65, FullSync) } +func TestMultiSynchronisation65Fast(t *testing.T) { testMultiSynchronisation(t, eth.ETH65, FastSync) } +func TestMultiSynchronisation65Light(t *testing.T) { testMultiSynchronisation(t, eth.ETH65, LightSync) } -func TestMultiSynchronisation65Full(t *testing.T) { testMultiSynchronisation(t, 65, FullSync) } -func TestMultiSynchronisation65Fast(t *testing.T) { testMultiSynchronisation(t, 65, FastSync) } -func TestMultiSynchronisation65Light(t *testing.T) { testMultiSynchronisation(t, 65, LightSync) } - -func TestMultiSynchronisation66Full(t *testing.T) { testMultiSynchronisation(t, 66, FullSync) } -func TestMultiSynchronisation66Fast(t *testing.T) { testMultiSynchronisation(t, 66, FastSync) } -func TestMultiSynchronisation66Light(t *testing.T) { testMultiSynchronisation(t, 66, LightSync) } +func TestMultiSynchronisation66Full(t *testing.T) { testMultiSynchronisation(t, eth.ETH66, FullSync) } +func TestMultiSynchronisation66Fast(t *testing.T) { testMultiSynchronisation(t, eth.ETH66, FastSync) } +func TestMultiSynchronisation66Light(t *testing.T) { testMultiSynchronisation(t, eth.ETH66, LightSync) } func testMultiSynchronisation(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -872,16 +867,13 @@ func testMultiSynchronisation(t *testing.T, protocol uint, mode SyncMode) { // Tests that synchronisations behave well in multi-version protocol environments // and not wreak havoc on other nodes in the network. -func TestMultiProtoSynchronisation64Full(t *testing.T) { testMultiProtoSync(t, 64, FullSync) } -func TestMultiProtoSynchronisation64Fast(t *testing.T) { testMultiProtoSync(t, 64, FastSync) } - -func TestMultiProtoSynchronisation65Full(t *testing.T) { testMultiProtoSync(t, 65, FullSync) } -func TestMultiProtoSynchronisation65Fast(t *testing.T) { testMultiProtoSync(t, 65, FastSync) } -func TestMultiProtoSynchronisation65Light(t *testing.T) { testMultiProtoSync(t, 65, LightSync) } +func TestMultiProtoSynchronisation65Full(t *testing.T) { testMultiProtoSync(t, eth.ETH65, FullSync) } +func TestMultiProtoSynchronisation65Fast(t *testing.T) { testMultiProtoSync(t, eth.ETH65, FastSync) } +func TestMultiProtoSynchronisation65Light(t *testing.T) { testMultiProtoSync(t, eth.ETH65, LightSync) } -func TestMultiProtoSynchronisation66Full(t *testing.T) { testMultiProtoSync(t, 66, FullSync) } -func TestMultiProtoSynchronisation66Fast(t *testing.T) { testMultiProtoSync(t, 66, FastSync) } -func TestMultiProtoSynchronisation66Light(t *testing.T) { testMultiProtoSync(t, 66, LightSync) } +func TestMultiProtoSynchronisation66Full(t *testing.T) { testMultiProtoSync(t, eth.ETH66, FullSync) } +func TestMultiProtoSynchronisation66Fast(t *testing.T) { testMultiProtoSync(t, eth.ETH66, FastSync) } +func TestMultiProtoSynchronisation66Light(t *testing.T) { testMultiProtoSync(t, eth.ETH66, LightSync) } func testMultiProtoSync(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -893,9 +885,8 @@ func testMultiProtoSync(t *testing.T, protocol uint, mode SyncMode) { chain := testChainBase.shorten(blockCacheMaxItems - 15) // Create peers of every type - tester.newPeer("peer 64", 64, chain) - tester.newPeer("peer 65", 65, chain) - tester.newPeer("peer 66", 66, chain) + tester.newPeer("peer 65", eth.ETH65, chain) + 
tester.newPeer("peer 66", eth.ETH66, chain) // Synchronise with the requested peer and make sure all blocks were retrieved if err := tester.sync(fmt.Sprintf("peer %d", protocol), nil, mode); err != nil { @@ -904,7 +895,7 @@ func testMultiProtoSync(t *testing.T, protocol uint, mode SyncMode) { assertOwnChain(t, tester, chain.len()) // Check that no peers have been dropped off - for _, version := range []int{64, 65, 66} { + for _, version := range []int{65, 66} { peer := fmt.Sprintf("peer %d", version) if _, ok := tester.peers[peer]; !ok { t.Errorf("%s dropped", peer) @@ -914,16 +905,13 @@ func testMultiProtoSync(t *testing.T, protocol uint, mode SyncMode) { // Tests that if a block is empty (e.g. header only), no body request should be // made, and instead the header should be assembled into a whole block in itself. -func TestEmptyShortCircuit64Full(t *testing.T) { testEmptyShortCircuit(t, 64, FullSync) } -func TestEmptyShortCircuit64Fast(t *testing.T) { testEmptyShortCircuit(t, 64, FastSync) } +func TestEmptyShortCircuit65Full(t *testing.T) { testEmptyShortCircuit(t, eth.ETH65, FullSync) } +func TestEmptyShortCircuit65Fast(t *testing.T) { testEmptyShortCircuit(t, eth.ETH65, FastSync) } +func TestEmptyShortCircuit65Light(t *testing.T) { testEmptyShortCircuit(t, eth.ETH65, LightSync) } -func TestEmptyShortCircuit65Full(t *testing.T) { testEmptyShortCircuit(t, 65, FullSync) } -func TestEmptyShortCircuit65Fast(t *testing.T) { testEmptyShortCircuit(t, 65, FastSync) } -func TestEmptyShortCircuit65Light(t *testing.T) { testEmptyShortCircuit(t, 65, LightSync) } - -func TestEmptyShortCircuit66Full(t *testing.T) { testEmptyShortCircuit(t, 66, FullSync) } -func TestEmptyShortCircuit66Fast(t *testing.T) { testEmptyShortCircuit(t, 66, FastSync) } -func TestEmptyShortCircuit66Light(t *testing.T) { testEmptyShortCircuit(t, 66, LightSync) } +func TestEmptyShortCircuit66Full(t *testing.T) { testEmptyShortCircuit(t, eth.ETH66, FullSync) } +func TestEmptyShortCircuit66Fast(t *testing.T) { testEmptyShortCircuit(t, eth.ETH66, FastSync) } +func TestEmptyShortCircuit66Light(t *testing.T) { testEmptyShortCircuit(t, eth.ETH66, LightSync) } func testEmptyShortCircuit(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -971,16 +959,13 @@ func testEmptyShortCircuit(t *testing.T, protocol uint, mode SyncMode) { // Tests that headers are enqueued continuously, preventing malicious nodes from // stalling the downloader by feeding gapped header chains. 
-func TestMissingHeaderAttack64Full(t *testing.T) { testMissingHeaderAttack(t, 64, FullSync) } -func TestMissingHeaderAttack64Fast(t *testing.T) { testMissingHeaderAttack(t, 64, FastSync) } - -func TestMissingHeaderAttack65Full(t *testing.T) { testMissingHeaderAttack(t, 65, FullSync) } -func TestMissingHeaderAttack65Fast(t *testing.T) { testMissingHeaderAttack(t, 65, FastSync) } -func TestMissingHeaderAttack65Light(t *testing.T) { testMissingHeaderAttack(t, 65, LightSync) } +func TestMissingHeaderAttack65Full(t *testing.T) { testMissingHeaderAttack(t, eth.ETH65, FullSync) } +func TestMissingHeaderAttack65Fast(t *testing.T) { testMissingHeaderAttack(t, eth.ETH65, FastSync) } +func TestMissingHeaderAttack65Light(t *testing.T) { testMissingHeaderAttack(t, eth.ETH65, LightSync) } -func TestMissingHeaderAttack66Full(t *testing.T) { testMissingHeaderAttack(t, 66, FullSync) } -func TestMissingHeaderAttack66Fast(t *testing.T) { testMissingHeaderAttack(t, 66, FastSync) } -func TestMissingHeaderAttack66Light(t *testing.T) { testMissingHeaderAttack(t, 66, LightSync) } +func TestMissingHeaderAttack66Full(t *testing.T) { testMissingHeaderAttack(t, eth.ETH66, FullSync) } +func TestMissingHeaderAttack66Fast(t *testing.T) { testMissingHeaderAttack(t, eth.ETH66, FastSync) } +func TestMissingHeaderAttack66Light(t *testing.T) { testMissingHeaderAttack(t, eth.ETH66, LightSync) } func testMissingHeaderAttack(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1006,16 +991,13 @@ func testMissingHeaderAttack(t *testing.T, protocol uint, mode SyncMode) { // Tests that if requested headers are shifted (i.e. first is missing), the queue // detects the invalid numbering. -func TestShiftedHeaderAttack64Full(t *testing.T) { testShiftedHeaderAttack(t, 64, FullSync) } -func TestShiftedHeaderAttack64Fast(t *testing.T) { testShiftedHeaderAttack(t, 64, FastSync) } - -func TestShiftedHeaderAttack65Full(t *testing.T) { testShiftedHeaderAttack(t, 65, FullSync) } -func TestShiftedHeaderAttack65Fast(t *testing.T) { testShiftedHeaderAttack(t, 65, FastSync) } -func TestShiftedHeaderAttack65Light(t *testing.T) { testShiftedHeaderAttack(t, 65, LightSync) } +func TestShiftedHeaderAttack65Full(t *testing.T) { testShiftedHeaderAttack(t, eth.ETH65, FullSync) } +func TestShiftedHeaderAttack65Fast(t *testing.T) { testShiftedHeaderAttack(t, eth.ETH65, FastSync) } +func TestShiftedHeaderAttack65Light(t *testing.T) { testShiftedHeaderAttack(t, eth.ETH65, LightSync) } -func TestShiftedHeaderAttack66Full(t *testing.T) { testShiftedHeaderAttack(t, 66, FullSync) } -func TestShiftedHeaderAttack66Fast(t *testing.T) { testShiftedHeaderAttack(t, 66, FastSync) } -func TestShiftedHeaderAttack66Light(t *testing.T) { testShiftedHeaderAttack(t, 66, LightSync) } +func TestShiftedHeaderAttack66Full(t *testing.T) { testShiftedHeaderAttack(t, eth.ETH66, FullSync) } +func TestShiftedHeaderAttack66Fast(t *testing.T) { testShiftedHeaderAttack(t, eth.ETH66, FastSync) } +func TestShiftedHeaderAttack66Light(t *testing.T) { testShiftedHeaderAttack(t, eth.ETH66, LightSync) } func testShiftedHeaderAttack(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1046,9 +1028,8 @@ func testShiftedHeaderAttack(t *testing.T, protocol uint, mode SyncMode) { // Tests that upon detecting an invalid header, the recent ones are rolled back // for various failure scenarios. Afterwards a full sync is attempted to make // sure no state was corrupted. 
-func TestInvalidHeaderRollback64Fast(t *testing.T) { testInvalidHeaderRollback(t, 64, FastSync) } -func TestInvalidHeaderRollback65Fast(t *testing.T) { testInvalidHeaderRollback(t, 65, FastSync) } -func TestInvalidHeaderRollback66Fast(t *testing.T) { testInvalidHeaderRollback(t, 66, FastSync) } +func TestInvalidHeaderRollback65Fast(t *testing.T) { testInvalidHeaderRollback(t, eth.ETH65, FastSync) } +func TestInvalidHeaderRollback66Fast(t *testing.T) { testInvalidHeaderRollback(t, eth.ETH66, FastSync) } func testInvalidHeaderRollback(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1138,16 +1119,25 @@ func testInvalidHeaderRollback(t *testing.T, protocol uint, mode SyncMode) { // Tests that a peer advertising a high TD doesn't get to stall the downloader // afterwards by not sending any useful hashes. -func TestHighTDStarvationAttack64Full(t *testing.T) { testHighTDStarvationAttack(t, 64, FullSync) } -func TestHighTDStarvationAttack64Fast(t *testing.T) { testHighTDStarvationAttack(t, 64, FastSync) } - -func TestHighTDStarvationAttack65Full(t *testing.T) { testHighTDStarvationAttack(t, 65, FullSync) } -func TestHighTDStarvationAttack65Fast(t *testing.T) { testHighTDStarvationAttack(t, 65, FastSync) } -func TestHighTDStarvationAttack65Light(t *testing.T) { testHighTDStarvationAttack(t, 65, LightSync) } +func TestHighTDStarvationAttack65Full(t *testing.T) { + testHighTDStarvationAttack(t, eth.ETH65, FullSync) +} +func TestHighTDStarvationAttack65Fast(t *testing.T) { + testHighTDStarvationAttack(t, eth.ETH65, FastSync) +} +func TestHighTDStarvationAttack65Light(t *testing.T) { + testHighTDStarvationAttack(t, eth.ETH65, LightSync) +} -func TestHighTDStarvationAttack66Full(t *testing.T) { testHighTDStarvationAttack(t, 66, FullSync) } -func TestHighTDStarvationAttack66Fast(t *testing.T) { testHighTDStarvationAttack(t, 66, FastSync) } -func TestHighTDStarvationAttack66Light(t *testing.T) { testHighTDStarvationAttack(t, 66, LightSync) } +func TestHighTDStarvationAttack66Full(t *testing.T) { + testHighTDStarvationAttack(t, eth.ETH66, FullSync) +} +func TestHighTDStarvationAttack66Fast(t *testing.T) { + testHighTDStarvationAttack(t, eth.ETH66, FastSync) +} +func TestHighTDStarvationAttack66Light(t *testing.T) { + testHighTDStarvationAttack(t, eth.ETH66, LightSync) +} func testHighTDStarvationAttack(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1163,9 +1153,8 @@ func testHighTDStarvationAttack(t *testing.T, protocol uint, mode SyncMode) { } // Tests that misbehaving peers are disconnected, whilst behaving ones are not. -func TestBlockHeaderAttackerDropping64(t *testing.T) { testBlockHeaderAttackerDropping(t, 64) } -func TestBlockHeaderAttackerDropping65(t *testing.T) { testBlockHeaderAttackerDropping(t, 65) } -func TestBlockHeaderAttackerDropping66(t *testing.T) { testBlockHeaderAttackerDropping(t, 66) } +func TestBlockHeaderAttackerDropping65(t *testing.T) { testBlockHeaderAttackerDropping(t, eth.ETH65) } +func TestBlockHeaderAttackerDropping66(t *testing.T) { testBlockHeaderAttackerDropping(t, eth.ETH66) } func testBlockHeaderAttackerDropping(t *testing.T, protocol uint) { t.Parallel() @@ -1217,16 +1206,13 @@ func testBlockHeaderAttackerDropping(t *testing.T, protocol uint) { // Tests that synchronisation progress (origin block number, current block number // and highest block number) is tracked and updated correctly. 
-func TestSyncProgress64Full(t *testing.T) { testSyncProgress(t, 64, FullSync) } -func TestSyncProgress64Fast(t *testing.T) { testSyncProgress(t, 64, FastSync) } - -func TestSyncProgress65Full(t *testing.T) { testSyncProgress(t, 65, FullSync) } -func TestSyncProgress65Fast(t *testing.T) { testSyncProgress(t, 65, FastSync) } -func TestSyncProgress65Light(t *testing.T) { testSyncProgress(t, 65, LightSync) } +func TestSyncProgress65Full(t *testing.T) { testSyncProgress(t, eth.ETH65, FullSync) } +func TestSyncProgress65Fast(t *testing.T) { testSyncProgress(t, eth.ETH65, FastSync) } +func TestSyncProgress65Light(t *testing.T) { testSyncProgress(t, eth.ETH65, LightSync) } -func TestSyncProgress66Full(t *testing.T) { testSyncProgress(t, 66, FullSync) } -func TestSyncProgress66Fast(t *testing.T) { testSyncProgress(t, 66, FastSync) } -func TestSyncProgress66Light(t *testing.T) { testSyncProgress(t, 66, LightSync) } +func TestSyncProgress66Full(t *testing.T) { testSyncProgress(t, eth.ETH66, FullSync) } +func TestSyncProgress66Fast(t *testing.T) { testSyncProgress(t, eth.ETH66, FastSync) } +func TestSyncProgress66Light(t *testing.T) { testSyncProgress(t, eth.ETH66, LightSync) } func testSyncProgress(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1304,16 +1290,13 @@ func checkProgress(t *testing.T, d *Downloader, stage string, want ethereum.Sync // Tests that synchronisation progress (origin block number and highest block // number) is tracked and updated correctly in case of a fork (or manual head // reversion). -func TestForkedSyncProgress64Full(t *testing.T) { testForkedSyncProgress(t, 64, FullSync) } -func TestForkedSyncProgress64Fast(t *testing.T) { testForkedSyncProgress(t, 64, FastSync) } - -func TestForkedSyncProgress65Full(t *testing.T) { testForkedSyncProgress(t, 65, FullSync) } -func TestForkedSyncProgress65Fast(t *testing.T) { testForkedSyncProgress(t, 65, FastSync) } -func TestForkedSyncProgress65Light(t *testing.T) { testForkedSyncProgress(t, 65, LightSync) } +func TestForkedSyncProgress65Full(t *testing.T) { testForkedSyncProgress(t, eth.ETH65, FullSync) } +func TestForkedSyncProgress65Fast(t *testing.T) { testForkedSyncProgress(t, eth.ETH65, FastSync) } +func TestForkedSyncProgress65Light(t *testing.T) { testForkedSyncProgress(t, eth.ETH65, LightSync) } -func TestForkedSyncProgress66Full(t *testing.T) { testForkedSyncProgress(t, 66, FullSync) } -func TestForkedSyncProgress66Fast(t *testing.T) { testForkedSyncProgress(t, 66, FastSync) } -func TestForkedSyncProgress66Light(t *testing.T) { testForkedSyncProgress(t, 66, LightSync) } +func TestForkedSyncProgress66Full(t *testing.T) { testForkedSyncProgress(t, eth.ETH66, FullSync) } +func TestForkedSyncProgress66Fast(t *testing.T) { testForkedSyncProgress(t, eth.ETH66, FastSync) } +func TestForkedSyncProgress66Light(t *testing.T) { testForkedSyncProgress(t, eth.ETH66, LightSync) } func testForkedSyncProgress(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1383,16 +1366,13 @@ func testForkedSyncProgress(t *testing.T, protocol uint, mode SyncMode) { // Tests that if synchronisation is aborted due to some failure, then the progress // origin is not updated in the next sync cycle, as it should be considered the // continuation of the previous sync and not a new instance.
-func TestFailedSyncProgress64Full(t *testing.T) { testFailedSyncProgress(t, 64, FullSync) } -func TestFailedSyncProgress64Fast(t *testing.T) { testFailedSyncProgress(t, 64, FastSync) } +func TestFailedSyncProgress65Full(t *testing.T) { testFailedSyncProgress(t, eth.ETH65, FullSync) } +func TestFailedSyncProgress65Fast(t *testing.T) { testFailedSyncProgress(t, eth.ETH65, FastSync) } +func TestFailedSyncProgress65Light(t *testing.T) { testFailedSyncProgress(t, eth.ETH65, LightSync) } -func TestFailedSyncProgress65Full(t *testing.T) { testFailedSyncProgress(t, 65, FullSync) } -func TestFailedSyncProgress65Fast(t *testing.T) { testFailedSyncProgress(t, 65, FastSync) } -func TestFailedSyncProgress65Light(t *testing.T) { testFailedSyncProgress(t, 65, LightSync) } - -func TestFailedSyncProgress66Full(t *testing.T) { testFailedSyncProgress(t, 66, FullSync) } -func TestFailedSyncProgress66Fast(t *testing.T) { testFailedSyncProgress(t, 66, FastSync) } -func TestFailedSyncProgress66Light(t *testing.T) { testFailedSyncProgress(t, 66, LightSync) } +func TestFailedSyncProgress66Full(t *testing.T) { testFailedSyncProgress(t, eth.ETH66, FullSync) } +func TestFailedSyncProgress66Fast(t *testing.T) { testFailedSyncProgress(t, eth.ETH66, FastSync) } +func TestFailedSyncProgress66Light(t *testing.T) { testFailedSyncProgress(t, eth.ETH66, LightSync) } func testFailedSyncProgress(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1459,16 +1439,13 @@ func testFailedSyncProgress(t *testing.T, protocol uint, mode SyncMode) { // Tests that if an attacker fakes a chain height, after the attack is detected, // the progress height is successfully reduced at the next sync invocation. -func TestFakedSyncProgress64Full(t *testing.T) { testFakedSyncProgress(t, 64, FullSync) } -func TestFakedSyncProgress64Fast(t *testing.T) { testFakedSyncProgress(t, 64, FastSync) } - -func TestFakedSyncProgress65Full(t *testing.T) { testFakedSyncProgress(t, 65, FullSync) } -func TestFakedSyncProgress65Fast(t *testing.T) { testFakedSyncProgress(t, 65, FastSync) } -func TestFakedSyncProgress65Light(t *testing.T) { testFakedSyncProgress(t, 65, LightSync) } +func TestFakedSyncProgress65Full(t *testing.T) { testFakedSyncProgress(t, eth.ETH65, FullSync) } +func TestFakedSyncProgress65Fast(t *testing.T) { testFakedSyncProgress(t, eth.ETH65, FastSync) } +func TestFakedSyncProgress65Light(t *testing.T) { testFakedSyncProgress(t, eth.ETH65, LightSync) } -func TestFakedSyncProgress66Full(t *testing.T) { testFakedSyncProgress(t, 66, FullSync) } -func TestFakedSyncProgress66Fast(t *testing.T) { testFakedSyncProgress(t, 66, FastSync) } -func TestFakedSyncProgress66Light(t *testing.T) { testFakedSyncProgress(t, 66, LightSync) } +func TestFakedSyncProgress66Full(t *testing.T) { testFakedSyncProgress(t, eth.ETH66, FullSync) } +func TestFakedSyncProgress66Fast(t *testing.T) { testFakedSyncProgress(t, eth.ETH66, FastSync) } +func TestFakedSyncProgress66Light(t *testing.T) { testFakedSyncProgress(t, eth.ETH66, LightSync) } func testFakedSyncProgress(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1539,16 +1516,13 @@ func testFakedSyncProgress(t *testing.T, protocol uint, mode SyncMode) { // This test reproduces an issue where unexpected deliveries would // block indefinitely if they arrived at the right time. 
-func TestDeliverHeadersHang64Full(t *testing.T) { testDeliverHeadersHang(t, 64, FullSync) } -func TestDeliverHeadersHang64Fast(t *testing.T) { testDeliverHeadersHang(t, 64, FastSync) } +func TestDeliverHeadersHang65Full(t *testing.T) { testDeliverHeadersHang(t, eth.ETH65, FullSync) } +func TestDeliverHeadersHang65Fast(t *testing.T) { testDeliverHeadersHang(t, eth.ETH65, FastSync) } +func TestDeliverHeadersHang65Light(t *testing.T) { testDeliverHeadersHang(t, eth.ETH65, LightSync) } -func TestDeliverHeadersHang65Full(t *testing.T) { testDeliverHeadersHang(t, 65, FullSync) } -func TestDeliverHeadersHang65Fast(t *testing.T) { testDeliverHeadersHang(t, 65, FastSync) } -func TestDeliverHeadersHang65Light(t *testing.T) { testDeliverHeadersHang(t, 65, LightSync) } - -func TestDeliverHeadersHang66Full(t *testing.T) { testDeliverHeadersHang(t, 66, FullSync) } -func TestDeliverHeadersHang66Fast(t *testing.T) { testDeliverHeadersHang(t, 66, FastSync) } -func TestDeliverHeadersHang66Light(t *testing.T) { testDeliverHeadersHang(t, 66, LightSync) } +func TestDeliverHeadersHang66Full(t *testing.T) { testDeliverHeadersHang(t, eth.ETH66, FullSync) } +func TestDeliverHeadersHang66Fast(t *testing.T) { testDeliverHeadersHang(t, eth.ETH66, FastSync) } +func TestDeliverHeadersHang66Light(t *testing.T) { testDeliverHeadersHang(t, eth.ETH66, LightSync) } func testDeliverHeadersHang(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() @@ -1703,16 +1677,17 @@ func TestRemoteHeaderRequestSpan(t *testing.T) { // Tests that peers below a pre-configured checkpoint block are prevented from // being fast-synced from, avoiding potential cheap eclipse attacks. -func TestCheckpointEnforcement64Full(t *testing.T) { testCheckpointEnforcement(t, 64, FullSync) } -func TestCheckpointEnforcement64Fast(t *testing.T) { testCheckpointEnforcement(t, 64, FastSync) } - -func TestCheckpointEnforcement65Full(t *testing.T) { testCheckpointEnforcement(t, 65, FullSync) } -func TestCheckpointEnforcement65Fast(t *testing.T) { testCheckpointEnforcement(t, 65, FastSync) } -func TestCheckpointEnforcement65Light(t *testing.T) { testCheckpointEnforcement(t, 65, LightSync) } +func TestCheckpointEnforcement65Full(t *testing.T) { testCheckpointEnforcement(t, eth.ETH65, FullSync) } +func TestCheckpointEnforcement65Fast(t *testing.T) { testCheckpointEnforcement(t, eth.ETH65, FastSync) } +func TestCheckpointEnforcement65Light(t *testing.T) { + testCheckpointEnforcement(t, eth.ETH65, LightSync) +} -func TestCheckpointEnforcement66Full(t *testing.T) { testCheckpointEnforcement(t, 66, FullSync) } -func TestCheckpointEnforcement66Fast(t *testing.T) { testCheckpointEnforcement(t, 66, FastSync) } -func TestCheckpointEnforcement66Light(t *testing.T) { testCheckpointEnforcement(t, 66, LightSync) } +func TestCheckpointEnforcement66Full(t *testing.T) { testCheckpointEnforcement(t, eth.ETH66, FullSync) } +func TestCheckpointEnforcement66Fast(t *testing.T) { testCheckpointEnforcement(t, eth.ETH66, FastSync) } +func TestCheckpointEnforcement66Light(t *testing.T) { + testCheckpointEnforcement(t, eth.ETH66, LightSync) +} func testCheckpointEnforcement(t *testing.T, protocol uint, mode SyncMode) { t.Parallel() diff --git a/eth/downloader/peer.go b/eth/downloader/peer.go index 5f19ad40ce..0a1f57684a 100644 --- a/eth/downloader/peer.go +++ b/eth/downloader/peer.go @@ -459,7 +459,7 @@ func (ps *peerSet) HeaderIdlePeers() ([]*peerConnection, int) { defer p.lock.RUnlock() return p.headerThroughput } - return ps.idlePeers(eth.ETH64, eth.ETH66, idle, throughput) + 
return ps.idlePeers(eth.ETH65, eth.ETH66, idle, throughput) } // BodyIdlePeers retrieves a flat list of all the currently body-idle peers within @@ -473,7 +473,7 @@ func (ps *peerSet) BodyIdlePeers() ([]*peerConnection, int) { defer p.lock.RUnlock() return p.blockThroughput } - return ps.idlePeers(eth.ETH64, eth.ETH66, idle, throughput) + return ps.idlePeers(eth.ETH65, eth.ETH66, idle, throughput) } // ReceiptIdlePeers retrieves a flat list of all the currently receipt-idle peers @@ -487,7 +487,7 @@ func (ps *peerSet) ReceiptIdlePeers() ([]*peerConnection, int) { defer p.lock.RUnlock() return p.receiptThroughput } - return ps.idlePeers(eth.ETH64, eth.ETH66, idle, throughput) + return ps.idlePeers(eth.ETH65, eth.ETH66, idle, throughput) } // NodeDataIdlePeers retrieves a flat list of all the currently node-data-idle @@ -501,7 +501,7 @@ func (ps *peerSet) NodeDataIdlePeers() ([]*peerConnection, int) { defer p.lock.RUnlock() return p.stateThroughput } - return ps.idlePeers(eth.ETH64, eth.ETH66, idle, throughput) + return ps.idlePeers(eth.ETH65, eth.ETH66, idle, throughput) } // idlePeers retrieves a flat list of all currently idle peers satisfying the diff --git a/eth/downloader/statesync.go b/eth/downloader/statesync.go index 6231588ad2..ff84a3a8f0 100644 --- a/eth/downloader/statesync.go +++ b/eth/downloader/statesync.go @@ -298,7 +298,7 @@ func newStateSync(d *Downloader, root common.Hash) *stateSync { return &stateSync{ d: d, root: root, - sched: state.NewStateSync(root, d.stateDB, d.stateBloom), + sched: state.NewStateSync(root, d.stateDB, d.stateBloom, nil), keccak: sha3.NewLegacyKeccak256().(crypto.KeccakState), trieTasks: make(map[common.Hash]*trieTask), codeTasks: make(map[common.Hash]*codeTask), diff --git a/eth/fetcher/block_fetcher.go b/eth/fetcher/block_fetcher.go index 5ea8a128d9..3177a877ed 100644 --- a/eth/fetcher/block_fetcher.go +++ b/eth/fetcher/block_fetcher.go @@ -331,8 +331,12 @@ func (f *BlockFetcher) FilterBodies(peer string, transactions [][]*types.Transac // events. func (f *BlockFetcher) loop() { // Iterate the block fetching until a quit is requested - fetchTimer := time.NewTimer(0) - completeTimer := time.NewTimer(0) + var ( + fetchTimer = time.NewTimer(0) + completeTimer = time.NewTimer(0) + ) + <-fetchTimer.C // clear out the channel + <-completeTimer.C defer fetchTimer.Stop() defer completeTimer.Stop() diff --git a/eth/filters/api.go b/eth/filters/api.go index 2722b43f25..c464797765 100644 --- a/eth/filters/api.go +++ b/eth/filters/api.go @@ -72,8 +72,8 @@ func NewPublicFilterAPI(backend Backend, lightMode bool, timeout time.Duration) return api } -// timeoutLoop runs every 5 minutes and deletes filters that have not been recently used. -// Tt is started when the api is created. +// timeoutLoop runs at the interval set by 'timeout' and deletes filters +// that have not been recently used. It is started when the API is created. func (api *PublicFilterAPI) timeoutLoop(timeout time.Duration) { var toUninstall []*Subscription ticker := time.NewTicker(timeout) @@ -304,7 +304,7 @@ func (api *PublicFilterAPI) NewFilter(ctx context.Context, crit FilterCriteria) logs := make(chan []*types.Log) psm, err := api.backend.PSMR().ResolveForUserContext(ctx) if err != nil { - return rpc.ID(""), err + return "", err } crit.PSI = psm.ID logsSub, err := api.events.SubscribeLogs(ethereum.FilterQuery(crit), logs) @@ -443,7 +443,7 @@ func (api *PublicFilterAPI) GetFilterLogs(ctx context.Context, id rpc.ID) ([]*ty // (pending)Log filters return []Log. 
// // https://eth.wiki/json-rpc/API#eth_getfilterchanges -func (api *PublicFilterAPI) GetFilterChanges(ctx context.Context, id rpc.ID) (interface{}, error) { +func (api *PublicFilterAPI) GetFilterChanges(id rpc.ID) (interface{}, error) { api.filtersMu.Lock() defer api.filtersMu.Unlock() diff --git a/eth/filters/filter_system_test.go b/eth/filters/filter_system_test.go index 0248be0c55..17e0a666b7 100644 --- a/eth/filters/filter_system_test.go +++ b/eth/filters/filter_system_test.go @@ -268,7 +268,7 @@ func TestPendingTxFilter(t *testing.T) { timeout := time.Now().Add(1 * time.Second) for { - results, err := api.GetFilterChanges(context.Background(), fid0) + results, err := api.GetFilterChanges(fid0) if err != nil { t.Fatalf("Unable to retrieve logs: %v", err) } @@ -468,7 +468,7 @@ func TestLogFilter(t *testing.T) { var fetched []*types.Log timeout := time.Now().Add(1 * time.Second) for { // fetch all expected logs - results, err := api.GetFilterChanges(context.Background(), tt.id) + results, err := api.GetFilterChanges(tt.id) if err != nil { t.Fatalf("Unable to fetch logs: %v", err) } @@ -646,7 +646,6 @@ func TestPendingTxFilterDeadlock(t *testing.T) { backend = &testBackend{db: db} api = NewPublicFilterAPI(backend, false, timeout) done = make(chan struct{}) - ctx = context.TODO() ) go func() { @@ -673,7 +672,7 @@ func TestPendingTxFilterDeadlock(t *testing.T) { fids[i] = fid // Wait for at least one tx to arrive in filter for { - hashes, err := api.GetFilterChanges(ctx, fid) + hashes, err := api.GetFilterChanges(fid) if err != nil { t.Fatalf("Filter should exist: %v\n", err) } @@ -693,7 +692,7 @@ func TestPendingTxFilterDeadlock(t *testing.T) { case done <- struct{}{}: // Check that all filters have been uninstalled for _, fid := range fids { - if _, err := api.GetFilterChanges(ctx, fid); err == nil { + if _, err := api.GetFilterChanges(fid); err == nil { t.Errorf("Filter %s should have been uninstalled\n", fid) } } diff --git a/eth/gasprice/gasprice.go b/eth/gasprice/gasprice.go index 5d8be08e0b..560722bec0 100644 --- a/eth/gasprice/gasprice.go +++ b/eth/gasprice/gasprice.go @@ -199,6 +199,9 @@ func (gpo *Oracle) getBlockPrices(ctx context.Context, signer types.Signer, bloc var prices []*big.Int for _, tx := range txs { + if tx.GasPriceIntCmp(common.Big1) <= 0 { + continue + } sender, err := types.Sender(signer, tx) if err == nil && sender != block.Coinbase() { prices = append(prices, tx.GasPrice()) diff --git a/eth/handler.go b/eth/handler.go index 97d9c0e345..970fe4e425 100644 --- a/eth/handler.go +++ b/eth/handler.go @@ -599,14 +599,15 @@ func (h *handler) BroadcastTransactions(txs types.Transactions) { directCount += len(hashes) peer.AsyncSendTransactions(hashes) } + for peer, hashes := range txset { + directPeers++ + directCount += len(hashes) + peer.AsyncSendTransactions(hashes) + } for peer, hashes := range annos { annoPeers++ annoCount += len(hashes) - if peer.Version() >= eth.ETH65 { - peer.AsyncSendPooledTransactionHashes(hashes) - } else { - peer.AsyncSendTransactions(hashes) - } + peer.AsyncSendPooledTransactionHashes(hashes) } log.Debug("Transaction broadcast", "txs", len(txs), "announce packs", annoPeers, "announced hashes", annoCount, diff --git a/eth/handler_eth_test.go b/eth/handler_eth_test.go index 370a68d7c9..35c12a7e93 100644 --- a/eth/handler_eth_test.go +++ b/eth/handler_eth_test.go @@ -81,8 +81,8 @@ func (h *testEthHandler) Handle(peer *eth.Peer, packet eth.Packet) error { // Tests that peers are correctly accepted (or rejected) based on the advertised // fork IDs 
in the protocol handshake. -func TestForkIDSplit64(t *testing.T) { testForkIDSplit(t, 64) } -func TestForkIDSplit65(t *testing.T) { testForkIDSplit(t, 65) } +func TestForkIDSplit65(t *testing.T) { testForkIDSplit(t, eth.ETH65) } +func TestForkIDSplit66(t *testing.T) { testForkIDSplit(t, eth.ETH66) } func testForkIDSplit(t *testing.T, protocol uint) { t.Parallel() @@ -239,8 +239,8 @@ func testForkIDSplit(t *testing.T, protocol uint) { } // Tests that received transactions are added to the local pool. -func TestRecvTransactions64(t *testing.T) { testRecvTransactions(t, 64) } -func TestRecvTransactions65(t *testing.T) { testRecvTransactions(t, 65) } +func TestRecvTransactions65(t *testing.T) { testRecvTransactions(t, eth.ETH65) } +func TestRecvTransactions66(t *testing.T) { testRecvTransactions(t, eth.ETH66) } func testRecvTransactions(t *testing.T, protocol uint) { t.Parallel() @@ -297,8 +297,8 @@ func testRecvTransactions(t *testing.T, protocol uint) { } // This test checks that pending transactions are sent. -func TestSendTransactions64(t *testing.T) { testSendTransactions(t, 64) } -func TestSendTransactions65(t *testing.T) { testSendTransactions(t, 65) } +func TestSendTransactions65(t *testing.T) { testSendTransactions(t, eth.ETH65) } +func TestSendTransactions66(t *testing.T) { testSendTransactions(t, eth.ETH66) } func testSendTransactions(t *testing.T, protocol uint) { t.Parallel() @@ -357,19 +357,7 @@ func testSendTransactions(t *testing.T, protocol uint) { seen := make(map[common.Hash]struct{}) for len(seen) < len(insert) { switch protocol { - case 63, 64: - select { - case <-anns: - t.Errorf("tx announce received on pre eth/65") - case txs := <-bcasts: - for _, tx := range txs { - if _, ok := seen[tx.Hash()]; ok { - t.Errorf("duplicate transaction announced: %x", tx.Hash()) - } - seen[tx.Hash()] = struct{}{} - } - } - case 65: + case 65, 66: select { case hashes := <-anns: for _, hash := range hashes { @@ -395,8 +383,8 @@ func testSendTransactions(t *testing.T, protocol uint) { // Tests that transactions get propagated to all attached peers, either via direct // broadcasts or via announcements/retrievals. 
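As the propagation tests suggest, an eth/65+ peer learns about a transaction in one of two ways: a few peers get the full body pushed directly, the rest only hear its hash announced and may fetch the body on demand. A rough sketch of that split, assuming the square-root fan-out geth uses for direct broadcasts (simplified to a single peer list; the real handler computes the split per transaction):

package sketch

import "math"

// splitPropagation divides peers into a small set that receives the full
// transaction and a larger set that only gets the hash announced.
func splitPropagation(peers []string) (direct, announce []string) {
	// Full bodies go to roughly sqrt(len(peers)) peers; hashes to the rest.
	n := int(math.Sqrt(float64(len(peers))))
	if n == 0 && len(peers) > 0 {
		n = 1
	}
	return peers[:n], peers[n:]
}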
-func TestTransactionPropagation64(t *testing.T) { testTransactionPropagation(t, 64) } -func TestTransactionPropagation65(t *testing.T) { testTransactionPropagation(t, 65) } +func TestTransactionPropagation65(t *testing.T) { testTransactionPropagation(t, eth.ETH65) } +func TestTransactionPropagation66(t *testing.T) { testTransactionPropagation(t, eth.ETH66) } func testTransactionPropagation(t *testing.T, protocol uint) { t.Parallel() @@ -614,19 +602,14 @@ func testCheckpointChallenge(t *testing.T, syncmode downloader.SyncMode, checkpo defer p2pLocal.Close() defer p2pRemote.Close() - local := eth.NewPeer(eth.ETH64, p2p.NewPeerPipe(enode.ID{1}, "", nil, p2pLocal), p2pLocal, handler.txpool) - remote := eth.NewPeer(eth.ETH64, p2p.NewPeerPipe(enode.ID{2}, "", nil, p2pRemote), p2pRemote, handler.txpool) + local := eth.NewPeer(eth.ETH65, p2p.NewPeerPipe(enode.ID{1}, "", nil, p2pLocal), p2pLocal, handler.txpool) + remote := eth.NewPeer(eth.ETH65, p2p.NewPeerPipe(enode.ID{2}, "", nil, p2pRemote), p2pRemote, handler.txpool) defer local.Close() defer remote.Close() - handlerDone := make(chan struct{}) - go func() { - defer close(handlerDone) - handler.handler.runEthPeer(local, func(peer *eth.Peer) error { - err := eth.Handle((*ethHandler)(handler.handler), peer) - return err - }) - }() + go handler.handler.runEthPeer(local, func(peer *eth.Peer) error { + return eth.Handle((*ethHandler)(handler.handler), peer) + }) // Run the handshake locally to avoid spinning up a remote handler var ( genesis = handler.chain.Genesis() @@ -663,7 +646,6 @@ func testCheckpointChallenge(t *testing.T, syncmode downloader.SyncMode, checkpo // Verify that the remote peer is maintained or dropped if drop { - <-handlerDone if peers := handler.handler.peers.len(); peers != 0 { t.Fatalf("peer count mismatch: have %d, want %d", peers, 0) } @@ -710,8 +692,8 @@ func testBroadcastBlock(t *testing.T, peers, bcasts int) { defer sourcePipe.Close() defer sinkPipe.Close() - sourcePeer := eth.NewPeer(eth.ETH64, p2p.NewPeer(enode.ID{byte(i)}, "", nil), sourcePipe, nil) - sinkPeer := eth.NewPeer(eth.ETH64, p2p.NewPeer(enode.ID{0}, "", nil), sinkPipe, nil) + sourcePeer := eth.NewPeer(eth.ETH65, p2p.NewPeer(enode.ID{byte(i)}, "", nil), sourcePipe, nil) + sinkPeer := eth.NewPeer(eth.ETH65, p2p.NewPeer(enode.ID{0}, "", nil), sinkPipe, nil) defer sourcePeer.Close() defer sinkPeer.Close() @@ -762,8 +744,8 @@ func testBroadcastBlock(t *testing.T, peers, bcasts int) { // Tests that a propagated malformed block (uncles or transactions don't match // with the hashes in the header) gets discarded and not broadcast forward. 
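The malformed-block tests get the same renames; past those, the eth/protocols/eth/handler.go hunk below deletes the eth64 message table and keeps one map per surviving version, from message code to handler, selected once per inbound message. A condensed sketch of that dispatch shape (the message code and handler bodies here are illustrative only):

package sketch

import "fmt"

type msg struct{ Code uint64 }

type msgHandler func(m msg) error

const (
	eth65 uint = 65
	eth66 uint = 66
)

// Illustrative message code; the real tables map every protocol message.
const getBlockHeadersMsg uint64 = 0x03

var handlers65 = map[uint64]msgHandler{
	getBlockHeadersMsg: func(msg) error { return nil }, // flat payload, no request id
}

var handlers66 = map[uint64]msgHandler{
	getBlockHeadersMsg: func(msg) error { return nil }, // payload wrapped with a request id
}

// dispatch picks the table for the peer's negotiated version, then routes by code.
func dispatch(version uint, m msg) error {
	handlers := handlers65
	if version >= eth66 {
		handlers = handlers66
	}
	if h, ok := handlers[m.Code]; ok {
		return h(m)
	}
	return fmt.Errorf("unknown message code %#x", m.Code)
}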
-func TestBroadcastMalformedBlock64(t *testing.T) { testBroadcastMalformedBlock(t, 64) } -func TestBroadcastMalformedBlock65(t *testing.T) { testBroadcastMalformedBlock(t, 65) } +func TestBroadcastMalformedBlock65(t *testing.T) { testBroadcastMalformedBlock(t, eth.ETH65) } +func TestBroadcastMalformedBlock66(t *testing.T) { testBroadcastMalformedBlock(t, eth.ETH66) } func testBroadcastMalformedBlock(t *testing.T, protocol uint) { t.Parallel() diff --git a/eth/protocols/eth/handler.go b/eth/protocols/eth/handler.go index 0dc3de9898..6bbaa2f555 100644 --- a/eth/protocols/eth/handler.go +++ b/eth/protocols/eth/handler.go @@ -171,44 +171,27 @@ type Decoder interface { Time() time.Time } -var eth64 = map[uint64]msgHandler{ - GetBlockHeadersMsg: handleGetBlockHeaders, - BlockHeadersMsg: handleBlockHeaders, - GetBlockBodiesMsg: handleGetBlockBodies, - BlockBodiesMsg: handleBlockBodies, - GetNodeDataMsg: handleGetNodeData, - NodeDataMsg: handleNodeData, - GetReceiptsMsg: handleGetReceipts, - ReceiptsMsg: handleReceipts, - NewBlockHashesMsg: handleNewBlockhashes, - NewBlockMsg: handleNewBlock, - TransactionsMsg: handleTransactions, -} var eth65 = map[uint64]msgHandler{ - // old 64 messages - GetBlockHeadersMsg: handleGetBlockHeaders, - BlockHeadersMsg: handleBlockHeaders, - GetBlockBodiesMsg: handleGetBlockBodies, - BlockBodiesMsg: handleBlockBodies, - GetNodeDataMsg: handleGetNodeData, - NodeDataMsg: handleNodeData, - GetReceiptsMsg: handleGetReceipts, - ReceiptsMsg: handleReceipts, - NewBlockHashesMsg: handleNewBlockhashes, - NewBlockMsg: handleNewBlock, - TransactionsMsg: handleTransactions, - // New eth65 messages + GetBlockHeadersMsg: handleGetBlockHeaders, + BlockHeadersMsg: handleBlockHeaders, + GetBlockBodiesMsg: handleGetBlockBodies, + BlockBodiesMsg: handleBlockBodies, + GetNodeDataMsg: handleGetNodeData, + NodeDataMsg: handleNodeData, + GetReceiptsMsg: handleGetReceipts, + ReceiptsMsg: handleReceipts, + NewBlockHashesMsg: handleNewBlockhashes, + NewBlockMsg: handleNewBlock, + TransactionsMsg: handleTransactions, NewPooledTransactionHashesMsg: handleNewPooledTransactionHashes, GetPooledTransactionsMsg: handleGetPooledTransactions, PooledTransactionsMsg: handlePooledTransactions, } var eth66 = map[uint64]msgHandler{ - // eth64 announcement messages (no id) - NewBlockHashesMsg: handleNewBlockhashes, - NewBlockMsg: handleNewBlock, - TransactionsMsg: handleTransactions, - // eth65 announcement messages (no id) + NewBlockHashesMsg: handleNewBlockhashes, + NewBlockMsg: handleNewBlock, + TransactionsMsg: handleTransactions, NewPooledTransactionHashesMsg: handleNewPooledTransactionHashes, // eth66 messages with request-id GetBlockHeadersMsg: handleGetBlockHeaders66, @@ -236,13 +219,11 @@ func handleMessage(backend Backend, peer *Peer) error { } defer msg.Discard() - var handlers = eth64 - if peer.Version() == ETH65 { - handlers = eth65 - } else if peer.Version() >= ETH66 { + var handlers = eth65 + if peer.Version() >= ETH66 { handlers = eth66 } - // Track the emount of time it takes to serve the request and run the handler + // Track the amount of time it takes to serve the request and run the handler if metrics.Enabled { h := fmt.Sprintf("%s/%s/%d/%#02x", p2p.HandleHistName, ProtocolName, peer.Version(), msg.Code) defer func(start time.Time) { diff --git a/eth/protocols/eth/handler_test.go b/eth/protocols/eth/handler_test.go index 469bcdf2ae..50beb9de04 100644 --- a/eth/protocols/eth/handler_test.go +++ b/eth/protocols/eth/handler_test.go @@ -110,8 +110,8 @@ func (b *testBackend) Handle(*Peer, 
Packet) error { } // Tests that block headers can be retrieved from a remote chain based on user queries. -func TestGetBlockHeaders64(t *testing.T) { testGetBlockHeaders(t, 64) } -func TestGetBlockHeaders65(t *testing.T) { testGetBlockHeaders(t, 65) } +func TestGetBlockHeaders65(t *testing.T) { testGetBlockHeaders(t, ETH65) } +func TestGetBlockHeaders66(t *testing.T) { testGetBlockHeaders(t, ETH66) } func testGetBlockHeaders(t *testing.T, protocol uint) { t.Parallel() @@ -254,18 +254,44 @@ func testGetBlockHeaders(t *testing.T, protocol uint) { headers = append(headers, backend.chain.GetBlockByHash(hash).Header()) } // Send the hash request and verify the response - p2p.Send(peer.app, GetBlockHeadersMsg, tt.query) - if err := p2p.ExpectMsg(peer.app, BlockHeadersMsg, headers); err != nil { - t.Errorf("test %d: headers mismatch: %v", i, err) + if protocol <= ETH65 { + p2p.Send(peer.app, GetBlockHeadersMsg, tt.query) + if err := p2p.ExpectMsg(peer.app, BlockHeadersMsg, headers); err != nil { + t.Errorf("test %d: headers mismatch: %v", i, err) + } + } else { + p2p.Send(peer.app, GetBlockHeadersMsg, GetBlockHeadersPacket66{ + RequestId: 123, + GetBlockHeadersPacket: tt.query, + }) + if err := p2p.ExpectMsg(peer.app, BlockHeadersMsg, BlockHeadersPacket66{ + RequestId: 123, + BlockHeadersPacket: headers, + }); err != nil { + t.Errorf("test %d: headers mismatch: %v", i, err) + } } // If the test used number origins, repeat with hashes too if tt.query.Origin.Hash == (common.Hash{}) { if origin := backend.chain.GetBlockByNumber(tt.query.Origin.Number); origin != nil { tt.query.Origin.Hash, tt.query.Origin.Number = origin.Hash(), 0 - p2p.Send(peer.app, GetBlockHeadersMsg, tt.query) - if err := p2p.ExpectMsg(peer.app, BlockHeadersMsg, headers); err != nil { - t.Errorf("test %d: headers mismatch: %v", i, err) + if protocol <= ETH65 { + p2p.Send(peer.app, GetBlockHeadersMsg, tt.query) + if err := p2p.ExpectMsg(peer.app, BlockHeadersMsg, headers); err != nil { + t.Errorf("test %d: headers mismatch: %v", i, err) + } + } else { + p2p.Send(peer.app, GetBlockHeadersMsg, GetBlockHeadersPacket66{ + RequestId: 456, + GetBlockHeadersPacket: tt.query, + }) + if err := p2p.ExpectMsg(peer.app, BlockHeadersMsg, BlockHeadersPacket66{ + RequestId: 456, + BlockHeadersPacket: headers, + }); err != nil { + t.Errorf("test %d: headers mismatch: %v", i, err) + } } } } @@ -273,8 +299,8 @@ func testGetBlockHeaders(t *testing.T, protocol uint) { } // Tests that block contents can be retrieved from a remote chain based on their hashes.
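The eth/66 branches added to these request/response tests all rely on the same envelope convention: the flat eth/65 payload is embedded in a struct that adds a RequestId, and the response echoes that id so it can be paired with the pending request. A stripped-down sketch of the convention (the struct layout follows the packets in the diff; the payload fields themselves are illustrative):

package sketch

// getBlockHeadersPacket stands in for the flat eth/65 request payload.
type getBlockHeadersPacket struct {
	Origin uint64
	Amount uint64
}

// getBlockHeadersPacket66 shows the eth/66 convention: the old payload
// embedded under a new RequestId.
type getBlockHeadersPacket66 struct {
	RequestId uint64
	getBlockHeadersPacket
}

// blockHeadersPacket66 echoes the id so the response can be matched up.
type blockHeadersPacket66 struct {
	RequestId uint64
	Headers   [][]byte // stand-in for []*types.Header
}

// answers reports whether res is the reply to req.
func answers(req getBlockHeadersPacket66, res blockHeadersPacket66) bool {
	return req.RequestId == res.RequestId
}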
-func TestGetBlockBodies64(t *testing.T) { testGetBlockBodies(t, 64) } -func TestGetBlockBodies65(t *testing.T) { testGetBlockBodies(t, 65) } +func TestGetBlockBodies65(t *testing.T) { testGetBlockBodies(t, ETH65) } +func TestGetBlockBodies66(t *testing.T) { testGetBlockBodies(t, ETH66) } func testGetBlockBodies(t *testing.T, protocol uint) { t.Parallel() @@ -343,16 +369,29 @@ func testGetBlockBodies(t *testing.T, protocol uint) { } } // Send the hash request and verify the response - p2p.Send(peer.app, GetBlockBodiesMsg, hashes) - if err := p2p.ExpectMsg(peer.app, BlockBodiesMsg, bodies); err != nil { - t.Errorf("test %d: bodies mismatch: %v", i, err) + if protocol <= ETH65 { + p2p.Send(peer.app, GetBlockBodiesMsg, hashes) + if err := p2p.ExpectMsg(peer.app, BlockBodiesMsg, bodies); err != nil { + t.Errorf("test %d: bodies mismatch: %v", i, err) + } + } else { + p2p.Send(peer.app, GetBlockBodiesMsg, GetBlockBodiesPacket66{ + RequestId: 123, + GetBlockBodiesPacket: hashes, + }) + if err := p2p.ExpectMsg(peer.app, BlockBodiesMsg, BlockBodiesPacket66{ + RequestId: 123, + BlockBodiesPacket: bodies, + }); err != nil { + t.Errorf("test %d: bodies mismatch: %v", i, err) + } } } } // Tests that the state trie nodes can be retrieved based on hashes. -func TestGetNodeData64(t *testing.T) { testGetNodeData(t, 64) } -func TestGetNodeData65(t *testing.T) { testGetNodeData(t, 65) } +func TestGetNodeData65(t *testing.T) { testGetNodeData(t, ETH65) } +func TestGetNodeData66(t *testing.T) { testGetNodeData(t, ETH66) } func testGetNodeData(t *testing.T, protocol uint) { t.Parallel() @@ -410,7 +449,14 @@ func testGetNodeData(t *testing.T, protocol uint) { } it.Release() - p2p.Send(peer.app, GetNodeDataMsg, hashes) + if protocol <= ETH65 { + p2p.Send(peer.app, GetNodeDataMsg, hashes) + } else { + p2p.Send(peer.app, GetNodeDataMsg, GetNodeDataPacket66{ + RequestId: 123, + GetNodeDataPacket: hashes, + }) + } msg, err := peer.app.ReadMsg() if err != nil { t.Fatalf("failed to read node data response: %v", err) @@ -419,8 +465,16 @@ func testGetNodeData(t *testing.T, protocol uint) { t.Fatalf("response packet code mismatch: have %x, want %x", msg.Code, NodeDataMsg) } var data [][]byte - if err := msg.Decode(&data); err != nil { - t.Fatalf("failed to decode response node data: %v", err) + if protocol <= ETH65 { + if err := msg.Decode(&data); err != nil { + t.Fatalf("failed to decode response node data: %v", err) + } + } else { + var res NodeDataPacket66 + if err := msg.Decode(&res); err != nil { + t.Fatalf("failed to decode response node data: %v", err) + } + data = res.NodeDataPacket } // Verify that all hashes correspond to the requested data, and reconstruct a state tree for i, want := range hashes { @@ -452,8 +506,8 @@ func testGetNodeData(t *testing.T, protocol uint) { } // Tests that the transaction receipts can be retrieved based on hashes. 
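The receipts tests follow suit, and the handlers.go hunks at the end of this section close the loop: each eth/66 response handler now calls requestTracker.Fulfil with the peer, protocol version, message code and echoed request id. A toy stand-in for such a tracker, assuming a simple keyed map (the real tracker's timeout bookkeeping and metrics are not shown in this diff):

package sketch

import "sync"

type requestKey struct {
	peer    string
	version uint
	code    uint64
	id      uint64
}

// tracker remembers outstanding requests so responses can be matched off.
type tracker struct {
	mu      sync.Mutex
	pending map[requestKey]struct{}
}

func newTracker() *tracker {
	return &tracker{pending: make(map[requestKey]struct{})}
}

// Track records an in-flight request for the given peer and message code.
func (t *tracker) Track(peer string, version uint, code, id uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.pending[requestKey{peer, version, code, id}] = struct{}{}
}

// Fulfil matches a response to its pending request, reporting whether one existed.
func (t *tracker) Fulfil(peer string, version uint, code, id uint64) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	key := requestKey{peer, version, code, id}
	_, ok := t.pending[key]
	delete(t.pending, key)
	return ok
}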
-func TestGetBlockReceipts64(t *testing.T) { testGetBlockReceipts(t, 64) } -func TestGetBlockReceipts65(t *testing.T) { testGetBlockReceipts(t, 65) } +func TestGetBlockReceipts65(t *testing.T) { testGetBlockReceipts(t, ETH65) } +func TestGetBlockReceipts66(t *testing.T) { testGetBlockReceipts(t, ETH66) } func testGetBlockReceipts(t *testing.T, protocol uint) { t.Parallel() @@ -503,7 +557,7 @@ func testGetBlockReceipts(t *testing.T, protocol uint) { // Collect the hashes to request, and the response to expect var ( hashes []common.Hash - receipts []types.Receipts + receipts [][]*types.Receipt ) for i := uint64(0); i <= backend.chain.CurrentBlock().NumberU64(); i++ { block := backend.chain.GetBlockByNumber(i) @@ -512,8 +566,21 @@ func testGetBlockReceipts(t *testing.T, protocol uint) { receipts = append(receipts, backend.chain.GetReceiptsByHash(block.Hash())) } // Send the hash request and verify the response - p2p.Send(peer.app, GetReceiptsMsg, hashes) - if err := p2p.ExpectMsg(peer.app, ReceiptsMsg, receipts); err != nil { - t.Errorf("receipts mismatch: %v", err) + if protocol <= ETH65 { + p2p.Send(peer.app, GetReceiptsMsg, hashes) + if err := p2p.ExpectMsg(peer.app, ReceiptsMsg, receipts); err != nil { + t.Errorf("receipts mismatch: %v", err) + } + } else { + p2p.Send(peer.app, GetReceiptsMsg, GetReceiptsPacket66{ + RequestId: 123, + GetReceiptsPacket: hashes, + }) + if err := p2p.ExpectMsg(peer.app, ReceiptsMsg, ReceiptsPacket66{ + RequestId: 123, + ReceiptsPacket: receipts, + }); err != nil { + t.Errorf("receipts mismatch: %v", err) + } } } diff --git a/eth/protocols/eth/handlers.go b/eth/protocols/eth/handlers.go index 8433fa343a..d7d993a23d 100644 --- a/eth/protocols/eth/handlers.go +++ b/eth/protocols/eth/handlers.go @@ -292,6 +292,9 @@ func handleNewBlock(backend Backend, msg Decoder, peer *Peer) error { if err := msg.Decode(ann); err != nil { return fmt.Errorf("%w: message %v: %v", errDecode, msg, err) } + if err := ann.sanityCheck(); err != nil { + return err + } if hash := types.CalcUncleHash(ann.Block.Uncles()); hash != ann.Block.UncleHash() { log.Warn("Propagated block has invalid uncles", "have", hash, "exp", ann.Block.UncleHash()) return nil // TODO(karalabe): return error eventually, but wait a few releases @@ -300,9 +303,6 @@ func handleNewBlock(backend Backend, msg Decoder, peer *Peer) error { log.Warn("Propagated block has invalid body", "have", hash, "exp", ann.Block.TxHash()) return nil // TODO(karalabe): return error eventually, but wait a few releases } - if err := ann.sanityCheck(); err != nil { - return err - } ann.Block.ReceivedAt = msg.Time() ann.Block.ReceivedFrom = peer @@ -327,6 +327,8 @@ func handleBlockHeaders66(backend Backend, msg Decoder, peer *Peer) error { if err := msg.Decode(res); err != nil { return fmt.Errorf("%w: message %v: %v", errDecode, msg, err) } + requestTracker.Fulfil(peer.id, peer.version, BlockHeadersMsg, res.RequestId) + return backend.Handle(peer, &res.BlockHeadersPacket) } @@ -345,6 +347,8 @@ func handleBlockBodies66(backend Backend, msg Decoder, peer *Peer) error { if err := msg.Decode(res); err != nil { return fmt.Errorf("%w: message %v: %v", errDecode, msg, err) } + requestTracker.Fulfil(peer.id, peer.version, BlockBodiesMsg, res.RequestId) + return backend.Handle(peer, &res.BlockBodiesPacket) } @@ -363,6 +367,8 @@ func handleNodeData66(backend Backend, msg Decoder, peer *Peer) error { if err := msg.Decode(res); err != nil { return fmt.Errorf("%w: message %v: %v", errDecode, msg, err) } + requestTracker.Fulfil(peer.id, peer.version, 
NodeDataMsg, res.RequestId) + return backend.Handle(peer, &res.NodeDataPacket) } @@ -381,6 +387,8 @@ func handleReceipts66(backend Backend, msg Decoder, peer *Peer) error { if err := msg.Decode(res); err != nil { return fmt.Errorf("%w: message %v: %v", errDecode, msg, err) } + requestTracker.Fulfil(peer.id, peer.version, ReceiptsMsg, res.RequestId) + return backend.Handle(peer, &res.ReceiptsPacket) } @@ -506,5 +514,7 @@ func handlePooledTransactions66(backend Backend, msg Decoder, peer *Peer) error } peer.markTransaction(tx.Hash()) } + requestTracker.Fulfil(peer.id, peer.version, PooledTransactionsMsg, txs.RequestId) + return backend.Handle(peer, &txs.PooledTransactionsPacket) } diff --git a/eth/protocols/eth/handshake_test.go b/eth/protocols/eth/handshake_test.go index 65f9a00064..3bebda2dcc 100644 --- a/eth/protocols/eth/handshake_test.go +++ b/eth/protocols/eth/handshake_test.go @@ -27,8 +27,8 @@ import ( ) // Tests that handshake failures are detected and reported correctly. -func TestHandshake64(t *testing.T) { testHandshake(t, 64) } -func TestHandshake65(t *testing.T) { testHandshake(t, 65) } +func TestHandshake65(t *testing.T) { testHandshake(t, ETH65) } +func TestHandshake66(t *testing.T) { testHandshake(t, ETH66) } func testHandshake(t *testing.T, protocol uint) { t.Parallel() diff --git a/eth/protocols/eth/peer.go b/eth/protocols/eth/peer.go index ec2f97931e..5ced71ebde 100644 --- a/eth/protocols/eth/peer.go +++ b/eth/protocols/eth/peer.go @@ -414,9 +414,12 @@ func (p *Peer) RequestOneHeader(hash common.Hash) error { Skip: uint64(0), Reverse: false, } - if p.Version() == ETH66 { + if p.Version() >= ETH66 { + id := rand.Uint64() + + requestTracker.Track(p.id, p.version, GetBlockHeadersMsg, BlockHeadersMsg, id) return p2p.Send(p.rw, GetBlockHeadersMsg, &GetBlockHeadersPacket66{ - RequestId: rand.Uint64(), + RequestId: id, GetBlockHeadersPacket: &query, }) } @@ -433,9 +436,12 @@ func (p *Peer) RequestHeadersByHash(origin common.Hash, amount int, skip int, re Skip: uint64(skip), Reverse: reverse, } - if p.Version() == ETH66 { + if p.Version() >= ETH66 { + id := rand.Uint64() + + requestTracker.Track(p.id, p.version, GetBlockHeadersMsg, BlockHeadersMsg, id) return p2p.Send(p.rw, GetBlockHeadersMsg, &GetBlockHeadersPacket66{ - RequestId: rand.Uint64(), + RequestId: id, GetBlockHeadersPacket: &query, }) } @@ -452,15 +458,22 @@ func (p *Peer) RequestHeadersByNumber(origin uint64, amount int, skip int, rever Skip: uint64(skip), Reverse: reverse, } - if p.Version() == ETH66 { + if p.Version() >= ETH66 { + id := rand.Uint64() + + requestTracker.Track(p.id, p.version, GetBlockHeadersMsg, BlockHeadersMsg, id) return p2p.Send(p.rw, GetBlockHeadersMsg, &GetBlockHeadersPacket66{ - RequestId: rand.Uint64(), + RequestId: id, GetBlockHeadersPacket: &query, }) } return p2p.Send(p.rw, GetBlockHeadersMsg, &query) } +func (p *Peer) ExpectPeerMessage(code uint64, content types.Transactions) error { + return p2p.ExpectMsg(p.rw, code, content) +} + // ExpectRequestHeadersByNumber is a testing method to mirror the recipient side // of the RequestHeadersByNumber operation. 
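Editor's note: the requestTracker.Fulfil calls added to the handlers above pair up with the requestTracker.Track calls on the sending side. Conceptually the tracker is a map from request id to the expected response code; a toy version, ignoring the timeout handling and metrics of the real p2p/tracker package:

package main

import (
	"errors"
	"fmt"
	"sync"
)

// toyTracker remembers, per request id, which response message code we are
// still waiting for.
type toyTracker struct {
	mu      sync.Mutex
	pending map[uint64]uint64 // request id -> expected response code
}

func newToyTracker() *toyTracker {
	return &toyTracker{pending: make(map[uint64]uint64)}
}

// Track is invoked just before a request is sent out.
func (t *toyTracker) Track(id, wantCode uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.pending[id] = wantCode
}

// Fulfil is invoked when a response arrives; unknown ids or mismatched codes
// indicate a misbehaving (or very slow) peer.
func (t *toyTracker) Fulfil(id, code uint64) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	want, ok := t.pending[id]
	if !ok {
		return errors.New("untracked request id")
	}
	if want != code {
		return errors.New("unexpected response code")
	}
	delete(t.pending, id)
	return nil
}

func main() {
	t := newToyTracker()
	t.Track(123, 0x04)               // e.g. expect BlockHeadersMsg
	fmt.Println(t.Fulfil(123, 0x04)) // <nil>
	fmt.Println(t.Fulfil(123, 0x04)) // untracked request id: already fulfilled
}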
func (p *Peer) ExpectRequestHeadersByNumber(origin uint64, amount int, skip int, reverse bool) error { @@ -473,17 +486,16 @@ func (p *Peer) ExpectRequestHeadersByNumber(origin uint64, amount int, skip int, return p2p.ExpectMsg(p.rw, GetBlockHeadersMsg, req) } -func (p *Peer) ExpectPeerMessage(code uint64, content types.Transactions) error { - return p2p.ExpectMsg(p.rw, code, content) -} - // RequestBodies fetches a batch of blocks' bodies corresponding to the hashes // specified. func (p *Peer) RequestBodies(hashes []common.Hash) error { p.Log().Debug("Fetching batch of block bodies", "count", len(hashes)) - if p.Version() == ETH66 { + if p.Version() >= ETH66 { + id := rand.Uint64() + + requestTracker.Track(p.id, p.version, GetBlockBodiesMsg, BlockBodiesMsg, id) return p2p.Send(p.rw, GetBlockBodiesMsg, &GetBlockBodiesPacket66{ - RequestId: rand.Uint64(), + RequestId: id, GetBlockBodiesPacket: hashes, }) } @@ -494,9 +506,12 @@ func (p *Peer) RequestBodies(hashes []common.Hash) error { // data, corresponding to the specified hashes. func (p *Peer) RequestNodeData(hashes []common.Hash) error { p.Log().Debug("Fetching batch of state data", "count", len(hashes)) - if p.Version() == ETH66 { + if p.Version() >= ETH66 { + id := rand.Uint64() + + requestTracker.Track(p.id, p.version, GetNodeDataMsg, NodeDataMsg, id) return p2p.Send(p.rw, GetNodeDataMsg, &GetNodeDataPacket66{ - RequestId: rand.Uint64(), + RequestId: id, GetNodeDataPacket: hashes, }) } @@ -506,9 +521,12 @@ func (p *Peer) RequestNodeData(hashes []common.Hash) error { // RequestReceipts fetches a batch of transaction receipts from a remote node. func (p *Peer) RequestReceipts(hashes []common.Hash) error { p.Log().Debug("Fetching batch of receipts", "count", len(hashes)) - if p.Version() == ETH66 { + if p.Version() >= ETH66 { + id := rand.Uint64() + + requestTracker.Track(p.id, p.version, GetReceiptsMsg, ReceiptsMsg, id) return p2p.Send(p.rw, GetReceiptsMsg, &GetReceiptsPacket66{ - RequestId: rand.Uint64(), + RequestId: id, GetReceiptsPacket: hashes, }) } @@ -518,9 +536,12 @@ func (p *Peer) RequestReceipts(hashes []common.Hash) error { // RequestTxs fetches a batch of transactions from a remote node. func (p *Peer) RequestTxs(hashes []common.Hash) error { p.Log().Debug("Fetching batch of transactions", "count", len(hashes)) - if p.Version() == ETH66 { + if p.Version() >= ETH66 { + id := rand.Uint64() + + requestTracker.Track(p.id, p.version, GetPooledTransactionsMsg, PooledTransactionsMsg, id) return p2p.Send(p.rw, GetPooledTransactionsMsg, &GetPooledTransactionsPacket66{ - RequestId: rand.Uint64(), + RequestId: id, GetPooledTransactionsPacket: hashes, }) } diff --git a/eth/protocols/eth/protocol.go b/eth/protocols/eth/protocol.go index 7f1832754f..62c018ef8e 100644 --- a/eth/protocols/eth/protocol.go +++ b/eth/protocols/eth/protocol.go @@ -30,7 +30,6 @@ import ( // Constants to match up protocol versions and messages const ( - ETH64 = 64 ETH65 = 65 ETH66 = 66 ) @@ -41,11 +40,11 @@ const ProtocolName = "eth" // ProtocolVersions are the supported versions of the `eth` protocol (first // is primary). -var ProtocolVersions = []uint{ETH66, ETH65, ETH64} +var ProtocolVersions = []uint{ETH66, ETH65} // protocolLengths are the number of implemented message corresponding to // different protocol versions. -var protocolLengths = map[uint]uint64{ETH66: 17, ETH65: 17, ETH64: 17} +var protocolLengths = map[uint]uint64{ETH66: 17, ETH65: 17} // maxMessageSize is the maximum cap on the size of a protocol message. 
const maxMessageSize = 10 * 1024 * 1024 diff --git a/eth/protocols/eth/tracker.go b/eth/protocols/eth/tracker.go new file mode 100644 index 0000000000..324fd22839 --- /dev/null +++ b/eth/protocols/eth/tracker.go @@ -0,0 +1,26 @@ +// Copyright 2021 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>. + +package eth + +import ( + "time" + + "github.com/ethereum/go-ethereum/p2p/tracker" +) + +// requestTracker is a singleton tracker for eth/66 and newer request times. +var requestTracker = tracker.New(ProtocolName, 5*time.Minute) diff --git a/eth/protocols/snap/handler.go b/eth/protocols/snap/handler.go index f7939964f3..3d668a2ebb 100644 --- a/eth/protocols/snap/handler.go +++ b/eth/protocols/snap/handler.go @@ -84,6 +84,12 @@ type Backend interface { // MakeProtocols constructs the P2P protocol definitions for `snap`. func MakeProtocols(backend Backend, dnsdisc enode.Iterator) []p2p.Protocol { + // Filter the discovery iterator for nodes advertising snap support. + dnsdisc = enode.Filter(dnsdisc, func(n *enode.Node) bool { + var snap enrEntry + return n.Load(&snap) == nil + }) + protocols := make([]p2p.Protocol, len(ProtocolVersions)) for i, version := range ProtocolVersions { version := version // Closure @@ -227,6 +233,8 @@ func handleMessage(backend Backend, peer *Peer) error { return fmt.Errorf("accounts not monotonically increasing: #%d [%x] vs #%d [%x]", i-1, res.Accounts[i-1].Hash[:], i, res.Accounts[i].Hash[:]) } } + requestTracker.Fulfil(peer.id, peer.version, AccountRangeMsg, res.ID) + return backend.Handle(peer, res) case msg.Code == GetStorageRangesMsg: @@ -352,7 +360,7 @@ func handleMessage(backend Backend, peer *Peer) error { if err := msg.Decode(res); err != nil { return fmt.Errorf("%w: message %v: %v", errDecode, msg, err) } - // Ensure the ranges ae monotonically increasing + // Ensure the ranges are monotonically increasing for i, slots := range res.Slots { for j := 1; j < len(slots); j++ { if bytes.Compare(slots[j-1].Hash[:], slots[j].Hash[:]) >= 0 { @@ -360,6 +368,8 @@ func handleMessage(backend Backend, peer *Peer) error { } } } + requestTracker.Fulfil(peer.id, peer.version, StorageRangesMsg, res.ID) + return backend.Handle(peer, res) case msg.Code == GetByteCodesMsg: @@ -404,6 +414,8 @@ func handleMessage(backend Backend, peer *Peer) error { if err := msg.Decode(res); err != nil { return fmt.Errorf("%w: message %v: %v", errDecode, msg, err) } + requestTracker.Fulfil(peer.id, peer.version, ByteCodesMsg, res.ID) + return backend.Handle(peer, res) case msg.Code == GetTrieNodesMsg: @@ -497,6 +509,8 @@ func handleMessage(backend Backend, peer *Peer) error { if err := msg.Decode(res); err != nil { return fmt.Errorf("%w: message %v: %v", errDecode, msg, err) } + requestTracker.Fulfil(peer.id, peer.version, TrieNodesMsg, res.ID) + return backend.Handle(peer, res) default: diff --git
a/eth/protocols/snap/peer.go b/eth/protocols/snap/peer.go index 4f3d550f1f..cf0ce65bd7 100644 --- a/eth/protocols/snap/peer.go +++ b/eth/protocols/snap/peer.go @@ -65,6 +65,8 @@ func (p *Peer) Log() log.Logger { // trie, starting with the origin. func (p *Peer) RequestAccountRange(id uint64, root common.Hash, origin, limit common.Hash, bytes uint64) error { p.logger.Trace("Fetching range of accounts", "reqid", id, "root", root, "origin", origin, "limit", limit, "bytes", common.StorageSize(bytes)) + + requestTracker.Track(p.id, p.version, GetAccountRangeMsg, AccountRangeMsg, id) return p2p.Send(p.rw, GetAccountRangeMsg, &GetAccountRangePacket{ ID: id, Root: root, @@ -83,6 +85,7 @@ func (p *Peer) RequestStorageRanges(id uint64, root common.Hash, accounts []comm } else { p.logger.Trace("Fetching ranges of small storage slots", "reqid", id, "root", root, "accounts", len(accounts), "first", accounts[0], "bytes", common.StorageSize(bytes)) } + requestTracker.Track(p.id, p.version, GetStorageRangesMsg, StorageRangesMsg, id) return p2p.Send(p.rw, GetStorageRangesMsg, &GetStorageRangesPacket{ ID: id, Root: root, @@ -96,6 +99,8 @@ func (p *Peer) RequestStorageRanges(id uint64, root common.Hash, accounts []comm // RequestByteCodes fetches a batch of bytecodes by hash. func (p *Peer) RequestByteCodes(id uint64, hashes []common.Hash, bytes uint64) error { p.logger.Trace("Fetching set of byte codes", "reqid", id, "hashes", len(hashes), "bytes", common.StorageSize(bytes)) + + requestTracker.Track(p.id, p.version, GetByteCodesMsg, ByteCodesMsg, id) return p2p.Send(p.rw, GetByteCodesMsg, &GetByteCodesPacket{ ID: id, Hashes: hashes, @@ -107,6 +112,8 @@ func (p *Peer) RequestByteCodes(id uint64, hashes []common.Hash, bytes uint64) e // a specific state trie. func (p *Peer) RequestTrieNodes(id uint64, root common.Hash, paths []TrieNodePathSet, bytes uint64) error { p.logger.Trace("Fetching set of trie nodes", "reqid", id, "root", root, "pathsets", len(paths), "bytes", common.StorageSize(bytes)) + + requestTracker.Track(p.id, p.version, GetTrieNodesMsg, TrieNodesMsg, id) return p2p.Send(p.rw, GetTrieNodesMsg, &GetTrieNodesPacket{ ID: id, Root: root, diff --git a/eth/protocols/snap/range.go b/eth/protocols/snap/range.go new file mode 100644 index 0000000000..dd380ff471 --- /dev/null +++ b/eth/protocols/snap/range.go @@ -0,0 +1,80 @@ +// Copyright 2021 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
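Editor's note: the MakeProtocols change earlier filters the discovery iterator so only nodes advertising a snap ENR entry get dialled. The same predicate-filter idea over a plain slice, with a toy node type standing in for enode.Node:

package main

import "fmt"

type node struct {
	id      string
	entries map[string]bool // stand-in for ENR key/value pairs
}

// filter yields only the nodes for which the predicate holds, mirroring how
// enode.Filter gates which discovered peers are worth dialling at all.
func filter(nodes []node, keep func(node) bool) []node {
	var out []node
	for _, n := range nodes {
		if keep(n) {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	found := []node{
		{id: "a", entries: map[string]bool{"eth": true}},
		{id: "b", entries: map[string]bool{"eth": true, "snap": true}},
	}
	snapOnly := filter(found, func(n node) bool { return n.entries["snap"] })
	for _, n := range snapOnly {
		fmt.Println("dial candidate:", n.id) // only "b"
	}
}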
+ +package snap + +import ( + "math/big" + + "github.com/ethereum/go-ethereum/common" + "github.com/holiman/uint256" +) + +// hashRange is a utility to handle ranges of hashes: split up the +// hash space into sections and 'walk' over them +type hashRange struct { + current *uint256.Int + step *uint256.Int +} + +// newHashRange creates a new hashRange, initiated at the start position, +// and with the step set to fill the desired 'num' chunks +func newHashRange(start common.Hash, num uint64) *hashRange { + left := new(big.Int).Sub(hashSpace, start.Big()) + step := new(big.Int).Div( + new(big.Int).Add(left, new(big.Int).SetUint64(num-1)), + new(big.Int).SetUint64(num), + ) + step256 := new(uint256.Int) + step256.SetFromBig(step) + + return &hashRange{ + current: uint256.NewInt().SetBytes32(start[:]), + step: step256, + } +} + +// Next pushes the hash range to the next interval. +func (r *hashRange) Next() bool { + next := new(uint256.Int) + if overflow := next.AddOverflow(r.current, r.step); overflow { + return false + } + r.current = next + return true +} + +// Start returns the first hash in the current interval. +func (r *hashRange) Start() common.Hash { + return r.current.Bytes32() +} + +// End returns the last hash in the current interval. +func (r *hashRange) End() common.Hash { + // If the end overflows (non divisible range), return a shorter interval + next := new(uint256.Int) + if overflow := next.AddOverflow(r.current, r.step); overflow { + return common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff") + } + return new(uint256.Int).Sub(next, uint256.NewInt().SetOne()).Bytes32() +} + +// incHash returns the next hash, in lexicographical order (a.k.a. plus one) +func incHash(h common.Hash) common.Hash { + a := uint256.NewInt().SetBytes32(h[:]) + a.Add(a, uint256.NewInt().SetOne()) + return common.Hash(a.Bytes32()) +} diff --git a/eth/protocols/snap/range_test.go b/eth/protocols/snap/range_test.go new file mode 100644 index 0000000000..23273e50bf --- /dev/null +++ b/eth/protocols/snap/range_test.go @@ -0,0 +1,143 @@ +// Copyright 2021 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>. + +package snap + +import ( + "testing" + + "github.com/ethereum/go-ethereum/common" +) + +// Tests that given a starting hash and a density, the hash ranger can correctly +// split up the remaining hash space into a fixed number of chunks.
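Editor's note: the step computation in newHashRange above is a ceiling division, step = ceil((2^256 - start) / num), so that num chunks always cover the remaining space and only the last chunk comes up short. A quick big.Int check of that arithmetic (toy code, mirroring the constants the tests below expect):

package main

import (
	"fmt"
	"math/big"
)

// ceilDiv returns ceil(a/b) via the usual (a + b - 1) / b trick, the same
// shape used by newHashRange when sizing its step.
func ceilDiv(a, b *big.Int) *big.Int {
	sum := new(big.Int).Add(a, new(big.Int).Sub(b, big.NewInt(1)))
	return sum.Div(sum, b)
}

func main() {
	hashSpace := new(big.Int).Lsh(big.NewInt(1), 256) // 2^256 hashes in total

	// Splitting the full space into 3 chunks cannot be exact, so the step is
	// rounded up and the final chunk ends up slightly shorter.
	step := ceilDiv(hashSpace, big.NewInt(3))
	fmt.Printf("step = %x\n", step) // 5555...5556, the second start in the tests

	// 3 steps overshoot the space by at most (3 - 1), which is why Next()
	// detects completion via overflow rather than an exact comparison.
	total := new(big.Int).Mul(step, big.NewInt(3))
	fmt.Println("overshoot =", new(big.Int).Sub(total, hashSpace)) // 2
}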
+func TestHashRanges(t *testing.T) { + tests := []struct { + head common.Hash + chunks uint64 + starts []common.Hash + ends []common.Hash + }{ + // Simple test case to split the entire hash range into 4 chunks + { + head: common.Hash{}, + chunks: 4, + starts: []common.Hash{ + {}, + common.HexToHash("0x4000000000000000000000000000000000000000000000000000000000000000"), + common.HexToHash("0x8000000000000000000000000000000000000000000000000000000000000000"), + common.HexToHash("0xc000000000000000000000000000000000000000000000000000000000000000"), + }, + ends: []common.Hash{ + common.HexToHash("0x3fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + common.HexToHash("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + common.HexToHash("0xbfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + }, + }, + // Split a divisible part of the hash range up into 2 chunks + { + head: common.HexToHash("0x2000000000000000000000000000000000000000000000000000000000000000"), + chunks: 2, + starts: []common.Hash{ + common.Hash{}, + common.HexToHash("0x9000000000000000000000000000000000000000000000000000000000000000"), + }, + ends: []common.Hash{ + common.HexToHash("0x8fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + }, + }, + // Split the entire hash range into a non divisible 3 chunks + { + head: common.Hash{}, + chunks: 3, + starts: []common.Hash{ + {}, + common.HexToHash("0x5555555555555555555555555555555555555555555555555555555555555556"), + common.HexToHash("0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaac"), + }, + ends: []common.Hash{ + common.HexToHash("0x5555555555555555555555555555555555555555555555555555555555555555"), + common.HexToHash("0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaab"), + common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + }, + }, + // Split a part of hash range into a non divisible 3 chunks + { + head: common.HexToHash("0x2000000000000000000000000000000000000000000000000000000000000000"), + chunks: 3, + starts: []common.Hash{ + {}, + common.HexToHash("0x6aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaab"), + common.HexToHash("0xb555555555555555555555555555555555555555555555555555555555555556"), + }, + ends: []common.Hash{ + common.HexToHash("0x6aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"), + common.HexToHash("0xb555555555555555555555555555555555555555555555555555555555555555"), + common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + }, + }, + // Split a part of hash range into a non divisible 3 chunks, but with a + // meaningful space size for manual verification. 
+ // - The head being 0xff...f0, we have 16 hashes left in the space + // - Chunking up 16 into 3 pieces is 5.(3), but we need the ceil of 6 to avoid a micro-last-chunk + // - Since the range is not divisible, the last interval will be shorter, capped at 0xff...f + // - The chunk ranges thus need to be [..0, ..5], [..6, ..b], [..c, ..f] + { + head: common.HexToHash("0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff0"), + chunks: 3, + starts: []common.Hash{ + {}, + common.HexToHash("0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff6"), + common.HexToHash("0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc"), + }, + ends: []common.Hash{ + common.HexToHash("0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff5"), + common.HexToHash("0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffb"), + common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + }, + }, + } + for i, tt := range tests { + r := newHashRange(tt.head, tt.chunks) + + var ( + starts = []common.Hash{{}} + ends = []common.Hash{r.End()} + ) + for r.Next() { + starts = append(starts, r.Start()) + ends = append(ends, r.End()) + } + if len(starts) != len(tt.starts) { + t.Errorf("test %d: starts count mismatch: have %d, want %d", i, len(starts), len(tt.starts)) + } + for j := 0; j < len(starts) && j < len(tt.starts); j++ { + if starts[j] != tt.starts[j] { + t.Errorf("test %d, start %d: hash mismatch: have %x, want %x", i, j, starts[j], tt.starts[j]) + } + } + if len(ends) != len(tt.ends) { + t.Errorf("test %d: ends count mismatch: have %d, want %d", i, len(ends), len(tt.ends)) + } + for j := 0; j < len(ends) && j < len(tt.ends); j++ { + if ends[j] != tt.ends[j] { + t.Errorf("test %d, end %d: hash mismatch: have %x, want %x", i, j, ends[j], tt.ends[j]) + } + } + } +} diff --git a/eth/protocols/snap/sync.go b/eth/protocols/snap/sync.go index 2924fa0802..e283473207 100644 --- a/eth/protocols/snap/sync.go +++ b/eth/protocols/snap/sync.go @@ -23,12 +23,15 @@ import ( "fmt" "math/big" "math/rand" + "sort" "sync" "time" "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/common/math" "github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/state" + "github.com/ethereum/go-ethereum/core/state/snapshot" "github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/event" @@ -49,9 +52,9 @@ var ( const ( // maxRequestSize is the maximum number of bytes to request from a remote peer. - maxRequestSize = 512 * 1024 + maxRequestSize = 128 * 1024 - // maxStorageSetRequestCountis th maximum number of contracts to request the + // maxStorageSetRequestCount is the maximum number of contracts to request the // storage of in a single query. If this number is too low, we're not filling // responses fully and waste round trip times. If it's too high, we're capping // responses and waste bandwidth. @@ -71,8 +74,10 @@ const ( // a single query. If this number is too low, we're not filling responses fully // and waste round trip times. If it's too high, we're capping responses and // waste bandwidth. - maxTrieRequestCount = 512 + maxTrieRequestCount = 256 +) +var ( // accountConcurrency is the number of chunks to split the account trie into // to allow concurrent retrievals. accountConcurrency = 16 @@ -80,9 +85,7 @@ const ( // storageConcurrency is the number of chunks to split a large contract // storage trie into to allow concurrent retrievals.
storageConcurrency = 16 -) -var ( // requestTimeout is the maximum time a peer is allowed to spend on serving // a single network request. requestTimeout = 15 * time.Second // TODO(karalabe): Make it dynamic ala fast-sync? @@ -105,9 +108,11 @@ type accountRequest struct { peer string // Peer to which this request is assigned id uint64 // Request ID of this request - cancel chan struct{} // Channel to track sync cancellation - timeout *time.Timer // Timer to track delivery timeout - stale chan struct{} // Channel to signal the request was dropped + deliver chan *accountResponse // Channel to deliver successful response on + revert chan *accountRequest // Channel to deliver request failure on + cancel chan struct{} // Channel to track sync cancellation + timeout *time.Timer // Timer to track delivery timeout + stale chan struct{} // Channel to signal the request was dropped origin common.Hash // First account requested to allow continuation checks limit common.Hash // Last account requested to allow non-overlapping chunking @@ -124,12 +129,6 @@ type accountResponse struct { hashes []common.Hash // Account hashes in the returned range accounts []*state.Account // Expanded accounts in the returned range - nodes ethdb.KeyValueStore // Database containing the reconstructed trie nodes - trie *trie.Trie // Reconstructed trie to reject incomplete account paths - - bounds map[common.Hash]struct{} // Boundary nodes to avoid persisting incomplete accounts - overflow *light.NodeSet // Overflow nodes to avoid persisting across chunk boundaries - cont bool // Whether the account range has a continuation } @@ -146,9 +145,11 @@ type bytecodeRequest struct { peer string // Peer to which this request is assigned id uint64 // Request ID of this request - cancel chan struct{} // Channel to track sync cancellation - timeout *time.Timer // Timer to track delivery timeout - stale chan struct{} // Channel to signal the request was dropped + deliver chan *bytecodeResponse // Channel to deliver successful response on + revert chan *bytecodeRequest // Channel to deliver request failure on + cancel chan struct{} // Channel to track sync cancellation + timeout *time.Timer // Timer to track delivery timeout + stale chan struct{} // Channel to signal the request was dropped hashes []common.Hash // Bytecode hashes to validate responses task *accountTask // Task which this request is filling (only access fields through the runloop!!) 
@@ -175,9 +176,11 @@ type storageRequest struct { peer string // Peer to which this request is assigned id uint64 // Request ID of this request - cancel chan struct{} // Channel to track sync cancellation - timeout *time.Timer // Timer to track delivery timeout - stale chan struct{} // Channel to signal the request was dropped + deliver chan *storageResponse // Channel to deliver successful response on + revert chan *storageRequest // Channel to deliver request failure on + cancel chan struct{} // Channel to track sync cancellation + timeout *time.Timer // Timer to track delivery timeout + stale chan struct{} // Channel to signal the request was dropped accounts []common.Hash // Account hashes to validate responses roots []common.Hash // Storage roots to validate responses @@ -199,15 +202,10 @@ type storageResponse struct { accounts []common.Hash // Account hashes requested, may be only partially filled roots []common.Hash // Storage roots requested, may be only partially filled - hashes [][]common.Hash // Storage slot hashes in the returned range - slots [][][]byte // Storage slot values in the returned range - nodes []ethdb.KeyValueStore // Database containing the reconstructed trie nodes - tries []*trie.Trie // Reconstructed tries to reject overflown slots + hashes [][]common.Hash // Storage slot hashes in the returned range + slots [][][]byte // Storage slot values in the returned range - // Fields relevant for the last account only - bounds map[common.Hash]struct{} // Boundary nodes to avoid persisting (incomplete) - overflow *light.NodeSet // Overflow nodes to avoid persisting across chunk boundaries - cont bool // Whether the last storage range has a continuation + cont bool // Whether the last storage range has a continuation } // trienodeHealRequest tracks a pending state trie request to ensure responses @@ -223,9 +221,11 @@ type trienodeHealRequest struct { peer string // Peer to which this request is assigned id uint64 // Request ID of this request - cancel chan struct{} // Channel to track sync cancellation - timeout *time.Timer // Timer to track delivery timeout - stale chan struct{} // Channel to signal the request was dropped + deliver chan *trienodeHealResponse // Channel to deliver successful response on + revert chan *trienodeHealRequest // Channel to deliver request failure on + cancel chan struct{} // Channel to track sync cancellation + timeout *time.Timer // Timer to track delivery timeout + stale chan struct{} // Channel to signal the request was dropped hashes []common.Hash // Trie node hashes to validate responses paths []trie.SyncPath // Trie node paths requested for rescheduling @@ -255,9 +255,11 @@ type bytecodeHealRequest struct { peer string // Peer to which this request is assigned id uint64 // Request ID of this request - cancel chan struct{} // Channel to track sync cancellation - timeout *time.Timer // Timer to track delivery timeout - stale chan struct{} // Channel to signal the request was dropped + deliver chan *bytecodeHealResponse // Channel to deliver successful response on + revert chan *bytecodeHealRequest // Channel to deliver request failure on + cancel chan struct{} // Channel to track sync cancellation + timeout *time.Timer // Timer to track delivery timeout + stale chan struct{} // Channel to signal the request was dropped hashes []common.Hash // Bytecode hashes to validate responses task *healTask // Task which this request is filling (only access fields through the runloop!!) 
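Editor's note: all five request types above gain per-request deliver and revert channels, so a response (or failure) can only ever reach the sync cycle that issued the request. A condensed sketch of the delivery select that guards against cancelled or stale requests:

package main

import "fmt"

type response struct{ id uint64 }

type request struct {
	id      uint64
	deliver chan *response // successful responses, this cycle only
	revert  chan *request  // failures to reschedule, same cycle only
	cancel  chan struct{}  // closed when the sync cycle shuts down
	stale   chan struct{}  // closed when the request was already dropped
}

// complete mirrors how a network handler hands a finished response back to
// the run loop without risking delivery into an earlier, stale sync cycle.
func complete(req *request, res *response) {
	select {
	case req.deliver <- res:
		// Run loop picked it up
	case <-req.cancel:
		// Sync cycle terminated, drop on the floor
	case <-req.stale:
		// Request already timed out and was reassigned
	}
}

func main() {
	req := &request{
		id:      1,
		deliver: make(chan *response, 1),
		revert:  make(chan *request, 1),
		cancel:  make(chan struct{}),
		stale:   make(chan struct{}),
	}
	complete(req, &response{id: 1})
	fmt.Println("delivered:", (<-req.deliver).id)
}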
@@ -290,6 +292,9 @@ type accountTask struct { codeTasks map[common.Hash]struct{} // Code hashes that need retrieval stateTasks map[common.Hash]common.Hash // Account hashes->roots that need full state retrieval + genBatch ethdb.Batch // Batch used by the node generator + genTrie *trie.StackTrie // Node generator from storage slots + done bool // Flag whether the task can be removed } @@ -301,7 +306,11 @@ type storageTask struct { // These fields are internals used during runtime root common.Hash // Storage root hash for this instance req *storageRequest // Pending request to fill this task - done bool // Flag whether the task can be removed + + genBatch ethdb.Batch // Batch used by the node generator + genTrie *trie.StackTrie // Node generator from storage slots + + done bool // Flag whether the task can be removed } // healTask represents the sync task for healing the snap-synced chunk boundaries. @@ -348,7 +357,7 @@ type SyncPeer interface { // trie, starting with the origin. RequestAccountRange(id uint64, root, origin, limit common.Hash, bytes uint64) error - // RequestStorageRange fetches a batch of storage slots belonging to one or + // RequestStorageRanges fetches a batch of storage slots belonging to one or // more accounts. If slots from only one account are requested, an origin marker // may also be used to retrieve from there. RequestStorageRanges(id uint64, root common.Hash, accounts []common.Hash, origin, limit []byte, bytes uint64) error @@ -398,14 +407,6 @@ type Syncer struct { bytecodeReqs map[uint64]*bytecodeRequest // Bytecode requests currently running storageReqs map[uint64]*storageRequest // Storage requests currently running - accountReqFails chan *accountRequest // Failed account range requests to revert - bytecodeReqFails chan *bytecodeRequest // Failed bytecode requests to revert - storageReqFails chan *storageRequest // Failed storage requests to revert - - accountResps chan *accountResponse // Account sub-tries to integrate into the database - bytecodeResps chan *bytecodeResponse // Bytecodes to integrate into the database - storageResps chan *storageResponse // Storage sub-tries to integrate into the database - accountSynced uint64 // Number of accounts downloaded accountBytes common.StorageSize // Number of account trie bytes persisted to disk bytecodeSynced uint64 // Number of bytecodes downloaded @@ -420,12 +421,6 @@ type Syncer struct { trienodeHealReqs map[uint64]*trienodeHealRequest // Trie node requests currently running bytecodeHealReqs map[uint64]*bytecodeHealRequest // Bytecode requests currently running - trienodeHealReqFails chan *trienodeHealRequest // Failed trienode requests to revert - bytecodeHealReqFails chan *bytecodeHealRequest // Failed bytecode requests to revert - - trienodeHealResps chan *trienodeHealResponse // Trie nodes to integrate into the database - bytecodeHealResps chan *bytecodeHealResponse // Bytecodes to integrate into the database - trienodeHealSynced uint64 // Number of state trie nodes downloaded trienodeHealBytes common.StorageSize // Number of state trie bytes persisted to disk trienodeHealDups uint64 // Number of state trie nodes already processed @@ -435,9 +430,14 @@ type Syncer struct { bytecodeHealDups uint64 // Number of bytecodes already processed bytecodeHealNops uint64 // Number of bytecodes not requested - startTime time.Time // Time instance when snapshot sync started - startAcc common.Hash // Account hash where sync started from - logTime time.Time // Time instance when status was last reported + stateWriter
ethdb.Batch // Shared batch writer used for persisting raw states + accountHealed uint64 // Number of accounts downloaded during the healing stage + accountHealedBytes common.StorageSize // Number of raw account bytes persisted to disk during the healing stage + storageHealed uint64 // Number of storage slots downloaded during the healing stage + storageHealedBytes common.StorageSize // Number of raw storage bytes persisted to disk during the healing stage + + startTime time.Time // Time instance when snapshot sync started + logTime time.Time // Time instance when status was last reported pend sync.WaitGroup // Tracks network request goroutines for graceful shutdown lock sync.RWMutex // Protects fields that can change outside of sync (peers, reqs, root) @@ -458,25 +458,16 @@ func NewSyncer(db ethdb.KeyValueStore) *Syncer { storageIdlers: make(map[string]struct{}), bytecodeIdlers: make(map[string]struct{}), - accountReqs: make(map[uint64]*accountRequest), - storageReqs: make(map[uint64]*storageRequest), - bytecodeReqs: make(map[uint64]*bytecodeRequest), - accountReqFails: make(chan *accountRequest), - storageReqFails: make(chan *storageRequest), - bytecodeReqFails: make(chan *bytecodeRequest), - accountResps: make(chan *accountResponse), - storageResps: make(chan *storageResponse), - bytecodeResps: make(chan *bytecodeResponse), + accountReqs: make(map[uint64]*accountRequest), + storageReqs: make(map[uint64]*storageRequest), + bytecodeReqs: make(map[uint64]*bytecodeRequest), trienodeHealIdlers: make(map[string]struct{}), bytecodeHealIdlers: make(map[string]struct{}), - trienodeHealReqs: make(map[uint64]*trienodeHealRequest), - bytecodeHealReqs: make(map[uint64]*bytecodeHealRequest), - trienodeHealReqFails: make(chan *trienodeHealRequest), - bytecodeHealReqFails: make(chan *bytecodeHealRequest), - trienodeHealResps: make(chan *trienodeHealResponse), - bytecodeHealResps: make(chan *bytecodeHealResponse), + trienodeHealReqs: make(map[uint64]*trienodeHealRequest), + bytecodeHealReqs: make(map[uint64]*bytecodeHealRequest), + stateWriter: db.NewBatch(), } } @@ -544,7 +535,7 @@ func (s *Syncer) Sync(root common.Hash, cancel chan struct{}) error { s.lock.Lock() s.root = root s.healer = &healTask{ - scheduler: state.NewStateSync(root, s.db, nil), + scheduler: state.NewStateSync(root, s.db, nil, s.onHealState), trieTasks: make(map[common.Hash]trie.SyncPath), codeTasks: make(map[common.Hash]struct{}), } @@ -569,6 +560,14 @@ func (s *Syncer) Sync(root common.Hash, cancel chan struct{}) error { }() log.Debug("Starting snapshot sync cycle", "root", root) + + // Flush out the last committed raw states + defer func() { + if s.stateWriter.ValueSize() > 0 { + s.stateWriter.Write() + s.stateWriter.Reset() + } + }() defer s.report(true) // Whether sync completed or not, disregard any future packets @@ -591,6 +590,21 @@ func (s *Syncer) Sync(root common.Hash, cancel chan struct{}) error { peerDropSub := s.peerDrop.Subscribe(peerDrop) defer peerDropSub.Unsubscribe() + // Create a set of unique channels for this sync cycle. 
We need these to be + // ephemeral so a data race doesn't accidentally deliver something stale on + // a persistent channel across syncs (yup, this happened) + var ( + accountReqFails = make(chan *accountRequest) + storageReqFails = make(chan *storageRequest) + bytecodeReqFails = make(chan *bytecodeRequest) + accountResps = make(chan *accountResponse) + storageResps = make(chan *storageResponse) + bytecodeResps = make(chan *bytecodeResponse) + trienodeHealReqFails = make(chan *trienodeHealRequest) + bytecodeHealReqFails = make(chan *bytecodeHealRequest) + trienodeHealResps = make(chan *trienodeHealResponse) + bytecodeHealResps = make(chan *bytecodeHealResponse) + ) for { // Remove all completed tasks and terminate sync if everything's done s.cleanStorageTasks() @@ -599,14 +613,14 @@ func (s *Syncer) Sync(root common.Hash, cancel chan struct{}) error { return nil } // Assign all the data retrieval tasks to any free peers - s.assignAccountTasks(cancel) - s.assignBytecodeTasks(cancel) - s.assignStorageTasks(cancel) + s.assignAccountTasks(accountResps, accountReqFails, cancel) + s.assignBytecodeTasks(bytecodeResps, bytecodeReqFails, cancel) + s.assignStorageTasks(storageResps, storageReqFails, cancel) if len(s.tasks) == 0 { // Sync phase done, run heal phase - s.assignTrienodeHealTasks(cancel) - s.assignBytecodeHealTasks(cancel) + s.assignTrienodeHealTasks(trienodeHealResps, trienodeHealReqFails, cancel) + s.assignBytecodeHealTasks(bytecodeHealResps, bytecodeHealReqFails, cancel) } // Wait for something to happen select { @@ -619,26 +633,26 @@ func (s *Syncer) Sync(root common.Hash, cancel chan struct{}) error { case <-cancel: return ErrCancelled - case req := <-s.accountReqFails: + case req := <-accountReqFails: s.revertAccountRequest(req) - case req := <-s.bytecodeReqFails: + case req := <-bytecodeReqFails: s.revertBytecodeRequest(req) - case req := <-s.storageReqFails: + case req := <-storageReqFails: s.revertStorageRequest(req) - case req := <-s.trienodeHealReqFails: + case req := <-trienodeHealReqFails: s.revertTrienodeHealRequest(req) - case req := <-s.bytecodeHealReqFails: + case req := <-bytecodeHealReqFails: s.revertBytecodeHealRequest(req) - case res := <-s.accountResps: + case res := <-accountResps: s.processAccountResponse(res) - case res := <-s.bytecodeResps: + case res := <-bytecodeResps: s.processBytecodeResponse(res) - case res := <-s.storageResps: + case res := <-storageResps: s.processStorageResponse(res) - case res := <-s.trienodeHealResps: + case res := <-trienodeHealResps: s.processTrienodeHealResponse(res) - case res := <-s.bytecodeHealResps: + case res := <-bytecodeHealResps: s.processBytecodeHealResponse(res) } // Report stats if something meaningful happened @@ -659,6 +673,27 @@ func (s *Syncer) loadSyncStatus() { log.Debug("Scheduled account sync task", "from", task.Next, "last", task.Last) } s.tasks = progress.Tasks + for _, task := range s.tasks { + task.genBatch = ethdb.HookedBatch{ + Batch: s.db.NewBatch(), + OnPut: func(key []byte, value []byte) { + s.accountBytes += common.StorageSize(len(key) + len(value)) + }, + } + task.genTrie = trie.NewStackTrie(task.genBatch) + + for _, subtasks := range task.SubTasks { + for _, subtask := range subtasks { + subtask.genBatch = ethdb.HookedBatch{ + Batch: s.db.NewBatch(), + OnPut: func(key []byte, value []byte) { + s.storageBytes += common.StorageSize(len(key) + len(value)) + }, + } + subtask.genTrie = trie.NewStackTrie(subtask.genBatch) + } + } + } s.snapped = len(s.tasks) == 0 s.accountSynced = progress.AccountSynced @@ 
-689,7 +724,7 @@ func (s *Syncer) loadSyncStatus() { step := new(big.Int).Sub( new(big.Int).Div( new(big.Int).Exp(common.Big2, common.Big256, nil), - big.NewInt(accountConcurrency), + big.NewInt(int64(accountConcurrency)), ), common.Big1, ) for i := 0; i < accountConcurrency; i++ { @@ -698,10 +733,18 @@ func (s *Syncer) loadSyncStatus() { // Make sure we don't overflow if the step is not a proper divisor last = common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff") } + batch := ethdb.HookedBatch{ + Batch: s.db.NewBatch(), + OnPut: func(key []byte, value []byte) { + s.accountBytes += common.StorageSize(len(key) + len(value)) + }, + } s.tasks = append(s.tasks, &accountTask{ Next: next, Last: last, SubTasks: make(map[common.Hash][]*storageTask), + genBatch: batch, + genTrie: trie.NewStackTrie(batch), }) log.Debug("Created account sync task", "from", next, "last", last) next = common.BigToHash(new(big.Int).Add(last.Big(), common.Big1)) @@ -710,6 +753,20 @@ // saveSyncStatus marshals the remaining sync tasks into leveldb. func (s *Syncer) saveSyncStatus() { + // Serialize any partial progress to disk before spinning down + for _, task := range s.tasks { + if err := task.genBatch.Write(); err != nil { + log.Error("Failed to persist account slots", "err", err) + } + for _, subtasks := range task.SubTasks { + for _, subtask := range subtasks { + if err := subtask.genBatch.Write(); err != nil { + log.Error("Failed to persist storage slots", "err", err) + } + } + } + } + // Store the actual progress markers progress := &syncProgress{ Tasks: s.tasks, AccountSynced: s.accountSynced, @@ -733,16 +790,25 @@ // cleanAccountTasks removes account range retrieval tasks that have already been // completed. func (s *Syncer) cleanAccountTasks() { + // If the sync was already done before, don't even bother + if len(s.tasks) == 0 { + return + } + // Sync wasn't finished previously, check for any task that can be finalized for i := 0; i < len(s.tasks); i++ { if s.tasks[i].done { s.tasks = append(s.tasks[:i], s.tasks[i+1:]...) i-- } } + // If everything was just finalized, generate the account trie and start heal if len(s.tasks) == 0 { s.lock.Lock() s.snapped = true s.lock.Unlock() + + // Push the final sync report + s.reportSyncProgress(true) } } @@ -781,7 +847,7 @@ func (s *Syncer) cleanStorageTasks() { // assignAccountTasks attempts to match idle peers to pending account range // retrievals. -func (s *Syncer) assignAccountTasks(cancel chan struct{}) { +func (s *Syncer) assignAccountTasks(success chan *accountResponse, fail chan *accountRequest, cancel chan struct{}) { s.lock.Lock() defer s.lock.Unlock() @@ -827,13 +893,15 @@ func (s *Syncer) assignAccountTasks(cancel chan struct{}) { } // Generate the network query and send it to the peer req := &accountRequest{ - peer: idle, - id: reqid, - cancel: cancel, - stale: make(chan struct{}), - origin: task.Next, - limit: task.Last, - task: task, + peer: idle, + id: reqid, + deliver: success, + revert: fail, + cancel: cancel, + stale: make(chan struct{}), + origin: task.Next, + limit: task.Last, + task: task, } req.timeout = time.AfterFunc(requestTimeout, func() { peer.Log().Debug("Account range request timed out", "reqid", reqid) @@ -859,7 +927,7 @@ } // assignBytecodeTasks attempts to match idle peers to pending code retrievals.
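Editor's note: the genBatch fields initialised above use ethdb.HookedBatch so that byte accounting happens as a side effect of every Put, rather than being threaded through each call site. A stripped-down illustration with a toy batch type standing in for ethdb.Batch:

package main

import "fmt"

// hookedBatch wraps a write batch and calls OnPut for every insertion,
// letting a syncer keep accountBytes/storageBytes counters up to date.
type hookedBatch struct {
	writes map[string][]byte
	OnPut  func(key, value []byte)
}

func (b *hookedBatch) Put(key, value []byte) {
	if b.OnPut != nil {
		b.OnPut(key, value)
	}
	b.writes[string(key)] = value
}

func main() {
	var synced int // plays the role of s.accountBytes

	batch := &hookedBatch{
		writes: make(map[string][]byte),
		OnPut: func(key, value []byte) {
			synced += len(key) + len(value)
		},
	}
	batch.Put([]byte("acct"), []byte("slim-rlp"))
	fmt.Println("bytes accounted:", synced) // 12
}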
-func (s *Syncer) assignBytecodeTasks(cancel chan struct{}) { +func (s *Syncer) assignBytecodeTasks(success chan *bytecodeResponse, fail chan *bytecodeRequest, cancel chan struct{}) { s.lock.Lock() defer s.lock.Unlock() @@ -917,12 +985,14 @@ func (s *Syncer) assignBytecodeTasks(cancel chan struct{}) { } } req := &bytecodeRequest{ - peer: idle, - id: reqid, - cancel: cancel, - stale: make(chan struct{}), - hashes: hashes, - task: task, + peer: idle, + id: reqid, + deliver: success, + revert: fail, + cancel: cancel, + stale: make(chan struct{}), + hashes: hashes, + task: task, } req.timeout = time.AfterFunc(requestTimeout, func() { peer.Log().Debug("Bytecode request timed out", "reqid", reqid) @@ -946,7 +1016,7 @@ func (s *Syncer) assignBytecodeTasks(cancel chan struct{}) { // assignStorageTasks attempts to match idle peers to pending storage range // retrievals. -func (s *Syncer) assignStorageTasks(cancel chan struct{}) { +func (s *Syncer) assignStorageTasks(success chan *storageResponse, fail chan *storageRequest, cancel chan struct{}) { s.lock.Lock() defer s.lock.Unlock() @@ -1039,6 +1109,8 @@ func (s *Syncer) assignStorageTasks(cancel chan struct{}) { req := &storageRequest{ peer: idle, id: reqid, + deliver: success, + revert: fail, cancel: cancel, stale: make(chan struct{}), accounts: accounts, @@ -1081,7 +1153,7 @@ func (s *Syncer) assignStorageTasks(cancel chan struct{}) { // assignTrienodeHealTasks attempts to match idle peers to trie node requests to // heal any trie errors caused by the snap sync's chunked retrieval model. -func (s *Syncer) assignTrienodeHealTasks(cancel chan struct{}) { +func (s *Syncer) assignTrienodeHealTasks(success chan *trienodeHealResponse, fail chan *trienodeHealRequest, cancel chan struct{}) { s.lock.Lock() defer s.lock.Unlock() @@ -1159,13 +1231,15 @@ func (s *Syncer) assignTrienodeHealTasks(cancel chan struct{}) { } } req := &trienodeHealRequest{ - peer: idle, - id: reqid, - cancel: cancel, - stale: make(chan struct{}), - hashes: hashes, - paths: paths, - task: s.healer, + peer: idle, + id: reqid, + deliver: success, + revert: fail, + cancel: cancel, + stale: make(chan struct{}), + hashes: hashes, + paths: paths, + task: s.healer, } req.timeout = time.AfterFunc(requestTimeout, func() { peer.Log().Debug("Trienode heal request timed out", "reqid", reqid) @@ -1189,7 +1263,7 @@ func (s *Syncer) assignTrienodeHealTasks(cancel chan struct{}) { // assignBytecodeHealTasks attempts to match idle peers to bytecode requests to // heal any trie errors caused by the snap sync's chunked retrieval model. -func (s *Syncer) assignBytecodeHealTasks(cancel chan struct{}) { +func (s *Syncer) assignBytecodeHealTasks(success chan *bytecodeHealResponse, fail chan *bytecodeHealRequest, cancel chan struct{}) { s.lock.Lock() defer s.lock.Unlock() @@ -1260,12 +1334,14 @@ func (s *Syncer) assignBytecodeHealTasks(cancel chan struct{}) { } } req := &bytecodeHealRequest{ - peer: idle, - id: reqid, - cancel: cancel, - stale: make(chan struct{}), - hashes: hashes, - task: s.healer, + peer: idle, + id: reqid, + deliver: success, + revert: fail, + cancel: cancel, + stale: make(chan struct{}), + hashes: hashes, + task: s.healer, } req.timeout = time.AfterFunc(requestTimeout, func() { peer.Log().Debug("Bytecode heal request timed out", "reqid", reqid) @@ -1346,7 +1422,7 @@ func (s *Syncer) revertRequests(peer string) { // request and return all failed retrieval tasks to the scheduler for reassignment. 
func (s *Syncer) scheduleRevertAccountRequest(req *accountRequest) { select { - case s.accountReqFails <- req: + case req.revert <- req: // Sync event loop notified case <-req.cancel: // Sync cycle got cancelled @@ -1387,7 +1463,7 @@ func (s *Syncer) revertAccountRequest(req *accountRequest) { // and return all failed retrieval tasks to the scheduler for reassignment. func (s *Syncer) scheduleRevertBytecodeRequest(req *bytecodeRequest) { select { - case s.bytecodeReqFails <- req: + case req.revert <- req: // Sync event loop notified case <-req.cancel: // Sync cycle got cancelled @@ -1428,7 +1504,7 @@ func (s *Syncer) revertBytecodeRequest(req *bytecodeRequest) { // request and return all failed retrieval tasks to the scheduler for reassignment. func (s *Syncer) scheduleRevertStorageRequest(req *storageRequest) { select { - case s.storageReqFails <- req: + case req.revert <- req: // Sync event loop notified case <-req.cancel: // Sync cycle got cancelled @@ -1473,7 +1549,7 @@ func (s *Syncer) revertStorageRequest(req *storageRequest) { // request and return all failed retrieval tasks to the scheduler for reassignment. func (s *Syncer) scheduleRevertTrienodeHealRequest(req *trienodeHealRequest) { select { - case s.trienodeHealReqFails <- req: + case req.revert <- req: // Sync event loop notified case <-req.cancel: // Sync cycle got cancelled @@ -1514,7 +1590,7 @@ func (s *Syncer) revertTrienodeHealRequest(req *trienodeHealRequest) { // request and return all failed retrieval tasks to the scheduler for reassignment. func (s *Syncer) scheduleRevertBytecodeHealRequest(req *bytecodeHealRequest) { select { - case s.bytecodeHealReqFails <- req: + case req.revert <- req: // Sync event loop notified case <-req.cancel: // Sync cycle got cancelled @@ -1569,12 +1645,7 @@ func (s *Syncer) processAccountResponse(res *accountResponse) { continue } if cmp > 0 { - // Chunk overflown, cut off excess, but also update the boundary nodes - for j := i; j < len(res.hashes); j++ { - if err := res.trie.Prove(res.hashes[j][:], 0, res.overflow); err != nil { - panic(err) // Account range was already proven, what happened - } - } + // Chunk overflown, cut off excess res.hashes = res.hashes[:i] res.accounts = res.accounts[:i] res.cont = false // Mark range completed @@ -1650,7 +1721,6 @@ func (s *Syncer) processBytecodeResponse(res *bytecodeResponse) { var ( codes uint64 - bytes common.StorageSize ) for i, hash := range res.hashes { code := res.codes[i] @@ -1668,17 +1738,16 @@ func (s *Syncer) processBytecodeResponse(res *bytecodeResponse) { } } // Push the bytecode into a database batch - s.bytecodeSynced++ - s.bytecodeBytes += common.StorageSize(len(code)) - codes++ - bytes += common.StorageSize(len(code)) - rawdb.WriteCode(batch, hash, code) } + bytes := common.StorageSize(batch.ValueSize()) if err := batch.Write(); err != nil { log.Crit("Failed to persist bytecodes", "err", err) } + s.bytecodeSynced += codes + s.bytecodeBytes += bytes + log.Debug("Persisted set of bytecodes", "count", codes, "bytes", bytes) // If this delivery completed the last pending task, forward the account task @@ -1694,17 +1763,19 @@ func (s *Syncer) processBytecodeResponse(res *bytecodeResponse) { // processStorageResponse integrates an already validated storage response // into the account tasks. 
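Editor's note: processStorageResponse below replaces a linear overflow scan with sort.Search: since delivered keys arrive sorted, the first key beyond the subtask boundary can be found by binary search, flagging completion along the way. A small self-contained version of that cutoff:

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Slot keys arrive sorted, so the first key beyond the task boundary can
	// be found with a binary search instead of a linear scan.
	hashes := []uint64{10, 20, 30, 40, 50}
	last := uint64(35) // end of the subtask's range

	cont := true
	index := sort.Search(len(hashes), func(k int) bool {
		if hashes[k] >= last {
			cont = false // reached or passed the boundary: range complete
		}
		return hashes[k] > last
	})
	hashes = hashes[:index] // drop the overflow

	fmt.Println(hashes, "continuation:", cont) // [10 20 30] continuation: false
}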
func (s *Syncer) processStorageResponse(res *storageResponse) { - // Switch the suntask from pending to idle + // Switch the subtask from pending to idle if res.subTask != nil { res.subTask.req = nil } - batch := s.db.NewBatch() - + batch := ethdb.HookedBatch{ + Batch: s.db.NewBatch(), + OnPut: func(key []byte, value []byte) { + s.storageBytes += common.StorageSize(len(key) + len(value)) + }, + } var ( - slots int - nodes int - skipped int - bytes common.StorageSize + slots int + oldStorageBytes = s.storageBytes ) // Iterate over all the accounts and reconstruct their storage tries from the // delivered slots @@ -1741,27 +1812,60 @@ func (s *Syncer) processStorageResponse(res *storageResponse) { // the subtasks for it within the main account task if tasks, ok := res.mainTask.SubTasks[account]; !ok { var ( - next common.Hash + keys = res.hashes[i] + chunks = uint64(storageConcurrency) + lastKey common.Hash ) - step := new(big.Int).Sub( - new(big.Int).Div( - new(big.Int).Exp(common.Big2, common.Big256, nil), - big.NewInt(storageConcurrency), - ), common.Big1, - ) - for k := 0; k < storageConcurrency; k++ { - last := common.BigToHash(new(big.Int).Add(next.Big(), step)) - if k == storageConcurrency-1 { - // Make sure we don't overflow if the step is not a proper divisor - last = common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff") + if len(keys) > 0 { + lastKey = keys[len(keys)-1] + } + // If the number of slots remaining is low, decrease the + // number of chunks. Somewhere on the order of 10-15K slots + // fit into a packet of 500KB. A key/slot pair is maximum 64 + // bytes, so pessimistically maxRequestSize/64 = 8K. + // + // Chunk so that at least 2 packets are needed to fill a task. + if estimate, err := estimateRemainingSlots(len(keys), lastKey); err == nil { + if n := estimate / (2 * (maxRequestSize / 64)); n+1 < chunks { + chunks = n + 1 + } + log.Debug("Chunked large contract", "initiators", len(keys), "tail", lastKey, "remaining", estimate, "chunks", chunks) + } else { + log.Debug("Chunked large contract", "initiators", len(keys), "tail", lastKey, "chunks", chunks) + } + r := newHashRange(lastKey, chunks) + + // Our first task is the one that was just filled by this response. 
+ batch := ethdb.HookedBatch{ + Batch: s.db.NewBatch(), + OnPut: func(key []byte, value []byte) { + s.storageBytes += common.StorageSize(len(key) + len(value)) + }, + } + tasks = append(tasks, &storageTask{ + Next: common.Hash{}, + Last: r.End(), + root: acc.Root, + genBatch: batch, + genTrie: trie.NewStackTrie(batch), + }) + for r.Next() { + batch := ethdb.HookedBatch{ + Batch: s.db.NewBatch(), + OnPut: func(key []byte, value []byte) { + s.storageBytes += common.StorageSize(len(key) + len(value)) + }, + } tasks = append(tasks, &storageTask{ - Next: next, - Last: last, - root: acc.Root, + Next: r.Start(), + Last: r.End(), + root: acc.Root, + genBatch: batch, + genTrie: trie.NewStackTrie(batch), }) - log.Debug("Created storage sync task", "account", account, "root", acc.Root, "from", next, "last", last) - next = common.BigToHash(new(big.Int).Add(last.Big(), common.Big1)) + } + for _, task := range tasks { + log.Debug("Created storage sync task", "account", account, "root", acc.Root, "from", task.Next, "last", task.Last) } res.mainTask.SubTasks[account] = tasks @@ -1774,66 +1878,81 @@ func (s *Syncer) processStorageResponse(res *storageResponse) { if res.subTask != nil { // Ensure the response doesn't overflow into the subsequent task last := res.subTask.Last.Big() - for k, hash := range res.hashes[i] { - // Mark the range complete if the last is already included. - // Keep iteration to delete the extra states if exists. - cmp := hash.Big().Cmp(last) - if cmp == 0 { + // Find the first overflowing key. While at it, mark res as complete + // if the range includes or passes the 'last' key + index := sort.Search(len(res.hashes[i]), func(k int) bool { + cmp := res.hashes[i][k].Big().Cmp(last) + if cmp >= 0 { res.cont = false - continue - } - if cmp > 0 { - // Chunk overflown, cut off excess, but also update the boundary - for l := k; l < len(res.hashes[i]); l++ { - if err := res.tries[i].Prove(res.hashes[i][l][:], 0, res.overflow); err != nil { - panic(err) // Account range was already proven, what happened - } - } - res.hashes[i] = res.hashes[i][:k] - res.slots[i] = res.slots[i][:k] - res.cont = false // Mark range completed - break } + return cmp > 0 + }) + if index >= 0 { + // cut off excess + res.hashes[i] = res.hashes[i][:index] + res.slots[i] = res.slots[i][:index] } // Forward the relevant storage chunk (even if created just now) if res.cont { - res.subTask.Next = common.BigToHash(new(big.Int).Add(res.hashes[i][len(res.hashes[i])-1].Big(), big.NewInt(1))) + res.subTask.Next = incHash(res.hashes[i][len(res.hashes[i])-1]) } else { res.subTask.done = true } } } - // Iterate over all the reconstructed trie nodes and push them to disk + // Iterate over all the complete contracts, reconstruct the trie nodes and + // push them to disk. If the contract is chunked, the trie nodes will be + // reconstructed later. slots += len(res.hashes[i]) - it := res.nodes[i].NewIterator(nil, nil) - for it.Next() { - // Boundary nodes are not written for the last result, since they are incomplete + if i < len(res.hashes)-1 || res.subTask == nil { + tr := trie.NewStackTrie(batch) + for j := 0; j < len(res.hashes[i]); j++ { + tr.Update(res.hashes[i][j][:], res.slots[i][j]) + } + tr.Commit() + } + // Persist the received storage segments. These flat states may be + // outdated during the sync, but they can be fixed later during the + // snapshot generation.
+ // Persist the received storage segments. This flat state may be + // outdated during the sync, but it can be fixed later during the + // snapshot generation. + for j := 0; j < len(res.hashes[i]); j++ { + rawdb.WriteStorageSnapshot(batch, account, res.hashes[i][j], res.slots[i][j]) + + // If we're storing large contracts, generate the trie nodes + // on the fly so as not to trash the gluing points if i == len(res.hashes)-1 && res.subTask != nil { - if _, ok := res.bounds[common.BytesToHash(it.Key())]; ok { - skipped++ - continue - } - if _, err := res.overflow.Get(it.Key()); err == nil { - skipped++ - continue + res.subTask.genTrie.Update(res.hashes[i][j][:], res.slots[i][j]) + } + } + } + // Large contracts could have generated new trie nodes, flush them to disk + if res.subTask != nil { + if res.subTask.done { + if root, err := res.subTask.genTrie.Commit(); err != nil { + log.Error("Failed to commit stack slots", "err", err) + } else if root == res.subTask.root { + // If the chunk's root matches (an overflown but full delivery), clear the heal request + for i, account := range res.mainTask.res.hashes { + if account == res.accounts[len(res.accounts)-1] { + res.mainTask.needHeal[i] = false + } } } - // Node is not a boundary, persist to disk - batch.Put(it.Key(), it.Value()) - - bytes += common.StorageSize(common.HashLength + len(it.Value())) - nodes++ } - it.Release() + if res.subTask.genBatch.ValueSize() > ethdb.IdealBatchSize || res.subTask.done { + if err := res.subTask.genBatch.Write(); err != nil { + log.Error("Failed to persist stack slots", "err", err) + } + res.subTask.genBatch.Reset() + } } + // Flush anything written just now and update the stats if err := batch.Write(); err != nil { log.Crit("Failed to persist storage slots", "err", err) } s.storageSynced += uint64(slots) - s.storageBytes += bytes - log.Debug("Persisted set of storage slots", "accounts", len(res.hashes), "slots", slots, "nodes", nodes, "skipped", skipped, "bytes", bytes) + log.Debug("Persisted set of storage slots", "accounts", len(res.hashes), "slots", slots, "bytes", s.storageBytes-oldStorageBytes) // If this delivery completed the last pending task, forward the account task // to the next chunk @@ -1928,79 +2047,66 @@ func (s *Syncer) forwardAccountTask(task *accountTask) { } task.res = nil - // Iterate over all the accounts and gather all the incomplete trie nodes. A - // node is incomplete if we haven't yet filled it (sync was interrupted), or - // if we filled it in multiple chunks (storage trie), in which case the few - // nodes on the chunk boundaries are missing. - incompletes := light.NewNodeSet() - for i := range res.accounts { - // If the filling was interrupted, mark everything after as incomplete
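Both the storage path above and the account path below meter written bytes through ethdb.HookedBatch, which this patch defines further down in ethdb/batch.go. The usage pattern, sketched with an assumed db of type ethdb.Database:

// Wrap a batch so that every Put also bumps a running byte counter,
// instead of summing sizes by hand at each write site.
var written int
batch := ethdb.HookedBatch{
	Batch: db.NewBatch(),
	OnPut: func(key []byte, value []byte) { written += len(key) + len(value) },
}
_ = batch.Put([]byte("k"), []byte("v")) // written is now 2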
+ // Persist the received account segments. This flat state may be + // outdated during the sync, but it can be fixed later during the + // snapshot generation. + oldAccountBytes := s.accountBytes + + batch := ethdb.HookedBatch{ + Batch: s.db.NewBatch(), + OnPut: func(key []byte, value []byte) { + s.accountBytes += common.StorageSize(len(key) + len(value)) + }, + } + for i, hash := range res.hashes { if task.needCode[i] || task.needState[i] { - for j := i; j < len(res.accounts); j++ { - if err := res.trie.Prove(res.hashes[j][:], 0, incompletes); err != nil { - panic(err) // Account range was already proven, what happened - } - } break } - // Filling not interrupted until this point, mark incomplete if needs healing - if task.needHeal[i] { - if err := res.trie.Prove(res.hashes[i][:], 0, incompletes); err != nil { - panic(err) // Account range was already proven, what happened - } - } - } - // Persist every finalized trie node that's not on the boundary - batch := s.db.NewBatch() + slim := snapshot.SlimAccountRLP(res.accounts[i].Nonce, res.accounts[i].Balance, res.accounts[i].Root, res.accounts[i].CodeHash) + rawdb.WriteAccountSnapshot(batch, hash, slim) - var ( - nodes int - skipped int - bytes common.StorageSize - ) - it := res.nodes.NewIterator(nil, nil) - for it.Next() { - // Boundary nodes are not written, since they are incomplete - if _, ok := res.bounds[common.BytesToHash(it.Key())]; ok { - skipped++ - continue - } - // Overflow nodes are not written, since they mess with another task - if _, err := res.overflow.Get(it.Key()); err == nil { - skipped++ - continue - } - // Accounts with split storage requests are incomplete - if _, err := incompletes.Get(it.Key()); err == nil { - skipped++ - continue + // If the task is complete, drop it into the stack trie to generate + // account trie nodes for it + if !task.needHeal[i] { + full, err := snapshot.FullAccountRLP(slim) // TODO(karalabe): Slim parsing can be omitted + if err != nil { + panic(err) // Really shouldn't ever happen + } + task.genTrie.Update(hash[:], full) } - // Node is neither a boundary, not an incomplete account, persist to disk - batch.Put(it.Key(), it.Value()) - - bytes += common.StorageSize(common.HashLength + len(it.Value())) - nodes++ } - it.Release() - + // Flush anything written just now and update the stats if err := batch.Write(); err != nil { log.Crit("Failed to persist accounts", "err", err) } - s.accountBytes += bytes s.accountSynced += uint64(len(res.accounts)) - log.Debug("Persisted range of accounts", "accounts", len(res.accounts), "nodes", nodes, "skipped", skipped, "bytes", bytes) - // Task filling persisted, push the chunk marker forward to the first // account still missing data. for i, hash := range res.hashes { if task.needCode[i] || task.needState[i] { return } - task.Next = common.BigToHash(new(big.Int).Add(hash.Big(), big.NewInt(1))) + task.Next = incHash(hash) } // All accounts marked as complete, track if the entire task is done task.done = !res.cont + + // Stack trie could have generated trie nodes, push them to disk (we need to + // flush after finalizing task.done); it's fine even if we crash and lose this + // write, as it will only cause more data to be downloaded during heal.
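The flush rule that follows (mirroring the one in processStorageResponse above) can be read as a single predicate. A sketch only: flushNeeded is a hypothetical name, and the 100KiB figure is assumed to be go-ethereum's ethdb.IdealBatchSize at the time of this patch:

func flushNeeded(pendingBytes int, done bool) bool {
	// Write the generator batch once it outgrows the ideal batch size, or
	// unconditionally when the task finishes, so its stack-trie nodes are
	// not lost if the sync is interrupted.
	const idealBatchSize = 100 * 1024
	return pendingBytes > idealBatchSize || done
}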
+ if task.done { + if _, err := task.genTrie.Commit(); err != nil { + log.Error("Failed to commit stack account", "err", err) + } + } + if task.genBatch.ValueSize() > ethdb.IdealBatchSize || task.done { + if err := task.genBatch.Write(); err != nil { + log.Error("Failed to persist stack account", "err", err) + } + task.genBatch.Reset() + } + log.Debug("Persisted range of accounts", "accounts", len(res.accounts), "bytes", s.accountBytes-oldAccountBytes) } // OnAccounts is a callback method to invoke when a range of accounts are @@ -2044,7 +2150,6 @@ func (s *Syncer) OnAccounts(peer SyncPeer, id uint64, hashes []common.Hash, acco s.lock.Unlock() return nil } - // Response is valid, but check if peer is signalling that it does not have // the requested data. For account range queries that means the state being // retrieved was either already pruned remotely, or the peer is not yet @@ -2076,22 +2181,13 @@ func (s *Syncer) OnAccounts(peer SyncPeer, id uint64, hashes []common.Hash, acco if len(keys) > 0 { end = keys[len(keys)-1] } - db, tr, notary, cont, err := trie.VerifyRangeProof(root, req.origin[:], end, keys, accounts, proofdb) + cont, err := trie.VerifyRangeProof(root, req.origin[:], end, keys, accounts, proofdb) if err != nil { logger.Warn("Account range failed proof", "err", err) // Signal this request as failed, and ready for rescheduling s.scheduleRevertAccountRequest(req) return err } - // Partial trie reconstructed, send it to the scheduler for storage filling - bounds := make(map[common.Hash]struct{}) - - it := notary.Accessed().NewIterator(nil, nil) - for it.Next() { - bounds[common.BytesToHash(it.Key())] = struct{}{} - } - it.Release() - accs := make([]*state.Account, len(accounts)) for i, account := range accounts { acc := new(state.Account) @@ -2104,14 +2200,10 @@ func (s *Syncer) OnAccounts(peer SyncPeer, id uint64, hashes []common.Hash, acco task: req.task, hashes: hashes, accounts: accs, - nodes: db, - trie: tr, - bounds: bounds, - overflow: light.NewNodeSet(), cont: cont, } select { - case s.accountResps <- response: + case req.deliver <- response: case <-req.cancel: case <-req.stale: } @@ -2217,7 +2309,7 @@ func (s *Syncer) onByteCodes(peer SyncPeer, id uint64, bytecodes [][]byte) error codes: codes, } select { - case s.bytecodeResps <- response: + case req.deliver <- response: case <-req.cancel: case <-req.stale: } @@ -2306,12 +2398,8 @@ func (s *Syncer) OnStorage(peer SyncPeer, id uint64, hashes [][]common.Hash, slo s.lock.Unlock() // Reconstruct the partial tries from the response and verify them - var ( - dbs = make([]ethdb.KeyValueStore, len(hashes)) - tries = make([]*trie.Trie, len(hashes)) - notary *trie.KeyValueNotary - cont bool - ) + var cont bool + for i := 0; i < len(hashes); i++ { // Convert the keys and proofs into an internal format keys := make([][]byte, len(hashes[i])) @@ -2328,7 +2416,7 @@ func (s *Syncer) OnStorage(peer SyncPeer, id uint64, hashes [][]common.Hash, slo if len(nodes) == 0 { // No proof has been attached, the response must cover the entire key // space and hash to the origin root. 
- dbs[i], tries[i], _, _, err = trie.VerifyRangeProof(req.roots[i], nil, nil, keys, slots[i], nil) + _, err = trie.VerifyRangeProof(req.roots[i], nil, nil, keys, slots[i], nil) if err != nil { s.scheduleRevertStorageRequest(req) // reschedule request logger.Warn("Storage slots failed proof", "err", err) @@ -2343,7 +2431,7 @@ if len(keys) > 0 { end = keys[len(keys)-1] } - dbs[i], tries[i], notary, cont, err = trie.VerifyRangeProof(req.roots[i], req.origin[:], end, keys, slots[i], proofdb) + cont, err = trie.VerifyRangeProof(req.roots[i], req.origin[:], end, keys, slots[i], proofdb) if err != nil { s.scheduleRevertStorageRequest(req) // reschedule request logger.Warn("Storage range failed proof", "err", err) @@ -2352,15 +2440,6 @@ } } // Partial tries reconstructed, send them to the scheduler for storage filling - bounds := make(map[common.Hash]struct{}) - - if notary != nil { // if all contract storages are delivered in full, no notary will be created - it := notary.Accessed().NewIterator(nil, nil) - for it.Next() { - bounds[common.BytesToHash(it.Key())] = struct{}{} - } - it.Release() - } response := &storageResponse{ mainTask: req.mainTask, subTask: req.subTask, accounts: req.accounts, roots: req.roots, hashes: hashes, slots: slots, - nodes: dbs, - tries: tries, - bounds: bounds, - overflow: light.NewNodeSet(), cont: cont, } select { - case s.storageResps <- response: + case req.deliver <- response: case <-req.cancel: case <-req.stale: } @@ -2469,7 +2544,7 @@ func (s *Syncer) OnTrieNodes(peer SyncPeer, id uint64, trienodes [][]byte) error nodes: nodes, } select { - case s.trienodeHealResps <- response: + case req.deliver <- response: case <-req.cancel: case <-req.stale: } @@ -2562,13 +2637,40 @@ func (s *Syncer) onHealByteCodes(peer SyncPeer, id uint64, bytecodes [][]byte) e codes: codes, } select { - case s.bytecodeHealResps <- response: + case req.deliver <- response: case <-req.cancel: case <-req.stale: } return nil } +// onHealState is a callback method to invoke when a flat state (account +// or storage slot) is downloaded during the healing stage. The flat states +// can be persisted blindly and fixed later in the generation stage. +// Note it's not concurrency safe; the caller must handle concurrent access. +func (s *Syncer) onHealState(paths [][]byte, value []byte) error { + if len(paths) == 1 { + var account state.Account + if err := rlp.DecodeBytes(value, &account); err != nil { + return nil + } + blob := snapshot.SlimAccountRLP(account.Nonce, account.Balance, account.Root, account.CodeHash) + rawdb.WriteAccountSnapshot(s.stateWriter, common.BytesToHash(paths[0]), blob) + s.accountHealed += 1 + s.accountHealedBytes += common.StorageSize(1 + common.HashLength + len(blob)) + } + if len(paths) == 2 { + rawdb.WriteStorageSnapshot(s.stateWriter, common.BytesToHash(paths[0]), common.BytesToHash(paths[1]), value) + s.storageHealed += 1 + s.storageHealedBytes += common.StorageSize(1 + 2*common.HashLength + len(value)) + } + if s.stateWriter.ValueSize() > ethdb.IdealBatchSize { + s.stateWriter.Write() // It's fine to ignore the error here + s.stateWriter.Reset() + } + return nil +} +
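A change that recurs across all the handlers above: responses are now sent to req.deliver, a channel captured when the request was issued, rather than to a shared syncer-wide channel such as s.storageResps. The shape of that handoff, sketched with placeholder types (the request and response fields here are illustrative, not the patch's exact structs):

type request struct {
	deliver chan *response  // per-request delivery channel
	cancel  <-chan struct{} // closed if the sync cycle is torn down
	stale   <-chan struct{} // closed if the request timed out
}

type response struct{ /* payload elided */ }

// deliverResponse hands res to whichever goroutine issued the request,
// giving up if the request was cancelled or went stale in the meantime.
func deliverResponse(req *request, res *response) {
	select {
	case req.deliver <- res:
	case <-req.cancel:
	case <-req.stale:
	}
}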
// hashSpace is the total size of the 256 bit hash space for accounts. var hashSpace = new(big.Int).Exp(common.Big2, common.Big256, nil) @@ -2584,7 +2686,7 @@ func (s *Syncer) report(force bool) { // reportSyncProgress calculates various status reports and provides them to the user. func (s *Syncer) reportSyncProgress(force bool) { // Don't report all the events, just occasionally - if !force && time.Since(s.logTime) < 3*time.Second { + if !force && time.Since(s.logTime) < 8*time.Second { return } // Don't report anything until we have a meaningful progress @@ -2612,9 +2714,9 @@ func (s *Syncer) reportSyncProgress(force bool) { // Create a mega progress report var ( progress = fmt.Sprintf("%.2f%%", float64(synced)*100/estBytes) - accounts = fmt.Sprintf("%d@%v", s.accountSynced, s.accountBytes.TerminalString()) - storage = fmt.Sprintf("%d@%v", s.storageSynced, s.storageBytes.TerminalString()) - bytecode = fmt.Sprintf("%d@%v", s.bytecodeSynced, s.bytecodeBytes.TerminalString()) + accounts = fmt.Sprintf("%v@%v", log.FormatLogfmtUint64(s.accountSynced), s.accountBytes.TerminalString()) + storage = fmt.Sprintf("%v@%v", log.FormatLogfmtUint64(s.storageSynced), s.storageBytes.TerminalString()) + bytecode = fmt.Sprintf("%v@%v", log.FormatLogfmtUint64(s.bytecodeSynced), s.bytecodeBytes.TerminalString()) ) log.Info("State sync in progress", "synced", progress, "state", synced, "accounts", accounts, "slots", storage, "codes", bytecode, "eta", common.PrettyDuration(estTime-elapsed)) @@ -2623,16 +2725,34 @@ // reportHealProgress calculates various status reports and provides them to the user. func (s *Syncer) reportHealProgress(force bool) { // Don't report all the events, just occasionally - if !force && time.Since(s.logTime) < 3*time.Second { + if !force && time.Since(s.logTime) < 8*time.Second { return } s.logTime = time.Now() // Create a mega progress report var ( - trienode = fmt.Sprintf("%d@%v", s.trienodeHealSynced, s.trienodeHealBytes.TerminalString()) - bytecode = fmt.Sprintf("%d@%v", s.bytecodeHealSynced, s.bytecodeHealBytes.TerminalString()) + trienode = fmt.Sprintf("%v@%v", log.FormatLogfmtUint64(s.trienodeHealSynced), s.trienodeHealBytes.TerminalString()) + bytecode = fmt.Sprintf("%v@%v", log.FormatLogfmtUint64(s.bytecodeHealSynced), s.bytecodeHealBytes.TerminalString()) + accounts = fmt.Sprintf("%v@%v", log.FormatLogfmtUint64(s.accountHealed), s.accountHealedBytes.TerminalString()) + storage = fmt.Sprintf("%v@%v", log.FormatLogfmtUint64(s.storageHealed), s.storageHealedBytes.TerminalString()) ) - log.Info("State heal in progress", "nodes", trienode, "codes", bytecode, - "pending", s.healer.scheduler.Pending()) + log.Info("State heal in progress", "accounts", accounts, "slots", storage, + "codes", bytecode, "nodes", trienode, "pending", s.healer.scheduler.Pending()) +} + +// estimateRemainingSlots tries to determine roughly how many slots are left in +// a contract storage, based on the number of keys and the last hash. This method +// assumes that the hashes are lexicographically ordered and evenly distributed.
+func estimateRemainingSlots(hashes int, last common.Hash) (uint64, error) { + if last == (common.Hash{}) { + return 0, errors.New("last hash empty") + } + space := new(big.Int).Mul(math.MaxBig256, big.NewInt(int64(hashes))) + space.Div(space, last.Big()) + if !space.IsUint64() { + // Gigantic address space probably due to too few or malicious slots + return 0, errors.New("too few slots for estimation") + } + return space.Uint64() - uint64(hashes), nil } diff --git a/eth/protocols/snap/sync_test.go b/eth/protocols/snap/sync_test.go index 3e9778dbc7..a1cc3581a8 100644 --- a/eth/protocols/snap/sync_test.go +++ b/eth/protocols/snap/sync_test.go @@ -135,6 +135,12 @@ type testPeer struct { trieRequestHandler trieHandlerFunc codeRequestHandler codeHandlerFunc term func() + + // counters + nAccountRequests int + nStorageRequests int + nBytecodeRequests int + nTrienodeRequests int } func newTestPeer(id string, t *testing.T, term func()) *testPeer { @@ -156,19 +162,30 @@ func newTestPeer(id string, t *testing.T, term func()) *testPeer { func (t *testPeer) ID() string { return t.id } func (t *testPeer) Log() log.Logger { return t.logger } +func (t *testPeer) Stats() string { + return fmt.Sprintf(`Account requests: %d +Storage requests: %d +Bytecode requests: %d +Trienode requests: %d +`, t.nAccountRequests, t.nStorageRequests, t.nBytecodeRequests, t.nTrienodeRequests) +} + func (t *testPeer) RequestAccountRange(id uint64, root, origin, limit common.Hash, bytes uint64) error { t.logger.Trace("Fetching range of accounts", "reqid", id, "root", root, "origin", origin, "limit", limit, "bytes", common.StorageSize(bytes)) + t.nAccountRequests++ go t.accountRequestHandler(t, id, root, origin, limit, bytes) return nil } func (t *testPeer) RequestTrieNodes(id uint64, root common.Hash, paths []TrieNodePathSet, bytes uint64) error { t.logger.Trace("Fetching set of trie nodes", "reqid", id, "root", root, "pathsets", len(paths), "bytes", common.StorageSize(bytes)) + t.nTrienodeRequests++ go t.trieRequestHandler(t, id, root, paths, bytes) return nil } func (t *testPeer) RequestStorageRanges(id uint64, root common.Hash, accounts []common.Hash, origin, limit []byte, bytes uint64) error { + t.nStorageRequests++ if len(accounts) == 1 && origin != nil { t.logger.Trace("Fetching range of large storage slots", "reqid", id, "root", root, "account", accounts[0], "origin", common.BytesToHash(origin), "limit", common.BytesToHash(limit), "bytes", common.StorageSize(bytes)) } else { @@ -179,6 +196,7 @@ func (t *testPeer) RequestStorageRanges(id uint64, root common.Hash, accounts [] } func (t *testPeer) RequestByteCodes(id uint64, hashes []common.Hash, bytes uint64) error { + t.nBytecodeRequests++ t.logger.Trace("Fetching set of byte codes", "reqid", id, "hashes", len(hashes), "bytes", common.StorageSize(bytes)) go t.codeRequestHandler(t, id, hashes, bytes) return nil @@ -1365,7 +1383,7 @@ func makeBoundaryAccountTrie(n int) (*trie.Trie, entrySlice) { step := new(big.Int).Sub( new(big.Int).Div( new(big.Int).Exp(common.Big2, common.Big256, nil), - big.NewInt(accountConcurrency), + big.NewInt(int64(accountConcurrency)), ), common.Big1, ) for i := 0; i < accountConcurrency; i++ { @@ -1529,7 +1547,7 @@ func makeBoundaryStorageTrie(n int, db *trie.Database) (*trie.Trie, entrySlice) step := new(big.Int).Sub( new(big.Int).Div( new(big.Int).Exp(common.Big2, common.Big256, nil), - big.NewInt(accountConcurrency), + big.NewInt(int64(accountConcurrency)), ), common.Big1, ) for i := 0; i < accountConcurrency; i++ { @@ -1605,3 +1623,94 @@ 
func verifyTrie(db ethdb.KeyValueStore, root common.Hash, t *testing.T) { } t.Logf("accounts: %d, slots: %d", accounts, slots) } + +// TestSyncAccountPerformance tests how efficient the snap algo is at minimizing +// state healing +func TestSyncAccountPerformance(t *testing.T) { + // Set the account concurrency to 1. This _should_ result in the + // range root being correct, and no healing should be needed + defer func(old int) { accountConcurrency = old }(accountConcurrency) + accountConcurrency = 1 + + var ( + once sync.Once + cancel = make(chan struct{}) + term = func() { + once.Do(func() { + close(cancel) + }) + } + ) + sourceAccountTrie, elems := makeAccountTrieNoStorage(100) + + mkSource := func(name string) *testPeer { + source := newTestPeer(name, t, term) + source.accountTrie = sourceAccountTrie + source.accountValues = elems + return source + } + src := mkSource("source") + syncer := setupSyncer(src) + if err := syncer.Sync(sourceAccountTrie.Hash(), cancel); err != nil { + t.Fatalf("sync failed: %v", err) + } + verifyTrie(syncer.db, sourceAccountTrie.Hash(), t) + // The trie root will always be requested, since it is added when the snap + // sync cycle starts. When popping the queue, we do not look it up again. + // Doing so would bring this number down to zero in this artificial testcase, + // but only add extra IO for no reason in practice. + if have, want := src.nTrienodeRequests, 1; have != want { + fmt.Printf(src.Stats()) + t.Errorf("trie node heal requests wrong, want %d, have %d", want, have) + } +} + +func TestSlotEstimation(t *testing.T) { + for i, tc := range []struct { + last common.Hash + count int + want uint64 + }{ + { + // Half the space + common.HexToHash("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + 100, + 100, + }, + { + // 1 / 16th + common.HexToHash("0x0fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"), + 100, + 1500, + }, + { + // Bit more than 1 / 16th + common.HexToHash("0x1000000000000000000000000000000000000000000000000000000000000000"), + 100, + 1499, + }, + { + // Almost everything + common.HexToHash("0xF000000000000000000000000000000000000000000000000000000000000000"), + 100, + 6, + }, + { + // Almost nothing -- should lead to error + common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000001"), + 1, + 0, + }, + { + // Nothing -- should lead to error + common.Hash{}, + 100, + 0, + }, + } { + have, _ := estimateRemainingSlots(tc.count, tc.last) + if want := tc.want; have != want { + t.Errorf("test %d: have %d want %d", i, have, want) + } + } +} diff --git a/eth/protocols/snap/tracker.go b/eth/protocols/snap/tracker.go new file mode 100644 index 0000000000..2cf59cc23a --- /dev/null +++ b/eth/protocols/snap/tracker.go @@ -0,0 +1,26 @@ +// Copyright 2021 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details.
+// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>. + +package snap + +import ( + "time" + + "github.com/ethereum/go-ethereum/p2p/tracker" +) + +// requestTracker is a singleton tracker for request times. +var requestTracker = tracker.New(ProtocolName, time.Minute) diff --git a/eth/state_accessor.go b/eth/state_accessor.go index 4b29e4c62d..4cfd74d3cd 100644 --- a/eth/state_accessor.go +++ b/eth/state_accessor.go @@ -114,11 +114,6 @@ func (eth *Ethereum) stateAtBlock(block *types.Block, reexec uint64, base *state logged time.Time parent common.Hash ) - defer func() { - if err != nil && parent != (common.Hash{}) { - database.TrieDB().Dereference(parent) - } - }() for current.NumberU64() < origin { // Print progress logs if long enough time elapsed if time.Since(logged) > 8*time.Second && report { diff --git a/eth/sync.go b/eth/sync.go index 60f77593fd..fe7dcb0680 100644 --- a/eth/sync.go +++ b/eth/sync.go @@ -65,7 +65,7 @@ func (h *handler) syncTransactions(p *eth.Peer) { // The eth/65 protocol introduces proper transaction announcements, so instead // of dripping transactions across multiple peers, just send the entire list as // an announcement and let the remote side decide what they need (likely nothing). - if p.Version() == eth.ETH65 { + if p.Version() >= eth.ETH65 { hashes := make([]common.Hash, len(txs)) for i, tx := range txs { hashes[i] = tx.Hash() @@ -96,7 +96,7 @@ func (h *handler) txsyncLoop64() { // send starts sending a pack of transactions from the sync. send := func(s *txsync) { - if s.p.Version() == eth.ETH65 { + if s.p.Version() >= eth.ETH65 { panic("initial transaction syncer running on eth/65+") } // Fill pack with transactions up to the target size. diff --git a/eth/sync_test.go b/eth/sync_test.go index 9cc806b18a..a0c6f86023 100644 --- a/eth/sync_test.go +++ b/eth/sync_test.go @@ -28,8 +28,8 @@ import ( ) // Tests that fast sync is disabled after a successful sync cycle. -func TestFastSyncDisabling64(t *testing.T) { testFastSyncDisabling(t, 64) } -func TestFastSyncDisabling65(t *testing.T) { testFastSyncDisabling(t, 65) } +func TestFastSyncDisabling65(t *testing.T) { testFastSyncDisabling(t, eth.ETH65) } +func TestFastSyncDisabling66(t *testing.T) { testFastSyncDisabling(t, eth.ETH66) } // Tests that fast sync gets disabled as soon as a real block is successfully // imported into the blockchain. diff --git a/eth/tracers/api.go b/eth/tracers/api.go index 299d095077..d5489db63b 100644 --- a/eth/tracers/api.go +++ b/eth/tracers/api.go @@ -187,6 +187,16 @@ type TraceConfig struct { Reexec *uint64 } +// TraceCallConfig is the config for the traceCall API. It holds one extra +// field to override the state for tracing. +type TraceCallConfig struct { + *vm.LogConfig + Tracer *string + Timeout *string + Reexec *uint64 + StateOverrides *ethapi.StateOverride +} +
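TraceCallConfig lets debug_traceCall run against a modified state before tracing. A sketch of building such a config, mirroring the test added later in this patch (sender is an assumed common.Address; newRPCBalance is the test helper defined near the end of api_test.go):

// Trace a call as if the sender held 1 ether, using the callTracer.
tracer := "callTracer"
config := &TraceCallConfig{
	Tracer: &tracer,
	StateOverrides: &ethapi.StateOverride{
		sender: ethapi.OverrideAccount{
			Balance: newRPCBalance(big.NewInt(params.Ether)),
		},
	},
}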
// StdTraceConfig holds extra parameters to standard-json trace functions. type StdTraceConfig struct { vm.LogConfig @@ -302,7 +312,7 @@ func (api *API) traceChain(ctx context.Context, start, end *types.Block, config // Trace all the transactions contained within for i, tx := range task.block.Transactions() { msg, _ := tx.AsMessage(signer) - msg = api.clearMessageDataIfNonParty(msg, psm) + msg = api.clearMessageDataIfNonParty(msg, psm) // Quorum txctx := &txTraceContext{ index: i, hash: tx.Hash(), @@ -842,7 +852,7 @@ func (api *API) TraceTransaction(ctx context.Context, hash common.Hash, config * // created during the execution of EVM if the given transaction was added on // top of the provided block and returns them as a JSON object. // You can provide -2 as a block number to trace on top of the pending block. -func (api *API) TraceCall(ctx context.Context, args ethapi.CallArgs, blockNrOrHash rpc.BlockNumberOrHash, config *TraceConfig) (interface{}, error) { +func (api *API) TraceCall(ctx context.Context, args ethapi.CallArgs, blockNrOrHash rpc.BlockNumberOrHash, config *TraceCallConfig) (interface{}, error) { // Try to retrieve the specified block var ( err error @@ -852,6 +862,8 @@ func (api *API) TraceCall(ctx context.Context, args ethapi.CallArgs, blockNrOrHa block, err = api.blockByHash(ctx, hash) } else if number, ok := blockNrOrHash.Number(); ok { block, err = api.blockByNumber(ctx, number) + } else { + return nil, errors.New("invalid arguments; neither block nor hash specified") } if err != nil { return nil, err @@ -870,24 +882,38 @@ func (api *API) TraceCall(ctx context.Context, args ethapi.CallArgs, blockNrOrHa return nil, err } + // Apply the customized state rules if required. + if config != nil { + if err := config.StateOverrides.Apply(statedb); err != nil { + return nil, err + } + } // Execute the trace msg := args.ToMessage(api.backend.RPCGasCap()) vmctx := core.NewEVMBlockContext(block.Header(), api.chainContext(ctx), nil) // Quorum: we run the call with privateState as publicState to check if we have a result; if it is not empty, then it is a private call var noTracerConfig *TraceConfig + var traceConfig *TraceConfig if config != nil { // create a new config keeping the tracer, used to re-run the call with tracing below - noTracerConfig = &TraceConfig{ + traceConfig = &TraceConfig{ LogConfig: config.LogConfig, + Tracer: config.Tracer, + Timeout: config.Timeout, Reexec: config.Reexec, + } + // create a new config without the tracer so that we get an ExecutionResult back from api.traceTx + noTracerConfig = &TraceConfig{ + LogConfig: config.LogConfig, Timeout: config.Timeout, + Reexec: config.Reexec, + } } res, err := api.traceTx(ctx, msg, new(txTraceContext), vmctx, statedb, privateStateDb, privateStateRepo, noTracerConfig) // test private with no config if exeRes, ok := res.(*ethapi.ExecutionResult); ok && err == nil && len(exeRes.StructLogs) > 0 { // check there is a result if config != nil && config.Tracer != nil { // re-run the private call with the custom JS tracer - return api.traceTx(ctx, msg, new(txTraceContext), vmctx, statedb, privateStateDb, privateStateRepo, config) // re-run with trace + return api.traceTx(ctx, msg, new(txTraceContext), vmctx, statedb, privateStateDb, privateStateRepo, traceConfig) // re-run with trace } return res, err // return private result with no tracer } else if err == nil && !ok { @@ -895,7 +921,7 @@ // End Quorum - return api.traceTx(ctx, msg, new(txTraceContext), vmctx, statedb, privateStateDb, privateStateRepo,
config) + return api.traceTx(ctx, msg, new(txTraceContext), vmctx, statedb, privateStateDb, privateStateRepo, traceConfig) } // traceTx configures a new tracer according to the provided configuration, and @@ -944,7 +970,7 @@ func (api *API) traceTx(ctx context.Context, message core.Message, txctx *txTrac psm, err := api.chainContext(ctx).PrivateStateManager().ResolveForUserContext(ctx) if err != nil { - return nil, fmt.Errorf("tracing failed: %v", err) + return nil, fmt.Errorf("tracing failed: %w", err) } // Run the transaction with tracing enabled. @@ -966,7 +992,7 @@ func (api *API) traceTx(ctx context.Context, message core.Message, txctx *txTrac result, err := core.ApplyMessage(vmenv, message, new(core.GasPool).AddGas(message.Gas())) if err != nil { - return nil, fmt.Errorf("tracing failed: %v", err) + return nil, fmt.Errorf("tracing failed: %w", err) } // Depending on the tracer type, format and return the output. diff --git a/eth/tracers/api_test.go b/eth/tracers/api_test.go index a84a339437..bb483caa7f 100644 --- a/eth/tracers/api_test.go +++ b/eth/tracers/api_test.go @@ -20,6 +20,7 @@ import ( "bytes" "context" "crypto/ecdsa" + "encoding/json" "errors" "fmt" "math/big" @@ -213,7 +214,7 @@ func TestTraceCall(t *testing.T) { var testSuite = []struct { blockNumber rpc.BlockNumber call ethapi.CallArgs - config *TraceConfig + config *TraceCallConfig expectErr error expect interface{} }{ @@ -299,7 +300,14 @@ func TestTraceCall(t *testing.T) { }, } for _, testspec := range testSuite { - result, err := api.TraceCall(context.Background(), testspec.call, rpc.BlockNumberOrHash{BlockNumber: &testspec.blockNumber}, testspec.config) + tc := &TraceCallConfig{} + if testspec.config != nil { + tc.LogConfig = testspec.config.LogConfig + tc.Tracer = testspec.config.Tracer + tc.Timeout = testspec.config.Timeout + tc.Reexec = testspec.config.Reexec + } + result, err := api.TraceCall(context.Background(), testspec.call, rpc.BlockNumberOrHash{BlockNumber: &testspec.blockNumber}, tc) if testspec.expectErr != nil { if err == nil { t.Errorf("Expect error %v, get nothing", testspec.expectErr) continue } if !reflect.DeepEqual(err, testspec.expectErr) { t.Errorf("Error mismatch, want %v, get %v", testspec.expectErr, err) @@ -320,6 +328,147 @@ func TestTraceCall(t *testing.T) { } } +func TestOverridenTraceCall(t *testing.T) { + t.Parallel() + + // Initialize test accounts + accounts := newAccounts(3) + genesis := &core.Genesis{Alloc: core.GenesisAlloc{ + accounts[0].addr: {Balance: big.NewInt(params.Ether)}, + accounts[1].addr: {Balance: big.NewInt(params.Ether)}, + accounts[2].addr: {Balance: big.NewInt(params.Ether)}, + }} + genBlocks := 10 + signer := types.HomesteadSigner{} + api := NewAPI(newTestBackend(t, genBlocks, genesis, func(i int, b *core.BlockGen) { + // Transfer from account[0] to account[1] + // value: 1000 wei + // fee: 0 wei + tx, _ := types.SignTx(types.NewTransaction(uint64(i), accounts[1].addr, big.NewInt(1000), params.TxGas, big.NewInt(0), nil), signer, accounts[0].key) + b.AddTx(tx) + })) + randomAccounts, tracer := newAccounts(3), "callTracer" + + var testSuite = []struct { + blockNumber rpc.BlockNumber + call ethapi.CallArgs + config *TraceCallConfig + expectErr error + expect *callTrace + }{ + // Successful call with state overriding + { + blockNumber: rpc.PendingBlockNumber, + call: ethapi.CallArgs{ + From: &randomAccounts[0].addr, + To: &randomAccounts[1].addr, + Value: (*hexutil.Big)(big.NewInt(1000)), + }, + config: &TraceCallConfig{ + Tracer: &tracer, + StateOverrides: &ethapi.StateOverride{ + randomAccounts[0].addr: ethapi.OverrideAccount{Balance: newRPCBalance(new(big.Int).Mul(big.NewInt(1),
big.NewInt(params.Ether)))}, + }, + }, + expectErr: nil, + expect: &callTrace{ + Type: "CALL", + From: randomAccounts[0].addr, + To: randomAccounts[1].addr, + Gas: newRPCUint64(24979000), + GasUsed: newRPCUint64(0), + Value: (*hexutil.Big)(big.NewInt(1000)), + }, + }, + // Invalid call without state overriding + { + blockNumber: rpc.PendingBlockNumber, + call: ethapi.CallArgs{ + From: &randomAccounts[0].addr, + To: &randomAccounts[1].addr, + Value: (*hexutil.Big)(big.NewInt(1000)), + }, + config: &TraceCallConfig{ + Tracer: &tracer, + }, + expectErr: core.ErrInsufficientFundsForTransfer, + expect: nil, + }, + // Successful simple contract call + // + // // SPDX-License-Identifier: GPL-3.0 + // + // pragma solidity >=0.7.0 <0.8.0; + // + // /** + // * @title Storage + // * @dev Store & retrieve value in a variable + // */ + // contract Storage { + // uint256 public number; + // constructor() { + // number = block.number; + // } + // } + { + blockNumber: rpc.PendingBlockNumber, + call: ethapi.CallArgs{ + From: &randomAccounts[0].addr, + To: &randomAccounts[2].addr, + Data: newRPCBytes(common.Hex2Bytes("8381f58a")), // call number() + }, + config: &TraceCallConfig{ + Tracer: &tracer, + StateOverrides: &ethapi.StateOverride{ + randomAccounts[2].addr: ethapi.OverrideAccount{ + Code: newRPCBytes(common.Hex2Bytes("6080604052348015600f57600080fd5b506004361060285760003560e01c80638381f58a14602d575b600080fd5b60336049565b6040518082815260200191505060405180910390f35b6000548156fea2646970667358221220eab35ffa6ab2adfe380772a48b8ba78e82a1b820a18fcb6f59aa4efb20a5f60064736f6c63430007040033")), + StateDiff: newStates([]common.Hash{{}}, []common.Hash{common.BigToHash(big.NewInt(123))}), + }, + }, + }, + expectErr: nil, + expect: &callTrace{ + Type: "CALL", + From: randomAccounts[0].addr, + To: randomAccounts[2].addr, + Input: hexutil.Bytes(common.Hex2Bytes("8381f58a")), + Output: hexutil.Bytes(common.BigToHash(big.NewInt(123)).Bytes()), + Gas: newRPCUint64(24978936), + GasUsed: newRPCUint64(2283), + Value: (*hexutil.Big)(big.NewInt(0)), + }, + }, + } + for _, testspec := range testSuite { + result, err := api.TraceCall(context.Background(), testspec.call, rpc.BlockNumberOrHash{BlockNumber: &testspec.blockNumber}, testspec.config) + if testspec.expectErr != nil { + if err == nil { + t.Errorf("Expect error %v, get nothing", testspec.expectErr) + continue + } + if !errors.Is(err, testspec.expectErr) { + t.Errorf("Error mismatch, want %v, get %v", testspec.expectErr, err) + } + } else { + if err != nil { + t.Errorf("Expect no error, get %v", err) + continue + } + ret := new(callTrace) + if err := json.Unmarshal(result.(json.RawMessage), ret); err != nil { + t.Fatalf("failed to unmarshal trace result: %v", err) + } + if !jsonEqual(ret, testspec.expect) { + // uncomment this for easier debugging + //have, _ := json.MarshalIndent(ret, "", " ") + //want, _ := json.MarshalIndent(testspec.expect, "", " ") + //t.Fatalf("trace mismatch: \nhave %+v\nwant %+v", string(have), string(want)) + t.Fatalf("trace mismatch: \nhave %+v\nwant %+v", ret, testspec.expect) + } + } + } +} + func TestTraceTransaction(t *testing.T) { t.Parallel() @@ -484,3 +633,29 @@ func newAccounts(n int) (accounts Accounts) { sort.Sort(accounts) return accounts } + +func newRPCBalance(balance *big.Int) **hexutil.Big { + rpcBalance := (*hexutil.Big)(balance) + return &rpcBalance +} + +func newRPCUint64(number uint64) *hexutil.Uint64 { + rpcUint64 := hexutil.Uint64(number) + return &rpcUint64 +} + +func newRPCBytes(bytes []byte) *hexutil.Bytes { + rpcBytes :=
hexutil.Bytes(bytes) + return &rpcBytes +} + +func newStates(keys []common.Hash, vals []common.Hash) *map[common.Hash]common.Hash { + if len(keys) != len(vals) { + panic("invalid input") + } + m := make(map[common.Hash]common.Hash) + for i := 0; i < len(keys); i++ { + m[keys[i]] = vals[i] + } + return &m +} diff --git a/ethdb/batch.go b/ethdb/batch.go index e261415bff..1353693318 100644 --- a/ethdb/batch.go +++ b/ethdb/batch.go @@ -44,3 +44,28 @@ type Batcher interface { // until a final write is called. NewBatch() Batch } + +// HookedBatch wraps an arbitrary batch where each operation may be hooked into +// to monitor from black box code. +type HookedBatch struct { + Batch + + OnPut func(key []byte, value []byte) // Callback if a key is inserted + OnDelete func(key []byte) // Callback if a key is deleted +} + +// Put inserts the given value into the key-value data store. +func (b HookedBatch) Put(key []byte, value []byte) error { + if b.OnPut != nil { + b.OnPut(key, value) + } + return b.Batch.Put(key, value) +} + +// Delete removes the key from the key-value data store. +func (b HookedBatch) Delete(key []byte) error { + if b.OnDelete != nil { + b.OnDelete(key) + } + return b.Batch.Delete(key) +} diff --git a/go.mod b/go.mod index c70cca3ab7..93e70012f1 100644 --- a/go.mod +++ b/go.mod @@ -9,8 +9,10 @@ replace github.com/coreos/etcd => github.com/Consensys/etcd v3.3.13-quorum197+in require ( github.com/Azure/azure-storage-blob-go v0.7.0 + github.com/Azure/go-autorest/autorest/adal v0.9.21 // indirect github.com/BurntSushi/toml v0.3.1 github.com/ConsenSys/quorum-qlight-token-manager-plugin-sdk-go v0.0.0-20220427130631-ecd75caa6e73 + github.com/StackExchange/wmi v1.2.1 // indirect github.com/VictoriaMetrics/fastcache v1.5.7 github.com/aws/aws-sdk-go-v2 v1.2.0 github.com/aws/aws-sdk-go-v2/config v1.1.1 @@ -19,11 +21,14 @@ require ( github.com/btcsuite/btcd v0.20.1-beta github.com/cespare/cp v0.1.0 github.com/cloudflare/cloudflare-go v0.14.0 - github.com/consensys/gurvy v0.3.8 + github.com/consensys/gnark-crypto v0.4.1-0.20210426202927-39ac3d4b3f1f github.com/coreos/etcd v3.3.20+incompatible + github.com/coreos/go-semver v0.3.0 // indirect github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect + github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f // indirect github.com/davecgh/go-spew v1.1.1 github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea + github.com/dlclark/regexp2 v1.7.0 // indirect github.com/docker/docker v20.10.12+incompatible github.com/dop251/goja v0.0.0-20200721192441-a695b0cdd498 github.com/eapache/channels v1.1.0 @@ -32,6 +37,7 @@ require ( github.com/fatih/color v1.7.0 github.com/fjl/memsize v0.0.0-20190710130421-bcb5799ab5e5 github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff + github.com/go-sourcemap/sourcemap v2.1.3+incompatible // indirect github.com/go-stack/stack v1.8.1 github.com/golang/mock v1.6.0 github.com/golang/protobuf v1.5.2 @@ -54,8 +60,11 @@ require ( github.com/jpmorganchase/quorum-security-plugin-sdk-go v0.0.0-20200714173835-22a319bb78ce github.com/julienschmidt/httprouter v1.2.0 github.com/karalabe/usb v0.0.0-20190919080040-51dc0efba356 + github.com/kylelemons/godebug v1.1.0 // indirect github.com/mattn/go-colorable v0.1.4 github.com/mattn/go-isatty v0.0.14 + github.com/mitchellh/go-testing-interface v1.0.0 // indirect + github.com/naoina/go-stringutil v0.1.0 // indirect github.com/naoina/toml v0.1.2-0.20170918210437-9fafd6967416 github.com/olekukonko/tablewriter v0.0.5 
github.com/opentracing/opentracing-go v1.2.0 // indirect @@ -71,7 +80,9 @@ require ( github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954 github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef - golang.org/x/crypto v0.0.0-20201016220609-9e8e0b390897 + github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 // indirect + golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa + golang.org/x/sync v0.0.0-20210220032951-036812b2e83c golang.org/x/sys v0.0.0-20220422013727-9388b58f7150 golang.org/x/text v0.3.7 golang.org/x/time v0.0.0-20201208040808-7e3f01d25324 diff --git a/go.sum b/go.sum index 29c3bfc74f..6f58cf3be6 100644 --- a/go.sum +++ b/go.sum @@ -12,34 +12,28 @@ cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbf cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= cloud.google.com/go/bigtable v1.2.0/go.mod h1:JcVAOl45lrTmQfLj7T6TxyMzIN/3FGGcFm+2xVAli2o= cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= -cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk= cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw= cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos= collectd.org v0.3.0/go.mod h1:A/8DzQBkF6abtvrT2j/AU/4tiBgJWYyh0y/oB/4MlWE= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= +github.com/Azure/azure-pipeline-go v0.2.1 h1:OLBdZJ3yvOn2MezlWvbrBMTEUQC72zAftRZOMdj5HYo= github.com/Azure/azure-pipeline-go v0.2.1/go.mod h1:UGSo8XybXnIGZ3epmeBw7Jdz+HiUVpqIlpz/HKHylF4= -github.com/Azure/azure-pipeline-go v0.2.2 h1:6oiIS9yaG6XCCzhgAgKFfIWyo4LLCiDhZot6ltoThhY= -github.com/Azure/azure-pipeline-go v0.2.2/go.mod h1:4rQ/NZncSvGqNkkOsNpOU1tgoNuIlp9AfUH5G1tvCHc= github.com/Azure/azure-storage-blob-go v0.7.0 h1:MuueVOYkufCxJw5YZzF842DY2MBsp+hLuh2apKY0mck= github.com/Azure/azure-storage-blob-go v0.7.0/go.mod h1:f9YQKtsG1nMisotuTPpO0tjNuEjKRYAcJU8/ydDI++4= -github.com/Azure/go-autorest/autorest v0.9.0 h1:MRvx8gncNaXJqOoLmhNjUAKh33JJF8LyxPhomEtOsjs= -github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= -github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= -github.com/Azure/go-autorest/autorest/adal v0.8.0 h1:CxTzQrySOxDnKpLjFJeZAS5Qrv/qFPkgLjx5bOAi//I= -github.com/Azure/go-autorest/autorest/adal v0.8.0/go.mod h1:Z6vX6WXXuyieHAXwMj0S6HY6e6wcHn37qQMBQlvY3lc= -github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= -github.com/Azure/go-autorest/autorest/date v0.2.0 h1:yW+Zlqf26583pE43KhfnhFcdmSWlm5Ew6bxipnr/tbM= -github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g= -github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= -github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= -github.com/Azure/go-autorest/autorest/mocks v0.3.0 h1:qJumjCaCudz+OcqE9/XtEPfvtOjOmKaui4EOpFI6zZc= -github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod 
h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM= -github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY= -github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= -github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k= -github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= +github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs= +github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= +github.com/Azure/go-autorest/autorest/adal v0.9.21 h1:jjQnVFXPfekaqb8vIsv2G1lxshoW+oGv4MDlhRtnYZk= +github.com/Azure/go-autorest/autorest/adal v0.9.21/go.mod h1:zua7mBUaCc5YnSLKYgGJR/w5ePdMDA6H56upLsHzA9U= +github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw= +github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74= +github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPuGXlNkbVvq4cW4nIHk= +github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k= +github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+ZtXWSmf4Tg= +github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8= +github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo= +github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU= github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= @@ -49,8 +43,8 @@ github.com/Consensys/etcd v3.3.13-quorum197+incompatible h1:ZBM9sH4QEufgaShSyNNh github.com/Consensys/etcd v3.3.13-quorum197+incompatible/go.mod h1:wz4o/jwsTgMkSZUY9DmwVEIL3b2JX3t+tCDdy/J5ilY= github.com/DATA-DOG/go-sqlmock v1.3.3/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM= github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= -github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6 h1:fLjPD/aNc3UIOA6tDi6QXUemppXK3P9BI7mr2hd6gx8= -github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg= +github.com/StackExchange/wmi v1.2.1 h1:VIkavFPXSjcnS+O8yTq7NI32k0R5Aj+v39y29VYDOSA= +github.com/StackExchange/wmi v1.2.1/go.mod h1:rcmrprowKIVzvc+NUiLncP2uuArMWLCbu9SBzvHz7e8= github.com/VictoriaMetrics/fastcache v1.5.7 h1:4y6y0G8PRzszQUYIQHHssv/jgPHAb5qQuuDNdCbyAgw= github.com/VictoriaMetrics/fastcache v1.5.7/go.mod h1:ptDBkNMQI4RtmVo8VS/XwRY6RoTu1dAWCbrk+6WsEM8= github.com/aead/siphash v1.0.1/go.mod h1:Nywa3cDsYNNK3gaciGTWPwHt0wlpNV15vwmswBAUSII= @@ -62,17 +56,6 @@ github.com/allegro/bigcache v1.2.1-0.20190218064605-e24eb225f156/go.mod h1:Cb/ax github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8= github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= github.com/apache/arrow/go/arrow v0.0.0-20191024131854-af6fa24be0db/go.mod h1:VTxUBvSJ3s3eHAg65PNgrsn5BtqCRPdmyXh6rAfdxN0= -github.com/aristanetworks/goarista v0.0.0-20170210015632-ea17b1a17847/go.mod 
h1:D/tb0zPVXnP7fmsLZjtdUhSsumbK/ij54UXjjVgMGxQ= -github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o= -github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY= -github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= -github.com/aws/aws-sdk-go v1.25.48/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go-v2 v1.2.0 h1:BS+UYpbsElC82gB+2E2jiCBg36i8HlubTB/dO/moQ9c= github.com/aws/aws-sdk-go-v2 v1.2.0/go.mod h1:zEQs02YRBw1DjK0PoJv3ygDYOFTre1ejlJWl8FwAuQo= github.com/aws/aws-sdk-go-v2/config v1.1.1 h1:ZAoq32boMzcaTW9bcUacBswAmHTbvlvDJICgHFZuECo= @@ -94,11 +77,8 @@ github.com/aws/smithy-go v1.1.0/go.mod h1:EzMw8dbp/YJL4A5/sbhGddag+NPT7q084agLbB github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v1.0.0 h1:HWo1m869IqiPhD389kmkxeTalrjNbbJTC8LXupb+sl0= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= -github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= -github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84= github.com/bmizerany/pat v0.0.0-20170815010413-6226ea591a40/go.mod h1:8rLXio+WjiTceGBHIoTvn60HIbs7Hm7bcHjyrSqYB9c= github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps= -github.com/btcsuite/btcd v0.0.0-20171128150713-2e60448ffcc6/go.mod h1:Dmm/EzmjnCiweXmzRIAiUWCInVmPgjkzgv5k4tVyXiQ= github.com/btcsuite/btcd v0.20.1-beta h1:Ik4hyJqN8Jfyv3S4AGBOmyouMsYE3EdYODkMbQjwPGw= github.com/btcsuite/btcd v0.20.1-beta/go.mod h1:wVuoA8VJLEcwgqHBwHmzLRazpKxTv13Px/pDuV7OomQ= github.com/btcsuite/btclog v0.0.0-20170628155309-84c8d2346e9f/go.mod h1:TdznJufoqS23FtqVCzL0ZqgP5MqXbb4fg/WgDys70nA= @@ -117,18 +97,17 @@ github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghf github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= -github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= -github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cloudflare/cloudflare-go v0.10.2-0.20190916151808-a80f83b9add9/go.mod h1:1MxXX1Ux4x6mqPmjkUgTP1CdXIBXKX7T+Jk9Gxrmx+U= github.com/cloudflare/cloudflare-go v0.14.0 h1:gFqGlGl/5f9UGXAaKapCGUfaTCgRKKnzu2VvzMZlOFA= github.com/cloudflare/cloudflare-go v0.14.0/go.mod h1:EnwdgGMaFOruiPZRFSgn+TsQ3hQ7C/YWzIGLeu5c304= github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/consensys/bavard v0.1.8-0.20210105233146-c16790d2aa8b/go.mod h1:Bpd0/3mZuaj6Sj+PqrmIquiOKy397AKGThQPaGzNXAQ= -github.com/consensys/goff v0.3.10/go.mod h1:xTldOBEHmFiYS0gPXd3NsaEqZWlnmeWcRLWgD3ba3xc= -github.com/consensys/gurvy v0.3.8 h1:H2hvjvT2OFMgdMn5ZbhXqHt+F8DJ2clZW7Vmc0kFFxc= -github.com/consensys/gurvy v0.3.8/go.mod h1:sN75xnsiD593XnhbhvG2PkOy194pZBzqShWF/kwuW/g= -github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= +github.com/consensys/bavard v0.1.8-0.20210406032232-f3452dc9b572/go.mod h1:Bpd0/3mZuaj6Sj+PqrmIquiOKy397AKGThQPaGzNXAQ= +github.com/consensys/gnark-crypto v0.4.1-0.20210426202927-39ac3d4b3f1f h1:C43yEtQ6NIf4ftFXD/V55gnGFgPbMQobd//YlnLjUJ8= +github.com/consensys/gnark-crypto v0.4.1-0.20210426202927-39ac3d4b3f1f/go.mod h1:815PAHg3wvysy0SyIqanF8gZ0Y1wjk/hrDHD/iT88+Q= github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM= github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= -github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf h1:iW4rZ826su+pqaw19uhpSCzhj44qo35pNgKFGqzDKkU= github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f h1:lBNOc5arjvs8E5mO2tbpBpLoyyu8B6e44T7hJy6potg= github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= -github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= github.com/dave/jennifer v1.2.0/go.mod h1:fIb+770HOpJ2fmN9EPPKOqm1vMGhB+TwXKMZhrIygKg= github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= @@ -159,24 +134,20 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea h1:j4317fAZh7X6GqbFowYdYdI0L9bwxL07jyPZIdepyZ0= github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea/go.mod h1:93vsz/8Wt4joVM7c2AVqh+YRMiUSc14yDtF28KmMOgQ= -github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dgryski/go-bitstream v0.0.0-20180413035011-3522498ce2c8/go.mod h1:VMaSuZ+SZcx/wljOQKvp5srsbCiKDEb6K2wC4+PiBmQ= github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= -github.com/dlclark/regexp2 v1.2.0 h1:8sAhBGEM0dRWogWqWyQeIJnxjWO6oIjl8FKqREDsGfk= -github.com/dlclark/regexp2 v1.2.0/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc= -github.com/docker/docker v1.4.2-0.20180625184442-8e610b2b55bf/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= +github.com/dlclark/regexp2 v1.7.0 h1:7lJfhqlPssTb1WQx4yvTHN0uElPEv52sbaECrAQxjAo= +github.com/dlclark/regexp2 v1.7.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8= github.com/docker/docker v20.10.12+incompatible h1:CEeNmFM0QZIsJCZKMkZx0ZcahTiewkrgiwfYD+dfl1U= github.com/docker/docker v20.10.12+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/dop251/goja v0.0.0-20200721192441-a695b0cdd498 h1:Y9vTBSsV4hSwPSj4bacAU/eSnV3dAxVpepaghAdhGoQ= github.com/dop251/goja v0.0.0-20200721192441-a695b0cdd498/go.mod h1:Mw6PkjjMXWbTj+nnj4s3QPXq1jaT0s5pC0iFD4+BOAA= -github.com/dvyukov/go-fuzz v0.0.0-20200318091601-be3528f3a813/go.mod h1:11Gm+ccJnvAhCNLlf5+cS9KjtbaD5I5zaZpFMsTHWTw= github.com/eapache/channels v1.1.0 h1:F1taHcn7/F0i8DYqKXJnyhJcVpp2kgFcNePxXtnyu4k= github.com/eapache/channels v1.1.0/go.mod h1:jMm2qB5Ubtg9zLd+inMZd2/NUvXgzmWXsDaLyQIGfH0= github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc= github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I= github.com/eclipse/paho.mqtt.golang v1.2.0/go.mod h1:H9keYFcgq3Qr5OUJm/JZI/i6U7joQ8SYLhZwfeOo6Ts= -github.com/edsrzf/mmap-go v0.0.0-20160512033002-935e0e8a636c/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M= github.com/edsrzf/mmap-go v1.0.0 h1:CEBF7HpRnUCSJgGUb5h1Gm7e3VkmVDrR8lvWVLtrOFw= github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M= github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= @@ -186,11 +157,8 @@ github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.m 
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0= github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/ethereum/go-ethereum v1.9.25/go.mod h1:vMkFiYLHI4tgPw4k2j4MHKoovchFE8plZ0M9VMk4/oM= -github.com/fatih/color v1.3.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= -github.com/fjl/memsize v0.0.0-20180418122429-ca190fb6ffbc/go.mod h1:VvhXpOYNQvB+uIk2RvXzuaQtkQJzzIx6lSBe1xv7hi0= github.com/fjl/memsize v0.0.0-20190710130421-bcb5799ab5e5 h1:FtmdgXiUlNeRsoNMFlKLDt+S+6hbjVMEW6RGQ7aUf7c= github.com/fjl/memsize v0.0.0-20190710130421-bcb5799ab5e5/go.mod h1:VvhXpOYNQvB+uIk2RvXzuaQtkQJzzIx6lSBe1xv7hi0= github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k= @@ -209,23 +177,23 @@ github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2 github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0 h1:MP4Eh7ZCb31lleYCFuwm0oe4/YGak+5l1vA2NOE80nA= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= -github.com/go-ole/go-ole v1.2.1 h1:2lOsA72HgjxAuMlKpFiCbHTvu44PIVkZ5hqm3RSdI/E= -github.com/go-ole/go-ole v1.2.1/go.mod h1:7FAglXiTm7HKlQRDeOQ6ZNUHidzCWXuZWq/1dTyBNF8= -github.com/go-sourcemap/sourcemap v2.1.2+incompatible h1:0b/xya7BKGhXuqFESKM4oIiRo9WOt2ebz7KxfreD6ug= -github.com/go-sourcemap/sourcemap v2.1.2+incompatible/go.mod h1:F8jJfvm2KbVjc5NqelyYJmf/v5J0dwNLS2mL4sNA1Jg= +github.com/go-ole/go-ole v1.2.5 h1:t4MGB5xEDZvXI+0rMjjsfBsD7yAgp/s9ZDkL1JndXwY= +github.com/go-ole/go-ole v1.2.5/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= +github.com/go-sourcemap/sourcemap v2.1.3+incompatible h1:W1iEw64niKVGogNgBN3ePyLFfuisuzeidWPMPWmECqU= +github.com/go-sourcemap/sourcemap v2.1.3+incompatible/go.mod h1:F8jJfvm2KbVjc5NqelyYJmf/v5J0dwNLS2mL4sNA1Jg= github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.8.1 h1:ntEHSVwIt7PNXNpgPmVfMrNhLtgjlmnZha2kOpuRiDw= github.com/go-stack/stack v1.8.1/go.mod h1:dcoOX6HbPZSZptuspn9bctJ+N/CnF5gGygcUP3XYfe4= github.com/gofrs/uuid v3.3.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= -github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4= github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls= github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= +github.com/golang-jwt/jwt/v4 v4.0.0 h1:RAqyYixv1p7uEnocuy8P1nru5wprCh/MH2BIlW5z5/o= +github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k= github.com/golang/geo v0.0.0-20190916061304-5b978397cfec/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod 
h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= @@ -279,45 +247,20 @@ github.com/google/uuid v1.1.5/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+ github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= -github.com/gorilla/websocket v1.4.1-0.20190629185528-ae1634f6a989/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= -github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc= github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= -github.com/graph-gophers/graphql-go v0.0.0-20191115155744-f33e81362277/go.mod h1:9CQHMSxwO4MprSdzoIEobiHpoLtHm77vfxsvsIN5Vuc= github.com/graph-gophers/graphql-go v1.3.0 h1:Eb9x/q6MFpCLz7jBCiP/WTxjSDrYLR1QY41SORZyNJ0= github.com/graph-gophers/graphql-go v1.3.0/go.mod h1:9CQHMSxwO4MprSdzoIEobiHpoLtHm77vfxsvsIN5Vuc= -github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= -github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= -github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= -github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q= -github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8= -github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= -github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI= github.com/hashicorp/go-hclog v1.1.0 h1:QsGcniKx5/LuX2eYoeL+Np3UKYPNaN7YKpTh29h8rbw= github.com/hashicorp/go-hclog v1.1.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ= -github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60= -github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM= -github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= github.com/hashicorp/go-plugin v1.2.2 h1:mgDpq0PkoK5gck2w4ivaMpWRHv/matdOR4xmeScmf/w= github.com/hashicorp/go-plugin v1.2.2/go.mod h1:F9eH4LrE/ZsRdbwhfjs9k9HoDUwAHnYtXdgmf1AVNs0= -github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU= -github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU= -github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4= -github.com/hashicorp/go-uuid v1.0.0/go.mod 
h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= -github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= -github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90= github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= -github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4= github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d h1:dg1dEPuWpEqDnvIw251EVy4zlP8gWbsGj4BsUKCRpYs= github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4= -github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= -github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= -github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ= -github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I= -github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc= github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb h1:b5rjCoWHc7eqmAS4/qyk21ZsHyb6Mxv/jykxvNTkU4M= github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM= github.com/holiman/bloomfilter/v2 v2.0.3 h1:73e0e/V0tCydx14a0SCYS/EWCxgwLZ18CZcZKVu0fao= @@ -325,14 +268,12 @@ github.com/holiman/bloomfilter/v2 v2.0.3/go.mod h1:zpoh+gs7qcpqrHr3dB55AMiJwo0iU github.com/holiman/uint256 v1.1.1 h1:4JywC80b+/hSfljFlEBLHrrh+CIONLDz9NuFl0af4Mw= github.com/holiman/uint256 v1.1.1/go.mod h1:y4ga/t+u+Xwd7CpDgZESaRcWy0I7XMlTMA25ApIH5Jw= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= -github.com/huin/goupnp v1.0.0/go.mod h1:n9v9KO1tAxYH82qOn+UTIFQDmx5n1Zxd/ClZDMX7Bnc= github.com/huin/goupnp v1.0.1-0.20210310174557-0ca763054c88 h1:bcAj8KroPf552TScjFPIakjH2/tdIrIH8F+cc4v4SRo= github.com/huin/goupnp v1.0.1-0.20210310174557-0ca763054c88/go.mod h1:nNs7wvRfN1eKaMknBydLNQU6146XQim8t4h+q90biWo= github.com/huin/goutil v0.0.0-20170803182201-1ca381bf3150/go.mod h1:PpLOETDnJ0o3iZrZfqZzyLl6l7F3c6L1oWn7OICBi6o= github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= github.com/influxdata/flux v0.65.1/go.mod h1:J754/zds0vvpfwuq7Gc2wRdVwEodfpCFM7mYlOw2LqY= -github.com/influxdata/influxdb v1.2.3-0.20180221223340-01288bdb0883/go.mod h1:qZna6X/4elxqT3yI9iZYdZrWWdeFOOprn86kgg4+IzY= github.com/influxdata/influxdb v1.8.3 h1:WEypI1BQFTT4teLM+1qkEcvUi0dAvopAI/ir0vAiBg8= github.com/influxdata/influxdb v1.8.3/go.mod h1:JugdFhsvvI8gadxOI6noqNeeBHvWNTbfYGtiAn+2jhI= github.com/influxdata/influxql v1.1.1-0.20200828144457-65d3ef77d385/go.mod h1:gHp9y86a/pxhjJ+zMjNXiQAA197Xk9wLxaz+fGG+kWk= @@ -344,24 +285,22 @@ github.com/influxdata/usage-client v0.0.0-20160829180054-6d3895376368/go.mod h1: github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458 h1:6OvNmYgJyexcZ3pYbTI9jWx5tHo1Dee/tWbLMfPe2TA= github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc= github.com/jedisct1/go-minisign v0.0.0-20190909160543-45766022959e h1:UvSe12bq+Uj2hWd8aOlwPmoZ+CITRFrdit+sDGfAg8U= -github.com/jedisct1/go-minisign 
v0.0.0-20190909160543-45766022959e/go.mod h1:G1CVv03EnqU1wYL2dFwXxW2An0az9JTl/ZsqXQeBlkU=
github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jhump/protoreflect v1.6.0 h1:h5jfMVslIg6l29nsMs0D8Wj17RDVdNYti0vDN/PZZoE=
github.com/jhump/protoreflect v1.6.0/go.mod h1:eaTn3RZAmMBcV0fifFvlm6VHNz3wSkYyXYWUh7ymB74=
-github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
-github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jpmorganchase/quorum-account-plugin-sdk-go v0.0.0-20200714175524-662195b38a5e h1:aE+TcHdEop381e8gMBWw/7Nw5aOdXdVmVrVP7ZrKrq4=
github.com/jpmorganchase/quorum-account-plugin-sdk-go v0.0.0-20200714175524-662195b38a5e/go.mod h1:clocsx5vZHANnLM+SmcJJDKY6VVxxcdRUCKe5Y+roQ0=
github.com/jpmorganchase/quorum-hello-world-plugin-sdk-go v0.0.0-20200210211148-57f99f69eeb3 h1:vaXIQlaE9ELd0V5NjwMHD8hlAPeC2A1Zu1i2TxKdpVY=
@@ -374,15 +313,12 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jsternberg/zap-logfmt v1.0.0/go.mod h1:uvPs/4X51zdkcm5jXl5SYoN+4RK21K8mysFmDaM/h+o=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
-github.com/julienschmidt/httprouter v1.1.1-0.20170430222011-975b5c4c7c21/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/julienschmidt/httprouter v1.2.0 h1:TDTW5Yz1mjftljbcKqRcrYhd4XeOoI98t+9HbQbYf7g= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes= github.com/jwilder/encoding v0.0.0-20170811194829-b4e1701a28ef/go.mod h1:Ct9fl0F6iIOGgxJ5npU/IUOhOhqlVrGjyIZc8/MagT0= github.com/karalabe/usb v0.0.0-20190919080040-51dc0efba356 h1:I/yrLt2WilKxlQKCM52clh5rGzTKpVctGT1lH4Dc8Jw= github.com/karalabe/usb v0.0.0-20190919080040-51dc0efba356/go.mod h1:Od972xHfMJowv7NGVDiWVxk2zxnWgjLlJzE+F4F7AGU= -github.com/kilic/bls12-381 v0.0.0-20201226121925-69dacb279461/go.mod h1:vDTTHJONJ6G+P2R74EhnyotQDTliQDnFEwhdmfzw1ig= -github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q= github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4= @@ -401,44 +337,29 @@ github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= -github.com/leanovate/gopter v0.2.8/go.mod h1:gNcbPWNEWRe4lm+bycKqxUYoH5uoVje5SkOJ3uoLer8= github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c= github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8= github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= -github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= -github.com/mattn/go-colorable v0.1.0/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA= github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= +github.com/mattn/go-ieproxy v0.0.0-20190610004146-91bb50d98149 h1:HfxbT6/JcvIljmERptWhwa8XzP7H3T+Z2N26gTsaDaA= github.com/mattn/go-ieproxy v0.0.0-20190610004146-91bb50d98149/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc= -github.com/mattn/go-ieproxy v0.0.0-20190702010315-6dee0af9227d h1:oNAwILwmgWKFpuU+dXvI6dl9jG2mAWAZLX3r9s0PPiw= -github.com/mattn/go-ieproxy v0.0.0-20190702010315-6dee0af9227d/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc= -github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= -github.com/mattn/go-isatty v0.0.5-0.20180830101745-3fb116b82035/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84= github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y= github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94= 
github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= -github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0= github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= github.com/mattn/go-sqlite3 v1.11.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc= github.com/mattn/go-tty v0.0.0-20180907095812-13ff1204f104/go.mod h1:XPvLUNfbS4fJH25nqRHfWLMa1ONC8Amw+mIA639KxkE= github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= -github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg= -github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc= -github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= -github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= github.com/mitchellh/go-testing-interface v1.0.0 h1:fzU/JVNcaqHQEcVFAKeR41fkiLdIPrefOvVG1VZ96U0= github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= -github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg= -github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY= -github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= -github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/mschoch/smat v0.0.0-20160514031455-90eadee771ae/go.mod h1:qAyveg+e4CE+eKJXWVjKXM4ck2QobLqTDytGJbLLhJg= @@ -452,8 +373,6 @@ github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw= github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA= github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= -github.com/olekukonko/tablewriter v0.0.1/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo= -github.com/olekukonko/tablewriter v0.0.2-0.20190409134802-7e037d187b0c/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo= github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= @@ -470,13 +389,11 @@ github.com/opentracing/opentracing-go v1.0.3-0.20180606204148-bd9c31933947/go.mo github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o= github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs= github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc= -github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod 
h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc= github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc= github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ= github.com/paulbellamy/ratecounter v0.2.0/go.mod h1:Hfx1hDpSGoqxkVVpBi/IlYD7kChlfo5C6hzIHwPqfFE= github.com/pborman/uuid v0.0.0-20170112150404-1b00554d8222 h1:goeTyGkArOZIVOMA0dQbyuPWGNQJZGPwPu/QS9GlpnA= github.com/pborman/uuid v0.0.0-20170112150404-1b00554d8222/go.mod h1:VyrYX9gd7irzKovcSS6BIIEwPRkP2Wm2m9ufcdFSJ34= -github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= github.com/peterh/liner v1.0.1-0.20180619022028-8c1271fcf47f/go.mod h1:xIteQHvHuaLYG9IFj6mSxM0fCKrs34IrEQUhOYuGPHc= github.com/peterh/liner v1.1.1-0.20190123174540-a2c9a5303de7 h1:oYW+YCJ1pachXTQmzR3rNLYGGz4g/UgFcjb28p/viDM= github.com/peterh/liner v1.1.1-0.20190123174540-a2c9a5303de7/go.mod h1:CRroGNssyjTd/qIG2FyxByd2S8JEAZXBl4qUrZf8GS0= @@ -489,9 +406,7 @@ github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE github.com/pkg/term v0.0.0-20180730021639-bffc007b7fd5/go.mod h1:eCbImbZ95eXtAUIbLAuAVnBnwf83mjf6QIVH8SHYwqQ= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= -github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v1.0.0 h1:vrDKnkGzuGvhNAL56c7DBz29ZL+KxnoR0x7enabFceM= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= @@ -499,31 +414,22 @@ github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1: github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= -github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.6.0 h1:kRhiuYSXR3+uv2IbVbZhUxK5zVD/2pp3Gd2PpvPkpEo= github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= -github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2 h1:6LJUbpNm42llc4HRCuvApCSWB/WfhuNo9K98Q9sNGfs= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= -github.com/prometheus/tsdb v0.6.2-0.20190402121629-4f204dcbc150/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/prometheus/tsdb v0.7.1 h1:YZcsG11NqnK4czYLrWd9mpEuAJIHVQLwdrleYfszMAA= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/retailnext/hllpp 
v1.0.1-0.20180308014038-101a6d2f8b52/go.mod h1:RDpi1RftBQPUCDRw6SmxeaREsAaRKnOclghuzp/WRzc= -github.com/rjeczalik/notify v0.9.1/go.mod h1:rKwnCoCGeuQnwBtTSPL9Dad03Vh2n40ePRrjvIXnJho= github.com/rjeczalik/notify v0.9.2 h1:MiTWrPj55mNDHEiIX5YUSKefw/+lCQVoAFmD6oQm5w8= github.com/rjeczalik/notify v0.9.2/go.mod h1:aErll2f0sUX9PXZnVNyeiObbmTlk5jnMoCa4QEjJeqM= -github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= -github.com/rs/cors v0.0.0-20160617231935-a62a804a8a00/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU= github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik= github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU= -github.com/rs/xhandler v0.0.0-20160618193221-ed27b6fd6521/go.mod h1:RvLn4FgxWubrpZHtQLnOf6EwhN2hEMusxZOhcW9H3UQ= github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= -github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= -github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc= github.com/segmentio/kafka-go v0.1.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo= github.com/segmentio/kafka-go v0.2.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo= github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= @@ -533,20 +439,12 @@ github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeV github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= -github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM= github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= -github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ= -github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI= -github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= -github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= -github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg= github.com/status-im/keycard-go v0.0.0-20190316090335-8537d3370df4 h1:Gb2Tyox57NRNuZ2d3rmvB3pcmbu7O1RS3m8WRx7ilrg= github.com/status-im/keycard-go v0.0.0-20190316090335-8537d3370df4/go.mod h1:RZLeN1LMWmRsyYjvAu+I6Dm9QmlDaIIt+Y+4Kd7Tp+Q= -github.com/steakknife/bloomfilter v0.0.0-20180922174646-6819c0d2a570/go.mod h1:8OR4w3TdeIHIh1g6EMY5p0gVNOovcWC+1vpc7naMuAw= -github.com/steakknife/hamming v0.0.0-20180906055917-c99c65617cd3/go.mod h1:hpGUWaI9xL8pRQCTXQgocU38Qw1g0Us7n5PxxTwTCYU= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx 
v0.1.1 h1:2vfRuCMp5sSVIDSqO8oNnWJq7mPa6KVP3iPIwFBuy8A=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -555,63 +453,52 @@ github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXf
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
-github.com/syndtr/goleveldb v1.0.1-0.20200815110645-5c35d600f0ca/go.mod h1:u2MKkTVTVJWe5D1rCvame8WqhBd88EuIwODJZ1VHCPM=
github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954 h1:xQdMZ1WLrgkkvOZ/LDQxjVxMLdby7osSh4ZEVa5sIjs=
github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954/go.mod h1:u2MKkTVTVJWe5D1rCvame8WqhBd88EuIwODJZ1VHCPM=
github.com/tinylib/msgp v1.0.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
-github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c h1:u6SKchux2yDvFQnDHS3lPnIRmfVJ5Sxy3ao2SIdysLQ=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c/go.mod h1:hzIxponao9Kjc7aWznkXaL4U4TWaDSs8zcsY4Ka08nM=
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef h1:wHSqTBrZW24CsNJDfeh9Ex6Pm0Rcpc7qrgKBiL44vF4=
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef/go.mod h1:sJ5fKU0s6JVwZjjcUEX2zFOnvq0ASQ2K9Zr6cf67kNs=
-github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
github.com/willf/bitset v1.1.3/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
-github.com/wsddn/go-ecdh v0.0.0-20161211032359-48726bab9208/go.mod h1:IotVbo4F+mw0EzQ08zFqg7pK3FebNXpaMsRy2RT+Ees=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v0.0.0-20180616005107-d6fb6747feb6/go.mod
h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= -go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI= go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= -go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= -go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190909091759-094676da4a83/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20201016220609-9e8e0b390897 h1:pLI5jrR7OSLijeIDcmRxNmw2api+jEfxLoykJVice/E= -golang.org/x/crypto v0.0.0-20201016220609-9e8e0b390897/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= +golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa h1:zuSxTR4o9y82ebqCUJYNGJbGPo6sKVl54f/TVDObg1c= +golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= -golang.org/x/exp v0.0.0-20190731235908-ec7cb31e5a56/go.mod h1:JhuoJpWY28nO4Vef9tZUw9qufEGTyX1+7lmHxV5q5G4= golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek= golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY= golang.org/x/exp 
v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= @@ -629,11 +516,9 @@ golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHl golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs= golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE= golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o= -golang.org/x/mobile v0.0.0-20200801112145-973feb4309de/go.mod h1:skQtrUTUwhdJvXM/2KKJzY8pDgNr9I/FOMqDVRPBUS4= golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY= golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.1.1-0.20191209134235-331c550502dd/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/net v0.0.0-20180530234432-1e491301e022/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -641,10 +526,7 @@ golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73r golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= @@ -661,7 +543,9 @@ golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81R golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20210220033124-5f55cee0dc0d/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4 h1:HVyaeDAYux4pnY+D/SiwmLOR36ewZ4iGQIIrtnuCjFA= golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= @@ -679,12 
+563,10 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ= golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180926160741-c2ed4eda69e7/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -697,6 +579,7 @@ golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -706,14 +589,12 @@ golang.org/x/sys v0.0.0-20200107162124-548cf772de50/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200814200057-3d37ad5750ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200824131525-c12d262b63d8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201101102859-da207088b7d1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210105210732-16f7687f5001/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys 
v0.0.0-20210420205809-ac73e9fd8988/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -721,6 +602,7 @@ golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220422013727-9388b58f7150 h1:xHms4gcpe1YE7A3yIllJXP16CMAGuqwO2lX1mTyyRRc= golang.org/x/sys v0.0.0-20220422013727-9388b58f7150/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -728,13 +610,13 @@ golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20201208040808-7e3f01d25324 h1:Hir2P/De0WpUhtrKGGjvSb2YxUgyZ7EFOSLIcSSpiwE= golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= @@ -755,7 +637,6 @@ golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgw golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools 
v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= @@ -763,7 +644,6 @@ golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtn golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= golang.org/x/tools v0.0.0-20200108203644-89082a384178/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200117012304-6edc0a871e69/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -845,7 +725,6 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntN gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= -gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= gopkg.in/karalabe/cookiejar.v2 v2.0.0-20150724131613-8dcd6a7f4951 h1:DMTcQRFbEH62YPRWwOI647s2e5mHda3oBPMHfrLs2bw= gopkg.in/karalabe/cookiejar.v2 v2.0.0-20150724131613-8dcd6a7f4951/go.mod h1:owOxCRGGeAx1uugABik6K9oeNu1cgxP/R9ItzLDxNWA= gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce h1:+JknDZhAj8YMt7GC73Ei8pv4MzjDUNPHgQWJdtMAaDU= @@ -854,12 +733,10 @@ gopkg.in/olebedev/go-duktape.v3 v3.0.0-20200619000410-60c24ae608a6 h1:a6cXbcDDUk gopkg.in/olebedev/go-duktape.v3 v3.0.0-20200619000410-60c24ae608a6/go.mod h1:uAJfkITjFhyEEuUfm7bsmCZRbW5WRq8s9EY8HZ6hCns= gopkg.in/oleiade/lane.v1 v1.0.0 h1:Xs7/GTdnZNGuuXV7z8gTrHoHEeHdCnhuNXm4n0UpxjY= gopkg.in/oleiade/lane.v1 v1.0.0/go.mod h1:e9mCiNjxfTGlkjxn/TPK3JUwhjKjby5cjXuGotH/QlE= -gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= gopkg.in/urfave/cli.v1 v1.20.0 h1:NdAVW6RYxDif9DhDHaAortIu956m2c0v+09AZBPTbE0= gopkg.in/urfave/cli.v1 v1.20.0/go.mod h1:vuBzUtMdQeixQj8LVd+/98pzhxNGQoyuPBlsXHOQNO0= -gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= @@ -870,8 +747,6 @@ gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo= -gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw= gotest.tools/v3 v3.1.0 h1:rVV8Tcg/8jHUkPUorwjaMTtemIMVXfIPKiOqnhEhakk= gotest.tools/v3 v3.1.0/go.mod 
h1:fHy7eyTmJFO5bQbUsEGQ1v4m2J3Jz9eWL54TP2/ZuYQ= honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= @@ -880,5 +755,6 @@ honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWh honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= +honnef.co/go/tools v0.1.3/go.mod h1:NgwopIslSNH47DimFoV78dnkksY2EFtX0ajyb3K/las= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4= diff --git a/internal/build/env.go b/internal/build/env.go index a3017182e3..d70c0d50a4 100644 --- a/internal/build/env.go +++ b/internal/build/env.go @@ -38,6 +38,7 @@ var ( // Environment contains metadata provided by the build environment. type Environment struct { + CI bool Name string // name of the environment Repo string // name of GitHub repo Commit, Date, Branch, Tag string // Git info @@ -61,6 +62,7 @@ func Env() Environment { commit = os.Getenv("TRAVIS_COMMIT") } return Environment{ + CI: true, Name: "travis", Repo: os.Getenv("TRAVIS_REPO_SLUG"), Commit: commit, @@ -77,6 +79,7 @@ func Env() Environment { commit = os.Getenv("APPVEYOR_REPO_COMMIT") } return Environment{ + CI: true, Name: "appveyor", Repo: os.Getenv("APPVEYOR_REPO_NAME"), Commit: commit, diff --git a/internal/build/gotool.go b/internal/build/gotool.go new file mode 100644 index 0000000000..e644b5f695 --- /dev/null +++ b/internal/build/gotool.go @@ -0,0 +1,149 @@ +// Copyright 2021 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see . + +package build + +import ( + "fmt" + "log" + "os" + "os/exec" + "path/filepath" + "runtime" + "strings" +) + +type GoToolchain struct { + Root string // GOROOT + + // Cross-compilation variables. These are set when running the go tool. + GOARCH string + GOOS string + CC string +} + +// Go creates an invocation of the go command. +func (g *GoToolchain) Go(command string, args ...string) *exec.Cmd { + tool := g.goTool(command, args...) + + // Configure environment for cross build. + if g.GOARCH != "" && g.GOARCH != runtime.GOARCH { + tool.Env = append(tool.Env, "CGO_ENABLED=1") + tool.Env = append(tool.Env, "GOARCH="+g.GOARCH) + } + if g.GOOS != "" && g.GOOS != runtime.GOOS { + tool.Env = append(tool.Env, "GOOS="+g.GOOS) + } + // Configure C compiler. + if g.CC != "" { + tool.Env = append(tool.Env, "CC="+g.CC) + } else if os.Getenv("CC") != "" { + tool.Env = append(tool.Env, "CC="+os.Getenv("CC")) + } + return tool +} + +// Install creates an invocation of 'go install'. 
The command is configured to output +// executables to the given 'gobin' directory. +// +// This can be used to install auxiliary build tools without modifying the local go.mod and +// go.sum files. To install tools which are not required by go.mod, ensure that all module +// paths in 'args' contain a module version suffix (e.g. "...@latest"). +func (g *GoToolchain) Install(gobin string, args ...string) *exec.Cmd { + if !filepath.IsAbs(gobin) { + panic("GOBIN must be an absolute path") + } + tool := g.goTool("install") + tool.Env = append(tool.Env, "GOBIN="+gobin) + tool.Args = append(tool.Args, "-mod=readonly") + tool.Args = append(tool.Args, args...) + + // Ensure GOPATH is set because go install seems to absolutely require it. This uses + // 'go env' because it resolves the default value when GOPATH is not set in the + // environment. Ignore errors running go env and leave any complaining about GOPATH to + // the install command. + pathTool := g.goTool("env", "GOPATH") + output, _ := pathTool.Output() + tool.Env = append(tool.Env, "GOPATH="+string(output)) + return tool +} + +func (g *GoToolchain) goTool(command string, args ...string) *exec.Cmd { + if g.Root == "" { + g.Root = runtime.GOROOT() + } + tool := exec.Command(filepath.Join(g.Root, "bin", "go"), command) + tool.Args = append(tool.Args, args...) + tool.Env = append(tool.Env, "GOROOT="+g.Root) + + // Forward environment variables to the tool, but skip compiler target settings. + // TODO: what about GOARM? + skip := map[string]struct{}{"GOROOT": {}, "GOARCH": {}, "GOOS": {}, "GOBIN": {}, "CC": {}} + for _, e := range os.Environ() { + if i := strings.IndexByte(e, '='); i >= 0 { + if _, ok := skip[e[:i]]; ok { + continue + } + } + tool.Env = append(tool.Env, e) + } + return tool +} + +// DownloadGo downloads the Go binary distribution and unpacks it into a temporary +// directory. It returns the GOROOT of the unpacked toolchain. +func DownloadGo(csdb *ChecksumDB, version string) string { + // Shortcut: if the Go version that runs this script matches the + // requested version exactly, there is no need to download anything. + activeGo := strings.TrimPrefix(runtime.Version(), "go") + if activeGo == version { + log.Printf("-dlgo version matches active Go version %s, skipping download.", activeGo) + return runtime.GOROOT() + } + + ucache, err := os.UserCacheDir() + if err != nil { + log.Fatal(err) + } + + // For Arm architecture, GOARCH includes ISA version. + os := runtime.GOOS + arch := runtime.GOARCH + if arch == "arm" { + arch = "armv6l" + } + file := fmt.Sprintf("go%s.%s-%s", version, os, arch) + if os == "windows" { + file += ".zip" + } else { + file += ".tar.gz" + } + url := "https://golang.org/dl/" + file + dst := filepath.Join(ucache, file) + if err := csdb.DownloadFile(url, dst); err != nil { + log.Fatal(err) + } + + godir := filepath.Join(ucache, fmt.Sprintf("geth-go-%s-%s-%s", version, os, arch)) + if err := ExtractArchive(dst, godir); err != nil { + log.Fatal(err) + } + goroot, err := filepath.Abs(filepath.Join(godir, "go")) + if err != nil { + log.Fatal(err) + } + return goroot +} diff --git a/internal/build/util.go b/internal/build/util.go index 722cea6b0d..cd58fa9a61 100644 --- a/internal/build/util.go +++ b/internal/build/util.go @@ -29,7 +29,6 @@ import ( "os/exec" "path" "path/filepath" - "runtime" "strings" "text/template" ) @@ -111,19 +110,6 @@ func render(tpl *template.Template, outputFile string, outputPerm os.FileMode, x } } -// GoTool returns the command that runs a go tool. 
This uses go from GOROOT instead of PATH -// so that go commands executed by build use the same version of Go as the 'host' that runs -// build code. e.g. -// -// /usr/lib/go-1.12.1/bin/go run build/ci.go ... -// -// runs using go 1.12.1 and invokes go 1.12.1 tools from the same GOROOT. This is also important -// because runtime.Version checks on the host should match the tools that are run. -func GoTool(tool string, args ...string) *exec.Cmd { - args = append([]string{tool}, args...) - return exec.Command(filepath.Join(runtime.GOROOT(), "bin", "go"), args...) -} - // UploadSFTP uploads files to a remote host using the sftp command line tool. // The destination host may be specified either as [user@]host[: or as a URI in // the form sftp://[user@]host[:port]. @@ -182,7 +168,7 @@ func FindMainPackages(dir string) []string { // ExpandPackagesNoVendor expands a cmd/go import path pattern, skipping // vendored packages. -func ExpandPackagesNoVendor(patterns []string) []string { +func ExpandPackagesNoVendor(tc *GoToolchain, patterns []string) []string { expand := false for _, pkg := range patterns { if strings.Contains(pkg, "...") { @@ -190,7 +176,7 @@ func ExpandPackagesNoVendor(patterns []string) []string { } } if expand { - cmd := GoTool("list", patterns...) + cmd := tc.goTool("list", patterns...) out, err := cmd.CombinedOutput() if err != nil { log.Fatalf("package listing failed: %v\n%s", err, string(out)) diff --git a/internal/ethapi/api.go b/internal/ethapi/api.go index 4e927008c1..0b02e4ae63 100644 --- a/internal/ethapi/api.go +++ b/internal/ethapi/api.go @@ -924,13 +924,13 @@ func (args *CallArgs) ToMessage(globalGasCap uint64) types.Message { return msg } -// account indicates the overriding fields of account during the execution of -// a message call. +// OverrideAccount indicates the overriding fields of account during the execution +// of a message call. // Note, state and stateDiff can't be specified at the same time. If state is // set, message execution will only use the data in the given state. Otherwise // if statDiff is set, all diff will be applied first and then execute the call // message. -type account struct { +type OverrideAccount struct { Nonce *hexutil.Uint64 `json:"nonce"` Code *hexutil.Bytes `json:"code"` Balance **hexutil.Big `json:"balance"` @@ -938,18 +938,15 @@ type account struct { StateDiff *map[common.Hash]common.Hash `json:"stateDiff"` } -// Quorum - Multitenancy -// Before returning the result, we need to inspect the EVM and -// perform verification check -func DoCall(ctx context.Context, b Backend, args CallArgs, blockNrOrHash rpc.BlockNumberOrHash, overrides map[common.Address]account, vmCfg vm.Config, timeout time.Duration, globalGasCap uint64) (*core.ExecutionResult, error) { - defer func(start time.Time) { log.Debug("Executing EVM call finished", "runtime", time.Since(start)) }(time.Now()) +// StateOverride is the collection of overridden accounts. +type StateOverride map[common.Address]OverrideAccount - state, header, err := b.StateAndHeaderByNumberOrHash(ctx, blockNrOrHash) - if state == nil || err != nil { - return nil, err +// Apply overrides the fields of specified accounts into the given state. +func (diff *StateOverride) Apply(state *state.StateDB) error { + if diff == nil { + return nil } - // Override the fields of specified contracts before execution. - for addr, account := range overrides { + for addr, account := range *diff { // Override account nonce. 
if account.Nonce != nil { state.SetNonce(addr, uint64(*account.Nonce)) @@ -963,7 +960,7 @@ func DoCall(ctx context.Context, b Backend, args CallArgs, blockNrOrHash rpc.Blo state.SetBalance(addr, (*big.Int)(*account.Balance)) } if account.State != nil && account.StateDiff != nil { - return nil, fmt.Errorf("account %s has both 'state' and 'stateDiff'", addr.Hex()) + return fmt.Errorf("account %s has both 'state' and 'stateDiff'", addr.Hex()) } // Replace entire state if caller requires. if account.State != nil { @@ -976,6 +973,22 @@ func DoCall(ctx context.Context, b Backend, args CallArgs, blockNrOrHash rpc.Blo } } } + return nil +} + +// Quorum - Multitenancy +// Before returning the result, we need to inspect the EVM and +// perform verification check +func DoCall(ctx context.Context, b Backend, args CallArgs, blockNrOrHash rpc.BlockNumberOrHash, overrides *StateOverride, vmCfg vm.Config, timeout time.Duration, globalGasCap uint64) (*core.ExecutionResult, error) { + defer func(start time.Time) { log.Debug("Executing EVM call finished", "runtime", time.Since(start)) }(time.Now()) + + state, header, err := b.StateAndHeaderByNumberOrHash(ctx, blockNrOrHash) + if state == nil || err != nil { + return nil, err + } + /*if err := overrides.Apply(state.(eth.EthAPIState)); err != nil { + return nil, err + }*/ // Setup context so it may be cancelled when the call has completed // or, in case of unmetered gas, set up a context with a timeout. var cancel context.CancelFunc @@ -988,8 +1001,8 @@ func DoCall(ctx context.Context, b Backend, args CallArgs, blockNrOrHash rpc.Blo // this makes sure resources are cleaned up. defer cancel() - msg := args.ToMessage(globalGasCap) // Get a new instance of the EVM. + msg := args.ToMessage(globalGasCap) evm, vmError, err := b.GetEVM(ctx, msg, state, header, nil) if err != nil { return nil, err @@ -1057,13 +1070,13 @@ func (e *revertError) ErrorData() interface{} { // Quorum // - replaced the default 5s time out with the value passed in vm.calltimeout // - multi tenancy verification -func (s *PublicBlockChainAPI) Call(ctx context.Context, args CallArgs, blockNrOrHash rpc.BlockNumberOrHash, overrides *map[common.Address]account) (hexutil.Bytes, error) { - var accounts map[common.Address]account +func (s *PublicBlockChainAPI) Call(ctx context.Context, args CallArgs, blockNrOrHash rpc.BlockNumberOrHash, overrides *StateOverride) (hexutil.Bytes, error) { + var accounts map[common.Address]OverrideAccount if overrides != nil { accounts = *overrides } - - result, err := DoCall(ctx, s.b, args, blockNrOrHash, accounts, vm.Config{}, s.b.CallTimeOut(), s.b.RPCGasCap()) + stateOverride := StateOverride(accounts) + result, err := DoCall(ctx, s.b, args, blockNrOrHash, &stateOverride, vm.Config{}, s.b.CallTimeOut(), s.b.RPCGasCap()) if err != nil { return nil, err } @@ -1556,7 +1569,8 @@ func AccessList(ctx context.Context, b Backend, blockNrOrHash rpc.BlockNumberOrH } } // Copy the original db so we don't modify it - statedb := db.(*state.StateDB) + // statedb := db.Copy() + statedb := db.(*state.StateDB).Copy() msg := types.NewMessage(args.From, args.To, uint64(*args.Nonce), args.Value.ToInt(), uint64(*args.Gas), args.GasPrice.ToInt(), input, accessList, false) // Apply the transaction with the access list tracer @@ -2006,7 +2020,6 @@ func (args *SendTxArgs) setDefaults(ctx context.Context, b Backend) error { return errors.New(`contract creation without any data provided`) } } - // Estimate the gas usage if necessary.
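For reference, a hedged sketch of constructing the StateOverride introduced above by hand; the address, code bytes and balance are made-up values:

    // Pretend this account holds 1 ether and different bytecode for one call.
    addr := common.HexToAddress("0x00000000000000000000000000000000000000aa")
    code := hexutil.Bytes{0x60, 0x00} // PUSH1 0x00
    balance := (*hexutil.Big)(big.NewInt(1000000000000000000))
    overrides := &StateOverride{
        addr: OverrideAccount{Code: &code, Balance: &balance},
    }
    // The pointer can then flow into DoCall / eth_call as the overrides argument.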
if args.Gas == nil { // For backwards-compatibility reason, we try both input and data @@ -2035,6 +2048,7 @@ func (args *SendTxArgs) setDefaults(ctx context.Context, b Backend) error { id := (*hexutil.Big)(b.ChainConfig().ChainID) args.ChainID = id } + // Quorum if args.PrivateTxType == "" { args.PrivateTxType = "restricted" @@ -2077,7 +2091,6 @@ func (args *SendTxArgs) toTransaction() *types.Transaction { return types.NewTx(data) } -// TODO: this submits a signed transaction, if it is a signed private transaction that should already be recorded in the tx. // SubmitTransaction is a helper function that submits tx to txPool and logs a message. func SubmitTransaction(ctx context.Context, b Backend, tx *types.Transaction, privateFrom string, isRaw bool) (common.Hash, error) { // If the transaction fee cap is already specified, ensure the @@ -2151,7 +2164,6 @@ func SubmitTransaction(ctx context.Context, b Backend, tx *types.Transaction, pr log.EmitCheckpoint(log.TxCreated, "tx", tx.Hash().Hex(), "to", tx.To().Hex()) } return tx.Hash(), nil - } // runSimulation runs a simulation of the given transaction. @@ -2269,7 +2281,6 @@ func (s *PublicTransactionPoolAPI) SendTransaction(ctx context.Context, args Sen if err != nil { return common.Hash{}, err } - // Quorum if signed.IsPrivate() && s.b.IsPrivacyMarkerTransactionCreationEnabled() { pmt, err := createPrivacyMarkerTransaction(s.b, signed, &args.PrivateTxArgs) @@ -2319,7 +2330,6 @@ func (s *PublicTransactionPoolAPI) FillTransaction(ctx context.Context, args Sen tx.SetPrivate() } // /Quorum - data, err := tx.MarshalBinary() if err != nil { return nil, err @@ -2344,7 +2354,7 @@ func (s *PublicTransactionPoolAPI) SendRawTransaction(ctx context.Context, input func (s *PublicTransactionPoolAPI) SendRawPrivateTransaction(ctx context.Context, encodedTx hexutil.Bytes, args SendRawTxArgs) (common.Hash, error) { tx := new(types.Transaction) - if err := rlp.DecodeBytes(encodedTx, tx); err != nil { + if err := tx.UnmarshalBinary(encodedTx); err != nil { return common.Hash{}, err } @@ -2456,14 +2466,13 @@ func (s *PublicTransactionPoolAPI) SignTransaction(ctx context.Context, args Sen args.Gas = &gas } // End Quorum - if err := args.setDefaults(ctx, s.b); err != nil { return nil, err } + // Before actually sign the transaction, ensure the transaction fee is reasonable. if err := checkTxFee(args.GasPrice.ToInt(), uint64(*args.Gas), s.b.RPCTxFeeCap()); err != nil { return nil, err } - // Quorum toSign := args.toTransaction() if args.IsPrivate() { diff --git a/internal/web3ext/web3ext.go b/internal/web3ext/web3ext.go index b32c2b266d..dff152c194 100644 --- a/internal/web3ext/web3ext.go +++ b/internal/web3ext/web3ext.go @@ -642,12 +642,6 @@ web3._extend({ params: 2, inputFormatter: [null, web3._extend.formatters.inputBlockNumberFormatter], }), - new web3._extend.Method({ - name: 'storageRoot', - call: 'eth_storageRoot', - params: 2, - inputFormatter: [web3._extend.formatters.inputAddressFormatter, null] - }), // QUORUM new web3._extend.Method({ name: 'sendTransactionAsync', diff --git a/les/client.go b/les/client.go index d4ec8eafb2..9a5b1ebc8e 100644 --- a/les/client.go +++ b/les/client.go @@ -73,9 +73,9 @@ type LightEthereum struct { accountManager *accounts.Manager netRPCService *ethapi.PublicNetAPI - udpEnabled bool p2pServer *p2p.Server p2pConfig *p2p.Config + udpEnabled bool } // New creates an instance of the light client. 
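The SendRawPrivateTransaction hunk above swaps rlp.DecodeBytes for tx.UnmarshalBinary, which matters once EIP-2718 typed transactions exist: a typed transaction is encoded as a type byte followed by an opaque payload, which is not a valid RLP list. A minimal sketch of the distinction, assuming the usual core/types import:

    tx := new(types.Transaction)
    // Accepts both legacy (plain RLP) and typed (EIP-2718) encodings.
    if err := tx.UnmarshalBinary(encodedTx); err != nil {
        return common.Hash{}, err
    }
    // rlp.DecodeBytes(encodedTx, tx) would instead fail on a typed transaction,
    // whose first byte is the transaction type rather than an RLP list header.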
diff --git a/les/client_handler.go b/les/client_handler.go index f8e9edc9fe..73149975c3 100644 --- a/les/client_handler.go +++ b/les/client_handler.go @@ -28,6 +28,7 @@ import ( "github.com/ethereum/go-ethereum/core/forkid" "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/eth/downloader" + "github.com/ethereum/go-ethereum/eth/protocols/eth" "github.com/ethereum/go-ethereum/light" "github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/p2p" @@ -470,7 +471,7 @@ func (d *downloaderPeerNotify) registerPeer(p *serverPeer) { handler: h, peer: p, } - h.downloader.RegisterLightPeer(p.id, ethVersion, pc) + h.downloader.RegisterLightPeer(p.id, eth.ETH65, pc) } func (d *downloaderPeerNotify) unregisterPeer(p *serverPeer) { diff --git a/les/flowcontrol/control.go b/les/flowcontrol/control.go index 4f0de82318..cf30dba17a 100644 --- a/les/flowcontrol/control.go +++ b/les/flowcontrol/control.go @@ -88,6 +88,9 @@ func NewClientNode(cm *ClientManager, params ServerParams) *ClientNode { // Disconnect should be called when a client is disconnected func (node *ClientNode) Disconnect() { + if node == nil { + return + } node.lock.Lock() defer node.lock.Unlock() diff --git a/les/metrics.go b/les/metrics.go index d356326b76..07d3133c95 100644 --- a/les/metrics.go +++ b/les/metrics.go @@ -71,7 +71,6 @@ var ( connectionTimer = metrics.NewRegisteredTimer("les/connection/duration", nil) serverConnectionGauge = metrics.NewRegisteredGauge("les/connection/server", nil) - clientConnectionGauge = metrics.NewRegisteredGauge("les/connection/client", nil) totalCapacityGauge = metrics.NewRegisteredGauge("les/server/totalCapacity", nil) totalRechargeGauge = metrics.NewRegisteredGauge("les/server/totalRecharge", nil) diff --git a/les/peer.go b/les/peer.go index f6cc94dfad..c6c672942b 100644 --- a/les/peer.go +++ b/les/peer.go @@ -1099,7 +1099,6 @@ func (p *clientPeer) Handshake(td *big.Int, head common.Hash, headNum uint64, ge // set default announceType on server side p.announceType = announceTypeSimple } - p.fcClient = flowcontrol.NewClientNode(server.fcManager, p.fcParams) } return nil }) diff --git a/les/server.go b/les/server.go index d44b1b57d4..c135e65f2d 100644 --- a/les/server.go +++ b/les/server.go @@ -212,17 +212,25 @@ func (s *LesServer) Stop() error { close(s.closeCh) s.clientPool.Stop() - s.serverset.close() + if s.serverset != nil { + s.serverset.close() + } s.peers.close() s.fcManager.Stop() s.costTracker.stop() s.handler.stop() s.servingQueue.stop() - s.vfluxServer.Stop() + if s.vfluxServer != nil { + s.vfluxServer.Stop() + } // Note, bloom trie indexer is closed by parent bloombits indexer. - s.chtIndexer.Close() - s.lesDb.Close() + if s.chtIndexer != nil { + s.chtIndexer.Close() + } + if s.lesDb != nil { + s.lesDb.Close() + } s.wg.Wait() log.Info("Les server stopped") diff --git a/les/server_handler.go b/les/server_handler.go index 5e12136d90..80fcf1c44e 100644 --- a/les/server_handler.go +++ b/les/server_handler.go @@ -30,6 +30,7 @@ import ( "github.com/ethereum/go-ethereum/core/state" "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/ethdb" + "github.com/ethereum/go-ethereum/les/flowcontrol" "github.com/ethereum/go-ethereum/light" "github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/metrics" @@ -41,7 +42,6 @@ import ( const ( softResponseLimit = 2 * 1024 * 1024 // Target maximum size of returned blocks, headers or node data. 
estHeaderRlpSize = 500 // Approximate size of an RLP encoded block header - ethVersion = 64 // equivalent eth version for the downloader MaxHeaderFetch = 192 // Amount of block headers to be fetched per retrieval request MaxBodyFetch = 32 // Amount of block bodies to be fetched per retrieval request @@ -123,26 +123,27 @@ func (h *serverHandler) handle(p *clientPeer) error { p.Log().Debug("Light Ethereum handshake failed", "err", err) return err } - + // Connected to another server, no messages expected, just wait for disconnection if p.server { if err := h.server.serverset.register(p); err != nil { return err } - // connected to another server, no messages expected, just wait for disconnection _, err := p.rw.ReadMsg() h.server.serverset.unregister(p) return err } - defer p.fcClient.Disconnect() // set by handshake if it's not another server + // Setup flow control mechanism for the peer + p.fcClient = flowcontrol.NewClientNode(h.server.fcManager, p.fcParams) + defer p.fcClient.Disconnect() - // Reject light clients if server is not synced. - // - // Put this checking here, so that "non-synced" les-server peers are still allowed - // to keep the connection. + // Reject light clients if server is not synced. Put this checking here, so + // that "non-synced" les-server peers are still allowed to keep the connection. if !h.synced() { p.Log().Debug("Light server not synced, rejecting peer") return p2p.DiscRequested } + + // Register the peer into the peerset and clientpool if err := h.server.peers.register(p); err != nil { return err } @@ -151,19 +152,14 @@ func (h *serverHandler) handle(p *clientPeer) error { p.Log().Debug("Client pool already closed") return p2p.DiscRequested } - activeCount, _ := h.server.clientPool.Active() - clientConnectionGauge.Update(int64(activeCount)) p.connectedAt = mclock.Now() var wg sync.WaitGroup // Wait group used to track all in-flight task routines. - defer func() { wg.Wait() // Ensure all background task routines have exited. 
h.server.clientPool.Unregister(p) h.server.peers.unregister(p.ID()) p.balance = nil - activeCount, _ := h.server.clientPool.Active() - clientConnectionGauge.Update(int64(activeCount)) connectionTimer.Update(time.Duration(mclock.Now() - p.connectedAt)) }() diff --git a/les/test_helper.go b/les/test_helper.go index 87b53e6971..86d7a68a14 100644 --- a/les/test_helper.go +++ b/les/test_helper.go @@ -189,7 +189,7 @@ func testIndexers(db ethdb.Database, odr light.OdrBackend, config *light.Indexer return indexers[:] } -func newTestClientHandler(backend *backends.SimulatedBackend, odr *LesOdr, indexers []*core.ChainIndexer, db ethdb.Database, peers *serverPeerSet, ulcServers []string, ulcFraction int) *clientHandler { +func newTestClientHandler(backend *backends.SimulatedBackend, odr *LesOdr, indexers []*core.ChainIndexer, db ethdb.Database, peers *serverPeerSet, ulcServers []string, ulcFraction int) (*clientHandler, func()) { var ( evmux = new(event.TypeMux) engine = ethash.NewFaker() @@ -245,10 +245,12 @@ func newTestClientHandler(backend *backends.SimulatedBackend, odr *LesOdr, index client.oracle.Start(backend) } client.handler.start() - return client.handler + return client.handler, func() { + client.handler.stop() + } } -func newTestServerHandler(blocks int, indexers []*core.ChainIndexer, db ethdb.Database, clock mclock.Clock) (*serverHandler, *backends.SimulatedBackend) { +func newTestServerHandler(blocks int, indexers []*core.ChainIndexer, db ethdb.Database, clock mclock.Clock) (*serverHandler, *backends.SimulatedBackend, func()) { var ( gspec = core.Genesis{ Config: params.AllEthashProtocolChanges, @@ -314,7 +316,8 @@ func newTestServerHandler(blocks int, indexers []*core.ChainIndexer, db ethdb.Da } server.servingQueue.setThreads(4) server.handler.start() - return server.handler, simulation + closer := func() { server.Stop() } + return server.handler, simulation, closer } func alwaysTrueFn() bool { @@ -600,8 +603,8 @@ func newClientServerEnv(t *testing.T, config testnetConfig) (*testServer, *testC ccIndexer, cbIndexer, cbtIndexer := cIndexers[0], cIndexers[1], cIndexers[2] odr.SetIndexers(ccIndexer, cbIndexer, cbtIndexer) - server, b := newTestServerHandler(config.blocks, sindexers, sdb, clock) - client := newTestClientHandler(b, odr, cIndexers, cdb, speers, config.ulcServers, config.ulcFraction) + server, b, serverClose := newTestServerHandler(config.blocks, sindexers, sdb, clock) + client, clientClose := newTestClientHandler(b, odr, cIndexers, cdb, speers, config.ulcServers, config.ulcFraction) scIndexer.Start(server.blockchain) sbIndexer.Start(server.blockchain) @@ -658,7 +661,10 @@ func newClientServerEnv(t *testing.T, config testnetConfig) (*testServer, *testC cbIndexer.Close() scIndexer.Close() sbIndexer.Close() + dist.close() + serverClose() b.Close() + clientClose() } return s, c, teardown } diff --git a/les/vflux/server/balance.go b/les/vflux/server/balance.go index 01e645a16a..2bc1ddd189 100644 --- a/les/vflux/server/balance.go +++ b/les/vflux/server/balance.go @@ -358,11 +358,15 @@ func (n *nodeBalance) estimatePriority(capacity uint64, addBalance int64, future if bias > 0 { b = n.reducedBalance(b, now+mclock.AbsTime(future), bias, capacity, 0) } - // Note: we subtract one from the estimated priority in order to ensure that biased - // estimates are always lower than actual priorities, even if the bias is very small. + pri := n.balanceToPriority(now, b, capacity) + // Ensure that biased estimates are always lower than actual priorities, even if + // the bias is very small. 
// This ensures that two nodes will not ping-pong update signals forever if both of // them have zero estimated priority drop in the projected future. - pri := n.balanceToPriority(now, b, capacity) - 1 + current := n.balanceToPriority(now, n.balance, capacity) + if pri >= current { + pri = current - 1 + } if update { n.addCallback(balanceCallbackUpdate, pri, n.signalPriorityUpdate) } diff --git a/les/vflux/server/balance_test.go b/les/vflux/server/balance_test.go index 66f0d1f301..5af89c18ab 100644 --- a/les/vflux/server/balance_test.go +++ b/les/vflux/server/balance_test.go @@ -283,7 +283,7 @@ func TestEstimatedPriority(t *testing.T) { {time.Second, 3 * time.Second, 1000000000, 48}, // All positive balance is used up - {time.Second * 55, 0, 0, 0}, + {time.Second * 55, 0, 0, -1}, // 1 minute estimated time cost, 4/58 * 10^9 estimated request cost per sec. {0, time.Minute, 0, -int64(time.Minute) - int64(time.Second)*120/29}, @@ -292,8 +292,8 @@ func TestEstimatedPriority(t *testing.T) { b.clock.Run(i.runTime) node.RequestServed(i.reqCost) priority := node.estimatePriority(1000000000, 0, i.futureTime, 0, false) - if priority != i.priority-1 { - t.Fatalf("Estimated priority mismatch, want %v, got %v", i.priority-1, priority) + if priority != i.priority { + t.Fatalf("Estimated priority mismatch, want %v, got %v", i.priority, priority) } } } diff --git a/les/vflux/server/clientpool.go b/les/vflux/server/clientpool.go index 2e5fdd0ee7..351961b74e 100644 --- a/les/vflux/server/clientpool.go +++ b/les/vflux/server/clientpool.go @@ -103,14 +103,7 @@ func NewClientPool(balanceDb ethdb.KeyValueStore, minCap uint64, connectedBias t if c, ok := ns.GetField(node, setup.clientField).(clientPeer); ok { timeout = c.InactiveAllowance() } - if timeout > 0 { - ns.AddTimeout(node, setup.inactiveFlag, timeout) - } else { - // Note: if capacity is immediately available then priorityPool will set the active - // flag simultaneously with removing the inactive flag and therefore this will not - // initiate disconnection - ns.SetStateSub(node, nodestate.Flags{}, setup.inactiveFlag, 0) - } + ns.AddTimeout(node, setup.inactiveFlag, timeout) } if oldState.Equals(setup.inactiveFlag) && newState.Equals(setup.inactiveFlag.Or(setup.priorityFlag)) { ns.SetStateSub(node, setup.inactiveFlag, nodestate.Flags{}, 0) // priority gained; remove timeout @@ -246,12 +239,11 @@ func (cp *ClientPool) SetCapacity(node *enode.Node, reqCap uint64, bias time.Dur maxTarget = curve.maxCapacity(func(capacity uint64) int64 { return balance.estimatePriority(capacity, 0, 0, bias, false) }) - if maxTarget <= capacity { + if maxTarget < reqCap { return } - if maxTarget > reqCap { - maxTarget = reqCap - } + maxTarget = reqCap + // Specify a narrow target range that allows a limited number of fine step // iterations minTarget = maxTarget - maxTarget/20 diff --git a/les/vflux/server/clientpool_test.go b/les/vflux/server/clientpool_test.go index 9503121697..3316227ce0 100644 --- a/les/vflux/server/clientpool_test.go +++ b/les/vflux/server/clientpool_test.go @@ -326,12 +326,13 @@ func TestPaidClientKickedOut(t *testing.T) { if cap := connect(pool, newPoolTestPeer(11, kickedCh)); cap == 0 { t.Fatalf("Free client should be accepted") } + clock.Run(0) select { case id := <-kickedCh: if id != 0 { t.Fatalf("Kicked client mismatch, want %v, got %v", 0, id) } - case <-time.NewTimer(time.Second).C: + default: t.Fatalf("timeout") } } @@ -399,9 +400,10 @@ func TestFreeClientKickedOut(t *testing.T) { if cap := connect(pool, newPoolTestPeer(10, kicked)); cap != 0 
{ t.Fatalf("New free client should be rejected") } + clock.Run(0) select { case <-kicked: - case <-time.NewTimer(time.Second).C: + default: t.Fatalf("timeout") } disconnect(pool, newPoolTestPeer(10, kicked)) @@ -409,13 +411,15 @@ func TestFreeClientKickedOut(t *testing.T) { for i := 0; i < 10; i++ { connect(pool, newPoolTestPeer(i+10, kicked)) } + clock.Run(0) + for i := 0; i < 10; i++ { select { case id := <-kicked: if id >= 10 { t.Fatalf("Old client should be kicked, now got: %d", id) } - case <-time.NewTimer(time.Second).C: + default: t.Fatalf("timeout") } } diff --git a/les/vflux/server/prioritypool.go b/les/vflux/server/prioritypool.go index 573a3570a4..480f77e6af 100644 --- a/les/vflux/server/prioritypool.go +++ b/les/vflux/server/prioritypool.go @@ -63,20 +63,22 @@ type priorityPool struct { ns *nodestate.NodeStateMachine clock mclock.Clock lock sync.Mutex - inactiveQueue *prque.Prque maxCount, maxCap uint64 minCap uint64 activeBias time.Duration capacityStepDiv, fineStepDiv uint64 + // The snapshot of priority pool for query. cachedCurve *capacityCurve ccUpdatedAt mclock.AbsTime ccUpdateForced bool - tempState []*ppNodeInfo // nodes currently in temporary state - // the following fields represent the temporary state if tempState is not empty + // Runtime status of prioritypool, represents the + // temporary state if tempState is not empty + tempState []*ppNodeInfo activeCount, activeCap uint64 activeQueue *prque.LazyQueue + inactiveQueue *prque.Prque } // ppNodeInfo is the internal node descriptor of priorityPool @@ -89,8 +91,9 @@ type ppNodeInfo struct { tempState bool // should only be true while the priorityPool lock is held tempCapacity uint64 // equals capacity when tempState is false + // the following fields only affect the temporary state and they are set to their - // default value when entering the temp state + // default value when leaving the temp state minTarget, stepDiv uint64 bias time.Duration } @@ -157,11 +160,6 @@ func newPriorityPool(ns *nodestate.NodeStateMachine, setup *serverSetup, clock m func (pp *priorityPool) requestCapacity(node *enode.Node, minTarget, maxTarget uint64, bias time.Duration) uint64 { pp.lock.Lock() pp.activeQueue.Refresh() - var updates []capUpdate - defer func() { - pp.lock.Unlock() - pp.updateFlags(updates) - }() if minTarget < pp.minCap { minTarget = pp.minCap @@ -175,12 +173,13 @@ func (pp *priorityPool) requestCapacity(node *enode.Node, minTarget, maxTarget u c, _ := pp.ns.GetField(node, pp.setup.queueField).(*ppNodeInfo) if c == nil { log.Error("requestCapacity called for unknown node", "id", node.ID()) + pp.lock.Unlock() return 0 } pp.setTempState(c) if maxTarget > c.capacity { - c.bias = bias - c.stepDiv = pp.fineStepDiv + pp.setTempStepDiv(c, pp.fineStepDiv) + pp.setTempBias(c, bias) } pp.setTempCapacity(c, maxTarget) c.minTarget = minTarget @@ -188,7 +187,9 @@ func (pp *priorityPool) requestCapacity(node *enode.Node, minTarget, maxTarget u pp.inactiveQueue.Remove(c.inactiveIndex) pp.activeQueue.Push(c) pp.enforceLimits() - updates = pp.finalizeChanges(c.tempCapacity >= minTarget && c.tempCapacity <= maxTarget && c.tempCapacity != c.capacity) + updates := pp.finalizeChanges(c.tempCapacity >= minTarget && c.tempCapacity <= maxTarget && c.tempCapacity != c.capacity) + pp.lock.Unlock() + pp.updateFlags(updates) return c.capacity } @@ -196,15 +197,11 @@ func (pp *priorityPool) requestCapacity(node *enode.Node, minTarget, maxTarget u func (pp *priorityPool) SetLimits(maxCount, maxCap uint64) { pp.lock.Lock() pp.activeQueue.Refresh() - 
var updates []capUpdate - defer func() { - pp.lock.Unlock() - pp.ns.Operation(func() { pp.updateFlags(updates) }) - }() - inc := (maxCount > pp.maxCount) || (maxCap > pp.maxCap) dec := (maxCount < pp.maxCount) || (maxCap < pp.maxCap) pp.maxCount, pp.maxCap = maxCount, maxCap + + var updates []capUpdate if dec { pp.enforceLimits() updates = pp.finalizeChanges(true) @@ -212,6 +209,8 @@ func (pp *priorityPool) SetLimits(maxCount, maxCap uint64) { if inc { updates = append(updates, pp.tryActivate(false)...) } + pp.lock.Unlock() + pp.ns.Operation(func() { pp.updateFlags(updates) }) } // setActiveBias sets the bias applied when trying to activate inactive nodes @@ -291,18 +290,15 @@ func (pp *priorityPool) inactivePriority(p *ppNodeInfo) int64 { func (pp *priorityPool) connectedNode(c *ppNodeInfo) { pp.lock.Lock() pp.activeQueue.Refresh() - var updates []capUpdate - defer func() { - pp.lock.Unlock() - pp.updateFlags(updates) - }() - if c.connected { + pp.lock.Unlock() return } c.connected = true pp.inactiveQueue.Push(c, pp.inactivePriority(c)) - updates = pp.tryActivate(false) + updates := pp.tryActivate(false) + pp.lock.Unlock() + pp.updateFlags(updates) } // disconnectedNode is called when a node has been removed from the pool (both inactiveFlag @@ -311,23 +307,22 @@ func (pp *priorityPool) connectedNode(c *ppNodeInfo) { func (pp *priorityPool) disconnectedNode(c *ppNodeInfo) { pp.lock.Lock() pp.activeQueue.Refresh() - var updates []capUpdate - defer func() { - pp.lock.Unlock() - pp.updateFlags(updates) - }() - if !c.connected { + pp.lock.Unlock() return } c.connected = false pp.activeQueue.Remove(c.activeIndex) pp.inactiveQueue.Remove(c.inactiveIndex) + + var updates []capUpdate if c.capacity != 0 { pp.setTempState(c) pp.setTempCapacity(c, 0) updates = pp.tryActivate(true) } + pp.lock.Unlock() + pp.updateFlags(updates) } // setTempState internally puts a node in a temporary state that can either be reverted @@ -342,27 +337,62 @@ func (pp *priorityPool) setTempState(c *ppNodeInfo) { if c.tempCapacity != c.capacity { // should never happen log.Error("tempCapacity != capacity when entering tempState") } + // Assign all the defaults to the temp state. c.minTarget = pp.minCap c.stepDiv = pp.capacityStepDiv + c.bias = 0 pp.tempState = append(pp.tempState, c) } +// unsetTempState revokes the temp status of the node and reset all internal +// fields to the default value. +func (pp *priorityPool) unsetTempState(c *ppNodeInfo) { + if !c.tempState { + return + } + c.tempState = false + if c.tempCapacity != c.capacity { // should never happen + log.Error("tempCapacity != capacity when leaving tempState") + } + c.minTarget = pp.minCap + c.stepDiv = pp.capacityStepDiv + c.bias = 0 +} + // setTempCapacity changes the capacity of a node in the temporary state and adjusts // activeCap and activeCount accordingly. Since this change is performed in the temporary // state it should be called after setTempState and before finalizeChanges. 
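The prioritypool edits above all follow one refactoring pattern: instead of deferring the unlock together with the flag notifications, state is mutated while the mutex is held, the mutex is released, and only then are the nodestate flag updates fired, so subscriber callbacks can safely re-enter the pool. A schematic of the pattern (pool, rebalanceLocked and notify are illustrative stand-ins, assuming a sync import):

    type pool struct {
        mu sync.Mutex
        // ...
    }

    func (p *pool) setLimits(limit int) {
        p.mu.Lock()
        updates := p.rebalanceLocked(limit) // mutate state under the lock
        p.mu.Unlock()                       // release before notifying
        p.notify(updates)                   // callbacks may re-enter the pool
    }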
-func (pp *priorityPool) setTempCapacity(n *ppNodeInfo, cap uint64) { - if !n.tempState { // should never happen +func (pp *priorityPool) setTempCapacity(c *ppNodeInfo, cap uint64) { + if !c.tempState { // should never happen log.Error("Node is not in temporary state") return } - pp.activeCap += cap - n.tempCapacity - if n.tempCapacity == 0 { + pp.activeCap += cap - c.tempCapacity + if c.tempCapacity == 0 { pp.activeCount++ } if cap == 0 { pp.activeCount-- } - n.tempCapacity = cap + c.tempCapacity = cap +} + +// setTempBias changes the connection bias of a node in the temporary state. +func (pp *priorityPool) setTempBias(c *ppNodeInfo, bias time.Duration) { + if !c.tempState { // should never happen + log.Error("Node is not in temporary state") + return + } + c.bias = bias +} + +// setTempStepDiv changes the capacity divisor of a node in the temporary state. +func (pp *priorityPool) setTempStepDiv(c *ppNodeInfo, stepDiv uint64) { + if !c.tempState { // should never happen + log.Error("Node is not in temporary state") + return + } + c.stepDiv = stepDiv } // enforceLimits enforces active node count and total capacity limits. It returns the @@ -412,10 +442,8 @@ func (pp *priorityPool) finalizeChanges(commit bool) (updates []capUpdate) { } else { pp.setTempCapacity(c, c.capacity) // revert activeCount/activeCap } - c.tempState = false - c.bias = 0 - c.stepDiv = pp.capacityStepDiv - c.minTarget = pp.minCap + pp.unsetTempState(c) + if c.connected { if c.capacity != 0 { pp.activeQueue.Push(c) @@ -462,13 +490,13 @@ func (pp *priorityPool) tryActivate(commit bool) []capUpdate { for pp.inactiveQueue.Size() > 0 { c := pp.inactiveQueue.PopItem().(*ppNodeInfo) pp.setTempState(c) + pp.setTempBias(c, pp.activeBias) pp.setTempCapacity(c, pp.minCap) - c.bias = pp.activeBias pp.activeQueue.Push(c) pp.enforceLimits() if c.tempCapacity > 0 { commit = true - c.bias = 0 + pp.setTempBias(c, 0) } else { break } @@ -483,14 +511,9 @@ func (pp *priorityPool) tryActivate(commit bool) []capUpdate { func (pp *priorityPool) updatePriority(node *enode.Node) { pp.lock.Lock() pp.activeQueue.Refresh() - var updates []capUpdate - defer func() { - pp.lock.Unlock() - pp.updateFlags(updates) - }() - c, _ := pp.ns.GetField(node, pp.setup.queueField).(*ppNodeInfo) if c == nil || !c.connected { + pp.lock.Unlock() return } pp.activeQueue.Remove(c.activeIndex) @@ -500,7 +523,9 @@ func (pp *priorityPool) updatePriority(node *enode.Node) { } else { pp.inactiveQueue.Push(c, pp.inactivePriority(c)) } - updates = pp.tryActivate(false) + updates := pp.tryActivate(false) + pp.lock.Unlock() + pp.updateFlags(updates) } // capacityCurve is a snapshot of the priority pool contents in a format that can efficiently diff --git a/log/format.go b/log/format.go index 421384cc1d..baf8fddac0 100644 --- a/log/format.go +++ b/log/format.go @@ -4,6 +4,7 @@ import ( "bytes" "encoding/json" "fmt" + "math/big" "reflect" "strconv" "strings" @@ -329,11 +330,20 @@ func formatLogfmtValue(value interface{}, term bool) string { return "nil" } - if t, ok := value.(time.Time); ok { + switch v := value.(type) { + case time.Time: // Performance optimization: No need for escaping since the provided // timeFormat doesn't have any escape characters, and escaping is // expensive. - return t.Format(timeFormat) + return v.Format(timeFormat) + + case *big.Int: + // Big ints get consumed by the Stringer clause so we need to handle + // them earlier on. 
+ if v == nil { + return "" + } + return formatLogfmtBigInt(v) } if term { if s, ok := value.(TerminalStringer); ok { @@ -349,8 +359,27 @@ func formatLogfmtValue(value interface{}, term bool) string { return strconv.FormatFloat(float64(v), floatFormat, 3, 64) case float64: return strconv.FormatFloat(v, floatFormat, 3, 64) - case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64: - return fmt.Sprintf("%d", value) + case int8: + return strconv.FormatInt(int64(v), 10) + case uint8: + return strconv.FormatInt(int64(v), 10) + case int16: + return strconv.FormatInt(int64(v), 10) + case uint16: + return strconv.FormatInt(int64(v), 10) + // Larger integers get thousands separators. + case int: + return FormatLogfmtInt64(int64(v)) + case int32: + return FormatLogfmtInt64(int64(v)) + case int64: + return FormatLogfmtInt64(v) + case uint: + return FormatLogfmtUint64(uint64(v)) + case uint32: + return FormatLogfmtUint64(uint64(v)) + case uint64: + return FormatLogfmtUint64(v) case string: return escapeString(v) default: @@ -358,6 +387,87 @@ func formatLogfmtValue(value interface{}, term bool) string { } } +// FormatLogfmtInt64 formats n with thousand separators. +func FormatLogfmtInt64(n int64) string { + if n < 0 { + return formatLogfmtUint64(uint64(-n), true) + } + return formatLogfmtUint64(uint64(n), false) +} + +// FormatLogfmtUint64 formats n with thousand separators. +func FormatLogfmtUint64(n uint64) string { + return formatLogfmtUint64(n, false) +} + +func formatLogfmtUint64(n uint64, neg bool) string { + // Small numbers are fine as is + if n < 100000 { + if neg { + return strconv.Itoa(-int(n)) + } else { + return strconv.Itoa(int(n)) + } + } + // Large numbers should be split + const maxLength = 26 + + var ( + out = make([]byte, maxLength) + i = maxLength - 1 + comma = 0 + ) + for ; n > 0; i-- { + if comma == 3 { + comma = 0 + out[i] = ',' + } else { + comma++ + out[i] = '0' + byte(n%10) + n /= 10 + } + } + if neg { + out[i] = '-' + i-- + } + return string(out[i+1:]) +} + +// formatLogfmtBigInt formats n with thousand separators. 
+func formatLogfmtBigInt(n *big.Int) string { + if n.IsUint64() { + return FormatLogfmtUint64(n.Uint64()) + } + if n.IsInt64() { + return FormatLogfmtInt64(n.Int64()) + } + + var ( + text = n.String() + buf = make([]byte, len(text)+len(text)/3) + comma = 0 + i = len(buf) - 1 + ) + for j := len(text) - 1; j >= 0; j, i = j-1, i-1 { + c := text[j] + + switch { + case c == '-': + buf[i] = c + case comma == 3: + buf[i] = ',' + i-- + comma = 0 + fallthrough + default: + buf[i] = c + comma++ + } + } + return string(buf[i+1:]) +} + // escapeString checks if the provided string needs escaping/quoting, and // calls strconv.Quote if needed func escapeString(s string) string { diff --git a/log/format_test.go b/log/format_test.go new file mode 100644 index 0000000000..d7e0a95768 --- /dev/null +++ b/log/format_test.go @@ -0,0 +1,95 @@ +package log + +import ( + "math" + "math/big" + "math/rand" + "testing" +) + +func TestPrettyInt64(t *testing.T) { + tests := []struct { + n int64 + s string + }{ + {0, "0"}, + {10, "10"}, + {-10, "-10"}, + {100, "100"}, + {-100, "-100"}, + {1000, "1000"}, + {-1000, "-1000"}, + {10000, "10000"}, + {-10000, "-10000"}, + {99999, "99999"}, + {-99999, "-99999"}, + {100000, "100,000"}, + {-100000, "-100,000"}, + {1000000, "1,000,000"}, + {-1000000, "-1,000,000"}, + {math.MaxInt64, "9,223,372,036,854,775,807"}, + {math.MinInt64, "-9,223,372,036,854,775,808"}, + } + for i, tt := range tests { + if have := FormatLogfmtInt64(tt.n); have != tt.s { + t.Errorf("test %d: format mismatch: have %s, want %s", i, have, tt.s) + } + } +} + +func TestPrettyUint64(t *testing.T) { + tests := []struct { + n uint64 + s string + }{ + {0, "0"}, + {10, "10"}, + {100, "100"}, + {1000, "1000"}, + {10000, "10000"}, + {99999, "99999"}, + {100000, "100,000"}, + {1000000, "1,000,000"}, + {math.MaxUint64, "18,446,744,073,709,551,615"}, + } + for i, tt := range tests { + if have := FormatLogfmtUint64(tt.n); have != tt.s { + t.Errorf("test %d: format mismatch: have %s, want %s", i, have, tt.s) + } + } +} + +func TestPrettyBigInt(t *testing.T) { + tests := []struct { + int string + s string + }{ + {"111222333444555678999", "111,222,333,444,555,678,999"}, + {"-111222333444555678999", "-111,222,333,444,555,678,999"}, + {"11122233344455567899900", "11,122,233,344,455,567,899,900"}, + {"-11122233344455567899900", "-11,122,233,344,455,567,899,900"}, + } + + for _, tt := range tests { + v, _ := new(big.Int).SetString(tt.int, 10) + if have := formatLogfmtBigInt(v); have != tt.s { + t.Errorf("invalid output %s, want %s", have, tt.s) + } + } +} + +var sink string + +func BenchmarkPrettyInt64Logfmt(b *testing.B) { + b.ReportAllocs() + for i := 0; i < b.N; i++ { + sink = FormatLogfmtInt64(rand.Int63()) + } +} + +func BenchmarkPrettyUint64Logfmt(b *testing.B) { + b.ReportAllocs() + for i := 0; i < b.N; i++ { + sink = FormatLogfmtUint64(rand.Uint64()) + } +} diff --git a/node/node.go b/node/node.go index 04626f71de..a094fc11d8 100644 --- a/node/node.go +++ b/node/node.go @@ -168,6 +168,14 @@ func New(conf *Config) (*Node, error) { node.server.Config.DataDir = node.config.DataDir // End Quorum + // Check HTTP/WS prefixes are valid. + if err := validatePrefix("HTTP", conf.HTTPPathPrefix); err != nil { + return nil, err + } + if err := validatePrefix("WebSocket", conf.WSPathPrefix); err != nil { + return nil, err + } + // Configure RPC servers. 
node.http = newHTTPServer(node.log, conf.HTTPTimeouts).withMultitenancy(node.config.EnableMultitenancy) node.ws = newHTTPServer(node.log, rpc.DefaultHTTPTimeouts).withMultitenancy(node.config.EnableMultitenancy) diff --git a/oss-fuzz.sh b/oss-fuzz.sh index f8152f0fad..a9bac03257 100644 --- a/oss-fuzz.sh +++ b/oss-fuzz.sh @@ -114,5 +114,10 @@ compile_fuzzer tests/fuzzers/bls12381 FuzzPairing fuzz_pairing compile_fuzzer tests/fuzzers/bls12381 FuzzMapG1 fuzz_map_g1 compile_fuzzer tests/fuzzers/bls12381 FuzzMapG2 fuzz_map_g2 +compile_fuzzer tests/fuzzers/bls12381 FuzzCrossG1Add fuzz_cross_g1_add +compile_fuzzer tests/fuzzers/bls12381 FuzzCrossG1MultiExp fuzz_cross_g1_multiexp +compile_fuzzer tests/fuzzers/bls12381 FuzzCrossG2Add fuzz_cross_g2_add +compile_fuzzer tests/fuzzers/bls12381 FuzzCrossPairing fuzz_cross_pairing + #TODO: move this to tests/fuzzers, if possible compile_fuzzer crypto/blake2b Fuzz fuzzBlake2b diff --git a/p2p/discover/v5_udp.go b/p2p/discover/v5_udp.go index eb01d95e93..71a39ea5a5 100644 --- a/p2p/discover/v5_udp.go +++ b/p2p/discover/v5_udp.go @@ -763,9 +763,16 @@ func (t *UDPv5) matchWithCall(fromID enode.ID, nonce v5wire.Nonce) (*callV5, err // handlePing sends a PONG response. func (t *UDPv5) handlePing(p *v5wire.Ping, fromID enode.ID, fromAddr *net.UDPAddr) { + remoteIP := fromAddr.IP + // Handle IPv4-mapped IPv6 addresses in the + // event the local node is bound to an + // IPv6 interface. + if remoteIP.To4() != nil { + remoteIP = remoteIP.To4() + } t.sendResponse(fromID, fromAddr, &v5wire.Pong{ ReqID: p.ReqID, - ToIP: fromAddr.IP, + ToIP: remoteIP, ToPort: uint16(fromAddr.Port), ENRSeq: t.localNode.Node().Seq(), }) diff --git a/p2p/discover/v5_udp_test.go b/p2p/discover/v5_udp_test.go index 292785bd51..f061f5ab41 100644 --- a/p2p/discover/v5_udp_test.go +++ b/p2p/discover/v5_udp_test.go @@ -597,6 +597,38 @@ func TestUDPv5_LocalNode(t *testing.T) { } } +func TestUDPv5_PingWithIPV4MappedAddress(t *testing.T) { + t.Parallel() + test := newUDPV5Test(t) + defer test.close() + + rawIP := net.IPv4(0xFF, 0x12, 0x33, 0xE5) + test.remoteaddr = &net.UDPAddr{ + IP: rawIP.To16(), + Port: 0, + } + remote := test.getNode(test.remotekey, test.remoteaddr).Node() + done := make(chan struct{}, 1) + + // This handler will truncate the IPv4-mapped IPv6 address. + go func() { + test.udp.handlePing(&v5wire.Ping{ENRSeq: 1}, remote.ID(), test.remoteaddr) + done <- struct{}{} + }() + test.waitPacketOut(func(p *v5wire.Pong, addr *net.UDPAddr, _ v5wire.Nonce) { + if len(p.ToIP) == net.IPv6len { + t.Error("Received untruncated ip address") + } + if len(p.ToIP) != net.IPv4len { + t.Errorf("Received ip address with incorrect length: %d", len(p.ToIP)) + } + if !p.ToIP.Equal(rawIP) { + t.Errorf("Received incorrect ip address: wanted %s but received %s", rawIP.String(), p.ToIP.String()) + } + }) + <-done +} + // udpV5Test is the framework for all tests above. // It runs the UDPv5 transport on a virtual socket and allows testing outgoing packets. type udpV5Test struct { diff --git a/p2p/dnsdisc/client.go b/p2p/dnsdisc/client.go index 3e4b50aadd..f2a4bed4c6 100644 --- a/p2p/dnsdisc/client.go +++ b/p2p/dnsdisc/client.go @@ -32,15 +32,17 @@ import ( "github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enr" lru "github.com/hashicorp/golang-lru" + "golang.org/x/sync/singleflight" "golang.org/x/time/rate" ) // Client discovers nodes by querying DNS servers.
type Client struct { - cfg Config - clock mclock.Clock - entries *lru.Cache - ratelimit *rate.Limiter + cfg Config + clock mclock.Clock + entries *lru.Cache + ratelimit *rate.Limiter + singleflight singleflight.Group } // Config holds configuration options for the client. @@ -135,17 +137,20 @@ func (c *Client) NewIterator(urls ...string) (enode.Iterator, error) { // resolveRoot retrieves a root entry via DNS. func (c *Client) resolveRoot(ctx context.Context, loc *linkEntry) (rootEntry, error) { - txts, err := c.cfg.Resolver.LookupTXT(ctx, loc.domain) - c.cfg.Logger.Trace("Updating DNS discovery root", "tree", loc.domain, "err", err) - if err != nil { - return rootEntry{}, err - } - for _, txt := range txts { - if strings.HasPrefix(txt, rootPrefix) { - return parseAndVerifyRoot(txt, loc) + e, err, _ := c.singleflight.Do(loc.str, func() (interface{}, error) { + txts, err := c.cfg.Resolver.LookupTXT(ctx, loc.domain) + c.cfg.Logger.Trace("Updating DNS discovery root", "tree", loc.domain, "err", err) + if err != nil { + return rootEntry{}, err } - } - return rootEntry{}, nameError{loc.domain, errNoRoot} + for _, txt := range txts { + if strings.HasPrefix(txt, rootPrefix) { + return parseAndVerifyRoot(txt, loc) + } + } + return rootEntry{}, nameError{loc.domain, errNoRoot} + }) + return e.(rootEntry), err } func parseAndVerifyRoot(txt string, loc *linkEntry) (rootEntry, error) { @@ -168,17 +173,21 @@ func (c *Client) resolveEntry(ctx context.Context, domain, hash string) (entry, if err := c.ratelimit.Wait(ctx); err != nil { return nil, err } - cacheKey := truncateHash(hash) if e, ok := c.entries.Get(cacheKey); ok { return e.(entry), nil } - e, err := c.doResolveEntry(ctx, domain, hash) - if err != nil { - return nil, err - } - c.entries.Add(cacheKey, e) - return e, nil + + ei, err, _ := c.singleflight.Do(cacheKey, func() (interface{}, error) { + e, err := c.doResolveEntry(ctx, domain, hash) + if err != nil { + return nil, err + } + c.entries.Add(cacheKey, e) + return e, nil + }) + e, _ := ei.(entry) + return e, err } // doResolveEntry fetches an entry via DNS. diff --git a/p2p/metrics.go b/p2p/metrics.go index be0d2f495e..1bb505cdfb 100644 --- a/p2p/metrics.go +++ b/p2p/metrics.go @@ -33,9 +33,6 @@ const ( // HandleHistName is the prefix of the per-packet serving time histograms. HandleHistName = "p2p/handle" - - // WaitHistName is the prefix of the per-packet (req only) waiting time histograms. - WaitHistName = "p2p/wait" ) var ( diff --git a/p2p/tracker/tracker.go b/p2p/tracker/tracker.go new file mode 100644 index 0000000000..69a49087e2 --- /dev/null +++ b/p2p/tracker/tracker.go @@ -0,0 +1,205 @@ +// Copyright 2021 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see . 
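The dnsdisc client above now routes root and entry lookups through golang.org/x/sync/singleflight, so concurrent requests for the same key share a single in-flight DNS query instead of racing. A minimal sketch of the primitive, with a made-up fetch function:

    var group singleflight.Group

    func lookupOnce(key string, fetch func() (string, error)) (string, error) {
        v, err, _ := group.Do(key, func() (interface{}, error) {
            return fetch() // runs once; concurrent callers share the result
        })
        if err != nil {
            return "", err
        }
        return v.(string), nil
    }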
+ +package tracker + +import ( + "container/list" + "fmt" + "sync" + "time" + + "github.com/ethereum/go-ethereum/log" + "github.com/ethereum/go-ethereum/metrics" +) + +const ( + // trackedGaugeName is the prefix of the per-packet request tracking. + trackedGaugeName = "p2p/tracked" + + // lostMeterName is the prefix of the per-packet request expirations. + lostMeterName = "p2p/lost" + + // staleMeterName is the prefix of the per-packet stale responses. + staleMeterName = "p2p/stale" + + // waitHistName is the prefix of the per-packet (req only) waiting time histograms. + waitHistName = "p2p/wait" + + // maxTrackedPackets is a huge number to act as a failsafe on the number of + // pending requests the node will track. It should never be hit unless an + // attacker figures out a way to spin requests. + maxTrackedPackets = 100000 +) + +// request tracks sent network requests which have not yet received a response. +type request struct { + peer string + version uint // Protocol version + + reqCode uint64 // Protocol message code of the request + resCode uint64 // Protocol message code of the expected response + + time time.Time // Timestamp when the request was made + expire *list.Element // Expiration marker to untrack it +} + +// Tracker is a pending network request tracker to measure how much time it takes +// a remote peer to respond. +type Tracker struct { + protocol string // Protocol capability identifier for the metrics + timeout time.Duration // Global timeout after which to drop a tracked packet + + pending map[uint64]*request // Currently pending requests + expire *list.List // Linked list tracking the expiration order + wake *time.Timer // Timer tracking the expiration of the next item + + lock sync.Mutex // Lock protecting from concurrent updates +} + +// New creates a new network request tracker to monitor how much time it takes to +// fill certain requests and how individual peers perform. +func New(protocol string, timeout time.Duration) *Tracker { + return &Tracker{ + protocol: protocol, + timeout: timeout, + pending: make(map[uint64]*request), + expire: list.New(), + } +} + +// Track adds a network request to the tracker to wait for a response to arrive +// or until the request is cancelled or times out. +func (t *Tracker) Track(peer string, version uint, reqCode uint64, resCode uint64, id uint64) { + if !metrics.Enabled { + return + } + t.lock.Lock() + defer t.lock.Unlock() + + // If there's a duplicate request, we've just random-collided (or more probably, + // we have a bug), report it. We could also add a metric, but we're not really + // expecting ourselves to be buggy, so a noisy warning should be enough.
+ if _, ok := t.pending[id]; ok { + log.Error("Network request id collision", "protocol", t.protocol, "version", version, "code", reqCode, "id", id) + return + } + // If we have too many pending requests, bail out instead of leaking memory + if pending := len(t.pending); pending >= maxTrackedPackets { + log.Error("Request tracker exceeded allowance", "pending", pending, "peer", peer, "protocol", t.protocol, "version", version, "code", reqCode) + return + } + // ID doesn't exist yet, start tracking it + t.pending[id] = &request{ + peer: peer, + version: version, + reqCode: reqCode, + resCode: resCode, + time: time.Now(), + expire: t.expire.PushBack(id), + } + g := fmt.Sprintf("%s/%s/%d/%#02x", trackedGaugeName, t.protocol, version, reqCode) + metrics.GetOrRegisterGauge(g, nil).Inc(1) + + // If we've just inserted the first item, start the expiration timer + if t.wake == nil { + t.wake = time.AfterFunc(t.timeout, t.clean) + } +} + +// clean is called automatically when a preset time passes without a response +// being delivered for the first network request. +func (t *Tracker) clean() { + t.lock.Lock() + defer t.lock.Unlock() + + // Expire anything within a certain threshold (might be no items at all if + // we raced with the delivery) + for t.expire.Len() > 0 { + // Stop iterating if the next pending request is still alive + var ( + head = t.expire.Front() + id = head.Value.(uint64) + req = t.pending[id] + ) + if time.Since(req.time) < t.timeout+5*time.Millisecond { + break + } + // Nope, dead, drop it + t.expire.Remove(head) + delete(t.pending, id) + + g := fmt.Sprintf("%s/%s/%d/%#02x", trackedGaugeName, t.protocol, req.version, req.reqCode) + metrics.GetOrRegisterGauge(g, nil).Dec(1) + + m := fmt.Sprintf("%s/%s/%d/%#02x", lostMeterName, t.protocol, req.version, req.reqCode) + metrics.GetOrRegisterMeter(m, nil).Mark(1) + } + t.schedule() +} + +// schedule starts a timer to trigger on the expiration of the first network +// packet. +func (t *Tracker) schedule() { + if t.expire.Len() == 0 { + t.wake = nil + return + } + t.wake = time.AfterFunc(time.Until(t.pending[t.expire.Front().Value.(uint64)].time.Add(t.timeout)), t.clean) +} + +// Fulfil fills a pending request, if any is available, reporting on various metrics.
+func (t *Tracker) Fulfil(peer string, version uint, code uint64, id uint64) { + if !metrics.Enabled { + return + } + t.lock.Lock() + defer t.lock.Unlock() + + // If it's a non existing request, track as stale response + req, ok := t.pending[id] + if !ok { + m := fmt.Sprintf("%s/%s/%d/%#02x", staleMeterName, t.protocol, version, code) + metrics.GetOrRegisterMeter(m, nil).Mark(1) + return + } + // If the response is funky, it might be some active attack + if req.peer != peer || req.version != version || req.resCode != code { + log.Warn("Network response id collision", + "have", fmt.Sprintf("%s:%s/%d:%d", peer, t.protocol, version, code), + "want", fmt.Sprintf("%s:%s/%d:%d", peer, t.protocol, req.version, req.resCode), + ) + return + } + // Everything matches, mark the request serviced and meter it + t.expire.Remove(req.expire) + delete(t.pending, id) + if req.expire.Prev() == nil { + if t.wake.Stop() { + t.schedule() + } + } + g := fmt.Sprintf("%s/%s/%d/%#02x", trackedGaugeName, t.protocol, req.version, req.reqCode) + metrics.GetOrRegisterGauge(g, nil).Dec(1) + + h := fmt.Sprintf("%s/%s/%d/%#02x", waitHistName, t.protocol, req.version, req.reqCode) + sampler := func() metrics.Sample { + return metrics.ResettingSample( + metrics.NewExpDecaySample(1028, 0.015), + ) + } + metrics.GetOrRegisterHistogramLazy(h, nil, sampler).Update(time.Since(req.time).Microseconds()) +} diff --git a/params/config.go b/params/config.go index 4517eca5ee..983e2f0e7b 100644 --- a/params/config.go +++ b/params/config.go @@ -24,8 +24,8 @@ import ( "strings" "github.com/ethereum/go-ethereum/common" - "github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/log" + "golang.org/x/crypto/sha3" ) // Genesis hashes to enforce below configs on. @@ -34,7 +34,7 @@ var ( RopstenGenesisHash = common.HexToHash("0x41941023680923e0fe4d74a34bdac8141f2540e3ae90623718e47d66d1ca4a2d") RinkebyGenesisHash = common.HexToHash("0x6341fd3daf94b748c72ced5a5b26028f2474f5f00d824504e4fa37a75767e177") GoerliGenesisHash = common.HexToHash("0xbf7e331f7f7c1dd2e05159666b3bf8bc7a8a3a9eb1d518969eab529dd9b88c1a") - YoloV3GenesisHash = common.HexToHash("0x374f07cc7fa7c251fc5f36849f574b43db43600526410349efdca2bcea14101a") + YoloV3GenesisHash = common.HexToHash("0xf1f2876e8500c77afcc03228757b39477eceffccf645b734967fe3c7e16967b7") ) // TrustedCheckpoints associates each known checkpoint with the genesis hash of @@ -247,21 +247,21 @@ var ( // // This configuration is intentionally not using keyed fields to force anyone // adding flags to the config to also have to set these fields. - AllEthashProtocolChanges = &ChainConfig{big.NewInt(1337), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, new(EthashConfig), nil, nil, nil, nil, nil, false, 32, 35, big.NewInt(0), big.NewInt(0), nil, nil, false, nil, nil} + AllEthashProtocolChanges = &ChainConfig{big.NewInt(1337), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, new(EthashConfig), nil, nil, nil, nil, nil, false, 32, 35, big.NewInt(0), big.NewInt(0), nil, nil, false, nil, nil} // AllCliqueProtocolChanges contains every protocol change (EIPs) introduced // and accepted by the Ethereum core developers into the Clique consensus. 
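Returning to the p2p/tracker package above, a hedged usage sketch: Track is called when a request is sent, Fulfil when the matching response arrives, and nothing is recorded unless metrics are enabled. The protocol name, version, timeout and message codes below are hypothetical stand-ins for a request/response pair:

    var reqTracker = tracker.New("eth", 5*time.Minute)

    func onRequestSent(peerID string, requestID uint64) {
        // 0x03/0x04 stand in for a request/response code pair on version 66.
        reqTracker.Track(peerID, 66, 0x03, 0x04, requestID)
    }

    func onResponse(peerID string, requestID uint64) {
        reqTracker.Fulfil(peerID, 66, 0x04, requestID)
    }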
// // This configuration is intentionally not using keyed fields to force anyone // adding flags to the config to also have to set these fields. - AllCliqueProtocolChanges = &ChainConfig{big.NewInt(10), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, &CliqueConfig{Period: 0, Epoch: 30000}, nil, nil, nil, nil, false, 32, 32, big.NewInt(0), big.NewInt(0), nil, nil, false, nil, nil} + AllCliqueProtocolChanges = &ChainConfig{big.NewInt(10), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, nil, &CliqueConfig{Period: 0, Epoch: 30000}, nil, nil, nil, nil, false, 32, 32, big.NewInt(0), big.NewInt(0), nil, nil, false, nil, nil} // Quorum chainID should 10 - TestChainConfig = &ChainConfig{big.NewInt(10), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, new(EthashConfig), nil, nil, nil, nil, nil, false, 32, 32, big.NewInt(0), big.NewInt(0), nil, nil, false, nil, nil} + TestChainConfig = &ChainConfig{big.NewInt(10), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, new(EthashConfig), nil, nil, nil, nil, nil, false, 32, 32, big.NewInt(0), big.NewInt(0), nil, nil, false, nil, nil} TestRules = TestChainConfig.Rules(new(big.Int)) - QuorumTestChainConfig = &ChainConfig{big.NewInt(10), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, nil, new(EthashConfig), nil, nil, nil, nil, nil, true, 64, 32, big.NewInt(0), big.NewInt(0), nil, big.NewInt(0), false, nil, nil} - QuorumMPSTestChainConfig = &ChainConfig{big.NewInt(10), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, nil, new(EthashConfig), nil, nil, nil, nil, nil, true, 64, 32, big.NewInt(0), big.NewInt(0), nil, big.NewInt(0), true, nil, nil} + QuorumTestChainConfig = &ChainConfig{big.NewInt(10), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, nil, nil, new(EthashConfig), nil, nil, nil, nil, nil, true, 64, 32, big.NewInt(0), big.NewInt(0), nil, big.NewInt(0), false, nil, nil} + QuorumMPSTestChainConfig = &ChainConfig{big.NewInt(10), big.NewInt(0), nil, false, big.NewInt(0), common.Hash{}, big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), big.NewInt(0), nil, nil, nil, nil, nil, new(EthashConfig), nil, nil, nil, nil, nil, true, 64, 32, big.NewInt(0), big.NewInt(0), nil, big.NewInt(0), true, nil, nil} ) // TrustedCheckpoint represents a set of post-processed trie roots (CHT and @@ -285,12 +285,18 @@ func (c *TrustedCheckpoint) HashEqual(hash common.Hash) bool { // Hash returns the hash of checkpoint's four key fields(index, sectionHead, chtRoot and bloomTrieRoot). 
func (c *TrustedCheckpoint) Hash() common.Hash { - buf := make([]byte, 8+3*common.HashLength) - binary.BigEndian.PutUint64(buf, c.SectionIndex) - copy(buf[8:], c.SectionHead.Bytes()) - copy(buf[8+common.HashLength:], c.CHTRoot.Bytes()) - copy(buf[8+2*common.HashLength:], c.BloomRoot.Bytes()) - return crypto.Keccak256Hash(buf) + var sectionIndex [8]byte + binary.BigEndian.PutUint64(sectionIndex[:], c.SectionIndex) + + w := sha3.NewLegacyKeccak256() + w.Write(sectionIndex[:]) + w.Write(c.SectionHead[:]) + w.Write(c.CHTRoot[:]) + w.Write(c.BloomRoot[:]) + + var h common.Hash + w.Sum(h[:0]) + return h } // Empty returns an indicator whether the checkpoint is regarded as empty. @@ -338,8 +344,9 @@ type ChainConfig struct { MuirGlacierBlock *big.Int `json:"muirGlacierBlock,omitempty"` // Eip-2384 (bomb delay) switch block (nil = no fork, 0 = already activated) BerlinBlock *big.Int `json:"berlinBlock,omitempty"` // Berlin switch block (nil = no fork, 0 = already on berlin) - YoloV3Block *big.Int `json:"yoloV3Block,omitempty"` // YOLO v3: Gas repricings TODO @holiman add EIP references - EWASMBlock *big.Int `json:"ewasmBlock,omitempty"` // EWASM switch block (nil = no fork, 0 = already activated) + YoloV3Block *big.Int `json:"yoloV3Block,omitempty"` // YOLO v3: Gas repricings TODO @holiman add EIP references + EWASMBlock *big.Int `json:"ewasmBlock,omitempty"` // EWASM switch block (nil = no fork, 0 = already activated) + CatalystBlock *big.Int `json:"catalystBlock,omitempty"` // Catalyst switch block (nil = no fork, 0 = already on catalyst) // Various consensus engines Ethash *EthashConfig `json:"ethash,omitempty"` @@ -467,7 +474,7 @@ func (c *ChainConfig) String() string { default: engine = "unknown" } - return fmt.Sprintf("{ChainID: %v Homestead: %v DAO: %v DAOSupport: %v EIP150: %v EIP155: %v EIP158: %v Byzantium: %v IsQuorum: %v Constantinople: %v TransactionSizeLimit: %v MaxCodeSize: %v Petersburg: %v Istanbul: %v, Muir Glacier: %v, Berlin: %v YOLO v3: %v PrivacyEnhancements: %v PrivacyPrecompile: %v EnableGasPriceBlock: %v Engine: %v}", + return fmt.Sprintf("{ChainID: %v Homestead: %v DAO: %v DAOSupport: %v EIP150: %v EIP155: %v EIP158: %v Byzantium: %v IsQuorum: %v Constantinople: %v TransactionSizeLimit: %v MaxCodeSize: %v Petersburg: %v Istanbul: %v, Muir Glacier: %v, Berlin: %v Catalyst: %v YOLO v3: %v PrivacyEnhancements: %v PrivacyPrecompile: %v EnableGasPriceBlock: %v Engine: %v}", c.ChainID, c.HomesteadBlock, c.DAOForkBlock, @@ -484,6 +491,7 @@ func (c *ChainConfig) String() string { c.IstanbulBlock, c.MuirGlacierBlock, c.BerlinBlock, + c.CatalystBlock, c.YoloV3Block, c.PrivacyEnhancementsBlock, //Quorum c.PrivacyPrecompileBlock, //Quorum @@ -563,6 +571,11 @@ func (c *ChainConfig) IsBerlin(num *big.Int) bool { return isForked(c.BerlinBlock, num) || isForked(c.YoloV3Block, num) } +// IsCatalyst returns whether num is either equal to the Merge fork block or greater. 
+func (c *ChainConfig) IsCatalyst(num *big.Int) bool { + return isForked(c.CatalystBlock, num) +} + // IsEWASM returns whether num represents a block number after the EWASM fork func (c *ChainConfig) IsEWASM(num *big.Int) bool { return isForked(c.EWASMBlock, num) @@ -1123,7 +1136,7 @@ type Rules struct { ChainID *big.Int IsHomestead, IsEIP150, IsEIP155, IsEIP158 bool IsByzantium, IsConstantinople, IsPetersburg, IsIstanbul bool - IsBerlin bool + IsBerlin, IsCatalyst bool // Quorum IsPrivacyEnhancementsEnabled bool IsPrivacyPrecompile bool @@ -1147,6 +1160,7 @@ func (c *ChainConfig) Rules(num *big.Int) Rules { IsPetersburg: c.IsPetersburg(num), IsIstanbul: c.IsIstanbul(num), IsBerlin: c.IsBerlin(num), + IsCatalyst: c.IsCatalyst(num), // Quorum IsPrivacyEnhancementsEnabled: c.IsPrivacyEnhancementsEnabled(num), IsPrivacyPrecompile: c.IsPrivacyPrecompileEnabled(num), diff --git a/params/version.go b/params/version.go index 94da6155a9..bfbaff5611 100644 --- a/params/version.go +++ b/params/version.go @@ -23,7 +23,7 @@ import ( const ( VersionMajor = 1 // Major version component of the current release VersionMinor = 10 // Minor version component of the current release - VersionPatch = 2 // Patch version component of the current release + VersionPatch = 3 // Patch version component of the current release VersionMeta = "stable" // Version metadata to append to the version string QuorumVersionMajor = 22 diff --git a/rpc/errors.go b/rpc/errors.go index dbfde8b196..4c06a745fb 100644 --- a/rpc/errors.go +++ b/rpc/errors.go @@ -18,6 +18,35 @@ package rpc import "fmt" +// HTTPError is returned by client operations when the HTTP status code of the +// response is not a 2xx status. +type HTTPError struct { + StatusCode int + Status string + Body []byte +} + +func (err HTTPError) Error() string { + if len(err.Body) == 0 { + return err.Status + } + return fmt.Sprintf("%v: %s", err.Status, err.Body) +} + +// Error wraps RPC errors, which contain an error code in addition to the message. +type Error interface { + Error() string // returns the message + ErrorCode() int // returns the code +} + +// A DataError contains some data in addition to the error message. +type DataError interface { + Error() string // returns the message + ErrorData() interface{} // returns the error data +} + +// Error types defined below are the built-in JSON-RPC errors. 
+ var ( _ Error = new(methodNotFoundError) _ Error = new(subscriptionNotFoundError) diff --git a/rpc/http.go b/rpc/http.go index c86719b1f9..fa6f21b80a 100644 --- a/rpc/http.go +++ b/rpc/http.go @@ -152,12 +152,9 @@ func DialHTTP(endpoint string) (*Client, error) { func (c *Client) sendHTTP(ctx context.Context, op *requestOp, msg interface{}) error { hc := c.writeConn.(*httpConn) respBody, err := hc.doRequest(ctx, msg) - if respBody != nil { - defer respBody.Close() - } - if err != nil { if respBody != nil { + defer respBody.Close() buf := new(bytes.Buffer) if _, err2 := buf.ReadFrom(respBody); err2 == nil { return fmt.Errorf("%v: %v", err, buf.String()) @@ -165,6 +162,8 @@ func (c *Client) sendHTTP(ctx context.Context, op *requestOp, msg interface{}) e } return err } + defer respBody.Close() + var respmsg jsonrpcMessage if err := json.NewDecoder(respBody).Decode(&respmsg); err != nil { return err @@ -210,7 +209,6 @@ func (hc *httpConn) doRequest(ctx context.Context, msg interface{}) (io.ReadClos req.Header = hc.headers.Clone() // Quorum - // do request if hc.credentialsProvider != nil { if token, err := hc.credentialsProvider(ctx); err != nil { log.Warn("unable to obtain http credentials from provider", "err", err) @@ -229,12 +227,23 @@ func (hc *httpConn) doRequest(ctx context.Context, msg interface{}) (io.ReadClos hc.mu.Unlock() + // do request resp, err := hc.client.Do(req) if err != nil { return nil, err } if resp.StatusCode < 200 || resp.StatusCode >= 300 { - return resp.Body, errors.New(resp.Status) + var buf bytes.Buffer + var body []byte + if _, err := buf.ReadFrom(resp.Body); err == nil { + body = buf.Bytes() + } + + return nil, HTTPError{ + Status: resp.Status, + StatusCode: resp.StatusCode, + Body: body, + } } return resp.Body, nil } diff --git a/rpc/http_test.go b/rpc/http_test.go index b75af67c52..97f8d44c39 100644 --- a/rpc/http_test.go +++ b/rpc/http_test.go @@ -123,3 +123,42 @@ func TestHTTPRespBodyUnlimited(t *testing.T) { t.Fatalf("response has wrong length %d, want %d", len(r), respLength) } } + +// Tests that an HTTP error results in an HTTPError instance +// being returned with the expected attributes. +func TestHTTPErrorResponse(t *testing.T) { + ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + http.Error(w, "error has occurred!", http.StatusTeapot) + })) + defer ts.Close() + + c, err := DialHTTP(ts.URL) + if err != nil { + t.Fatal(err) + } + + var r string + err = c.Call(&r, "test_method") + if err == nil { + t.Fatal("error was expected") + } + + httpErr, ok := err.(HTTPError) + if !ok { + t.Fatalf("unexpected error type %T", err) + } + + if httpErr.StatusCode != http.StatusTeapot { + t.Error("unexpected status code", httpErr.StatusCode) + } + if httpErr.Status != "418 I'm a teapot" { + t.Error("unexpected status text", httpErr.Status) + } + if body := string(httpErr.Body); body != "error has occurred!\n" { + t.Error("unexpected body", body) + } + + if errMsg := httpErr.Error(); errMsg != "418 I'm a teapot: error has occurred!\n" { + t.Error("unexpected error message", errMsg) + } +} diff --git a/rpc/types.go b/rpc/types.go index 21b7ebaf94..aec36327b0 100644 --- a/rpc/types.go +++ b/rpc/types.go @@ -35,18 +35,6 @@ type API struct { Public bool // indication if the methods must be considered safe for public use } -// Error wraps RPC errors, which contain an error code in addition to the message. 
-type Error interface { - Error() string // returns the message - ErrorCode() int // returns the code -} - -// A DataError contains some data in addition to the error message. -type DataError interface { - Error() string // returns the message - ErrorData() interface{} // returns the error data -} - // ServerCodec implements reading, parsing and writing RPC messages for the server side of // a RPC session. Implementations must be go-routine safe since the codec can be called in // multiple go-routines concurrently. diff --git a/signer/core/api.go b/signer/core/api.go index b751ab69a1..13660c6986 100644 --- a/signer/core/api.go +++ b/signer/core/api.go @@ -550,6 +550,14 @@ func (api *SignerAPI) SignTransaction(ctx context.Context, args SendTxArgs, meth return nil, err } } + if args.ChainID != nil { + requestedChainId := (*big.Int)(args.ChainID) + if api.chainID.Cmp(requestedChainId) != 0 { + log.Error("Signing request with wrong chain id", "requested", requestedChainId, "configured", api.chainID) + return nil, fmt.Errorf("requested chainid %d does not match the configuration of the signer", + requestedChainId) + } + } req := SignTxRequest{ Transaction: args, Meta: MetadataFromContext(ctx), diff --git a/signer/core/cliui.go b/signer/core/cliui.go index cbfb56c9df..e0375483c3 100644 --- a/signer/core/cliui.go +++ b/signer/core/cliui.go @@ -118,6 +118,18 @@ func (ui *CommandlineUI) ApproveTx(request *SignTxRequest) (SignTxResponse, erro fmt.Printf("gas: %v (%v)\n", request.Transaction.Gas, uint64(request.Transaction.Gas)) fmt.Printf("gasprice: %v wei\n", request.Transaction.GasPrice.ToInt()) fmt.Printf("nonce: %v (%v)\n", request.Transaction.Nonce, uint64(request.Transaction.Nonce)) + if chainId := request.Transaction.ChainID; chainId != nil { + fmt.Printf("chainid: %v\n", chainId) + } + if list := request.Transaction.AccessList; list != nil { + fmt.Printf("Accesslist\n") + for i, el := range *list { + fmt.Printf(" %d. %v\n", i, el.Address) + for j, slot := range el.StorageKeys { + fmt.Printf(" %d. %v\n", j, slot) + } + } + } if request.Transaction.Data != nil { d := *request.Transaction.Data if len(d) > 0 { diff --git a/signer/core/types.go b/signer/core/types.go index a7a460dd7e..bb1e88e9e4 100644 --- a/signer/core/types.go +++ b/signer/core/types.go @@ -76,9 +76,13 @@ type SendTxArgs struct { // We accept "data" and "input" for backwards-compatibility reasons. 
Data *hexutil.Bytes `json:"data"` Input *hexutil.Bytes `json:"input,omitempty"` + + // For non-legacy transactions + AccessList *types.AccessList `json:"accessList,omitempty"` + ChainID *hexutil.Big `json:"chainId,omitempty"` + // QUORUM IsPrivate bool `json:"isPrivate,omitempty"` - // END QUORUM } func (args SendTxArgs) String() string { @@ -96,11 +100,34 @@ func (args *SendTxArgs) toTransaction() (tx *types.Transaction) { } else if args.Input != nil { input = *args.Input } - if args.To == nil { - tx = types.NewContractCreation(uint64(args.Nonce), (*big.Int)(&args.Value), uint64(args.Gas), (*big.Int)(&args.GasPrice), input) + var to *common.Address + if args.To != nil { + _to := args.To.Address() + to = &_to + } + var data types.TxData + if args.AccessList == nil { + data = &types.LegacyTx{ + To: to, + Nonce: uint64(args.Nonce), + Gas: uint64(args.Gas), + GasPrice: (*big.Int)(&args.GasPrice), + Value: (*big.Int)(&args.Value), + Data: input, + } } else { - tx = types.NewTransaction(uint64(args.Nonce), args.To.Address(), (*big.Int)(&args.Value), (uint64)(args.Gas), (*big.Int)(&args.GasPrice), input) + data = &types.AccessListTx{ + To: to, + ChainID: (*big.Int)(args.ChainID), + Nonce: uint64(args.Nonce), + Gas: uint64(args.Gas), + GasPrice: (*big.Int)(&args.GasPrice), + Value: (*big.Int)(&args.Value), + Data: input, + AccessList: *args.AccessList, + } } + tx = types.NewTx(data) if args.IsPrivate { tx.SetPrivate() } diff --git a/tests/block_test.go b/tests/block_test.go index 6f015dc327..263a058ef2 100644 --- a/tests/block_test.go +++ b/tests/block_test.go @@ -28,7 +28,7 @@ func TestBlockchain(t *testing.T) { // For speedier CI-runs, the line below can be uncommented, so those are skipped. // For now, in hardfork-times (Berlin), we run the tests both as StateTests and // as blockchain tests, since the latter also covers things like receipt root - //bt.skipLoad(`^GeneralStateTests/`) + bt.skipLoad(`^GeneralStateTests/`) // Skip random failures due to selfish mining test bt.skipLoad(`.*bcForgedTest/bcForkUncle\.json`) diff --git a/tests/fuzzers/bls12381/bls12381_fuzz.go b/tests/fuzzers/bls12381/bls12381_fuzz.go new file mode 100644 index 0000000000..c0f452f3ed --- /dev/null +++ b/tests/fuzzers/bls12381/bls12381_fuzz.go @@ -0,0 +1,244 @@ +// Copyright 2021 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see . 
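The toTransaction rewrite above (signer/core/types.go) funnels both legacy and access-list requests through the new types.NewTx constructor. For reference, a minimal sketch of building and signing an EIP-2930 transaction the same way, outside the signer (chain id, nonce, gas values and addresses are illustrative):

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	chainID := big.NewInt(1337) // illustrative chain id
	to := common.HexToAddress("0x0000000000000000000000000000000000000001")

	// A type-1 transaction carries an explicit chain id and an access list.
	tx := types.NewTx(&types.AccessListTx{
		ChainID:  chainID,
		Nonce:    0,
		To:       &to,
		Value:    big.NewInt(1),
		Gas:      21000,
		GasPrice: big.NewInt(1),
		AccessList: types.AccessList{{
			Address:     to,
			StorageKeys: []common.Hash{{}},
		}},
	})
	// LatestSignerForChainID picks a signer that understands typed txs.
	signed, err := types.SignTx(tx, types.LatestSignerForChainID(chainID), key)
	if err != nil {
		panic(err)
	}
	fmt.Println("signed tx hash:", signed.Hash().Hex())
}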
+ +// +build gofuzz + +package bls + +import ( + "bytes" + "crypto/rand" + "fmt" + "io" + "math/big" + + gnark "github.com/consensys/gnark-crypto/ecc/bls12-381" + "github.com/consensys/gnark-crypto/ecc/bls12-381/fp" + "github.com/consensys/gnark-crypto/ecc/bls12-381/fr" + "github.com/ethereum/go-ethereum/crypto/bls12381" +) + +func FuzzCrossPairing(data []byte) int { + input := bytes.NewReader(data) + + // get random G1 points + kpG1, cpG1, err := getG1Points(input) + if err != nil { + return 0 + } + + // get random G2 points + kpG2, cpG2, err := getG2Points(input) + if err != nil { + return 0 + } + + // compute pairing using geth + engine := bls12381.NewPairingEngine() + engine.AddPair(kpG1, kpG2) + kResult := engine.Result() + + // compute pairing using gnark + cResult, err := gnark.Pair([]gnark.G1Affine{*cpG1}, []gnark.G2Affine{*cpG2}) + if err != nil { + panic(fmt.Sprintf("gnark/bls12381 encountered error: %v", err)) + } + + // compare result + if !(bytes.Equal(cResult.Marshal(), bls12381.NewGT().ToBytes(kResult))) { + panic("pairing mismatch gnark / geth ") + } + + return 1 +} + +func FuzzCrossG1Add(data []byte) int { + input := bytes.NewReader(data) + + // get random G1 points + kp1, cp1, err := getG1Points(input) + if err != nil { + return 0 + } + + // get random G1 points + kp2, cp2, err := getG1Points(input) + if err != nil { + return 0 + } + + // compute kp = kp1 + kp2 + g1 := bls12381.NewG1() + kp := bls12381.PointG1{} + g1.Add(&kp, kp1, kp2) + + // compute cp = cp1 + cp2 + _cp1 := new(gnark.G1Jac).FromAffine(cp1) + _cp2 := new(gnark.G1Jac).FromAffine(cp2) + cp := new(gnark.G1Affine).FromJacobian(_cp1.AddAssign(_cp2)) + + // compare result + if !(bytes.Equal(cp.Marshal(), g1.ToBytes(&kp))) { + panic("G1 point addition mismatch gnark / geth ") + } + + return 1 +} + +func FuzzCrossG2Add(data []byte) int { + input := bytes.NewReader(data) + + // get random G2 points + kp1, cp1, err := getG2Points(input) + if err != nil { + return 0 + } + + // get random G2 points + kp2, cp2, err := getG2Points(input) + if err != nil { + return 0 + } + + // compute kp = kp1 + kp2 + g2 := bls12381.NewG2() + kp := bls12381.PointG2{} + g2.Add(&kp, kp1, kp2) + + // compute cp = cp1 + cp2 + _cp1 := new(gnark.G2Jac).FromAffine(cp1) + _cp2 := new(gnark.G2Jac).FromAffine(cp2) + cp := new(gnark.G2Affine).FromJacobian(_cp1.AddAssign(_cp2)) + + // compare result + if !(bytes.Equal(cp.Marshal(), g2.ToBytes(&kp))) { + panic("G2 point addition mismatch gnark / geth ") + } + + return 1 +} + +func FuzzCrossG1MultiExp(data []byte) int { + var ( + input = bytes.NewReader(data) + gethScalars []*big.Int + gnarkScalars []fr.Element + gethPoints []*bls12381.PointG1 + gnarkPoints []gnark.G1Affine + ) + // n random scalars (max 17) + for i := 0; i < 17; i++ { + // note that geth/crypto/bls12381 works only with scalars <= 32bytes + s, err := randomScalar(input, fr.Modulus()) + if err != nil { + break + } + // get a random G1 point as basis + kp1, cp1, err := getG1Points(input) + if err != nil { + break + } + gethScalars = append(gethScalars, s) + var gnarkScalar = &fr.Element{} + gnarkScalar = gnarkScalar.SetBigInt(s).FromMont() + gnarkScalars = append(gnarkScalars, *gnarkScalar) + + gethPoints = append(gethPoints, new(bls12381.PointG1).Set(kp1)) + gnarkPoints = append(gnarkPoints, *cp1) + } + if len(gethScalars) == 0 { + return 0 + } + // compute multi exponentiation + g1 := bls12381.NewG1() + kp := bls12381.PointG1{} + if _, err := g1.MultiExp(&kp, gethPoints, gethScalars); err != nil { + panic(fmt.Sprintf("G1 multi 
exponentiation errored (geth): %v", err)) + } + // note that geth/crypto/bls12381.MultiExp mutates the scalars slice (and sets all the scalars to zero) + + // gnark multi exp + cp := new(gnark.G1Affine) + cp.MultiExp(gnarkPoints, gnarkScalars) + + // compare result + if !(bytes.Equal(cp.Marshal(), g1.ToBytes(&kp))) { + panic("G1 multi exponentiation mismatch gnark / geth ") + } + + return 1 +} + +func getG1Points(input io.Reader) (*bls12381.PointG1, *gnark.G1Affine, error) { + // sample a random scalar + s, err := randomScalar(input, fp.Modulus()) + if err != nil { + return nil, nil, err + } + + // compute a random point + cp := new(gnark.G1Affine) + _, _, g1Gen, _ := gnark.Generators() + cp.ScalarMultiplication(&g1Gen, s) + cpBytes := cp.Marshal() + + // marshal gnark point -> geth point + g1 := bls12381.NewG1() + kp, err := g1.FromBytes(cpBytes) + if err != nil { + panic(fmt.Sprintf("Could not marshal gnark.G1 -> geth.G1: %v", err)) + } + if !bytes.Equal(g1.ToBytes(kp), cpBytes) { + panic("bytes(gnark.G1) != bytes(geth.G1)") + } + + return kp, cp, nil +} + +func getG2Points(input io.Reader) (*bls12381.PointG2, *gnark.G2Affine, error) { + // sample a random scalar + s, err := randomScalar(input, fp.Modulus()) + if err != nil { + return nil, nil, err + } + + // compute a random point + cp := new(gnark.G2Affine) + _, _, _, g2Gen := gnark.Generators() + cp.ScalarMultiplication(&g2Gen, s) + cpBytes := cp.Marshal() + + // marshal gnark point -> geth point + g2 := bls12381.NewG2() + kp, err := g2.FromBytes(cpBytes) + if err != nil { + panic(fmt.Sprintf("Could not marshal gnark.G2 -> geth.G2: %v", err)) + } + if !bytes.Equal(g2.ToBytes(kp), cpBytes) { + panic("bytes(gnark.G2) != bytes(geth.G2)") + } + + return kp, cp, nil +} + +func randomScalar(r io.Reader, max *big.Int) (k *big.Int, err error) { + for { + k, err = rand.Int(r, max) + if err != nil || k.Sign() > 0 { + return + } + } +} diff --git a/tests/fuzzers/bls12381/bls_fuzzer.go b/tests/fuzzers/bls12381/precompile_fuzzer.go similarity index 100% rename from tests/fuzzers/bls12381/bls_fuzzer.go rename to tests/fuzzers/bls12381/precompile_fuzzer.go diff --git a/tests/fuzzers/bn256/bn256_fuzz.go b/tests/fuzzers/bn256/bn256_fuzz.go index c98fbc33ae..030ac19b3f 100644 --- a/tests/fuzzers/bn256/bn256_fuzz.go +++ b/tests/fuzzers/bn256/bn256_fuzz.go @@ -12,12 +12,12 @@ import ( "io" "math/big" - gurvy "github.com/consensys/gurvy/bn256" + "github.com/consensys/gnark-crypto/ecc/bn254" cloudflare "github.com/ethereum/go-ethereum/crypto/bn256/cloudflare" google "github.com/ethereum/go-ethereum/crypto/bn256/google" ) -func getG1Points(input io.Reader) (*cloudflare.G1, *google.G1, *gurvy.G1Affine) { +func getG1Points(input io.Reader) (*cloudflare.G1, *google.G1, *bn254.G1Affine) { _, xc, err := cloudflare.RandomG1(input) if err != nil { // insufficient input @@ -25,16 +25,16 @@ func getG1Points(input io.Reader) (*cloudflare.G1, *google.G1, *gurvy.G1Affine) } xg := new(google.G1) if _, err := xg.Unmarshal(xc.Marshal()); err != nil { - panic(fmt.Sprintf("Could not marshal cloudflare -> google:", err)) + panic(fmt.Sprintf("Could not marshal cloudflare -> google: %v", err)) } - xs := new(gurvy.G1Affine) + xs := new(bn254.G1Affine) if err := xs.Unmarshal(xc.Marshal()); err != nil { - panic(fmt.Sprintf("Could not marshal cloudflare -> consensys:", err)) + panic(fmt.Sprintf("Could not marshal cloudflare -> gnark: %v", err)) } return xc, xg, xs } -func getG2Points(input io.Reader) (*cloudflare.G2, *google.G2, *gurvy.G2Affine) { +func getG2Points(input 
io.Reader) (*cloudflare.G2, *google.G2, *bn254.G2Affine) { _, xc, err := cloudflare.RandomG2(input) if err != nil { // insufficient input @@ -42,11 +42,11 @@ func getG2Points(input io.Reader) (*cloudflare.G2, *google.G2, *gurvy.G2Affine) } xg := new(google.G2) if _, err := xg.Unmarshal(xc.Marshal()); err != nil { - panic(fmt.Sprintf("Could not marshal cloudflare -> google:", err)) + panic(fmt.Sprintf("Could not marshal cloudflare -> google: %v", err)) } - xs := new(gurvy.G2Affine) + xs := new(bn254.G2Affine) if err := xs.Unmarshal(xc.Marshal()); err != nil { - panic(fmt.Sprintf("Could not marshal cloudflare -> consensys:", err)) + panic(fmt.Sprintf("Could not marshal cloudflare -> gnark: %v", err)) } return xc, xg, xs } @@ -70,16 +70,16 @@ func FuzzAdd(data []byte) int { rg := new(google.G1) rg.Add(xg, yg) - tmpX := new(gurvy.G1Jac).FromAffine(xs) - tmpY := new(gurvy.G1Jac).FromAffine(ys) - rs := new(gurvy.G1Affine).FromJacobian(tmpX.AddAssign(tmpY)) + tmpX := new(bn254.G1Jac).FromAffine(xs) + tmpY := new(bn254.G1Jac).FromAffine(ys) + rs := new(bn254.G1Affine).FromJacobian(tmpX.AddAssign(tmpY)) if !bytes.Equal(rc.Marshal(), rg.Marshal()) { panic("add mismatch: cloudflare/google") } if !bytes.Equal(rc.Marshal(), rs.Marshal()) { - panic("add mismatch: cloudflare/consensys") + panic("add mismatch: cloudflare/gnark") } return 1 } @@ -112,16 +112,16 @@ func FuzzMul(data []byte) int { rg := new(google.G1) rg.ScalarMult(pg, new(big.Int).SetBytes(buf)) - rs := new(gurvy.G1Jac) - psJac := new(gurvy.G1Jac).FromAffine(ps) + rs := new(bn254.G1Jac) + psJac := new(bn254.G1Jac).FromAffine(ps) rs.ScalarMultiplication(psJac, new(big.Int).SetBytes(buf)) - rsAffine := new(gurvy.G1Affine).FromJacobian(rs) + rsAffine := new(bn254.G1Affine).FromJacobian(rs) if !bytes.Equal(rc.Marshal(), rg.Marshal()) { panic("scalar mul mismatch: cloudflare/google") } if !bytes.Equal(rc.Marshal(), rsAffine.Marshal()) { - panic("scalar mul mismatch: cloudflare/consensys") + panic("scalar mul mismatch: cloudflare/gnark") } return 1 } @@ -136,18 +136,20 @@ func FuzzPair(data []byte) int { if tc == nil { return 0 } + // Pair the two points and ensure they result in the same output - clPair := cloudflare.PairingCheck([]*cloudflare.G1{pc}, []*cloudflare.G2{tc}) - if clPair != google.PairingCheck([]*google.G1{pg}, []*google.G2{tg}) { + clPair := cloudflare.Pair(pc, tc).Marshal() + gPair := google.Pair(pg, tg).Marshal() + if !bytes.Equal(clPair, gPair) { panic("pairing mismatch: cloudflare/google") } - coPair, err := gurvy.PairingCheck([]gurvy.G1Affine{*ps}, []gurvy.G2Affine{*ts}) + cPair, err := bn254.Pair([]bn254.G1Affine{*ps}, []bn254.G2Affine{*ts}) if err != nil { - panic(fmt.Sprintf("gurvy encountered error: %v", err)) + panic(fmt.Sprintf("gnark/bn254 encountered error: %v", err)) } - if clPair != coPair { - panic("pairing mismatch: cloudflare/consensys") + if !bytes.Equal(clPair, cPair.Marshal()) { + panic("pairing mismatch: cloudflare/gnark") } return 1 diff --git a/tests/fuzzers/rangeproof/rangeproof-fuzzer.go b/tests/fuzzers/rangeproof/rangeproof-fuzzer.go index b82a380723..09ee6bb9c7 100644 --- a/tests/fuzzers/rangeproof/rangeproof-fuzzer.go +++ b/tests/fuzzers/rangeproof/rangeproof-fuzzer.go @@ -170,30 +170,11 @@ func (f *fuzzer) fuzz() int { } ok = 1 //nodes, subtrie - nodes, subtrie, notary, hasMore, err := trie.VerifyRangeProof(tr.Hash(), first, last, keys, vals, proof) + hasMore, err := trie.VerifyRangeProof(tr.Hash(), first, last, keys, vals, proof) if err != nil { - if nodes != nil { - panic("err != nil && nodes != 
nil") - } - if subtrie != nil { - panic("err != nil && subtrie != nil") - } - if notary != nil { - panic("err != nil && notary != nil") - } if hasMore { panic("err != nil && hasMore == true") } - } else { - if nodes == nil { - panic("err == nil && nodes == nil") - } - if subtrie == nil { - panic("err == nil && subtrie == nil") - } - if notary == nil { - panic("err == nil && subtrie == nil") - } } } return ok diff --git a/tests/fuzzers/vflux/clientpool-fuzzer.go b/tests/fuzzers/vflux/clientpool-fuzzer.go index 41b8627348..0414c001ec 100644 --- a/tests/fuzzers/vflux/clientpool-fuzzer.go +++ b/tests/fuzzers/vflux/clientpool-fuzzer.go @@ -28,11 +28,22 @@ import ( "github.com/ethereum/go-ethereum/ethdb/memorydb" "github.com/ethereum/go-ethereum/les/vflux" vfs "github.com/ethereum/go-ethereum/les/vflux/server" + "github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/p2p/enode" "github.com/ethereum/go-ethereum/p2p/enr" "github.com/ethereum/go-ethereum/rlp" ) +var ( + debugMode = false + doLog = func(msg string, ctx ...interface{}) { + if !debugMode { + return + } + log.Info(msg, ctx...) + } +) + type fuzzer struct { peers [256]*clientPeer disconnectList []*clientPeer @@ -65,6 +76,7 @@ func (p *clientPeer) InactiveAllowance() time.Duration { } func (p *clientPeer) UpdateCapacity(newCap uint64, requested bool) { + origin, originTotal := p.capacity, p.fuzzer.activeCap p.fuzzer.activeCap -= p.capacity if p.capacity != 0 { p.fuzzer.activeCount-- @@ -74,9 +86,11 @@ func (p *clientPeer) UpdateCapacity(newCap uint64, requested bool) { if p.capacity != 0 { p.fuzzer.activeCount++ } + doLog("Update capacity", "peer", p.node.ID(), "origin", origin, "cap", newCap, "origintotal", originTotal, "total", p.fuzzer.activeCap, "requested", requested) } func (p *clientPeer) Disconnect() { + origin, originTotal := p.capacity, p.fuzzer.activeCap p.fuzzer.disconnectList = append(p.fuzzer.disconnectList, p) p.fuzzer.activeCap -= p.capacity if p.capacity != 0 { @@ -84,6 +98,7 @@ func (p *clientPeer) Disconnect() { } p.capacity = 0 p.balance = nil + doLog("Disconnect", "peer", p.node.ID(), "origin", origin, "origintotal", originTotal, "total", p.fuzzer.activeCap) } func newFuzzer(input []byte) *fuzzer { @@ -165,12 +180,16 @@ func (f *fuzzer) randomFactors() vfs.PriceFactors { } } -func (f *fuzzer) connectedBalanceOp(balance vfs.ConnectedBalance) { +func (f *fuzzer) connectedBalanceOp(balance vfs.ConnectedBalance, id enode.ID) { switch f.randomInt(3) { case 0: - balance.RequestServed(uint64(f.randomTokenAmount(false))) + cost := uint64(f.randomTokenAmount(false)) + balance.RequestServed(cost) + doLog("Serve request cost", "id", id, "amount", cost) case 1: - balance.SetPriceFactors(f.randomFactors(), f.randomFactors()) + posFactor, negFactor := f.randomFactors(), f.randomFactors() + balance.SetPriceFactors(posFactor, negFactor) + doLog("Set price factor", "pos", posFactor, "neg", negFactor) case 2: balance.GetBalance() balance.GetRawBalance() @@ -178,12 +197,16 @@ func (f *fuzzer) connectedBalanceOp(balance vfs.ConnectedBalance) { } } -func (f *fuzzer) atomicBalanceOp(balance vfs.AtomicBalanceOperator) { +func (f *fuzzer) atomicBalanceOp(balance vfs.AtomicBalanceOperator, id enode.ID) { switch f.randomInt(3) { case 0: - balance.AddBalance(f.randomTokenAmount(true)) + amount := f.randomTokenAmount(true) + balance.AddBalance(amount) + doLog("Add balance", "id", id, "amount", amount) case 1: - balance.SetBalance(uint64(f.randomTokenAmount(false)), uint64(f.randomTokenAmount(false))) + pos, neg := 
uint64(f.randomTokenAmount(false)), uint64(f.randomTokenAmount(false)) + balance.SetBalance(pos, neg) + doLog("Set balance", "id", id, "pos", pos, "neg", neg) case 2: balance.GetBalance() balance.GetRawBalance() @@ -212,33 +235,53 @@ func FuzzClientPool(input []byte) int { case 0: i := int(f.randomByte()) f.peers[i].balance = pool.Register(f.peers[i]) + doLog("Register peer", "id", f.peers[i].node.ID()) case 1: i := int(f.randomByte()) f.peers[i].Disconnect() + doLog("Disconnect peer", "id", f.peers[i].node.ID()) case 2: f.maxCount = uint64(f.randomByte()) f.maxCap = uint64(f.randomByte()) f.maxCap *= f.maxCap + + count, cap := pool.Limits() pool.SetLimits(f.maxCount, f.maxCap) + doLog("Set limits", "maxcount", f.maxCount, "maxcap", f.maxCap, "origincount", count, "oricap", cap) case 3: + bias := f.randomDelay() pool.SetConnectedBias(f.randomDelay()) + doLog("Set connection bias", "bias", bias) case 4: - pool.SetDefaultFactors(f.randomFactors(), f.randomFactors()) + pos, neg := f.randomFactors(), f.randomFactors() + pool.SetDefaultFactors(pos, neg) + doLog("Set default factors", "pos", pos, "neg", neg) case 5: - pool.SetExpirationTCs(uint64(f.randomInt(50000)), uint64(f.randomInt(50000))) + pos, neg := uint64(f.randomInt(50000)), uint64(f.randomInt(50000)) + pool.SetExpirationTCs(pos, neg) + doLog("Set expiration constants", "pos", pos, "neg", neg) case 6: - if _, err := pool.SetCapacity(f.peers[f.randomByte()].node, uint64(f.randomByte()), f.randomDelay(), f.randomBool()); err == vfs.ErrCantFindMaximum { + var ( + index = f.randomByte() + reqCap = uint64(f.randomByte()) + bias = f.randomDelay() + requested = f.randomBool() + ) + if _, err := pool.SetCapacity(f.peers[index].node, reqCap, bias, requested); err == vfs.ErrCantFindMaximum { panic(nil) } + doLog("Set capacity", "id", f.peers[index].node.ID(), "reqcap", reqCap, "bias", bias, "requested", requested) case 7: - if balance := f.peers[f.randomByte()].balance; balance != nil { - f.connectedBalanceOp(balance) + index := f.randomByte() + if balance := f.peers[index].balance; balance != nil { + f.connectedBalanceOp(balance, f.peers[index].node.ID()) } case 8: - pool.BalanceOperation(f.peers[f.randomByte()].node.ID(), f.peers[f.randomByte()].freeID, func(balance vfs.AtomicBalanceOperator) { + index := f.randomByte() + pool.BalanceOperation(f.peers[index].node.ID(), f.peers[index].freeID, func(balance vfs.AtomicBalanceOperator) { count := f.randomInt(4) for i := 0; i < count; i++ { - f.atomicBalanceOp(balance) + f.atomicBalanceOp(balance, f.peers[index].node.ID()) } }) case 9: @@ -272,13 +315,16 @@ func FuzzClientPool(input []byte) int { for _, peer := range f.disconnectList { pool.Unregister(peer) + doLog("Unregister peer", "id", peer.node.ID()) } f.disconnectList = nil if d := f.randomDelay(); d > 0 { clock.Run(d) } - //fmt.Println(f.activeCount, f.maxCount, f.activeCap, f.maxCap) - if activeCount, activeCap := pool.Active(); activeCount != f.activeCount || activeCap != f.activeCap { + doLog("Clientpool stats in fuzzer", "count", f.activeCap, "maxcount", f.maxCount, "cap", f.activeCap, "maxcap", f.maxCap) + activeCount, activeCap := pool.Active() + doLog("Clientpool stats in pool", "count", activeCount, "cap", activeCap) + if activeCount != f.activeCount || activeCap != f.activeCap { panic(nil) } if f.activeCount > f.maxCount || f.activeCap > f.maxCap { diff --git a/tests/fuzzers/vflux/debug/main.go b/tests/fuzzers/vflux/debug/main.go index de0b5d4124..1d4a5ff19c 100644 --- a/tests/fuzzers/vflux/debug/main.go +++ 
b/tests/fuzzers/vflux/debug/main.go @@ -21,10 +21,13 @@ import ( "io/ioutil" "os" + "github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/tests/fuzzers/vflux" ) func main() { + log.Root().SetHandler(log.LvlFilterHandler(log.LvlTrace, log.StreamHandler(os.Stderr, log.TerminalFormat(true)))) + if len(os.Args) != 2 { fmt.Fprintf(os.Stderr, "Usage: debug \n") fmt.Fprintf(os.Stderr, "Example\n") diff --git a/trie/committer.go b/trie/committer.go index 33fd9e9823..ce4065f5fd 100644 --- a/trie/committer.go +++ b/trie/committer.go @@ -220,13 +220,13 @@ func (c *committer) commitLoop(db *Database) { switch n := n.(type) { case *shortNode: if child, ok := n.Val.(valueNode); ok { - c.onleaf(nil, child, hash) + c.onleaf(nil, nil, child, hash) } case *fullNode: // For children in range [0, 15], it's impossible // to contain valuenode. Only check the 17th child. if n.Children[16] != nil { - c.onleaf(nil, n.Children[16].(valueNode), hash) + c.onleaf(nil, nil, n.Children[16].(valueNode), hash) } } } diff --git a/trie/database.go b/trie/database.go index 5217fb10c4..b18665770e 100644 --- a/trie/database.go +++ b/trie/database.go @@ -133,9 +133,6 @@ type rawShortNode struct { Val node } -func (n rawShortNode) canUnload(uint16, uint16) bool { - panic("this should never end up in a live trie") -} func (n rawShortNode) cache() (hashNode, bool) { panic("this should never end up in a live trie") } func (n rawShortNode) fstring(ind string) string { panic("this should never end up in a live trie") } diff --git a/trie/iterator.go b/trie/iterator.go index 76d437c403..406f216c22 100644 --- a/trie/iterator.go +++ b/trie/iterator.go @@ -22,6 +22,7 @@ import ( "errors" "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/rlp" ) @@ -102,6 +103,19 @@ type NodeIterator interface { // iterator is not positioned at a leaf. Callers must not retain references // to the value after calling Next. LeafProof() [][]byte + + // AddResolver sets an intermediate database to use for looking up trie nodes + // before reaching into the real persistent layer. + // + // This is not required for normal operation, rather is an optimization for + // cases where trie nodes can be recovered from some external mechanism without + // reading from disk. In those cases, this resolver allows short circuiting + // accesses and returning them from memory. + // + // Before adding a similar mechanism to any other place in Geth, consider + // making trie.Database an interface and wrapping at that level. It's a huge + // refactor, but it could be worth it if another occurrence arises. + AddResolver(ethdb.KeyValueStore) } // nodeIteratorState represents the iteration state at one particular node of the @@ -119,6 +133,8 @@ type nodeIterator struct { stack []*nodeIteratorState // Hierarchy of trie nodes persisting the iteration state path []byte // Path to the current node err error // Failure set in case of an internal error in the iterator + + resolver ethdb.KeyValueStore // Optional intermediate resolver above the disk layer } // errIteratorEnd is stored in nodeIterator.err when iteration is done. 
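A hedged sketch of the AddResolver hook documented above: trie nodes recovered out-of-band can be staged in a memory database, which the iterator consults before falling back to the trie's own database (keys and values here are illustrative):

package main

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethdb/memorydb"
	"github.com/ethereum/go-ethereum/trie"
)

func main() {
	tr, _ := trie.New(common.Hash{}, trie.NewDatabase(memorydb.New()))
	tr.Update([]byte("foo"), []byte("bar")) // illustrative entry

	// Nodes recovered externally (e.g. from a snap-sync response) can be
	// staged here to short-circuit disk lookups during iteration.
	staging := memorydb.New()

	it := tr.NodeIterator(nil)
	it.AddResolver(staging)
	for it.Next(true) {
		// Each hash node is resolved against 'staging' first, then the trie db.
	}
}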
@@ -143,6 +159,10 @@ func newNodeIterator(trie *Trie, start []byte) NodeIterator { return it } +func (it *nodeIterator) AddResolver(resolver ethdb.KeyValueStore) { + it.resolver = resolver +} + func (it *nodeIterator) Hash() common.Hash { if len(it.stack) == 0 { return common.Hash{} @@ -243,7 +263,7 @@ func (it *nodeIterator) seek(prefix []byte) error { key = key[:len(key)-1] // Move forward until we're just before the closest match to key. for { - state, parentIndex, path, err := it.peek(bytes.HasPrefix(key, it.path)) + state, parentIndex, path, err := it.peekSeek(key) if err == errIteratorEnd { return errIteratorEnd } else if err != nil { @@ -255,16 +275,21 @@ func (it *nodeIterator) seek(prefix []byte) error { } } +// init initializes the iterator. +func (it *nodeIterator) init() (*nodeIteratorState, error) { + root := it.trie.Hash() + state := &nodeIteratorState{node: it.trie.root, index: -1} + if root != emptyRoot { + state.hash = root + } + return state, state.resolve(it, nil) +} + // peek creates the next state of the iterator. func (it *nodeIterator) peek(descend bool) (*nodeIteratorState, *int, []byte, error) { + // Initialize the iterator if we've just started. if len(it.stack) == 0 { - // Initialize the iterator if we've just started. - root := it.trie.Hash() - state := &nodeIteratorState{node: it.trie.root, index: -1} - if root != emptyRoot { - state.hash = root - } - err := state.resolve(it.trie, nil) + state, err := it.init() return state, nil, nil, err } if !descend { @@ -281,7 +306,40 @@ func (it *nodeIterator) peek(descend bool) (*nodeIteratorState, *int, []byte, er } state, path, ok := it.nextChild(parent, ancestor) if ok { - if err := state.resolve(it.trie, path); err != nil { + if err := state.resolve(it, path); err != nil { return parent, &parent.index, path, err } return state, &parent.index, path, nil } // No more child nodes, move back up. it.pop() } return nil, nil, nil, errIteratorEnd } + +// peekSeek is like peek, but it also tries to skip resolving hashes by skipping +// over the siblings that do not lead towards the desired seek position. +func (it *nodeIterator) peekSeek(seekKey []byte) (*nodeIteratorState, *int, []byte, error) { + // Initialize the iterator if we've just started. 
+ if len(it.stack) == 0 { + state, err := it.init() + return state, nil, nil, err + } + if !bytes.HasPrefix(seekKey, it.path) { + // If we're skipping children, pop the current node first + it.pop() + } + + // Continue iteration to the next child + for len(it.stack) > 0 { + parent := it.stack[len(it.stack)-1] + ancestor := parent.hash + if (ancestor == common.Hash{}) { + ancestor = parent.parent + } + state, path, ok := it.nextChildAt(parent, ancestor, seekKey) + if ok { + if err := state.resolve(it, path); err != nil { return parent, &parent.index, path, err } return state, &parent.index, path, nil @@ -292,9 +350,21 @@ func (it *nodeIterator) peek(descend bool) (*nodeIteratorState, *int, []byte, er return nil, nil, nil, errIteratorEnd } -func (st *nodeIteratorState) resolve(tr *Trie, path []byte) error { +func (it *nodeIterator) resolveHash(hash hashNode, path []byte) (node, error) { + if it.resolver != nil { + if blob, err := it.resolver.Get(hash); err == nil && len(blob) > 0 { + if resolved, err := decodeNode(hash, blob); err == nil { + return resolved, nil + } + } + } + resolved, err := it.trie.resolveHash(hash, path) + return resolved, err +} + +func (st *nodeIteratorState) resolve(it *nodeIterator, path []byte) error { if hash, ok := st.node.(hashNode); ok { - resolved, err := tr.resolveHash(hash, path) + resolved, err := it.resolveHash(hash, path) if err != nil { return err } @@ -304,25 +374,38 @@ func (st *nodeIteratorState) resolve(tr *Trie, path []byte) error { return nil } +func findChild(n *fullNode, index int, path []byte, ancestor common.Hash) (node, *nodeIteratorState, []byte, int) { + var ( + child node + state *nodeIteratorState + childPath []byte + ) + for ; index < len(n.Children); index++ { + if n.Children[index] != nil { + child = n.Children[index] + hash, _ := child.cache() + state = &nodeIteratorState{ + hash: common.BytesToHash(hash), + node: child, + parent: ancestor, + index: -1, + pathlen: len(path), + } + childPath = append(childPath, path...) + childPath = append(childPath, byte(index)) + return child, state, childPath, index + } + } + return nil, nil, nil, 0 +} + func (it *nodeIterator) nextChild(parent *nodeIteratorState, ancestor common.Hash) (*nodeIteratorState, []byte, bool) { switch node := parent.node.(type) { case *fullNode: - // Full node, move to the first non-nil child. - for i := parent.index + 1; i < len(node.Children); i++ { - child := node.Children[i] - if child != nil { - hash, _ := child.cache() - state := &nodeIteratorState{ - hash: common.BytesToHash(hash), - node: child, - parent: ancestor, - index: -1, - pathlen: len(it.path), - } - path := append(it.path, byte(i)) - parent.index = i - 1 - return state, path, true - } + //Full node, move to the first non-nil child. + if child, state, path, index := findChild(node, parent.index+1, it.path, ancestor); child != nil { + parent.index = index - 1 + return state, path, true } case *shortNode: // Short node, return the pointer singleton child @@ -342,6 +425,52 @@ func (it *nodeIterator) nextChild(parent *nodeIteratorState, ancestor common.Has return parent, it.path, false } +// nextChildAt is similar to nextChild, except that it targets a child as close to the +// target key as possible, thus skipping siblings. 
+func (it *nodeIterator) nextChildAt(parent *nodeIteratorState, ancestor common.Hash, key []byte) (*nodeIteratorState, []byte, bool) { + switch n := parent.node.(type) { + case *fullNode: + // Full node, move to the first non-nil child before the desired key position + child, state, path, index := findChild(n, parent.index+1, it.path, ancestor) + if child == nil { + // No more children in this fullnode + return parent, it.path, false + } + // If the child we found is already past the seek position, just return it. + if bytes.Compare(path, key) >= 0 { + parent.index = index - 1 + return state, path, true + } + // The child is before the seek position. Try advancing + for { + nextChild, nextState, nextPath, nextIndex := findChild(n, index+1, it.path, ancestor) + // If we run out of children, or skipped past the target, return the + // previous one + if nextChild == nil || bytes.Compare(nextPath, key) >= 0 { + parent.index = index - 1 + return state, path, true + } + // We found a better child closer to the target + state, path, index = nextState, nextPath, nextIndex + } + case *shortNode: + // Short node, return the pointer singleton child + if parent.index < 0 { + hash, _ := n.Val.cache() + state := &nodeIteratorState{ + hash: common.BytesToHash(hash), + node: n.Val, + parent: ancestor, + index: -1, + pathlen: len(it.path), + } + path := append(it.path, n.Key...) + return state, path, true + } + } + return parent, it.path, false +} + func (it *nodeIterator) push(state *nodeIteratorState, parentIndex *int, path []byte) { it.path = path it.stack = append(it.stack, state) @@ -420,6 +549,10 @@ func (it *differenceIterator) Path() []byte { return it.b.Path() } +func (it *differenceIterator) AddResolver(resolver ethdb.KeyValueStore) { + panic("not implemented") +} + func (it *differenceIterator) Next(bool) bool { // Invariants: // - We always advance at least one element in b. @@ -527,6 +660,10 @@ func (it *unionIterator) Path() []byte { return (*it.items)[0].Path() } +func (it *unionIterator) AddResolver(resolver ethdb.KeyValueStore) { + panic("not implemented") +} + // Next returns the next node in the union of tries being iterated over. 
// // It does this by maintaining a heap of iterators, sorted by the iteration diff --git a/trie/iterator_test.go b/trie/iterator_test.go index 75a0a99e51..2518f7bac8 100644 --- a/trie/iterator_test.go +++ b/trie/iterator_test.go @@ -18,11 +18,14 @@ package trie import ( "bytes" + "encoding/binary" "fmt" "math/rand" "testing" "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/crypto" + "github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/ethdb/memorydb" ) @@ -440,3 +443,81 @@ func checkIteratorNoDups(t *testing.T, it NodeIterator, seen map[string]bool) in } return len(seen) } + +type loggingDb struct { + getCount uint64 + backend ethdb.KeyValueStore +} + +func (l *loggingDb) Has(key []byte) (bool, error) { + return l.backend.Has(key) +} + +func (l *loggingDb) Get(key []byte) ([]byte, error) { + l.getCount++ + return l.backend.Get(key) +} + +func (l *loggingDb) Put(key []byte, value []byte) error { + return l.backend.Put(key, value) +} + +func (l *loggingDb) Delete(key []byte) error { + return l.backend.Delete(key) +} + +func (l *loggingDb) NewBatch() ethdb.Batch { + return l.backend.NewBatch() +} + +func (l *loggingDb) NewIterator(prefix []byte, start []byte) ethdb.Iterator { + fmt.Printf("NewIterator\n") + return l.backend.NewIterator(prefix, start) +} +func (l *loggingDb) Stat(property string) (string, error) { + return l.backend.Stat(property) +} + +func (l *loggingDb) Compact(start []byte, limit []byte) error { + return l.backend.Compact(start, limit) +} + +func (l *loggingDb) Close() error { + return l.backend.Close() +} + +// makeLargeTestTrie creates a sample test trie +func makeLargeTestTrie() (*Database, *SecureTrie, *loggingDb) { + // Create an empty trie + logDb := &loggingDb{0, memorydb.New()} + triedb := NewDatabase(logDb) + trie, _ := NewSecure(common.Hash{}, triedb) + + // Fill it with some arbitrary data + for i := 0; i < 10000; i++ { + key := make([]byte, 32) + val := make([]byte, 32) + binary.BigEndian.PutUint64(key, uint64(i)) + binary.BigEndian.PutUint64(val, uint64(i)) + key = crypto.Keccak256(key) + val = crypto.Keccak256(val) + trie.Update(key, val) + } + trie.Commit(nil) + // Return the generated trie + return triedb, trie, logDb +} + +// Tests that the node iterator indeed walks over the entire database contents. +func TestNodeIteratorLargeTrie(t *testing.T) { + // Create some arbitrary test trie to iterate + db, trie, logDb := makeLargeTestTrie() + db.Cap(0) // flush everything + // Do a seek operation + trie.NodeIterator(common.FromHex("0x77667766776677766778855885885885")) + // master: 24 get operations + // this pr: 5 get operations + if have, want := logDb.getCount, uint64(5); have != want { + t.Fatalf("Too many lookups during seek, have %d want %d", have, want) + } +} diff --git a/trie/notary.go b/trie/notary.go deleted file mode 100644 index 5a64727aa7..0000000000 --- a/trie/notary.go +++ /dev/null @@ -1,57 +0,0 @@ -// Copyright 2020 The go-ethereum Authors -// This file is part of the go-ethereum library. -// -// The go-ethereum library is free software: you can redistribute it and/or modify -// it under the terms of the GNU Lesser General Public License as published by -// the Free Software Foundation, either version 3 of the License, or -// (at your option) any later version. -// -// The go-ethereum library is distributed in the hope that it will be useful, -// but WITHOUT ANY WARRANTY; without even the implied warranty of -// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the -// GNU Lesser General Public License for more details. -// -// You should have received a copy of the GNU Lesser General Public License -// along with the go-ethereum library. If not, see . - -package trie - -import ( - "github.com/ethereum/go-ethereum/ethdb" - "github.com/ethereum/go-ethereum/ethdb/memorydb" -) - -// KeyValueNotary tracks which keys have been accessed through a key-value reader -// with te scope of verifying if certain proof datasets are maliciously bloated. -type KeyValueNotary struct { - ethdb.KeyValueReader - reads map[string]struct{} -} - -// NewKeyValueNotary wraps a key-value database with an access notary to track -// which items have bene accessed. -func NewKeyValueNotary(db ethdb.KeyValueReader) *KeyValueNotary { - return &KeyValueNotary{ - KeyValueReader: db, - reads: make(map[string]struct{}), - } -} - -// Get retrieves an item from the underlying database, but also tracks it as an -// accessed slot for bloat checks. -func (k *KeyValueNotary) Get(key []byte) ([]byte, error) { - k.reads[string(key)] = struct{}{} - return k.KeyValueReader.Get(key) -} - -// Accessed returns s snapshot of the original key-value store containing only the -// data accessed through the notary. -func (k *KeyValueNotary) Accessed() ethdb.KeyValueStore { - db := memorydb.New() - for keystr := range k.reads { - key := []byte(keystr) - val, _ := k.KeyValueReader.Get(key) - db.Put(key, val) - } - return db -} diff --git a/trie/proof.go b/trie/proof.go index 61c35a8423..08a9e40422 100644 --- a/trie/proof.go +++ b/trie/proof.go @@ -464,123 +464,91 @@ func hasRightElement(node node, key []byte) bool { // // Except returning the error to indicate the proof is valid or not, the function will // also return a flag to indicate whether there exists more accounts/slots in the trie. -func VerifyRangeProof(rootHash common.Hash, firstKey []byte, lastKey []byte, keys [][]byte, values [][]byte, proof ethdb.KeyValueReader) (ethdb.KeyValueStore, *Trie, *KeyValueNotary, bool, error) { +// +// Note: This method does not verify that the proof is of minimal form. If the input +// proofs are 'bloated' with neighbour leaves or random data, aside from the 'useful' +// data, then the proof will still be accepted. +func VerifyRangeProof(rootHash common.Hash, firstKey []byte, lastKey []byte, keys [][]byte, values [][]byte, proof ethdb.KeyValueReader) (bool, error) { if len(keys) != len(values) { - return nil, nil, nil, false, fmt.Errorf("inconsistent proof data, keys: %d, values: %d", len(keys), len(values)) + return false, fmt.Errorf("inconsistent proof data, keys: %d, values: %d", len(keys), len(values)) } // Ensure the received batch is monotonic increasing. for i := 0; i < len(keys)-1; i++ { if bytes.Compare(keys[i], keys[i+1]) >= 0 { - return nil, nil, nil, false, errors.New("range is not monotonically increasing") + return false, errors.New("range is not monotonically increasing") } } - // Create a key-value notary to track which items from the given proof the - // range prover actually needed to verify the data - notary := NewKeyValueNotary(proof) - // Special case, there is no edge proof at all. The given range is expected // to be the whole leaf-set in the trie. 
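// Editor's sketch for the slimmed-down VerifyRangeProof above: callers now get
// back only the 'has more elements' flag plus an error. Self-contained and
// hedged; the ten fixed-size keys and their values are illustrative, and error
// handling is elided for brevity.

package main

import (
	"bytes"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethdb/memorydb"
	"github.com/ethereum/go-ethereum/trie"
)

func main() {
	tr, _ := trie.New(common.Hash{}, trie.NewDatabase(memorydb.New()))
	var keys, vals [][]byte
	for i := byte(0); i < 10; i++ {
		k := bytes.Repeat([]byte{i}, 32) // fixed-size, monotonically increasing keys
		tr.Update(k, []byte{i})
		keys = append(keys, k)
		vals = append(vals, []byte{i})
	}

	// Prove only the edges of the sub-range keys[2:7].
	proof := memorydb.New()
	tr.Prove(keys[2], 0, proof)
	tr.Prove(keys[6], 0, proof)

	hasMore, err := trie.VerifyRangeProof(tr.Hash(), keys[2], keys[6], keys[2:7], vals[2:7], proof)
	fmt.Println(hasMore, err) // true <nil>: entries exist to the right of keys[6]
}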
if proof == nil { - var ( - diskdb = memorydb.New() - triedb = NewDatabase(diskdb) - ) - tr, err := New(common.Hash{}, triedb) - if err != nil { - return nil, nil, nil, false, err - } + tr := NewStackTrie(nil) for index, key := range keys { tr.TryUpdate(key, values[index]) } - if tr.Hash() != rootHash { - return nil, nil, nil, false, fmt.Errorf("invalid proof, want hash %x, got %x", rootHash, tr.Hash()) + if have, want := tr.Hash(), rootHash; have != want { + return false, fmt.Errorf("invalid proof, want hash %x, got %x", want, have) } - // Proof seems valid, serialize all the nodes into the database - if _, err := tr.Commit(nil); err != nil { - return nil, nil, nil, false, err - } - if err := triedb.Commit(rootHash, false, nil); err != nil { - return nil, nil, nil, false, err - } - return diskdb, tr, notary, false, nil // No more elements + return false, nil // No more elements } // Special case, there is a provided edge proof but zero key/value // pairs, ensure there are no more accounts / slots in the trie. if len(keys) == 0 { - root, val, err := proofToPath(rootHash, nil, firstKey, notary, true) + root, val, err := proofToPath(rootHash, nil, firstKey, proof, true) if err != nil { - return nil, nil, nil, false, err + return false, err } if val != nil || hasRightElement(root, firstKey) { - return nil, nil, nil, false, errors.New("more entries available") + return false, errors.New("more entries available") } - // Since the entire proof is a single path, we can construct a trie and a - // node database directly out of the inputs, no need to generate them - diskdb := notary.Accessed() - tr := &Trie{ - db: NewDatabase(diskdb), - root: root, - } - return diskdb, tr, notary, hasRightElement(root, firstKey), nil + return hasRightElement(root, firstKey), nil } // Special case, there is only one element and two edge keys are same. // In this case, we can't construct two edge paths. So handle it here. if len(keys) == 1 && bytes.Equal(firstKey, lastKey) { - root, val, err := proofToPath(rootHash, nil, firstKey, notary, false) + root, val, err := proofToPath(rootHash, nil, firstKey, proof, false) if err != nil { - return nil, nil, nil, false, err + return false, err } if !bytes.Equal(firstKey, keys[0]) { - return nil, nil, nil, false, errors.New("correct proof but invalid key") + return false, errors.New("correct proof but invalid key") } if !bytes.Equal(val, values[0]) { - return nil, nil, nil, false, errors.New("correct proof but invalid data") + return false, errors.New("correct proof but invalid data") } - // Since the entire proof is a single path, we can construct a trie and a - // node database directly out of the inputs, no need to generate them - diskdb := notary.Accessed() - tr := &Trie{ - db: NewDatabase(diskdb), - root: root, - } - return diskdb, tr, notary, hasRightElement(root, firstKey), nil + return hasRightElement(root, firstKey), nil } // Ok, in all other cases, we require two edge paths available. // First check the validity of edge keys. if bytes.Compare(firstKey, lastKey) >= 0 { - return nil, nil, nil, false, errors.New("invalid edge keys") + return false, errors.New("invalid edge keys") } // todo(rjl493456442) different length edge keys should be supported if len(firstKey) != len(lastKey) { - return nil, nil, nil, false, errors.New("inconsistent edge keys") + return false, errors.New("inconsistent edge keys") } // Convert the edge proofs to edge trie paths. Then we can // have the same tree architecture with the original one. 
// For the first edge proof, non-existent proof is allowed. - root, _, err := proofToPath(rootHash, nil, firstKey, notary, true) + root, _, err := proofToPath(rootHash, nil, firstKey, proof, true) if err != nil { - return nil, nil, nil, false, err + return false, err } // Pass the root node here, the second path will be merged // with the first one. For the last edge proof, non-existent // proof is also allowed. - root, _, err = proofToPath(rootHash, root, lastKey, notary, true) + root, _, err = proofToPath(rootHash, root, lastKey, proof, true) if err != nil { - return nil, nil, nil, false, err + return false, err } // Remove all internal references. All the removed parts should // be re-filled(or re-constructed) by the given leaves range. empty, err := unsetInternal(root, firstKey, lastKey) if err != nil { - return nil, nil, nil, false, err + return false, err } // Rebuild the trie with the leaf stream, the shape of trie // should be same with the original one. - var ( - diskdb = memorydb.New() - triedb = NewDatabase(diskdb) - ) - tr := &Trie{root: root, db: triedb} + tr := &Trie{root: root, db: NewDatabase(memorydb.New())} if empty { tr.root = nil } @@ -588,16 +556,9 @@ func VerifyRangeProof(rootHash common.Hash, firstKey []byte, lastKey []byte, key tr.TryUpdate(key, values[index]) } if tr.Hash() != rootHash { - return nil, nil, nil, false, fmt.Errorf("invalid proof, want hash %x, got %x", rootHash, tr.Hash()) - } - // Proof seems valid, serialize all the nodes into the database - if _, err := tr.Commit(nil); err != nil { - return nil, nil, nil, false, err - } - if err := triedb.Commit(rootHash, false, nil); err != nil { - return nil, nil, nil, false, err + return false, fmt.Errorf("invalid proof, want hash %x, got %x", rootHash, tr.Hash()) } - return diskdb, tr, notary, hasRightElement(root, keys[len(keys)-1]), nil + return hasRightElement(root, keys[len(keys)-1]), nil } // get returns the child of the given node. 
Return nil if the diff --git a/trie/proof_test.go b/trie/proof_test.go index 304affa9f2..a35b7144c0 100644 --- a/trie/proof_test.go +++ b/trie/proof_test.go @@ -182,7 +182,7 @@ func TestRangeProof(t *testing.T) { keys = append(keys, entries[i].k) vals = append(vals, entries[i].v) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, vals, proof) + _, err := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, vals, proof) if err != nil { t.Fatalf("Case %d(%d->%d) expect no error, got %v", i, start, end-1, err) } @@ -233,7 +233,7 @@ func TestRangeProofWithNonExistentProof(t *testing.T) { keys = append(keys, entries[i].k) vals = append(vals, entries[i].v) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), first, last, keys, vals, proof) + _, err := VerifyRangeProof(trie.Hash(), first, last, keys, vals, proof) if err != nil { t.Fatalf("Case %d(%d->%d) expect no error, got %v", i, start, end-1, err) } @@ -254,7 +254,7 @@ func TestRangeProofWithNonExistentProof(t *testing.T) { k = append(k, entries[i].k) v = append(v, entries[i].v) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), first, last, k, v, proof) + _, err := VerifyRangeProof(trie.Hash(), first, last, k, v, proof) if err != nil { t.Fatal("Failed to verify whole rang with non-existent edges") } @@ -289,7 +289,7 @@ func TestRangeProofWithInvalidNonExistentProof(t *testing.T) { k = append(k, entries[i].k) v = append(v, entries[i].v) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), first, k[len(k)-1], k, v, proof) + _, err := VerifyRangeProof(trie.Hash(), first, k[len(k)-1], k, v, proof) if err == nil { t.Fatalf("Expected to detect the error, got nil") } @@ -311,7 +311,7 @@ func TestRangeProofWithInvalidNonExistentProof(t *testing.T) { k = append(k, entries[i].k) v = append(v, entries[i].v) } - _, _, _, _, err = VerifyRangeProof(trie.Hash(), k[0], last, k, v, proof) + _, err = VerifyRangeProof(trie.Hash(), k[0], last, k, v, proof) if err == nil { t.Fatalf("Expected to detect the error, got nil") } @@ -335,7 +335,7 @@ func TestOneElementRangeProof(t *testing.T) { if err := trie.Prove(entries[start].k, 0, proof); err != nil { t.Fatalf("Failed to prove the first node %v", err) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), entries[start].k, entries[start].k, [][]byte{entries[start].k}, [][]byte{entries[start].v}, proof) + _, err := VerifyRangeProof(trie.Hash(), entries[start].k, entries[start].k, [][]byte{entries[start].k}, [][]byte{entries[start].v}, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -350,7 +350,7 @@ func TestOneElementRangeProof(t *testing.T) { if err := trie.Prove(entries[start].k, 0, proof); err != nil { t.Fatalf("Failed to prove the last node %v", err) } - _, _, _, _, err = VerifyRangeProof(trie.Hash(), first, entries[start].k, [][]byte{entries[start].k}, [][]byte{entries[start].v}, proof) + _, err = VerifyRangeProof(trie.Hash(), first, entries[start].k, [][]byte{entries[start].k}, [][]byte{entries[start].v}, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -365,7 +365,7 @@ func TestOneElementRangeProof(t *testing.T) { if err := trie.Prove(last, 0, proof); err != nil { t.Fatalf("Failed to prove the last node %v", err) } - _, _, _, _, err = VerifyRangeProof(trie.Hash(), entries[start].k, last, [][]byte{entries[start].k}, [][]byte{entries[start].v}, proof) + _, err = VerifyRangeProof(trie.Hash(), entries[start].k, last, [][]byte{entries[start].k}, [][]byte{entries[start].v}, proof) if err != nil { t.Fatalf("Expected no 
error, got %v", err) } @@ -380,7 +380,7 @@ func TestOneElementRangeProof(t *testing.T) { if err := trie.Prove(last, 0, proof); err != nil { t.Fatalf("Failed to prove the last node %v", err) } - _, _, _, _, err = VerifyRangeProof(trie.Hash(), first, last, [][]byte{entries[start].k}, [][]byte{entries[start].v}, proof) + _, err = VerifyRangeProof(trie.Hash(), first, last, [][]byte{entries[start].k}, [][]byte{entries[start].v}, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -399,7 +399,7 @@ func TestOneElementRangeProof(t *testing.T) { if err := tinyTrie.Prove(last, 0, proof); err != nil { t.Fatalf("Failed to prove the last node %v", err) } - _, _, _, _, err = VerifyRangeProof(tinyTrie.Hash(), first, last, [][]byte{entry.k}, [][]byte{entry.v}, proof) + _, err = VerifyRangeProof(tinyTrie.Hash(), first, last, [][]byte{entry.k}, [][]byte{entry.v}, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -421,7 +421,7 @@ func TestAllElementsProof(t *testing.T) { k = append(k, entries[i].k) v = append(v, entries[i].v) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), nil, nil, k, v, nil) + _, err := VerifyRangeProof(trie.Hash(), nil, nil, k, v, nil) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -434,7 +434,7 @@ func TestAllElementsProof(t *testing.T) { if err := trie.Prove(entries[len(entries)-1].k, 0, proof); err != nil { t.Fatalf("Failed to prove the last node %v", err) } - _, _, _, _, err = VerifyRangeProof(trie.Hash(), k[0], k[len(k)-1], k, v, proof) + _, err = VerifyRangeProof(trie.Hash(), k[0], k[len(k)-1], k, v, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -449,7 +449,7 @@ func TestAllElementsProof(t *testing.T) { if err := trie.Prove(last, 0, proof); err != nil { t.Fatalf("Failed to prove the last node %v", err) } - _, _, _, _, err = VerifyRangeProof(trie.Hash(), first, last, k, v, proof) + _, err = VerifyRangeProof(trie.Hash(), first, last, k, v, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -482,7 +482,7 @@ func TestSingleSideRangeProof(t *testing.T) { k = append(k, entries[i].k) v = append(v, entries[i].v) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), common.Hash{}.Bytes(), k[len(k)-1], k, v, proof) + _, err := VerifyRangeProof(trie.Hash(), common.Hash{}.Bytes(), k[len(k)-1], k, v, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -518,7 +518,7 @@ func TestReverseSingleSideRangeProof(t *testing.T) { k = append(k, entries[i].k) v = append(v, entries[i].v) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), k[0], last.Bytes(), k, v, proof) + _, err := VerifyRangeProof(trie.Hash(), k[0], last.Bytes(), k, v, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -590,7 +590,7 @@ func TestBadRangeProof(t *testing.T) { index = mrand.Intn(end - start) vals[index] = nil } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), first, last, keys, vals, proof) + _, err := VerifyRangeProof(trie.Hash(), first, last, keys, vals, proof) if err == nil { t.Fatalf("%d Case %d index %d range: (%d->%d) expect error, got nil", i, testcase, index, start, end-1) } @@ -624,7 +624,7 @@ func TestGappedRangeProof(t *testing.T) { keys = append(keys, entries[i].k) vals = append(vals, entries[i].v) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, vals, proof) + _, err := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, vals, proof) if err == nil { t.Fatal("expect error, got nil") } @@ -651,7 +651,7 @@ func 
TestSameSideProofs(t *testing.T) { if err := trie.Prove(last, 0, proof); err != nil { t.Fatalf("Failed to prove the last node %v", err) } - _, _, _, _, err := VerifyRangeProof(trie.Hash(), first, last, [][]byte{entries[pos].k}, [][]byte{entries[pos].v}, proof) + _, err := VerifyRangeProof(trie.Hash(), first, last, [][]byte{entries[pos].k}, [][]byte{entries[pos].v}, proof) if err == nil { t.Fatalf("Expected error, got nil") } @@ -667,7 +667,7 @@ func TestSameSideProofs(t *testing.T) { if err := trie.Prove(last, 0, proof); err != nil { t.Fatalf("Failed to prove the last node %v", err) } - _, _, _, _, err = VerifyRangeProof(trie.Hash(), first, last, [][]byte{entries[pos].k}, [][]byte{entries[pos].v}, proof) + _, err = VerifyRangeProof(trie.Hash(), first, last, [][]byte{entries[pos].k}, [][]byte{entries[pos].v}, proof) if err == nil { t.Fatalf("Expected error, got nil") } @@ -735,7 +735,7 @@ func TestHasRightElement(t *testing.T) { k = append(k, entries[i].k) v = append(v, entries[i].v) } - _, _, _, hasMore, err := VerifyRangeProof(trie.Hash(), firstKey, lastKey, k, v, proof) + hasMore, err := VerifyRangeProof(trie.Hash(), firstKey, lastKey, k, v, proof) if err != nil { t.Fatalf("Expected no error, got %v", err) } @@ -768,31 +768,19 @@ func TestEmptyRangeProof(t *testing.T) { if err := trie.Prove(first, 0, proof); err != nil { t.Fatalf("Failed to prove the first node %v", err) } - db, tr, not, _, err := VerifyRangeProof(trie.Hash(), first, nil, nil, nil, proof) + _, err := VerifyRangeProof(trie.Hash(), first, nil, nil, nil, proof) if c.err && err == nil { t.Fatalf("Expected error, got nil") } if !c.err && err != nil { t.Fatalf("Expected no error, got %v", err) } - // If no error was returned, ensure the returned trie and database contains - // the entire proof, since there's no value - if !c.err { - if err := tr.Prove(first, 0, memorydb.New()); err != nil { - t.Errorf("returned trie doesn't contain original proof: %v", err) - } - if memdb := db.(*memorydb.Database); memdb.Len() != proof.Len() { - t.Errorf("database entry count mismatch: have %d, want %d", memdb.Len(), proof.Len()) - } - if not == nil { - t.Errorf("missing notary") - } - } } } // TestBloatedProof tests a malicious proof, where the proof is more or less the -// whole trie. +// whole trie. Previously we didn't accept such packets, but the new APIs do, so +// let's leave this test as a bit weird, but present. 
 func TestBloatedProof(t *testing.T) {
 	// Use a small trie
 	trie, kvs := nonRandomTrie(100)
@@ -805,6 +793,8 @@ func TestBloatedProof(t *testing.T) {
 	var vals [][]byte
 
 	proof := memorydb.New()
+	// In the 'malicious' case, we add proofs for every single item
+	// (but only one key/value pair is used as a leaf)
 	for i, entry := range entries {
 		trie.Prove(entry.k, 0, proof)
 		if i == 50 {
@@ -812,13 +802,14 @@ func TestBloatedProof(t *testing.T) {
 			vals = append(vals, entry.v)
 		}
 	}
+	// For reference, we use the same function, but _only_ prove the first
+	// and last element
 	want := memorydb.New()
 	trie.Prove(keys[0], 0, want)
 	trie.Prove(keys[len(keys)-1], 0, want)
 
-	_, _, notary, _, _ := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, vals, proof)
-	if used := notary.Accessed().(*memorydb.Database); used.Len() != want.Len() {
-		t.Fatalf("notary proof size mismatch: have %d, want %d", used.Len(), want.Len())
+	if _, err := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, vals, proof); err != nil {
+		t.Fatalf("expected bloated proof to succeed, got %v", err)
 	}
 }
 
@@ -922,13 +913,40 @@ func benchmarkVerifyRangeProof(b *testing.B, size int) {
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
-		_, _, _, _, err := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, values, proof)
+		_, err := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, values, proof)
 		if err != nil {
 			b.Fatalf("Case %d(%d->%d) expect no error, got %v", i, start, end-1, err)
 		}
 	}
 }
 
+func BenchmarkVerifyRangeNoProof10(b *testing.B)   { benchmarkVerifyRangeNoProof(b, 100) }
+func BenchmarkVerifyRangeNoProof500(b *testing.B)  { benchmarkVerifyRangeNoProof(b, 500) }
+func BenchmarkVerifyRangeNoProof1000(b *testing.B) { benchmarkVerifyRangeNoProof(b, 1000) }
+
+func benchmarkVerifyRangeNoProof(b *testing.B, size int) {
+	trie, vals := randomTrie(size)
+	var entries entrySlice
+	for _, kv := range vals {
+		entries = append(entries, kv)
+	}
+	sort.Sort(entries)
+
+	var keys [][]byte
+	var values [][]byte
+	for _, entry := range entries {
+		keys = append(keys, entry.k)
+		values = append(values, entry.v)
+	}
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		_, err := VerifyRangeProof(trie.Hash(), keys[0], keys[len(keys)-1], keys, values, nil)
+		if err != nil {
+			b.Fatalf("Expected no error, got %v", err)
+		}
+	}
+}
+
 func randomTrie(n int) (*Trie, map[string]*kv) {
 	trie := new(Trie)
 	vals := make(map[string]*kv)
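Reviewer note: the change running through all of the tests above is that VerifyRangeProof now returns only (hasMore, err) instead of the old five-tuple (db, trie, notary, hasMore, err). A minimal sketch of a call site under the new signature; verifyChunk and its parameter names are illustrative, and the proof parameter type is inferred from the test call sites rather than stated by this patch:

    package example // illustrative sketch, not part of this patch

    import (
        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/ethdb"
        "github.com/ethereum/go-ethereum/trie"
    )

    // verifyChunk checks one chunk of a key range against a state root.
    func verifyChunk(root common.Hash, first, last []byte, keys, vals [][]byte, proof ethdb.KeyValueReader) (bool, error) {
        // The former five return values have collapsed into (hasMore, err).
        hasMore, err := trie.VerifyRangeProof(root, first, last, keys, vals, proof)
        if err != nil {
            return false, err // range is not provably part of the trie at root
        }
        return hasMore, nil // true: more entries exist to the right of 'last'
    }
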
diff --git a/trie/stacktrie.go b/trie/stacktrie.go
index a198eb204b..f9ff10b62d 100644
--- a/trie/stacktrie.go
+++ b/trie/stacktrie.go
@@ -17,8 +17,12 @@
 package trie
 
 import (
+	"bufio"
+	"bytes"
+	"encoding/gob"
 	"errors"
 	"fmt"
+	"io"
 	"sync"
 
 	"github.com/ethereum/go-ethereum/common"
@@ -66,6 +70,96 @@ func NewStackTrie(db ethdb.KeyValueWriter) *StackTrie {
 	}
 }
 
+// NewFromBinary initialises a serialized stacktrie with the given db.
+func NewFromBinary(data []byte, db ethdb.KeyValueWriter) (*StackTrie, error) {
+	var st StackTrie
+	if err := st.UnmarshalBinary(data); err != nil {
+		return nil, err
+	}
+	// If a database is used, we need to recursively add it to every child
+	if db != nil {
+		st.setDb(db)
+	}
+	return &st, nil
+}
+
+// MarshalBinary implements encoding.BinaryMarshaler
+func (st *StackTrie) MarshalBinary() (data []byte, err error) {
+	var (
+		b bytes.Buffer
+		w = bufio.NewWriter(&b)
+	)
+	if err := gob.NewEncoder(w).Encode(struct {
+		Nodetype  uint8
+		Val       []byte
+		Key       []byte
+		KeyOffset uint8
+	}{
+		st.nodeType,
+		st.val,
+		st.key,
+		uint8(st.keyOffset),
+	}); err != nil {
+		return nil, err
+	}
+	for _, child := range st.children {
+		if child == nil {
+			w.WriteByte(0)
+			continue
+		}
+		w.WriteByte(1)
+		if childData, err := child.MarshalBinary(); err != nil {
+			return nil, err
+		} else {
+			w.Write(childData)
+		}
+	}
+	w.Flush()
+	return b.Bytes(), nil
+}
+
+// UnmarshalBinary implements encoding.BinaryUnmarshaler
+func (st *StackTrie) UnmarshalBinary(data []byte) error {
+	r := bytes.NewReader(data)
+	return st.unmarshalBinary(r)
+}
+
+func (st *StackTrie) unmarshalBinary(r io.Reader) error {
+	var dec struct {
+		Nodetype  uint8
+		Val       []byte
+		Key       []byte
+		KeyOffset uint8
+	}
+	gob.NewDecoder(r).Decode(&dec)
+	st.nodeType = dec.Nodetype
+	st.val = dec.Val
+	st.key = dec.Key
+	st.keyOffset = int(dec.KeyOffset)
+
+	var hasChild = make([]byte, 1)
+	for i := range st.children {
+		if _, err := r.Read(hasChild); err != nil {
+			return err
+		} else if hasChild[0] == 0 {
+			continue
+		}
+		var child StackTrie
+		child.unmarshalBinary(r)
+		st.children[i] = &child
+	}
+	return nil
+}
+
+func (st *StackTrie) setDb(db ethdb.KeyValueWriter) {
+	st.db = db
+	for _, child := range st.children {
+		if child != nil {
+			child.setDb(db)
+		}
+	}
+}
+
 func newLeaf(ko int, key, val []byte, db ethdb.KeyValueWriter) *StackTrie {
 	st := stackTrieFromPool(db)
 	st.nodeType = leafNode
@@ -346,8 +440,7 @@ func (st *StackTrie) hash() {
 			panic(err)
 		}
 	case emptyNode:
-		st.val = st.val[:0]
-		st.val = append(st.val, emptyRoot[:]...)
+		st.val = emptyRoot.Bytes()
 		st.key = st.key[:0]
 		st.nodeType = hashedNode
 		return
@@ -357,17 +450,12 @@ func (st *StackTrie) hash() {
 	st.key = st.key[:0]
 	st.nodeType = hashedNode
 	if len(h.tmp) < 32 {
-		st.val = st.val[:0]
-		st.val = append(st.val, h.tmp...)
+		st.val = common.CopyBytes(h.tmp)
 		return
 	}
-	// Going to write the hash to the 'val'. Need to ensure it's properly sized first
-	// Typically, 'branchNode's will have no 'val', and require this allocation
-	if required := 32 - len(st.val); required > 0 {
-		buf := make([]byte, required)
-		st.val = append(st.val, buf...)
-	}
-	st.val = st.val[:32]
+	// Write the hash to the 'val'. We allocate a new val here so as not to
+	// mutate input values.
+	st.val = make([]byte, 32)
 	h.sha.Reset()
 	h.sha.Write(h.tmp)
 	h.sha.Read(st.val)
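The MarshalBinary/NewFromBinary pair above makes it possible to suspend a partially-built stacktrie and resume it later, e.g. across a sync restart. A minimal round-trip sketch using only the functions introduced above; suspendAndResume is a hypothetical helper name:

    package example // illustrative sketch, not part of this patch

    import "github.com/ethereum/go-ethereum/trie"

    // suspendAndResume round-trips a stacktrie through its binary encoding.
    func suspendAndResume(st *trie.StackTrie) (*trie.StackTrie, error) {
        // MarshalBinary gob-encodes this node, then each child recursively.
        blob, err := st.MarshalBinary()
        if err != nil {
            return nil, err
        }
        // Passing a non-nil db here would re-attach it to every child on load.
        return trie.NewFromBinary(blob, nil)
    }

TestStacktrieSerialization below exercises exactly this round-trip between every insert.
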
diff --git a/trie/stacktrie_test.go b/trie/stacktrie_test.go
index 29706f2e9d..ccdf389d52 100644
--- a/trie/stacktrie_test.go
+++ b/trie/stacktrie_test.go
@@ -1,9 +1,12 @@
 package trie
 
 import (
+	"bytes"
+	"math/big"
 	"testing"
 
 	"github.com/ethereum/go-ethereum/common"
+	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/ethdb/memorydb"
 )
 
@@ -119,3 +122,96 @@ func TestUpdateVariableKeys(t *testing.T) {
 		t.Fatalf("error %x != %x", st.Hash(), nt.Hash())
 	}
 }
+
+// TestStacktrieNotModifyValues checks that inserting blobs of data into the
+// stacktrie does not mutate the blobs
+func TestStacktrieNotModifyValues(t *testing.T) {
+	st := NewStackTrie(nil)
+	{ // Test a very small trie
+		// Give it the value as a slice with large backing alloc,
+		// so if the stacktrie tries to append, it won't have to realloc
+		value := make([]byte, 1, 100)
+		value[0] = 0x2
+		want := common.CopyBytes(value)
+		st.TryUpdate([]byte{0x01}, value)
+		st.Hash()
+		if have := value; !bytes.Equal(have, want) {
+			t.Fatalf("tiny trie: have %#x want %#x", have, want)
+		}
+		st = NewStackTrie(nil)
+	}
+	// Test with a larger trie
+	keyB := big.NewInt(1)
+	keyDelta := big.NewInt(1)
+	var vals [][]byte
+	getValue := func(i int) []byte {
+		if i%2 == 0 { // large
+			return crypto.Keccak256(big.NewInt(int64(i)).Bytes())
+		} else { // small
+			return big.NewInt(int64(i)).Bytes()
+		}
+	}
+	for i := 0; i < 1000; i++ {
+		key := common.BigToHash(keyB)
+		value := getValue(i)
+		st.TryUpdate(key.Bytes(), value)
+		vals = append(vals, value)
+		keyB = keyB.Add(keyB, keyDelta)
+		keyDelta.Add(keyDelta, common.Big1)
+	}
+	st.Hash()
+	for i := 0; i < 1000; i++ {
+		want := getValue(i)
+		have := vals[i]
+		if !bytes.Equal(have, want) {
+			t.Fatalf("item %d, have %#x want %#x", i, have, want)
+		}
+	}
+}
+
+// TestStacktrieSerialization tests that the stacktrie works well if we
+// serialize/deserialize it a lot
+func TestStacktrieSerialization(t *testing.T) {
+	var (
+		st       = NewStackTrie(nil)
+		nt, _    = New(common.Hash{}, NewDatabase(memorydb.New()))
+		keyB     = big.NewInt(1)
+		keyDelta = big.NewInt(1)
+		vals     [][]byte
+		keys     [][]byte
+	)
+	getValue := func(i int) []byte {
+		if i%2 == 0 { // large
+			return crypto.Keccak256(big.NewInt(int64(i)).Bytes())
+		} else { // small
+			return big.NewInt(int64(i)).Bytes()
+		}
+	}
+	for i := 0; i < 10; i++ {
+		vals = append(vals, getValue(i))
+		keys = append(keys, common.BigToHash(keyB).Bytes())
+		keyB = keyB.Add(keyB, keyDelta)
+		keyDelta.Add(keyDelta, common.Big1)
+	}
+	for i, k := range keys {
+		nt.TryUpdate(k, common.CopyBytes(vals[i]))
+	}
+
+	for i, k := range keys {
+		blob, err := st.MarshalBinary()
+		if err != nil {
+			t.Fatal(err)
+		}
+		newSt, err := NewFromBinary(blob, nil)
+		if err != nil {
+			t.Fatal(err)
+		}
+		st = newSt
+		st.TryUpdate(k, common.CopyBytes(vals[i]))
+	}
+	if have, want := st.Hash(), nt.Hash(); have != want {
+		t.Fatalf("have %#x want %#x", have, want)
+	}
+}
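TestStacktrieNotModifyValues above pins down the invariant behind the hash() change in stacktrie.go: value buffers handed to TryUpdate stay caller-owned. A condensed sketch of the property; valuesStayIntact is a hypothetical name:

    package example // illustrative sketch, not part of this patch

    import (
        "bytes"

        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/trie"
    )

    // valuesStayIntact returns true iff the stacktrie left the caller's
    // buffer untouched. The spare capacity is what used to invite an
    // in-place append inside hash().
    func valuesStayIntact() bool {
        value := make([]byte, 1, 100) // len 1, cap 100
        value[0] = 0x2
        want := common.CopyBytes(value)

        st := trie.NewStackTrie(nil)
        st.TryUpdate([]byte{0x01}, value)
        st.Hash()

        return bytes.Equal(value, want) // true with the fix above
    }
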
diff --git a/trie/sync.go b/trie/sync.go
index dd8279b665..3a6076ff8f 100644
--- a/trie/sync.go
+++ b/trie/sync.go
@@ -398,7 +398,14 @@ func (s *Sync) children(req *request, object node) ([]*request, error) {
 		// Notify any external watcher of a new key/value node
 		if req.callback != nil {
 			if node, ok := (child.node).(valueNode); ok {
-				if err := req.callback(child.path, node, req.hash); err != nil {
+				var paths [][]byte
+				if len(child.path) == 2*common.HashLength {
+					paths = append(paths, hexToKeybytes(child.path))
+				} else if len(child.path) == 4*common.HashLength {
+					paths = append(paths, hexToKeybytes(child.path[:2*common.HashLength]))
+					paths = append(paths, hexToKeybytes(child.path[2*common.HashLength:]))
+				}
+				if err := req.callback(paths, child.path, node, req.hash); err != nil {
 					return nil, err
 				}
 			}
diff --git a/trie/trie.go b/trie/trie.go
index 87b72ecf17..7ed235fa8a 100644
--- a/trie/trie.go
+++ b/trie/trie.go
@@ -37,9 +37,20 @@ var (
 )
 
 // LeafCallback is a callback type invoked when a trie operation reaches a leaf
-// node. It's used by state sync and commit to allow handling external references
-// between account and storage tries.
-type LeafCallback func(path []byte, leaf []byte, parent common.Hash) error
+// node.
+//
+// The paths argument is a path tuple identifying a particular trie node, either
+// in a single trie (account) or in a layered trie (account -> storage). Each path
+// in the tuple is in the raw format (32 bytes).
+//
+// The hexpath is a composite hexary path identifying the trie node. All the key
+// bytes are converted to hexary nibbles and combined with the parent path if
+// the trie node is in a layered trie.
+//
+// It's used by state sync and commit to allow handling external references
+// between account and storage tries. It's also used in state healing to extract
+// the raw states (leaf nodes) together with their corresponding paths.
+type LeafCallback func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error
 
 // Trie is a Merkle Patricia Trie.
 // The zero value is an empty trie with no database.
diff --git a/trie/trie_test.go b/trie/trie_test.go
index 3aa4098d14..492b423c2f 100644
--- a/trie/trie_test.go
+++ b/trie/trie_test.go
@@ -569,7 +569,7 @@ func BenchmarkCommitAfterHash(b *testing.B) {
 		benchmarkCommitAfterHash(b, nil)
 	})
 	var a account
-	onleaf := func(path []byte, leaf []byte, parent common.Hash) error {
+	onleaf := func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error {
 		rlp.DecodeBytes(leaf, &a)
 		return nil
 	}
@@ -830,8 +830,8 @@ func TestCommitSequenceStackTrie(t *testing.T) {
 			val = make([]byte, 1+prng.Intn(1024))
 		}
 		prng.Read(val)
-		trie.TryUpdate(key, common.CopyBytes(val))
-		stTrie.TryUpdate(key, common.CopyBytes(val))
+		trie.TryUpdate(key, val)
+		stTrie.TryUpdate(key, val)
 	}
 	// Flush trie -> database
 	root, _ := trie.Commit(nil)
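Taken together, the sync.go and trie.go changes mean every LeafCallback implementation must adopt the four-argument form. A sketch of a conforming callback; the variable handling is illustrative, and only the signature and the 1-or-2 element paths semantics come from this patch:

    package example // illustrative sketch, not part of this patch

    import "github.com/ethereum/go-ethereum/common"

    // onleaf matches the new LeafCallback. paths holds one raw 32-byte path
    // for a leaf in a single (account) trie, or two for a leaf reached via a
    // layered (account -> storage) trie, as assembled in Sync.children above.
    func onleaf(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error {
        switch len(paths) {
        case 1:
            _ = paths[0] // account hash, raw format
        case 2:
            _, _ = paths[0], paths[1] // account hash, then storage slot hash
        }
        _ = hexpath // composite hexary (nibble) path of the same node
        _ = leaf    // RLP-encoded leaf payload
        return nil
    }
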