Geth sync performance #14643
This software is buggy. Try parity. If that fails too, stop thinking about running a node on ethereum.
This is a known issue (see #14575, ethereum/mist#2466 for some possible duplicates) which is currently being worked on (see #14460). Hopefully it will be solved soon. I believe the pull request is in need of another reviewer right now, if you have such talent.
So it is a known serious bug. I started with geth on 27-may; on 8-may I was stuck at block number 2468707, and I saved that data for future use (69.5 GB). I was only syncing 10 hours daily. I searched in different forums but could not find any relevant information, except the advice to use --fast mode. I thought I should give it a try. After running geth continuously for 10 days I had to stop it yesterday (see #14647). In the end I exported the chain from geth (block 2468707) and imported it into parity, and within 6 hours I had the best block. As I am not a geek, you may understand my frustration. My opinion: do not use geth in fast mode.
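For reference, a rough sketch of the export/import path described above, using geth's `export` and parity's `import` subcommands; the file name `chain.rlp` is just a placeholder, and the block range is taken from the numbers in the comment:

```sh
# Export blocks 0..2468707 from the local geth database to an RLP file
geth export chain.rlp 0 2468707

# Import the exported blocks into parity
parity import chain.rlp
```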
@remyroy Thanks for the links!
@dsvi can you try to start geth with an increased cache, for example 2 GB of cache allocation, or any other higher value?
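A minimal sketch of such an invocation; geth's `--cache` flag (mentioned again in the next comment) takes a size in MB, so 2048 corresponds to roughly 2 GB:

```sh
# Start geth with roughly 2 GB of internal cache (--cache is given in MB)
geth --cache 2048
```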
I had tried --cache 8192, with no difference.
Check out this thread: #15001 |
Geth is painfully slow on non-SSD drives: syncing speed is about 1 block per 5 seconds.
Also, leveldb, which geth uses, has serious write amplification problems (essentially an SSD killer), since the same data is rewritten tens of times over the course of its numerous compactions.
Are other DBs being considered for geth? The current one seems well suited neither for HDD (too slow) nor for SSD (it wears the drive out quickly), which may become a bigger problem as the chain data keeps growing.