Massive spike in DB size on Goerli #1159
It doesn't seem like

But

The

Running
This is very weird.
If I try to copy the contents to the root volume I can see it takes up more than 395 GB:
So there's something off with how I'm counting the file sizes.

Actually, if I manually add up the sizes of all files using
So there's something wrong with how
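For context (this is not from the thread), here is a minimal sketch of that kind of manual tally, assuming a Linux host and a hypothetical data directory path. It compares the apparent size of every file (`st_size`, what a plain sum gives you) with the space actually allocated on disk (`st_blocks`), which is what `du` and the volume usage reflect, so sparse files or allocation overhead show up as a gap between the two numbers:

```python
# Hypothetical helper: compare the apparent size of all files under a
# directory with the space actually allocated on disk.
import os

def dir_sizes(root: str) -> tuple[int, int]:
    apparent, allocated = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # file may have been compacted away mid-walk
            apparent += st.st_size           # what summing `ls -l` sizes reports
            allocated += st.st_blocks * 512  # what `du` reports (512-byte blocks)
    return apparent, allocated

if __name__ == "__main__":
    a, d = dir_sizes("/data/nimbus-eth1")  # path is an assumption
    print(f"apparent: {a / 1e9:.1f} GB, on disk: {d / 1e9:.1f} GB")
```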
My suspicion has been that the persistent storage of consensus snapshots could cause problems. By default, a snapshot is persisted every 1k blocks (I changed that to 4k for the upcoming pull request; a minimal sketch of this interval logic follows the statistics below.) I added some snapshot storage logging in TRACE mode and ran a sync with it. As it seems, the impact of reducing the number of persistent snapshots by a factor of 4 is negligible for the first 2.2m blocks; the disk storage size showed little difference between the samples. Here are the statistics for syncing ~2.2m blocks.

Caching every 1k blocks:

Caching every 1k blocks after replacing legacy LRU handler:

Caching every 4k blocks after replacing legacy LRU handler:

Legend:
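For illustration only, this is a minimal sketch of the interval-based snapshot persistence described above, not the actual nimbus-eth1 code; the class and method names are hypothetical. It keeps every snapshot in a bounded in-memory LRU and writes one to disk only every `interval` blocks, which is the knob the 1k/4k comparison above is varying:

```python
# Minimal sketch (hypothetical names): persist a consensus snapshot only
# every `interval` blocks, and keep recent snapshots in a bounded LRU so
# the rest never hit disk.
from collections import OrderedDict

class SnapshotStore:
    def __init__(self, interval: int = 4096, lru_size: int = 128):
        self.interval = interval
        self.lru_size = lru_size
        self.lru: OrderedDict[int, bytes] = OrderedDict()

    def put(self, block_number: int, snapshot: bytes) -> None:
        # Always cache in memory, evicting the least recently used entry.
        self.lru[block_number] = snapshot
        self.lru.move_to_end(block_number)
        if len(self.lru) > self.lru_size:
            self.lru.popitem(last=False)
        # Persist only at the configured interval; a larger interval means
        # fewer snapshots in the database, at the cost of more replay work
        # when a snapshot outside the cache is needed.
        if block_number % self.interval == 0:
            self._persist(block_number, snapshot)

    def _persist(self, block_number: int, snapshot: bytes) -> None:
        pass  # the real implementation would write to the on-disk DB here
```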
I'm running out of space on the host. Unless you have objections @mjfh, I will purge the node data to let it re-sync.

No objections :)

Done. Let's see how it will look after resyncing.
We got re-synced (I think?) and we're back to 1.6 TB:
I'm not actually sure if we are synced because the RPC call times out:
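As a side note, one way to check whether the node is actually synced without waiting on a heavy call is to query the standard `eth_syncing` and `eth_blockNumber` JSON-RPC methods. A small sketch, assuming a local endpoint on port 8545 (endpoint and timeout are assumptions):

```python
# Quick check of sync status via the standard Ethereum JSON-RPC methods.
import json
import urllib.request

def rpc(method: str, url: str = "http://127.0.0.1:8545", timeout: int = 10):
    payload = json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": method, "params": []}
    ).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["result"]

if __name__ == "__main__":
    syncing = rpc("eth_syncing")          # False when fully synced
    head = int(rpc("eth_blockNumber"), 16)
    print(f"syncing: {syncing}, head block: {head}")
```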
I've identified a very big spike in disk usage by our nimbus-eth1 node running on Goerli. It appears the version running at the time was 7f0bc71:
This caused the DB to grow up to 1.6 TB:
Which is over 10x more than a fully synced Geth node on Goerli:
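As an aside, a spike like this is easier to pin down in time if usage is logged periodically; a hypothetical monitoring sketch (the data directory path and interval are assumptions, and `shutil.disk_usage` reports usage of the volume holding that path, not the directory alone):

```python
# Hypothetical monitoring sketch: log the used space on the volume holding
# the node's data directory at a fixed interval, so a sudden growth spike
# shows up in the logs with a timestamp.
import shutil
import time

DATA_DIR = "/data/nimbus-eth1"  # assumed path
INTERVAL_SECONDS = 3600

def log_usage_forever() -> None:
    while True:
        usage = shutil.disk_usage(DATA_DIR)
        used_gb = usage.used / 1e9
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} volume used: {used_gb:.1f} GB")
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    log_usage_forever()
```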