investigating block 50295 crash #228

Closed · jangko opened this issue Feb 8, 2019 · 1 comment

jangko (Contributor) commented Feb 8, 2019

How to reproduce: using nimbus or persist, sync until block 50295. It will crash with:

persist.nim(104)         persist
persist.nim(76)          main
eth_types.nim(377)       persistBlocks
chain.nim(46)            persistBlocks
executor.nim(176)        processBlock
executor.nim(9)          processTransaction
executor.nim(27)         contractCall
vm_state_transactions.nim(50) execComputation
interpreter_dispatch.nim(255) executeOpcodes
interpreter_dispatch.nim(243) updateOpcodeExec
interpreter_dispatch.nim(210) frontierVM
opcodes_impl.nim(452)    sstore
state_db.nim(104)        setStorage     /// <---- interesting line
hexary.nim(632)          put
hexary.nim(624)          put
hexary.nim(610)          mergeAt
hexary.nim(529)          mergeAtAux
hexary.nim(519)          mergeAt
rlp.nim(67)              rawData
rlp.nim(236)             currentElemEnd
system.nim(3790)         failedAssertImpl
system.nim(3783)         raiseAssert
system.nim(2830)         sysFatal

The state_db.setStorage crash only happens if the DB is corrupted or the state trie is incomplete.

When using hunter, it passes block validation without any issue. This means the bug is not inside the VM but outside it.

It involves contract address 0x109c4f2ccc82c4d77bde15f306707320294aea3f.

Looking at the etherscan.io history, right before block 50295 there is a failed out-of-gas (OOG) contract call at block 50294. My current suspicion is that when the OOG happened, the state trie was not updated properly. But where? Is some transaction.dispose() called at the wrong place, or do we already need some journalDB?
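
For context, here is a minimal sketch of the snapshot/revert journaling that a journalDB would provide. This is plain Nim, not nimbus-eth1 code, and every name in it is hypothetical: each storage write records the value it overwrote, and when a call fails with OOG, everything after the snapshot is rolled back instead of leaking into the state trie.

```nim
import tables

type
  JournalEntry = object
    slot: uint64
    hadPrev: bool    # did the slot hold a value before this write?
    prev: uint64     # the previous value, if any
  JournalDB = object
    storage: Table[uint64, uint64]   # one contract's storage slots
    journal: seq[JournalEntry]

proc setStorage(db: var JournalDB, slot, value: uint64) =
  # Record the old value before overwriting so the write can be undone.
  var e = JournalEntry(slot: slot)
  if slot in db.storage:
    e.hadPrev = true
    e.prev = db.storage[slot]
  db.journal.add e
  db.storage[slot] = value

proc snapshot(db: JournalDB): int =
  # A snapshot is just the current journal length.
  db.journal.len

proc revert(db: var JournalDB, snap: int) =
  # Undo every write recorded after the snapshot, newest first.
  for i in countdown(db.journal.high, snap):
    let e = db.journal[i]
    if e.hadPrev: db.storage[e.slot] = e.prev
    else: db.storage.del e.slot
  db.journal.setLen snap

when isMainModule:
  var db: JournalDB
  db.setStorage(1, 42)      # committed by an earlier, successful call
  let snap = db.snapshot()
  db.setStorage(1, 99)      # write made by the call that later runs OOG
  db.setStorage(2, 7)
  db.revert(snap)           # OOG: roll back everything after the snapshot
  doAssert db.storage[1] == 42
  doAssert 2 notin db.storage
```

The point of this pattern is that a failed call costs only the writes recorded since its snapshot, and provisional writes never reach the trie at all, which would rule out the incomplete-trie state that setStorage is crashing on.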

jangko (Contributor, Author) commented Feb 8, 2019

I thought my local LMDB database was corrupted, but when I started a fresh DB with RocksDB, the same thing happened.

jangko added a commit to jangko/nimbus-eth1 that referenced this issue Feb 12, 2019
jangko added a commit to jangko/nimbus-eth1 that referenced this issue Feb 15, 2019
zah closed this as completed in c53e7fa Feb 15, 2019