
EVMC: Small stacks when using EVMC, closes #575 (segfaults) #598

Merged
1 commit merged into master from jl/evmc-fix-stack-overflow, Apr 27, 2021

Conversation

@jlokier (Contributor) commented Apr 26, 2021

This patch reduces stack space used with EVM in ENABLE_EVMC=1 mode, from 13 MB worst case to 550 kB, a 24x reduction.

This completes fixing the "stack problem" and closes #575 (EVM: Different segmentation faults when running the test suite with EVMC).

It also closes #256 (recursive EVM call trigger unrecoverable stack overflow).

After this patch, it is possible to re-enable the CI targets which had to be disabled due to #575.

This change is also a required precursor for switching over to "nearly EVMC" as the clean and focused Nimbus-internal API between EVM and sync/database processes, and is also key to the use of Chronos async in those processes when calling the EVM.

(The motivation is that the internal interface has to be substantially changed anyway for the parallel sync and database processes, and EVMC turns out to be well-designed and well-suited for this. It provides good separation between modules, and suits our needs better than our other current interface. Might as well use a good one designed by someone else. EVMC is 98% done in Nimbus thanks to great work done before by @jangko, and we can use Nimbus-specific extensions where we need flexibility, including for performance. Being aligned with the ecosystem is a useful bonus feature.)

All tests below were run on Ubuntu 20.04 LTS server, x86-64. This matches one of the targets that has been disabled for a while in CI in EVMC mode due to stack overflow crashing the tests, so it's a good choice.

Measurements before

Testing commit e76e0144 (2021-04-22 11:29:42 +0700, "add submodules: graphql and toml-serialization").

    $ rm -f build/all_tests && make ENABLE_EVMC=1 test
    $ ulimit -S -s 16384 # Requires larger stack than default to avoid crash.
    $ ./build/all_tests 9 | tee tlog
    [Suite] persist block json tests
    ...
    Stack range 38416 depthHigh 3
    ...
    Stack range 13074720 depthHigh 1024
    [OK] tests/fixtures/PersistBlockTests/block1431916.json

These tests use 13.07 MB of stack to run, and so crash with the default stack limit on Ubuntu Server 20.04 (8 MB). That works out to exactly 12768 bytes per EVM call stack frame.

    $ rm -f build/all_tests && make ENABLE_EVMC=1 test
    $ ulimit -S -s 16384 # Requires larger stack than default.
    $ ./build/all_tests 7 | tee tlog
    [Suite] new generalstate json tests
    ...
    Stack range 14384 depthHigh 2
    ...
    Stack range 3495456 depthHigh 457
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stRandom2/randomStatetest639.json
    ...
    Stack range 3709600 depthHigh 485
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stRandom2/randomStatetest458.json
    ...
    Stack range 7831600 depthHigh 1024
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stCreate2/Create2OnDepth1024.json

These tests use 7.83 MB of stack to run, about 7648 bytes per EVM call stack frame. It only just avoids crashing with the default Ubuntu Server stack limit of 8 MB. However, it still crashes on Windows x86-64, which is why the Windows CI EVMC target is currently disabled.

On Linux, where this passes, the margin is so slim that it affects work and testing of the complex storage code, because that's called from the EVM.

Also, this greatly exceeds the default thread stack size.

Measurements after

    $ rm -f build/all_tests && make ENABLE_EVMC=1 test
    $ ulimit -S -s 600 # Because we can!  600k stack.
    $ ./build/all_tests 9 | tee tlog
    [Suite] persist block json tests
    ...
    Stack range 1936 depthHigh 3
    ...
    Stack range 556272 depthHigh 1022
    Stack range 556512 depthHigh 1023
    Stack range 556816 depthHigh 1023
    Stack range 557056 depthHigh 1024
    Stack range 557360 depthHigh 1024
    [OK] tests/fixtures/PersistBlockTests/block1431916.json

    $ rm -f build/all_tests && make ENABLE_EVMC=1 test
    $ ulimit -S -s 600 # Because we can!  600k stack.
    $ ./build/all_tests 7 | tee tlog
    [Suite] new generalstate json tests
    ...
    Stack range 1392 depthHigh 2
    ...
    Stack range 248912 depthHigh 457
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stRandom2/randomStatetest639.json
    ...
    Stack range 264144 depthHigh 485
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stRandom2/randomStatetest458.json
    ...
    Stack range 557360 depthHigh 1024
    [OK] tests/fixtures/eth_tests/GeneralStateTests/stStaticCall/static_CallRecursiveBombPreCall.json

For both tests, a satisfying 544 bytes per EVM call stack frame; the EVM takes less than 600 kB in total. With other overheads included, both tests run in 600 kB of stack at maximum EVM depth.
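(The per-frame figure can be read straight off consecutive depths in the log above: 557056 - 556512 = 544 bytes between depthHigh 1023 and 1024.)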

We must add some headroom on this for database activity called from the EVM, and different compile targets. But it means the EVM itself is no longer a stack burden.

This is much smaller than the default thread stack size on Linux (2 MB), with plenty of margin. (It is still larger than a small thread stack on Linux from long ago (128 kB), and than some small embedded C targets, for what that's worth.)

This size is well suited to running EVMs in threads.

Further reduction

This patch solves the stack problem: the Windows and Linux 64-bit EVMC CI targets can be re-enabled, with stack usage no longer an issue.

We can reduce further to ~340 bytes per frame and 350 kB total, while still complying with EVMC. But as this involves changing how errors are handled to comply fully with EVMC, and removing dispose calls, it's not worth doing now while there are other EVMC changes in progress that will have the same effect.

A Nimbus-specific extension will allow us to avoid recursion with EVMC anyway, bringing bytes per frame to zero. We need the extension regardless, to support Chronos async, which parallel transaction processing is built around.
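To illustrate the shape of that non-recursive design, here is a purely hypothetical, self-contained Nim sketch (none of these names are the PR's API): nested calls are driven by an explicit seq, so nesting depth costs heap rather than C stack.

    type
      StepResult = enum enterChild, done
      Computation = ref object
        depth: int
        child: Computation

    proc step(c: Computation): StepResult =
      # Pretend each frame spawns one child until the EVM depth limit.
      if c.child == nil and c.depth < 1024:
        c.child = Computation(depth: c.depth + 1)
        enterChild
      else:
        done

    proc run(root: Computation) =
      var stack = @[root]          # explicit call stack on the heap
      while stack.len > 0:
        case stack[^1].step()
        of enterChild: stack.add stack[^1].child
        of done: discard stack.pop()

    run(Computation(depth: 1))     # reaches depth 1024 with no recursion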

Interop with non-Nimbus over EVMC won't let us avoid recursion, but then we can't control the stack frame size either. To prevent stack overflow in interop I anticipate using [this method in Aleth](https://github.com/ethereum/aleth/blob/6e96ce34e3f131e2d42f3cb00741b54e05ab029d/libethereum/ExtVM.cpp#L61).

Smoke test other versions of GCC and Clang/LLVM

As all builds, including Windows, use GCC or Apple's Clang/LLVM, this is just to verify we're in the right ballpark on all targets. I've only checked x86_64, though; not 32-bit, and not ARM.

It's interesting to see that GCC 10 uses less stack. This is because it optimises struct returns better, sometimes skipping an intermediate copy. Here it benefits the EVMC API, but I found GCC 10 also improves the larger stack usage of the rest of nimbus-eth1.

Apple Clang 12.0.0 (clang-1200.0.26.2) on macOS 10.15:

  • 544 bytes per EVM call stack frame

GCC 10.3.0 (Ubuntu 10.3.0-1ubuntu1) on Ubuntu 21.04:

  • 464 bytes per EVM call stack frame

GCC 10.2.0 (Ubuntu 10.2.0-5ubuntu1~20.04) on Ubuntu 20.04 LTS:

  • 464 bytes per EVM call stack frame

GCC 11.0.1 20210417 (experimental; Ubuntu 11-20210417-1ubuntu1) on Ubuntu 21.04:

  • 528 bytes per EVM call stack frame

GCC 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04) on Ubuntu 20.04 LTS:

  • 544 bytes per EVM call stack frame

GCC 8.4.0 (Ubuntu 8.4.0-3ubuntu2) on Ubuntu 20.04 LTS:

  • 544 bytes per EVM call stack frame

GCC 7.5.0 (Ubuntu 7.5.0-6ubuntu2) on Ubuntu 20.04 LTS:

  • 544 bytes per EVM call stack frame

GCC 9.2.1 20191008 (Ubuntu 9.2.1-9ubuntu2) on Ubuntu 19.10:

  • 528 bytes per EVM call stack frame

nimbus.nimble (outdated diff)
    @@ -31,7 +31,8 @@ proc buildBinary(name: string, srcDir = "./", params = "", lang = "c") =

     proc test(name: string, lang = "c") =
       buildBinary name, "tests/", "-d:chronicles_log_level=ERROR"
    -  exec "build/" & name
    +  # Validate stack usage stays reasonable by setting 1024k stack limit
    +  exec "ulimit -s 1024 && build/" & name
Contributor:

I guess there is no ulimit on Windows? Hence the CI failing.

@jlokier (Contributor, Author) commented Apr 27, 2021:

That's not the reason. It fails CI because on Windows, NimScript exec doesn't run the string as a shell command: the string is given directly to the Win32 CreateProcess{A,W} API. On non-Windows, exec gives the string to system, which runs a shell, so Nim is inconsistent about this (it could use a shell on Windows, or use execve on other targets).

There actually is a stack limit on Windows, and we can call out to $SHELL to run ulimit. But due to another quirk, Windows Bash's ulimit has no effect: it's accepted but quietly ignored. So there's no point doing it.

There's another way to set the stack limit on Windows, but it's more complicated than a one-liner and I won't implement it for this patch. The Linux/Mac stack limit is enough to keep our code validated.

Updated patch skips setting the stack limit on Windows.
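A minimal NimScript sketch of that skip, reusing this file's buildBinary helper (the comment wording is mine, not the patch's):

    proc test(name: string, lang = "c") =
      buildBinary name, "tests/", "-d:chronicles_log_level=ERROR"
      when defined(windows):
        # `exec` hands this straight to CreateProcess, so a shell
        # builtin like `ulimit` can't do anything here.
        exec "build/" & name
      else:
        # Validate stack usage stays reasonable with a 1024k limit.
        exec "ulimit -s 1024 && build/" & name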

Contributor:

I don't remember how exec was implemented. But on Linux, Nim's startProcess also has its own quirks.

It doesn't matter, we have more important things to do.

@jlokier (Contributor, Author) commented Apr 27, 2021:

On Windows, basically you can either edit a field in the executable to specify the startup stack size, or create a thread on startup (a new thread's stack size can be set when it is created) and run everything else from that second thread, with the initial thread just waiting for the second to finish.
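For the first approach, a hedged NimScript sketch; the flag spelling assumes the MinGW/GNU ld toolchain used for the Windows builds, and the 2 MiB value is illustrative:

    proc test(name: string, lang = "c") =
      var params = "-d:chronicles_log_level=ERROR"
      when defined(windows):
        # Edit the PE stack-reserve field at link time instead of
        # trying to lower the limit at run time.
        params &= " --passL:-Wl,--stack,0x200000"
      buildBinary name, "tests/", params
      exec "build/" & name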

Contributor:

...

@jlokier (Contributor, Author):

Nice, thanks.

@jlokier (Contributor, Author):

It's now implemented as you suggested.

@jlokier jlokier linked an issue Apr 27, 2021 that may be closed by this pull request
@jlokier jlokier merged commit a3c8a5c into master Apr 27, 2021
@jlokier jlokier deleted the jl/evmc-fix-stack-overflow branch April 27, 2021 05:19
       if not c.msg.isCreate:
         c.afterExecCall()
       else:
         c.afterExecCreate()

    -template chainTo*(c, toChild: Computation, after: untyped) =
    +template chainTo*(c: Computation, toChild: typeof(c.child), after: untyped) =
Contributor:

You have to be careful with templates that use their parameters more than once. Nim will replace each occurrence of the parameter within the template body with the passed-in expression, potentially evaluating it multiple times; this is sometimes described as call-by-name. That can be a problem when the passed-in expression has side effects.

You can avoid the problem here by assigning c to a local variable before using it in the rest of the template.

@jlokier (Contributor, Author):

Good point, thanks for pointing it out.
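A self-contained illustration of the pitfall and the fix (hypothetical names, not code from this PR):

    proc nextValue(): int =
      echo "side effect"
      3

    template double(x: int): int =
      x + x                      # `x` is pasted in twice, so it runs twice

    echo double(nextValue())     # prints "side effect" twice, then 6

    # Fix: bind the argument to a local once, then use only the local.
    template doubleSafe(x: int): int =
      let v = x
      v + v

    echo doubleSafe(nextValue()) # prints "side effect" once, then 6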

      let child = newComputation(c.vmState, childMsg, Uint256.fromEvmc(m.create2_salt))
      child.execCallOrCreate()

    template leaveCreateImpl(c, child: Computation, res: nimbus_result) =
Contributor:

Ditto

@jlokier (Contributor, Author):

There's a mild dilemma with this one, as the goal was a minimal, "obvious" change that anyone could review at a glance. The pre-existing templates were already using c like this; I just split the templates up and didn't introduce any new multiple-evaluation possibilities. There's no actual bug, because we can see clearly where it's used just a few lines later. But I agree there's an increased risk of introducing one later by mistake. Performance can also suffer from multiple evaluations.

    @@ -153,7 +152,7 @@ template createImpl(c: Computation, m: nimbus_message, res: nimbus_result) =
       copyMem(res.output_data, child.output[0].addr, child.output.len)
       res.release = hostReleaseResultImpl

    -template callImpl(c: Computation, m: nimbus_message, res: nimbus_result) =
    +template enterCallImpl(c: Computation, m: nimbus_message): Computation =
Contributor:

Ditto

      let child = newComputation(c.vmState, childMsg)
      child.execCallOrCreate()

    template leaveCallImpl(c, child: Computation, res: nimbus_result) =
Contributor:

Ditto

    @@ -646,7 +646,8 @@ template genCreate(callName: untyped, opCode: Op): untyped =
       c.gasMeter.consumeGas(createMsgGas, reason="CREATE")

       when evmc_enabled:
    -    let msg = nimbus_message(
    +    let msg = new(nimbus_message)
    +    msg[] = nimbus_message(
Contributor:

You can use the following short-cut syntax here:

    let msg = (ref nimbus_message)(
      kind: ...
    )

... or define nimbus_message_ref as a type and then use it directly: nimbus_message_ref(...)

@jlokier (Contributor, Author):

Thanks, that's good to know.
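A tiny self-contained example of that shortcut (hypothetical type, not the PR's nimbus_message):

    type Msg = object
      kind: int
      depth: int

    # One expression allocates the ref and fills in the fields,
    # equivalent to `new(msg)` followed by `msg[] = Msg(kind: 1, depth: 0)`.
    let msg = (ref Msg)(kind: 1, depth: 0)
    echo msg.kind                # -> 1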

jlokier added a commit that referenced this pull request Apr 29, 2021
We can't use `ulimit -s` to limit stack size on Windows.  Even though Bash
accepts `ulimit -s` and the numbers change it has no effect and is not passed
to child processes.

(See https://public-inbox.org/git/alpine.DEB.2.21.1.1709131448390.4132@virtualbox/)

Instead, set it when building the test executable, following a suggestion from
@stefantalpalaru.
#598 (comment)

Note: This is a WIP patch with an intentionally small value, to verify that it
_fails_ CI and is therefore reducing the stack limit.  It will be replaced with
the correct value after.
jlokier added a commit that referenced this pull request Apr 29, 2021
We can't use `ulimit -s` to limit stack size on Windows.  Even though Bash
accepts `ulimit -s` and the numbers change it has no effect and is not passed
to child processes.

(See https://public-inbox.org/git/alpine.DEB.2.21.1.1709131448390.4132@virtualbox/)

Instead, set it when building the test executable, following a suggestion from
@stefantalpalaru.
#598 (comment)

To ensure no conflict with `config.nims`, `-d:windowsNoSetStack` is used.  This
proved unnecessary in practice because the command-line option is passed to the
linker after the config file option.  But given we don't have an automated test
to verify linker behaviour, it's best not to rely on the option order, on how
the linker treats it, or on whether Nim will always send them in that order.

Testing:

This has been verified by using a smaller limit.  At 400k, all `ENABLE_EVMC=0`
OS targets passed as expected, and all `ENABLE_EVMC=1` OS targets failed with
expected kinds of errors due to stack overflow.  That is, except 32-bit x86
Windows, which passed; 200k caused that to fail.

Note: This is a WIP patch with an intentionally small value, to verify that it
_fails_ CI and is therefore reducing the stack limit.  It will be replaced with
the correct value after.

Signed-off-by: Jamie Lokier <[email protected]>
jlokier added a commit that referenced this pull request Apr 30, 2021
We can't use `ulimit -s` to limit stack size on Windows.  Even though Bash
accepts `ulimit -s` and the numbers change it has no effect and is not passed
to child processes.

(See https://public-inbox.org/git/alpine.DEB.2.21.1.1709131448390.4132@virtualbox/)

Instead, set it when building the test executable, following a suggestion from
@stefantalpalaru.
#598 (comment)

To ensure no conflict with `config.nims`, `-d:windowsNoSetStack` is used.  This
proved unnecessary in practice because the command-line option is passed to the
linker after the config file option.  But given we don't have an automated test
to verify linker behaviour, it's best not to rely on the option order, on how
the linker treats it, or on whether Nim will always send them in that order.

Testing:

This has been verified by using a smaller limit.  At 200k, all `ENABLE_EVMC=0`
OS targets passed as expected, and all `ENABLE_EVMC=1` OS targets failed with
expected kinds of errors due to stack overflow, including Windows.
(400k wasn't small enough; 32-bit x86 Windows passed on that.)

Signed-off-by: Jamie Lokier <[email protected]>
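For reference, a sketch of how such a define can gate a link-time stack option in NimScript; the flag spelling and value are assumptions, and windowsNoSetStack is the define named in the commit message:

    # config.nims sketch: skip the PE stack-reserve flag when a build
    # (such as the test target) wants to set its own value instead.
    when defined(windows) and not defined(windowsNoSetStack):
      switch("passL", "-Wl,--stack,0x200000")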