A few QuickJS Improvements
SpiderMonkey only sets the stack limit size, so do the same for QuickJS.
This changes how the OOM test behaves, and since we already have two OOM tests, remove the flaky one.
For some internal errors, especially when bumping into memory limits, a null exception is thrown. Instead of re-throwing, try to behave like the SM loop.js and the rest of the error handling: return an error line and then indicate that we shouldn't continue. This way, process crashes are reserved for segfaults and the like.
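The shape of that error-handling pattern can be sketched in Python (names like `proc.eval` are hypothetical stand-ins for the engine call; the real code lives in the JS loop and its Erlang caller):

```python
def eval_doc(proc, doc):
    """Run a JS check; turn internal errors (e.g. hitting a memory
    limit) into an error result plus a "don't continue" signal,
    instead of letting the whole process crash.

    Returns (result, keep_going).
    """
    try:
        return proc.eval(doc), True
    except MemoryError:
        # Engine hit a memory limit: report an error line, but tell
        # the caller this process should not be used any further.
        return {"error": "internal_error", "reason": "out of memory"}, False
```

The caller inspects the second element and stops feeding work to the process instead of treating the failure as fatal.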
Be more resilient to errors: log them, but do not get stuck forever retrying to rescan the same db/ddoc/doc.
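A minimal sketch of that log-and-move-on behavior, assuming a hypothetical `check` callback per doc (the actual scanner is Erlang; this only illustrates the control flow):

```python
import logging

def scan_all(docs, check):
    """Scan each doc exactly once. On error, log it and record the
    failure instead of retrying the same db/ddoc/doc forever.
    """
    failures = []
    for doc_id, doc in docs.items():
        try:
            check(doc)
        except Exception as err:
            # Logged and remembered, but never re-queued.
            logging.error("scan failed for %s: %s", doc_id, err)
            failures.append(doc_id)
    return failures
```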
Reduce the memory and doc batch size limits. The previous size and memory limits were taken from the mrview index settings, but we're also dealing with VDUs, filters, and other things, so to keep the engines from crashing or consuming too many resources, reduce the limits a bit at the expense of possibly running longer.
Previously, if a view check crashed the JS process and it died, any subsequent filter or VDU checks for the same DB would fail, because the new JS processes wouldn't have been "taught" the design documents. To fix this, if we notice the SM or QJS process is dead, we respawn it and then load all the design docs again.
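The respawn-and-reteach flow can be sketched as follows (a Python illustration under assumed names: `spawn` is a factory for a JS process, and `teach`/`alive`/`check` are hypothetical methods on it):

```python
class Checker:
    """If the JS process has died, respawn it and re-load all design
    docs before running the next check, so filter/VDU checks for the
    same DB keep working after a crash."""

    def __init__(self, spawn, ddocs):
        self.spawn = spawn    # factory that starts a fresh JS process
        self.ddocs = ddocs    # design docs this process must know
        self.proc = self._fresh()

    def _fresh(self):
        proc = self.spawn()
        for ddoc in self.ddocs:
            proc.teach(ddoc)  # re-load every design doc into the new process
        return proc

    def run(self, doc):
        if not self.proc.alive():
            # Dead process detected: respawn and re-teach before continuing.
            self.proc = self._fresh()
        return self.proc.check(doc)
```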
Since we're not building an actual view btree, don't set a reduce limit. We send potentially a lot more data to the reduce functions, so we'd have a higher chance of triggering the reduce limit exception, and then we'd end up merely reporting a discrepancy in stack traces and reduce limit error formats instead of validating the actual output of the reduce functions.
Since QuickJS is the new engine and is the one most likely to show issues or crashes, always try the SpiderMonkey process first instead of the QuickJS one. Otherwise, if SM crashes, we may not get a good signal about whether there is a discrepancy between engines or just bad JS code that throws errors anyway.
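The SM-first ordering can be pictured like this (a hedged Python sketch; `sm`/`qjs` are hypothetical engine wrappers with a `check` method, not the actual API):

```python
def compare_engines(sm, qjs, doc):
    """Run the check on SpiderMonkey first to establish the trusted
    baseline, then on QuickJS, and report a discrepancy if the two
    results differ."""
    baseline = sm.check(doc)    # SM first: the reference result
    candidate = qjs.check(doc)  # QJS: the engine under scrutiny
    if baseline != candidate:
        return {"discrepancy": {"sm": baseline, "qjs": candidate}}
    return {"ok": baseline}
```

Running SM first means that if SM itself crashes on the doc, we learn that before QuickJS output muddies the picture.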
Add a few more tests for some large reduce values and some odd formatting around functions.