
Quickjs Fixes #5099

Merged: 3 commits merged into main from quickjs-fixes on Jun 22, 2024
Conversation

nickva (Contributor) commented Jun 22, 2024

A few QuickJS Improvements

  1. Make QuickJS memory limits match Spidermonkey

Spidermonkey only sets the stack size limit, so do the same for QuickJS.

This changes how the OOM test behaves, and since we already have two OOM tests, remove the flaky one.

  1. Make QuickJS dispatch respond with an error instead of throwing a null

For some internal errors, especially when bumping into memory limits, a null exception is thrown. Instead of re-throwing, try to behave like the SM loop.js and the other error handling paths: return an error line and then indicate we shouldn't continue. This way, process crashes will be reserved for segfaults and the like.
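The behavior above can be sketched in Python as an illustrative model of the dispatch loop (the actual implementation is C, and `dispatch`, `run_js`, and the command shape here are hypothetical; the `["error", id, reason]` line format follows the CouchDB query server protocol):

```python
import json

def dispatch(command, run_js):
    """Run one query-server command. On an internal error (modeled here as
    run_js raising), emit a CouchDB-style ["error", id, reason] line instead
    of throwing, and signal the caller to stop the loop."""
    try:
        return json.dumps(run_js(command)), True  # (response line, continue)
    except Exception as err:
        # Internal failure (e.g. hitting a memory limit): respond with an
        # error line so the OS process can wind down cleanly; actual crashes
        # stay reserved for segfaults and the like.
        return json.dumps(["error", "internal_error", str(err)]), False

def failing(_command):
    # Stand-in for a JS call that hits an internal limit
    raise MemoryError("limit reached")

line, keep_going = dispatch(["map_doc", {"_id": "x"}], failing)
```

The second element of the return value lets the caller exit the loop gracefully instead of relying on the process dying.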

  1. Improve QuickJS scanner plugin (improvements noticed after scanning a few clusters)
  • Be more resilient to errors, so we log the errors but do not get stuck forever retrying to rescan the same db/ddoc/doc.

  • Reduce the memory and doc batch size limits. The previous size and memory limits were taken from the mrview index settings, but we're also dealing with VDUs, filters, and other things, so to keep the engines from crashing or consuming too many resources, opt to reduce the limits a bit at the expense of possibly running longer.

  • Previously, if a view check crashed the JS process and it died, any filter or VDU checks for the same DB would fail, because the new JS processes won't have the "taught" design documents. To fix this, if we notice the SM or QJS process is dead, we respawn it and then also load all the design docs.

  • Since we're not building an actual view btree, don't set a reduce limit. We send potentially a lot more data to the reduce functions, and we'd have a higher chance of triggering the reduce limit exception. So then we just end up showing a discrepancy in stack traces and reduce limit error format, instead of validating the actual output of the reduce functions.

  • Since QuickJS is the new engine, and is most likely to show issues or crashes, always try to use the Spidermonkey process first instead of the QuickJS one. Otherwise, if SM crashes, we may not get a good signal whether there is some discrepancy between engines or just bad JS code in general that throws errors anyway.

  • Add a few more tests for some large reduce values and some odd formatting around functions.
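The respawn-and-reteach and SM-first points above can be sketched as follows (an illustrative Python model; the real scanner plugin is Erlang, and `JsProc`, `ensure_proc`, and `pick_engines` are hypothetical names):

```python
class JsProc:
    """Minimal model of a JS OS process that can die and be respawned."""
    def __init__(self, engine, taught=None):
        self.engine = engine              # "spidermonkey" or "quickjs"
        self.alive = True
        self.taught = list(taught or [])  # design docs loaded into the process

    def teach(self, ddoc):
        self.taught.append(ddoc)

def ensure_proc(proc, ddocs, spawn):
    """If the process died (e.g. a view check crashed it), respawn it and
    re-load all design docs so later filter/VDU checks still work."""
    if proc.alive:
        return proc
    fresh = spawn(proc.engine)
    for ddoc in ddocs:
        fresh.teach(ddoc)
    return fresh

def pick_engines():
    # Try the battle-tested Spidermonkey process first; if SM itself crashes,
    # we can't tell an engine discrepancy from plain bad JS.
    return ["spidermonkey", "quickjs"]

# Example: a dead SM process gets respawned with all ddocs re-taught
ddocs = ["_design/a", "_design/b"]
proc = JsProc("spidermonkey", taught=ddocs)
proc.alive = False
proc = ensure_proc(proc, ddocs, lambda engine: JsProc(engine))
```

Re-teaching on respawn is what keeps a crash during a view check from poisoning all subsequent filter and VDU checks for the same DB.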

nickva merged commit f1fc2f0 into main on Jun 22, 2024
17 checks passed
nickva deleted the quickjs-fixes branch on June 22, 2024 at 05:13