
memory access out of bounds issues with the nodejs-server-sdk #868

Open
gleidin opened this issue Jun 3, 2024 · 5 comments

@gleidin

gleidin commented Jun 3, 2024

Hi! It's me again! haha

This time I'm bringing up something more critical. We deployed the solution to stage and then to production, saw a sudden increase in memory usage from the pods, and then the pods started crashing due to a memory access out of bounds error.

The error started occurring more frequently, so we rolled it back.
We are using the same stack; we just updated the library as you recommended.
Node: v16.20.2 (npm v8.19.4)
@devcycle/nodejs-server-sdk: ^1.30.1

```
RuntimeError: memory access out of bounds
    at wasm://wasm/000c6366:wasm-function[5]:0xd06
    at wasm://wasm/000c6366:wasm-function[6]:0xd45
    at wasm://wasm/000c6366:wasm-function[499]:0x2b403
    at wasm://wasm/000c6366:wasm-function[12]:0x1308
    at wasm://wasm/000c6366:wasm-function[15]:0x1661
    at __lowerTypedArray (/app/dist/node_modules/@devcycle/bucketing-assembly-script/build/bucketing-lib.release.js:573:38)
    at Object.variableForUser_PB (/app/dist/node_modules/@devcycle/bucketing-assembly-script/build/bucketing-lib.release.js:100:18)
    at variableForUser_PB (/app/dist/node_modules/@devcycle/nodejs-server-sdk/src/utils/userBucketingHelper.js:52:65)
    at DevCycleClient.variable (/app/dist/node_modules/@devcycle/nodejs-server-sdk/src/client.js:159:77)
    at DevCycleClient.variableValue (/app/dist/node_modules/@devcycle/nodejs-server-sdk/src/client.js:177:21)
```

Could someone help with this issue?

@elliotCamblor
Contributor

Hey @gleidin, sorry to hear about the issue you're running into. Would it be possible to get some more information about your stack and implementation? Valuable information would include how your app starts up/shuts down, as well as how the initDevcycle function you shared in the previous issue is being used. That would help me very much in my investigation!

@gleidin
Author

gleidin commented Jun 5, 2024

Hey @elliotCamblor, thanks for the fast reply.

Of course. The stack is quite messy, I would say: the application is a huge monolith using Express, and the initDevcycle function is called the moment the API server starts. We listen for SIGTERM to shut down the server; however, we haven't implemented the same for the DevCycle library (let me know if we should). It's also worth mentioning that the server gets at least 2k requests per minute at peak hours (that's when we saw the high number of errors coming from the DevCycle SDK). A sketch of the setup is below.
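To make that concrete, the startup/shutdown wiring looks roughly like this (a simplified sketch, not our exact code; the port, the SDK key env var name, and the route setup are placeholders):

```js
const express = require('express')
const { initializeDevCycle } = require('@devcycle/nodejs-server-sdk')

let devcycleClient

// called once when the API server starts
async function initDevcycle() {
  devcycleClient = await initializeDevCycle(
    process.env.DEVCYCLE_SERVER_SDK_KEY,
  ).onClientInitialized()
  return devcycleClient
}

const app = express()
// ...routes...

async function start() {
  await initDevcycle()
  const server = app.listen(process.env.PORT || 3000)

  process.on('SIGTERM', () => {
    // we currently only close the HTTP server here;
    // there is no devcycleClient.close() call yet
    server.close(() => process.exit(0))
  })
}

start()
```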

One thing worth mentioning: we have a to-be-deprecated feature-flag service, implemented earlier, that works as an authorization feature-flag service. We implemented DevCycle to start listening/validating at the same level as that one, so we probably had a few hundred thousand requests hitting that function at peak moments; see the sketch below.
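Roughly, the per-request check looks like this (again a simplified sketch; the flag key, the user fields, and legacyFeatureFlagService are placeholders for our real code):

```js
// Simplified sketch of the per-request authorization check (names are placeholders)
app.use((req, res, next) => {
  const user = { user_id: req.headers['x-user-id'] || 'anonymous' }

  // evaluated on every request, so at peak this runs a few
  // hundred thousand times within a short window
  const useNewFlow = devcycleClient.variableValue(user, 'new-authorization-flow', false)

  if (!useNewFlow) {
    // fall back to the old authorization feature-flag service
    return legacyFeatureFlagService.authorize(req, res, next)
  }
  next()
})
```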

I am not quite sure whether the library is built to support that much traffic, or whether we should only use it in lower-traffic operations?

@elliotCamblor
Contributor

Hey @gleidin, thank you! I have a few more follow-up questions:
Do you use any sort of hot reloading, or any process that could result in the devcycle client being initialized several times?
As for the frequency, does it only seem to happen during peak hours in production? Or have you seen it in local development as well?

We don't believe the loads you mentioned should be a problem, as we run the SDK under similar load ourselves. However, I don't want to rule anything out.

Additionally, if you aren't comfortable sharing details in this public forum, you're welcome to join our Discord/Slack. Just let me know and I can get you an invite to either/both.

@gleidin
Author

gleidin commented Jun 5, 2024

Hello!

Starting with the hot reloading question: no, we don't use it. It happened in production only; the stage environment had been running it for more than two days and we couldn't detect any issues running our acceptance tests. And yes, we saw it more frequently during peak time, though we had a few cases outside of it as well.

Note: since you confirmed that the load shouldn't be a problem, I will work on adding it back and rolling it out slowly while stress-testing more on our backend side. If I get a better error message or more details, I'll bring them up here.

@elliotCamblor
Contributor

That would be great, let us know what you find!
