If Sentry responds with 413 request too large, try to send the exception again without any state #95

Open

ndbroadbent wants to merge 2 commits into master
Conversation

@ndbroadbent commented Oct 13, 2018

Related: #42

This change ensures that all exceptions are reported, even if they don't include any state. In my own app, I'm also going to add some logic that removes the redux-undo state and only sends the current state.

Another solution would be to use the pako library to gzip the whole request, then send the data with a Content-Encoding: gzip header. Apparently some other Sentry clients do this, and since my state had a ton of repeated data due to the undo history, I was able to compress 220 KB down to 5 KB. I'm going to try this next and see if Sentry will accept the request.
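For illustration, a minimal sketch of that idea, assuming pako's gzip API and a placeholder endpoint (this is not raven-js's actual transport code):

```js
import pako from "pako";

// Gzip the serialized report and send it with a Content-Encoding header.
// The endpoint and payload shape are placeholders for this sketch.
function sendCompressed(endpoint, payload) {
  const body = pako.gzip(JSON.stringify(payload)); // returns a Uint8Array
  return fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Content-Encoding": "gzip",
    },
    body,
  });
}
```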

Actually, I just realized that it would be much better to intercept the request before it is sent and check that the body is under 204800 bytes. We can just do stringify(opts.data) in the transport and check the length. I'm just not sure whether the 200 KB "maximum payload size" also includes request headers; for my test requests, the headers added 527 B, so it might be good to leave a few KB of leeway and set the limit at 200000. Even if we detect the limit and bail out early, I think it's still worth keeping the retry logic for the 413 response, since the limit might change in the future. A sketch of the check follows.
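Here is a minimal sketch of that check, assuming the transport exposes the outgoing payload as opts.data (the helper and constant names are hypothetical):

```js
// Hypothetical early size check. Note that JSON.stringify(...).length counts
// characters rather than bytes, so it slightly undercounts multibyte text,
// which is one more reason to leave some headroom below the real limit.
const MAX_PAYLOAD_BYTES = 200000;

function isPayloadTooLarge(data) {
  return JSON.stringify(data).length > MAX_PAYLOAD_BYTES;
}
```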

Anyway, I'm going to keep working on this, but wanted to submit now to get some feedback. Thanks!

@codecov-io commented Oct 13, 2018

Codecov Report

Merging #95 into master will not change coverage.
The diff coverage is 100%.


@@          Coverage Diff          @@
##           master    #95   +/-   ##
=====================================
  Coverage     100%   100%           
=====================================
  Files           1      1           
  Lines          26     58   +32     
  Branches       10     13    +3     
=====================================
+ Hits           26     58   +32
Impacted Files Coverage Δ
index.js 100% <100%> (ø) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0ab4e52...e101118.

@ndbroadbent (Author) commented Oct 13, 2018

I've updated this PR with the 200 KB check, so we now skip the oversized request and just send the error without any state. But I decided it would be a good idea to still handle the 413 error and retry the request, because it was only a few extra lines of code.

Unfortunately, there was no way to reuse the json-stringify-safe code that raven-js uses, because they include a vendored copy with some changes in their repo. I also couldn't find any way to intercept the stringified data before it was sent, so I ended up copying that same file into raven-for-redux. It's only 76 lines of code, so it's not a big deal. (And it felt a lot better than overriding fetch(), XMLHttpRequest, and XDomainRequest.) The retry logic is sketched below.
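A rough sketch of the retry-on-413 shape described above; this is illustrative rather than the PR's actual code, and it assumes the serialized Redux state lives under data.extra:

```js
// Illustrative sketch: `send` stands in for the function that performs the
// actual HTTP request and rejects with an error carrying a `status` field.
function sendWithRetry(send, opts) {
  return send(opts).catch(err => {
    if (err && err.status === 413) {
      // 413 Payload Too Large: retry once with the Redux state stripped out.
      const { state, ...extra } = opts.data.extra || {}; // `state` is discarded
      return send({ ...opts, data: { ...opts.data, extra } });
    }
    throw err;
  });
}
```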

@ndbroadbent (Author) commented Oct 14, 2018

Unfortunately, zlib compression with pako turned out to be a dead end, although I did finish some proof-of-concept code to test it.

It was actually very easy to get the code working; the only issue is that Sentry rejects the Content-Encoding header during the CORS preflight request.

They probably just don't support gzip/deflate for browser requests, which is a bit odd, because most of their backend API clients seem to send error reports with gzip compression. Even if it worked, I don't think it would solve the underlying issue: after decompressing the data, Sentry would probably still enforce the 200 KB limit. I've also posted about this on the Sentry forums.

In the end, I just removed the past and future keys added by redux-undo (with a stateTransformer in my own code), and now my state is always under 200 KB (including breadcrumbs). I thought about sending the history to my own server and storing it in my own database, but I realized it's not actually that useful: I usually only need the latest state, action, and breadcrumbs to see why something crashed.

This might be controversial, but I think it would be great if the default stateTransformer did a quick check for any redux-undo history and automatically removed it, because that was the main source of my problem. The past and future arrays contain full copies of the state from every change the user makes. That isn't a problem in the browser, because the copies share memory references, but it becomes a huge problem once you need to serialize that data. This library is already specific to Redux, and redux-undo is the most common way of adding undo/redo functionality to a React/Redux application, so maybe this makes sense as a good default. Otherwise, most redux-undo users will end up with 413 responses, or with no state at all (when using the code in this PR). A sketch of such a transformer is below.
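Here is a rough sketch of what such a transformer might look like; it's illustrative, not this project's code, and it assumes undoable slices can be recognized by their past/present/future shape:

```js
// Illustrative sketch: detect redux-undo slices by their past/future arrays
// and drop the history so only the current value is serialized.
function removeReduxUndoHistoryFromState(state) {
  const transformed = {};
  for (const key of Object.keys(state)) {
    const slice = state[key];
    if (slice && Array.isArray(slice.past) && Array.isArray(slice.future)) {
      transformed[key] = { ...slice, past: [], future: [] };
    } else {
      transformed[key] = slice;
    }
  }
  return transformed;
}
```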

I might send another PR for this idea, as a proof of concept.

@ndbroadbent (Author) commented Oct 14, 2018

Opened the PR to remove redux-undo history: #96

Even if you decide not to merge it, hopefully some people can find this and use removeReduxUndoHistoryFromState as their stateTransformer, or maybe it would be a good addition to the README / Wiki. Example usage is below.
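For reference, here's how wiring that in might look, assuming raven-for-redux's createRavenMiddleware and a removeReduxUndoHistoryFromState helper like the one sketched above (the DSN, reducer, and helper module are placeholders):

```js
import Raven from "raven-js";
import { createStore, applyMiddleware } from "redux";
import createRavenMiddleware from "raven-for-redux";

import reducer from "./reducer"; // placeholder app reducer
import { removeReduxUndoHistoryFromState } from "./stateTransformers"; // hypothetical module

Raven.config("https://<key>@sentry.io/<project>").install();

const store = createStore(
  reducer,
  applyMiddleware(
    createRavenMiddleware(Raven, {
      // Strip redux-undo history before the state is attached to the report.
      stateTransformer: removeReduxUndoHistoryFromState,
    })
  )
);
```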

@schrizsz commented
+1. When is this planned to be merged?
