
Security: Use crypto.randomBytes, not Math.random #832

Closed
wants to merge 1 commit

Conversation

@coolaj86 (Contributor) commented Sep 20, 2016

Depending on the JavaScript engine (Node.js now runs on both Chakra and V8), Math.random() can be anywhere from extremely insecure (V8's implementation has collisions after around 24,000 cycles) to cryptographically pseudo-random.

However, using crypto.randomBytes() directly ensures that the secure version is always used.

@JoshuaWise (Member)

Using the synchronous version of crypto.randomBytes() doesn't sit well with me. The event loop could be blocked for as much as a few milliseconds while it waits for sufficient entropy. I think getRandomMask() needs to be refactored into an asynchronous function to avoid this potential bottleneck.

@sequoiar

@JoshuaWise What about caching _randomMask, like:
var mask = this._randomMask || (this._randomMask = getRandomMask());

@JoshuaWise (Member) commented Sep 20, 2016

@sequoiar I think that might be a premature optimization. Caching random bytes might be something we do in the future, but we should avoid unnecessary complexity before we've measured the performance impact.

As for using the synchronous vs. asynchronous version of crypto.randomBytes: the synchronous version is almost guaranteed to cause a bottleneck. According to the Node.js docs, a few milliseconds is considered a "good case" for how long it might wait before generating the random bytes, and it could take much longer in "bad cases". A few milliseconds is a long time when you're dealing with thousands of connections.

The actual generation of the bytes (after it's done waiting for entropy) likely takes several orders of magnitude less time than a few milliseconds.

@coolaj86 (Contributor, Author) commented Sep 20, 2016

@JoshuaWise You may be misinterpreting the docs. The synchronous version is almost guaranteed never to cause a bottleneck, but in the rare cases where it does (i.e. you're generating gigabytes of random data in another process), it may take a few milliseconds for a normal amount of randomness to be re-seeded. On first system boot (or when there is no cached random data from a previous boot, which is not the case to optimize for), it may take slightly longer.

If you search around a little on the net you'll find people complaining about cases where they run out of system entropy, and it's generally because A) they're using Arch Linux, haven't started the entropy daemon yet, and are generating large SSH keys as part of setup, or B) they're generating gigabytes of random data for some particular purpose.

If a benchmark shows that generating 4 bytes of random data a few thousand times every few minutes is killing performance, I would not disbelieve it, but I wouldn't jump to assuming that a normal use case would cause unusual performance behavior.

@coolaj86 (Contributor, Author) commented Sep 20, 2016

Also, the very idea of generating enough random masks to deplete the entropy pool is counterintuitive. How would you send and receive all of this WebSocket data without generating entropy from the disk and network access involved?

@JoshuaWise (Member)

Ah, you're right, I did misread the docs. Thanks for pointing that out. I'll let the other maintainers chime in if they have any thoughts, but otherwise, LGTM.

@lpinca (Member) commented Sep 20, 2016

It would be nice to see a benchmark, but I think it makes sense.
I would avoid wrapping crypto.randomBytes() in another function though; the mask is only used in one place.

@ericmdantas

@lpinca

I would avoid wrapping crypto.randomBytes() in another function [...]

IMHO, it makes sense to leave it as it is, just in case the way the mask is returned changes again.

@lpinca (Member) commented Sep 20, 2016

The function is probably inlined, so yeah, there is no overhead, but it triggers my OCD 😄
Anyway it's up to the maintainers to decide.

@JoshuaWise (Member) commented Sep 21, 2016

I agree with @ericmdantas. The function is certainly inlined, and I like self-documenting code.

@JoshuaWise (Member)

@coolaj86
Before merging I'd like to see the benchmark run before and after this change; just copy-paste the results here. Surely there will be no noticeable difference, and I think security is more important than performance, but it will be nice to see confirmation of what we already suspect.

@coolaj86 (Contributor, Author) commented Sep 21, 2016

Looks like there's no noticeable difference - just a little noise.

node bench/speed.js

Math.random():

Generating 500 MB of test data ...
Running 10000 roundtrips of 64 B binary data:   4.2s    150.49 kB/s
Running 5000 roundtrips of 16 kB binary data:   6.8s    11.46 MB/s
Running 1000 roundtrips of 128 kB binary data:  12s 10.38 MB/s
Running 100 roundtrips of 1 MB binary data: 9.3s    10.71 MB/s

crypto.randomBytes():

Running 10000 roundtrips of 64 B binary data:   4.3s    146.34 kB/s
Running 5000 roundtrips of 16 kB binary data:   6.8s    11.46 MB/s
Running 1000 roundtrips of 128 kB binary data:  12s 10.42 MB/s
Running 100 roundtrips of 1 MB binary data: 9.2s    10.84 MB/s

node bench/sender.benchmark.js

Math.random():

  Sender frameAndSend, unmasked (200 kB)
  3044095.19 ops/sec, 251567 times executed, benchmark took 5.073 sec.

  Sender frameAndSend, masked (200 kB)
  5530.86 ops/sec, 557 times executed, benchmark took 5.138 sec.

crypto.randomBytes():

  Sender frameAndSend, unmasked (200 kB)
  2908586.85 ops/sec, 243897 times executed, benchmark took 5.064 sec.

  Sender frameAndSend, masked (200 kB)
  5008.09 ops/sec, 976 times executed, benchmark took 5.124 sec.

@JoshuaWise (Member) commented Sep 21, 2016

Yeah LGTM. I'll leave this open for another day to give other maintainers the opportunity to chime in. Otherwise I'll go ahead and merge tomorrow.

There's a 5% performance loss even on the sends that aren't masked, meaning we can expect at least 5% of random noise. So the 10% loss on the masked sends doesn't worry me, especially since masked sends are only client-to-server, which is certainly the least important direction in terms of performance.

@3rd-Eden (Member)

As for the caching: we're not allowed to cache masks, because as far as I'm concerned they should be unique each time. As for the sync generation: I would prefer that it be done asynchronously, to prevent blocking the event loop when a lot of messages are being sent.

@coolaj86 (Contributor, Author) commented Sep 21, 2016

(╯°□°)╯︵ ┻━┻

My plea has fallen upon deaf ears.

  1. The benchmarks show that it's (insignificantly) faster.
  2. We don't say we wouldn't use Node because of the occasional GC pause (which happens very, very regularly and is very, very slow).
  3. Network traffic and disk access generate entropy, so it's not even possible to run the entropy pool dry with this operation; there would have to be another process draining the pool faster than it's being replenished.
  4. Srsly?!?!?

@JoshuaWise (Member) commented Sep 21, 2016

@3rd-Eden:
@coolaj86 is right: the system would never run out of entropy and block the event loop in this use case (WebSockets). That said, there's no harm in making it async to cover our bases and make everyone happy.

@JoshuaWise (Member)

I think it would be nice for this to be merged before the next big release.

@lpinca (Member) commented Oct 25, 2016

@JoshuaWise I agree.

@lpinca (Member) commented Nov 25, 2016

Merged as 7253f06.
Thanks!

@lpinca lpinca closed this Nov 25, 2016
@sgress454 commented Feb 7, 2017

@coolaj86 @lpinca Would you consider backporting this for a 1.1.2 release? That would allow projects that still need to support Node < 4 to get this security patch.

I forked and patched the 1.1.1 code here: https://github.com/sgress454/ws/tree/backport-crypto-patch-to-1.0, but I'm not sure there's any way to make a PR out of that unless there's a 1.x branch to submit it against.

@JacksonTian (Contributor)

@lpinca Let's do a 1.x release.

@lpinca (Member) commented Feb 8, 2017

I've just created a v1.x branch; can you please submit the PR against that branch?
Thanks.

@sgress454

#994

Thanks!

@lpinca (Member) commented Feb 8, 2017

If we are going to release 1.1.2 I think it makes sense to also backport #810.
