
Add SHA1 compute time source. #1

Open · wants to merge 2 commits into base: master

Conversation


@padraic padraic commented Feb 24, 2013

@ircmaxell (Owner)

Looking at the implementation, I can't help but see the similarities with the already-added MicroTime source: https://github.com/ircmaxell/RandomLib/blob/master/lib/RandomLib/Source/MicroTime.php#L67

It looks like it's doing basically the same thing: looping and adding new time data on each iteration.

The problem I have with this particular implementation is that it's fairly complex (and hence difficult to really see what's going on), yet it doesn't add any significant entropy sources. So it's basically just throwing together a bunch of logic. Looking deeper into it, I see that the only actual entropy that enters the $entropy variable is timestamps. While that wouldn't be a deal-breaker on its own, this appears to just be artificially slowing down the loops for the sake of making the timestamps better. So a lot of CPU power is wasted just churning. If it fed the entropy back into itself, at least that would be something. But as it stands, it just burns CPU for seemingly no reason.

In the end, it looks like a LOT of undocumented code and complex algorithms for not much benefit. And there's already a source (above) based on microtime that uses a simpler, more standard gather-process-output algorithm and is pretty well documented.
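For context, RandomLib is PHP, but the gather-process-output pattern described above can be sketched in Python as a rough illustration. This is not the actual MicroTime source; the function name, round count, and use of SHA-1 here are invented for the sketch:

```python
import hashlib
import time

def microtime_entropy(num_bytes, rounds=32):
    """Illustrative gather-process-output loop (not RandomLib's real code).

    Gather: fold a high-resolution timestamp into a hash state each round.
    Process: the hash mixes the timestamps together.
    Output: stretch the digest to the requested length.
    """
    state = hashlib.sha1()
    for _ in range(rounds):
        # The only entropy entering the state is timing jitter in these reads.
        state.update(repr(time.perf_counter()).encode())
    out = b""
    digest = state.digest()
    while len(out) < num_bytes:
        digest = hashlib.sha1(digest).digest()
        out += digest
    return out[:num_bytes]
```

The point of the critique holds in the sketch as well: the hash only mixes and stretches the timestamps; it does not create entropy, so the output is only as unpredictable as the timing jitter itself.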

I'll leave this open for a little while in case anyone can see something that I missed, or give a good justification for it.

Additionally, if it were accepted, the strength would need to be reduced to VERYLOW, as there is no actual random entropy other than the timestamps (which is very minor)...

Thanks!

@padraic (Author) commented Feb 26, 2013

You're right that it burns CPU time and is microtime-based. Basically, it relies on there being an uncertain delta between each pair of microsecond measurements taken over a fixed period of time. That keeps the quality low, but it should yield a little entropy on each iteration. My own concern is its performance more than anything else: it's trading time (~20ms) for a large stack of deltas.
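The delta idea can be sketched like this (a Python illustration of the concept, not the proposed PHP code; the function name and the ~20ms default window are assumptions taken from the discussion):

```python
import time

def collect_deltas(duration_s=0.02):
    """Collect deltas between consecutive high-resolution timestamps
    observed over a fixed period (~20ms by default). The low-order
    bits of each delta are the (weak) entropy this scheme relies on."""
    deltas = []
    start = prev = time.perf_counter_ns()
    while time.perf_counter_ns() - start < duration_s * 1e9:
        now = time.perf_counter_ns()
        deltas.append(now - prev)
        prev = now
    return deltas
```

This makes the trade-off visible: the loop does nothing but read the clock for the whole window, burning CPU in exchange for a stack of deltas whose unpredictability depends entirely on scheduler and clock jitter.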

I actually missed what you were doing in the MicroTime source - it's a similar idea, but I think the difference comes down to whether more is actually better.
