Matt Mackall wrote:
> On Tue, Dec 04, 2007 at 08:54:52AM -0800, Ray Lee wrote:
> > (Why hasn't anyone been cc:ing Matt on this?)
> >
> > On Dec 4, 2007 8:18 AM, Adrian Bunk <[email protected]> wrote:
> > > On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:
> > > > While debugging Exim4's GnuTLS interface, I recently found out that
> > > > reading from /dev/urandom depletes entropy as much as reading from
> > > > /dev/random would. This somewhat surprised me, since I had always
> > > > believed that /dev/urandom has lower-quality entropy than /dev/random,
> > > > but lots of it.
> > > man 4 random
> > > > This also means that I can "sabotage" applications reading from
> > > > /dev/random just by continuously reading from /dev/urandom, even
> > > > without meaning to do any harm.
> > > >
> > > > Before I file a bug on bugzilla,
> > > > ...
> > > The bug would be closed as invalid.
> > >
> > > No matter what you consider better, changing a 12-year-old and widely
> > > used userspace interface like /dev/urandom is simply not an option.
> > You seem to be confused. He's not talking about changing any userspace
> > interface, merely how the /dev/urandom data is generated.
> >
> > For Matt's benefit, part of the original posting:
> > > Before I file a bug on bugzilla, can I ask why /dev/urandom wasn't
> > > implemented as a PRNG which is periodically (say, every 1024 bytes or
> > > even more) seeded from /dev/random? That way, /dev/random has a much
> > > higher chance of holding enough entropy for applications that really
> > > need "good" entropy.
> > A PRNG is clearly unacceptable. But roughly restated, why not have
> > /dev/urandom supply merely cryptographically strong random numbers,
> > rather than a mix ranging from the 'true' randomness of /dev/random
> > down to the cryptographically strong stream it'll provide once the
> > entropy pool is tapped out? In principle, this'd leave more entropy
> > available for applications that really need it, especially on platforms
> > that don't generate a lot of entropy in the first place (servers).
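
To make the shape of that proposal concrete, here is a minimal userspace
sketch of a generator that is reseeded from /dev/random only every fixed
number of output bytes. Everything here is illustrative: the interval, the
names, and especially splitmix64, which merely stands in for a real
cryptographic output function.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define RESEED_INTERVAL 1024                    /* bytes emitted between reseeds (illustrative) */

static uint64_t state;                          /* refreshed from /dev/random */

/* Placeholder mixer -- NOT cryptographic; a real design would use a
 * proper stream cipher or hash here. */
static uint64_t splitmix64(void)
{
        uint64_t z = (state += 0x9e3779b97f4a7c15ULL);
        z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
        z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
        return z ^ (z >> 31);
}

/* Pull fresh entropy; may block if the pool is low, which is the point:
 * the blocking cost is paid once per RESEED_INTERVAL, not per byte. */
static void reseed(void)
{
        FILE *f = fopen("/dev/random", "rb");

        if (!f || fread(&state, sizeof(state), 1, f) != 1) {
                perror("reseed");
                exit(1);
        }
        fclose(f);
}

int main(void)
{
        size_t emitted = 0;

        reseed();
        for (int i = 0; i < 4096; i++) {
                uint64_t r = splitmix64();

                fwrite(&r, sizeof(r), 1, stdout);
                emitted += sizeof(r);
                if (emitted >= RESEED_INTERVAL) {
                        reseed();
                        emitted = 0;
                }
        }
        return 0;
}

The point of the structure is that the entropy cost is bounded by the
reseed interval rather than by the number of bytes read.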
> The original /dev/urandom behavior was to use all the entropy that was
> available, and then degrade into a pure PRNG when it was gone. The
> intent is for /dev/urandom to be precisely as strong as /dev/random
> when entropy is readily available.
>
> The current behavior is to deplete the pool when there is a large
> amount of entropy, but to always leave enough entropy for /dev/random
> to be read. This means we never completely starve the /dev/random
> side. The default amount is twice the read wakeup threshold (128
> bits), settable in /proc/sys/kernel/random/.
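
For reference, the knobs Matt mentions can be inspected from userspace; a
quick sketch (paths as on a 2007-era kernel, where the read wakeup
threshold still lives under /proc/sys/kernel/random/):

#include <stdio.h>

static long read_knob(const char *path)
{
        long val = -1;
        FILE *f = fopen(path, "r");

        if (f) {
                if (fscanf(f, "%ld", &val) != 1)
                        val = -1;
                fclose(f);
        }
        return val;
}

int main(void)
{
        /* the entropy estimate and the wakeup threshold are exported in bits */
        printf("entropy_avail:         %ld\n",
               read_knob("/proc/sys/kernel/random/entropy_avail"));
        printf("read_wakeup_threshold: %ld\n",
               read_knob("/proc/sys/kernel/random/read_wakeup_threshold"));
        printf("poolsize:              %ld\n",
               read_knob("/proc/sys/kernel/random/poolsize"));
        return 0;
}

Watching entropy_avail while something reads /dev/urandom shows the
depletion Marc describes.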
In another post I suggested having a minimum bound (use no entropy) and
a maximum bound (grab some entropy), with the idea that between these
values some limited entropy could be used. I have to wonder whether the
amount of entropy available is at least as unpredictable as the entropy
itself.
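
A toy sketch of that proportional idea, just to pin it down; the bounds,
the helper name, and the numbers are hypothetical and correspond to
nothing in drivers/char/random.c:

#include <stdio.h>

/* Below 'low' the nonblocking pool takes no entropy at all, above 'high'
 * it may take what it asks for, and in between the grant scales linearly
 * with the fill level. */
static unsigned int entropy_allowed(unsigned int avail, unsigned int low,
                                    unsigned int high, unsigned int want)
{
        if (avail <= low)
                return 0;               /* reserve everything for /dev/random readers */
        if (avail >= high)
                return want;            /* plenty of entropy: no throttling */

        unsigned int budget = (avail - low) * want / (high - low);

        return budget < want ? budget : want;
}

int main(void)
{
        /* ask for 64 bits at various pool levels, with bounds of 128..1024 bits */
        for (unsigned int avail = 0; avail <= 1024; avail += 256)
                printf("avail %4u -> grant %u bits\n",
                       avail, entropy_allowed(avail, 128, 1024, 64));
        return 0;
}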
> But there's really not much point in changing this threshold. If
> you're reading the /dev/random side at the same rate or more often
> than entropy is appearing, you'll run out regardless of how big your
> buffer is.
Right, my thought is to throttle user + urandom use such that the total
stays below the available entropy. I had forgotten that there was already
a lower bound, although it's more of an on-off toggle than something
proportional.
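
Matt's rate argument is easy to see with a trivial loop; the fill and
drain rates below are invented purely for illustration:

#include <stdio.h>

int main(void)
{
        const int fill_per_sec  = 50;   /* bits of entropy gathered per second (assumed) */
        const int drain_per_sec = 64;   /* bits consumed per second (assumed) */
        const int pool_sizes[]  = { 512, 4096, 65536 };

        for (int i = 0; i < 3; i++) {
                int pool = pool_sizes[i];
                int seconds = 0;

                /* if consumers take bits at least as fast as they arrive,
                 * the pool empties no matter how large it is */
                while (pool > 0) {
                        pool += fill_per_sec - drain_per_sec;
                        seconds++;
                }
                printf("pool of %6d bits empty after %d seconds\n",
                       pool_sizes[i], seconds);
        }
        return 0;
}

A bigger buffer only changes how long it takes, not whether it happens.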
Clearly if you care about this a *lot* you will use a hardware RNG.
Thanks for the reminder on read_wakeup.
--
Bill Davidsen <[email protected]>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot