On Sat, May 06, 2006 at 11:48:08AM -0500, Matt Mackall wrote:
> Case 3:
> Hash function broken, entropy accounting is over-optimistic:
> /dev/urandom and /dev/random are both equally insecure because both
> are revealing more internal entropy than they're collecting. So you
> should just use /dev/urandom because at least it doesn't block.
>
> Putting aside all the practical issues of what exactly is entropy and
> what decent entropy sources are, we should be able to agree on this
> much. And this basically says that if you do your entropy accounting
> badly, you throw the baby out with the bathwater.
Agreed, but I'd add one point of nuance: this assumes that the
attacker (call him Boris for the sake of argument) can actually gain
access to enough /dev/random or /dev/urandom outputs, and be
knowledgeable about all other calls to /dev/random and exactly when
they happen (since entropy extractions cause the TSC to be mixed into
the pool), so that Boris can actually determine the contents of the
pool. Note that simply "breaking" a cryptographic hash, in the sense
of finding two input values that collide to the same output value,
does not mean that the hash has been sufficiently analyzed that it
would be possible to accomplish this feat. And given that it took
80,000 CPU hours to find this collision, and that the complexity of
the attack was 2**51, it seems highly likely that with a pool size of
4096 bits, it would take a huge number of /dev/random extractions,
complete with the exact TSC timestamp of each extraction, before an
attacker would have enough information to break the pool.
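Just to put rough numbers on that, here is a quick back-of-envelope
sketch. It assumes the 2**51 figure counts hash evaluations, that the
80,000 CPU-hours were spent entirely on that work, and that a single
output is 160 bits wide (as with SHA-1); those assumptions are mine,
not necessarily how the result was reported.

import math

# Rough back-of-envelope only; the assumptions in the text above are mine.
cpu_seconds = 80000 * 3600
rate_log2 = 51 - math.log2(cpu_seconds)   # log2(evaluations per CPU-second)
print("implied rate: about 2**%.1f hash evaluations/sec per CPU" % rate_log2)

def cpu_years_log2(work_log2):
    return work_log2 - rate_log2 - math.log2(3600 * 24 * 365)

for label, work in [("2**51  collision attack", 51),
                    ("2**80  generic birthday bound", 80),
                    ("2**160 preimage of a single output", 160)]:
    print("%-36s ~2**%.0f CPU-years" % (label, cpu_years_log2(work)))

And even the last line is only a preimage of one output; working
backwards from a handful of outputs to a 4096-bit pool is a different
and much nastier problem.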
Does this mean we should __depend__ on this? No, we should always do
the best job that we can. But it's also fair to say that even if the
hash function is "broken", the results are not automatically going
to be catastrophic. If the attacker can't get their hands on enough
of the output stream from /dev/random, then the break is not likely
to do much damage. For an attacker who only has network access, this
could be
quite difficult.
> But network traffic should be _assumed_ to be observable to some
> degree. Everything else in network security makes the assumption that
> traffic is completely snoopable. By contrast, we assume people can't
> see us type our passwords[1]. So while our entropy estimator assumes
> observability == 0, for the network case, 0 < observability <= 1. And
> if it's greater than what our entropy estimator assumes, our entropy
> estimates are now too optimistic and /dev/random security degrades to
> that of /dev/urandom.
The timing of network arrivals is observable to *some* degree. Of
course, so is the timing from block I/O interrupts. The question is
whether or not it is predictable enough at the finest resolution. For
block device interrupts, there have been papers showing that the air
currents inside the spinning media are chaotic enough to measurably
influence the timing at which interrupts are returned --- at least
for the systems of a decade ago. To be honest, it would probably be
a useful and worthwhile master's thesis for a grad student to revisit
the
problem using modern disk drive technologies, controllers, and
interconnects to see if this is still a valid assumption.
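If someone wanted to poke at this short of a full thesis, a crude
first cut is easy to sketch. This is only an illustration, not the
kernel's estimator: the device path, sample count, and read size
below are made up, it needs read access to the raw device, and a
naive frequency count badly overstates what an adversary who can
model the drive would actually be unable to predict.

import math, os, random, time
from collections import Counter

DEV, N, BLK = "/dev/sda", 256, 4096       # illustrative values only

fd = os.open(DEV, os.O_RDONLY)
deltas = []
for _ in range(N):
    # seek somewhere random in the first GB so most reads miss the cache
    os.lseek(fd, random.randrange(0, 1 << 30, BLK), os.SEEK_SET)
    t0 = time.monotonic_ns()
    os.read(fd, BLK)
    deltas.append(time.monotonic_ns() - t0)
os.close(fd)

# naive Shannon estimate over the low byte of each completion time
low = Counter(d & 0xff for d in deltas)
h = -sum(c / N * math.log2(c / N) for c in low.values())
print("apparent entropy in the low 8 bits: %.2f bits per read" % h)

Measuring variability as seen from the host is the easy half; the
interesting half is how much of it an attacker with a good model of
the drive could predict, which is what those papers were actually
arguing about.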
For network traffic, again, it depends on your threat model. For an
attacker stationed in Fort Meade, Maryland, with over a dozen
routers and ethernet switches between them and the target system,
there are enough packet queues and buffers that any attempt at
observing packet interarrival times is almost certainly hopeless. As
I've said, if an attacker has direct physical access to the ethernet
segment between your host and the switch, then while the attacker is
busy installing a keyboard sniffer and possibly doing a black bag job
on your hard disk to install monitoring software directly in the OS,
yes, the FBI agent could probably install something on the ethernet
between your host and the switch that measures packet arrival times
very precisely. Is that something the system administrator
should care about? Maybe, maybe not.
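One way to make the threat-model point concrete is to ask how many
bits of an arrival timestamp survive the attacker's measurement
error. A toy calculation, nothing like the kernel's real accounting,
with jitter numbers that are pure guesses for illustration:

import math

def residual_bits(attacker_jitter_ns, tick_ns=1):
    # If the attacker can pin an arrival down to within attacker_jitter_ns,
    # roughly log2(jitter/tick) bits of a tick_ns-resolution timestamp
    # remain unknown to him.
    return max(0.0, math.log2(attacker_jitter_ns / tick_ns))

for who, jitter_ns in [("tap on the local ethernet segment", 500),
                       ("a dozen routers and switches away", 5000000)]:
    print("%-36s ~%5.1f unobserved bits per packet"
          % (who, residual_bits(jitter_ns)))

The two ends of that range are worlds apart, which is rather the
point.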
That's why I think it should be configurable. If you don't have a
real hardware random number generator, maybe blocking would be
preferable to not having good entropy. But in other circumstances,
what people need
is the best possible randomness they can get, and their security is
not enhanced by simply taking it away from them altogether. That
makes about as much sense as GNOME making its applications "easier to
use" by removing functionality (to quote Linus).
> Yes, this is all strictly theoretical. But the usefulness of
> /dev/random is exactly as theoretical. So if you use /dev/random for
> its theoretical advantages (and why else would you?), this defeats
> that.
This becomes a philosophical argument. Yes, we should strive for as
much theoretical perfection as possible. But at the same time, we
need to live in the real world, and adding network entropy which can
defeat the bored high school student in Russia using some black hat
toolkit they downloaded off the internet is useful --- even if it can't
defeat the NSA/FBI agent who can perform a black bag job and place a
monitoring device on your internal ethernet segment.
- Ted