From: Chuck Ebbert <[email protected]>
Date: Thu, 04 Oct 2007 17:02:17 -0400
> How do you simulate reading 100TB of data spread across 3000 disks,
> selecting 10% of it using some criterion, then sorting and
> summarizing the result?
You repeatedly read zeros from a smaller disk into the same buffer in
memory, and sort that as if it were real data.
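
A minimal userspace sketch of that loop, purely as an illustration and
not from the original post: /dev/zero stands in for the smaller disk,
and the buffer and total sizes are arbitrary placeholders a tester
would tune.

/*
 * Repeatedly read from a small scratch device into one fixed buffer
 * and sort that buffer as if it held real records, so the system sees
 * the same read/sort pattern without needing 100TB of storage.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUF_SIZE   (64 * 1024 * 1024)          /* 64MB working buffer (assumed) */
#define TOTAL_SIM  (1024ULL * 1024 * 1024)     /* pretend to read this many bytes */

static int cmp_u32(const void *a, const void *b)
{
	unsigned int x = *(const unsigned int *)a;
	unsigned int y = *(const unsigned int *)b;
	return (x > y) - (x < y);
}

int main(void)
{
	unsigned char *buf = malloc(BUF_SIZE);
	unsigned long long done = 0;
	int fd = open("/dev/zero", O_RDONLY);   /* stand-in for the smaller disk */

	if (!buf || fd < 0) {
		perror("setup");
		return 1;
	}

	while (done < TOTAL_SIM) {
		ssize_t n = read(fd, buf, BUF_SIZE);
		if (n <= 0)
			break;
		/* sort the block as if it held real 32-bit keys */
		qsort(buf, n / sizeof(unsigned int), sizeof(unsigned int), cmp_u32);
		done += n;
	}

	printf("simulated %llu bytes\n", done);
	close(fd);
	free(buf);
	return 0;
}

Scaling TOTAL_SIM up (and running one instance per simulated spindle)
would approximate the select-sort-summarize workload without the
hardware.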
You're not thinking outside of the box, and you need to do that to
write good test cases and fix kernel bugs effectively.