On Thu, Oct 04, 2007 at 03:07:18PM -0700, David Miller wrote:
> From: Chuck Ebbert <[email protected]> Date: Thu, 04 Oct 2007 17:47:48
> -0400
>
> > On 10/04/2007 05:11 PM, David Miller wrote:
> > > From: Chuck Ebbert <[email protected]>
> > > Date: Thu, 04 Oct 2007 17:02:17 -0400
> > >
> > >> How do you simulate reading 100TB of data spread across 3000 disks,
> > >> selecting 10% of it using some criterion, then sorting and summarizing
> > >> the result?
> > >
> > > You repeatedly read zeros from a smaller disk into the same amount of
> > > memory, and sort that as if it were real data instead.
> >
> > You've just replaced 3000 concurrent streams of data with a single stream.
> > That won't test the memory allocator's ability to allocate memory to many
> > concurrent users very well.
>
> You've kindly removed my "thinking outside of the box" comment.
>
> The point was not that my specific suggestion would be perfect, but that
> if you used your creativity and thought in similar directions you might
> find a way to do it.
>
> People are too narrow-minded when it comes to these things, and that's
> the problem I want to address.
And it's a good point, too, because what is a hard problem to one person is
often a no-brainer to someone else.
Creating lots of "fake" disks is trivial to do, IMO. Use loopback on sparse
files containing sparse filesxi, use ramdisks containing sparse files or write a
sparse dm target for sparse block device mapping, etc. I'm sure there's more than the
few I just threw out...
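
For concreteness, here's a minimal sketch of the loopback-on-sparse-files
approach in Python. It assumes root privileges and util-linux's losetup;
the backing directory, disk count, and sizes are just illustrative:

#!/usr/bin/env python3
# Fabricate N "fake" disks by attaching loop devices to sparse backing
# files. Each file claims a huge apparent size but allocates almost no
# real blocks until something writes to it.
import os
import subprocess

NUM_DISKS = 16              # illustrative; scale up as loop device limits allow
APPARENT_SIZE = 32 << 40    # 32 TiB apparent size per fake disk

backing_dir = "/var/tmp/fake-disks"     # hypothetical path, pick anything
os.makedirs(backing_dir, exist_ok=True)

loop_devices = []
for i in range(NUM_DISKS):
    path = os.path.join(backing_dir, "disk%04d.img" % i)
    with open(path, "wb") as f:
        f.truncate(APPARENT_SIZE)       # sparse: sets length, allocates nothing
    # 'losetup -f --show <file>' attaches the first free loop device
    # to the file and prints the device name it chose.
    dev = subprocess.check_output(["losetup", "-f", "--show", path],
                                  text=True).strip()
    loop_devices.append(dev)

print("fake disks:", loop_devices)

Reads from the unwritten regions of those devices come back as zeros, so
you also get the "read zeros and treat it as real data" behaviour above
for free, while still exercising thousands of distinct block devices.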
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group