Reading a bunch of files as fast as possible

After searching for kinda-keywords in a locked-in-memory index, I get
a list of 50-100 files, out of several hundred thousand, that I want
to read as fast as possible.  I can ensure that the directory
structure is hot in the dcache by re-reading it from time to time, but
there isn't enough memory to lock the documents there as well.  So I'd
like to read those 50-100 files, for which I already have the sizes
(they are stored in the index) and preallocated memory space, as fast
as possible (less than 0.1s would be great) from a cold-ish cache.
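
For reference, the naive version looks roughly like the sketch below
(untested, error handling omitted); the requests reach the disk one at
a time, in list order, so the elevator has little to work with:

#include <fcntl.h>
#include <unistd.h>

/*
 * Naive version (sketch): read each hit into its preallocated buffer,
 * one after the other.  paths/sizes/bufs come from the index lookup.
 */
static void read_hits_sequentially(const char **paths, const size_t *sizes,
                                   char **bufs, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		int fd = open(paths[i], O_RDONLY);
		if (fd < 0)
			continue;
		read(fd, bufs[i], sizes[i]);	/* blocks on a cold cache */
		close(fd);
	}
}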

The best way, I think, is to hand all the requests to the system at
once and let it sort them optimally at the elevator level.  But how
can I do that?  Can aio do it, or something else?
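
With POSIX AIO I imagine something like the sketch below, where all
the reads are submitted in a single lio_listio() call so the system is
free to reorder them; but I don't know whether glibc's implementation
(which, as far as I know, is thread-based in userspace) actually gets
the requests down to the block layer as one batch:

#include <aio.h>
#include <fcntl.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Sketch only: hand every read to the system in one lio_listio() batch
 * and wait for all of them to complete (link with -lrt).
 * paths/sizes/bufs come from the index lookup; error handling is
 * mostly omitted.
 */
static int read_hits_batched(const char **paths, const size_t *sizes,
                             char **bufs, int n)
{
	struct aiocb *cbs = calloc(n, sizeof(*cbs));
	struct aiocb **list = calloc(n, sizeof(*list));
	int i, ret;

	for (i = 0; i < n; i++) {
		cbs[i].aio_fildes = open(paths[i], O_RDONLY);
		cbs[i].aio_buf = bufs[i];
		cbs[i].aio_nbytes = sizes[i];
		cbs[i].aio_offset = 0;
		cbs[i].aio_lio_opcode = LIO_READ;
		cbs[i].aio_sigevent.sigev_notify = SIGEV_NONE;
		list[i] = &cbs[i];
	}

	/* LIO_WAIT: return only once every request has completed. */
	ret = lio_listio(LIO_WAIT, list, n, NULL);

	for (i = 0; i < n; i++)
		close(cbs[i].aio_fildes);
	free(cbs);
	free(list);
	return ret;
}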

  OG.

