Re: sendfile() with 100 simultaneous 100MB files

On Fri, Jan 20, 2006 at 04:53:44PM -0500, Jon Smirl wrote:

> Any other ideas why sendfile() would get into a seek storm?

I can't really comment on the quality of the Linux sendfile() implementation,
since I've never looked at the code.  However, here are a couple of general
observations.

The seek storm happens because Linux is trying to be "fair," where fair
means no one process gets to starve another for I/O bandwidth.

The fastest way to transfer 100 100MB files would be to send them one at a
time.  The 99th person in line would, of course, perceive this as a very poor
implementation.  The current sendfile implementation seems to live at the
other end of the extreme.

It is possible to come up with a compromise behavior by limiting the
number of concurrent sendfiles running and the maximum size each is
allowed to send in one squirt.
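
Just to illustrate the idea, a userspace server could approximate the
"bounded squirt" itself with something like the sketch below.  The chunk
size and the helper name are arbitrary placeholders, not anything in the
kernel; the only real API used is the Linux sendfile(2) call.

/* Illustration only: cap how much one sendfile() call may push per turn,
 * so a single 100MB file cannot monopolize the disk for its whole length. */
#include <sys/sendfile.h>
#include <sys/types.h>

#define CHUNK_BYTES (1 << 20)   /* arbitrary 1MB "squirt" per turn */

/* Send up to CHUNK_BYTES of in_fd (starting at *offset) to out_fd.
 * Returns bytes sent, 0 when the file is finished, or -1 on error.
 * sendfile() advances *offset by the number of bytes it transferred. */
static ssize_t send_one_squirt(int out_fd, int in_fd, off_t *offset,
                               off_t file_size)
{
        size_t remaining = (size_t)(file_size - *offset);
        size_t count = remaining < CHUNK_BYTES ? remaining : CHUNK_BYTES;

        if (count == 0)
                return 0;       /* this transfer is complete */

        return sendfile(out_fd, in_fd, offset, count);
}

A server loop could then round-robin over a bounded set of active
transfers, calling this once per file per pass, so no single stream hogs
the disk for long while the number of outstanding streams stays capped.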

Thanks,

Jim

-- 
[email protected]
SDF Public Access UNIX System - http://sdf.lonestar.org
