Re: [PATCH 2.6.13-rc6 1/2] New Syscall: get rlimits of any process (update)

On Fri, 2005-08-19 at 00:13 +0100, Alan Cox wrote:
> On Thu, 2005-08-18 at 14:17 -0400, Lee Revell wrote:
> > Maybe the distros need to just increase the default FD limit past
> > 1024.  I hit this constantly with gtk-gnutella: if I try to download
> > a file that's available on more than 1024 hosts, it opens sockets
> > until it hits that limit and then bombs out.
> 
> Sounds like a remarkably badly designed application. The author should
> perhaps look at the papers on TCP capture and efficiency, unless they
> have a truly remarkably huge network pipe.
> 

OK, say your DSL line can download at 350KB/sec.

A search for a 200MB file tells you it's available on 2000 hosts, all
of them on dialup.  You break the file into 100 chunks and request a
chunk each from what should be the fastest 100 hosts.  You wait a bit
for the speeds to stabilize and find you're only getting 150KB/sec
aggregate, so you shrink the chunks and connect to 100 more hosts.
Repeat until the pipe is full or you run out of FDs; a rough sketch of
that loop is below.
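
In rough C, it's something like this.  The helper names
(measure_aggregate_bw() and open_chunk_connections()) are made up for
illustration and stubbed out so the sketch compiles; they are not
gtk-gnutella's actual code:

#include <stdio.h>
#include <unistd.h>

#define TARGET_BW 350000L       /* downstream capacity, bytes/sec */
#define BATCH     100           /* hosts to add per round */

static long bw_now;             /* stub state for the example */

/* stub: would sample the real aggregate download rate */
static long measure_aggregate_bw(void) { return bw_now; }

/* stub: would shrink the outstanding chunks and connect to n more
 * hosts; returns how many connects succeeded before EMFILE */
static int open_chunk_connections(int n)
{
        bw_now += 75000;        /* pretend each batch adds ~75KB/sec */
        return n;
}

int main(void)
{
        while (measure_aggregate_bw() < TARGET_BW) {
                if (open_chunk_connections(BATCH) < BATCH)
                        break;  /* out of FDs (or hosts): stop here */
                sleep(1);       /* let the new speeds stabilize */
        }
        printf("aggregate %ld bytes/sec, pipe full or out of FDs\n",
               measure_aggregate_bw());
        return 0;
}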

What's inefficient about this?  It's remarkably fast even for
high-demand files, it doesn't suck up all the available upstream
bandwidth the way BitTorrent does, and it's leech-friendly ;-)
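
And since this thread started with rlimits: for reference, a minimal,
self-contained sketch of how a process can query and raise its own FD
limit today with plain POSIX getrlimit()/setrlimit().  This is the
existing self-only interface, not the new syscall from the patch,
which is about reading *another* process's limits:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("getrlimit");
                return 1;
        }
        printf("fd limit: soft=%lu hard=%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

        /* an unprivileged process may raise its soft limit up to the
         * hard limit; raising the hard limit needs CAP_SYS_RESOURCE */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("setrlimit");
                return 1;
        }
        return 0;
}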

Lee
