Re: NFS and kernel cache

On Thu, 2006-12-21 at 08:14 -0600, Chris Adams wrote:
> Once upon a time, Les <hlhowell@xxxxxxxxxxx> said:
> > Ethernet only works well when the network is utilized about 50%.
> 
> I'm familiar with how Ethernet works, and this is not Ethernet that is a
> problem.  Properly configured switched Ethernet can work without errors
> at much more than 50% (I do that every day).  Gigabit Ethernet should
> run much faster than 200Mbps.  When I ran a similar test with a Fast
> Ethernet interface, I saw similar "stuttering" behavior, but it still
> averaged around 70-80Mbps.
> 
> There is no router in my problem setup; this is two systems connected to
> a switch (not a hub).  Neither is running anything else (one is booted
> in rescue mode).  Both systems show full duplex 1000Mbps link with flow
> control enabled (both have tg3 chips).  I was going to try jumbo frames
> to see what difference that made, but one system doesn't support jumbo
> frames.
> 
> The fact that Linux stops sending on the network sometimes and stops
> reading the hard drive other times points directly to how the kernel is
> caching writes to NFS (but I can't tell if it is the filesystem layer or
> the network stack).

If you run Ethernet above about 50% utilization you get lots of collisions.  That is due to the nature of Ethernet's shared-medium access method (CSMA/CD).  You can use synchronous transport or polling techniques to avoid it, but those are not part of the Ethernet specification.
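
Just to illustrate the collision behavior (this is a toy slotted-contention model for illustration only, not the real CSMA/CD algorithm, and the station count and probabilities are made up), something along these lines shows collisions piling up on a shared segment as the offered load climbs:

#!/usr/bin/env python
# Toy slotted-contention model of a shared Ethernet-style segment.
# Each of N stations attempts to transmit with probability p in every
# slot; a slot with exactly one attempt succeeds, two or more collide.
# This is a simplification for illustration, not real CSMA/CD.
import random

def simulate(stations, attempt_prob, slots=100000):
    successes = collisions = 0
    for _ in range(slots):
        attempts = sum(1 for _ in range(stations)
                       if random.random() < attempt_prob)
        if attempts == 1:
            successes += 1
        elif attempts > 1:
            collisions += 1
    offered_load = stations * attempt_prob      # mean attempts per slot
    return offered_load, successes / float(slots), collisions / float(slots)

for p in (0.01, 0.05, 0.10, 0.20):
    offered, goodput, collided = simulate(stations=10, attempt_prob=p)
    print("offered %.2f -> useful slots %.2f, collision slots %.2f"
          % (offered, goodput, collided))
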
As to the caching and disk I/O processing, that depends on many things.  If you have SATA connected through memory-mapped I/O over the "memory bus", you may see about 50% of your processor's bus speed.  For example, if the bus is running at 400M, you will get about 200 Mbytes/s of usable throughput, and that is the MAXIMUM, when the path and data are fully cached.  If, on the other hand, you get a cache miss, which will happen fairly often with large files, that speed drops while the data to/from the disk moves from the drive cache into the system cache and ultimately into the processor cache.
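
If you want to watch that cache interplay while the copy is running, one rough way (assuming a Linux /proc filesystem; the one-second interval and the field list are just a sketch) is to sample the kernel's dirty and writeback counters and see whether they swing in bursts:

#!/usr/bin/env python
# Sample the kernel's dirty/writeback counters once a second while a
# big copy runs.  Large swings in Dirty/Writeback suggest bursty
# page-cache writeback rather than a steady stream to disk or wire.
# (NFS_Unstable may not exist on every kernel; it is simply skipped.)
import time

FIELDS = ("Dirty:", "Writeback:", "NFS_Unstable:")

def sample():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            for field in FIELDS:
                if line.startswith(field):
                    values[field.rstrip(":")] = line.split()[1] + " kB"
    return values

while True:
    print(sample())
    time.sleep(1)
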
All of the above assumes processor-controlled I/O, which I think must be the case for the data to get from the disk to the Ethernet port.  The transfer is then further constrained by processor execution time, by the cache interplay and its delays, and by access to the Ethernet port, generally over a PCI interface that is good for about 80 Mbytes/s (roughly 640 Mbits/s) at most.  That is further reduced by the Ethernet protocol itself and by the buffering and other processing designed into the Ethernet controller and the I/O drivers.  All of this is handled either by the Ethernet controller (a smart controller) or by the host system (a dumb controller).  There is no magic path from the disk to the Ethernet port.  YMMV.
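
Putting rough numbers on the whole chain (taking the figures above at face value; a plain 32-bit/33 MHz PCI bus is only good for about 133 Mbytes/s even in theory, so 80 Mbytes/s is a ballpark, not a measurement):

#!/usr/bin/env python
# Back-of-the-envelope bandwidth chain from disk to wire, using the
# rough figures from the discussion above.  Every number here is an
# estimate, not a measurement.

MB = 10**6   # decimal megabytes, as wire rates are usually quoted

stages = [
    ("memory bus path (50% of the 400M figure)", 200 * MB),  # bytes/s
    ("PCI interface (practical ballpark)",        80 * MB),  # ~640 Mbit/s
    ("gigabit Ethernet wire rate",               125 * MB),  # 1000 Mbit/s
]

for name, rate in stages:
    print("%-42s %6.0f Mbytes/s  (%5.0f Mbits/s)"
          % (name, rate / float(MB), rate * 8 / float(MB)))

bottleneck = min(rate for _, rate in stages)
print("best case through the whole chain: about %.0f Mbits/s"
      % (bottleneck * 8 / float(MB)))
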

If you want to dig below this level, you have access to the source code and can examine and/or rewrite it yourself.  However, don't overlook the overhead in the system.  It is considerable, and it is not possible to simply program around it (although carefully written assembly routines, coupled with some real-time programming tricks, might minimize it).

Regards,
Les H

