Re: Nehalem network performance

On Wed, 2010-01-27 at 10:49 -0500, Kelvin Ku wrote:
> > 
> > Please post the output of:
> > $ cat /proc/interrupts | grep eth
> 
> We rename our interfaces to lan:
> 
> $ grep lan /proc/interrupts 
>  61:          1          0          0          0   PCI-MSI-edge      lan0
>  62:    7194004          0          0          0   PCI-MSI-edge      lan0-TxRx-0
>  63:          0          1          0          0   PCI-MSI-edge      lan1
>  64:          0          0   49842410          0   PCI-MSI-edge      lan1-TxRx-0
> 
> Note that irqbalance is disabled. I found that it wasn't balancing IRQs like on
> our older machines. I note that the irqbalance docs say that NIC interrupts
> should not be balanced, which is what we're seeing whether irqbalance is running or not.

Interrupts look OK. (You want the interrupts for each MSI queue to land
on one CPU core instead of being bounced around between cores.)
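If you ever need to pin them by hand with irqbalance off, something along
these lines works; the IRQ number (64) and CPU (2) below are just taken
from your listing, adjust to taste:

  # pin lan1-TxRx-0 (IRQ 64 in your listing) to CPU 2 -- 0x4 is the CPU-2 bitmask
  echo 4 > /proc/irq/64/smp_affinity
  # verify that the counter keeps incrementing on that core only
  grep lan1-TxRx-0 /proc/interrupts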


> lan1 (multicast interface) is below. Note that rx_missed_errors is non-zero. I
> previously encountered this with the e1000e NIC after disabling cpuspeed, 

I usually disable cpuspeed, but my boxes have multiple 1 and 10GbE
NICs...


> which
> was throttling the CPUs to 1.6 GHz (from a maximum of 2.4 GHz). I attempted to
> remedy this by setting InterruptThrottleRate=0,0 in the e1000e driver, after
> which we had one full day of testing with zero rx_missed_errors, but the
> application still reported packet loss.

rx_missed_errors usually gets triggered when the kernel is too slow to
service incoming hardware interrupts and the NIC's RX ring overflows.
There's a trade-off here: increase the interrupt rate and you'll increase
kernel CPU usage in exchange for lower latency; decrease the interrupt
rate and you'll reduce CPU usage at the expense of a higher chance of
overflowing the RX ring.
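If you want to watch whether the missed counter keeps climbing while you
tune (interface name and grep pattern below are only illustrative):

  # watch the missed/dropped counters while traffic is flowing
  watch -d "/sbin/ethtool -S lan1 | grep -iE 'miss|drop'"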
I'd suggest you try setting InterruptThrottleRate to 1000 while
increasing the RX ring size to 4096:
  /sbin/ethtool -G DEVICE rx 4096
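Roughly, check what the hardware maximum actually is first, and set the
throttle rate through the module parameter on reload (interface name and
the two-port value list below are assumptions on my side):

  # show current and maximum ring sizes
  /sbin/ethtool -g lan1
  # reload e1000e with a fixed 1000 ints/sec throttle on both ports
  modprobe -r e1000e
  modprobe e1000e InterruptThrottleRate=1000,1000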

You could try enabling multi-queue by adding InterruptType=2,
RSS=NUM_OF_QUEUES and MQ=1 to your modprobe.d configuration.
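Something like the sketch below; do check "modinfo e1000e | grep parm"
first, since the exact option names depend on the driver build, and the
values here are only an example:

  # /etc/modprobe.d/e1000e.conf (illustrative; verify option names with modinfo)
  options e1000e InterruptType=2 RSS=4 MQ=1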

> Supermicro X8DTL-iF

We are using a similar SuperMicro board for 10GbE.

> > Have you tried enabling pci=msi in your kernel's command line?
> 
> No. Do I need to do this? MSI seems to be enabled:

OK, that looks fine then.

> Agreed. I ran a local netperf test and was seeing about 8 Gbps of throughput on a single core, so this should be adequate for 1 Gbps traffic.

Can you post the output of $ mpstat -P ALL 1 during peak load?
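Something like this (30 one-second samples, written to a file you can
attach; the filename is just an example) should do:

  # sample all CPUs once a second for 30 seconds during peak traffic
  mpstat -P ALL 1 30 > mpstat-peak.txt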

- Gilboa

-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Communicate/MailingListGuidelines
