Re: UDP packet loss

On Wed, Nov 15, 2006 at 01:08:41AM +0200, [email protected] wrote:
> Thanks for the comments.
> I actually use UDP because I am looking for ways to improve the
> performance of IPOIB, and I wanted to avoid TCP's flow control. I am
> really up for doing the analysis. Can you tell me more about irqbalanced?
> Where can I find more information on how to control it? I would like my
> interrupts serviced by all CPUs in a roughly equal manner. I mentioned
> MSI-X - the driver already makes use of MSI-X, and I thought this was
> relevant to interrupt affinity.
> 

If you want complete control over which CPUs service which interrupts, just
turn irqbalance off (usually "service irqbalance stop"), then use
/proc/irq/<irq_number>/smp_affinity to tune the CPU affinity of each interrupt.
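
A rough sketch of the procedure (the IRQ number 30 and the mask below are
only illustrative - check /proc/interrupts to find the line your adapter
actually uses):

	# stop the balancer so it does not rewrite the mask behind your back
	service irqbalance stop
	# find the IRQ line assigned to the adapter
	cat /proc/interrupts
	# allow CPUs 0-3 to service IRQ 30 (hex bitmask, one bit per CPU)
	echo f > /proc/irq/30/smp_affinity
	# read it back to verify
	cat /proc/irq/30/smp_affinity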

That being said, as Auke and others have mentioned, servicing interrupts on
multiple CPUs leads to lower performance, not higher: cache-line bouncing adds
latency to every interrupt you service and slows you down overall.  I assume
these are gigabit interfaces?  Your best bet for improving throughput is (if
the driver supports it) to tune your interrupt coalescing parameters so that
you minimize the number of interrupts you actually receive from the card.
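
For example, with ethtool (the interface name and the values here are only
illustrative, and not every driver exposes every coalescing parameter):

	# show the current coalescing settings
	ethtool -c eth0
	# delay the receive interrupt until 100us have passed or 25 frames
	# have arrived, whichever comes first
	ethtool -C eth0 rx-usecs 100 rx-frames 25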

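It is also worth confirming where the drops actually happen.  On a reasonably
recent kernel the receive buffer errors counter Stephen mentions below shows
up in /proc/net/snmp, and if the application cannot drain its socket fast
enough you can raise the socket buffer limits (the value below is only an
example):

	# RcvbufErrors counts datagrams dropped because the socket's
	# receive buffer was full when they arrived
	grep Udp: /proc/net/snmp
	# raise the default and maximum socket receive buffer sizes
	# (1 MB here is only an example value)
	sysctl -w net.core.rmem_max=1048576
	sysctl -w net.core.rmem_default=1048576
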
Regards
Neil

> > On Wed, 15 Nov 2006 00:15:47 +0200 (IST)
> > [email protected] wrote:
> >
> >> Hi,
> >> I am running a client/server test app over IPOIB in which the client
> >> sends a certain amount of data to the server. When the transmission
> >> ends, the server prints the bandwidth and how much data it received.
> >> I can see that the server reports it received only about 60% of what
> >> the client sent. However, when I look at the server's interface
> >> counters before and after the transmission, I see that it actually
> >> received all the data that the client sent. This leads me to suspect
> >> that the networking layer somehow dropped some of the data. One thing
> >> to note - the CPU is 100% busy at the receiver. Could this be the
> >> reason (the machine I am using has 2 dual cores - 4 CPUs)?
> >
> > If the receiver application can't keep up, UDP drops packets. The
> > receive buffer errors counter (UDP_MIB_RCVBUFERRORS) is incremented.
> >
> > Don't expect flow control or reliable delivery; it's a datagram service!
> >
> >> The second question is how do I make the interrupts be serviced by all
> >> CPUs? I tried through procfs as described in IRQ-affinity.txt: I can
> >> set the mask to 0f, and when I read it back I see it is indeed 0f, but
> >> after a few seconds I see it back at 02 (which means only CPU1).
> >
> > Most likely the user-level IRQ balance daemon (irqbalanced) is adjusting
> > it?
> >
> >>
> >> One more thing - the device I am using is capable of generating MSI-X
> >> interrupts.
> >>
> >
> > Look at device capabilities with:
> >
> > 	lspci -vv
> >
> >
> > --
> > Stephen Hemminger <[email protected]>
> >
> 
> 

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *gpg keyid: 1024D / 0x92A74FA1 - http://pgp.mit.edu
 ***************************************************/
