Re: Multicast group memberships lost if eth0 brought down and up

On Fri, 2004-10-01 at 12:25 -0400, Deron Meranda wrote:
> Is this the correct behavior, or a bug?  (seeing this in both FC2
> as well as EL3)
> 
> I wrote an application which registers itself in a couple IPv6
> multicast groups on an ethernet interface.  A "netstat -A inet6 -g"
> then shows the group membership, such as
> 
> IPv6/IPv4 Group Memberships
> Interface       RefCnt Group
> --------------- ------ ---------------------
> lo              1      ff02::1
> eth0            1      ff02::eb42:8740
> eth0            1      ff02::f6b6:d980
> eth0            1      ff02::1:ff13:7276
> eth0            1      ff02::1
> 
> If I then leave the application running, but do
> an "ifdown eth0" then "ifup eth0", I get this,
> 
> IPv6/IPv4 Group Memberships
> Interface       RefCnt Group
> --------------- ------ ---------------------
> lo              1      ff02::1
> eth0            1      ff02::1:ff13:7276
> eth0            1      ff02::1
> 
> Are group memberships supposed to be preserved across
> up/down cycling?  If not, then what's the proper way for an
> application to detect that this has occurred so it can re-register?
> Note that the app can still successfully SEND packets to the
> group, but obviously, it no longer receives any packets.
> 
> --
> Deron Meranda
> 
up/down cycling of what?
Most switches will, and should, flush all their tables on a port state
change. I am also assuming you are on a switch that is doing IGMP
snooping. What you probably need is for your app to detect link loss and
send a new join when the link comes back. I have seen many applications
with this same issue that I consider broken. You may want to read up
on how IGMP and PIM work, and on how the network equipment you are using
implements them. Some vendors let you join a group manually, which
forces that port to always receive the multicast data for that group.
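As a sketch of the "rejoin on link recovery" idea (not from the original app; the group address and interface index here are just illustrative), the application rebuilds the struct ipv6_mreq and re-issues IPV6_JOIN_GROUP once it notices the link is back, since the kernel drops the interface's memberships on ifdown:

```python
import socket
import struct

def build_ipv6_mreq(group, ifindex):
    """Pack a struct ipv6_mreq: 16-byte group address + interface index."""
    return socket.inet_pton(socket.AF_INET6, group) + struct.pack("@I", ifindex)

def rejoin_group(sock, group, ifname):
    """Re-issue the multicast join after the interface has come back up."""
    ifindex = socket.if_nametoindex(ifname)
    mreq = build_ipv6_mreq(group, ifindex)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

# The request is 16 address bytes plus a native unsigned int ifindex:
mreq = build_ipv6_mreq("ff02::1", 1)
```

Detecting the link transition itself could be done by listening on an rtnetlink socket for RTM_NEWLINK messages, or more crudely by polling the interface flags.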

Also, if you come up with a good negative-ack model, which is really the
proper approach, the application should recover relatively quickly from
any up/down cycling, assuming the network gear is set up right and
working properly.
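One minimal way to illustrate that negative-ack idea (purely a sketch, not anything from the thread): the sender numbers its packets, and the receiver NACKs any gaps it sees, so a silent membership loss shows up as a burst of gaps as soon as traffic resumes and gets repaired by retransmission:

```python
def find_gaps(last_seen, received):
    """Return the sequence numbers missing between last_seen and the
    newest packet observed.  A receiver would send a NACK listing these
    numbers, prompting the sender to retransmit them."""
    missing = []
    for seq in sorted(received):
        missing.extend(range(last_seen + 1, seq))
        last_seen = max(last_seen, seq)
    return missing

# After a rejoin, packets 3, 5 and 6 turn out to have been missed:
gaps = find_gaps(0, [1, 2, 4, 7])
```

The point of NACK-based recovery is that the receiver drives repair only when something is actually missing, which scales better for multicast than per-packet positive acks.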


Ted







