Isolating multicast traffic with common group address using VRF-Lite on 6500 not working as expected
Hi, thanks for looking.
I have a QA and a PRODUCTION environment configured for a new project. We try to keep these environments as close to mirror images of each other in config as possible, just with differing VLANs, IP addressing, etc. For the purpose of this discussion, let's say the QA VLANs are VLAN1, VLAN2, and VLAN3, and the PRODUCTION VLANs are VLAN11, VLAN12, and VLAN13. The servers in each VLAN use multicast PGM to communicate. The configured MC group address is the same in both environments, and traffic sent to that address should be received by all servers in the sender's own environment's VLANs. The software installed on the servers uses PGM for a sort of keepalive and synchronization of status. All servers join via IGMPv2.
For this I'm using a 6500 with an FWSM. The FWSM version is 4.0(7) and the Sup720 is running s72033-advipservicesk9_wan-mz.122-18.SXF17a.
Because the FWSM doesn't support PGM, we had to extend the VLANs to the 6500. The servers continue to use the FWSM as their default gateway, though. The SVIs for each VLAN sit in a VRF according to their assigned environment, so the QA VLAN1, VLAN2, and VLAN3 SVIs sit in VRF MC_PGM_QA, and the PROD VLAN11, VLAN12, and VLAN13 SVIs sit in VRF MC_PGM_PROD. Multicast routing isn't enabled globally on the 6500, but it is enabled per VRF, and each SVI has PIM and PGM enabled.
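To make that concrete, here is a minimal sketch of what I believe the relevant global config looks like, assuming the VRF names match the SVI config (MC_PGM_QA / MC_PGM_PROD) and that multicast routing was enabled per VRF rather than globally:

```
! VRF definitions (route distinguishers omitted here)
ip vrf MC_PGM_QA
!
ip vrf MC_PGM_PROD
!
! Per-VRF multicast routing only -- no global "ip multicast-routing"
ip multicast-routing vrf MC_PGM_QA
ip multicast-routing vrf MC_PGM_PROD
```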
So one of the VLAN SVI configs looks like:
ip vrf forwarding MC_PGM_QA
ip address 188.8.131.52 255.255.255.240
ip pim sparse-dense-mode
ip pgm router
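For completeness, the mirrored PRODUCTION SVI would presumably be identical apart from the VRF name and addressing (the VLAN number and IP below are placeholders, not from the actual config):

```
interface Vlan11
 ip vrf forwarding MC_PGM_PROD
 ip address 10.11.0.1 255.255.255.240
 ip pim sparse-dense-mode
 ip pgm router
```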
With this config in place, the IGMP and mroute output looks good and the servers can use multicast as required: they join the group, and the mroute table shows the correct egress interfaces for traffic in each VRF. The problem is that we would like to use the same MC group address on both QA and PRODUCTION (mirroring config and all that), but with the above config in place the MC traffic from QA bleeds into PRODUCTION and vice versa, which makes the developers cry. In other words, when QA servers send traffic to the shared MC group, the production servers receive a copy rather than it being confined to the QA VLANs, which is what the mroute output says should be happening.
My multicast knowledge isn't great, but from looking at the output of the mroute tables, IGMP, and some debugs, it looks like it should work as desired. I'm not sure whether this is an L3 issue or an L2 one, given the MC address is common to both environments. I'm guessing the quick fix is just to give PRODUCTION a different MC group address, but I'd rather know whether this setup is possible and what could be wrong with the config; perhaps it's a bug, since we're running quite an old IOS release.
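For anyone wanting to help narrow down L3 vs. L2, these are the per-VRF and snooping show commands I've been working from (syntax as I understand it for 12.2SX; the VRF and VLAN values are from my setup, adjust as needed):

```
! L3 view: state should exist only in the sender's VRF
show ip mroute vrf MC_PGM_QA
show ip mroute vrf MC_PGM_PROD
show ip igmp vrf MC_PGM_QA groups
show ip pim vrf MC_PGM_QA interface

! L2 view: check which ports the group's MAC is flooded/forwarded to
show mac-address-table multicast vlan 1
show ip igmp snooping vlan 1
```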