01-08-2021 04:54 PM
Hello.
I'm pretty new to multicast, but I've been thrown into a pretty big network that recently replaced a pair of old Catalyst switches with a pair of Nexus switches running a vPC, and I'm hoping I'm just missing something simple. After a lot of time with Cisco, we found that multicast isn't supported over the vPC, so the proposed solution was to create routed layer 3 ports on the Nexus switches and connect those to our core Catalyst 6509 for multicast only. All of our other traffic uses the vPC over a switchport trunk. Multicast worked fine before we swapped out the switches, so I'm focusing mostly on the connection between these devices to find the problem. For the purposes of this question, I'm looking at a single Nexus switch and my core 6509. To add to the fun, all of the multicast is within a VRF called Manufacturing.
First the facts.
Multicast group: 239.255.255.250
Multicast Server(source): 10.180.4.48
The server's traffic enters the core 6509 on VLAN 643 (not the same subnet as the server).
Multicast traffic should leave the core on port Te2/3/8 (10.180.254.53).
Multicast traffic should arrive on the Nexus switch on port Eth1/4 (10.180.254.54).
Multicast traffic goes out VLANs 208, 221, and 503.
Core interface config:
interface TenGigabitEthernet2/3/8
description Multicast Link to 10.180.1.105 Eth1/4
no switchport
ip vrf forwarding Manufacturing
ip address 10.180.254.53 255.255.255.252
ip pim sparse-mode
end
Nexus interface config:
interface Ethernet1/4
description L3 P2P to Core 2/3/8
no switchport
vrf member Manufacturing
no ip redirects
ip address 10.180.254.54/30
no ipv6 redirects
ip pim sparse-mode
The mroute table on the core:
show ip mroute vrf Manufacturing 239.255.255.250 10.180.4.48
(10.180.4.48, 239.255.255.250), 01:12:46/00:03:26, flags: LJT
Incoming interface: Vlan643, RPF nbr 10.180.126.10, Partial-SC
Outgoing interface list:
TenGigabitEthernet2/3/8, Forward/Sparse, 01:12:39/00:02:54, H
Vlan844, Forward/Sparse, 01:12:46/00:02:06, H
The following command shows traffic coming into the core, and the counters increment each time I run it:
show ip mroute vrf Manufacturing 239.255.255.250 count
Group: 239.255.255.250, Source count: 10, Packets forwarded: 68, Packets received: 68
RP-tree: Forwarding: 0/0/0/0, Other: 0/0/0
Source: 10.180.4.48/32, Forwarding: 68/1/124/0, Other: 68/0/0
Below you can see the mroute table on the Nexus. It only has the (*,G), and you can also see that the count is stuck at 3 and doesn't move.
show ip mroute 239.255.255.250 detail vrf Manufacturing
(*, 239.255.255.250/32), uptime: 5w3d, igmp(3) ip(0) pim(0)
RPF-Source: 10.180.250.1 [0/1]
Data Created: No
Locally joined
VPC Flags
RPF-Source Forwarder
Stats: 8528/3995003 [Packets/Bytes], 0.000 bps
Stats: Inactive Flow
Incoming interface: Ethernet1/4, RPF nbr: 10.180.254.53
Outgoing interface list: (count: 3) (bridge-only: 0)
Vlan503, uptime: 5w3d, igmp (vpc-svi)
Vlan208, uptime: 5w3d, igmp (vpc-svi)
Vlan221, uptime: 5w3d, igmp (vpc-svi)
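Since the Nexus never builds the (S,G), a couple of checks I can run on it (a sketch using the source address and VRF name from the facts above) are to confirm the PIM adjacency and the unicast route that the RPF check toward the source would use:

```
! On the Nexus (NX-OS) -- the (S,G) can only form if the PIM neighbor
! is up and the route back to the source points at Ethernet1/4
show ip pim neighbor vrf Manufacturing
show ip route 10.180.4.48 vrf Manufacturing
```

If the route toward 10.180.4.48 resolves out some other interface (for example, across the vPC peer link side of the network), source traffic arriving on Ethernet1/4 would fail RPF and be dropped.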
If I ping the multicast IP from the core, the Nexus responds:
ping vrf Manufacturing 239.255.255.250
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.255.255.250, timeout is 2 seconds:
Reply to request 0 from 10.180.254.54, 28 ms
Reply to request 0 from 10.180.2.1, 56 ms
Reply to request 0 from 10.180.254.54, 48 ms
Reply to request 0 from 10.180.44.1, 48 ms
Reply to request 0 from 10.180.254.54, 48 ms
Reply to request 0 from 10.180.126.9, 48 ms
Reply to request 0 from 10.180.254.54, 48 ms
Reply to request 0 from 10.180.250.1, 48 ms
On the core, if I run a debug ip mpacket detail 239.255.255.250, I see the following two lines every 4-6 seconds, which matches the incrementing mroute count.
Jan 8 16:49:00.179: IP(1): MAC sa=1cdf.0f03.b180 (Vlan643)
Jan 8 16:49:00.183: IP(1): IP tos=0xB8, len=124, id=11859, ttl=9, prot=17
Any ideas? Or troubleshooting steps to figure this out? I'd be very grateful for any help. I've been working on this for months.
Andy
01-09-2021 01:08 AM
Hello,
Not sure what the rest of the configs look like, but make sure they include something like the below:
Core 6509
ip vrf Manufacturing
ip multicast-routing vrf Manufacturing
!
interface Loopback0
ip address 10.1.1.x 255.255.255.255
ip vrf forwarding Manufacturing
!
interface TenGigabitEthernet2/3/8
description Multicast Link to 10.180.1.105 Eth1/4
no switchport
ip vrf forwarding Manufacturing
ip address 10.180.254.53 255.255.255.252
ip pim sparse-mode
!
ip pim vrf Manufacturing rp-address 10.1.1.x
Nexus
vrf context Manufacturing
ip pim rp-address 10.1.1.x group-list 224.0.0.0/4
!
interface Loopback0
ip address 10.1.1.x/32
vrf member Manufacturing
ip pim sparse-mode
!
interface Ethernet1/4
description L3 P2P to Core 2/3/8
no switchport
vrf member Manufacturing
no ip redirects
ip address 10.180.254.54/30
no ipv6 redirects
ip pim sparse-mode
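Once that is in place, you can verify that both sides agree on the RP (a syntax sketch; adjust the VRF name as needed):

```
! Catalyst 6509 (IOS)
show ip pim vrf Manufacturing rp mapping
! Nexus (NX-OS)
show ip pim rp vrf Manufacturing
```

Both boxes should show the same RP address for the 239.255.255.250 group range; a mismatch here would explain a shared tree that never converts to a source tree.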
01-09-2021 06:13 AM
I confirmed all of that configuration, with two exceptions. One is that the RP address is 10.180.250.1. The other is that the Nexus doesn't have any loopbacks. Is there a reason it would need one? If so, does it need any specific address?
I found a couple of new show commands and may have found the problem. In the output below, the "Unknown IGMP message type" counter is incrementing. The numbers are low because I just restarted the IGMP process, but they increment at the same rate as the other counters I mentioned.
show ip igmp int eth1/4
IGMP Interfaces for VRF "Manufacturing"
Ethernet1/4, Interface status: protocol-up/link-up/admin-up
IP address: 10.180.254.54, IP subnet: 10.180.254.52/30
Active querier: 10.180.254.53, expires: 00:03:42, querier version: 2
Membership count: 0
Old Membership count 0
IGMP version: 2, host version: 2
IGMP query interval: 125 secs, configured value: 125 secs
IGMP max response time: 10 secs, configured value: 10 secs
IGMP startup query interval: 31 secs, configured value: 31 secs
IGMP startup query count: 2
IGMP last member mrt: 1 secs
IGMP last member query count: 2
IGMP group timeout: 260 secs, configured value: 260 secs
IGMP querier timeout: 255 secs, configured value: 255 secs
IGMP unsolicited report interval: 10 secs
IGMP robustness variable: 2, configured value: 2
IGMP reporting for link-local groups: disabled
IGMP interface enable refcount: 1
IGMP interface immediate leave: disabled
IGMP interface suppress v3-gsq: disabled
IGMP VRF name Manufacturing (id 3)
IGMP Report Policy: None
IGMP Host Proxy: Disabled
IGMP State Limit: None
IGMP interface statistics: (only non-zero values displayed)
General (sent/received):
v2-queries: 2/6, v2-reports: 0/6, v2-leaves: 0/0
Errors:
Report version mismatch: 0, Query version mismatch: 0
Unknown IGMP message type: 18
Interface PIM DR: Yes
Interface vPC SVI: No
Interface vPC CFS statistics:
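To figure out what those unknown IGMP messages actually are, I'm planning to try a packet capture on the Nexus itself (a sketch; the ethanalyzer options may vary slightly by NX-OS release):

```
! Nexus (NX-OS) -- capture IGMP packets punted to the supervisor
! capture-filter uses tcpdump-style syntax
ethanalyzer local interface inband capture-filter "igmp" limit-captured-frames 50
```

That should show the raw IGMP message types arriving on Eth1/4, which would tell me whether the core is sending something the Nexus doesn't parse.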
01-09-2021 06:19 AM
Hello,
Try configuring the loopbacks; the IP addresses don't matter as long as they are reachable.
01-09-2021 06:37 AM
I guess I'm confused about the purpose of a loopback on the Nexus. There is no multicast configuration that would reference it. There is a loopback on the core, and that is set as the RP. Is there a reason for a loopback on the Nexus? I'll add one, but is something supposed to reference it?
01-09-2021 07:33 AM
Quick update. I don't think the loopback changed anything, but I'll leave it there for now. I noticed the Nexus only had a (*,G). When I pinged the multicast group from the core, four (S,G) entries were created, one for each participating interface on the core. That makes me think multicast may be configured properly between the two. However, on the core I see the following (S,G) that I don't see on the Nexus, so it seems like it isn't being forwarded for some reason.
(10.180.4.48, 239.255.255.250), 16:24:19/00:02:57, flags: LJT
Incoming interface: Vlan643, RPF nbr 10.180.126.10, Partial-SC
Outgoing interface list:
TenGigabitEthernet2/3/8, Forward/Sparse, 16:24:12/00:02:39, H
Vlan844, Forward/Sparse, 16:24:19/00:02:35, H
01-09-2021 07:34 AM
Hello,
Never mind, if there is a loopback elsewhere, you don't need one locally. I am doing some more research on the interoperability of IOS and NX-OS when it comes to multicast...
01-09-2021 07:47 AM
For reference, here's the article Cisco sent me that references the vPC incompatibility.
Specifically: “Cisco Nexus 3000 Series switches do not support PIM adjacency with a vPC leg or with a router behind a vPC.”
01-09-2021 08:00 AM
I am looking at some NX-OS restrictions, and found this:
--> Cisco NX-OS PIM and PIM6 do not interoperate with any version of PIM dense mode or PIM Sparse Mode version 1.
When you say an 'old' 6509, which IOS version are you running on the switch? In any case, make sure 'ip pim version 2' is configured on the Catalyst interfaces...
01-09-2021 08:04 AM
It's running Cisco IOS Software, s72033_rp Software (s72033_rp-ADVENTERPRISEK9-M), Version 15.1(2)SY9, RELEASE SOFTWARE (fc1)
Below you can see PIM version 2 on both neighbors.
show ip pim vrf Manufacturing neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
10.180.254.54 TenGigabitEthernet2/3/8 02:04:45/00:01:31 v2 1 / DR G
10.180.126.10 Vlan643 3w4d/00:01:39 v2 1 / DR G
01-09-2021 04:10 AM
Hello
As a quick test, have you tried using ip pim sparse-dense-mode on all PIM-enabled interfaces, just in case you have RPF issues to/from your PIM RP?
01-09-2021 06:18 AM
Thanks Paul. I really need to learn more about sparse and dense modes. For a single multicast path there are four other switches and a firewall beyond the two I'm focusing on here, and I'm not sure how the two modes would interact. I'll look into this further next week if my previous comment doesn't pan out. I'm seeing the "Unknown IGMP message type" counter incrementing on the Nexus incoming interface, and I'm not sure what that means.
01-09-2021 07:35 AM
As far as I recall, NX-OS does not support sparse-dense mode, only sparse?
01-09-2021 09:46 AM - edited 01-09-2021 09:48 AM
Hello
The reason I suggested sparse-dense mode was basically to bypass the need for an RP. However, if, as @Georg Pauwen states, the Nexus doesn't support sparse-dense, I assume that's because the Nexus simply doesn't support dense mode.
Anyhow, troubleshooting static-RP PIM (your case) between two nodes should be fairly straightforward:
- enable multicast routing
- apply PIM on the routed interfaces
- add the static RP mapping to each node
That should be it. As long as they both have reachability, multicast should work, unless you have a multicast RPF failure and the traffic is trying to traverse an additional path that isn't enabled for multicast.
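Those three steps would look something like the following sketch (using the RP address 10.180.250.1 mentioned earlier in this thread; adjust for your own addressing):

```
! Catalyst 6509 (IOS)
ip multicast-routing vrf Manufacturing
ip pim vrf Manufacturing rp-address 10.180.250.1
interface TenGigabitEthernet2/3/8
 ip pim sparse-mode

! Nexus (NX-OS)
feature pim
vrf context Manufacturing
 ip pim rp-address 10.180.250.1 group-list 224.0.0.0/4
interface Ethernet1/4
 ip pim sparse-mode
```

Note that on NX-OS the PIM feature has to be enabled explicitly with 'feature pim' before any of the per-interface or per-VRF PIM commands are accepted.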
01-21-2021 03:10 PM
Follow-up to this one. I worked with Cisco for a long time, going from a Nexus tech to a Catalyst tech, and then they tried to get the firewall team on board for the FWSM, which they said isn't supported. We're just going to replace the entire core. No answer to this question.