Catalyst 9300 Multicast Routing

kerstin-534
Level 1

Hi,

I'm trying to route CAPWAP Multicast traffic with Catalyst 9300.

c9300mcastlnz#sh run

 

ip multicast-routing

interface Vlan3
ip address 10.40.18.251 255.255.240.0
ip pim sparse-mode

!

interface Vlan351
ip address 10.20.18.55 255.255.240.0
ip pim dr-priority 9496
ip pim sparse-mode

!

ip pim rp-address 10.20.18.55

 

I can see the Multicast is arriving on group 239.40.20.1 in interface Vlan 351.

Doing the same on a Cisco ASA (with only "multicast-routing" configured), the multicast is forwarded.
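For comparison, a rough sketch of the ASA side (from memory, so treat it as an approximation; as far as I know this single global command enables multicast routing and PIM sparse mode on all ASA interfaces):

multicast-routing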

 

Licensing:

network-advantage

dna-advantage

 

What is wrong with my Cat9300 configuration?


Hello

Is PIM enabled on all SVIs related to the access point and controller VLANs? Also try PIM sparse-dense mode instead (see the sketch after the command list below).


Post the following:
sh ip mroute
sh ip mroute count
sh ip route <pim address>
sh ip pim neighbor
sh ip pim interface
sh ip igmp group 
sh ip pim rp 
sh ip pim rp mapping 
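
A minimal sketch of the sparse-dense change, assuming Vlan3 and Vlan351 are the AP and controller SVIs (adjust to your actual interfaces):

interface Vlan3
 ip pim sparse-dense-mode
!
interface Vlan351
 ip pim sparse-dense-mode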


Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.

Kind Regards
Paul

WLAN Controller has ip address 10.20.18.5

WLAN AP is in subnet 10.40.16.0/20

 

c9300mcastlnz#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group,
G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
Q - Received BGP S-A Route, q - Sent BGP S-A Route,
V - RD & Vector, v - Vector, p - PIM Joins on route,
x - VxLAN group, c - PFP-SA cache created entry,
* - determined by Assert, # - iif-starg configured on rpf intf,
e - encap-helper tunnel flag, l - LISP Decap Refcnt Contributor
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.40.20.1), 02:28:07/00:01:06, RP 10.20.18.55, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Vlan351, Forward/Sparse-Dense, 02:28:07/00:01:06
Vlan3, Forward/Sparse-Dense, 02:28:06/00:01:05

(*, 224.0.5.128), 02:28:06/00:01:11, RP 10.20.18.55, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Vlan351, Forward/Sparse-Dense, 02:28:06/00:01:11

(*, 224.0.1.40), 01:56:14/00:01:04, RP 10.20.18.55, flags: SJCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Vlan3, Forward/Sparse-Dense, 01:56:14/00:01:04


c9300mcastlnz#sh ip mroute count
Use "show ip mfib count" to get better response time for a large number of mroutes.

IP Multicast Statistics
3 routes using 3688 bytes of memory
3 groups, 0.00 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 239.40.20.1, Source count: 0, Packets forwarded: 0, Packets received: 0

Group: 224.0.5.128, Source count: 0, Packets forwarded: 0, Packets received: 0

Group: 224.0.1.40, Source count: 0, Packets forwarded: 0, Packets received: 0


c9300mcastlnz#sh ip route 10.20.18.55
Routing entry for 10.20.18.55/32
Known via "connected", distance 0, metric 0 (connected)
Routing Descriptor Blocks:
* directly connected, via Vlan351
Route metric is 0, traffic share count is 1


c9300mcastlnz#sh ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
P - Proxy Capable, S - State Refresh Capable, G - GenID Capable,
L - DR Load-balancing Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
10.40.16.1 Vlan3 02:29:17/00:01:44 v2 1 / G
10.40.16.100 Vlan3 02:29:48/00:01:42 v2 1 / S P G
10.20.18.3 Vlan351 02:29:47/00:01:37 v2 1 / S P G

c9300mcastlnz#sh ip pim interface

Address Interface Ver/ Nbr Query DR DR
Mode Count Intvl Prior
10.40.18.251 Vlan3 v2/SD 2 30 1 10.40.18.251
10.20.18.55 Vlan351 v2/SD 1 30 9496 10.20.18.55
192.168.80.4 Vlan897 v2/S 0 30 1 192.168.80.4


c9300mcastlnz#sh ip igmp group
IGMP Connected Group Membership
Group Address Interface Uptime Expires Last Reporter Group Accounted
239.40.20.1 Vlan3 02:29:56 00:01:26 10.40.16.10
239.40.20.1 Vlan351 02:29:58 00:01:21 10.30.16.41
224.0.5.128 Vlan351 02:29:56 00:01:21 10.30.17.1
224.0.1.40 Vlan3 02:29:58 00:01:23 10.40.16.100


c9300mcastlnz#sh ip pim rp
Group: 239.40.20.1, RP: 10.20.18.55
Group: 224.0.5.128, RP: 10.20.18.55
Group: 224.0.1.40, RP: 10.20.18.55
c9300mcastlnz#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
RP: 10.20.18.55 (?)

Is this "normal"?

c9300mcastlnz#sh ip rpf 10.20.18.5
failed, internal error or RIB not converged yet

 

FYI: https://www.cisco.com/c/en/us/td/docs/routers/csr1000/release/notes/xe-16/csr1000v_rn-16-9-book.html#id_78681

CSCvj55699
CAT3k/9k does not create (S,G) due to RPF error "failed, internal error or RIB not converged yet"

The bug report does not seem to be publicly accessible; contact TAC about it, or use the latest advisory software release on the 9300.

 M.



-- Each morning when I wake up and look into the mirror I always say 'Why am I so brilliant?'
   The mirror then always responds with 'The only thing that exceeds your brilliance is your beauty!'

I added ip routing in addition to ip multicast-routing and everything works.

Thanks for help.
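
For anyone finding this later, a minimal sketch of the resulting configuration (the config from my first post plus ip routing, not the full running config):

ip routing
ip multicast-routing
!
interface Vlan3
 ip address 10.40.18.251 255.255.240.0
 ip pim sparse-mode
!
interface Vlan351
 ip address 10.20.18.55 255.255.240.0
 ip pim dr-priority 9496
 ip pim sparse-mode
!
ip pim rp-address 10.20.18.55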

Thanks for replying back with the answer; it solved the same issue for me. But it's weird that you have to enable ip routing: unicast routing was working, only RPF wasn't. Seems like a bug to me.
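
For anyone else hitting this, a quick way to check (a sketch; 10.20.18.5 is the controller/source from this thread, substitute your own):

show running-config | include ^ip routing
show ip rpf 10.20.18.5
show ip mroute count

If ip routing is missing from the config and the RPF check fails with the "internal error or RIB not converged yet" message shown above, adding ip routing should fix it.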