IPV4 Multicast - FIBipv4-packet-proc: packet routing failed on Cat4506

leogal777
Level 1

Hello

Cisco WS-C4506-E with cat4500es8-universalk9.SPA.03.08.01.E.152-4.E1.bin

There are multicast sources and receivers in two separate VLANs, and they need to communicate using multicast group 239.255.255.250.

Two separate VLANs and one loopback used as the RP:

interface Vlan2107
vrf forwarding ACC-OFFICE
ip address 172.16.131.126 255.255.255.128
ip helper-address 172.16.2.112
ip pim sparse-dense-mode

interface Vlan2201
vrf forwarding ACC-OFFICE
ip address 172.16.145.254 255.255.254.0
ip pim sparse-dense-mode

interface Loopback1
vrf forwarding ACC-OFFICE
ip address 192.168.3.3 255.255.255.255
ip pim sparse-dense-mode

The switch is enabled for bidirectional PIM:

ip multicast-routing vrf ACC-OFFICE
ip pim bidir-enable

ip pim vrf ACC-OFFICE send-rp-announce Loopback1 scope 10 group-list bidir-groups bidir
ip pim vrf ACC-OFFICE send-rp-discovery scope 10

RBK-SW-CO-DC#sh access-list bidir-groups
Standard IP access list bidir-groups
10 permit 239.255.255.250

The IGMP membership looks good:

RBK-SW-CO-DC#sh ip igmp vrf ACC-OFFICE membership 
Flags: A - aggregate, T - tracked
L - Local, S - static, V - virtual, R - Reported through v3
I - v3lite, U - Urd, M - SSM (S,G) channel
1,2,3 - The version of IGMP, the group is in
Channel/Group-Flags:
/ - Filtering entry (Exclude mode (S,G), Include mode (G))
Reporter:
<mac-or-ip-address> - last reporter if group is not explicitly tracked
<n>/<m> - <n> reporter in include mode, <m> reporter in exclude
Channel/Group Reporter Uptime Exp. Flags Interface
*,239.255.255.250 172.16.144.8 00:17:28 02:23 2A Vl2201
*,239.255.255.250 172.16.131.28 00:17:38 02:24 2A Vl2107
*,224.0.1.39 172.16.145.254 17:27:51 02:30 2LA Vl2201
*,224.0.1.39 172.16.131.126 17:27:51 02:27 2LA Vl2107
*,224.0.1.39 192.168.3.3 17:27:51 02:25 2LA Lo1
*,224.0.1.40 192.168.3.3 18:00:50 02:22 2LA Lo1

IGMP snooping is disabled for both VLANs:

RBK-SW-CO-DC#sh ip igmp snooping vlan 2107
Global IGMP Snooping configuration:
-------------------------------------------
IGMP snooping : Enabled
IGMPv3 snooping : Enabled
Report suppression : Enabled
TCN solicit query : Disabled
TCN flood query count : 2
Last member query interval : 1000
Vlan 2107:
--------
IGMP snooping : Disabled
IGMPv2 immediate leave : Disabled
Explicit host tracking : Enabled
Multicast router learning mode : pim-dvmrp
Last member query interval : 1000
Vlan 2201:
--------
IGMP snooping : Disabled
IGMPv2 immediate leave : Disabled
Explicit host tracking : Enabled
Multicast router learning mode : pim-dvmrp
Last member query interval : 1000

172.16.144.8 and 172.16.131.28 are the sources/receivers that need to communicate.

The mroute entry looks quite good:

RBK-SW-CO-DC#sh ip mroute vrf ACC-OFFICE
IP Multicast Routing Table

(*,239.255.255.250), 00:52:41/-, RP 192.168.3.3, flags: BC
Bidir-Upstream: Loopback1, RPF nbr: 0.0.0.0
Incoming interface list:
Vlan2201, Accepting/Sparse-Dense
Vlan2107, Accepting/Sparse-Dense
Loopback1, Accepting/Sparse-Dense
Outgoing interface list:
Vlan2201, Forward/Sparse-Dense, 00:52:40/00:02:47
Vlan2107, Forward/Sparse-Dense, 00:52:40/00:02:40

I think there should be sources listed in this output for the group, but it shows "Source count: 0":

RBK-SW-CO-DC#sh ip mroute vrf ACC-OFFICE count
Use "show ip mfib count" to get better response time for a large number of mroutes.

IP Multicast Statistics
4 routes using 2970 bytes of memory
3 groups, 0.33 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 239.255.255.250, Source count: 0, Packets forwarded: 684, Packets received: 684
RP-tree: Forwarding: 684/0/392/0, Other: 684/0/0

And I have the following output from "debug ip packet 77 detail", showing "packet routing failed":

RBK-SW-CO-DC#sh access-list 77
Standard IP access list 77
20 permit 172.16.131.28 (487 matches)
10 permit 172.16.144.8 (461 matches)

RBK-SW-CO-DC#
190046: Oct 18 09:30:26.419 CEST: FIBipv4-packet-proc: route packet from Vlan2201 src 172.16.144.8 dst 239.255.255.250
190047: Oct 18 09:30:26.419 CEST: FIBfwd-proc: ACC-OFFICE:224.0.0.0/4 multicast entry
190048: Oct 18 09:30:26.419 CEST: FIBipv4-packet-proc: packet routing failed
190049: Oct 18 09:30:26.419 CEST: IP: s=172.16.144.8 (Vlan2201), d=239.255.255.250, len 316, input feature
190050: Oct 18 09:30:26.419 CEST: UDP src=57259, dst=1900, packet consumed, MCI Check(109), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
RBK-SW-CO-DC#
190051: Oct 18 09:30:30.742 CEST: FIBipv4-packet-proc: route packet from Vlan2107 src 172.16.131.28 dst 239.255.255.250
190052: Oct 18 09:30:30.742 CEST: FIBfwd-proc: ACC-OFFICE:224.0.0.0/4 multicast entry
190053: Oct 18 09:30:30.742 CEST: FIBipv4-packet-proc: packet routing failed
190054: Oct 18 09:30:30.742 CEST: IP: s=172.16.131.28 (Vlan2107), d=239.255.255.250, len 175, input feature
190055: Oct 18 09:30:30.742 CEST: UDP src=1900, dst=1900, packet consumed, MCI Check(109), rtype 0, forus FALSE, sendself FALSE, mtu 0, fwdchk FALSE
RBK-SW-CO-DC#

172.16.131.28 is a standard PC with Airtame software installed, which sends standard multicast IP packets into the network.

The other IP address is an Airtame device used for presentations, connected via a microUSB-to-Ethernet adapter to a standard VLAN.
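For reference (this sketch is not from the original post), the traffic seen later in the debug output — UDP to 239.255.255.250 port 1900 — is SSDP discovery, which is what Airtame-style devices use to find each other. A minimal Python example of building and sending such a discovery request, with all names hypothetical:

```python
import socket

# SSDP group and port, matching the dst=239.255.255.250 / dst=1900 seen in the debug
SSDP_GROUP = ("239.255.255.250", 1900)

def build_msearch(st="ssdp:all", mx=2):
    """Build a standard SSDP M-SEARCH discovery request (HTTP over UDP)."""
    return ("M-SEARCH * HTTP/1.1\r\n"
            "HOST: 239.255.255.250:1900\r\n"
            'MAN: "ssdp:discover"\r\n'
            f"MX: {mx}\r\n"
            f"ST: {st}\r\n"
            "\r\n").encode("ascii")

def send_msearch(ttl=4):
    """Send one M-SEARCH; the responder replies by unicast to our source port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL must exceed 1 if the responder sits behind a routed hop (see below)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    sock.sendto(build_msearch(), SSDP_GROUP)
    sock.close()
```

Since SSDP replies are unicast back to the querier, only the M-SEARCH itself needs to be routed between the two VLANs.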

Neither of them is responding, and the reason is that the multicast packets are not going through the router, so there is nothing to respond to.

I checked it via monitor capture. The packets come to the router from both sides and stop right there with that "packet routing failed" message.

If I statically join interface Vlan2201 into the group and ping 239.255.255.250 from 172.16.131.28 (VLAN 2107), I get a response.

Not so if I try to get a response from the other host. The routing is not working for me; I am missing something.
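As an aside (not part of the original thread), what the static interface join emulates is a host-side IGMP membership. The equivalent end-host test can be scripted in Python; the helper name is hypothetical:

```python
import socket
import struct

GROUP = "239.255.255.250"
PORT = 1900

def open_group_receiver(local_ip="0.0.0.0"):
    """Bind UDP port 1900 and join the multicast group (triggers an IGMP report)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # struct ip_mreq: 4-byte group address followed by 4-byte interface address
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton(local_ip))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

Running a receiver like this on each host, instead of the Airtame software, narrows the problem down to the network: if datagrams still never arrive across VLANs, the application stack is ruled out.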

Any hints, please?

Thank you in advance

Leo

1 Reply

Peter Paluch
Cisco Employee

Hi Leo,

I see no obvious problems with your configuration. Let me try to give a couple of (mostly shoot-from-the-hip) suggestions:

  1. First of all, are you absolutely sure that the multicast sender is sending the packets with their TTL set to at least 2? If their TTL was set to 1 then they would be unroutable. That would actually match your observations in that the Cat4506 itself responds to the pings but the host on another VLAN does not appear to be receiving them. I suggest capturing the packets at the multicast source or configuring a SPAN session to check this out.
  2. Drawing conclusions from debug ip packet in this case must be taken with a grain of salt. The debug ip packet should cover unicast routing only; for multicast debugs, debug ip mpacket would be used. Why the multicast packet shows up in your debugs is surprising to me - but let's simply take it as a given fact in this case. However, we're dealing with a multilayer switch here that performs packet forwarding in hardware; any output from these debugs suggests that the packet is being punted to IOS for processing because it could not be handled in the hardware CEF path. The fact that we're actually seeing the multicast packets in your debugs suggests that they are indeed being punted - TTL expiry could be one of the reasons.
  3. To make sure that we're not dealing with a particular issue with a given PIM mode, would the behavior change in any way if you tried to run simple PIM-SM or even PIM-DM instead of Bidir-PIM?
  4. This is a very blind shot - I have never tried running ip multicast-routing in a VRF without also configuring it for the global (non-VRF) routing environment. I do not know if that is a prerequisite, but just for the sake of being very meticulous, can you activate the ip multicast-routing for the global routing environment as well?

Please keep us posted. Thank you!

Best regards,
Peter
