multicast between vrf on the same multiswitch

eugene.fit
Level 1

Hello everyone. I am trying to pass multicast traffic between two VRFs. I have configured, I hope, everything. Although I can see traffic activity in VRF source, where the multicast server is located (group address 230.0.0.3), I can't find the mroute in VRF reciever. I am not a multicast expert, but as I understand it there is a way to use RT import/export to achieve this, but... if I put the PC-connected interface into VLAN 237 (VRF source) I can see multicast traffic; however, if I put it into VLAN 956, I can't. Unicast works fine. The PIM and BGP configs are included because I need this multicast traffic on other MPLS equipment as well (and I have it there, but only in the source VRF).

ip vrf source
rd 0:201
route-target export 0:200
route-target export 1:100
route-target import 0:200
route-target import 2:100
mdt default 239.1.1.5

ip vrf reciever
rd 0:101
route-target export 0:100
route-target export 2:100
route-target import 0:100
route-target import 1:100
mdt default 239.1.1.6

ip multicast-routing
ip multicast-routing vrf source
ip multicast-routing vrf reciever

vlan 237
name source

vlan 956
name reciever

interface Loopback0
ip address 10.0.0.1 255.255.255.255
ip pim sparse-dense-mode

interface Vlan237
description source
ip vrf forwarding source
ip address 192.168.9.1 255.255.255.224
ip pim sparse-dense-mode

interface Vlan956
description reciever
ip vrf forwarding reciever
ip address 172.20.178.21 255.255.255.252
ip pim sparse-dense-mode

router bgp 65001
no synchronization
bgp log-neighbor-changes
neighbor 1.1.1.1 remote-as 65001
neighbor 1.1.1.1 update-source Loopback0
neighbor 2.2.2.2 remote-as 65001
neighbor 2.2.2.2 update-source Loopback0
no auto-summary
!
address-family ipv4 mdt
  neighbor 1.1.1.1 activate
  neighbor 1.1.1.1 send-community extended
  neighbor 2.2.2.2 activate
  neighbor 2.2.2.2 send-community extended
exit-address-family
!
address-family vpnv4
  neighbor 1.1.1.1 activate
  neighbor 1.1.1.1 send-community extended
  neighbor 2.2.2.2 activate
  neighbor 2.2.2.2 send-community extended
exit-address-family
!
address-family ipv4 vrf reciever
  no synchronization
  network 172.20.178.20 mask 255.255.255.252
exit-address-family
!
address-family ipv4 vrf source
  no synchronization
  network 192.168.9.0 mask 255.255.255.224
exit-address-family

ip pim autorp reciever

ip pim send-rp-announce Loopback0 scope 16 group-list reciever
ip pim send-rp-discovery Loopback0 scope 16
ip pim ssm range reciever


ip pim vrf source autorp reciever
ip pim vrf source send-rp-announce Vlan237 scope 16 group-list reciever
ip pim vrf source send-rp-discovery Vlan237 scope 16


ip pim vrf reciever autorp reciever
ip pim vrf reciever send-rp-announce Vlan956 scope 16 group-list reciever
ip pim vrf reciever send-rp-discovery Vlan956 scope 16

ip access-list standard reciever
permit any

pe-sw1#sh ip mro vrf source 230.0.0.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 230.0.0.3), 1d03h/stopped, RP 192.168.9.1, flags: SJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan237, Forward/Sparse-Dense, 1d02h/00:02:33

(192.168.9.5, 230.0.0.3), 1d02h/00:02:57, flags: PT
  Incoming interface: Vlan237, RPF nbr 0.0.0.0, RPF-MFD
  Outgoing interface list: Null


pe-sw1#sh ip mro vrf source 230.0.0.3 count
IP Multicast Statistics
20 routes using 12884 bytes of memory
11 groups, 0.81 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 230.0.0.3, Source count: 2, Packets forwarded: 212833, Packets received: 695010
  RP-tree: Forwarding: 0/0/0/0, Other: 0/0/0
  Source: 192.168.9.5/32, Forwarding: 1008/-7/975/59, Other: 483185/0/482177


pe-sw1#sh ip mro vrf reciever 230.0.0.3
Group 230.0.0.3 not found

pe-sw1#sh ip mro vrf reciever 230.0.0.3 count
IP Multicast Statistics
6 routes using 3688 bytes of memory
2 groups, 2.00 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

pe-sw1#sh ip int brie vl 237
Interface              IP-Address      OK? Method Status                Protocol
Vlan237                192.168.9.1     YES NVRAM  up                    up   

 

pe-sw1#mstat
VRF name: source
Source address or name: 192.168.9.1
Destination address or name: 192.168.9.5
Group address or name: 230.0.0.3
Multicast request TTL [64]:
Response address for mtrace:
Type escape sequence to abort.
Mtrace from 192.168.9.1 to 192.168.9.5 via group 230.0.0.3 in VRF source
From source (?) to destination (?)
Waiting to accumulate statistics......
Results after 10 seconds:

  Source        Response Dest   Packet Statistics For     Only For Traffic
0.0.0.0         192.168.9.1     All Multicast Traffic     From 192.168.9.1
     |       __/  rtt 0    ms   Lost/Sent = Pct  Rate     To 230.0.0.3
     v      /     hop 0    ms   ---------------------     --------------------
0.0.0.0        
192.168.9.1     ? Reached RP/Core
     |      \__   ttl   0  
     v         \  hop 0    ms        37         3 pps           0    0 pps
192.168.9.5     192.168.9.1    
  Receiver      Query Source


pe-sw1#sh ip int brie vl 956
Interface              IP-Address      OK? Method Status                Protocol
Vlan956                172.20.178.21   YES manual up                    up     


pe-sw1#mstat
VRF name: reciever
Source address or name: 172.20.178.21
Destination address or name: 192.168.9.5
Group address or name: 230.0.0.3
Multicast request TTL [64]:
Response address for mtrace:
Type escape sequence to abort.
Mtrace from 172.20.178.21 to 192.168.9.5 via group 230.0.0.3 in VRF reciever
From source (?) to destination (?)
Waiting to accumulate statistics......
Results after 10 seconds:

  Source        Response Dest   Packet Statistics For     Only For Traffic
172.20.178.21   172.20.178.21   All Multicast Traffic     From 172.20.178.21
     |       __/  rtt 0    ms   Lost/Sent = Pct  Rate     To 230.0.0.3
     v      /     hop 0    ms   ---------------------     --------------------
172.20.178.21   ?
     |      \__   ttl   0  
     v         \  hop 0    ms        0         0 pps           0    0 pps
192.168.9.5     172.20.178.21  
  Receiver      Query Source

8 Replies

Giuseppe Larosa
Hall of Fame

Hello Evgeny,

are you using a C6500 or a C7600?

what IOS version is running on the box?

comparing your configuration with

http://www.cisco.com/en/US/docs/ios/ipmulti/configuration/guide/imc_mc_vpn_extranet_ps6922_TSD_Products_Configuration_Guide_Chapter.html#wp1057026

it looks correct.

You need at least SXH for hardware acceleration on the C6500; and for the C7600:

Restrictions for Configuring Multicast VPN Extranet Support

The Multicast VPN Extranet Support feature supports only Protocol Independent Multicast (PIM) sparse mode (PIM-SM) and Source Specific Multicast (SSM) traffic; PIM dense mode (PIM-DM) and bidirectional PIM (bidir-PIM) traffic are not supported.

• In Cisco IOS Release 12.2(33)SRC and subsequent 12.2SR releases, the Multicast VPN Extranet Support feature was not supported on Cisco 7600 series routers. In Cisco IOS Release 15.0(1)S, support for the Multicast VPN Extranet Support feature was introduced on Cisco 7600 series routers.

When configuring extranet MVPNs in a PIM-SM environment, the source and the rendezvous point (RP) must reside in the same site of the MVPN behind the same provider edge (PE) router.

I wonder whether, with address-family mdt, we can pull off this kind of trick on a single device.
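For reference, the extranet linkage in that guide boils down to the receiver MVRF importing a route target that the source MVRF exports. A minimal sketch, reusing the RT and MDT values from your post above (not the guide's own values):

```
! Sketch only: the extranet works when the receiver VRF imports
! a route target exported by the source VRF. Values below mirror
! the original post; adjust to your addressing plan.
ip vrf source
 rd 0:201
 route-target export 1:100
 mdt default 239.1.1.5
!
ip vrf reciever
 rd 0:101
 route-target import 1:100
 mdt default 239.1.1.6
```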

Hope to help

Giuseppe

Thank you, Giuseppe, for your reply. Before implementing the MVRF, I did some tests in my lab using the extranet MVRF guide you mentioned, but with 1811 routers and a 15.x IOS. Everything was perfect, but the topology was a bit simpler (CE-PE-PE-CE). By the way, here I am running on a C6500, and there are no RPs configured at all. Do I need one in this topology? Hopefully not.

Cisco IOS Software, s6523_rp Software (s6523_rp-ADVIPSERVICESK9-M), Version 12.2(33)SXH4, RELEASE SOFTWARE (fc1)

System image file is "sup-bootflash:s6523-advipservicesk9-mz.122-33.SXH4.bin"

cisco ME-C6524GT-8S (R7000) processor (revision 1.1) with 458752K/65536K bytes of memory.

R7000 CPU at 300Mhz, Implementation 0x27, Rev 3.3, 256KB L2, 1024KB L3 Cache

Hello Evgeny,

my understanding is that if you are not using PIM SSM, you need an RP in each VRF, since PIM DM fallback is not supported with MVPN extranet:

>>

The Multicast VPN Extranet Support feature supports only Protocol Independent Multicast (PIM) sparse mode (PIM-SM) and Source Specific Multicast (SSM) traffic; PIM dense mode (PIM-DM) and bidirectional PIM (bidir-PIM) traffic are not supported.

by the way, you have been too kind in rating my question in the other thread!

Hope to help

Giuseppe

Hello Giuseppe. Thank you for helping me with my issue. Following your recommendations, I did some reconfiguration.

First, I deleted everything related to PIM and then assigned the RP as the SVI address of the proper VLAN:

ip pim rp-address 172.16.8.254                                  < - lo0

ip pim vrf source rp-address 192.168.9.1                  < - vl 237

ip pim vrf reciever rp-address 172.20.178.21            < - vl 956

The picture didn't change, though.

pe-sw1#sh ip mro vrf voip-g 230.0.0.3
(*, 230.0.0.3), 00:11:21/stopped, RP 172.20.160.65, flags: SJCF
  Incoming interface: Tunnel0, RPF nbr 172.16.160.60, Partial-SC
  Outgoing interface list:
    Vlan237, Forward/Sparse-Dense, 00:10:32/00:02:27, H

(192.168.9.5, 230.0.0.3), 00:11:21/00:02:52, flags: PFT
  Incoming interface: Vlan237, RPF nbr 0.0.0.0, RPF-MFD
  Outgoing interface list: Null

pe-sw1#sh ip mro vrf voip-mc 230.0.0.3
Group 230.0.0.3 not found

Then I tried to use SSM (I think it is not a problem to have both a PIM RP and PIM SSM configured):

ip pim rp-address 172.16.8.254                < - Lo0 (core)
ip pim send-rp-announce Loopback0 scope 16 group-list reciever
ip pim send-rp-discovery Loopback0 scope 16
ip pim ssm range reciever


ip pim vrf source rp-address 192.168.9.1      < - Vl 237 (vrf source)
ip pim vrf source send-rp-announce Vlan237 scope 16 group-list reciever
ip pim vrf source send-rp-discovery Vlan237 scope 16
ip pim vrf source ssm range reciever


ip pim vrf reciever rp-address 172.20.178.21   < - vl 956 (vrf reciever)
ip pim vrf reciever send-rp-announce Vlan956 scope 16 group-list reciever
ip pim vrf reciever send-rp-discovery Vlan956 scope 16
ip pim vrf reciever ssm range reciever

pe-sw1#sh ip mro vrf source 230.0.0.3
IP Multicast Routing Table

(192.168.9.5, 230.0.0.3), 00:00:44/00:02:55, flags: sPT
  Incoming interface: Vlan237, RPF nbr 0.0.0.0, RPF-MFD
  Outgoing interface list: Null

pe-sw1#sh ip mro vrf reciever 230.0.0.3
Group 230.0.0.3 not found

Although mstat builds a proper tree:

pe-sw1#mstat
VRF name: reciever
Source address or name: 172.20.178.21
Destination address or name: 192.168.9.5
Group address or name: 230.0.0.3
Multicast request TTL [64]:
Response address for mtrace:
Type escape sequence to abort.
Mtrace from 172.20.178.21 to 192.168.9.5 via group 230.0.0.3 in VRF reciever
From source (?) to destination (?)
Waiting to accumulate statistics......
Results after 10 seconds:

  Source        Response Dest   Packet Statistics For     Only For Traffic
172.20.178.21   172.20.178.21   All Multicast Traffic     From 172.20.178.21
     |       __/  rtt 0    ms   Lost/Sent = Pct  Rate     To 230.0.0.3
     v      /     hop 0    ms   ---------------------     --------------------
172.20.178.21   ?
     |      \__   ttl   0  
     v         \  hop 0    ms        0         0 pps           0    0 pps
192.168.9.5     172.20.178.21  
  Receiver      Query Source

I see that 0 packets are sent. In my first post you can see that mstat run in VRF source shows some traffic activity. I suspect the problem is that no information is exchanged between the VRFs. Each VRF builds its own tree perfectly, but for some reason they don't communicate with each other.

Hello Evgeny,

the scenario is quite complex, so before going into the finer details I would like to recap what we understand.

You have a single C6500 multilayer switch with an appropriate IOS image and feature set, and you are attempting to achieve MVPN extranet.

The current Cisco implementation of MVPN follows the so-called draft-Rosen model:

this means that customer multicast traffic is sent encapsulated in a GRE packet with source = PE loopback and a destination that is a multicast address taken from SP space.

Each VRF has a default multicast distribution tree (MDT) that maps directly to a specific, unique SP multicast address per VRF.

The forwarding plane is implemented by having a different PE node, PE y, that is a member of the same MVPN (directly or via the extranet feature), join the SP MDT of the appropriate VRF.

Once the join is performed, the MDT is used not only for user traffic but also for the exchange of VRF-aware PIM messages, with the mGRE acting as a virtual common LAN.

So ideally PE y has to join the MDT of the other VRF in which PE x is a member.

Now, all this is fine if PE x and PE y are two different boxes, but in your case PE x = PE y, which adds difficulty to the process.

The first suggestion is to have PIM SM enabled under the loopback interface in order to ensure correct handling of GRE traffic with a multicast destination.

The issue is that the device would then have to build a PIM adjacency with itself, which looks difficult.

To understand whether this is the issue, I would suggest the following:

connect two GE ports on the C6500 with a cross-over cable

make them routed ports (no switchport)

make port 1 a member of VRF A

make port 2 a member of VRF B

have them share a common IP subnet

enable PIM in the VRF on both ports

look at the result

If the stream still cannot go from one VRF to the other, consider the use of an MSDP session between the two VRFs, plus appropriate static routes for the source so that it passes the RPF check in the receiving VRF.
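The back-to-back cable plus MSDP fallback might look something like this. This is a hypothetical sketch: the interface numbers, the 10.99.99.0/30 link subnet, and the static route are my own illustrative assumptions, not taken from your configuration:

```
! Hypothetical sketch of the suggestion above; interface numbers
! and the 10.99.99.0/30 link subnet are invented for illustration.
interface GigabitEthernet1/1
 no switchport
 ip vrf forwarding source
 ip address 10.99.99.1 255.255.255.252
 ip pim sparse-mode
!
interface GigabitEthernet1/2
 no switchport
 ip vrf forwarding reciever
 ip address 10.99.99.2 255.255.255.252
 ip pim sparse-mode
!
! MSDP peering between the two VRFs across the cable
ip msdp vrf source peer 10.99.99.2 connect-source GigabitEthernet1/1
ip msdp vrf reciever peer 10.99.99.1 connect-source GigabitEthernet1/2
!
! Static route so the source subnet (192.168.9.0/27) passes the
! RPF check in the receiving VRF
ip route vrf reciever 192.168.9.0 255.255.255.224 10.99.99.1
```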

WARNING:

I assume your setup is not a production network but a lab, so the suggested actions should not do any harm; if it is a production network, consider carefully all possible drawbacks.

Hope to help

Giuseppe

Hello, Giuseppe! It is production equipment, actually, with a lot of active users and resource activity. By the way, the problem was solved simply. While playing with the configs, I deleted the route-target import/export statements and then put them back. That helped, and I found all the mroutes in VRF B. As I mentioned, I tried using 3 multilayer switches and 2 VRFs. Now everything looks fine. I am really not sure about having multicast between VRFs on the same device without MDTs and PIM neighbors, but using 3 switches I got full connectivity. Also, I got one piece of advice: to analyze the show commands and so on in this case, you need at least 1 active receiver in VRF B. Thank you once again.
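For anyone else hitting this, what I did amounts to withdrawing and re-applying the extranet route targets so the import gets reprocessed. A sketch using the RT values from my first post (your values will differ):

```
! Sketch of the fix: remove and re-add the extranet route targets.
! RT values here mirror the first post in this thread.
ip vrf reciever
 no route-target import 1:100
 route-target import 1:100
!
ip vrf source
 no route-target export 1:100
 route-target export 1:100
```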

Hello Evgeny,

it is nice to know that it worked in the production network with three different devices, as our doubts were actually about the test on a single device.

It makes sense, as creating an mGRE tunnel with itself may be difficult...

Best Regards

Giuseppe

Hi Eugene,

I'm having the same problem and I don't know how to proceed.

In the guidelines for Multicast VPN Extranet support, it says that the same MDT default group must be configured. Trying to do that, I always get the following:

ip vrf USERS

mdt default 232.0.0.2

% MDT-default group 232.0.0.2 overlaps with vrf SERVERS

I was wondering how you managed to solve it using different MDT groups on each VRF.

My scenario is the same: a single switch with multiple VRFs, one VRF for multicast users and the rest of the VRFs for multicast servers.

Thanks,