08-21-2015 03:31 PM - edited 03-08-2019 01:28 AM
Guys,
I want to test multicast end to end across my two DCs. My core and aggregation layers are Nexus 7K VDCs, and I have an MPLS core of 6500s which I manage. The infrastructure is not massively relevant to the overall requirement, but it is a pretty complex environment.
I also have OTV VDCs, running unicast OTV between the DCs.
What I want to be able to do is test multicast, say from the OTV VDC (or the agg) in DC1 to DC2, for ongoing confirmation that multicast works without poking into multicast routing tables; I do not have access to server infrastructure from which to source or receive multicast.
I could use multicast OTV to do this, but I do not want to set up that amount of additional config if there is a simpler way.
Obviously sourcing multicast traffic with multicast pings is easy.
Q. Is there a simple means of setting up a multicast receiver, e.g. on an SVI, or in any other way?
My use case is on NX-OS, but I assume other people might have an IOS requirement here.
08-24-2015 02:30 AM
Hi,
There are typically two ways to do this.
The first is with the ip igmp join-group <group> command in interface context. This command causes the router to send an IGMP Membership Report for the group on the interface where you configure it, i.e. the router itself joins the group. It also results in traffic sent to the multicast group being process switched, punting it to the router CPU for processing.
The second option is using the ip igmp static-oif <group> command in NX-OS or ip igmp static-group <group> command in IOS. This command statically binds the multicast group to the interface on which it is configured such that traffic destined to this group is sent out the interface using the hardware or fast switching path of the router.
The obvious difference between the above commands when using ping to generate traffic is that you’ll receive ping responses from the router when using the ip igmp join-group command, but not when using the ip igmp static-oif command.
So here I’ve configured two groups:
r6#sh run int gi0/1
Building configuration...

Current configuration : 203 bytes
!
interface GigabitEthernet0/1
 ip address 10.0.6.1 255.255.255.0
 ip pim sparse-mode
 ip igmp join-group 239.255.10.10
 ip igmp static-group 239.255.20.20
 duplex auto
 speed auto
 media-type rj45
end
And when sending traffic to the two groups using ping:
r5#ping 239.255.10.10 source 5.5.5.5 repeat 3
Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 239.255.10.10, timeout is 2 seconds:
Packet sent with a source address of 5.5.5.5
Reply to request 0 from 10.0.6.1, 19 ms
Reply to request 1 from 10.0.6.1, 1 ms
Reply to request 2 from 10.0.6.1, 1 ms
r5#ping 239.255.20.20 source 5.5.5.5 repeat 3
Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 239.255.20.20, timeout is 2 seconds:
Packet sent with a source address of 5.5.5.5
...
When we look at the show ip mroute output for the two groups, we can see the traffic has been forwarded in both cases, but there was obviously no indication at the source for the group joined using the ip igmp static-group command.
r6#sh ip mroute 239.255.10.10 count
Use "show ip mfib count" to get better response time for a large number of mroutes.
IP Multicast Statistics
5 routes using 2770 bytes of memory
3 groups, 0.66 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 239.255.10.10, Source count: 1, Packets forwarded: 3, Packets received: 3
  RP-tree: Forwarding: 0/0/0/0, Other: 0/0/0
  Source: 5.5.5.5/32, Forwarding: 3/0/100/0, Other: 3/0/0

r6#sh ip mroute 239.255.20.20 count
Use "show ip mfib count" to get better response time for a large number of mroutes.
IP Multicast Statistics
5 routes using 2770 bytes of memory
3 groups, 0.66 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 239.255.20.20, Source count: 1, Packets forwarded: 3, Packets received: 3
  RP-tree: Forwarding: 0/0/0/0, Other: 0/0/0
  Source: 5.5.5.5/32, Forwarding: 3/0/100/0, Other: 3/0/0
When using the ip igmp join-group command note the caution in the Configuring IGMP Interface Parameters section of the configuration guide:
“Caution The device CPU must be able to handle the traffic generated by using this command. Because of CPU load constraints, using this command, especially in any form of scale, is not recommended. Consider using the ip igmp static-oif command instead.”
If you’re only generating low-volume multicast traffic using the ping command, the first method is the simpler option, as you don’t need to look at the destination router to check whether the stream was received.
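Since the original question was about NX-OS, here is a minimal sketch of both options applied to an SVI; VLAN 100 and the group addresses below are just assumptions for illustration, not taken from your environment:

feature pim
feature interface-vlan
!
interface Vlan100
  ip pim sparse-mode
  ! CPU-processed receiver: multicast pings to this group get replies
  ip igmp join-group 239.255.10.10
  ! hardware-forwarded receiver: no ping replies, check the mroute counters instead
  ip igmp static-oif 239.255.20.20

You can then check show ip igmp groups and show ip mroute 239.255.10.10 on the receiving VDC to confirm the joins and the forwarded traffic.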
Regards
08-22-2015 12:26 PM
Hi,
To test multicast, you don't necessarily need a server. You can use a couple of laptops or PCs running VLC. Make one laptop your multicast source (e.g. playing a movie from a CD) and one or more laptops your receivers.
VLC is free:
http://vlc-media-player.en.softonic.com/
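If you prefer driving it from the command line rather than the GUI, a rough sketch would be something like the following; the file name, group address and port are placeholders, not specific to this thread:

# source laptop: stream a local file to a multicast group over RTP
vlc movie.mp4 --sout '#rtp{dst=239.255.10.10,port=5004,mux=ts}'

# receiver laptop(s): join the group and play the stream
vlc rtp://@239.255.10.10:5004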
HTH