Our network has the following simplified topology: customers are connected via a switch, which is in turn connected to a Nexus 3172PQ. This switch has several VLAN interfaces. The DHCP server runs its service in a different network, so a DHCP relay agent is needed. This is configured on the Nexus and operates as expected.
Now we want to implement IPv6 as dual stack, but the IPv6 relay on the Nexus didn't work as expected: no packets are forwarded towards the DHCPv6 server. The configuration is listed below.
ipv6 dhcp relay
ipv6 dhcp relay source-interface loopback1
interface vlan 100
ipv6 dhcp relay address fe80::70c7:db9c:cd6:a045 interface vlan 101
The output of “show ipv6 dhcp relay” doesn’t give any hint as to why packets aren’t relayed.
device# sh ipv6 dhcp relay
DHCPv6 relay service : Enabled
Relay source interface : loopback1
Insertion of VPN options : Disabled
Insertion of CISCO options : Disabled
DHCPv6 Relay is configured on the following interfaces:
Interface Vlan100 :
Relay Address Dest. Interface VRF name
------------- --------------- --------
Even with several debug commands and the use of Ethanalyzer I couldn’t find out why this feature doesn’t work. It seems that no traffic is processed by this Nexus device.
Digging deeper into this issue, I found out that the interface is not listening for multicast traffic to the well-known address All_DHCP_Relay_Agents_and_Servers (FF02::1:2). The output of “show ipv6 interface vlan 100” is listed below.
device# sh ipv6 interface vlan 100
IPv6 Interface Status for VRF "default"(1)
Vlan100, Interface status: protocol-up/link-up/admin-up, iod: 11
! output shortened
IPv6 multicast groups locally joined:
ff02::2 ff02::1 ff02::1:ff00:1 ff02::1:ff35:6fc
IPv6 multicast (S,G) entries joined: none
IPv6 MTU: 1500 (using link MTU)
! output shortened
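For reference, every group in that list is explained by NDP alone: ff02::1 (all-nodes), ff02::2 (all-routers), and the solicited-node groups derived per RFC 4291 from the interface's own addresses. FF02::1:2 is not a solicited-node group, so nothing joins it unless the relay service does so explicitly; a missing FF02::1:2 therefore points at the relay service, not at NDP. A small Python sketch of the RFC 4291 derivation (the sample addresses are made up):

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Return the RFC 4291 solicited-node multicast address for a unicast
    or link-local IPv6 address: ff02::1:ffXX:XXXX, where XX:XXXX are the
    low-order 24 bits of the address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

# Illustrative addresses: an interface with ::1 as interface identifier
# joins ff02::1:ff00:1, matching the kind of groups shown above.
print(solicited_node("fe80::1"))               # ff02::1:ff00:1
print(solicited_node("2001:db8::1234:5678"))   # ff02::1:ff34:5678

# All_DHCP_Relay_Agents_and_Servers is a fixed well-known group; it is
# never derived from an address, so the relay service must join it itself.
ALL_DHCP_RELAY = "ff02::1:2"
```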
If I configure a device running IOS or IOS XE as a DHCPv6 relay agent, I can observe that the interface is listening on the multicast address FF02::1:2.
device-ios-xe#sh ipv6 interface lo1
Loopback1 is up, line protocol is up
Joined group address(es):
MTU is 1514 bytes
I was also wondering why the dhcp_snoop service is listed as an IPv6 client,
device# sh ipv6 client dhcp_snoop
IPv6 Registered Client Status
Client: dhcp_snoop, status: up, pid: 7608, extended pid: 7608
Protocol: (none), pib-index: 10, uuid: 442
Routing VRF id: 65535, flags: 3
Control mts SAP: 360
Data mts SAP: 360
IPC messages to control mq: 16 (failed: 0)
IPC messages to data mq: 0 (failed: 0)
even though the interface is not listening on the multicast address. So I configured another relay agent in the same network as the clients to relay the DHCPv6 packets to the unicast address of the Nexus device. Now the Nexus device works as a relay agent: it encapsulates the received Relay-Forward packet into its own Relay-Forward packet and delivers that packet to the DHCPv6 server.
My question is whether this behavior is expected. Is an NX-OS device only capable of relaying unicast DHCPv6 packets, or have I missed some configuration to make this scenario work with multicast as well?
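That double encapsulation is exactly what RFC 8415 specifies for chained relays: each relay wraps whatever it received, a client message or another Relay-Forward, in a Relay Message option inside its own Relay-Forward, incrementing the hop count. A minimal Python sketch of that framing, with made-up addresses:

```python
import struct
import ipaddress

RELAY_FORW = 12      # RFC 8415 Relay-Forward message type
OPT_RELAY_MSG = 9    # Relay Message option carrying the inner message

def relay_forward(hop_count: int, link_addr: str, peer_addr: str,
                  inner_msg: bytes) -> bytes:
    """Build a Relay-Forward message: msg-type, hop-count, link-address,
    peer-address, then a Relay Message option wrapping inner_msg."""
    hdr = struct.pack("!BB16s16s", RELAY_FORW, hop_count,
                      ipaddress.IPv6Address(link_addr).packed,
                      ipaddress.IPv6Address(peer_addr).packed)
    opt = struct.pack("!HH", OPT_RELAY_MSG, len(inner_msg)) + inner_msg
    return hdr + opt

# Illustrative: a client Solicit (msg-type 1 + transaction-id) relayed
# twice; the first relay sits in the client VLAN, the second relay (the
# Nexus in the workaround) wraps the received Relay-Forward in its own.
solicit = bytes([1, 0, 0, 1])
hop1 = relay_forward(0, "2001:db8:100::1", "fe80::aaaa", solicit)
hop2 = relay_forward(1, "2001:db8:101::1", "fe80::bbbb", hop1)
```

The server unwraps the nesting in reverse with matching Relay-Reply messages, one per hop.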
Many years have passed since I last configured IPv6 on a Nexus, but the following command was required to get multicast to behave in a sane manner:
no ip igmp snooping optimised-multicast-flood
The command can be issued per VLAN, so it can be tested in isolation.
Hey, thank you for your reply,
even though this command exists in the documentation (https://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/5_x/nx-os/multicast/configuration/guide/n7k_multic_cli_5x/igmp_snoop.html), it is not available on our device.
device(config-if)# int vlan 100
device(config-if)# ip igmp ?
access-group IGMP access-group
group-timeout Configures group membership timeout for IGMPv2
immediate-leave Enable/Disable immediate leave
join-group Configures local group membership for router
last-member-query-count Configures number of group-specific Queries sent
last-member-query-response-time Configures last member query response time
querier-timeout Configures querier timeout for IGMPv2
query-interval Configures interval between Query transmission
query-max-response-time Configures MRT for query messages
query-timeout Configures querier timeout for IGMPv2
report-link-local-groups Send Reports for groups in 224.0.0.0/24
report-policy IGMP Report Policy
robustness-variable Configures RFC defined Robustness Variable
startup-query-count Configures number of queries sent at startup
startup-query-interval Configures query interval at startup
state-limit Configures State limit
static-oif Configures static oif for a multicast forwarding entry
version Configures IGMP version number for interface
Can you try configuring the same command, but in global mode?
Hey, I've had the same idea in mind, but unfortunately it is also not available.
By the way, we are using this device for IPv4 multicast without any known problems.
device(config)# no ip igmp snooping ?
event-history Configure event-history buffers
group-timeout Configures group membership timeout in all VLAN/BDs
link-local-groups-suppression Configures Global link-local groups suppression
max-gq-miss Configure general query miss count
mrouter Configures static multicast router interface
proxy Configures IGMP snooping proxy
report-suppression Configures Global IGMPv1/IGMPv2 Report Suppression
syslog-threshold IGMP SNOOPING table syslog threshold
v3-report-suppression Configures Global IGMPv3 Report Suppression and Proxy Reporting
device(config)# no ip igmp snooping