Nexus 9372 TX implementation

Florin Barhala
Level 6

Hi guys,

 

I am in the process of replacing two classic IOS HSRP-enabled core switches with two Nexus 9372s.

I configured vPC on both, along with the vPC peer link and the vPC peer-keepalive link.

Next, I connected all uplinks from the access switches (pure L2 switches) to each Nexus and configured a vPC for each.

 

show run vpc

version 7.0(3)I4(7)
feature vpc

vpc domain 102
  peer-keepalive destination 5.5.5.1 source 5.5.5.2 vrf pkal
  delay restore 150
  peer-gateway

interface port-channel3
  vpc 3

....

.....

interface port-channel9
  vpc 9

interface port-channel10
  vpc peer-link
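
For reference, each access-switch uplink is bundled into its port-channel roughly like this on both Nexus switches (the physical interface below is just an example, not my exact port):

interface Ethernet1/3
  description --- uplink to access switch on Po3 ---
  switchport mode trunk
  channel-group 3 mode active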

 

So far, so good. Today I shut down all SVIs on the standby IOS HSRP switch and added the Nexus_2 switch as the standby HSRP switch.

Here are my issues:

1. From Nexus_2, I can't ping any connected device within an SVI.

Let's take SVI 610:

ping 172.16.11.129, i.e. the IP of ISP1 (see the diagram); obviously ISP1 and ISP2 run HSRP between them:
PING 172.16.11.129 (172.16.11.129): 56 data bytes
Request 0 timed out
Request 1 timed out
^C
--- 172.16.11.129 ping statistics ---
3 packets transmitted, 0 packets received, 100.00% packet loss
show ip arp vlan 610

Flags: * - Adjacencies learnt on non-active FHRP router
+ - Adjacencies synced via CFSoE
# - Adjacencies Throttled for Glean
CP - Added via L2RIB, Control plane Adjacencies
D - Static Adjacencies attached to down interface

IP ARP Table
Total number of entries: 1
Address          Age       MAC Address      Interface
172.16.11.129    00:03:19  0000.0c07.ac01   Vlan610
ping 172.16.11.129
PING 172.16.11.129 (172.16.11.129): 56 data bytes
Request 0 timed out

 

interface Vlan610
  description --- Data segment ---
  no shutdown
  ip address 172.16.11.141/28
  hsrp 61
    preempt delay minimum 15
    priority 105
    ip 172.16.11.142
    track 7

 

Two notes:

Nexus_2 acts as the vPC secondary:

 

show vpc brief
Legend:
(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 102
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 consistency status : failed
Type-2 inconsistency reason : SVI type-2 configuration incompatible
vPC role : secondary
Number of vPCs configured : 7
Peer Gateway : Enabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Disabled
Delay-restore status : Timer is off.(timeout = 150s)
Delay-restore SVI status : Timer is off.(timeout = 10s)

vPC Peer-link status
---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po10 up 1,3,5-7,10,13,20-21,30,32,40,42-43,50-51,55,60,65,
70,75,80-81,90,104,110-111,115,120-122,125,160,200
-203,208-211,215,218,220,249,288,505,507,555,560,6
00,605,610,615,620,630,777

 

and there's no SVI for VLAN 610 on Nexus_1.

I noticed that if I enable the VLAN 610 SVI on Nexus_1, ping does work, but I'm still not 100% sure why. Is this how the control plane on Nexus_2 handles packets?
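
For reference, what I bring up on Nexus_1 to test is roughly the mirror of the Nexus_2 SVI (the address below is only a placeholder, not my real one):

interface Vlan610
  no shutdown
  ! placeholder address, a free host IP from the same /28
  ip address 172.16.11.140/28
  hsrp 61
    ip 172.16.11.142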

 

I'll post my 2nd and 3rd issues after we sort out this 1st one.

 

Thanks,

Florin.

6 Replies

Philip D'Ath
VIP Alumni

Nexus switches don't work anything like other switches. I highly doubt you'll be able to get them to join another HSRP group. With Nexus, both switches in a vPC pair process packets; there isn't really a standby, it's more of an active/active arrangement. They also won't forward traffic that arrives over the vPC peer link out of a vPC member port, so it is really easy to get caught out by a design that works in the Catalyst world but won't work in the Nexus world.

 

Have a quick look at this:

https://www.cisco.com/c/en/us/support/docs/switches/nexus-7000-series-switches/113002-nexus-hsrp-00.html

And note these restrictions:

https://www.cisco.com/c/en/us/support/docs/switches/nexus-7000-series-switches/113002-nexus-hsrp-00.html#vpc

 

Hello Philip,

 

Thanks for the follow-up. As we speak, HSRP is working without any issues.

But I can't say the same about ICMP. Assuming ICMP is being hindered by that vPC rule, how come when I enable the SVI on Nexus_1, all of a sudden Nexus_2 can ping?

 

 

For migration purposes I've typically connected both legacy cores to the new Nexus cores, because of the forwarding rules within vPC (anything received over the peer link will not be allowed out of vPC member ports on the second switch). HSRP will also work if they are part of the same HSRP group, but you'll run into lots of issues with asymmetric routing. You can do it if you set up the same HSRP group between the two sets of switches and create ACLs and MACLs to prohibit HSRP from passing between the Catalyst switches and the Nexus switches.
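
Something along these lines on the interconnect toward the legacy cores would keep the two HSRP pairs separate (interface and ACL name are just examples, adjust to your setup; HSRPv1 uses UDP 1985 to 224.0.0.2, HSRPv2 to 224.0.0.102):

ip access-list BLOCK-HSRP
  ! drop HSRP hellos so the Catalyst and Nexus groups never see each other
  deny udp any host 224.0.0.2 eq 1985
  deny udp any host 224.0.0.102 eq 1985
  permit ip any any

interface port-channel20
  ! example L2 link toward the legacy cores
  ip port access-group BLOCK-HSRP in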

Hello Rick,

 

Thanks for the follow-up. Obviously I am using the same HSRP group ID. Still, my current issue/question stands:

 - Why is ICMP not working?

 - Let's say it's because traffic IS NOT sent over the peer link, OK. Then what changes when I enable the same SVI on Nexus_1? How come traffic is then sent over the peer link?

P.S. I agree my setup doesn't follow all the recommendations, as you said, but I still think it should have worked. As far as I've read the 9k documentation, I did not find my topology listed as NOT supported.

HSRP is rather different in the Nexus implementation, but it should work in this case. Remember that on Nexus, HSRP acts as active-standby only in the control plane; in the data plane it is active-active.

I think the problem here is related to the peer-gateway feature enabled on the vPC domain, which instructs each switch to respond on behalf of the other vPC switch's MAC address. In this case Nexus_1 is consuming the packets destined for Nexus_2.

Try to exclude the VLAN from peer-gateway:

 

peer-gateway exclude-vlan [123]
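
Under your vPC domain it would sit roughly like this (assuming VLAN 610 is the affected one, and that your NX-OS release supports the exclude-vlan option):

vpc domain 102
  peer-gateway exclude-vlan 610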

 

Regards,

Georgian
