Dual HSRP - Nexus

bradleyordner
Level 3

Hi, 

 

I have a customer that has split Nexus devices between DCs and is possibly running vPC between the DCs. I have read that this is not recommended and that OTV should be used instead. While reading the article below, I also noticed that HSRP, when configured on dual Nexus with vPC, actually responds in an active/active fashion.

 

https://www.ciscozine.com/nexus-vpc-hsrp-vrrp-active-active/

 

It mentions that the peer device responds for all HSRP MACs and then passes the traffic to the primary. Would this cause issues in an L2 environment if the DCs were physically separate, with the peer in the second data centre and the WAN exit back at the primary DC? There would be some delay based on latency between the DCs. Is there a latency threshold, or is anything above 1 ms within the DC a no-no?
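For reference, nothing special in the config triggers this behaviour; a plain HSRP group on the vPC peers' SVIs is enough. A minimal sketch (the VLAN, addresses and priority below are made up for illustration, not taken from any real environment):

feature hsrp
feature interface-vlan
!
! vPC peer 1 (peer 2 is identical except for 10.1.100.3/24 and a lower priority)
interface Vlan100
  no shutdown
  ip address 10.1.100.2/24
  hsrp 100
    priority 110
    preempt
    ip 10.1.100.1

With vPC, the HSRP standby peer forwards traffic sent to the virtual MAC locally instead of bridging it over the peer-link, which is why both chassis behave as active gateways.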

 

Also, if the customer has two fibres between the DCs, is all vPC traffic carried over a bundled link, or is one fibre dedicated to control traffic? (I haven't seen the config of these Nexus devices yet.)
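In the designs I have seen, the two fibres would typically be bundled into the vPC peer-link (a port-channel trunking all the vPC VLANs), with the peer-keepalive running over a separate path, usually mgmt0 or a dedicated routed link; the keepalive carries only heartbeats, never data. A rough sketch, with made-up interface numbers and addresses:

feature vpc
feature lacp
!
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
!
interface Ethernet1/1-2
  channel-group 1 mode active
!
interface port-channel1
  switchport mode trunk
  vpc peer-link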

 

For context, the design is as follows:

 

Outbound from host:

 

1. A host in VLAN 100 has a default gateway of the Nexus HSRP VIP (split across two DCs).
2. The Nexus then forwards to a firewall (IN THE SAME VLAN... what the?).
3. Traffic flows through the firewall, enters the WAN VLAN, and is routed to another Nexus HSRP VIP (split across two DCs).
4. The Nexus forwards to the WAN device (in the SAME VLAN).

 

Inbound:

 

1. WAN to Nexus to firewall.
2. The firewall then sends directly to the host (it has an ARP entry for it), so traffic is asymmetric.

 

Thanks 

 

5 Replies

Sergiu.Daniluk
VIP Alumni

Hi @bradleyordner 

If you are saying that your customer has a vPC domain split across two DCs, meaning one vPC peer in one DC and the other vPC peer in the second DC, that is definitely not a good idea. First, what is the point of a vPC like that, since the whole purpose of vPC is to offer redundancy at layer 2, with the two devices acting logically as one virtual switch? Second, if there is a fibre cut and the peer-link goes down, one DC (the one where the operational secondary peer is located) will go down with it. So if you do have this, I would suggest rethinking the DCI design.

If, on the other hand, you are talking about two vPC domains back-to-back, with a vPC port-channel between them, then that is something else. That is a supported design; it has some caveats, but it is supported.
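Once you get access to the boxes, a few standard NX-OS commands will tell you how the domain is actually built:

show vpc
show vpc role
show vpc peer-keepalive
show vpc consistency-parameters global

The first two show the peer-link state and which peer is operational primary/secondary; the keepalive output tells you which path the heartbeat takes.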

 

Stay safe,

Sergiu

Thanks Sergiu. I am still waiting to get more detailed design documents and diagrams. I don't even have access to the Nexus switches, so a lot of assumptions have been made.

One of the reasons I am looking into this is that we have performance issues towards the WAN.

I know they have 2 Nexus at one DC and 2 at another with spanned VLANs. I can see traffic coming from two source MAC addresses when going to the WAN, and because it was HSRP I couldn't work out why. It seems that when HSRP is used with vPC it is active/active.

Once I get more detail on the DCI I will share.

Brad

Hello,

"I know they have 2 Nexus at one DC and 2 at another with spanned VLANs."

So that most likely means each pair represents its own vPC domain and you have a back-to-back vPC design as the DCI:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/interfaces/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_Interfaces_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_NX-OS_Interfaces_Configuration_Guide_7x_appendix_011...
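In that topology each pair is its own vPC domain, and the DCI fibres are bundled into a single vPC port-channel facing the other site. A minimal sketch for the DC1 pair (domain IDs and interface numbers are assumptions; DC2 mirrors it with a different domain ID, which matters because the domain ID feeds the LACP system ID):

! on both DC1 peers, vpc domain 10 (DC2 uses e.g. domain 20)
interface Ethernet1/10
  channel-group 100 mode active
!
interface port-channel100
  switchport mode trunk
  vpc 100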

 

For this design to work, you need to implement something called "HSRP isolation", where you filter the HSRP messages between the vPC peers (over the DCI) to enforce an active VIP in each DC. The config is in the link above.
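In essence, the isolation just drops HSRP hellos on the DCI port-channel so that each site keeps its own active VIP. A rough sketch, assuming HSRP over UDP 1985 (224.0.0.2 for v1, 224.0.0.102 for v2); take the exact filter from the guide linked above:

ip access-list HSRP_ISOLATION
  10 deny udp any 224.0.0.2/32 eq 1985
  20 deny udp any 224.0.0.102/32 eq 1985
  30 permit ip any any
!
interface port-channel100
  ip port access-group HSRP_ISOLATION in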

 

Stay safe,

Sergiu

 

Out of the blue, I just heard they moved our WAN router (ASR 1001) from a FEX (an N2K-C2248TP-1GE Fabric Extender off a Nexus 5k) to a new Nexus 9000 switch, and the upload issue is resolved.

 

Previously, via iPerf, I could not push more than 50 Mbit/s up (though 400 Mbit/s down) with one TCP stream. I can now do 400 Mbit/s both ways.


Most likely a microburst and buffer-exhaustion issue (a common problem on FEXs).
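If you still have the old FEX port cabled, the drop counters should confirm it (interface numbering below is illustrative; FEX host ports show up as Ethernet<fex-id>/1/<port>):

show queuing interface ethernet 100/1/1
show interface ethernet 100/1/1 counters errors

Look for per-queue drops in the first output and the OutDiscards column in the second.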

 

Cheers,

Sergiu