Hi, I just wanted to confirm a few things about our vPC design.
1. The DCI is connected via a Layer 2 link using vPC. It's not a full mesh but cross-connected instead: DC1-01 is connected to DC2-02, and DC1-02 is connected to DC2-01. Is this a problem? The design guide doesn't have it crossed over.
2. How would I configure the HSRP addressing in both DCs for the same stretched VLAN, for example VLAN 10? Assume FHRP isolation is in place between the DCs. Do I need a unique VLAN 10 IP address on each vPC peer core switch, with all four switches sharing the same HSRP virtual address? Do you see any problems here?
3. Can I do OSPF peering between the two data centers over a vPC VLAN 20 SVI between the 01 or 02 switches?
4. Do you see any issues with the access switch that is connected as an orphan to both DCs? Assuming the core 01 or 02 switches are the root bridge, one of the access switch uplinks will be in STP blocking state, right? Any other problems apart from this?
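For question 2, here is a sketch of the addressing scheme I have in mind (all IPs are placeholders): each of the four core switches gets a unique VLAN 10 SVI address, while all four share the same HSRP group and virtual IP, relying on the FHRP isolation to keep each DC's active/standby election local.

```
feature hsrp
! DC1-01
interface Vlan10
  ip address 10.0.10.2/24
  hsrp 10
    ip 10.0.10.1
! DC1-02
interface Vlan10
  ip address 10.0.10.3/24
  hsrp 10
    ip 10.0.10.1
! DC2-01 and DC2-02: same HSRP group 10 and virtual IP 10.0.10.1,
! with unique SVI addresses 10.0.10.4 and 10.0.10.5
```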
Please refer to my diagram:
- two data centers; each data center with back-to-back vPC between core and edge switches (N9K and N3K)
- both DCs connected via DCI using vPC (not in full mesh)
- primary and standby vPC roles labelled on the diagram as P and S
- a special access switch is connected to a single vPC switch in each DC (DC1-01 and DC2-01)
- FHRP isolation is in place
What is the reason to have the DCI links crossed?
Please let me know if you made any progress; I want to implement the same design, together with MACsec on the DCI links.
This document will be helpful: https://www.cisco.com/c/en/us/support/docs/switches/nexus-7000-series-switches/118934-configure-nx7k-00.html
Thanks Jason, I saw that link, but it's for N7Ks.
I wasn't able to find anything for the Nexus 9300 with MACsec, but I expect it works much the same, with some limitations: MACsec on physical interfaces only, not on L2/L3 port-channels, and for L3 requirements, peering between SVIs.
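For reference, this is a minimal sketch of what I have in mind for per-physical-interface MACsec on NX-OS; the keychain, policy name, interface, and key string are all placeholders, and platform support would still need to be verified:

```
feature macsec

key chain DCI-KC macsec
  key 01
    key-octet-string <64-hex-char-key> cryptographic-algorithm AES_256_CMAC

macsec policy DCI-POLICY
  cipher-suite GCM-AES-XPN-256

! applied on each physical DCI member link, not on the port-channel
interface Ethernet1/49
  macsec keychain DCI-KC policy DCI-POLICY
```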
Here you can find the full list of guidelines and limitations for MACsec: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/93x/security/configuration/guide/b-cisco-nexus-9000-nx-os-security-configuration-guide-93x/b-cisco-nexus-9000-nx-os-security-configuration-guide-93x_chapter_011001.html#id_72606
The first thing I want to highlight is that you have four vPC domains chained in a double-sided vPC. As far as I remember, this was not supported and the maximum was three domains; I'm not sure if anything changed in newer versions. Either way, I would advise you to avoid this chaining, as it can complicate things, especially when you need to troubleshoot inter-DC flows.
You can simplify the design either by removing the DCI vPC or by removing the vPC domains of DC#-03-04.
Opting for the second option, your topology will look like this:
In order to isolate HSRP, you must configure a Port Access Control List (PACL) on the DCI port-channel and disable HSRP gratuitous ARPs (GARPs) on the SVIs for the VLANs that are stretched across the DCI.
ip access-list DENY_HSRP_IP
  10 deny udp any 224.0.0.2/32 eq 1985
  20 deny udp any 224.0.0.102/32 eq 1985
  30 permit ip any any

interface <DCI-Port-Channel>
  ip port access-group DENY_HSRP_IP in

interface Vlan<x>
  no ip arp gratuitous hsrp duplicate
Alternatively, you can adopt a more modern solution like VXLAN EVPN. Here you can find the whitepapers for VXLAN:
Yeah, I have it on our roadmap, but for now the customer hasn't accepted it, and as a first step they wanted to implement the DCI. I think we need to apply some pressure and discuss the VXLAN EVPN option ;)