This document explains First Hop Redundancy Protocol (FHRP: HSRP, VRRP, and so on) isolation in Overlay Transport Virtualization (OTV). To allow the same default gateway to exist in different locations and to optimize outbound traffic flows (server-to-client direction), OTV filters FHRP messages across the logical overlay. Because the same VLAN/IP subnet is available in different sites, the free exchange of FHRP messages across the OTV connection would lead to the election of a single default gateway. Traffic would then follow a suboptimal path to reach that default gateway (in the site where it was elected) every time a server in a different site needed traffic routed outside the subnet.
Consider two data centers, each built with three layers (Core, Distribution, and Access): the Core layer performs only routing, the Distribution layer holds the VLANs and their SVIs (interface VLANs), and the Access layer connects the users. The goal is to extend VLAN 200 from the DC1 site to the DC2 site. VLAN 200 extends to DC2 and works fine with FHRP disabled. But once HSRP is enabled at the DC2 site in Listen mode, many servers in DC2 on VLAN 100 that pull data from VLAN 200 in DC1 stop working, even though inter-site and inter-VLAN pings still succeed.
HSRP is configured for VLAN 200 in both DC1 (between switches A and B) and DC2 (between switches C and D). Because it is one contiguous subnet, all hosts in VLAN 200 have their gateway set to A.B.C.D, the same virtual gateway IP address in both sites. Ideally, traffic from DC1 should exit via switch A and traffic from DC2 via switch C, since both switches were configured with the same HSRP priority. But this is not what happens by default.
On switch A, the active HSRP device for VLAN 200 becomes switch C, and the same is seen on switches B and D. This happens because switches A and C have identical HSRP priorities, so the tiebreaker applies: the router with the higher interface IP address is elected the active HSRP device, in this case switch C. As a result, all traffic toward the gateway A.B.C.D is forwarded to DC2.
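As a sketch of the configuration that produces this tiebreak (the interface numbers and the 192.0.2.0/24 addressing are illustrative assumptions standing in for A.B.C.D):

```
! Switch A (DC1) - hypothetical addressing
interface Vlan200
  ip address 192.0.2.2/24
  hsrp 200
    priority 110
    ip 192.0.2.1          ! virtual gateway (A.B.C.D in the text)

! Switch C (DC2) - same priority, higher interface IP
interface Vlan200
  ip address 192.0.2.4/24
  hsrp 200
    priority 110
    ip 192.0.2.1          ! same virtual gateway IP
```

With equal priorities, switch C's higher interface IP (192.0.2.4 versus 192.0.2.2) wins the election as soon as the HSRP hellos cross the overlay.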
FHRP isolation is the act of filtering HSRP, VRRP, or GLBP traffic so that it does not cross the overlay, thereby forcing local FHRP elections. There are two parts to filter:
1) The election process must be contained within each site so that each site elects a local active device.
2) The virtual MAC addresses would otherwise still be advertised across the overlay, causing constant MAC moves (local site, remote site, local site, and so on).
Point 1 is accomplished with a VLAN ACL on the OTV Edge Devices that filters the relevant traffic for whichever FHRP protocol is in use. To prevent the virtual MAC addresses from causing MAC moves and to allow a cleaner design, an OTV route-map must also be configured; it must match the virtual MAC of the FHRP protocol in use. Both a MAC ACL and an extended IP ACL are required; on Nexus switches these are applied through VACLs.
The current Cisco N7K OTV implementation requires separation between SVI routing and OTV encapsulation for a given VLAN. In addition, the Join interface can only be defined as a physical interface (or subinterface) or as a logical one (i.e. Layer 3 port channel or Layer 3 port channel subinterface).
Note: Support for loopback interfaces as OTV Join interfaces is planned for a future NX-OS release.
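As a sketch of an overlay that satisfies these constraints, with a physical interface as the Join interface (the interface numbers, site identifier, addressing, and multicast groups below are illustrative assumptions, and a multicast-enabled transport is assumed):

```
feature otv
otv site-identifier 0x1
otv site-vlan 99                      ! site VLAN for local adjacency

interface Ethernet1/1                 ! physical Join interface (hypothetical)
  ip address 172.16.1.1/30
  ip igmp version 3
  no shutdown

interface Overlay0
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 200
  no shutdown
```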
To meet these constraints, two VDCs will be deployed at the Delta N7Ks:
1) an OTV VDC dedicated to performing the OTV functionality
2) a Distr VDC used to provide SVI routing support.
We could either configure the OTV VDC ‘on-a-stick’ with regard to the Distr VDC, or connect the OTV VDC directly to the transport WAN. The ‘on-a-stick’ design is preferable. One clear advantage is that once NX-OS no longer requires separation between SVI routing and OTV encapsulation for a given VLAN, Delta will easily be able to migrate to using just the Distr VDC for both SVI routing and OTV encapsulation. The only migration steps needed will be to move the OTV configuration from the OTV VDC to the Distr VDC and deactivate the OTV VDC. This migration would be transparent to the rest of the data center network.
Note: There is a small cost: the ‘on-a-stick’ design uses at least one extra pair of interfaces on each N7K.
The N7K also requires that the routed SVI traffic not be carried across the vPC peer link, so a separate Layer 3 link will be implemented between the vPC peers. This Layer 3 link can also carry the vPC peer keepalives. The current OTV implementation on the Nexus 7000 enforces the separation between SVI routing and OTV encapsulation for a given VLAN; this separation can be achieved with the traditional workaround of using two separate network devices to perform the two functions.
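A sketch of that separate Layer 3 link, also used for the peer keepalives (the interface number, the 10.255.255.0/30 addressing, and the vPC domain number are illustrative assumptions):

```
! Hypothetical point-to-point Layer 3 link between the vPC peers
interface Ethernet1/47
  no switchport
  ip address 10.255.255.1/30
  no shutdown

vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf default
```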
Configuring FHRP Isolation
Follow these steps to configure FHRP isolation.
Step 1. Configure a VLAN ACL (VACL) on the OTV VDC.
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985
  20 permit udp any 224.0.0.102/32 eq 1985
!
mac access-list HSRP_VMAC
  10 permit 0000.0c07.ac00 0000.0000.00ff any
  20 permit 0000.0c9f.f000 0000.0000.0fff any
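The ACLs above only classify the HSRP traffic (224.0.0.2 for HSRPv1, 224.0.0.102 for HSRPv2, both UDP port 1985, plus the v1 and v2 virtual MAC ranges). To actually drop it, they are tied together in a VLAN access-map and applied to the extended VLANs; a sketch follows (the access-map and catch-all ACL names, and the VLAN list, are illustrative assumptions):

```
ip access-list ALL_IPs
  10 permit ip any any
mac access-list ALL_MACs
  10 permit any any
!
vlan access-map HSRP_Localization 10
  match ip address HSRP_IP
  match mac address HSRP_VMAC
  action drop
vlan access-map HSRP_Localization 20
  match ip address ALL_IPs
  match mac address ALL_MACs
  action forward
!
vlan filter HSRP_Localization vlan-list 200
```

Sequence 10 drops the FHRP hellos so each site runs its own election; sequence 20 forwards everything else unchanged.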
Step 2. Apply a route-map to the OTV control protocol.
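A sketch of this step using the OTV `mac-list` and IS-IS redistribution filter on the OTV VDC (the mac-list and route-map names are illustrative assumptions; the MAC patterns match the HSRPv1 and HSRPv2 virtual MAC ranges from Step 1):

```
mac-list OTV_HSRP_VMAC_deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list OTV_HSRP_VMAC_deny seq 11 deny 0000.0c9f.f000 ffff.ffff.f000
mac-list OTV_HSRP_VMAC_deny seq 20 permit 0000.0000.0000 0000.0000.0000
!
route-map OTV_HSRP_filter permit 10
  match mac-list OTV_HSRP_VMAC_deny
!
otv-isis default
  vpn Overlay0
    redistribute filter route-map OTV_HSRP_filter
```

This stops the OTV control plane from advertising the FHRP virtual MACs to remote sites, preventing the constant MAC moves described earlier.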