This document explains First Hop Redundancy Protocol (FHRP) isolation, covering HSRP, VRRP, and similar protocols, in Overlay Transport Virtualization (OTV). In order to allow the same default gateway to exist in different locations and to optimize outbound traffic flows (server-to-client direction), OTV filters FHRP messages across the logical overlay. Because the same VLAN/IP subnet is available in multiple sites, the free exchange of FHRP messages across the OTV connection would lead to the election of a single default gateway. Traffic would then be forced to follow a suboptimal path to reach the default gateway (in the site where it was elected) every time it needed to be routed outside the subnet while the server was located in a different site.
Consider two data centers, each with three layers (Core, Distribution, and Access). The Core layer performs only routing, the Distribution layer hosts the VLANs and VLAN interfaces (SVIs), and the Access layer connects the users. VLAN 200 must be extended from DC1 to DC2. VLAN 200 is extended to DC2 and works fine without FHRP enabled. However, when HSRP is enabled in DC2 (in Listen mode), many servers in DC2 on VLAN 100 that get information from DC1 on VLAN 200 stop working, even though inter-site and inter-VLAN pings continue to work well.
HSRP is configured for VLAN 200 in both DC1 (between switches A and B) and DC2 (between switches C and D). Because it is one contiguous subnet, all hosts in VLAN 200 have their gateway set to A.B.C.D, which is the same virtual gateway IP address in both sites. Ideally, traffic from DC1 should exit via switch A and traffic from DC2 via switch C, since both of these switches were set up with the same HSRP priority. But this is not what happens by default.
From switch A's perspective, the active HSRP device for VLAN 200 becomes switch C; switches B and D see the same election result. This happens because switches A and C have their HSRP priorities set to the same value, so the router with the higher interface IP address is elected the active HSRP device, in this case switch C. As a result, all traffic toward the gateway A.B.C.D is forwarded to DC2.
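The situation above can be reproduced with a minimal HSRP configuration on the two distribution switches. The interface numbers, IP addresses, group number, and priority below are illustrative, not taken from the original scenario:

    ! Switch A (DC1) - same group, same virtual IP, same priority as switch C
    interface Vlan200
      ip address 10.1.200.2/24
      hsrp 200
        ip 10.1.200.1
        priority 100
    !
    ! Switch C (DC2) - with equal priorities, its higher interface IP
    ! (10.1.200.4 > 10.1.200.2) wins the tie and it becomes Active for both sites
    interface Vlan200
      ip address 10.1.200.4/24
      hsrp 200
        ip 10.1.200.1
        priority 100

With the VLAN extended over OTV and no FHRP filtering, these four switches form one HSRP group, and the tie-break on interface IP address decides the single active gateway for both data centers.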
FHRP isolation is the act of filtering HSRP, VRRP, or GLBP traffic so that it does not cross the overlay, thereby forcing local FHRP elections. There are two parts to the filtering:
1) The election process must be contained within each site so that a local active device is elected.
2) Even with the election contained, the virtual MAC addresses would still be advertised across the overlay, causing constant MAC moves (local site, remote site, local site, and so on), so they must be filtered as well.
Point 1 is accomplished with a VLAN ACL on the OTV edge devices that filters the relevant traffic, depending on which FHRP protocol is used. To prevent the virtual MAC addresses from causing MAC moves, and to allow for a cleaner design, an OTV route-map must also be configured. This route-map must match the virtual MAC of the FHRP protocol in use. You will need both an extended MAC ACL and an extended IP ACL; on Nexus switches you can combine them in a VACL.
The current Cisco N7K OTV implementation requires separation between SVI routing and OTV encapsulation for a given VLAN. In addition, the Join interface can only be defined as a physical interface (or subinterface) or as a logical one (i.e. Layer 3 port channel or Layer 3 port channel subinterface).
Note: Support for loopback interfaces as OTV Join interfaces is planned for a future NX-OS release.
To meet these constraints, two VDCs will be deployed on the Delta N7Ks:
1) an OTV VDC dedicated to performing the OTV functionality
2) a Distr VDC used to provide SVI routing support
We could either configure the OTV VDC ‘on-a-stick’ with regard to the Distr VDC, or directly connect the OTV VDC to the transport WAN. The ‘on-a-stick’ design is preferred. One clear advantage is that once NX-OS no longer requires separation between SVI routing and OTV encapsulation for a given VLAN, Delta will easily be able to migrate to using just the Distr VDC for both functions. The only migration steps needed will be to move the OTV configuration from the OTV VDC to the Distr VDC and deactivate the OTV VDC. This migration would be transparent to the rest of the data center network.
Note: There is a small cost: the ‘on-a-stick’ design uses at least one extra pair of interfaces on each N7K.
The N7K also requires that the routed SVI not be carried across the vPC peer link, so a separate Layer 3 link will be implemented between the vPC peers. This Layer 3 link can also be used for the vPC peer keepalives. As noted above, the current OTV implementation on the Nexus 7000 enforces the separation between SVI routing and OTV encapsulation for a given VLAN; this separation is traditionally worked around by having two separate network devices perform the two functions.
Configuring FHRP Isolation
Follow these steps to configure FHRP isolation.
Step 1. Configure a VLAN ACL (VACL) on the OTV VDC.
    ip access-list HSRP_IP
      10 permit udp any 224.0.0.2/32 eq 1985
      20 permit udp any 224.0.0.102/32 eq 1985
    !
    mac access-list HSRP_VMAC
      10 permit 0000.0c07.ac00 0000.0000.00ff any
      20 permit 0000.0c9f.f000 0000.0000.0fff any

The IP ACL matches HSRP hellos sent to the well-known multicast addresses 224.0.0.2 (HSRPv1) and 224.0.0.102 (HSRPv2) on UDP port 1985; the MAC ACL matches the HSRPv1 (0000.0c07.acXX) and HSRPv2 (0000.0c9f.fXXX) virtual MAC ranges.
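The ACLs above do not filter anything until they are tied together in a VLAN access-map and applied to the extended VLANs. A common way to do this on NX-OS is sketched below; the access-map name and VLAN list are illustrative:

    ! Drop HSRP hellos and frames sourced from the HSRP virtual MACs,
    ! forward everything else on the extended VLAN(s)
    vlan access-map HSRP_Localization 10
      match ip address HSRP_IP
      action drop
    vlan access-map HSRP_Localization 20
      match mac address HSRP_VMAC
      action drop
    vlan access-map HSRP_Localization 30
      action forward
    !
    vlan filter HSRP_Localization vlan-list 200

Applied on the OTV VDC, this keeps the HSRP election local to each site (point 1 above).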
Step 2. Apply a route-map to the OTV control protocol.
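To stop the virtual MAC addresses from being advertised across the overlay (point 2 above), a MAC list and route-map are attached to the OTV IS-IS control plane. A sketch of a commonly used configuration follows; the mac-list and route-map names are illustrative, and the final permit-all entry is required so that all other MAC addresses are still advertised:

    ! Deny the HSRPv1 and HSRPv2 virtual MAC ranges, permit everything else
    mac-list OTV_HSRP_VMAC_deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
    mac-list OTV_HSRP_VMAC_deny seq 11 deny 0000.0c9f.f000 ffff.ffff.f000
    mac-list OTV_HSRP_VMAC_deny seq 20 permit 0000.0000.0000 0000.0000.0000
    !
    route-map OTV_HSRP_filter permit 10
      match mac-list OTV_HSRP_VMAC_deny
    !
    otv-isis default
      vpn Overlay0
        redistribute filter route-map OTV_HSRP_filter

With both the VACL and this route-map in place, each site elects its own active gateway and the virtual MACs no longer flap between sites.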