I have a problem with my VLAN extension:
1. I have two data centers, each with three layers (Core, Distribution, and Access). My Core layer does routing only, the Distribution layer holds the VLANs and VLAN interfaces (SVIs), and my users are at the Access layer.
2. I created two VDCs per data center, using unicast. All my inter-datacenter traffic goes over dark fiber (MPLS is only a backup link). I want to extend VLAN 200 from my San Borja site to the Lima site.
3. I extended VLAN 200 to the Lima site and it works fine without an FHRP enabled, but if I try ping -l 1440 172.24.144.10 I get no answer.
I tried enabling MTU 9216 on all Layer 3 interfaces, but I still get no answer with ping -l 1440 172.24.144.10. With ping -l 1430 172.24.144.10 I get answers without problems.
4. I tried enabling HSRP at the Lima site in listen mode, but many servers at the Lima site in VLAN 100 that get information from the San Borja site in VLAN 200 stopped working, even though pings between sites and between VLANs work very well. I don't know what is happening, please help me...
Could you please send a more detailed diagram of your network setup, including the interface details? (In MS Visio if you have it, then zip it; otherwise I will create one in case you don't have it.)
I am checking your configuration right now.
This is a very well-known case called FHRP isolation.
You have configured HSRP for VLAN 200 at both Site-1 (between switches A and B) and Site-2 (between switches C and D). Because it is a contiguous subnet, all hosts in VLAN 200 have their gateway set to A.B.C.D, which is the same virtual gateway IP address at both sites. Ideally, traffic from Site-1 should exit via switch A and traffic from Site-2 via switch C, since both switches were set up with the same HSRP priority. But this is not what happens by default.
On switch A, the active HSRP device for VLAN 200 is switch C, and the same is true on switches B and D. Why? Since switches A and C have the same HSRP priority, the router with the higher interface IP address is elected as the active HSRP device; in this case, switch C. This means that all traffic toward the gateway A.B.C.D will be forwarded to Site-2. This is the problem we will now correct. The output below shows that traffic from Site-1 is sent via the overlay. Not cool.
Could you please send me the output of
show hsrp brief
Once you have the output, you will find the IP address of the active device.
After you have the active router's IP, send me the output of:
show ip arp | i Active_Router_IP|Add
This will give you the MAC address of the active router.
After you have that MAC address, send me the output of:
show mac address-table address MAC_Address | b VLAN
FHRP isolation is the act of filtering HSRP, VRRP, or GLBP traffic so that it does not cross the overlay, thereby forcing localized FHRP elections. There are two parts to the filtering:
1. The election process should be contained within each site so that each site elects its own local active device.
2. The virtual MAC addresses will still be advertised, which would cause constant MAC moves (local site, remote site, local site, etc.), so they must be filtered too.
Point 1 is accomplished using a VLAN ACL (VACL) on the OTV edge devices to filter the respective traffic, depending on which FHRP protocol is used.
For point 2, to prevent the virtual MAC addresses from causing MAC moves and allow for a cleaner design, an OTV route map must be configured. This route map must match the virtual MAC addresses of the FHRP protocol in use.
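As a sketch of that MAC filtering on the OTV edge devices, the configuration might look like the following. It assumes HSRP version 1 and 2 virtual MAC ranges (0000.0c07.acxx and 0000.0c9f.fxxx) and an overlay interface named Overlay0; adjust the names and MAC ranges to your actual FHRP protocol.

```
! Hypothetical sketch: deny the HSRP virtual MACs, permit everything else
mac-list OTV_HSRP_VMAC_deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list OTV_HSRP_VMAC_deny seq 11 deny 0000.0c9f.f000 ffff.ffff.f000
mac-list OTV_HSRP_VMAC_deny seq 20 permit 0000.0000.0000 0000.0000.0000

route-map OTV_HSRP_filter permit 10
  match mac-list OTV_HSRP_VMAC_deny

! Attach the route map to the OTV control plane (IS-IS) so the
! virtual MACs are never advertised to the remote site
otv-isis default
  vpn Overlay0
    redistribute filter route-map OTV_HSRP_filter
```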
You have to create a MAC ACL and an extended IP ACL.
On the Nexus, it appears that IP and MAC filtering cannot be done simultaneously on Layer 2 interfaces (cf. "mac packet-classify"), so the best bet appears to be using VACLs on the Nexus, which is where most of us will likely be doing this sort of thing.
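For point 1, a VACL applied to the extended VLANs on the OTV edge device can drop the HSRP hellos before they reach the overlay. A minimal sketch, assuming HSRPv1 (UDP 1985 to 224.0.0.2; HSRPv2 uses 224.0.0.102) and VLAN 200 as the extended VLAN; the ACL and access-map names are examples:

```
! Hypothetical sketch of HSRP hello filtering on the OTV edge device
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985
  20 permit udp any 224.0.0.102/32 eq 1985
ip access-list ALL_IPs
  10 permit ip any any

vlan access-map HSRP_Localization 10
  match ip address HSRP_IP
  action drop
vlan access-map HSRP_Localization 20
  match ip address ALL_IPs
  action forward

! Apply to every VLAN extended across the overlay
vlan filter HSRP_Localization vlan-list 200
```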
Once you share your HSRP details, I can help you build the ACLs.
You can refer to the following link for further reading.
The current Cisco N7K OTV implementation requires separation between SVI routing and OTV encapsulation for a given VLAN. In addition, the Join interface can only be defined as a physical interface (or subinterface) or as a logical one (i.e. Layer 3 port channel or Layer 3 port channel subinterface).
Note: Support for loopback interfaces as OTV Join interfaces is planned for a future NX-OS release.
To meet these constraints, two VDCs will be deployed at the Delta N7Ks:
an OTV VDC dedicated to perform the OTV functionality
a Distr VDC used to provide SVI routing support.
We could either configure the OTV VDC 'on-a-stick' with regard to the Distr VDC, or connect the OTV VDC directly to the transport WAN. We chose the 'on-a-stick' design. One clear advantage of 'on-a-stick' is that once NX-OS no longer requires separation between SVI routing and OTV encapsulation for a given VLAN, Delta will easily be able to migrate to using just the Distr VDC for both SVI routing and OTV encapsulation. The only migration steps needed will be to move the OTV configuration from the OTV VDC to the Distr VDC and deactivate the OTV VDC. This migration would be transparent to the rest of the data center network.
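As an illustration, the OTV VDC side of the 'on-a-stick' design might look like the sketch below. All interface numbers, addresses, VLAN IDs, and the site identifier are hypothetical examples; since this thread mentions a unicast-only transport, the overlay uses the adjacency-server model rather than a multicast control group.

```
! Hypothetical sketch of the OTV VDC ('on-a-stick' toward the Distr VDC)
feature otv
otv site-identifier 0x1
otv site-vlan 99

interface Ethernet1/1
  description Join interface: routed link toward the Distr VDC
  ip address 10.1.1.1/30
  no shutdown

interface Ethernet1/2
  description Internal interface: L2 trunk carrying the extended VLANs
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 99,200
  no shutdown

interface Overlay0
  otv join-interface Ethernet1/1
  ! Unicast-only transport: one site acts as adjacency server...
  otv adjacency-server unicast-only
  ! ...while the other site points at it instead (example address):
  ! otv use-adjacency-server 10.1.1.1 unicast-only
  otv extend-vlan 200
  no shutdown
```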
Note: There is a small cost: the 'on-a-stick' design uses at least one extra pair of interfaces on each N7K.
The N7K also requires that the routed SVI traffic not be carried across the vPC peer link, so a separate Layer 3 link will be implemented between the vPC peers. This Layer 3 link can also be used for the vPC peer keepalives.
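A sketch of that separate Layer 3 link between the vPC peers is shown below. The interface number, addresses, and vPC domain ID are examples only, and the peer-keepalive is often carried in the management VRF instead of the default VRF:

```
! Hypothetical sketch on one vPC peer; mirror with swapped addresses on the other
interface Ethernet1/10
  description Dedicated L3 link to vPC peer (routing + peer-keepalive)
  no switchport
  ip address 10.1.2.1/30
  no shutdown

vpc domain 10
  peer-keepalive destination 10.1.2.2 source 10.1.2.1 vrf default
```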
The current OTV implementation on the Nexus 7000 enforces the separation between SVI routing and OTV encapsulation for a given VLAN. This separation can be achieved with the traditional workaround of having two separate network devices to perform these two functions.
An alternative, cleaner and less intrusive solution is proposed here by introducing the use of Virtual Device Contexts (VDCs) available with Nexus 7000 platforms. Two VDCs would be deployed: an OTV VDC dedicated to perform the OTV functionality and a Routing VDC used to provide SVI routing support.
Please refer to the following doc from Cisco:
Hi sachinga.hcl ,
I don't understand this very well.
I now have a dedicated OTV VDC and a Distribution VDC.
My OTV VDC is connected to the Distribution VDC, and my Distribution VDC has the SVIs and the routing for them.
Can you help me with this?
I am new to the Cisco Nexus platform and need information in order to implement OTV between data centers in an active-active setup. I don't know where to start with the hardware, software, and configuration requirements. Please guide me here; I have attached our existing OS and inventory information.
OS: n7000-s1-kickstart.5.2.3a.bin and n7000-s1-dk9.5.2.3a.bin
Data Center A Modules:
M1 10G: N7K-M132XP-12L
M1 1G: N7K-M148GT-11
Data Center B Modules:
F1 Fiber: N7K-F132XP-15
M1 1G: N7K-M148GT-11L
I would like to know the pros and cons of implementing OTV using the F1 and M1 modules above. I also need guidance and suggestions on how to proceed.
Thank you so much!
Let me review those documents and get back to you. In parallel, if you have any document about the pros and cons of the F1 and M1 modules, please share it with me.
It would probably be better if you could post your responses and questions to the original post you started on this subject, rather than using this thread.