Can I Mix a Cisco Nexus 7000 and a Cisco ASR 1000 for OTV?
Mixing the two types of devices is not supported at this time when the devices will be placed within the same site.
However, using Cisco Nexus 7000s in one site and Cisco ASR 1000s at another site for OTV is fully supported. For
this scenario, please keep the separate scalability numbers in mind for the two different devices, because you will
have to account for the lowest common denominator.
Please check the link below; I believe it will answer your question.
I have pasted a diagram and a paragraph from the above URL below:
The use of VDCs must be introduced in this case, even if the SVIs are not defined on the DCI layer devices. The basic requirement for OTV to be functional is in fact that every OTV control packet originated by the Join interface of an OTV edge device must be received on the Join interface of all the other edge devices being part of the same OTV overlay.
Since OTV will be on a dedicated VDC and the DCI circuits will be on another Router or VDC, OTV adjacency will be formed by all OTV Edge devices.
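For context, a minimal OTV edge-device configuration on the dedicated OTV VDC looks something like the sketch below, assuming a multicast-enabled transport. The interface names, addresses, VLAN ranges, and multicast groups are illustrative placeholders, not taken from any specific deployment:

```
feature otv

otv site-identifier 0x1
otv site-vlan 10

interface Ethernet1/1
  description OTV join interface toward the DCI layer
  ip address 10.1.1.1/30
  ip igmp version 3
  no shutdown

interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown
```

The control-group address is what carries the OTV control packets between the join interfaces, which is why end-to-end multicast reachability across the DCI is the basic requirement described above.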
Please let me know if this answers your question.
Thank you for the reply, Mohamed.
In the document you mention, there is a point-to-point design with a collapsed aggregation/core (Figure 1-46). Is there a configuration example for that design anywhere?
I am unable to find any public-facing document with the specific configuration you are looking for, but please give me a day and I will publish the config from one of my deployments.
I have published this document.
Please let me know if you have any questions.
Hi Expert Team,
I have the following query. We have two data centers, each with two Nexus 7K switches, and on every switch we have configured a production VDC and an OTV VDC. Multiple VRFs are configured in both data centers under the production VDC. There are two point-to-point fiber links between the data centers that are part of VRF-A under the production VDC; one link has 10G bandwidth and the other has 1G. We have an L3 join interface from the OTV VDC to the production VDC in an internal VRF, and we need to run a dynamic routing protocol between the data centers for link failover.

My questions: do we need multiple join interfaces per VRF at both data centers, or just a single join interface, and in which VRF should it ideally be configured? Also, do we need port-channeling between the OTV VDCs within the same data center? Answers to these queries would be a great help. Thanks!
Answer to the first part of your question:
No, you do not need multiple join interfaces. There are two ways you can achieve what you are trying to do:
1. Extend the internal VRF between the two data centers and run a separate instance of the dynamic routing protocol for that VRF as well.
2. Place the join interface in the same VRF as VRF-A.
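As a sketch of option 1, a per-VRF routing instance on NX-OS can be configured along the lines below. The VRF name, interface, and addressing here are illustrative assumptions, not values from your deployment:

```
feature ospf

router ospf DCI
  vrf INTERNAL
    router-id 10.255.0.1

interface Ethernet2/1
  description DCI link carrying the internal VRF
  vrf member INTERNAL
  ip address 10.255.1.1/30
  ip router ospf DCI area 0.0.0.0
  no shutdown
```

With a matching instance configured at the other data center, the OSPF adjacency for the internal VRF forms across the DCI link, giving you the dynamic failover you described.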
Answer to the second part of your question:
No, you do not need a port-channel between the two OTV VDCs.
Hope this helps.
Thanks for the prompt reply. Can you please share any configuration example of a multi-VRF, dual-homed OTV setup?
I am unsure how the dynamic routing protocol adjacency would form on the join interface in a multi-VRF configuration.
Please review this document and let me know if that answers your question.
With the advent of VXLAN, and considering that it can be configured entirely at the virtual layer (no reconfiguration required on the physical networking equipment as long as the destinations are pingable), do you see a real requirement for OTV in a virtual DC1 and DC2 environment?
You are correct. If you use virtual data centers with VXLAN and you do not have VLANs extended to the physical network, then you have no requirement for OTV.
We are implementing OTV between two DCs, but the OTV adjacency does not come up.
One DC has Nexus 7Ks with SUP1, and the other DC has Nexus 7Ks with SUP2.
Could this mismatch cause problems, or should OTV work between devices with different supervisors?
OTV should come up even if the Nexus at one end is running a SUP1 and the other end is running a SUP2. If you are using OTV in multicast mode, here is something that could help with troubleshooting:
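A few NX-OS show commands that are useful when an OTV adjacency will not form in multicast mode. Run them on each OTV edge device; Overlay1 and Ethernet1/1 are placeholders for your actual overlay and join interface:

```
show otv overlay 1                     ! overlay state, control/data groups, join interface
show otv adjacency                     ! adjacencies formed with remote edge devices
show otv site                          ! site VLAN and site adjacency status
show ip igmp interface ethernet 1/1    ! confirm IGMPv3 is enabled on the join interface
show ip mroute                         ! on the transport, verify control-group state
```

In many cases the failure turns out to be in the multicast transport between the join interfaces rather than in the supervisor hardware.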
I would suggest revisiting the configs or reaching out to Cisco TAC to troubleshoot this in detail.