Introduction
OTV is a MAC-in-IP method that extends Layer 2 connectivity across a transport network infrastructure. OTV uses MAC address-based routing and IP-encapsulated forwarding across a transport network to provide support for applications that require Layer 2 adjacency, such as clusters and virtualization.
The core principles on which OTV operates are the use of a control protocol to advertise MAC address reachability information (instead of using data plane learning) and packet switching of IP encapsulated Layer 2 traffic (instead of using circuit switching) for data forwarding. These features are a significant departure from the core mechanics of traditional Layer 2 VPNs. In traditional Layer 2 VPNs, a static mesh of circuits is maintained among all devices in the VPN to enable flooding of traffic and source-based learning of MAC addresses. This full mesh of circuits is an unrestricted flood domain on which all traffic is forwarded. Maintaining this full mesh of circuits severely limits the scalability of existing Layer 2 VPN approaches. At the same time, the lack of a control plane limits the extensibility of current Layer 2 VPN solutions to properly address the requirements for extending LANs across data centers.
OTV Support
OTV uses Ethernet over Generic Routing Encapsulation (GRE) and adds an OTV shim to the header to encode VLAN information. The OTV encapsulation adds 42 bytes of overhead. OTV does not require the configuration of pseudowires or tunnels. The encapsulation of the packets sent to the overlay is performed dynamically based on a Layer 2 destination address lookup. OTV is supported on all M-Series modules on the Cisco Nexus 7000 Series. OTV is not supported on F1-Series modules.
Benefits of OTV
No effect on existing network design
- Transport agnostic: OTV is IP encapsulated and can therefore use any core capable of forwarding IP traffic. OTV therefore does not pose any requirements on the core transport.
- Transparent to the sites: OTV extensions do not affect the design or protocols of the Layer 2 sites that OTV interconnects. It is as transparent as a router connection to the Layer 2 domain and therefore does not affect the local spanning tree or topology.
Failure isolation and site independence
- Failure boundary preservation and site independence preservation: OTV does not rely on traffic flooding to propagate reachability information for MAC addresses. Instead, a control protocol is used to distribute such information. Thus, flooding of unknown traffic is suppressed on the OTV overlay; ARP traffic is forwarded only in a controlled manner, and broadcasts can be forwarded based on specific policies. Spanning tree Bridge Protocol Data Units (BPDUs) are not forwarded at all on the overlay. The result is failure containment comparable to that achieved using a Layer 3 boundary at the Layer 2 domain edge. Sites remain independent of each other, and failures do not propagate beyond the OTV edge device.
Optimized operations
- Single-touch site additions and removals: OTV is an overlay solution that needs to be deployed only at specific edge devices. Edge devices join a signaling group in a point-to-cloud fashion. Therefore, the addition or removal of an edge device does not affect or involve configuration of other edge devices on the overlay.
- Single protocol with no add-ons: Before OTV, capabilities such as multihoming, loop prevention, load balancing, and multipathing required additional protocols to be layered onto solutions such as Virtual Private LAN Service (VPLS). With OTV, all of these capabilities are included in a single control protocol and a single configuration.
Optimal bandwidth utilization, resiliency, and scalability
- Multipathing: When multihoming sites, OTV provides the capability to actively use multiple paths over multiple edge devices. This feature is crucial to keeping all edge devices active and thus optimizing the use of available bandwidth. The capability to use multipathing is also critical to supporting end-to-end multipathing when the Layer 2 sites are using Layer 2 multipathing locally. OTV is the only Layer 2 extension protocol that can provide transparent multipathing integration when virtual PortChannel (vPC) or Transparent Interconnection of Lots of Links (TRILL) technology is used at the Layer 2 sites being interconnected over an IP cloud. Since OTV is IP encapsulated, OTV also uses any multipathing available in the underlying IP core.
- Multipoint connectivity: OTV provides multipoint connectivity in an easy-to-manage point-to-cloud model. OTV uses the IP Multicast capabilities of the core to provide optimal multicast traffic replication to multiple sites and avoid head-end replication that leads to suboptimal bandwidth utilization.
- Transparent multihoming with built-in loop prevention: Multihoming does not require additional configuration because it is built into OTV along with all the necessary loop detection and suppression mechanisms. The loop prevention mechanisms in OTV prevent loops from forming on the overlay, and also prevent loops from being induced by sites when these are multihomed to the overlay. In the past, it was necessary to use complex extensions of spanning tree over Layer 2 VPNs or reduce the multihoming of the VPN to an active-standby model. With OTV, all edge devices are active while loops are prevented inherently in the protocol.
- Fast failover: Since OTV uses a control protocol based on an Interior Gateway Protocol (IGP), OTV benefits from the fast failover characteristics of equal-cost multipath (ECMP). Additionally, OTV can use bidirectional forwarding detection (BFD) extensions to provide failover between edge devices in less than 1 second.
Scalability
- Optimized and distributed state: OTV does not create nailed up tunnels; the only state maintained is that of a MAC-address routing table. As in any packet-switched model, no hard state is maintained, and therefore scalability is much better than that achieved in circuit-switched models. State is distributed and can be programmed in the hardware conditionally to allow larger numbers of MAC addresses to be handled by the overlay.
- Flexibility: The level of scalability possible with OTV is designed to allow the network architect to deploy OTV capabilities at the data center aggregation layer if necessary. The scalability limitations of circuit-switched solutions can prevent their deployment at the aggregation layer, because the number of aggregation devices is usually much greater than the number of endpoints that a circuit-switched solution can support with its full mesh of circuits. Having the flexibility to deploy the required LAN extensions at any layer can significantly improve the operational model of the data center.
- Reduced effect of ARP traffic: Scalability is improved by the capability of the OTV control plane to localize ARP traffic and therefore reduce the effect of ARP traffic on all hosts present on the extended LAN.
Transparent migration path
- Incremental deployment: Since OTV is agnostic to the core and transparent to the sites, it can be incrementally deployed over any existing topology without altering its network design. When vPC is being used as a data center interconnect (DCI) and there is a desire to migrate to OTV, OTV can simply be deployed over the existing vPC interconnect to bring the benefits of failure containment and improved scale. Alternatively, OTV can use the main IP core between data centers as its transport, and the vPC interconnect can be taken out of service later. In the latter scenario, VLANs can be migrated in phases from the vPC interconnect to the OTV overlay.
Prerequisites for OTV
OTV has the following prerequisites:
- Globally enable the OTV feature.
- Enable IGMPv3 on the join interfaces.
- Ensure connectivity for the VLANs to be extended to the OTV edge device.
- If you configure VDCs, install the Advanced Services license and enter the desired VDC.
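As a minimal sketch, the prerequisites above correspond to NX-OS configuration along these lines; the interface number is a placeholder (Ethernet2/1 is assumed to be the join interface facing the IP core):

```
feature otv                  ! globally enable the OTV feature

interface Ethernet2/1        ! join interface facing the IP core
  ip igmp version 3          ! IGMPv3 is required on join interfaces
  no shutdown
```

In a VDC deployment, install the Advanced Services license and switch to the desired VDC before entering these commands.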
Guidelines and Limitations for OTV
OTV has the following configuration guidelines and limitations:
- If the same device serves as the default gateway in a VLAN interface and the OTV edge device for the VLANs being extended, configure OTV on a device (VDC or switch) that is separate from the VLAN interfaces (SVIs).
- An overlay interface comes up only if its configuration is complete and the interface is enabled (no shutdown), and the join interface is in an up state.
- Configure the join interface and all Layer 3 interfaces that face the IP core between the OTV edge devices with the highest maximum transmission unit (MTU) size supported by the IP core. OTV sets the Don't Fragment (DF) bit in the IP header for all OTV control and data packets so the core cannot fragment these packets.
- Only one join interface can be specified per overlay. You can use one of the following methods:
Configure a single join interface that is shared across multiple overlays.
Configure a different join interface for each overlay, which increases OTV reliability.
- For higher resiliency, you can use a port channel, but it is not mandatory. There are no requirements for 1-Gigabit Ethernet versus 10-Gigabit Ethernet, or for dedicated versus shared mode.
- If your network includes a Cisco Nexus 1000V switch, ensure that the switch is running Release 4.0(4)SV1(3) or later. Otherwise, disable Address Resolution Protocol (ARP) and Neighbor Discovery (ND) suppression for OTV.
- The transport network must support Protocol Independent Multicast (PIM) sparse mode (any-source multicast, or ASM) or PIM-Bidir multicast traffic.
- OTV is compatible with a transport network configured only for IPv4. IPv6 is not supported.
- Do not enable PIM on the join interface.
- Do not configure OTV on an F-series module.
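The guidelines above can be brought together in a single configuration sketch. This is illustrative only: the interface number, IP addresses, multicast groups, and VLAN range are placeholders, and the example assumes a multicast-enabled IPv4 core running PIM sparse mode. Because the 42-byte OTV encapsulation turns a 1500-byte Ethernet frame into a 1542-byte IP packet with the DF bit set, the core-facing MTU must be at least 1542 bytes; a jumbo MTU is shown here:

```
interface Ethernet2/1
  mtu 9216                           ! highest MTU supported by the IP core; OTV sets the DF bit
  ip address 192.0.2.1/30
  ip igmp version 3                  ! IGMPv3 on the join interface; do not enable PIM here
  no shutdown

interface Overlay1
  otv join-interface Ethernet2/1     ! only one join interface per overlay
  otv control-group 239.1.1.1        ! ASM group used for OTV control traffic in the core
  otv data-group 232.1.1.0/28        ! SSM range used for extended multicast data traffic
  otv extend-vlan 100-150            ! VLANs carried over the overlay
  no shutdown                        ! overlay comes up only once configuration is complete
```

The overlay interface stays down until the join interface, control group, data group, and extended VLANs are all configured and the interface is enabled.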
Related Information
Troubleshooting OTV Adjacency
Troubleshooting ARP issues across OTV