on 06-26-2014 10:33 AM
Cisco continues to recommend back-to-back cabled nV Edge solutions, as this is the most reliable configuration. However, when the two chassis of an nV Edge system must be geographically separated, this document provides recommendations for that scenario. Note that this document does not cover nV Edge deployment itself; for that, please see the ASR9000 XR nV Edge Deployment Guide.
[TOC:faq]
ASR9K nV Edge (cluster) systems are formed by connecting two chassis back-to-back using both control links and data links. However, if the chassis must be geographically separated, an L2 or L3 "cloud" can be inserted between the systems and used to extend the EOBC functionality.
NOTE: While there are many ways to "extend" the EOBC via L2 or L3 clouds, this document discusses only an L2 EoMPLS PW solution, as this is the officially recommended method.
Cisco recommends an EoMPLS PW (Ethernet over MPLS Pseudowire) cloud network for distance-separated nV Edge systems.
Figure 3 - EoMPLS PW configuration
Before answering this question, it is important that the reader understand that both the Control Links and Data Links have minimum operational requirements. In other words, the L2 cloud between Rack 0 and Rack 1 must be stable enough to (a) meet packet delay requirements, (b) guarantee packet delivery, and (c) not introduce stray traffic, packets, or noise that could disrupt control plane communication.
It is for these reasons that Cisco recommends that the Control Link and Data Link traffic be tunneled from Rack 0 to Rack 1.
Using Figure 3 above, consider the following:
This section details the network requirements of both Control Links and Data Links. In other words, these are the conditions that the network (between cluster systems) must meet in order to make the cluster systems "believe" they are connected back-to-back.
These are the message intervals and timeout values that need to be preserved/supported in the L2 cloud:
The following requirements apply only to Control Link traffic:
Before deploying the Pseudowire connections, note that very specific wiring patterns are required for the EOBC (control plane) connections - see Figure 4 below:
Figure 4 - Cluster EOBC Physical Wiring Requirements
When creating Pseudowire mappings, always ensure that the mappings preserve the physical wiring requirements (as depicted in Figure 5 below):
Figure 5 - Pseudowire Mapping Supporting Physical Wiring Requirements
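To make this concrete, the excerpt below (taken from the full configuration example in this document) shows how the crossed EOBC wiring is preserved by the pseudowire mapping: on PE 0, GigabitEthernet0/0/0/17 is attached to pw-id 2, while on PE 1 the same pw-id 2 terminates on GigabitEthernet0/0/0/19, so the pseudowire itself implements the cross-over between the two racks' EOBC ports.

```
! PE 0: EOBC port Gi0/0/0/17 attaches to pw-id 2
l2vpn
 xconnect group x2
  p2p 1
   interface GigabitEthernet0/0/0/17
   neighbor ipv4 2.2.2.2 pw-id 2

! PE 1: the same pw-id 2 terminates on Gi0/0/0/19,
! preserving the crossed EOBC wiring pattern
l2vpn
 xconnect group x2
  p2p 1
   interface GigabitEthernet0/0/0/19
   neighbor ipv4 1.1.1.1 pw-id 2
```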
The following serves as a simple configuration example of the recommended EoMPLS PW EOBC L2 network.
The following is a physical representation. For a logical representation, please refer to Figure 3 above.
PE 0

```
hostname pe0
!
interface Loopback0
 ipv4 address 1.1.1.1 255.255.255.255
!
interface GigabitEthernet0/0/0/16
 l2transport
 !
!
interface GigabitEthernet0/0/0/17
 l2transport
 !
!
interface GigabitEthernet0/0/0/18
 l2transport
 !
!
interface GigabitEthernet0/0/0/19
 l2transport
 !
!
interface TenGigE0/0/1/0
 mtu 1600
 ipv4 address 17.1.1.1 255.255.255.0
!
interface TenGigE0/0/1/1
 mtu 1600
 ipv4 address 21.1.1.1 255.255.255.0
!
interface TenGigE0/0/1/2
 l2transport
 !
!
interface TenGigE0/0/1/3
 l2transport
 !
!
interface TenGigE0/0/2/0
 mtu 1600
 ipv4 address 31.1.1.1 255.255.255.0
!
interface TenGigE0/0/2/1
 mtu 1600
 ipv4 address 41.1.1.1 255.255.255.0
!
router ospf 100
 router-id 1.1.1.1
 area 0
  interface Loopback0
  !
  interface TenGigE0/0/1/0
  !
  interface TenGigE0/0/1/1
  !
  interface TenGigE0/0/2/0
  !
  interface TenGigE0/0/2/1
  !
 !
!
l2vpn
 xconnect group x1
  p2p 1
   interface GigabitEthernet0/0/0/16
   neighbor ipv4 2.2.2.2 pw-id 1
   !
  !
 !
 xconnect group x2
  p2p 1
   interface GigabitEthernet0/0/0/17
   neighbor ipv4 2.2.2.2 pw-id 2
   !
  !
 !
 xconnect group x3
  p2p 1
   interface GigabitEthernet0/0/0/18
   neighbor ipv4 2.2.2.2 pw-id 3
   !
  !
 !
 xconnect group x4
  p2p 1
   interface GigabitEthernet0/0/0/19
   neighbor ipv4 2.2.2.2 pw-id 4
   !
  !
 !
 xconnect group y5
  p2p 1
   interface TenGigE0/0/1/2
   neighbor ipv4 2.2.2.2 pw-id 5
   !
  !
 !
 xconnect group y6
  p2p 1
   interface TenGigE0/0/1/3
   neighbor ipv4 2.2.2.2 pw-id 6
   !
  !
 !
!
mpls ldp
 router-id 1.1.1.1
 interface TenGigE0/0/1/0
 !
 interface TenGigE0/0/1/1
 !
 interface TenGigE0/0/2/0
 !
 interface TenGigE0/0/2/1
 !
!
end
```

PE 1

```
hostname pe1
!
interface Loopback0
 ipv4 address 2.2.2.2 255.255.255.255
!
interface GigabitEthernet0/0/0/16
 l2transport
 !
!
interface GigabitEthernet0/0/0/17
 l2transport
 !
!
interface GigabitEthernet0/0/0/18
 l2transport
 !
!
interface GigabitEthernet0/0/0/19
 l2transport
 !
!
interface TenGigE0/0/1/0
 mtu 1600
 ipv4 address 17.1.1.2 255.255.255.0
!
interface TenGigE0/0/1/1
 mtu 1600
 ipv4 address 21.1.1.2 255.255.255.0
!
interface TenGigE0/0/1/2
 l2transport
 !
!
interface TenGigE0/0/1/3
 l2transport
 !
!
interface TenGigE0/0/2/0
 mtu 1600
 ipv4 address 31.1.1.2 255.255.255.0
!
interface TenGigE0/0/2/1
 mtu 1600
 ipv4 address 41.1.1.2 255.255.255.0
!
router ospf 100
 router-id 2.2.2.2
 area 0
  interface Loopback0
  !
  interface TenGigE0/0/1/0
  !
  interface TenGigE0/0/1/1
  !
  interface TenGigE0/0/2/0
  !
  interface TenGigE0/0/2/1
  !
 !
!
l2vpn
 xconnect group x1
  p2p 1
   interface GigabitEthernet0/0/0/16
   neighbor ipv4 1.1.1.1 pw-id 1
   !
  !
 !
 xconnect group x2
  p2p 1
   interface GigabitEthernet0/0/0/19
   neighbor ipv4 1.1.1.1 pw-id 2
   !
  !
 !
 xconnect group x3
  p2p 1
   interface GigabitEthernet0/0/0/18
   neighbor ipv4 1.1.1.1 pw-id 3
   !
  !
 !
 xconnect group x4
  p2p 1
   interface GigabitEthernet0/0/0/17
   neighbor ipv4 1.1.1.1 pw-id 4
   !
  !
 !
 xconnect group y5
  p2p 1
   interface TenGigE0/0/1/2
   neighbor ipv4 1.1.1.1 pw-id 5
   !
  !
 !
 xconnect group y6
  p2p 1
   interface TenGigE0/0/1/3
   neighbor ipv4 1.1.1.1 pw-id 6
   !
  !
 !
!
mpls ldp
 router-id 2.2.2.2
 interface TenGigE0/0/1/0
 !
 interface TenGigE0/0/1/1
 !
 interface TenGigE0/0/2/0
 !
 interface TenGigE0/0/2/1
 !
!
end
```
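Once both PEs are configured, the pseudowires can be sanity-checked before bringing up the cluster links. A minimal check (exact output format varies by XR release):

```
! Verify LDP sessions are established between the PE loopbacks
show mpls ldp neighbor

! Verify all six xconnects report state UP
show l2vpn xconnect
```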
Troubleshooting section coming soon ...