VXLAN EVPN Integration with LISP
Host Move Detection in a VXLAN EVPN Fabric
In a VXLAN EVPN fabric, host routes and MAC address information are distributed through the MP-BGP EVPN control plane, which means that the fabric itself performs host detection. The LISP site gateways use these host routes to trigger LISP mobility encapsulation and decapsulation. When integrated with a VXLAN fabric, LISP provides ingress route optimization for traffic from the clients to the data center (Figure 1).
Figure 1. LISP Functional Roles in a VXLAN Fabric
For detailed configuration of VXLAN using the EVPN control plane, please see the following white paper: http://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/guide-c07-734107.html
When a virtual machine or host attaches to a leaf or top-of-rack (ToR) switch, the Layer 2 information is transported to its peers in the fabric using MP-BGP. This approach helps ensure connectivity between hosts within a data center fabric (Figure 2).
Figure 2. Host 1 in VLAN 1000 Attaches to Leaf or ToR Switch 1 and Is Associated with VNI 5000
When the virtual machine or host moves from one leaf switch to another, the new leaf switch detects that a virtual machine has moved behind it by snooping on Dynamic Host Configuration Protocol (DHCP) or ARP packets. It populates the reachability information in MP-BGP and advertises the updated MAC address route to its peers with an updated sequence number (Figure 3).
Figure 3. Host 1 Moves from Leaf or ToR Switch 1 to Leaf or ToR Switch 3
When the original leaf or ToR switch receives the route update with the modified sequence number, it sends a withdraw message for the stale reachability information (Figure 4).
Figure 4. BGP Control Plane: Old Route Withdrawn from Leaf or ToR Switch 1
Host Mobility across VXLAN EVPN Fabrics
When the leaf or ToR switch detects a host movement across data centers, it injects that host route into the MP-BGP EVPN control plane with an updated sequence number. The sequence number is a mobility community attribute that represents the state of mobility. It increments every time the server moves from one location to another. This sequence number attribute has to be carried to the original leaf or ToR switch from which the host moved, because it needs to withdraw that particular host route from BGP. The host route withdrawal happens only when the leaf or ToR switch receives a route with an updated sequence number. LISP currently cannot carry the mobility community attribute across the data center through the WAN.
To help LISP achieve mobility semantics across VXLAN EVPN fabrics, you need to establish an external BGP (eBGP) relationship between the data centers. This eBGP relationship carries the mobility community attribute in BGP EVPN across the data center sites so that stale reachability information can be withdrawn (Figure 5).
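As a sketch of this eBGP EVPN relationship on a Cisco NX-OS border switch: the AS numbers follow the example (65001 and 65002), while the peer address and loopback interface are illustrative assumptions; verify the exact syntax against your NX-OS release.

```
router bgp 65001
  neighbor 10.99.99.2 remote-as 65002
    ! Peer address and source interface are illustrative values
    update-source loopback0
    ebgp-multihop 5
    address-family l2vpn evpn
      ! Extended communities carry the mobility sequence number attribute
      send-community extended
```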
Figure 5. Host Mobility across Data Centers with LISP
In Figure 5:
1) The end system or server, after moving to a new location, sends DHCP and ARP packets to join the new network.
2) The leaf or ToR switch detects the new host and redistributes the IP address and MAC reachability information in the MP-BGP EVPN control plane with an updated sequence number. This sequence number attribute is carried across the data centers using an eBGP relationship between AS 65001 and 65002. When the original leaf or ToR switch receives the route information with an updated sequence number, it withdraws its original route from BGP.
When the host first comes online (before moving across data centers), the sequence number attribute is 0. This value indicates that this is the first time the host has come online in any data center (Figure 6).
Figure 6. Host Mobility with Sequence Number “0”
After the host moves from one location to another, the sequence number is updated to 1, which triggers the route update through the eBGP connection and the route withdrawal from the original leaf or ToR switch (Figure 7).
Figure 7. Host Mobility with Sequence Number “1”
3) When the LISP site gateway (also running MP-BGP EVPN in the fabric) detects this new host, it sends a map-register message to the map-system database to register the new IP address in its own data center (BGP AS 65002).
4) When the map system receives the map-register message from BGP AS 65002, it sends a map-notify message to the old LISP site gateways, notifying them that the host has moved out of their data center. This message helps ensure that those LISP site gateways install a Null 0 route for that prefix in their routing tables. The Null 0 prefix indicates that the host is now in a location remote to that data center.
Figure 8. LISP Map System Updates
5) When the clients in the remote branch sites try to send traffic to the LISP site gateways at which the host was present before the mobility event (BGP AS 65001), the site gateways see that the host is reachable through a Null 0 route. This event triggers a solicit-map-request (SMR) from the site gateways to the LISP-enabled router in the branch site, asking it to update its database.
6) The branch router then sends a map request to the mapping system asking for the new location of the host. This request is relayed to the LISP site gateways to which the host has moved (BGP AS 65002).
7) The LISP site gateways in BGP AS 65002 unicast a map reply to the LISP-enabled branch router asking it to update its database with the new location.
Now data traffic starts to flow to the correct data center (BGP AS 65002).
Functional Roles and Configuration
Figure 9 shows the topology of the LISP solution.
Figure 9. Topology
*In this topology, the eBGP EVPN relationship between the two data centers runs over a Layer 3 data center interconnect (DCI). The Layer 3 connection between the data centers is highlighted with green dotted lines in the topology above.
This topology uses one of the border spine switches as a BGP route reflector to illustrate what is supported; it is not necessarily the ideal topology for customer deployments.
Hardware and Software Details
Table 1 summarizes the hardware and software versions used in the configuration example.
Table 1. Hardware and Software Used in Configuration Example
Border spine and border leaf: Cisco Nexus 7000 Series and 7700 platform with F3 line card, running Cisco NX-OS Software Release 7.2
Map server and map resolver: Cisco ASR 1000 Series Aggregation Services Routers, running Cisco IOS® XE Software Release 3.13.2
Border Spine Configuration in Data Center 1 (BGP AS 65001)
This section summarizes the steps for configuring LISP for hand-off from VXLAN on the border spine or border leaf switch.
Step 1. Enable the LISP control plane.
Step 2. Configure the LISP map-server and map-resolver reachability.
Step 3. Configure the LISP hand-off for the tenant VRF instances.
The following example shows a configuration for two tenant VRF instances.
* If you need to map additional EID (IP address) subnets to the VRF instance, you must create an additional dynamic-EID entry for each subnet.
The register-route-notifications tag should match the local BGP AS number, because the LISP xTR uses that tag to import the host route into the LISP table and perform LISP encapsulation and decapsulation.
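Along the lines of the steps above, a hedged sketch of the hand-off configuration for one tenant VRF on NX-OS follows (repeat per tenant with a unique instance ID). The VRF name, EID subnet, RLOC address, map-server/map-resolver address, and authentication key are illustrative assumptions; verify command syntax against your NX-OS release.

```
feature lisp

vrf context tenant-1
  ! ITR/ETR role for this tenant; RLOC space is reachable via the default VRF
  ip lisp itr-etr
  ip lisp locator-vrf default
  ! Instance ID keeps this tenant's EID space segregated (see LCAF discussion below)
  lisp instance-id 1000
  ip lisp itr map-resolver 10.100.100.1
  ip lisp etr map-server 10.100.100.1 key lisp-secret
  ! Dynamic-EID entry for the mobile host subnet
  lisp dynamic-eid TENANT1-SUBNET
    database-mapping 172.16.10.0/24 192.168.0.1 priority 1 weight 50
    ! Tag must match the local BGP AS (65001 in this example)
    register-route-notifications tag 65001
```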
The LISP instance ID provides a means of maintaining unique address spaces in the control and data plane. Instance IDs are numerical tags defined in the LISP canonical address format (LCAF). The instance ID has been added to LISP to support virtualization.
When multiple organizations within a LISP site use private addresses as EID prefixes, their address spaces must remain segregated to prevent address duplication. An instance ID in the address encoding can be used to create multiple segmented VPNs within a LISP site in which you want to continue using EID-prefix-based subnets. The LISP instance ID is currently supported in LISP ingress and egress tunnel routers (ITRs and ETRs), the map server (MS), and the map resolver (MR).
The LISP locator VRF associates a router's LISP instantiation with the VRF table through which the routing locator (RLOC) address space is reachable.
Border Leaf Configuration in Data Center 2 (BGP AS 65002)
The border leaf configuration is the same as the border spine configuration described above.
The same configuration can be replicated on the other border spine in data center 1 (BGP AS 65001) and the other border leaf in data center 2 (BGP AS 65002).
LISP Map-System Database Configuration
Step 1. Configure the map server and map resolver on the switch.
The map server and map resolver can run on the same device or on separate devices.
The scenario here uses an ASR 1000 Series router as the map server and map resolver.
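A hedged sketch of the map-server and map-resolver configuration on an IOS XE router follows; the site name, authentication key, EID prefix, and instance ID are illustrative assumptions that must match the values registered by the site gateways.

```
router lisp
 ! One site entry per data center; key must match the ETRs' map-server key
 site DC-SITE
  authentication-key lisp-secret
  ! Accept registrations for host routes under this EID prefix
  eid-prefix instance-id 1000 172.16.10.0/24 accept-more-specifics
  exit
 ipv4 map-server
 ipv4 map-resolver
 exit
```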
Branch Site Configuration
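As a sketch, assuming an IOS XE branch router acting as the LISP xTR, with branch EID subnet 10.1.1.0/24, RLOC 172.31.0.1, and the map server/map resolver at 10.100.100.1 (all illustrative values):

```
router lisp
 ! Branch EID subnet mapped to this router's RLOC
 database-mapping 10.1.1.0/24 172.31.0.1 priority 1 weight 100
 ! ITR role: resolve destination EIDs via the map resolver
 ipv4 itr map-resolver 10.100.100.1
 ipv4 itr
 ! ETR role: register the branch EID subnet with the map server
 ipv4 etr map-server 10.100.100.1 key lisp-secret
 ipv4 etr
 exit
```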
To check the EIDs (host IP addresses) learned on the LISP site gateway on a Cisco Nexus 7000 Series or 7700 platform switch, use the command shown here.
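As an illustration (the VRF name is an assumption, and the exact command syntax should be verified against the NX-OS LISP command reference for your release):

```
show ip lisp dynamic-eid summary vrf tenant-1
```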
To check the LISP site registrations and map-cache entries on the map server, use the commands shown here.
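On an IOS XE map server, commands along these lines display the registered sites and resolved mappings (a sketch; verify against your IOS XE release):

```
show lisp site
show lisp site detail
show ip lisp map-cache
```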
LISP is easy to configure (with just a few commands, as shown in the preceding configurations), and it provides an effective way to resolve the ingress route optimization challenges that result from workload mobility across data centers. The Cisco Nexus 7000 Series and 7700 platform switches offer comprehensive feature sets that can be used to implement the VXLAN-to-LISP solution discussed in this document using the F3 line cards.
F3 line cards support multiple data-plane encapsulations in hardware along with the associated control-plane protocols. VXLAN encapsulation is implemented in hardware on the southbound side, and LISP is implemented in hardware on the northbound side, making the Cisco Nexus 7000 Series and 7700 platform with F3 line cards an excellent fit for this solution.