Today, fabric sites typically rely on solutions such as MPLS/BGP to talk to each other: prefixes are exchanged between fabric sites using these technologies, with Border routers steering the traffic.
This document describes how to implement SD-Access Distributed Campus, which uses SD-Access Transit to interconnect multiple SD-Access fabric sites. The design follows the Cisco SD-Access model of running an independent Mapping System at each fabric site, so that sites operate as independent administrative domains.
The architecture requires independent control plane nodes at each fabric site, which provides resiliency and survivability per site.
A “Fabric Site” is a portion of the fabric that has its own set of Control Plane Nodes, Border Nodes, and Edge Nodes. Different levels of redundancy and scale can be designed per site by including local resources: DHCP, AAA, DNS, Internet, etc. A Fabric Site may cover a single physical location, multiple locations, or just a subset of a location:
Single Location - Branch, Campus, or Metro Campus
Multiple Locations - Metro Campus + Multiple Branches
Subset of a Location - Building or Area within a Campus
An SD-Access fabric domain is a hierarchical representation of fabric sites managed by Cisco DNAC. A fabric domain can consist of multiple fabric sites, each with its own devices, which delivers benefits such as scale, resiliency, and survivability. A fabric domain scales horizontally by keeping state-specific information within each site.
Multiple fabric sites in a single fabric domain are interconnected using a Transit.
The SD-Access Transit consists of control plane nodes that interconnect multiple fabric sites.
There are two types of Transits:
SD-Access Transit - Enables a native SD-Access (LISP, VXLAN, CTS) fabric, with a domain-wide Control Plane node for inter-site communication.
IP-Based Transit - Leverages a traditional IP-based (VRF-LITE, MPLS) network, which requires remapping of VRFs and SGTs between sites.
SD-Access Transit
Traffic between sites uses VXLAN+SGT encapsulation, driven by LISP.
The SD-Access Border performs a LISP lookup against the control plane node in the transit network and VXLAN-encapsulates the traffic to deliver it to the other site.
End-to-end policy is maintained using SGTs (Scalable Group Tags).
No performance-based routing is possible in the Transit network/WAN.
Fully automated, turnkey solution.
SD-Access Transit can be used when customers have multiple sites connected via dark fiber or DWDM links, or sites in the same metropolitan area.
Typical use cases
Consistent policy and end-to-end segmentation using VRFs and SGTs
Why Multiple Fabric Sites vs. a Single Fabric Site?
Advantages:
Smaller, isolated fault domains
Helps scale the number of endpoints
Cisco DNAC provides automation and a single view of the entire system
VNs and SGTs get pushed to all sites (consistent policy)
Local breakout at each site for Direct Internet Access (DIA)
Resiliency and scalability
A Distributed Campus is essentially a collection of fabric sites interconnected via a transit.
The picture below illustrates this.
The control plane node in each fabric site holds the prefixes of endpoints connected to Edge Nodes within that fabric site only, which means end hosts in each site are registered with the local control plane node. Any endpoint that is not registered with the local control plane node has to be reached via the Border Nodes connected to the Transit Site.
The control plane node in the transit site holds the summary state information for all fabric sites that it is connected to. This information is made available to the Transit control plane node by the Border Nodes that connect the fabric sites to the Transit Site. The Transit control plane must run on a dedicated node.
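As a quick sanity check, these registrations can be inspected on the Transit control plane node with standard LISP show commands. A minimal sketch on IOS-XE; the instance-id shown is a hypothetical example and will differ per deployment:

```
! On the Transit Control Plane node: list the sites/borders that have
! registered aggregate EID prefixes
show lisp site

! Inspect registrations for one overlay VN; instance-id 4099 is an
! example value only
show lisp instance-id 4099 ipv4 server
```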
When using Cisco DNAC to configure an SD-Access Distributed Campus with SD-Access Transit, we need to choose the Outside World (external border) or Anywhere (internal + external) border role when connecting to the SD-Access Transit. From Picture 1, we see that not every fabric site is connected to the Internet or the Data Center: the border connected to the Internet is the External Border, and the border connecting to the DC is the Internal Border.
We can have two control plane nodes in the SD-Access Transit. Today, Transit control plane nodes receive aggregate routes from the site borders at each fabric site using LISP. Transit Control Plane nodes can be physically located at different fabric sites or centralized in a headquarters or data center; the only requirement is underlay IP reachability from the site borders.
To achieve end-to-end segmentation in an SD-Access Distributed Campus environment, we need to account for the extra 50 bytes of VXLAN header in both the fabric and the transit by configuring jumbo MTU on all devices.
Alternatively, we can configure TCP MSS on Edge/Border nodes, since fragmentation is not supported in Cisco SD-Access today. TCP MSS has to be configured on the overlay SVIs on Edge Nodes and on the egress interfaces facing outside the fabric on Borders; this works for TCP applications in the fabric. Packets are sent with the MSS set on the CLI, so no other MTU setting is needed on the Cisco SD-Access Transit side. For example, if your transport supports a 1400-byte MTU, set the TCP MSS value to 1350 so that the 50-byte VXLAN header is accounted for. If UDP traffic above 1500-byte MTU must also be supported, TCP MSS will not help. If these caveats cannot be met, use IP Transit instead of SD-Access Transit.
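A minimal configuration sketch of the two options above, assuming IOS-XE Catalyst 9000 fabric nodes and the 1400-byte transport example; the interface names and VLAN number are placeholders only:

```
! Option 1: jumbo MTU end to end (Catalyst 9K global MTU)
system mtu 9100

! Option 2: clamp TCP MSS where jumbo MTU is not available
! On the Edge Node overlay SVI (VLAN number is an example):
interface Vlan1021
 ip tcp adjust-mss 1350

! On the Border egress interface facing outside the fabric:
interface TenGigabitEthernet1/0/1
 ip tcp adjust-mss 1350
```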
Supported Platforms for Border (as of Cisco DNAC 1.2.10)
| Cisco SD-Access Border Node | Cisco SD-Access Transit | IP Transit |
|---|---|---|
| C9K | YES | YES |
| ASR1K/ISR4K | YES | YES |
| C6K | NO | YES |
| N7K | NO | YES |
The Virtual Networks are preserved end-to-end as traffic traverses the Border Nodes from fabric site to fabric site via the Transit.
Policy is defined on a global basis and is relevant to the end-to-end fabric.
Policy enforcement is done by default at the Edge Nodes of the multi-fabric network.
Scalable Group Tags (SGTs) are preserved as traffic traverses the Border Nodes between fabric sites and the Transit.
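To confirm that SGTs and the group-based policy are actually in place at the enforcement point, standard CTS show commands can be used on an Edge Node. Illustrative only; the bindings and SGACLs returned depend on the policy defined in ISE/Cisco DNAC:

```
! SGACL policy matrix downloaded to the enforcement point
show cts role-based permissions

! IP-to-SGT bindings known to this node
show cts role-based sgt-map all
```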
In an SD-Access distributed deployment, you can have ISE PSNs dedicated to each fabric site. For larger deployments with more than two PSNs at a site, the PSNs can be centrally located in a data center behind a load balancer, with the load balancer VIP configured in the Cisco DNAC settings.
Each site has a WLC associated with its Control Plane.
Smaller locations in the same metro site may share a WLC.
Note: Mobility across Fabric Sites is not supported.
H1 behind E1 in Site 1 trying to reach H2 behind E1 in Site 2
H1 is trying to reach H2
E1 sends a map request to the CP node requesting H2.
As the CP in Fabric Site 1 knows only about prefixes in its own site, it sends a negative map reply to E1.
Based on the configuration of E1, traffic is sent to Border 1 (E1 picks the border on a per-flow basis; here we chose Border 1).
Based on the configuration on Border 1, it sends a map request to the Transit CP requesting H2.
As the Transit CP has information about all prefixes in the fabric domain, Border 1 receives a map reply from the Transit CP with Border 3 as the destination.
Traffic is forwarded from Border 1 to Border 3 using VXLAN encapsulation with SGTs.
Once Border 3 receives the traffic, based on its configuration it sends a map request to the CP node in its fabric; in this case, the Border and CP are on the same node.
Border 3 has the mapping information for the destination address, which is behind E1.
Traffic is forwarded to the Edge Node using VXLAN encapsulation with SGTs.
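The resolution above can be verified on Border 1 by checking its LISP map-cache after the map reply arrives; the entry for H2's prefix should point at Border 3's RLOC. A sketch with hypothetical values (instance-id 4099, H2 = 10.2.1.10):

```
! On Border 1: H2's EID should resolve to Border 3's RLOC
show lisp instance-id 4099 ipv4 map-cache 10.2.1.10

! On the site 2 CP (co-located with Border 3): confirm H2 is registered
show lisp instance-id 4099 ipv4 server
```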
H2 behind E1 in Fabric Site 2 trying to reach the Internet
H2 is trying to reach a prefix on the Internet.
E1 in Fabric Site 2 sends a map request to the CP node requesting the destination prefix.
The Border/CP node in Fabric Site 2 sends a negative map reply to E1, as the destination prefix is not registered in the CP node.
Based on the configuration on E1, it sends traffic to Border 3.
Border 3, upon receiving the traffic, sends a map request to the Transit CP for the destination prefix.
As the Transit CP node does not have the prefix registered, it sends a negative map reply to Border 3.
Based on the configuration of Border 3, it sends traffic to either Border 1 or Border 2.
Border 2, after receiving the traffic, sends a map request to the Transit CP node requesting the destination prefix.
The Transit CP sends a negative map reply, as it does not have the destination prefix registered in its database.
Once Border 2 receives the map reply, it uses native forwarding to send the traffic to the Internet router.
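Negative map replies leave negative map-cache entries behind, so this path can be confirmed the same way. A sketch, again with hypothetical names (instance-id 4099, a VRF called CORP_VN, 8.8.8.8 as the Internet destination):

```
! On E1 / Border 3: the Internet destination should appear as a
! negative map-cache entry rather than a registered EID
show lisp instance-id 4099 ipv4 map-cache

! On Border 2: after its own negative reply, traffic is forwarded
! natively using the routing table
show ip route vrf CORP_VN 8.8.8.8
```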
H1 behind E1 in Fabric Site 1 trying to reach a DC prefix behind Internal Border 2
H1 is trying to reach a prefix behind Internal Border 2 in Fabric Site 2.
E1 in Site 1 sends a map request to the CP node in Site 1 for the DC prefix behind Internal Border 2.
The CP node sends a negative map reply, as it does not have the destination prefix registered in its database.
Based on the configuration on E1, it sends traffic to either Border 1 or Border 2.
Border 1 or Border 2 sends a map request to the Transit CP for the destination prefix.
As the Transit CP has the destination prefix registered in its database, it sends a map reply back to Border 1 or Border 2 with Border 3's IP address.
Traffic is forwarded to Border 3 using VXLAN encapsulation; as Border 3 knows about the destination prefix, it sends the traffic to Internal Border 2.
Once Internal Border 2 receives the traffic, it forwards it to the destination in the DC.
Example HLDs
Use Case 1
Use Case 2