
IS-IS and BGP domains in SD-Access

I have 2 questions about the following setup:

 

- Integrate fusion devices with the legacy network by static routing (default + a few specifics redistributed into IS-IS)

- The legacy core switches just have a single route back to the fusion devices covering all SD-Access address space

- Fusion Devices running IS-IS in a single domain with Control Plane / Border (CP|B) and Edges

- Fusion Devices running VRF Lite BGP with CP|B to exchange all per VRF routes

- Fusion Devices also running VRF Lite BGP with each other

 

Now, this is a setup that works well, and I control the injection of routes into the underlay from the fusion devices. It also gives full reachability of the underlay without the mutual IS-IS-to-BGP redistribution on the border nodes that is explained in every example I have seen so far.
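For illustration, the fusion-side route injection looks roughly like this (the prefixes, next hop, and names below are placeholders, not the real config):

! Static default + a few specifics towards the legacy core (placeholder next hop)
ip route 0.0.0.0 0.0.0.0 192.0.2.1
ip route 203.0.113.0 255.255.255.0 192.0.2.1
!
! Redistribute only those statics into the IS-IS underlay, filtered by prefix-list
router isis
 redistribute static ip route-map STATIC-TO-ISIS
!
ip prefix-list LEGACY-ROUTES seq 10 permit 0.0.0.0/0
ip prefix-list LEGACY-ROUTES seq 20 permit 203.0.113.0/24
!
route-map STATIC-TO-ISIS permit 10
 match ip address prefix-list LEGACY-ROUTES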

 

Question 1

 

Is there anything in this setup that could cause a problem or be unsupported by TAC, i.e. specifically running IS-IS as well as BGP between the fusion and borders, but with no mutual redistribution on the border, which keeps it simple for me? The two domains overlap basically because I started with IS-IS everywhere to get basic connectivity and never removed it from the link between the fusion and borders.

 

Question 2

 

All the examples I see use VRF-Lite between BGP speakers, which creates a lot of peerings. Is it OK to use mBGP on all those peerings instead? (A rough sketch of both styles follows the list below.)

 

- Fusion to Fusion (iBGP)   -  I have tested mBGP with MPLS encap and it works fine

- Fusion to Border (eBGP)  - I am hesitant to change the default configuration pushed by DNAC, so I am still using VRF-Lite here

- Border to Border (iBGP)  -  I have tested mBGP with MPLS encap and it works fine
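To make the contrast concrete, here is a rough sketch of the two peering styles (AS numbers, VRF names, and addresses are invented for illustration):

! VRF-Lite style: one IPv4 eBGP session per VN over its own handoff VLAN
router bgp 65001
 address-family ipv4 vrf CAMPUS
  neighbor 10.1.1.2 remote-as 65002
  neighbor 10.1.1.2 activate
 address-family ipv4 vrf GUEST
  neighbor 10.1.2.2 remote-as 65002
  neighbor 10.1.2.2 activate
!
! mBGP style: a single session whose VPNv4 AF carries all VRFs (MPLS encap)
router bgp 65001
 neighbor 10.9.9.2 remote-as 65001
 address-family vpnv4
  neighbor 10.9.9.2 activate
  neighbor 10.9.9.2 send-community extended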

 

thanks

 

Wojtek

 

 


jedolphi (Cisco Employee) - accepted solution

Hello,

Re Q1, you can use whatever routing you want between the fusion and the legacy network. The fusion is not part of SDA and thus is not constrained by SDA requirements. You can run ISIS between the fusion and SDA borders if you wish, but BGP is recommended as it gives better route-control tools, e.g. route filters, prefix lists, aggregate-address, etc. It is also consistent with the per-VRF eBGP configuration between the borders and the fusion. Up to you though; either scenario (BGP in the GRT or ISIS in the GRT) is supported.
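For illustration, the kind of per-VRF route control this gives on the fusion (all names, prefixes, and addresses below are hypothetical):

! e.g. on the fusion, per VRF: aggregate the shared-services block and
! filter what is advertised to the border with a prefix-list
router bgp 65002
 address-family ipv4 vrf CAMPUS
  aggregate-address 10.200.0.0 255.255.0.0 summary-only
  neighbor 10.1.1.1 prefix-list TO-BORDER out
!
ip prefix-list TO-BORDER seq 10 permit 0.0.0.0/0
ip prefix-list TO-BORDER seq 20 permit 10.200.0.0/16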

Re Q2, fusion to fusion, you can run whatever flavour of BGP you want; since it is not part of SDA, it is just everyday standard routing design. Fusion-to-border VPNv4 address-family (AF) and border-to-border VPNv4 AF are not supported, as they trigger some sub-routines in IOS-XE that do not work well with SD-Access. Border-to-fusion and border-to-border must be per-VRF BGP. In the first half of 2020 we should have some additional features coming to SDA that will eliminate much (not all) of this manual per-VRF BGP configuration.

FINAL NOTE: Please test all reasonable fail-over/convergence scenarios in your design before going full production, e.g. reboot borders, fail links, etc.

HTH, Jerome

Thank you very much, that is very useful. Just to clarify: I run both IS-IS and eBGP between the fusion and the border at the moment, and I don't have redistribution between BGP and IS-IS at the border.

 

Regarding Q2 - although I have tested VPNv4 BGP with MPLS encap between the borders and failover worked fine, I will remove it for now and only keep VPNv4 BGP between the fusion devices to avoid being "not supported".

 

DNAC added the BGP config to the borders using VLANs in the 3000 range towards the fusion routers. This automatic config didn't include any per-VRF iBGP peering with the 2nd border, so my question is: if I put extra BGP configuration on the border to provide better resilience, am I risking that DNAC will at some point overwrite it because it isn't part of the original border provisioning?

 

improvement 1 - manually add iBGP peerings between the borders per VRF (roughly sketched below)

improvement 2 - add an additional "cross" link from each border to the 2nd fusion router and manually configure eBGP peerings per VRF
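For improvement 1, the per-VRF iBGP would look something like this on each border (peer addresses, AS number, and VN names are placeholders):

router bgp 65001
 address-family ipv4 vrf CAMPUS
  neighbor 10.255.1.2 remote-as 65001
  neighbor 10.255.1.2 activate
 address-family ipv4 vrf GUEST
  neighbor 10.255.2.2 remote-as 65001
  neighbor 10.255.2.2 activate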

 

thanks

 

 

 

 

OK sure. Another alternative is to configure default-information originate on the borders in IS-IS in the GRT; that way you don't have to redistribute BGP into ISIS on the borders, and you can also eliminate ISIS between the borders and the fusion.
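In IS-IS that is just (assuming the GRT IS-IS process is already in place):

router isis
 default-information originate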

Yes we know VPNV4 AF works fine in a lab, but it does create problems when you scale out. We will have a better way to solve per-VRF routing next year.

Manual iBGP between borders is fully supported, no risk; it is done regularly in many deployments. iBGP is not automated because it will no longer be required next year, when we have the proper solution that will be DNAC-automated and will not require iBGP. Also, if your borders are both internal+external, please note CSCvm77399 when doing iBGP. Best to make the borders external-only if possible.

 

 

Thanks for that - is there any group or distribution list I could sign up to for updates about those developments?

Do you know if there is any formal Cisco doc showing border failover designs and recommendations for that, i.e. iBGP between borders and the concerns about the bug you mentioned, or maybe 2 x eBGP sessions to the fusion routers with manual per-VRF BGP?

 

The default setup I had, with a single link to the fusion from each border and no iBGP, didn't survive the failover testing, so I put in a workaround with an EEM script: if the border loses the link to the fusion, it isolates itself to avoid blackholing traffic. I may keep this workaround instead of risking the known bug with iBGP, and wait for further developments that fix the situation.

 

thank you again for your help

No problems :)

Not aware of any border f/o documents currently. I have one internally, and it will be presented at Cisco Live in March 2020. If you would like access to the current internal doc then please reach out to your Cisco SE or AM for their help. If you point them to me, then I in turn can provide them the document. Or, if you are a partner and you have Sales Connect access, please private message me with your details and I'll share the link.

New developments are announced typically in 2 places:

1. Release notes, https://www.cisco.com/c/en/us/support/cloud-systems-management/dna-center/products-release-notes-list.html

2. Cisco Live presentations, www.ciscolive.com (see the latest SD-Access presentations, currently San Diego June 2019; start with BRKCRS-2810, which contains an index of the mainstream SD-Access content)

Hi,

 

I just wanted to get some clarifications on this as well if possible as I'm working on a similar design.

 

We currently have 2 x fabric borders and 2 x fusion switches. The fabric borders will be connected to each other and to each fusion switch using trunk links. For the underlay, I plan to configure a separate VLAN over each fabric border to fusion switch trunk link (4 in total) and use those to establish eBGP peerings. There will also be a single VLAN on the border-to-border trunk link that will be used just for the IS-IS adjacency. We will then use DNAC border handoff automation over the fabric border to fusion switch trunks for each VN (using auto-provisioned VLANs 3001+).
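For the underlay piece, each of the four peering VLANs would look something like this (the VLAN ID, addresses, and AS numbers are placeholders):

vlan 100
!
interface Vlan100
 description Underlay eBGP peering to fusion-1
 ip address 10.0.100.1 255.255.255.252
!
router bgp 65001
 neighbor 10.0.100.2 remote-as 65002
 address-family ipv4
  neighbor 10.0.100.2 activate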

 

In this scenario, as we have dual links between each fabric border and each fusion switch, is there a need to configure an iBGP adjacency between the fabric borders for both the underlay and each overlay VN? My understanding is that iBGP is only required in the event of a border-to-fusion-switch uplink failure.

 

I have also read the SDA deployment guide and found it very confusing. The reference topology is the same as I described above; however, the example BGP configuration provided in the document doesn't really match the topology.

 

Thanks

 

For me, the only thing that may require per-VRF iBGP between the borders is if your uplinks to the two different fusion devices are patched into a single linecard on the border, because that linecard would be a SPOF and the only re-routing option would be via the alternative border.

 

Personally, I wouldn't bother with iBGP between the borders if you don't have this SPOF situation, because, as mentioned earlier in this thread, there are plans to address this scenario (re-routing via the alternative border) without iBGP next year.

 

 

 

OK, that makes sense, thank you for the response.

 

One question:

 

1) Do you run IS-IS between your fabric borders? If so, is this via a dedicated routed link or via a trunk link with SVIs?

 

Yeah, I run ISIS via a dedicated layer 3 port-channel between the borders.

Hi @wojtek.siodelski ,

 

I'm facing a similar problem in my topology: traffic blackholing on one of my border devices in case of a link failure with the fusion device.

 

I was thinking of deploying iBGP configuration between the border nodes, but after reading your comment I would prefer, like you, to implement the EEM script to work around this problem.

 

Could you share your EEM script, if you don't mind, so that I could test it in my lab?

 

Thanks in advance.

Best regards

Hi,

This is a copy of the EEM script that I am currently using in a live deployment.

 

The fabric border nodes are connected to two shared services nodes, each with two uplinks. The following EEM script is applied to both border nodes and tracks the two shared services uplinks. In the event that both uplinks fail, the local Loopback0 interface is shut down, which withdraws the associated IP address from IS-IS and forces the fabric edge nodes to send unknown external traffic to the healthy border node. Once the uplinks are restored, the Loopback0 interface is re-enabled.

 

track 1 interface <interface> line-protocol
!
track 2 interface <interface> line-protocol
!
! Track 3 stays Up while at least 50% of the tracked uplinks are up,
! i.e. it only goes Down when both uplinks have failed
track 3 list threshold percentage
 object 1
 object 2
 threshold percentage up 50
!
! Shut down Loopback0 (the RLOC) when both shared services uplinks fail
event manager applet LINK-TO-SHARED-SERVICES-IS-DOWN
 event syslog pattern "%TRACK-6-STATE: 3 list threshold percentage Up -> Down"
 action 1.0 cli command "enable"
 action 1.1 cli command "conf t"
 action 2.0 cli command "interface loop0"
 action 3.0 cli command "shut"
 action 9.0 syslog msg "LINK TO SHARED SERVICES IS DOWN, RLOC HAS BEEN SHUTDOWN"
!
! Re-enable Loopback0 once at least one uplink is restored
event manager applet LINK-TO-SHARED-SERVICES-IS-UP
 event syslog pattern "%TRACK-6-STATE: 3 list threshold percentage Down -> Up"
 action 1.0 cli command "enable"
 action 1.1 cli command "conf t"
 action 2.0 cli command "interface loop0"
 action 3.0 cli command "no shut"
 action 9.0 syslog msg "LINK TO SHARED SERVICES IS UP, RLOC HAS BEEN RESTORED"

For this particular deployment it is unlikely that the two uplinks between each border node and the two shared services nodes will fail simultaneously; however, an extra level of resiliency was requested for peace of mind. The EEM script made more sense than deploying a dedicated trunk between the redundant border nodes with an iBGP adjacency for each configured VN.

 

I hope that this helps

Hi @willwetherman ,

 

Thank you very much for sharing your script. I will test it in my deployment, but I'm pretty sure it will help me.

 

Thanks again and best regards

Regarding the IS-IS/BGP redistribution requirements: I have originated a default route from the borders and removed IS-IS towards the fusion. I have also generated a BGP summary address covering all loopbacks from the border. Nothing seems to be broken, but I have 2 new concerns:

 

As per CVD

 

WLC reachability—Connectivity to the WLC should be treated like the loopback addresses. A default route in the underlay cannot be used by the APs to reach the WLCs. A specific route to the WLC IP address must exist in the Global Routing Table at each switch where the APs are physically connected.

 

Is this still required? This would mean I have to perform BGP redistribution into IS-IS anyway (with filters).

 

AP subnet reachability - The borders are advertising the AP subnet into BGP because it comes from LISP, so it sounds like I don't need to redistribute IS-IS; the only requirement is the loopbacks of the devices, which I have sorted out with the BGP aggregate?
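For reference, the loopback summary was done along these lines (the covering prefix is a placeholder); since the loopbacks themselves live in IS-IS, a covering static to Null0 gives BGP something to advertise without any redistribution:

! Placeholder block covering all fabric node loopbacks
ip route 10.128.0.0 255.255.0.0 Null0
!
router bgp 65001
 address-family ipv4
  network 10.128.0.0 mask 255.255.0.0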

 

 

WLC reachability: I'm not sure the WLC subnet in the fabric GRT is a real requirement any more, but you're right, it's best to be CVD compliant. Sorry, I forgot about that recommendation.

If the border is originating the AP subnet into BGP and DNAC can reach all your fabric node loopbacks for management, then I don't see a need to redistribute ISIS into BGP. Maybe if you do LAN Automation you might need to, since the LAN Automation subnet is created dynamically by DNAC when you start the LAN Automation procedure.
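If you do keep the CVD-compliant WLC route, the redistribution can stay tightly filtered, e.g. something like this (the WLC address and names are hypothetical):

ip prefix-list WLC-HOST seq 10 permit 198.51.100.10/32
!
route-map BGP-TO-ISIS permit 10
 match ip address prefix-list WLC-HOST
!
router isis
 redistribute bgp 65001 route-map BGP-TO-ISIS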
