SD-Access Border Nodes interconnection design

alessandro.s
Level 1

Hi all,

in the last months we successfully migrated our legacy network to SDA. The design and infrastructure configuration were provided by a Cisco expert, as at the beginning I wasn't very confident with the new environment. We have three medium fabric sites managed by DNAC, each site with its own co-located BN-CP and a 9800 WLC. All the configuration was provided by DNAC using LAN automation; we are using IS-IS between the BNs and FEs, and BGP with IP Transit to connect to the external world. We also have three VNs other than the INFRA_VN. Only recently I found that, in every site, the border nodes are connected to each other via an L3 port-channel that sits in the GRT, and there are no iBGP or IS-IS sessions active for the VNs, so my question is: why? I think it's part of the design but, for redundancy, wouldn't it be better to have iBGP or IS-IS between the border nodes in every VN?

10 Replies

PabMar
Cisco Employee

Alessandro,
First of all, thanks for deploying SD-Access.

If you have dual Borders and dual Fusions upstream and you mesh them with eBGP, then there's no need for iBGP between the borders.
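
For reference, one leg of that mesh, the per-VRF eBGP handoff between a BN and a fusion device, looks roughly like the sketch below. The AS numbers, VRF name, VLAN, and addresses are hypothetical, and Catalyst Center's L3 Handoff automation normally generates the BN side for you, so treat this as illustrative only:

! Illustrative only: one BN-to-fusion EBGP handoff in one VRF.
! AS numbers, VRF name, VLAN, and addresses are hypothetical.
vrf definition GUEST_VN
 address-family ipv4
 exit-address-family
!
interface Vlan3001
 description L3 Handoff to fusion, GUEST_VN
 vrf forwarding GUEST_VN
 ip address 10.50.0.1 255.255.255.252
!
router bgp 65001
 address-family ipv4 vrf GUEST_VN
  neighbor 10.50.0.2 remote-as 65002
  neighbor 10.50.0.2 activate
 exit-address-family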

If you still think iBGP may be beneficial in your case, please review these two documents:

Distributed deployment guide

Configure Fusion in SDA


I also recorded a full SDA lab; you may find some of the videos beneficial.

Hope that helps.

Hi PabMar,

many thanks for your reply, and thanks for pointing me to the documents and your SDA labs! The attached file is the typical connection diagram for our sites. We have full-mesh eBGP towards the ISP, which provides us an MPLS connection to all the "external" world (data centers and internet), so I receive a default route from them (I think this is relevant for LISP Pub/Sub). We also have a local stack of L3 switches that we use as fusion devices (I identify them as the "server block and fusion" switch) and also to connect some local services, let's say local servers, the WLC, etc.

The server block switches and the Fabric Edges are stacks of two or more devices (in the case of the FEs). As you can see in the diagram, they are connected directly to the BNs with a total of two uplinks, one per physical FR/FE switch, so from the FR/FE redundancy perspective wouldn't it be better to enable iBGP between the borders?

Also, we are in the process of migrating to another ISP, and they have a temporary physical limitation, so we cannot do a full-mesh connection to their routers but just a total of two links (one BN to one ISP router, the other BN to the other ISP router). Is iBGP between the borders recommended in this case?

Regards

If the Fabric Site is Pub/Sub then there's no need for manual per-VRF IBGP between Border Nodes. The Catalyst Center will automatically configure VPNv4 IBGP between BN and CP for route transport scenarios, but you don't need to change this or remove it.

In a Pub/Sub SDA Fabric Site, for each VRF that has a learned default route on the External BN, the BN will register itself as default-etr (default exit, a.k.a. default route) with the CP. If the BN's external link goes down, the default route goes down and the BN withdraws itself as default-etr. The manual per-VRF iBGP was used in the pre-Pub/Sub SDA architecture to work around the inability to register/withdraw the default-etr.
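
For context, that legacy workaround was roughly the sketch below: a per-VRF iBGP session over a VRF-aware inter-BN link. Names, VLAN, and addresses are hypothetical, and it assumes the platform supports L3 port-channel subinterfaces (an SVI per VN is the common alternative); again, this is not needed in a Pub/Sub site:

! Pre-Pub/Sub workaround, illustrative only; hypothetical VRF, VLAN,
! and addresses. Requires a VRF-aware inter-BN link (e.g. a dot1q
! subinterface or SVI per VN) rather than the GRT-only port-channel.
interface Port-channel1.3100
 description Inter-BN link in GUEST_VN
 encapsulation dot1Q 3100
 vrf forwarding GUEST_VN
 ip address 10.60.0.1 255.255.255.252
!
router bgp 65001
 address-family ipv4 vrf GUEST_VN
  neighbor 10.60.0.2 remote-as 65001
  neighbor 10.60.0.2 activate
 exit-address-family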

Yes, in a two-tier fabric architecture (ENs cabled directly to BNs) it does make sense to have underlay IGP links (IS-IS in your case) between the Border Nodes.
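
In practice that just means the inter-BN cross-link participates in the same IS-IS process that LAN automation already built, something like this sketch (illustrative NET, interface, and addressing):

! Illustrative only: inter-BN cross-link in the ISIS underlay (GRT).
! NET, interface, and addresses are hypothetical.
interface Port-channel1
 description Inter-BN underlay cross-link
 ip address 10.4.0.1 255.255.255.252
 ip router isis
 isis network point-to-point
!
router isis
 net 49.0000.0100.0400.0001.00
 is-type level-2-only
 metric-style wide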

Hi Jerome,

thanks for your kind reply. I understand that if a BN loses the default route it's not eligible to route any traffic, so the flows from the Fabric Edge will be forwarded by the other BN, but what about the fusion device, which is not using LISP?

Also, looking at the diagram I attached in my last post, let's say, as an example, I lose the uplink from the fusion device to BN/CP 1 (or from the FE to BN/CP 1). Let's assume we have a "Guest VN" and outbound traffic from the fusion device or FE whose destination is the internet. In this case the outbound traffic will reach BN2 and go outside towards one of the MPLS routers. Let's assume, then, that the returning traffic comes back to BN/CP 1, where the uplink to the fusion device/FE is lost. How can the returning traffic in the Guest VN then reach its destination without any iBGP session between the BNs in the Guest VN?

EDIT: reading your post more carefully, I see you were referring to the VPNv4 configuration, so probably the answer to my question is in how VPNv4 works, which is not perfectly clear to me. I'd be grateful if you could explain it!

Thanks in advance

Hi, if the BN goes down then the fusion device should see the BGP session towards the failed BN drop, thus routing traffic around the failed BN. Note that you probably want the fusion-BN eBGP session to drop quickly, not wait for the 180 s hold timer; usually this is achieved by making sure the BGP peering interfaces go down immediately. Alternatively you could use BFD to drop the eBGP peering, but I prefer the simpler interface-down scenario where possible. I might also mention that, after you think you have the design and configs correct, it's usually good practice to test the failure scenarios so you know for certain it works as expected. With a correct design you should be able to get sub-second convergence.
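
As a hedged sketch of the BFD option (hypothetical interface, timers, VRF, and addresses), it would look something like this on the BN:

! Illustrative only: BFD-triggered EBGP fallover on one handoff link.
! Interface, timers, VRF, and addresses are hypothetical.
interface TenGigabitEthernet1/0/1
 description Routed link to fusion
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 65001
 address-family ipv4 vrf GUEST_VN
  neighbor 10.50.0.2 fall-over bfd
 exit-address-family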

The CatC-automated VPNv4 is for BGP route transport over the SDA Fabric Site. To be honest this is not that common a use case, and at some point in the future we will probably remove the VPNv4 BGP between BNs and CPs (seen as inter-BN VPNv4 when BN and CP are on the same device). The VPNv4 route-transport use case is briefly explained in BRKENS-2816. The BN-CP VPNv4 iBGP is not related to the redundancy scenarios which I believe are the topic of this thread.
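
For readers who haven't seen it, that automated session has roughly this shape; the peer address is illustrative and this is not the exact generated config:

! Illustrative only: rough shape of the CatC-automated BN-CP VPNv4 IBGP.
! Peer address and AS are hypothetical.
router bgp 65001
 neighbor 10.4.0.2 remote-as 65001
 neighbor 10.4.0.2 update-source Loopback0
 address-family vpnv4
  neighbor 10.4.0.2 activate
  neighbor 10.4.0.2 send-community both
 exit-address-family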

Best Regards, Jerome

Thanks Jerome for your kind reply,
and also thanks for sharing your presentation. In my scenario I'm using BFD to speed up eBGP convergence, and I understand that if one link from the fusion device to BN1 fails, the outgoing traffic will flow through BN2. What is not clear to me is what happens to the returning traffic if it comes back to BN1 while the uplink between BN1 and the fusion device is down.
Regards,

Hi Alessandro, apologies if I'm not understanding the question, but hopefully I have... All SDA BNs automatically originate the SDA IP ranges into eBGP towards the fusion. If the fusion is cabled and eBGP-peered to both BNs, then the fusion will have two paths for each SDA IP range: path 1 via BN1 and path 2 via BN2. If the fusion-to-BN1 routed interconnect fails, the fusion should see the BN1 eBGP neighbor drop and will move all internet-to-SDA traffic (or traffic from any other subnet outside SDA) onto the routed link connected to BN2. For the time it takes the fusion-to-BN1 eBGP peer to drop and the fusion to recalculate the valid paths to SDA (in this example via BN2), some packets will be dropped. In a well-designed, well-performing network this should all happen in under one second.
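
On the fusion side, that two-path view comes simply from having one eBGP neighbor per BN inside each VRF, roughly like the sketch below (hypothetical AS numbers, VRF name, and addresses):

! Illustrative only: fusion device with one EBGP peer per BN in one VRF.
router bgp 65002
 address-family ipv4 vrf GUEST_VN
  ! Peering to BN1
  neighbor 10.50.0.1 remote-as 65001
  neighbor 10.50.0.1 activate
  ! Peering to BN2
  neighbor 10.50.0.5 remote-as 65001
  neighbor 10.50.0.5 activate
  ! Optional: install both paths (ECMP) instead of a single best path
  maximum-paths 2
 exit-address-family

Hope that helps? Jerome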

Hi Jerome,

no need to apologize, my bad if I was not able to explain it better. So what is missing in the design right now, I think, is either cross links between the fusion devices and the BNs, or iBGP for the VNs between the Border Nodes. Generally speaking, I'm not worried about the outgoing traffic; the issue is the returning traffic. Let's take the following diagram as an example:

(attached diagram: alessandros_0-1720020598077.png)

The uplink from the fusion device to BN1 has failed, so outgoing traffic in, let's say, the Guest VN towards the internet (green arrows) flows through the only available path via BN2 and then reaches the ISP MPLS and the internet. When the traffic comes back (blue arrows), I'm not aware of how the ISP manages the traffic, so there's the possibility that the flow will arrive at MPLS Router 1 and BN/CP 1. In that case the returning traffic will never reach the fusion device again unless:

- there's an iBGP session in the Guest VN between the borders, or

- there are cross links between the physical switches forming the fusion device stack and the Border Nodes, so that the other uplink still available from BN/CP 1 to the fusion device can forward the returning traffic.

Am I correct?

Kind regards,

Thanks for the clarification. If possible, my best recommendation would be to connect the MPLS routers directly to the fusion devices instead of connecting them to the BNs, because this simplifies the BN routing design (LISP-BGP interactions in various failure scenarios). If you absolutely must use the BNs as transport between MPLS and fusion, then please advise what type of BNs they are: External, or External+Internal? And is the SDA CP LISP Pub/Sub? The specific BN/CP configuration will have a bearing on the routing outcomes. Jerome

Hi Jerome,

thanks for replying and for your advice. Actually, I don't have a strict requirement to use the BNs as transport between fusion and MPLS, but I prefer to leverage the DNAC automation for the L3 Handoff (I did not say before that SDA is managed by DNAC). The BNs are both configured as External, and I confirm that the SDA CP is LISP Pub/Sub.

Regards