SD-Access Greenfield Deployment L3 handoff and BN redundancy

chingunho88
Level 1

We’re working on a greenfield deployment of Cisco SD-Access. We have two Catalyst 9600R switches designated as BN/CP, which we’re setting up as individual devices. Many have recommended avoiding VSS or SVL due to downtime during maintenance windows.

Each BN/CP would have two L3 handoff connections: one to the Internet Edge Firewall for WAN/internet access and one to the Data Center firewall for DC subnets.

My Questions:

1. What’s the recommended approach for setting up this L3 handoff?

2. How should we ensure redundancy between the BN/CP nodes?

3. Is it necessary to configure IS-IS between the DNA border nodes in SD-Access, or would iBGP suffice? Can these configurations be automated?

Any insights or best practices would be greatly appreciated! Thanks in advance!

12 Replies

Torbjørn
VIP

You should use automated border configuration (configuring the links of your border nodes under the fabric site) unless there are specific reasons to do otherwise. Ideally, connect your borders directly to your fusion devices, provided there are enough interfaces to do so. If your fusion nodes don't have enough interfaces for each border to be connected redundantly, establish redundant links from your borders to switches connected to your fusion node(s). If you go the switch route, manually prune the VLANs so that spanning tree can't block any of the links toward your border nodes; for example:
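(Interface and VLAN IDs below are placeholders; the idea is that no handoff VLAN is carried on more than one border-facing trunk, so spanning tree has no loop to block.)

interface TenGigabitEthernet1/0/1
 description Uplink to BN1
 switchport mode trunk
 switchport trunk allowed vlan 3001,3002
!
interface TenGigabitEthernet1/0/2
 description Uplink to BN2
 switchport mode trunk
 switchport trunk allowed vlan 3003,3004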

You should ensure redundancy between the BN/CP nodes by having robust routing toward the external routing domain(s) and by following other best practices for redundancy (different power sources, separate rooms/cooling, etc.).

Assuming that you're using LAN Automation for underlay provisioning, you should set up IS-IS between the border nodes. This can be done manually or added through the "Add link" workflow.

Happy to help! Please mark as helpful/solution if applicable.
Get in touch: https://torbjorn.dev

What’s the recommended approach for setting up this L3 handoff?
You'd prefer it to be automated; hence you'll avoid LAN automation for the BN|CPs and instead configure their north-bound connectivity manually.

How should we ensure redundancy between the BN/CP nodes?
You'd prefer each of them to be connected to each fusion router by a separate physical link.
Some sources state that in addition you must have a dedicated BN|CP interconnect, but I'm a bit skeptical about that as long as we have a full mesh toward the north and IS-IS ECMP toward the south.

3. Is it necessary to configure IS-IS between the DNA border nodes in SD-Access, or would iBGP suffice? Can these configurations be automated?
I love this disputable topic... Let me state how it is implemented in most of the cases I've met: both an IS-IS direct interconnect and VPNv4 iBGP peering between the BN|CPs are in place. DNAC can automate IS-IS with "Add interface" in LAN automation. I suspect DNAC automates the VPNv4 iBGP during provisioning of the 2nd BN|CP.
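As a sketch, that VPNv4 iBGP peering looks roughly like this (the AS number and loopback address here are made up):

router bgp 65001
 neighbor 10.0.0.2 remote-as 65001
 neighbor 10.0.0.2 update-source Loopback0
 !
 address-family vpnv4
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-community both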

3. Is it necessary to configure IS-IS between the DNA border nodes in SD-Access, or would iBGP suffice? Can these configurations be automated?
It is recommended to have a routed underlay (GRT) link between BNs, but it is not strictly necessary. BN1's Lo0 needs to be able to reach BN2's Lo0 and CP2's Lo0; BN2's Lo0 needs to be able to reach BN1's Lo0 and CP1's Lo0. The easiest way to make this happen efficiently is to put a p2p routed link between the two BN/CPs. If you do not have a routed link between the BN/CPs, then LISP and redirected VXLAN traffic will route through intermediate nodes or edge nodes, which in some networks might create a traffic bottleneck.
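Configured by hand, such a link might look roughly like this (the interface and the /31 below are illustrative only; LAN automation would pick its own values):

interface TenGigabitEthernet1/0/48
 description p2p underlay link to BN2
 no switchport
 ip address 10.255.255.0 255.255.255.254
 ip router isis
 isis network point-to-point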

For LISP Pub/Sub SDA fabrics there is no need to configure any manual BGP between BNs. Manual per-VRF IBGP was used in LISP/BGP SDA fabrics. All new deployments should use LISP Pub/Sub.

Yes, the BN-to-BN underlay routed link can be automated if you've used LAN automation to bring up the underlay, as Andy said. If you've configured the underlay manually, then the BN-to-BN routed link will need to be configured manually as well.

Best regards, Jerome

"The easiest way to make this happen efficiently is to put p2p routed link between the two BN/CP. "
i'd say it's not easiest as soon as u have 10Gbps fabric with ENs :0) 
But my Q here is: if i'm extremely paranoid on optimal routing (like this with that direct interconnect) & on redundancy (like with doubling that interconnect) i'm only left with option to LAN-automate "Additional interface" with 2 phy links between BN|CPa, right? with drawback of using extra /31 subnet vs the case when that 2 links would be manually configured as PC.

& my last Q on this topic: when exactly DNAC trigger VPNv4 iBGP peering between BN|CPs please?
thanks

jedolphi
Cisco Employee

I'm only left with the option to LAN-automate an "Additional interface" with 2 physical links between the BN|CPs, right? With the drawback of using an extra /31 subnet versus the case where those 2 links would be manually configured as a port channel.

For now you could add a manual port channel and put on it a /31 from an IP address range that is not in a LAN-A subnet. Later, LAN-A will add support for port channels. I won't give a timeline, but the work is happening.
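A hypothetical sketch of such a manual port channel (the interfaces, channel number, and /31 are placeholders):

interface Port-channel10
 no switchport
 ip address 192.0.2.0 255.255.255.254
 ip router isis
!
interface TenGigabitEthernet1/0/47
 no switchport
 channel-group 10 mode active
!
interface TenGigabitEthernet1/0/48
 no switchport
 channel-group 10 mode active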

When exactly does DNAC trigger the VPNv4 iBGP peering between the BN|CPs?

The BGP peering is between BN1 and CP2. It happens when you deploy the BN+CP role; it's part of the SDA config models.

The BGP peering is between BN1 and CP2. It happens when you deploy the BN+CP role; it's part of the SDA config models.
How can that be if I configure one BN|CP at a time? DNAC has no idea who the 2nd BN|CP will be when I'm configuring the 1st BN|CP.

Hi Andy, when you add BN2/CP2, CatC configures the BN-CP iBGP peering on both BN1/CP1 and BN2/CP2. It's the same as when you add a new L3VN or subnet to the fabric site: the CPs are updated with the new L3VN/subnet when it is provisioned; it can't be known beforehand.

That's what I suspected happens in reality. Thanks.

vishalbhandari
Spotlight

Setting up Cisco SD-Access with Catalyst 9600R switches as Border Node/Control Plane (BN/CP) devices is critical for ensuring redundancy, scalability, and smooth integration with external networks. Here’s a recommended approach for each of your questions:


1. Recommended Approach for L3 Handoff

For Layer 3 handoff connections to the Internet Edge Firewall and Data Center Firewall:

  • Dedicated VRFs per Domain:

    • Use separate VRFs for Internet (WAN) and Data Center subnets to maintain segmentation and scalability.
    • Example VRFs:
      • VRF-WAN for connections to the Internet Edge Firewall.
      • VRF-DC for connections to the Data Center Firewall.
  • L3 Interfaces:

    • Configure routed interfaces (physical or subinterfaces) between the BN/CP and the respective firewalls.
    • Ensure each L3 handoff connection is terminated in the appropriate VRF.
  • Dynamic Routing Protocol:

    • Use a dynamic routing protocol such as OSPF, BGP, or EIGRP between the border nodes and the firewalls for automatic route propagation.
    • For WAN/Internet, consider BGP with the firewall for scalability.
    • For the Data Center, OSPF or iBGP is often preferred.
  • Route Filtering:

    • Use route maps to control the routes exchanged between the BN/CP devices and the firewalls, to avoid leaking internal SD-Access routes into the wrong domain. (A configuration sketch tying these points together follows below.)
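A rough sketch of what such a handoff could look like on one border node. None of this is DNA Center output; the VRF name, VLAN, addressing, AS numbers, and prefix-list are purely illustrative:

vrf definition VRF-WAN
 address-family ipv4
 exit-address-family
!
interface TenGigabitEthernet1/0/1
 no switchport
!
interface TenGigabitEthernet1/0/1.3001
 encapsulation dot1Q 3001
 vrf forwarding VRF-WAN
 ip address 198.51.100.0 255.255.255.254
!
! assume 172.16.0.0/12 holds fabric-internal prefixes we never want to advertise
ip prefix-list FABRIC-ONLY seq 5 permit 172.16.0.0/12 le 32
route-map TO-FIREWALL deny 10
 match ip address prefix-list FABRIC-ONLY
route-map TO-FIREWALL permit 20
!
router bgp 65001
 address-family ipv4 vrf VRF-WAN
  neighbor 198.51.100.1 remote-as 65100
  neighbor 198.51.100.1 activate
  neighbor 198.51.100.1 route-map TO-FIREWALL out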

2. Redundancy Between BN/CP Nodes

To ensure redundancy and avoid a single point of failure:

  • Dual Border Nodes:

    • Keep the two Catalyst 9600R switches as independent border nodes.
    • Each border node should have its L3 handoff connections to the Internet and Data Center firewalls.
  • Inter-BN/CP Connectivity:

    • Establish Layer 3 links between the two border nodes for route exchange and redundancy.
    • Use iBGP or IS-IS to advertise and synchronize routes between the nodes.
  • Load Sharing:

    • Configure load balancing between the border nodes.
    • Use ECMP (Equal-Cost Multi-Path) routing to distribute traffic across both border nodes (see the multipath sketch after this section).
  • Resilient Control Plane:

    • Control plane redundancy is provided natively in SD-Access by deploying two control plane nodes and does not depend on physical stacking (VSS or SVL). Avoiding VSS/SVL is a good practice here.
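On the ECMP point: with BGP, equal-cost paths are not used by default, so a device that learns the same prefixes via both border nodes needs multipath enabled explicitly. A minimal sketch, reusing the hypothetical AS/VRF values from the example above:

router bgp 65001
 address-family ipv4 vrf VRF-WAN
  ! allow two equal-cost eBGP paths
  maximum-paths 2
  ! allow two equal-cost iBGP paths
  maximum-paths ibgp 2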

3. IS-IS vs. iBGP for Border Node Communication

  • IS-IS:

    • IS-IS is the default protocol for the SD-Access underlay and is responsible for establishing connectivity between fabric nodes. However, using it for the border node's external routing is uncommon.
    • In this scenario, you typically don't need IS-IS for external routing.
  • iBGP:

    • iBGP is the recommended approach for synchronizing routes between the border nodes.
    • Configure iBGP between the border nodes within each VRF to share external routes (Internet and Data Center routes); a rough sketch follows this section.
    • Let DNA Center automate the iBGP configuration where possible, for consistency.
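A rough per-VRF iBGP sketch between the two border nodes (note Jerome's point earlier in the thread: manual per-VRF iBGP belongs to LISP/BGP fabrics, while LISP Pub/Sub deployments need no manual BGP between BNs; the address below is made up and assumes a per-VRF interconnect between the borders):

router bgp 65001
 address-family ipv4 vrf VRF-WAN
  neighbor 10.255.1.1 remote-as 65001
  neighbor 10.255.1.1 activate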

4. Automation with DNA Center

Cisco DNA Center simplifies configurations and ensures alignment with best practices:

  • Automated iBGP Configuration:

    • DNA Center automates iBGP configurations between border nodes and external peers.
    • It also handles VRF and route target configurations.
  • Fabric Provisioning:

    • The BN/CP role setup, VRF-to-VN mapping, and route exchange policies can all be automated via DNA Center.
  • Route Leak Automation:

    • DNA Center can automate inter-VRF route leaking (e.g., between the Data Center and WAN VRFs) if needed for certain use cases.
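Under the hood, inter-VRF leaking of this kind is typically expressed with route targets, roughly along these lines (the RT values are hypothetical, and this assumes MP-BGP is running on the box):

vrf definition VRF-WAN
 address-family ipv4
  route-target export 65001:100
  route-target import 65001:200
!
vrf definition VRF-DC
 address-family ipv4
  route-target export 65001:200
  route-target import 65001:100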

So how do you automate this, assuming that the link-level configuration is {interface X / no switchport ; interface X.Y / encapsulation dot1Q Y} on the BN|CP toward the fusion node?

L3 Interfaces:

  • Configure routed interfaces (physical or subinterfaces) between the BN/CP and the respective firewalls.

To automate the configuration of link-level interfaces (such as routed interfaces between the BN/CP and the respective firewalls), you can leverage Ansible, Python scripts, or Cisco DNA Center.

Where exactly in the DNAC workflow can automation of the L3 handoff to the FN be done with a routed link and its subinterfaces?
What I can see there is selecting a physical interface and then assigning to the VNs their transfer subnets with dedicated VLANs, meaning the interface is a trunk by default and DNAC prepares the corresponding config to be pushed to the BN|CP, roughly as sketched below. So what means does DNAC provide to let us override that default?
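For illustration (the VLAN, addresses, and AS numbers here are just placeholders, not actual DNAC output):

vlan 3001
!
interface Vlan3001
 vrf forwarding VRF-WAN
 ip address 203.0.113.0 255.255.255.254
!
interface FortyGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 3001
!
router bgp 65001
 address-family ipv4 vrf VRF-WAN
  neighbor 203.0.113.1 remote-as 65100
  neighbor 203.0.113.1 activate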
Thanks