
NDFC Design Questions

ex-engineer
Level 1

Good day, folks. I have a few questions, and maybe someone can give me guidance.

I appreciate all the help one can offer, but I really don't want just a reference to a white paper. That's great to have, but I would also like to have someone who has actually deployed NDFC and knows the product provide some guidance. Hopefully, I can find someone. Thanks in advance!

Scenario:

  • Main site will have a leaf/spine (L/S) fabric with NDFC: 14 leaf switches and 4 spines.
  • DR will have 8 leaf switches, 2 spines, and an NDFC cluster.
  • Main and DR sites have L3 isolation; L2 is not stretched today. Not much traffic from Main to DR, mostly just storage replication.
  • Main and DR are connected with dark fiber.

First, I thought of having 2 separate fabrics with a separate NDFC cluster in each site and treating them independently. Then someone mentioned multisite as an alternative. My understanding was that multisite is deployed when the VxLAN fabric needs to be extended across sites, meaning L2 adjacency across sites: VLAN 10/VxLAN 10010 can exist in both sites and belong to the same subnet. But I may be wrong. If so, what are the determining factors that would point to an MSD vs. separate fabrics?

Questions:

In NDFC parlance and constructs, you have a Border Leaf (I think it's just called Border today) and Border Gateways (BGWs). Can I collapse the functionality of the border leaf pair and the BGW pair? It seems like a colossal waste of money to dedicate 2 leaf switches as a BGW simply to connect 1 port to the INS... am I missing something? I would like to connect our edge devices (FWs, routers, DMZ devices, etc.) to the collapsed border leaf pair, along with the single port that connects to the INS.

Then, do I really need an INS? It's just a router, and I do believe it can be managed by NDFC. Why can't I just connect the provider circuit directly to the BGW at both sites and be done with it? That way, you'd have BGW-to-BGW eBGP peering without the INS. The BGWs can advertise all the management and underlay routes, and they can also act as the VxLAN bridges, meaning the BGW does IFC and L3out.

 


5 Replies

M02@rt37
VIP

Hello @ex-engineer 

Since you don’t require VXLAN extension today, you can keep two separate fabrics with an L3 interconnect and use a collapsed BL + BGW model to simplify infrastructure. You do not need an INS unless you adopt MSD. 
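
For illustration, the L3 interconnect over the dark fiber can be as simple as a point-to-point eBGP session between the two border pairs. A rough sketch follows; the ASNs, addresses, interface IDs, and prefix-list below are made up for the example, not something NDFC would push verbatim:

ip prefix-list INTER-SITE-OUT seq 10 permit 10.10.0.0/16 le 24

interface Ethernet1/48
  description Dark fiber handoff to the DR border leaf (example port)
  no switchport
  ip address 10.33.0.1/30
  no shutdown

router bgp 65001
  neighbor 10.33.0.2
    remote-as 65002
    description eBGP to DR border leaf
    address-family ipv4 unicast
      ! advertise only what DR needs (storage replication, management)
      prefix-list INTER-SITE-OUT out

The same session carries routes back from DR, so each side learns only the inter-site prefixes you explicitly allow.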

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

Thank you. So, in the scenario of separate fabrics, I would have an NDFC cluster at each site, right? 

Also, I can't have MSD without the INS? Is it an NDFC template restriction, or does something actually break from an R/S perspective? 

Hi @ex-engineer, we have deployed MSD via BGWs. You can have the BL and BGW on the same leaf; the one thing you might need to consider is that the VLANs you extend will be deployed on the BGWs, so capacity is something to watch. But in your case not many VLANs are extended, since the second fabric is DR, so you can have the BL and BGWs on the same leaf. You can terminate the dark fiber directly on the BGWs and it will work; NDFC will automatically discover the connectivity via CDP and push the config for the BGP underlay.
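
To give a feel for it, the multisite pieces that end up on a collapsed BL/BGW look roughly like this. This is a sketch only; the site ID, loopback numbers, and interface IDs below are made up:

evpn multisite border-gateway 100
  delay-restore time 300

interface loopback100
  description Multi-Site VIP shared by the BGW pair
  ip address 10.100.100.1/32

interface nve1
  host-reachability protocol bgp
  source-interface loopback1
  multisite border-gateway interface loopback100

interface Ethernet1/47
  description Dark fiber / DCI link to the DR site
  evpn multisite dci-tracking

interface Ethernet1/1
  description Uplink to spine
  evpn multisite fabric-tracking

The BGW role just adds the multisite VIP and the DCI/fabric tracking on top of a normal border, so the same pair can still take your FW/router/DMZ handoffs.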

 

Thanks,

Deepak Bansal

@ex-engineer 

Yes, in a separate-fabric design you would need an NDFC cluster at each site, since NDFC manages each fabric independently. This ensures that the Main and DR sites function autonomously, without dependency on a centralized controller. While it introduces some operational overhead in managing 2 NDFC instances, it aligns better with your current L3-isolated design.

Next, regarding MSD without an INS: it is not supported, due to both NDFC template restrictions and EVPN control-plane requirements. NDFC mandates an INS when configuring MSD because, in this architecture, the BGWs do not peer directly; instead, they form eBGP sessions with the INS, which acts as an EVPN route server (the eBGP analogue of a route reflector) to facilitate inter-site route exchange. Without an INS, the BGWs wouldn't learn inter-site prefixes dynamically, breaking L2 extension and the L3 EVPN control plane. While you could technically configure direct BGW-to-BGW eBGP peering for L3-only interconnectivity, this bypasses EVPN's capabilities, making MSD unnecessary.
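
For context on why a plain router won't fill this role by default: the route-server function has to retain all EVPN route-targets (it holds no VRFs itself) and pass BGP next-hops through unchanged, so the BGWs still VXLAN-encapsulate directly to each other. On an NX-OS box acting as the route server, that looks roughly like this (a sketch only; the ASNs, IPs, and names are made up):

route-map NH-UNCHANGED permit 10
  set ip next-hop unchanged

router bgp 65000
  address-family l2vpn evpn
    retain route-target all
  neighbor 10.1.0.1
    remote-as 65001
    description BGW at Main site
    update-source loopback0
    ebgp-multihop 5
    address-family l2vpn evpn
      send-community
      send-community extended
      route-map NH-UNCHANGED out
      rewrite-evpn-rt-asn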

So, if your requirement is only L3 inter-site routing, then MSD is not needed at all. You can achieve the same outcome by directly peering the BGWs over eBGP and handling inter-site route advertisements manually. MSD only makes sense when you require automated inter-site VxLAN bridging and unified policy management.

Given your current setup with L3 isolation and no L2 stretch, from my point of view, keeping separate fabrics with direct BGW peering is the simpler and more efficient approach.

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

ex-engineer
Level 1

Thanks to the two of you
