02-24-2025 01:41 PM - edited 02-24-2025 01:44 PM
Good day, folks. I have a few questions, and maybe someone can give me guidance.
I appreciate all the help one can offer, but I really don't want just a reference to a white paper. That's great to have, but I would also like to have someone who has actually deployed NDFC and knows the product provide some guidance. Hopefully, I can find someone. Thanks in advance!
Scenario:
We have two data centers today - a main site and a DR site - connected over dark fiber, and the two sites are L3-isolated with no L2 stretched between them. First I thought of having two separate fabrics, with a separate NDFC cluster at each site, and treating them independently. Then someone mentioned multisite as an alternative. My understanding was that multisite is deployed when the VxLAN fabric needs to be extended across sites, meaning L2 adjacency across sites - VLAN 10/VxLAN 10010 can exist in both sites and belong to the same subnet. But I may be wrong. If so, what are the determining factors that would point to a Multi-Site Domain (MSD) vs. separate fabrics?
Questions:
In NDFC parlance and constructs, you have a Border Leaf (I think it's just called "border" today) and Border Gateways (BGWs). Can I collapse the functionality of the border leaf pair and the BGW pair? It seems like a colossal waste of money to dedicate two leaf switches as BGWs simply to connect one port to the INS... am I missing something? I would like to connect our edge devices (firewalls, routers, DMZ devices, etc.) to the collapsed border leaf pair, along with the single port that connects to the INS.
Then, do I really need an INS at all? It's just a router, and I believe it can be managed by NDFC. Why can't I just connect the provider circuit directly to the BGW at both sites and be done with it? That way you'd have BGW-to-BGW eBGP peering without the INS. The BGWs can advertise all the management and underlay routes, and they can also act as the VxLAN bridges, meaning the BGW does the IFC and the L3out.
02-24-2025 01:45 PM
Hello @ex-engineer
Since you don’t require VXLAN extension today, you can keep two separate fabrics with an L3 interconnect and use a collapsed BL + BGW model to simplify infrastructure. You do not need an INS unless you adopt MSD.
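If you do end up adopting MSD, collapsing BL and BGW simply means enabling the Multi-Site BGW role on the same switch pair that already carries your VRF-lite handoffs to the firewalls/routers. A minimal NX-OS sketch of the BGW-specific piece (the site ID and loopback numbers are placeholders, and NDFC generates this from the fabric settings rather than you entering it by hand):

evpn multisite border-gateway 100                  ! placeholder site ID
!
interface nve1
  host-reachability protocol bgp
  source-interface loopback1                       ! per-switch VTEP loopback
  multisite border-gateway interface loopback100   ! shared Multi-Site virtual IP loopback

The existing border-leaf function (VRF-lite eBGP toward the FWs, routers, and DMZ devices) stays on the same pair unchanged.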
02-24-2025 02:00 PM
Thank you. So, in the scenario of separate fabrics, I would have an NDFC cluster at each site, right?
Also, I can't have MSD without the INS? Is it an NDFC template restriction, or does something actually break from an R/S perspective?
02-24-2025 03:39 PM
Hi @ex-engineer, we have deployed MSD via BGWs. You can have the BL and BGW roles on the same leaf; the main thing to consider is that the VLANs you extend across sites are deployed on the BGWs, so capacity is one factor to keep in mind. In your case not many VLANs are extended, since the second fabric is DR, so you can have BL and BGW on the same leaf pair. You can terminate the dark fiber directly on the BGWs and it will work: NDFC will automatically discover the connectivity via CDP and push the eBGP underlay configuration.
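Roughly speaking, the per-link configuration NDFC pushes on each BGW for that dark-fiber DCI interface looks like the sketch below (the addresses and ASNs are placeholders; the real values come from the Multi-Site underlay IFC that NDFC creates):

interface Ethernet1/1
  description DCI dark-fiber link to remote-site BGW
  no switchport
  mtu 9216
  ip address 10.33.0.1/30
  evpn multisite dci-tracking    ! counts this link toward Multi-Site DCI reachability
  no shutdown
!
router bgp 65001
  neighbor 10.33.0.2
    remote-as 65002
    address-family ipv4 unicast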
Thanks,
Deepak Bansal
02-24-2025 10:20 PM
Yes, in a separate-fabric design you would deploy an NDFC cluster at each site, since NDFC then manages each fabric independently. This ensures the Main and DR sites function autonomously, without dependency on a centralized controller. While it introduces some operational overhead in managing two NDFC instances, it aligns better with your current L3-isolated design.
Next, regarding MSD without an INS: it is not supported, due to both NDFC template restrictions and EVPN control-plane requirements. NDFC mandates an INS when configuring MSD because, in that architecture, the BGWs do not peer directly; instead, they form eBGP sessions with the INS, which acts as an EVPN route server to facilitate inter-site route exchange. Without an INS, the BGWs would not learn inter-site prefixes dynamically, breaking L2 extension and the L3 EVPN control plane. While you could technically configure direct BGW-to-BGW eBGP peering for L3-only interconnectivity, that bypasses EVPN's capabilities and makes MSD unnecessary.
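For context, in that route-server model each BGW's overlay peering toward the INS looks roughly like the following (loopback addresses and ASNs are placeholders):

router bgp 65001
  ! multihop eBGP EVPN session to the route server in the INS
  neighbor 10.255.255.1
    remote-as 65100
    update-source loopback0
    ebgp-multihop 5
    peer-type fabric-external        ! identifies a Multi-Site (fabric-external) EVPN peer
    address-family l2vpn evpn
      send-community
      send-community extended
      rewrite-evpn-rt-asn            ! lets sites with auto route-targets and different ASNs import each other's routes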
So, if your requirement is only L3 inter-site routing, then MSD is not needed at all. You can achieve the same outcome by directly peering the BGWs over eBGP and handling inter-site route advertisements manually. MSD only makes sense when you require automated inter-site VxLAN bridging and unified policy management.
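As a sketch of that manual approach, each tenant VRF simply gets its own eBGP session across the inter-site link (sub-interface number, VRF name, addresses, and ASNs below are placeholders):

interface Ethernet1/1.100
  encapsulation dot1q 100
  vrf member Tenant-A
  ip address 10.50.0.1/30
  no shutdown
!
router bgp 65001
  vrf Tenant-A
    address-family ipv4 unicast
      advertise l2vpn evpn      ! hand fabric EVPN routes to the remote site as plain IPv4
    neighbor 10.50.0.2          ! remote-site border switch
      remote-as 65002
      address-family ipv4 unicast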
Given your current setup with L3 isolation and no L2 stretch, from my point of view, keeping separate fabrics with direct BGW peering is the simpler and more efficient approach.
02-26-2025 01:36 PM - edited 02-26-2025 01:38 PM
Thanks to the two of you