
L2 Flooding Limitations

dm2020
Level 1

Hi All,

 

Is there a practical limit to the number of IP pools with L2 Flooding enabled (or L2 Only pools) within a fabric site? When testing L2 flooding in our deployment, I've noticed that all fabric edge switches join the same underlay multicast group (239.0.17.1) for every pool that has L2 flooding enabled. As a result, broadcasts, unknown unicasts, etc. are flooded to all fabric edge switches, regardless of whether a given switch has any endpoints connected.

 

At the moment we are planning a migration and have ~50 pools that need L2 flooding enabled. This is to support a range of BMS systems and scenarios where we need to stretch L2 between various ports within our fabric while still supporting things like DHCP, so we need to use L2 Only.

 

Are there any enhancements in future releases of DNAC/SDA where the flooding of broadcasts etc. will be limited to only the fabric edge switches that need it? I've looked at features such as zoning, which lets you select which VNs/IP pools are enabled on a specific fabric edge (or range of fabric edges) within a fabric site. However, all fabric edges still join the same underlay multicast group for L2 flooding, so they all receive the same broadcasts regardless of whether they have the associated VN/IP pool enabled.

6 Replies

Take a look at PIM Source-Specific Multicast (PIM-SSM) and PIM Any-Source Multicast (PIM-ASM); both are supported, and the first might be what you are looking for if you are worried about multicast. But from everything I've read, in order for L2 flooding to work, an underlay multicast group is formed automatically in ASM mode.

 

The configuration pushed by DNAC will be like:

 

instance-id XXXX
 remote-rloc-probe on-route-change
 service ethernet
  eid-table vlan XXX
  broadcast-underlay 239.0.0.10   ! VLAN XXX joins this underlay multicast group
  database-mapping mac locator-set xxx
 exit-service-ethernet
exit-instance-id

 

In other words, the broadcast-underlay group is configured per VLAN.

Hi Flavio,

 

Thanks for the reply; however, the config you posted is outdated. As of DNA Center 1.3.3.x and above, DNAC allocates a unique PIM ASM multicast group per fabric site for L2 Flooding, starting at 239.0.17.1 for the first fabric site and counting up. For our fabric site, all VLANs with L2 flooding enabled use the same multicast group, 239.0.17.1.
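For reference, a hedged sketch of what the pushed LISP config looks like on these releases, following the same structure as the earlier snippet (the instance-id, VLAN, and locator-set names are placeholders, not taken from our fabric; only the group address reflects what we observe):

```
instance-id XXXX
 remote-rloc-probe on-route-change
 service ethernet
  eid-table vlan XXXX
  broadcast-underlay 239.0.17.1   ! same site-wide group on every L2-flooding VLAN
  database-mapping mac locator-set rloc_xxx
 exit-service-ethernet
exit-instance-id
```

The key difference from the older behaviour is that the broadcast-underlay group is no longer unique per VLAN; every L2-flooding VLAN in the site references the same group.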

 

 

Parthiv Shah
Cisco Employee

If you are running anything after the DNAC 1.3.3 release, check in the fabric site whether you see a banner prompting you to apply PIM configuration. If so, apply it; it pushes the additional configuration required to mitigate this problem. If you have access to the Bug Toolkit, send me a private message.

Hi Parthiv,

 

We are currently running DNAC 2.2.2.8 with IOS-XE 17.3.4 and have applied all recommendations to the fabric; however, none of these change the behaviour. As an example, we have VLAN 1021 (VNI 8188) with L2 flooding enabled that is pushed to all 50 switches in our network, and we only have VLAN 1021 endpoints connected to switches 1 and 2. When running a packet capture on switch 50, which has no endpoints connected, we can see broadcast messages carried within VXLAN VNI 8188 that originate from the endpoints connected to switches 1 and 2. So it would seem that broadcasts, link-local multicasts, etc. are flooded to all switches within a fabric site regardless of whether endpoints are connected to those switches. In a traditional network we could prune VLANs off trunks to avoid network-wide flooding, but that doesn't seem possible in SDA.
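For anyone wanting to reproduce this, a hedged sketch of an Embedded Packet Capture on the edge with no local endpoints (the capture name and fabric-facing interface are placeholders for your platform; filter/inspect for VXLAN transport on UDP/4789 in the resulting buffer):

```
monitor capture L2FLOOD interface TenGigabitEthernet1/1/1 in
monitor capture L2FLOOD match ipv4 any any
monitor capture L2FLOOD start
! ...generate a broadcast from an endpoint behind another fabric edge...
monitor capture L2FLOOD stop
show monitor capture L2FLOOD buffer brief
```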

 

What I want to understand is if this is the expected behaviour when L2 Flooding is enabled and if there will be any future enhancements to make this more efficient?

Can you check whether there is IP PIM Passive configured?
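For context, the configuration being asked about is simply PIM passive under the anycast SVI on the fabric edges, e.g. (VLAN number illustrative):

```
interface Vlan1021
 ip pim passive
```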

 

There is a plan to prune the VLAN if there is an L2 border handoff with flooding towards the uplink, but no plan to prune within the fabric.

 

Yes, PIM passive is configured under all anycast SVIs on all fabric edge switches; however, I don't see how this changes the behaviour, as L2 Only networks don't use those SVIs. As a test, I shut down all SVIs on one fabric edge switch, and the switch still joins underlay multicast group 239.0.17.1 and receives broadcasts etc. from all other fabric edge switches in the network.
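The group membership can be confirmed on any edge with standard multicast show commands, which is how we verified the switch still joins the site-wide group after the SVIs were shut down:

```
show ip mroute 239.0.17.1
show ip igmp groups 239.0.17.1
```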

 

Based on your last comment, it doesn't sound like L2 flooding will be pruned within the fabric, so the behaviour we are seeing is normal.

 

 
