Broadcast Storm Limitation Layer

hburton
Level 1

This will hopefully be interesting enough to get some good input from you guys.

I'm curious where in the stack the problem occurs in a broadcast storm scenario. Is it more likely to be at the hardware (network interface) level, or at the software stack level on the host? I assume that most modern network interfaces can handle a very large amount of broadcast traffic, but that the OS will have trouble handling all of it at the filtering level. I don't know enough about things at that level, though, so that's basically a "feeling."
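For anyone who wants to put a number on this from a test host, here's a minimal sketch (assuming Linux with root privileges; the interface name is just an example, and note that the raw-socket capture itself burns some CPU) that counts broadcast frames per second arriving on an interface:

#!/usr/bin/env python3
# Rough broadcast-rate counter. Assumes Linux, root, and an interface
# named "eth0" -- adjust INTERFACE for your host.
import socket
import time

INTERFACE = "eth0"   # example name, change to your NIC
ETH_P_ALL = 0x0003   # capture every ethertype
BCAST = b"\xff\xff\xff\xff\xff\xff"

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
sock.bind((INTERFACE, 0))

count, start = 0, time.time()
while True:
    frame = sock.recv(65535)
    if frame[:6] == BCAST:   # destination MAC is the broadcast address
        count += 1
    now = time.time()
    if now - start >= 1.0:
        print(f"{count} broadcast frames/sec")
        count, start = 0, now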

 

Take a scenario where we have a beefy virtual host, but it's limited to a single network interface. Say we have a single guest VM on each of 20 active host VLANs (say 18 /24 subnets and 2 /20 subnets). If we hook that single interface up to a switchport configured as a trunk carrying all 20 VLANs, is that interface likely to have trouble handling all the broadcast traffic?
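To put a rough upper bound on that scenario, here's a back-of-envelope sketch. The per-host broadcast rate is an assumed figure for illustration only (ARP, DHCP, NetBIOS, etc. vary wildly by environment), not a measurement:

# Back-of-envelope worst case for the trunk described above.
slash24_hosts = 2**(32 - 24) - 2   # 254 usable addresses per /24
slash20_hosts = 2**(32 - 20) - 2   # 4094 usable addresses per /20

total_hosts = 18 * slash24_hosts + 2 * slash20_hosts
per_host_bcast_rate = 0.5          # ASSUMED broadcasts/sec per host

print(f"max hosts across all VLANs: {total_hosts}")   # 12760
print(f"assumed broadcast load: {total_hosts * per_host_bcast_rate:.0f} frames/sec")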

 

If that isn't interesting enough && the limiter IS in software && anyone is still reading this: because it's pertinent to the project that raised this question for me, what about a scenario where Hyper-V is the virtual host and the system could potentially use LACP aggregation over 3 interfaces (1G, though I don't think that's all that relevant)? Is the Hyper-V virtual network stack going to be a limiting factor?

 

Thanks for any insights some of you seasoned geeks may have for me.

2 Replies

Joseph W. Doherty
Hall of Fame
With broadcasts, the problem is mostly at the software level (regardless of platform). This is the accepted solution.

This is because, unlike unwanted unicast or even unwanted multicast, the host's network hardware generally cannot filter broadcast frames based on destination address. Since a broadcast is, logically, directed at all hosts, each host needs to examine the contents of the frame to determine whether the broadcast is something it wants/needs to know about.
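As a rough model of that filtering logic (an illustration only, not actual NIC firmware), a hardware receive filter looks something like this. Note that broadcast frames pass unconditionally, so deciding whether they matter is pushed up to software:

# Simplified model of a NIC hardware receive filter (illustration only).
BROADCAST = "ff:ff:ff:ff:ff:ff"

def nic_accepts(dst_mac, own_mac, multicast_filter):
    if dst_mac == own_mac:
        return True    # unicast to us: accept
    if dst_mac == BROADCAST:
        return True    # broadcast: hardware CANNOT drop this; the OS
                       # must inspect every such frame in software
    if dst_mac in multicast_filter:
        return True    # multicast group we joined (e.g. via IGMP)
    return False       # everything else is dropped in hardware

# Frames to other hosts' MACs never reach the CPU, but every broadcast
# does, whether or not any protocol on this host cares about it.
print(nic_accepts("ff:ff:ff:ff:ff:ff", "00:11:22:33:44:55", set()))  # True
print(nic_accepts("aa:bb:cc:dd:ee:ff", "00:11:22:33:44:55", set()))  # False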

Besides the host's problem of processing broadcast frames, the network needs to forward broadcasts to all hosts. Effectively, for broadcasts, you no longer have a switched network; you have a shared-media network. (Modern switches also avoid unnecessary transmission of multicast to individual ports by using IGMP snooping.)
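To make that switched-versus-shared-media point concrete, here's a toy model of a switch's egress decision for one VLAN (again just an illustration; real switches do this in ASICs). Known unicast goes out exactly one port, while broadcast floods every port:

# Toy model of a switch's egress decision within one VLAN.
def egress_ports(dst_mac, ingress_port, mac_table, vlan_ports, igmp_groups):
    if dst_mac == "ff:ff:ff:ff:ff:ff":
        # Broadcast: flood to every port in the VLAN except the ingress.
        return vlan_ports - {ingress_port}
    if dst_mac in igmp_groups:
        # Multicast with IGMP snooping: only ports with interested receivers.
        return igmp_groups[dst_mac] - {ingress_port}
    if dst_mac in mac_table:
        # Known unicast: exactly one port.
        return {mac_table[dst_mac]}
    # Unknown unicast: flooded like a broadcast until the MAC is learned.
    return vlan_ports - {ingress_port}

ports = egress_ports("ff:ff:ff:ff:ff:ff", 1, {"00:11:22:33:44:55": 7},
                     {1, 2, 3, 4, 5, 6, 7, 8}, {})
print(ports)  # every port but 1 -- each attached host must process the frame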

BTW, you don't need a broadcast storm to have a problem. Even "normal" broadcast traffic can be a problem. This is why flat networks don't scale beyond a certain size, and also why some protocols try to limit their use of broadcasts (often by using a cache, e.g. ARP).
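As a sketch of that caching point (a simplified model of what an OS neighbor table does, not actual kernel code; broadcast_arp_request here is a stub standing in for the real request/reply exchange): a sender only broadcasts when the cache misses, so steady-state traffic stays unicast.

# Simplified ARP cache model (illustration; real stacks also age entries).
arp_cache = {}  # ip -> mac

def broadcast_arp_request(ip):
    # Stub for the real exchange: one broadcast "who-has" frame goes out,
    # and the address owner replies unicast with its MAC.
    return "00:11:22:33:44:55"  # placeholder reply for this sketch

def resolve(ip):
    if ip in arp_cache:
        return arp_cache[ip]          # cache hit: no broadcast at all
    mac = broadcast_arp_request(ip)   # cache miss: exactly one broadcast
    arp_cache[ip] = mac               # later traffic to ip is pure unicast
    return mac

resolve("192.0.2.10")  # first call broadcasts
resolve("192.0.2.10")  # second call is answered from the cache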

Thank you, Joseph.
That is what I was thinking. I may have used the term "broadcast storm" incorrectly. What I was referring to is the situation where there are so many hosts on a network that the broadcast traffic becomes too much for the hosts to process. I guess "broadcast storm" generally refers to something like the situation created by a loop?
So it sounds like, with the number and size of the subnets I'm dealing with, it's likely to be problematic at the VM host's virtual network level.

I did a small test where I set up a desktop-class PC with your average Intel NIC to use a virtual interface for each subnet, and it did surprisingly well. I was able to arping across all subnets, and it didn't lose as many packets as I expected. However, logging in via SSH, you could really feel that it was struggling: once connected it felt fine, but the login prompt would sometimes take 10-15 seconds to come up. CPU/RAM/etc. never looked bad, which is what made me wonder if it may actually be a hardware problem. I didn't have much time to devote to that test, so I didn't dig deeper. It's likely that the kernel was restricting the resources consumed by the network stack.
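If anyone wants to check that last theory on a Linux test box, here's a quick sketch that reads /proc/net/softnet_stat (the hex columns are, in order, processed / dropped / time_squeeze, one row per CPU). It's one possible indicator, not a diagnosis: climbing "dropped" or "time_squeeze" counts mean the kernel's per-poll packet budget is being exhausted even while overall CPU looks idle.

# Quick check of Linux softirq packet-processing pressure (Linux only).
with open("/proc/net/softnet_stat") as f:
    for cpu, line in enumerate(f):
        cols = [int(x, 16) for x in line.split()]
        processed, dropped, squeezed = cols[0], cols[1], cols[2]
        print(f"cpu{cpu}: processed={processed} dropped={dropped} "
              f"time_squeeze={squeezed}")
# Non-zero "dropped" or a climbing "time_squeeze" suggests the kernel's
# packet budget (net.core.netdev_budget) is the bottleneck rather than
# the NIC hardware -- consistent with slow logins while CPU/RAM look fine.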
Thanks again.
