I'm having an occasional problem where, when someone is uploading or downloading a big file, all the ports on that same switch get flooded with traffic. I've included a screenshot of "show interfaces summary" for the switch, and you'll notice that all ports have a very high TX rate (in red). The user uploading the big file is on port GE1/0/21, shown in blue.
The big question is: what is causing this? I checked the broadcast and multicast counters and saw no significant increase during this period. I also connected a laptop running Wireshark to one of the ports, and unless I'm missing something, I don't see any suspicious packets.
Any ideas? What kind of packets should I be looking for in Wireshark? I have six of these 2960 switches, and they all behave the same way when users access big files on the network.
Interface IHQ IQD OHQ OQD RXBS RXPS TXBS TXPS TRTL
* GigabitEthernet1/0/12 0 0 0 2176 0 0 59535000 5080 0
* GigabitEthernet1/0/13 0 0 0 2246 39000 10 59834000 5107 0
* GigabitEthernet1/0/14 0 0 0 2324 15000 10 59793000 5101 0
* GigabitEthernet1/0/18 0 0 0 13337151 0 0 15488000 1385 0
* GigabitEthernet1/0/21 0 0 0 14274 88623000 7587 427000 285 0
* GigabitEthernet1/0/24 0 0 0 22936 14765000 3452 73535000 7813 0
* GigabitEthernet1/0/26 0 0 0 11821 781000 428 66964000 6290 0
* GigabitEthernet1/0/27 0 0 0 2165 0 0 59535000 5080 0
* GigabitEthernet1/0/28 0 0 0 2151 9000 4 59582000 5092 0
* GigabitEthernet1/0/29 0 0 0 13337477 1000 1 15621000 1399 0
* GigabitEthernet1/0/32 0 0 0 13349459 17000 9 15522000 1394 0
* GigabitEthernet1/0/33 0 0 0 1978 0 1 59534000 5080 0
* GigabitEthernet1/0/36 0 0 0 13339111 0 0 15485000 1386 0
* GigabitEthernet1/0/37 0 0 0 2184 0 0 59537000 5081 0
Here's a sample config of the switch port:
switchport access vlan 10
switchport mode access
no logging event link-status
storm-control broadcast level 1.00
storm-control multicast level 1.00
spanning-tree bpduguard enable
This is probably a bit late now, since I cleared the counters yesterday while I was troubleshooting. It doesn't happen every time, but if there's another incident like this I will try those two commands. What should I be looking for?
Actually, I was mistaken: I haven't cleared the counters on this switch yet. Here they are:
GigabitEthernet1/0/21 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 0c85.2505.e615 (bia 0c85.2505.e615)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:01, output hang never
Last clearing of "show interface" counters 20:44:04
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 239
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 1372000 bits/sec, 133 packets/sec
5 minute output rate 156000 bits/sec, 112 packets/sec
21564581 packets input, 26642855067 bytes, 0 no buffer
Received 781 broadcasts (781 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 781 multicast, 0 pause input
0 input packets with dribble condition detected
12350676 packets output, 8129134325 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
sh controllers ethernet-controller gigabitEthernet 1/0/21
Transmit GigabitEthernet1/0/21 Receive
1479046709 Bytes 2334504133 Bytes
10626713 Unicast frames 3699282053 Unicast frames
822222214 Multicast frames 565533 Multicast frames
812603415 Broadcast frames 159749 Broadcast frames
0 Too old frames 2097853643 Unicast bytes
0 Deferred frames 224174236 Multicast bytes
0 MTU exceeded frames 12476254 Broadcast bytes
0 1 collision frames 0 Alignment errors
0 2 collision frames 0 FCS errors
0 3 collision frames 0 Oversize frames
0 4 collision frames 0 Undersize frames
0 5 collision frames 0 Collision fragments
0 6 collision frames
0 7 collision frames 990079478 Minimum size frames
0 8 collision frames 1090212793 65 to 127 byte frames
0 9 collision frames 614688895 128 to 255 byte frames
0 10 collision frames 51127410 256 to 511 byte frames
0 11 collision frames 49590484 512 to 1023 byte frames
0 12 collision frames 904308275 1024 to 1518 byte frames
0 13 collision frames 0 Overrun frames
0 14 collision frames 0 Pause frames
0 15 collision frames
0 Excessive collisions 0 Symbol error frames
0 Late collisions 0 Invalid frames, too large
0 VLAN discard frames 0 Valid frames, too large
0 Excess defer frames 0 Invalid frames, too small
2288964066 64 byte frames 0 Valid frames, too small
904812321 127 byte frames
817753879 255 byte frames 0 Too old frames
1284022034 511 byte frames 0 Valid oversize frames
288853527 1023 byte frames 0 System FCS error frames
356013811 1518 byte frames 0 RxPortFifoFull drop frame
0 Too large frames
0 Good (1 coll) frames
0 Good (>1 coll) frames
So the "flooding" is not caused by retries or collisions. I do notice a small number of "Total output drops". What is the switch in question, and what is the downstream client? Is this a server?
Are the PCs and servers on a single subnet or on different subnets?
If different subnets, can you give us a tracert from client to server, and from server to client?
Do you have multiple subnets and run HSRP? There is a possibility that the server uses one HSRP router to reach the client subnet while the client uses the other one to reach the server. In that case the gateway the server uses never actually sees packets directly from the client, as they come in via the other path (if client and server are in different subnets). Then there can be apparent flooding of the unicast traffic either to or from the client, because a switch on the path has not recently seen a frame from that destination MAC (it always came in via the other gateway).
Making one HSRP gateway primary for all VLANs will resolve the issue; alternatively, set the MAC aging time much higher so the occasional broadcasts from clients keep the MAC address table complete.
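If that asymmetry turns out to be the cause, a minimal sketch of the two fixes might look like this (VLAN 10 matches the access-port config quoted earlier, but the HSRP group number and priority values are assumptions, not taken from this thread):

```
! On the router that should be the active gateway for every VLAN
interface Vlan10
 standby 10 priority 110
 standby 10 preempt
!
! Or raise MAC aging (14400 s = 4 hours, matching the default ARP timeout)
mac address-table aging-time 14400
```

The idea behind the aging change is simply to keep the MAC aging timer at least as long as the ARP timeout, so a host's table entry never expires before its ARP entry does.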
I'm also looking at unicast flooding as a possible cause. Both the source host and the destination host are on the same VLAN but on different switches. We do have HSRP in between for the gateway, but since they are on the same subnet, it shouldn't be an issue, right?
For setting the MAC aging time, what's the recommended value?
mac-address-table aging-time 14400
Oh, you already have 14400? On the core and on the access-level switches?
If they're both in the same VLAN/subnet, then the only thing I can think of is that something on the path between the switches prefers the server-to-workstation path through one set of switches and the return path through another.
Have you tried taking a port in the same VLAN, connecting a PC with Wireshark, and running a capture (say, 2 minutes) of what is being "unicast flooded"? After filtering out the standard junk (ARP requests, DNS, etc.), what high-volume packets do you see? In a perfect world you wouldn't see much; but is it server-to-workstation or workstation-to-server, and only for the one session (the large transfer)?
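As a starting point for that capture, a display filter along these lines hides the usual background noise so any flooded unicast stands out (a sketch: `aa:bb:cc:dd:ee:ff` is a placeholder for the capture laptop's own MAC, and `eth.dst.ig == 0` keeps only unicast destinations):

```
not arp and not dns and not stp and not cdp and eth.dst.ig == 0 and eth.dst != aa:bb:cc:dd:ee:ff
```

Any unicast frame that survives this filter arrived on a port it wasn't addressed to, which is exactly what flooding looks like from the capture laptop's point of view.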
One time I *did* see a complete VoIP stream in one direction only. It had hit the "cross-subnet" situation I described earlier: a legacy phone with a non-optimal gateway, flooding whenever the phone was used.
Just curious: do you see the same behavior with multiple servers, or just the one? Is it possible that the server has multiple NICs on the same subnet, and due to some quirk in the server settings it receives packets on one interface and sends them out another?
If you DO have multiple NICs on the server, have a look at the stats on all the network ports and look for any with an unusual imbalance of in/out packets.
Also, if it happens to be a Windows server with multiple NICs, is it possible NetBIOS/AD has advertised the server through another interface? From a workstation, ping the unqualified server name and verify the IP is the one you expect.
We're clutching at straws here: we have little pieces of information, but not a complete picture.
Thanks for the replies. I guess I can't answer those questions until there's a repeat. In the meantime, I'm also planning to add this command to the switch ports:
switchport block unicast
Has anyone tried using this on a switch port connected to a DHCP/DNS server?
It shouldn't affect your DHCP/DNS server, for two reasons:
1. If a client sent a request to the server, the switch should already know the MAC address of the sender.
2. If the server wants to send a packet to anybody, it sends a broadcast ARP request first, and once there is a response the switch learns the responder's MAC address.
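For reference, in the access-port template quoted earlier that command would sit like this (a sketch; the interface number is just an example, and note it only suppresses flooding of *unknown* unicast out of that port, traffic to learned MACs is unaffected):

```
interface GigabitEthernet1/0/21
 switchport access vlan 10
 switchport mode access
 switchport block unicast
```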
Please check the MAC address table for two possible problems:
1. Is the MAC address of your server (if it's in the same LAN as the client) or your gateway present? The default action for a packet whose destination MAC address is not in the MAC address table is to flood it out all ports.
1.1. Check that there is no flood of MAC addresses filling the table. The result of that attack is the same.
2. Check whether there are broadcast/multicast addresses in your table. Maybe your server/gateway has a broadcast/multicast MAC address.
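The checks above map to commands roughly like these (VLAN 10 is from the config quoted earlier; on older IOS the command is spelled `show mac-address-table`):

```
! Is the server's (or gateway's) MAC present, and on the expected port?
show mac address-table vlan 10
! How full is the table? A near-full table can indicate a MAC-flooding attack.
show mac address-table count
```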
It's most probably unicast flooding. But I thought Cisco would send traps (or at least log a message) automatically when this happens? My SNMP trap settings on the switch work fine for other trap messages, but I don't see one for unicast flooding.
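The switch won't alert on plain unknown-unicast flooding by itself, but unicast storm control can send a trap when a threshold is crossed, which can serve as a flooding alarm. A sketch (the 5.00% level and trap-rate value are assumptions to be tuned to your traffic, not values from this thread):

```
interface GigabitEthernet1/0/21
 storm-control unicast level 5.00
 storm-control action trap
!
snmp-server enable traps storm-control trap-rate 10
```

With `action trap` (instead of the default shutdown behavior) the port keeps forwarding; you just get notified when unicast traffic exceeds the configured level.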