With this information alone, it is very hard to say what is wrong.
You need to run some basic tests, as I suggest here:
1. At the time of the drop, does the traffic on the physical port also drop? (The NMS can only collect information from the device and plot it.)
2. What do you see in the logs on the device at that time?
3. What is the load on the device when the traffic drops? Is the CPU OK?
4. Which device is connected to the interface you mentioned in the post?
I want to add some suggestions.
First, I would collect information such as:
1. What type of traffic passes through this switch?
1.1. For example, the pattern could be the natural behavior of your network, or something that comes from a routing or QoS configuration.
2. You have attached one interface; I recommend looking at the whole picture, including the other interfaces involved in this phenomenon.
In addition, if there is traffic from an external source to the switch, it may well be that the drop you see originates in other networking equipment. You might even see the same drop there.
To confirm this, the issue needs to be researched in more depth.
What device is this exactly? I only know of a BPX 8680, no Catalyst switch. Either way, the screenshot looks like MRTG or something similar; make sure it is not actually the graphing software itself that creates these drops...
This is just a guess, but it sounds like there is an application that has a burst every 4 hours that is causing the drops. That is where I would start to look.
Firstly, when you note "drops", you don't mean drops as in discarded packets but, from your stats, a temporary drop in overall transmission rate, correct?
There can be many reasons for this. Like @Elliot Dierksen, my first guess would be that it's due to some application with cyclic volume, i.e. an application that pauses its somewhat busy transmission rate for some short time period.
From your graph, I notice the overall drop in transmission rate is not exactly every four hours; the new day started cycling a bit later than the prior day and also seemed to have slightly longer periods.
As the other posters have noted, additional analysis of other interface stats and other device stats might provide additional clues. For instance, do you have a similar graph for the device on the other side of this trunk port? Does any other port, or ports, on the same device show a similar drop in ingress rate, etc.?
Do you have any stats, like Netflow, and/or packet captures, for when the traffic is at its "normal" rate and during the transitory reduction?
When I saw the drop in transmission rate, I also thought it could be an indicator of globally synchronized FIFO drops, but from your stats, I believe the reduction period is too long. Again, more stats from that interface, such as actual packet drops, if any, during the same time periods, would be helpful.
Adding to Leo's comments: the output drops are quite high (clear the counters and observe whether they keep increasing rapidly). Is it the same case on the other side's switch, ten5/13?
Are the main switches in VSS? Do you have a graph for all the devices? When you compare with the connected device on the south side, are the results the same? What about your uBR? Do you have any configuration that disconnects the DOCSIS clients (if they are online longer)?
Looks like you're process switching!
sh ip cef
sh cef interface x/x brief
sh ip cef switching drops
sh ip cache
Hello @dfce ,
it looks like you have never cleared the counters on interface ten5/13 (or the other interfaces).
On int ten5/13 of Switch 1 you have:
24590823623889 packets output
Total output drops: 718711960
this leads to an output drop probability of:
718711960 / (718711960 + 24590823623889) = 2.9226e-5, i.e. about 29 packets dropped per million packets output
on interface ten5/14 we see:
Total output drops: 692423108
25222535355016 packets output
692423108 / (692423108 + 25222535355016) = 2.7452e-5, i.e. about 27 packets dropped per million packets output
For Switch 2:
! again, the counters have never been cleared
Total output drops: 3703
13539330346760 packets output
no need to compute here; it is very low
Total output drops: 557070898
18579754563138 packets output
this leads to a drop probability of:
557070898 / (557070898 + 18579754563138) = 2.9982e-5, i.e. about 30 packets dropped per million packets output
this can be acceptable for user traffic.
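The per-interface arithmetic above can be checked with a short script (a sketch; the counter values are the ones quoted from the show interface outputs in the thread):

```python
# Output-drop rate from cumulative interface counters,
# computed as drops / (drops + packets output), scaled to per-million.
def drops_per_million(drops: int, packets_out: int) -> float:
    return drops / (drops + packets_out) * 1_000_000

# Counter values quoted in the thread (counters never cleared).
print(round(drops_per_million(718711960, 24590823623889)))  # ten5/13, Switch 1 -> 29
print(round(drops_per_million(692423108, 25222535355016)))  # ten5/14, Switch 1 -> 27
print(round(drops_per_million(557070898, 18579754563138)))  # Switch 2 busy port -> 30
```

Since the counters have never been cleared, these are lifetime averages; clearing the counters and re-sampling would show whether the drop rate is steady or bursty.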
What type of linecard is in slot 5 of your Catalyst 6880?
Can you post a show module?
A 16-port TenGig module can work at an oversubscription ratio or in dedicated mode (using fewer ports, however).
About your initial question of why you see those "holes" in traffic every 4 hours: I think they are not real, but related to how you plot the graphs.
What SNMP variables are you using to plot the graphs ?
The 64-bit byte counters contain "HC" in the name (e.g. ifHCInOctets).
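On the graphing point: if the poller reads a 32-bit octet counter (ifInOctets) instead of the 64-bit HC one (ifHCInOctets), the counter wraps at 2^32 (at 10 Gb/s, roughly every 3.4 seconds), and a poller that does not handle the wrap plots bogus dips or spikes. A minimal sketch of wrap-safe rate computation (function name is mine, for illustration):

```python
# Rate from two successive SNMP counter samples, handling counter wrap.
# A naive (curr - prev) delta goes hugely negative when the counter wraps,
# which shows up as a dip or spike on the graph.
def rate_bps(prev: int, curr: int, interval_s: float, width: int = 32) -> float:
    delta = (curr - prev) % (2 ** width)  # octets transferred, wrap-safe
    return delta * 8 / interval_s         # bits per second

# Example: a 32-bit counter wrapped between two 300 s polls.
prev, curr = 2**32 - 1000, 500
print(rate_bps(prev, curr, 300))   # wrap-safe: 1500 octets -> 40.0 bps
print((curr - prev) * 8 / 300)     # naive delta: large negative garbage
```

Note this single-wrap correction only works if the counter cannot wrap more than once per polling interval, which is exactly why the 64-bit HC counters are needed on 10 Gb links.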
In any case, what is the ARP timeout on your Cat6880? Is it 4 hours for the SVIs or routed interfaces too?
That is the only 4-hour timer we see in show commands.
Hope to help
Both links from Switch 1 to Switch 2 have large numbers of total output drops.
Only one link from Switch 2 to Switch 1 has a large number of total output drops.
Both links are "adjacent" to each other. And as @Giuseppe Larosa said, I do not believe oversubscription is enabled.