10-25-2018 01:14 AM - edited 03-08-2019 04:28 PM
I know there are lots of SNMP RESPONSE DELAYED bugs around, but as far as I could find, none of them affect ciscoFlashFileEntry. We have this problem on our 3850 stack. We have been running firmware 16.6.4 for six weeks, but the error only appeared six days ago:
Oct 18 22:06:33.105: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.35 (5979 msecs)
Oct 18 22:07:43.688: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.37 (6824 msecs)
Oct 19 22:06:28.302: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.35 (6573 msecs)
Oct 19 22:07:35.670: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.37 (6662 msecs)
Oct 20 22:06:28.142: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.35 (6697 msecs)
Oct 20 22:07:37.708: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.37 (6861 msecs)
Oct 21 22:06:25.983: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.35 (6206 msecs)
Oct 21 22:07:34.135: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.37 (6043 msecs)
Oct 22 22:06:28.325: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.35 (7123 msecs)
Oct 22 22:07:38.038: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.37 (7031 msecs)
Oct 23 22:06:28.435: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.35 (6586 msecs)
Oct 23 22:07:38.767: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.37 (6296 msecs)
Oct 24 22:06:29.117: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.35 (6561 msecs)
Oct 24 22:07:34.942: %SNMP-3-RESPONSE_DELAYED: processing GetNext of ciscoFlashFileEntry.2.2.1.37 (6640 msecs)
Strangely, these errors come at roughly the same time every day. Our Prime Infrastructure switch inventory job starts at 22:00 and takes around 19 minutes to complete, so I assume this event happens while the Prime user is logged on to the switch doing its inventory. We don't see any errors in our monitoring software during this time, so I assume its timeouts must be very high.
No configuration was changed on the switch in the last 30 days; I checked our logs. Does anyone else have experience with this issue?
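In case it helps others reproduce this: to see which OIDs are actually being requested around 22:00, the agent's recent-OID statistics can be checked directly on the switch. I am assuming 16.6.4 supports this command (newer IOS-XE releases do):

Switch#show snmp stats oid

The output should list a timestamp and request count per OID, which would confirm whether it is really the flash file table being walked at that time.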
10-25-2018 01:50 AM
Hello,
SNMP has low priority by default, so under high CPU utilization, for example, SNMP traffic can be delayed or dropped.
Try configuring:
Switch(config)#snmp-server ip precedence 7
Or, as an alternative, increase the snmpwalk timeout on whatever management system you have...
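For a manual test with net-snmp, the timeout (in seconds) and retry count are set with the -t and -r flags; the community string, switch address, and OID here are placeholders:

snmpwalk -v2c -c <community> -t 10 -r 2 <switch-ip> <oid>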
10-29-2018 02:30 AM
Dear Georg
The timeout on our monitoring software is not the issue; as I wrote, there is no error visible in our monitoring tool. The command you posted just marks these packets with the highest IP precedence, so they will be prioritized by other networking devices that have QoS configured. The delay between gathering the information and sending the SNMP response is still present. This isn't an issue per se, but it gets reported as an error (severity 3) to our syslog server, and we have to work these cases because they create a ticket in our environment. So I was wondering if any of the fixes for the other SNMP-3-RESPONSE_DELAYED bugs would apply here. We have been doing inventories with Prime for years, and the problem only popped up now.
If this post doesn't lead anywhere, we will have to remap the event from error (3) to informational (6).
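Alternatively, we could filter the message on the switch itself before it ever reaches the syslog server. A sketch using an IOS-XE logging discriminator; the discriminator name and server address are placeholders:

Switch(config)#logging discriminator NODELAY msg-body drops RESPONSE_DELAYED
Switch(config)#logging host <syslog-server-ip> discriminator NODELAY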
10-29-2018 03:15 AM
Hello,
So basically, you just want to get rid of the syslog messages on the devices? Do you get the same message when you manually do an snmpwalk for the OIDs shown in the syslog?
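Something like the following would show whether the agent itself is slow; it times a walk of the flash file table and assumes the CISCO-FLASH-MIB (and its imports) are installed locally, with community and address again as placeholders:

time snmpwalk -v2c -c <community> -m +CISCO-FLASH-MIB <switch-ip> ciscoFlashFileTable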
10-29-2018 05:37 AM
Hey Georg
When I do an SNMP walk, there is no error. I'd have to do an SNMP walk a few minutes after 22:00, while Prime is doing its inventories. I tried to run down the specified OIDs so that I can maybe change the interval of this SNMP request in our monitoring tool. This is what I've come up with so far:
.1.3.6.1.4.1.9.9.10.1.1.4.2.1.1.2.2.1.35
ccaAcclDSASignings - The number of times DSA signature has been generated by this module, counted since the last time this module assumed 'active' status.

.1.3.6.1.4.1.9.9.10.1.1.4.2.1.1.2.2.1.37
ccaAcclOutboundSSLRecords - The number of combined outbound hash/encrypt SSL records processed by this module, counted since the last time this module assumed 'active' status.
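To double-check these mappings, snmptranslate can resolve a numeric OID against the MIB definitions, assuming the Cisco MIB files are installed locally:

snmptranslate -m ALL -Td .1.3.6.1.4.1.9.9.10.1.1.4.2.1.1.2.2.1.35

Since the syslog message names the object ciscoFlashFileEntry, this should confirm which MIB the instance actually belongs to.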
We only monitor interface counters on 10 different interfaces plus the stack status of this device, so I am assuming that Prime is requesting the content of this OID.
10-29-2018 07:18 AM
OK, I just disabled all SNMP polling from our monitoring software and did a manual sync for this switch from within Prime, and the error occurred again. We will be lowering the error level of this message, as it seems to be just a warning or information that an SNMP response took longer than expected.
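For anyone remapping this on the collector side instead: if the syslog server is rsyslog, a legacy-style property filter could route the message to a separate file so it no longer raises a ticket; the target path is a placeholder:

:msg, contains, "SNMP-3-RESPONSE_DELAYED" /var/log/snmp-delayed.log
& stop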