4545 Views | 0 Helpful | 16 Replies

Cisco PI 2.1 snmp failure with Cat6506VSS

Stefan Sawluk
Level 1

Hello Community.

 

I have a problem with a Catalyst 6500 VSS. In Prime I constantly get an SNMP failure, but only for this device. SNMP is configured the same way as on all other devices.

 

Collection Status Failed feature(s):

Failures: Unable to collect the Flash and Image related information from the device.
Impact: Software image management.
Possible Cause: SNMP request timed out. Verify device credentials and SNMP response speed from the device.
 
 
When I run a report to check the device credentials, everything is fine and I get no failure.
 
 
On the Cat 6500, every time I start a discovery job or sync the device, I get an error in the log:
 
205823: Jul 11 22:02:44: %SNMP-SP-3-INPUT_QFULL_ERR: Packet dropped due to input queue full
205824: Jul 11 22:02:44: %SNMP-SP-3-INPUT_QFULL_ERR: Packet dropped due to input queue full
205825: Jul 11 22:02:44: %SNMP-SP-3-INPUT_QFULL_ERR: Packet dropped due to input queue full
205826: Jul 11 22:02:44: %SNMP-SP-3-INPUT_QFULL_ERR: Packet dropped due to input queue full
205827: Jul 11 22:02:44: %SNMP-SP-3-INPUT_QFULL_ERR: Packet dropped due to input queue full
205828: Jul 11 22:02:44: %SNMP-SP-3-INPUT_QFULL_ERR: Packet dropped due to input queue full
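As an aside, %SNMP-SP-3-INPUT_QFULL_ERR means the switch processor's SNMP input queue overflowed. If the polling load itself is legitimate, one possible mitigation is raising the queue depth from its IOS default of 10 packets; the value below is only an example, not a recommendation for this platform:

! Sketch: raise the SNMP input queue depth (50 is an example value)
snmp-server queue-length 50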
 
Does anybody know this?
 
Regards
 
Stefan
16 Replies

AFROJ AHMAD
Cisco Employee

Hi Stefan,

Try a warm restart of the SNMP server by entering the following commands on the device:

no snmp-server
snmp-server community <community-string> RO

This will warm restart your SNMP server.

 

Also, check whether the device is experiencing high CPU.

 

Thanks-

Afroz


Hi Afroz.

 

I have tried warm restarting the SNMP server, but it does not help. I also see no high CPU when I discover the device.

 

Any other idea?

 

Stefan

 

Hi Stefan,

It looks like SNMP polling on this device is very frequent.

If you are using other NMS servers, try to increase the polling interval on them.

 

Also try increasing the SNMP timeout and retries in PI:

Administration > System Settings > SNMP Settings

Reachability Retries = 3, Reachability Timeout = 10-14 seconds

 

Thanks-

Afroz


Hi Afroz.

Thank you for your reply. I have now opened a TAC case. When I have a solution, I will let you know.

 

Regards Stefan

 

Hi Stefan,

Have you received any feedback from TAC?

I have the same problem with the exact same infrastructure.

best, wim

Hello guys,

 

the problem is solved, but I don't have an explanation. We deleted and re-added the device a few times and then it worked.

 

Regards Stefan

I found that on 6880s running VSS, the SNMP engineID ends up being the same on different devices. That number needs to be unique across devices, or it causes the connectivity errors you are describing. You can check whether this is your issue by running "show snmp engineID". If you get something like this, you have the issue:

Device1# show snmp engineID
Local SNMP engineID: 800000090300000000000001
Remote Engine ID          IP-addr    Port

Device2# show snmp engineID
Local SNMP engineID: 800000090300000000000001
Remote Engine ID          IP-addr    Port

My workaround was to specify the SNMP engineID manually. I just picked the MAC address of one of the interfaces and doubled it, then repeated this across all other distribution switches to avoid having the issue pop up again.
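A sketch of that workaround (the MAC address below is a placeholder, not from one of my switches):

! Set a unique local engineID by doubling an interface MAC address
! (aabbcc000100 is a placeholder MAC; pick one from your own switch)
snmp-server engineID local aabbcc000100aabbcc000100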

Hi

 

I've been having this exact same problem with our 6509 VSSs, and I wondered if you could clarify your workaround, please.

Did you basically just use the command 'snmp-server engineID local 10bd18e4018010bd18e40180'? (I chose the SVI MAC for VLAN 2, which was 10bd18e40180.)

Have you had this issue on any other switches? I also seem to be getting it frequently on 3750 stacks.

Hi BlueyVIII,

Before doing anything, confirm that the SNMP engineID is in fact identical on more than one switch, using the commands shown above. The bug where this behavior is documented (CSCuj55749) shows it only affecting the Supervisor Engine 2T in a VSS configuration running the IOS 15.1SY train.

However, it's not impossible that the same issue exists on other combinations. Just make sure that you're applying the fix to the right problem.

The command is: snmp-server engineID local <engineid-string>

The engine ID can be any hexadecimal string up to 24 characters long.

 

Sigurbjartur

Thanks Sigurbjartur,

That's really helpful, as we're running SUP2Ts with VSS and an affected version of IOS, so I'll arrange to get these upgraded.

As a workaround for the immediate future, I removed the VSS from PI 2.2 and then used the IOS command mentioned above to change the local SNMP engineID. However, now when I try to add the VSS back in, Cisco Prime won't verify the SNMP credentials (the CLI credentials verify OK).

Are there any other commands I need to do on the VSS as a result of the SNMP EngineID changing?

Yes!

If you're using SNMPv3.

When you create an SNMPv3 user, the hashes for the authentication and encryption passwords are generated using, among other things, the SNMP engineID. Therefore you must recreate the SNMPv3 users.
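A minimal sketch of that recreation (the group name, user name, and passwords are placeholders, not values from this thread):

! Remove and recreate the SNMPv3 user so its localized keys
! are derived from the new engineID (all names/passwords are examples)
no snmp-server user nmsuser NMSGROUP v3
snmp-server user nmsuser NMSGROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123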

Thanks for the quick reply, but we're only using v2.

However, since I made the engineID change, I've noticed the two lines below now appear in the config:

snmp mib community-map public engineid 800000090300000000000000

snmp mib community-map write engineid 800000090300000000000000

The numbers at the end of these commands correlate with the previous engineID (before I made the change).
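If those stale mappings are what is blocking credential verification, one thing to try (a guess, not a confirmed fix; test on a lab device first) is removing them so the communities re-bind to the new engineID:

! Sketch: remove the stale community-to-engineID mappings
! (the engineID value is the old one quoted above)
no snmp mib community-map public engineid 800000090300000000000000
no snmp mib community-map write engineid 800000090300000000000000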