Catalyst 4500 Issue

tkatsiaounis
Level 1

Hello all.

I have a Catalyst 4507R-E switch with a Supervisor Engine V, two 48-port Gigabit RJ45 modules, and one 48-port Gigabit SFP module.

The switch was configured almost six months ago, and no changes have been made to the config for a month now.

Suddenly this afternoon I had the following problem: some users complained about connectivity, and when I investigated I found that the switch was "disappearing".

I started a continuous ping to the switch SVI, and although the pings started with good times (<1 ms), the response times increased as packets flowed (1 ms, then 5 ms, then 20 ms, and so on) until the switch stopped responding. After two to five minutes the switch "reappeared", only to disappear again after another five minutes. That went on even though I restarted the switch a couple of times and made sure it passed all diagnostics with no error messages.

I then brought up another switch (same model, same interfaces) that had been bought as a failover pair but had been powered off until now. With the new switch everything seemed to work fine. Also, when I divided the interfaces between the two switches, both seemed to work perfectly well. The utilization lights on the first switch never exceeded 1% at any time, so I am wondering what might be going on.

Should I do an RMA and replace the supervisor engine? Is it some other error? Maybe a software upgrade?

Both switches are running GLBP, VTP, and spanning tree, none of which showed errors except GLBP, which was continuously changing states between the two switches.

Thanks a lot for any help.

7 Replies

garhall
Level 1

Hi,

Ideally we would want to get some output while the issue is occurring. Is the device reachable via telnet/SSH while the pings are failing?

If access to the CLI over telnet/SSH/console is possible during the issue, at a minimum you could check:

#sh interface

#sh logg

#sh interfaces trunk

#sh spann de ac | i exe|Number of top|from

#sh proc cpu | e 0.00

In addition, what IOS version is this device running?

You could replace the Sup; the problem is that we don't really know the root cause, so this would not guarantee there won't be further issues.

If the issue happens again, get the above output as well as 'sh tech' prior to reloading.

HTH,

/gary

The problem is that the issue happens on all SVIs on the switch. Access to the switch at the time is only possible through the console; SSH/telnet sessions are dropped.

Right now the switches work fine, but only the user VLANs/connections are on the problem switch; the server connections are on the other 4507R-E. I hope that when the users return tomorrow morning the switch won't "disappear" again.

I ran some of the commands and the output is in the attached txt file.

Mohamed Sobair
Level 7

Hi,

You can only figure out the cause of this problem when it happens. It could be a broadcast storm; however, you mentioned it happened on all VLANs, which made me think, since a user VLAN should not affect the management or server VLANs. Try to limit broadcast storms using the storm-control command at the interface level.
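
For illustration only, a minimal storm-control sketch on an access port (the interface, the 1% threshold, and the trap action are placeholder choices; pick a level that fits your traffic, and the available action keywords depend on your platform and IOS version):

interface GigabitEthernet2/1
 storm-control broadcast level 1
 storm-control action trap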

Could you please monitor it closely and execute the following through a console connection:

1- show processes cpu | exc 0.00%

2- show logging

Note:

The supervisor engine is the switching processor; if it were defective, the problem would be there constantly, not intermittently. You don't need to replace the supervisor engine until you know what the issue actually is.

Hi,

Mohamed is correct; especially now that the device has been reloaded, it is not possible to find the root cause. However, as said, I'd prepare a console connection so you can quickly gather the output mentioned already.


Since it was affecting all SVIs, most likely there were high CPU and input queue drops on the switch; the commands mentioned will give a starting point. If possible, opening a high-severity TAC case during the issue would also be helpful so we can take a look over a Webex session.
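
As a rough starting point while it is happening (a sketch only; "vlan 10" is a placeholder SVI, and the platform commands assume a Catalyst 4500 IOS image that supports them):

show interfaces vlan 10 | include Input queue

show platform health

show platform cpu packet statistics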

Hopefully the output isn't needed (i.e. it doesn't happen again); if it does, please gather it from the console. Thanks.

/gary

Leo Laohoo
Hall of Fame

Sounds like a duplicate IP address.

Sounds like you've got a bridging loop somewhere. Could be looped ports somewhere.
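
If it were a duplicate IP address you would normally see %IP-4-DUPADDR messages, so a quick check (assuming logging to the buffer is enabled) would be:

show logging | include DUPADDR

For a bridging loop, the spanning-tree topology-change command Gary listed earlier is the place to start.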

Hi all.

Finally the problem was solved. As I found out, a colleague of mine had thought it was a good idea to add the "log" keyword to the final explicit deny any any statement of each and every access list.

After some research on Cisco's site about CPU load, I found that this could be the problem, since logged ACL matches are processed by the CPU. I removed the "log" keyword and the CPU load went back to normal.
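
For anyone hitting the same thing, the change was roughly the following (USER-VLAN-IN is a hypothetical ACL name, not the actual config). ACEs with the log keyword force matching packets to be handled by the CPU for logging, so dropping the keyword keeps the deny in hardware:

ip access-list extended USER-VLAN-IN
 no deny ip any any log
 deny ip any any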

Thanks anyway to everyone for your quick and VERY helpful replies.
