Catalyst 2960S problem

allen_huang
Level 1

Hi,

I have 4 Catalyst 2960S switches, and they all have the same problem.

The CPU usage is about 60% before I telnet to them.

It falls to about 20% when I log in via telnet, and rises back above 60% after I log out.

2010-06-18_095534.png

This is the output of 'show proc cpu' when I log in.

2010-06-18_100251.png

This is the output of 'show proc cpu' 2 mins later.

2010-06-18_100804.png

And the traffic chart looks like this.

2010-06-18_102131.png

How can I solve this problem?

The hardware and software versions are:

WS-C2960S-48TS-L   12.2(53)SE1           C2960S-UNIVERSALK9-M
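For anyone tracking this across several switches, the summary line that 'show proc cpu' prints can be scraped with a short script. A minimal sketch in Python, assuming the standard IOS output format; the sample values are illustrative, not taken from the screenshots above:

```python
import re

# Matches the first line of `show processes cpu` output, e.g.
# "CPU utilization for five seconds: 60%/5%; one minute: 58%; five minutes: 57%"
CPU_LINE = re.compile(
    r"CPU utilization for five seconds: (\d+)%/(\d+)%; "
    r"one minute: (\d+)%; five minutes: (\d+)%"
)

def parse_cpu_line(line):
    """Return (total, interrupt, one_min, five_min) percentages, or None."""
    m = CPU_LINE.search(line)
    if not m:
        return None
    return tuple(int(x) for x in m.groups())

sample = "CPU utilization for five seconds: 60%/5%; one minute: 58%; five minutes: 57%"
print(parse_cpu_line(sample))  # (60, 5, 58, 57)
```

Polling this periodically (via telnet/SSH scraping or SNMP) makes it easy to see whether the drop on login and the rise after logout line up with a monitoring graph.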

Many thanks for any comments.

49 Replies

The model revision numbers seem to play a big part in bug CSCth24278, based on my results below:

Switch WS-C2960S-48TS-L
• Model revision number: A0
• Motherboard revision number: A0
• IOS image: c2960s-universalk9-mz.122-55.SE1.bin
• Result: 30% CPU << good compared to the switches below

Switch WS-C2960S-48TS-L
• Model revision number: B0
• Motherboard revision number: A0
• IOS image: c2960s-universalk9-mz.122-55.SE1.bin
• Result: 60-70% CPU

Switch WS-C2960S-48TS-L
• Model revision number: B0
• Motherboard revision number: B0
• IOS image: c2960-lanbasek9-mz.122-25.SEE2.bin
• Result: a consistent 5% CPU

Although older, the c2960-lanbasek9-mz.122-25.SEE2.bin image seems to be a workaround for the CPU usage (going backwards, I know) across the range of model revision numbers (B0, C0, D0, etc.).

Can someone confirm the ETA for IOS release 12.2(58)?

Thank you

leam_hall
Level 1

We are having a similar issue on a 3750 stack. Oddly, after looking at the queues and turning off console logging, things seem to have quieted down. This is not a "cosmetic bug", as far as I can tell; we were having throughput and backup failure errors during the elevated CPU times.

The commands to change the logging were:

  • no logging console
  • logging buffered 128000

These came from a Cisco doc "Troubleshooting High CPU Utilization".

Leam

atrin.saghebfar
Level 1

Dear Allen,

Did you find a way to solve this problem?

I have this problem with 15 2960S-TS switches too.

IOS version: c2960s-universalk9-tar.122-55.SE2

Thanks,

Atrin

Eric Olinger
Level 1

I am troubleshooting very similar issues with a client who has several stacks of 2960S deployed. Today I checked and the 12.2(58)SE release was available. We are loading the first stack tonight and will see if it resolves the issues for us. I would agree it is reported as cosmetic, but users are reporting phones rebooting due to lack of network connectivity and various PC issues. I'll post as soon as I can confirm with the users.

Loaded 12.2(58)SE into two stacks of 2960S and so-far-so-good.

Excellent, please keep us updated on any problems.  We'll be going to 58 soon.

Thanks to whoever gave me the ratings.

If anyone wants to upgrade their 3560E/3560X or 3750E/3750X then HOLD IT.

I've tried just "pumping" the IOS (from 12.2(55)SE) to a 3750E stack and I nearly got a stroke.  The management connection to the switch STOPPED.  But the switch was still continuing to do its job.  No link failure.  No packet drops.  Nothing.  So I'm going to do a few more tests to find out what happened.

So far so good with our site. Users feel that there is better performance. The first stack went fine. I appreciate the post about CSCto62631. We are going to upgrade the remainder of the campuses soon.

      333343333333333333333433333333333333433333333333333333333333333333333333
      753616444345534555364555569753442456044365656654364476543444554433475584
  100                                                                      
   90                                                                      
   80                                                                      
   70                                                                      
   60                                                                      
   50                      *                                               
   40 ** ***     **  *** * ********     ***   *******  *  ***     **     ***
   30 ######################################################################
   20 ######################################################################
   10 ######################################################################
     0....5....1....1....2....2....3....3....4....4....5....5....6....6....7..
               0    5    0    5    0    5    0    5    0    5    0    5    0 
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%
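The two digit rows at the top of that chart encode the per-hour values vertically: the first row holds the tens digit of each column, the second row the ones digit. A small sketch in case anyone wants to turn the chart into numbers; `parse_cpu_history_values` is just an illustrative helper, not an IOS command:

```python
def parse_cpu_history_values(tens_row, ones_row):
    """Combine the two digit rows printed above a `show processes cpu history`
    chart into a list of per-column CPU percentages.

    Each column's value is printed vertically: the tens digit in the
    first row and the ones digit directly below it in the second row.
    """
    return [int(t + o) for t, o in zip(tens_row.strip(), ones_row.strip())]

tens = "333343"  # first six characters of the top digit row above
ones = "753616"  # first six characters of the second digit row above
print(parse_cpu_history_values(tens, ones))  # [37, 35, 33, 36, 41, 36]
```

Applied to the full rows above, the average CPU hovers in the mid-30s to mid-40s for all 72 hours, which matches the `#` band sitting at 30%+.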

atrin.saghebfar
Level 1

Dear All,

Yesterday I upgraded the IOS of my 2960S switches to c2960s-universalk9-tar.122-58.SE.

My problem with high CPU usage is solved,

but I'm still testing it.

There is only one thing: today I read the release notes for this IOS, and Cisco says:

CSCth24278 (Catalyst 2960-S switches)
The CPU utilization on the switch remains high (50 to 60 percent) when the switch is not being
accessed by a telnet or a console session. When you telnet or console into the switch, the CPU
utilization goes down.
There is no workaround.

but in my test it was OK, with no high CPU usage.

Hello,

It seems that Cisco has pulled this software, 12.2(58)SE, from their site because the following very serious bug is present: CSCto62631.

I found a document which states:

Cisco IOS Release 12.2(58)SE images for all platforms have been removed from Cisco.com because of
a severe defect, CSCto62631. The solution for the defect will be in Cisco IOS Release 12.2(58)SE1, to
be available the week of May 9, 2011.

Meanwhile, all of you who have upgraded to this particular version can either test for this bug in your environment, downgrade, or implement a workaround.

regards,

Robert

Leo Laohoo
Hall of Fame

Right.  12.2(58)SE1 has been released, as scheduled.

s.steenkamp
Level 1

Hi there,

I don't fully agree with Cisco that this is just a cosmetic bug.

I've been getting STP trap alerts that just do not make any sense. After my troubleshooting and investigation I have come to the following conclusion.

Stack details:

Switch Ports Model              SW Version            SW Image
------ ----- -----              ----------            ----------
*    1 52    WS-C2960S-48LPS-L  12.2(53)SE2           C2960S-UNIVERSALK9-M
     2 52    WS-C2960S-48TS-L   12.2(53)SE2           C2960S-UNIVERSALK9-M
     3 52    WS-C2960S-48TS-L   12.2(53)SE2           C2960S-UNIVERSALK9-M
     4 52    WS-C2960S-48TS-L   12.2(53)SE2           C2960S-UNIVERSALK9-M

My stack has 2 fibre links back to the two core switches for triangulation.

The fibre links terminate in separate stack members.

STP is running.

Have been getting the following message in the log.

002484: May 10 22:07:33.548: %XDR-6-XDRIPCNOTIFY: Message not sent to slot 4 because of IPC error timeout. Disabling linecard. (Expected during linecard OIR)

Some info on IPC and XDR.

PLATFORM_IPC Messages

This section contains the Inter-Process Communication (IPC) protocol messages. The IPC protocol handles communication between the stack master switch and stack member switches.

XDR Messages

This section contains eXternal Data Representation (XDR) messages.

When I compare the time stamp of the 'sh log' message above to my monitoring, I notice the following:

80% high CPU utilization.

Now here's what I think is happening:

At times when the CPU spikes, it causes an IPC issue, which then prevents the BPDUs from being sent between the stack members. STP thinks the link is down and fails over to the backup root. When the stack returns to normal, STP changes back.

What do you guys think...?
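That theory fits the standard STP timers: a bridge's stored root information only survives as long as a BPDU refreshes it within max-age (20 seconds by default in 802.1D). A minimal sketch of the timing argument; the function name and timestamps are illustrative assumptions, not data from the switches above:

```python
# Default 802.1D max-age: how long stored root-bridge information is
# kept without being refreshed by a new BPDU.
DEFAULT_MAX_AGE = 20.0  # seconds

def root_info_expired(last_bpdu_time, now, max_age=DEFAULT_MAX_AGE):
    """True if no root BPDU has been refreshed within max-age,
    i.e. the condition under which STP would reconverge toward a
    backup root."""
    return (now - last_bpdu_time) > max_age

# BPDUs flowing normally (hello every 2 s): no failover.
print(root_info_expired(last_bpdu_time=100.0, now=102.0))  # False
# A 25-second stall (e.g. during an IPC timeout between stack members):
# the root information expires and a failover is triggered.
print(root_info_expired(last_bpdu_time=100.0, now=125.0))  # True
```

So if the high-CPU/IPC condition stalls BPDU delivery between stack members for longer than max-age, the described failover-and-recover pattern is exactly what you would expect.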

Leo Laohoo
Hall of Fame

Release Notes for the Catalyst 3750, 3560, 2960-S, and 2960 Switches, Cisco IOS Release 12.2(58)SE1

Cisco IOS Release 12.2(58)SE1 and later does NOT support all the Catalyst 3750 and 3560 switches. The models listed below are NOT supported in this release. For ongoing maintenance rebuilds for these switches, use Cisco IOS Release 12.2(55)SE and later (SE1, SE2, and so on).

• WS-C3560-24TS
• WS-C3560-24PS
• WS-C3560-48PS
• WS-C3560-48TS
• WS-C3750-24PS
• WS-C3750-24TS
• WS-C3750-48PS
• WS-C3750-48TS
• WS-3750G-24T
• WS-C3750G-12
• WS-C3750G-24TS
• WS-C3750G-16TD

Just to inform you guys, we have this problem on 3 2960S switches too (we don't have more yet).

They all had the c2960s-universalk9-mz.122-53.SE2 image.

I've updated one to the c2960s-universalk9-tar.122-58.SE1 image and the problem is solved. That one is just a single access switch. The other two are stacked but show exactly the same behaviour. I will update them when it is possible.

We have never experienced any packet drops or loss or anything else, so it seems to be just a cosmetic bug. But the stories above got me concerned, so I decided to upgrade the switches.
