SCE 8000 with High CPU running SCOS 3.6.5

mtenorio
Level 1

Hi Guys,

We have 1 SCE 8000 out of 20 that has suddenly started showing high CPU.

SCE_VINA02#>show processes cpu sorted

CPU utilization for five seconds:  99%/  0%; one minute:  99%; five minutes:  99%

PID   Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process

6572  1743157450   2912773          0 41.79% 48.95% 42.00%   0 (scos-main)    

6592   377205520   2822045          0 25.47% 22.96% 26.37%   0 (scos-main)    

3773 -1131308800   2912909          0 22.09% 22.05% 22.19%   0 RucMain        

20676   249912990   1895163          0 17.51% 15.86% 18.65%   0 (scos-main)    

4058  1241380640   2839209          0  8.76%  9.06%  9.00%   0 PartyDBManager 

4038    58347790   1778224          0  3.18%  3.40%  3.63%   0 controlRpcExtServer

4042   199502330   2718111          0  1.99%  2.03%  2.06%   0 tFormatter     

6594    80405570   2558527          0  1.99%  2.59%  2.80%   0 (scos-main)    

20677    48485580   1405062          0  1.99%  2.28%  2.48%   0 (scos-main)    

4595   309332240   2912876          0  1.39%  2.13%  2.12%   0 HwChk          

4040   112684050   2736243          0  1.00%  1.12%  1.14%   0 qReaderExt     

3639   264215460   2848244          0  1.00%  0.94%  0.91%   0 (scos-main)    

4041    69509430   2868791          0  0.80%  0.74%  0.75%   0 qReaderInt     

6573   235121090   2801654          0  0.80% 16.28%  5.13%   0 (scos-main)    

6574    24244400    624919          0  0.60%  0.96%  3.70%   0 (scos-main)    

3771    53398760   2912883          0  0.40%  0.38%  0.39%   0 CpuUtilMonitor 

4037    32594910   2537807          0  0.40%  0.36%  0.38%   0 tLogger        

3712    16686250   1627766          0  0.20%  0.11%  0.11%   0 fanControlTask 

3778    46339600   2912908          0  0.20%  0.27%  0.28%   0 MsgQuMess-rcv  

4028     8361600    734410          0  0.20%  0.07%  0.06%   0 tCCWatchdog    

4059     5358870    353706          0  0.20%  0.04%  0.02%   0 SNMP agentx Task

2971     2518240    166310          0  0.20%  0.05%  0.01%   0 (nfsd)         

The CPU is STUCK at 99% even when traffic is minimal, and we can see that the scos-main process is hogging the CPU.

When I run show interface LineCard 0 counters cpu-utilization, it shows a different value than the first command, one that is more reasonable given the amount of traffic flowing through the box at this moment.

SCE_VINA02#>show interface LineCard 0 counters cpu-utilization

Traffic Processor CPU Utilization:

==================================

TP 1 CPU Utilization: 17%

TP 2 CPU Utilization: 16%

TP 3 CPU Utilization: 23%

TP 4 CPU Utilization: 17%

TP 5 CPU Utilization: 17%

TP 6 CPU Utilization: 15%

TP 7 CPU Utilization: 18%

TP 8 CPU Utilization: 16%

TP 9 CPU Utilization: 16%

TP 10 CPU Utilization: 15%

TP 11 CPU Utilization: 18%

TP 12 CPU Utilization: 18%

SCE_VINA02#>   

When I log in to Linux on the box, top shows yet another value, different from what the first two commands reported:

[root@SCE_VINA02 ~]#>top

top - 14:26:47 up 169 days,  7:46,  1 user,  load average: 3.86, 3.68, 3.31

Tasks:  64 total,   1 running,  63 sleeping,   0 stopped,   0 zombie

Cpu(s): 59.5%us,  2.2%sy,  0.0%ni, 36.4%id,  1.3%wa,  0.2%hi,  0.5%si,  0.0%st

Mem:   2028668k total,  1914156k used,   114512k free,    47392k buffers

Swap:        0k total,        0k used,        0k free,   320416k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                               

3639 root      20   0 1353m 1.3g  44m S  134 65.9 355517:28 scos-main                                                             

   23 root      15  -5     0    0    0 S    0  0.0  24:59.23 scos-dump                                                             

2973 root      20   0     0    0    0 S    0  0.0  39:56.53 nfsd                                                                  

3455 root      20   0 16188 2620 1756 S    0  0.1 122:21.52 scos-sys-cmd-se                                                       

26549 root      20   0  2476 1116  896 R    0  0.1   0:00.05 top                                                                   

    1 root      20   0  1776  616  540 S    0  0.0   3:17.32 init                                                                  

    2 root      15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd                                                              

    3 root      RT  -5     0    0    0 S    0  0.0   0:58.98 migration/0                                                           

    4 root      15  -5     0    0    0 S    0  0.0   0:42.82 ksoftirqd/0                                                           

    5 root      RT  -5     0    0    0 S    0  0.0   2:25.35 watchdog/0                                                            

    6 root      RT  -5     0    0    0 S    0  0.0   0:59.95 migration/1                                                           

    7 root      15  -5     0    0    0 S    0  0.0   0:02.60 ksoftirqd/1                                                           

    8 root      RT  -5     0    0    0 S    0  0.0   0:18.66 watchdog/1                                                            

    9 root      15  -5     0    0    0 S    0  0.0 161:33.60 events/0                                                              

   10 root      15  -5     0    0    0 S    0  0.0  46:47.38 events/1                                                              

   11 root      15  -5     0    0    0 S    0  0.0   0:00.00 khelper                                                               

   12 root      15  -5     0    0    0 S    0  0.0   9:55.55 kblockd/0                                                             

   13 root      15  -5     0    0    0 S    0  0.0   3:26.99 kblockd/1                                                             

   16 root      15  -5     0    0    0 S    0  0.0   0:03.08 kswapd0                                                               

   17 root      15  -5     0    0    0 S    0  0.0   0:00.00 aio/0                                                                 

   18 root      15  -5     0    0    0 S    0  0.0   0:00.00 aio/1                                                                 

   19 root      15  -5     0    0    0 S    0  0.0   0:00.00 nfsiod                                                                

   20 root      15  -5     0    0    0 S    0  0.0   0:00.00 mtdblockd                                                             

   21 root      15  -5     0    0    0 S    0  0.0  50:36.28 skynet                                                                

   22 root     -86  -5     0    0    0 S    0  0.0 280:11.75 hw-mon-regs                                                           

   24 root     -91  -5     0    0    0 S    0  0.0  22:36.48 wdog-kernel                                                           

   25 root      15  -5     0    0    0 S    0  0.0   0:00.00 rpciod/0                                                              

   26 root      15  -5     0    0    0 S    0  0.0   0:00.00 rpciod/1                                                              

   80 root      15  -5     0    0    0 S    0  0.0   0:00.17 kjournald                                                             

   84 root      15  -5     0    0    0 D    0  0.0  17:05.31 kjournald                                                             

   92 root      15  -5     0    0    0 S    0  0.0   1:36.60 kjournald                                                             

  246 root      15  -5     0    0    0 S    0  0.0  36:09.67 bond0                                                                 

  952 root      20   0     0    0    0 S    0  0.0   0:00.01 pdflush                                                               

1032 root      20   0     0    0    0 S    0  0.0   3:12.76 pdflush                                                               

1430 bin       20   0  2024  500  388 S    0  0.0   0:00.01 portmap                                                               

1459 root      20   0  1884  644  536 S    0  0.0   0:22.97 syslogd                                                               

1467 root      20   0  1772  408  336 S    0  0.0   0:00.04 klogd                                                                 

1469 root      20   0  1764  512  448 S    0  0.0   0:00.00 getty                                                                 

1710 root      20   0  3308 1048  488 S    0  0.1  73:02.73 scos                                                                  

1711 root      20   0  2476  568  484 S    0  0.0   0:00.19 tee                                                                   

1733 root      20   0  3308  980  420 S    0  0.0   0:00.00 scos                                                                  

1735 root      20   0  2948 1280 1080 S    0  0.1   0:00.00 sh                                                                    

[root@SCE_VINA02 ~]#>

We have a TAC case open, but it has been more than two weeks and the engineer hasn't been able to find the source of the issue; she even has a ticket open with the BU.

Please reply if anybody out there is facing a similar issue.
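For anyone comparing the same three readings on their own box, here is a rough Python sketch that pulls the control-processor figure and the per-TP average out of the two CLI outputs pasted above. Only the output formats are taken from this thread; the function names and the comparison logic are made up for illustration.

import re

def control_cpu(show_proc_cpu):
    # One-minute control-processor figure from 'show processes cpu'
    m = re.search(r"one minute:\s*(\d+)%", show_proc_cpu)
    return int(m.group(1)) if m else -1

def traffic_cpu_avg(show_tp_util):
    # Average of the per-TP figures from
    # 'show interface LineCard 0 counters cpu-utilization'
    vals = [int(v) for v in re.findall(r"TP \d+ CPU Utilization:\s*(\d+)%", show_tp_util)]
    return sum(vals) / len(vals) if vals else -1.0

cp = control_cpu("CPU utilization for five seconds:  99%/  0%; one minute:  99%; five minutes:  99%")
tp = traffic_cpu_avg("TP 1 CPU Utilization: 17%\nTP 2 CPU Utilization: 16%")
# A large gap (CP pinned at 99% while the TPs idle around 17%) points at the
# control processor rather than the data path -- exactly the symptom here.
print("control=%d%% traffic_avg=%.1f%%" % (cp, tp))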

13 Replies

usujlana
Cisco Employee

Hi,

What is the output of the following :

show interface linecard 0 subscriber amount anonymous

show snmp MIB cisco-service-control-tp-stats | include cscTpTotalHandledFlows

Thanks,

~Upinder
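If you want to collect these counters on a schedule instead of by hand, something like the sketch below might work from a management host. It assumes Net-SNMP's snmpwalk is installed along with the CISCO-SERVICE-CONTROL-TP-STATS-MIB; the host address, community string, and the 60-second delta are placeholders, not values from this thread.

import re, subprocess, time

HOST, COMMUNITY = "10.0.0.1", "public"   # placeholders
OBJ = "CISCO-SERVICE-CONTROL-TP-STATS-MIB::cscTpTotalHandledFlows"

def handled_flows():
    out = subprocess.run(["snmpwalk", "-v2c", "-c", COMMUNITY, HOST, OBJ],
                         capture_output=True, text=True, check=True).stdout
    # matches both 'cscTpTotalHandledFlows.37 = 2975600609' and the
    # typed form '... = Counter32: 2975600609'
    return {int(i): int(v) for i, v in
            re.findall(r"cscTpTotalHandledFlows\.(\d+) = (?:\w+: )?(\d+)", out)}

a = handled_flows(); time.sleep(60); b = handled_flows()
for idx in sorted(a):
    # 32-bit counters near 2^32 can wrap between samples; wrap handling
    # is omitted to keep the sketch short
    print("TP index %d: ~%.0f new flows/s" % (idx, (b[idx] - a[idx]) / 60))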

Hi,

Here are the outputs:

SCE_VINA02#>show interface linecard 0 subscriber amount anonymous

There are 4050 anonymous subscribers.

SCE_VINA02#>  

SCE_VINA02#>

cscTpTotalHandledFlows.37 = 2975600609

cscTpTotalHandledFlows.38 = 2935843990

cscTpTotalHandledFlows.39 = 2984980667

cscTpTotalHandledFlows.40 = 3020322217

cscTpTotalHandledFlows.41 = 2906945288

cscTpTotalHandledFlows.42 = 3181599627

cscTpTotalHandledFlows.43 = 2946590523

cscTpTotalHandledFlows.44 = 2893724253

cscTpTotalHandledFlows.45 = 3099297004

cscTpTotalHandledFlows.46 = 2811601346

cscTpTotalHandledFlows.47 = 2844216285

cscTpTotalHandledFlows.48 = 3057487524

SCE_VINA02#>

SCE_VINA02#>show interface LineCard 0 subscriber db ?

  counters  Show subscriber data-base counters

SCE_VINA02#>show interface LineCard 0 subscriber db counters

Current values:

===============

Subscribers: 26913 used out of 999999 max.

Introduced/Pulled subscribers: 22825.

Anonymous subscribers: 4088.

Subscribers with mappings: 26913 used out of 249999 max.

Single non-VPN IP mappings: 30881.

Non-VPN IP Range mappings: 0.

IP Range over VPN mappings: 0.

Single IP over VPN mappings: 0.

VLAN based VPNs with subscribers: 0 used out of 4095.

Subscribers with open sessions: 20337.

Subscribers with TIR mappings: 0.

Sessions mapped to the default subscriber: 0.

Peak values:

============

Peak number of subscribers with mappings: 40965

Peak number occurred at: 22:36:44  CHI  TUE  October  25  2011

Peak number cleared at: 06:51:39  CHI  FRI  October  7  2011

Event counters:

===============

Subscriber introduced: 0.

Subscriber pulled: 132806395.

Subscriber aged: 21314694.

Pull-request notifications sent: 199815174.

Pull-request by ID notifications sent: 0.

Subscriber pulled by ID: 0.

State notifications sent: 0.

Logout notifications sent: 19396028.

Subscriber mapping TIR contradictions: 0.

SCE_VINA02#>                                               

SCE_VINA02#>

Seems like a false positive to me. Could you also post the output of the following for traffic processors 40, 42, 45, and 48:

debug slot 0 ppc func RUC_PrintTpCpuUtilization

show snmp MIB cisco-process | include cpmCPUTotal1minRev

show snmp MIB cisco-process | include cpmCPUTotal5minRev

show snmp MIB cisco-service-control-tp-stats | include cscTpFlowsCapacityUtilization

Also, is this persistent, or does the 99% ever change to a lower value? Did a reboot happen recently, or have more subscribers been migrated to this SCE 8K? Is this setup in MGSCP or cascade?

Thanks,

~Upinder Sujlana
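To answer the persistence question without sitting on the console, one option is to poll the summary line over SSH and log it, for example with paramiko. Whether the SCE's SSH front end accepts one-shot exec commands like this is an assumption on my part (an interactive shell may be needed instead), and the host, credentials, and interval are placeholders.

import re, time
import paramiko

def one_minute_cpu(host, user, password):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    _, stdout, _ = client.exec_command("show processes cpu | include CPU")
    line = stdout.read().decode()
    client.close()
    m = re.search(r"one minute:\s*(\d+)%", line)
    return int(m.group(1)) if m else -1

while True:
    # sample every 5 minutes; any dip below 99% answers the question above
    print(time.strftime("%H:%M:%S"), one_minute_cpu("10.0.0.1", "admin", "secret"))
    time.sleep(300)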

Hi, there is no command "RUC_PrintTpCpuUtilization":

SCE_VINA02#>debug slot 0 ppc 40 func ?

  CLS_CAM_DumpTuples              Dump all Tuples of the Classifier CAM

  CLS_SDRAM_DumpTuples            Dump all Tuples of the Classifier SDRAM

  CLS_ShowCounter                 Display Classifier counters

  cps                             Clear RuC party statistics

  d                               Display memory

  da                              Disable Aging

  devs                            List devices

  drv                             List system drivers

  ea                              Enable Aging

  FCRate_getRate                  Displays the Flow Context rate measured in the system

  fd                              List file descriptor names

  FF_ShowCounter                  Display Flow Filter counters for a given port

  i                               Tasks tCBs summary

  ir                              Initialize the general RuC statistics

  irdp                            Initialize the RuC dropped packets statistics

  paa                             Update aging time to immidiate for all parties

  RUC_DumpAllSubtasks             Show the RUC subtasks running on this cpu. Argument = DumpLevel (0/1/2)

  RUC_HWCNTR_PrintClsCntrs        Display Classifier counters

  RUC_HWCNTR_PrintMasterCntrs     Display DP and Gmac counters for a given port

  RUC_HWCNTR_ResetMasterClsCntrs  Reset Classifier counters

  RUC_HWCNTR_ResetMasterCntrs     Reset DP and Gmac counters for a given port

  RUC_LUT_DumpCacheLutsInfo       Dump all the Cache LUTs' info

  RUC_LUT_DumpLutEntries          Dump a specific LUT's (ID) Entries

  RUC_LUT_DumpLutStats            Dump a specific LUT's (ID) Stats

  RUC_LUT_Help                    Display the RUC_LUT (and cache LUT) debug help

  RUC_LUT_ResetLutEntries         Reset a specific LUT's entries

  RUC_ResetAllSubtasks            Reset all the RUC subtasks running on this cpu.

  RUC_ShowSubtaskTables           Shows the names of the subtasks tables currently registered on this cpu.

  SANB_MemShow                    System memory information

  sd                              Display the test-end and Line Card state output

  sps                             Display RuC party statistics

  sr                              Display the general Ruc statistics

  srdp                            Display RuC dropped packets statistics

  sysUpdateFlash                  Update the bootrom (WARNING! EXECUTE THIS COMMAND ONLY IF YOU ARE SURE)

  td                              Delete task

  telnetShell                     Start a telnet deamon for remote target shell (SCE2000 only)

  ti                              Task TCB complete information

  tr                              Resume task

  ts                              Suspend task

  tt                              Task trace

  uaa                             Update Aging t.o/immed. of all FCs (*100 mili)

  wsr                             Display the Ruc WAP traffic statistics

  STRING                          Name of any other function that doesn't appear on the above list

SCE_VINA02#>debug slot 0 ppc 40 func RUC_Pr

Here are the other commands:

SCE_VINA02#>show snmp MIB cisco-process | include cpmCPUTotal1minRev

cpmCPUTotal1minRev.36 = 99

cpmCPUTotal1minRev.37 = 25

cpmCPUTotal1minRev.38 = 27

cpmCPUTotal1minRev.39 = 26

cpmCPUTotal1minRev.40 = 27

cpmCPUTotal1minRev.41 = 28

cpmCPUTotal1minRev.42 = 28

cpmCPUTotal1minRev.43 = 29

cpmCPUTotal1minRev.44 = 25

cpmCPUTotal1minRev.45 = 26

cpmCPUTotal1minRev.46 = 28

cpmCPUTotal1minRev.47 = 32

cpmCPUTotal1minRev.48 = 25

SCE_VINA02#>

SCE_VINA02#>show snmp MIB cisco-process | include cpmCPUTotal5minRev

cpmCPUTotal5minRev.36 = 99

cpmCPUTotal5minRev.37 = 25

cpmCPUTotal5minRev.38 = 29

cpmCPUTotal5minRev.39 = 26

cpmCPUTotal5minRev.40 = 28

cpmCPUTotal5minRev.41 = 28

cpmCPUTotal5minRev.42 = 28

cpmCPUTotal5minRev.43 = 28

cpmCPUTotal5minRev.44 = 26

cpmCPUTotal5minRev.45 = 27

cpmCPUTotal5minRev.46 = 28

cpmCPUTotal5minRev.47 = 31

cpmCPUTotal5minRev.48 = 25

SCE_VINA02#>

SCE_VINA02#>

cscTpFlowsCapacityUtilization.37 = 13

cscTpFlowsCapacityUtilization.38 = 14

cscTpFlowsCapacityUtilization.39 = 14

cscTpFlowsCapacityUtilization.40 = 14

cscTpFlowsCapacityUtilization.41 = 14

cscTpFlowsCapacityUtilization.42 = 14

cscTpFlowsCapacityUtilization.43 = 14

cscTpFlowsCapacityUtilization.44 = 13

cscTpFlowsCapacityUtilization.45 = 13

cscTpFlowsCapacityUtilization.46 = 15

cscTpFlowsCapacityUtilization.47 = 14

cscTpFlowsCapacityUtilization.48 = 13

SCE_VINA02#>

From the content of the posts, I believe the troubleshooting steps are wrong.

It is the process CPU that should be looked at, not the traffic CPU.

We are currently facing the same issue, and I believe there is still no conclusion on the cause of the high process CPU.

It is for sure something problematic with your control processor, because the first indexed entity in the SNMP processor utilization list is the control processor.

So how are the RDR formatter statistics? (show rdr-formatter statistics)

This type of problem can occur because of tasks utilizing the control card, for example quota operations or CPA querying.

Can you check whether your category 4 destinations are able to handle your requests?

Check if there are queued RDRs in the RDR formatter statistics;

then, if there are queued ones, drop the queue via:

conf

no service RDR-formatter

int linecard 0

silent

Then look at the "sh proc cpu sorted" ?

conf

service RDR-formatter

int li 0

no silent

Then look at the RDR formatter statistics again. If nothing is queued, you can try an application-only reinstall (pqi).

Also, try to get a debug log and post the end of the file here (the summary of the error/fatal/warn histogram):

(debug dbg interpret output-file dbglog.log)

HTH.
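For reference, a small parser along these lines can flag the queued-RDR condition before you decide to drop the queue; the field names follow the statistics output posted below, and nothing here is a real SCE API.

import re

def queued_rdrs(stats_text):
    # Return {category: in-queue count} from 'show rdr-formatter statistics'
    result, category = {}, None
    for line in stats_text.splitlines():
        m = re.match(r"Category (\d+):", line.strip())
        if m:
            category = int(m.group(1))
        m = re.match(r"in-queue:\s*(\d+)", line.strip())
        if m and category is not None:
            result[category] = int(m.group(1))
    return result

sample = "Category 1:\n  sent: 1733914918\n  in-queue: 0\n"
print(queued_rdrs(sample))   # {1: 0} -> nothing stuck, the formatter is fine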

Here are the RDR statistics and the debug summary log.

SUBSCE8K01#>show RDR-formatter statistics

RDR-formatter statistics:

=========================

Category 1:

  sent:               1733914918

  in-queue:           0

  thrown:             0

  format-mismatch:    0

  unsupported-tags:   0

  rate:               404 RDRs per second

  max-rate:           1984 RDRs per second

Category 2:

  sent:               919837769

  in-queue:           0

  thrown:             0

  format-mismatch:    0

  unsupported-tags:   0

  rate:               0 RDRs per second

  max-rate:           3657 RDRs per second

Category 3:

  sent:               0

  in-queue:           0

  thrown:             0

  format-mismatch:    0

  unsupported-tags:   0

  rate:               0 RDRs per second

  max-rate:           0 RDRs per second

Category 4:

  sent:               156867972

  in-queue:           0

  thrown:             0

  format-mismatch:    0

  unsupported-tags:   0

  rate:               105 RDRs per second

  max-rate:           163 RDRs per second

Destination:     10.123.2.25 Port: 33000 Status: up   Active: yes

  Sent: 1733915274

  Rate:   396  Max:  1984

  Last connection establishment: 3 weeks, 6 days, 20 hours, 45 minutes, 27 seconds

Destination:     10.123.2.26 Port: 33000 Status: up   Active: no

  Sent:          0

  Rate:     0  Max:     0

  Last connection establishment:  0 hours, 6 minutes, 14 seconds

Destination:       127.0.0.1 Port: 33001 Status: up   Active: no

SUBSCE8K01#>

SUBSCE8K01#>

SUBSCE8K01#>sh proc cpu sorted

CPU utilization for five seconds:  96%/  0%; one minute:  97%; five minutes:  99%

PID   Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process

4929   293498890    475542          0 99.00% 21.55% 16.40%   0 (scos-main)

4555   573866000    478707          0 25.65% 25.89% 25.96%   0 RucMain

4918   625477080    478701          0 25.45% 29.72% 33.67%   0 (scos-main)

1810       88530       192          0  7.96% 14.82%  9.38%   0 (scos-main)

4927   172451130    474889          0  7.56% 14.99%  9.97%   0 (scos-main)

1809       97510       192          0  7.16% 17.72% 12.00%   0 (scos-main)

5359   122862320    476098          0  6.96% 13.29%  6.39%   0 (scos-main)

4817    83463610    476198          0  6.36%  6.31%  6.29%   0 controlRpcExtServer

5358   137770620    476591          0  5.97%  5.68%  6.14%   0 (scos-main)

5135   107450270    475407          0  4.38%  4.86%  6.47%   0 (scos-main)

4840   584104440    476935          0  4.37% 11.73%  5.28%   0 PartyDBManager

4821   152431380    478706          0  3.58%  3.40%  3.37%   0 tFormatter

4919   249427970    331960          0  2.98%  0.92%  0.92%   0 (scos-main)

4819    33178220    462486          0  1.59%  1.55%  1.39%   0 qReaderExt

5516    51751940    478645          0  1.39%  2.17%  2.15%   0 HwChk

4820    22420960    461376          0  0.99%  0.96%  0.96%   0 qReaderInt

3345    42535190    476936          0  0.99%  0.87%  0.84%   0 (scos-main)

4816    16505300    474586          0  0.80%  0.78%  0.72%   0 tLogger

4560     9043340    478707          0  0.40%  0.34%  0.34%   0 MsgQuMess-rcv

4521     7505470    476057          0  0.40%  0.36%  0.36%   0 nsm-to-sm-med-dispatcher

4520     8472310    475729          0  0.40%  0.38%  0.37%   0 sm-to-nms-high-collector

4551    10427120    478707          0  0.40%  0.45%  0.44%   0 CpuUtilMonitor

SUBSCE8K01#>

Summary of the debug file:

************** CURRENT APPLICATION INFORMATION ***************

Application file: /apps/data/scos/release_.sli

Application name: Engage SML Version 3.6.5 build 41 (Protocol Pack 26 build 15)

Using Lib - PL_V3.6.5_B1

Using Lib - Classifier_V3.6.5_B1

Application help: Entry point of Engage

Original source file: /auto/srbu-proj1-bgl/apps/users/aninatar/autoBuild/App/SML/Engage/V3.6.5/src/com/pcube/AppTemplate/Main/template_app_main.san

Compilation date: Thu, October 20, 2011 at 23:30:08

Compiler version: SANc v3.20 Build 14 built on: Tue 08/04/2009 06:58:22.;SME plugin v1.1

Capacity option used: 'EngageDefaultSCE8000'.

************** DBG LOG FILE MESSAGES HISTOGRAM ***************

MOD ID : MSG ID | SEVERITY  | AMOUNT  |   DROPS    | STRING

----------------+-----------+---------+------------|-------

Total number of different <> messages is 0

Total number of different <> messages is 0

[0x0502:0x000a] | | 0120529 | 0000001056 | Party DB: PartyDB::waitForState(pcid=%d) called, maxPCIDs=%d.

[0x0102:0x0013] | | 0024763 | 0000000000 | Application Subscriber Context: Breach counter is negative for subscriber service counter=%d

[0x0ce1:0x0010] | | 0000027 | 0000000000 | SnmpMibs: %s %s %d %d.

[0x0505:0x0194] | | 0000005 | 0000000000 | Party DB Manager: (anonymous) createUnmappedParty - rcid = %lu, pid=%lu, ipAddress = 0x%x - came from ruc=%u, selected ruc=%u wrong ruc selected.

[0x0300:0x004d] | | 0000013 | 0000000000 | Formatter: RdrVXConnection: Closed existing socket to port %d. (%s)

[0x0910:0x002c] | | 0000002 | 0000000000 | RuC Interrupt Handler: %s.

[0x0920:0x000a] | | 0000002 | 0000000000 | Logger: %s

Total number of different messages is 7

[0x0a10:0x0004] |     | 0000027 | 0000000000 | System Message: System Info message:

%s

[0x0aa2:0x001e] |     | 0000013 | 0000000000 | SNMP Agent: %s

[0x0300:0x0059] |     | 0000013 | 0000000000 | Formatter: Formatter connection closed: address %s, port %d.

[0x0aa2:0x001c] |     | 0000008 | 0000000000 | SNMP Agent: %s

[0x0300:0x0056] |     | 0000008 | 0000000000 | Formatter: No Formatter connections is open on category %d.

[0x0aa2:0x001d] |     | 0000013 | 0000000000 | SNMP Agent: %s

[0x0300:0x005a] |     | 0000013 | 0000000000 | Formatter: Formatter connection opened: address %s, port %d.

[0x0aa2:0x001b] |     | 0000008 | 0000000000 | SNMP Agent: %s

[0x0300:0x0057] |     | 0000008 | 0000000000 | Formatter: Formatter active connection opened : address %s, port %d, category %d.

[0x0505:0x029e] |     | 0000001 | 0000000000 | Party DB Manager: notifyPullRequestByPid() - %lu calls for notifySpawn() - failed in the last hour.

[0x0920:0x0008] |     | 0000004 | 0000000000 | Logger: %s

[0x0743:0x0000] |     | 0000001 | 0000000000 | TELNET_SSH_SERVER: A %s session from %s was established.

[0x0730:0x0010] |     | 0000016 | 0000000000 | CLI: Executing CLI command: '%s'.

[0x0309:0x002c] |     | 0000001 | 0000000000 | CmdlFFS: File system operation : del  %s

Total number of different     messages is 14
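As an aside, a histogram this size is easier to read sorted by volume; a throwaway parser over the pasted format, like the one below, surfaces the Party DB waitForState entry (~120K hits and the only line with drops) immediately. This is purely text massaging over the dump format above.

import re

def top_messages(histogram):
    rows = []
    for line in histogram.splitlines():
        m = re.match(r"\[(0x[0-9a-f]+:0x[0-9a-f]+)\]\s*\|.*?\|\s*(\d+)\s*\|\s*(\d+)\s*\|\s*(.+)", line)
        if m:
            mod_msg, amount, drops, text = m.groups()
            rows.append((int(amount), int(drops), mod_msg, text.strip()))
    return sorted(rows, reverse=True)

hist = "[0x0502:0x000a] | | 0120529 | 0000001056 | Party DB: PartyDB::waitForState(pcid=%d) called, maxPCIDs=%d."
for amount, drops, mod_msg, text in top_messages(hist):
    print("%8d hits %6d drops  %s  %s" % (amount, drops, mod_msg, text[:60]))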

Hi,

No queued RDRs in the RDR counters, so we can eliminate the RDR/RDR-collector side.

I suspect the high CPU could be caused by the http-get detection parameter being set higher than 1; can you try decreasing the value to 1 if it is set differently?

Also, how many subscribers are passing through the SCE? If the number of anonymous subscribers is relatively high, it can be the reason for the high CPU usage; in our setup we have an SCE that constantly sees ~50%+ CPU with 20K subscribers, all of them anonymous because we are not using an SM for another reason.
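A quick way to test that theory on a box is to script the anonymous-subscriber check and watch it next to the CPU figure; the sketch below just pulls the counter out of the subscriber db counters output, and the 20K alarm level is borrowed loosely from the setup described above, not from any Cisco guidance.

import re

def anonymous_subs(db_counters):
    m = re.search(r"Anonymous subscribers:\s*(\d+)", db_counters)
    return int(m.group(1)) if m else -1

count = anonymous_subs("Anonymous subscribers: 4088.")
if count > 20000:
    print("%d anonymous subscribers -- enough to load the control processor" % count)
else:
    print("%d anonymous subscribers -- unlikely to explain 99%% CPU alone" % count)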

Hey Guys,

I ended up opening a TAC case, and after some troubleshooting the engineer said the issue was related to a cosmetic bug:

CSCti78964 "show process cpu CLI shows unrealistic CPU utilization" (http://wwwin.cisco.com/ios/cets/pdi/cbms/cdets/legend.shtml)

Release note:

Symptom:

The show process cpu CLI may show an unrealistic CPU utilization sometimes.

SCE8000-2#>sh processes cpu | include CPU

CPU utilization for five seconds: 1403%/ 0%; one minute: 692%; five minutes: 210%

Conditions:

+ SCE8000

Workaround:

None. But this should be a cosmetic issue.
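Given that symptom, a trivial sanity check separates the cosmetic readings from real load: any percentage over 100 on the summary line is physically impossible and points at this bug rather than actual utilization. The function name and logic below are mine, not from the bug note.

import re

def suspicious_cpu(summary_line):
    # True if any utilization figure on the summary line exceeds 100%
    vals = [int(v) for v in re.findall(r"(\d+)%", summary_line)]
    return any(v > 100 for v in vals)

print(suspicious_cpu("CPU utilization for five seconds: 1403%/ 0%; "
                     "one minute: 692%; five minutes: 210%"))   # True
print(suspicious_cpu("CPU utilization for five seconds:  99%/  0%; "
                     "one minute:  99%; five minutes:  99%"))   # False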

Thanks for the info.

When anonymous subscribers go above 44K, the process CPU shoots to 99%.

Will it help if the number of anonymous subscribers is reduced?

Hi,

Actually, the number of anonymous users is never more than 10K.

At this moment the CPU is at 99% and the anonymous count is no more than 3K.

The CPU is at 99% 100% of the time, even when there is little traffic (about 1 Gbps):

SCE_VINA02#>show interface LineCard 0 subscriber db counters

Current values:

===============

Subscribers: 17901 used out of 999999 max.

Introduced/Pulled subscribers: 14699.

Anonymous subscribers: 3202.

Subscribers with mappings: 17901 used out of 249999 max.

Single non-VPN IP mappings: 20497.

Non-VPN IP Range mappings: 0.

IP Range over VPN mappings: 0.

Single IP over VPN mappings: 0.

VLAN based VPNs with subscribers: 0 used out of 4095.

Subscribers with open sessions: 11546.

Subscribers with TIR mappings: 0.

Sessions mapped to the default subscriber: 0.

SCE_VINA02#>

SCE_VINA02#>show processes cpu

CPU utilization for five seconds:  99%/  0%; one minute:  99%; five minutes:  99%

PID   Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process

    1      306110     22802          0  0.00%  0.00%  0.00%   0 (init)         

    2           0         0          0  0.00%  0.00%  0.00%   0 (kthreadd)     

    3       77540      7754          0  0.00%  0.00%  0.00%   0 (migration

I don't know; I believe it is just a cosmetic issue, since no services are being affected and the TAC engineer mentioned the bug. The funny thing is that we have more than 30 SCEs in our network and only this one is showing the high CPU issue.

It will help, but if you're running the device without an SM it is normal to see high control processor values.