Increased CPU utilization of 7206VXR

e.shik_i-teco
Level 1

Hello everybody!
The problem appeared on a 7206VXR after upgrading IOS from 12.4(11)T to 12.4(22)T2: CPU utilization increased by a factor of 2 to 3. The problem exists on a 7206VXR with a multilink PPP configuration on a PA-MCX-8TE1 port adapter. We suspected that the increased CPU utilization was caused by fast or process switching of this port adapter's traffic rather than CEF switching. The output of "show cef interface" showed:
Serial5/0:0 is up (if_number 55)
Corresponding hwidb fast_if_number 55
  Corresponding hwidb firstsw->if_number 55
  Internet Protocol processing disabled
  Interface is marked as point to point interface
  IPv4 packets switched to this interface are punted to the next slow path: PPP - unsupported interface
  Hardware idb is Serial5/0:0
  Fast switching type 7, interface type 13
  IP CEF switching enabled
  IP CEF switching turbo vector
  IP CEF turbo switching turbo vector
  IP prefix lookup IPv4 mtrie 8-8-8-8 optimized
  Input fast flags 0x0, Output fast flags 0x0
  ifindex 22(22)
  Slot Slot unit 0 VC 0
  Transmit limit accumulator 0x0 (0x0)
  IP MTU 0
In addition, the number of packets counted as Unsupported CEF features was constantly increasing.
IPv4 CEF Packets passed on to next switching layer
Slot  No_adj No_encap Unsupp'ted Redirect  Receive  Options   Access     Frag
RP         0       0     6765185      608  9145433        0        0     2052
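As a sanity check on those counters, here is a small sketch (using the numbers copied from the output above) that shows what share of the punted traffic each reason accounts for. Note that a high "Receive" count is normal, since it covers packets destined to the router itself; the "Unsupp'ted" count is the suspicious one here.

```python
# Per-reason punt counters copied from the "show cef drop" style
# output above (RP slot). Compute each reason's share of all punts.
punts = {
    "No_adj": 0, "No_encap": 0, "Unsupp'ted": 6765185,
    "Redirect": 608, "Receive": 9145433, "Options": 0,
    "Access": 0, "Frag": 2052,
}
total = sum(punts.values())
for reason, count in sorted(punts.items(), key=lambda kv: -kv[1]):
    print(f"{reason:10s} {count:10d} {100 * count / total:6.2f}%")
```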

In this case, simply changing the IOS led to a change in CPU load. Is this an IOS bug, or is the problem something else?


marikakis
Level 7

In the output you posted from the interface the following appears:

"IPv4 packets switched to this interface are punted to the next slow path: PPP - unsupported interface"

This doesn't look good. It rather looks like a bug to me at least.

I guess the "IP prefix lookup IPv4 mtrie 8-8-8-8" is too "optimized" in this IOS version!

Thank you for your reply!

I suspect this is a bug too, because I don't see any CPU process above 2%.

CPU utilization for five seconds: 46%/41%; one minute: 48%; five minutes: 50%
PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
  41       51876        3451      15032  1.03%  1.00%  1.00%   0 Per-Second Jobs 
  83       50128      332469        150  0.55%  0.68%  0.70%   0 IP Input        
141         488      789006          0  0.31%  0.35%  0.35%   0 HQF Shaper Backg
309       10596       45633        232  0.15%  0.17%  0.16%   0 IP-EIGRP: HELLO 
304       10492       45726        229  0.15%  0.17%  0.17%   0 IP-EIGRP: HELLO 
278         424      201233          2  0.15%  0.18%  0.18%   0 HSRP Common     
257        7612       10777        706  0.15%  0.13%  0.12%   0 NHRP            
  37        4292        3724       1152  0.15%  0.10%  0.11%   0 Net Background  
307        1924       19130        100  0.07%  0.06%  0.07%   0 IP-EIGRP: HELLO 
305        1956       14342        136  0.07%  0.06%  0.07%   0 IP-EIGRP: HELLO 
297        2820       18153        155  0.07%  0.04%  0.05%   0 IP-EIGRP: PDM   
294        5216       20707        251  0.07%  0.02%  0.05%   0 IP-EIGRP: PDM   
261         224      102001          2  0.07%  0.08%  0.07%   0 PPP manager     
258          48        3283         14  0.07%  0.00%  0.00%   0 IGMP Input      
303        1504       23103         65  0.07%  0.08%  0.07%   0 IP-EIGRP: HELLO 
300       10020       52651        190  0.07%  0.09%  0.12%   0 IP-EIGRP: PDM   
298        4076       20893        195  0.07%  0.04%  0.05%   0 IP-EIGRP: PDM   
295       10048       53605        187  0.07%  0.09%  0.12%   0 IP-EIGRP: PDM   
279        1400       10319        135  0.07%  0.02%  0.00%   0 HSRP IPv4       
251          60       32633          1  0.07%  0.03%  0.01%   0 CCPROXY_CT      
  49          44         659         66  0.07%  0.01%  0.00%   0 Compute load avg
  68         744       20923         35  0.07%  0.10%  0.09%   0 Voice PA Proc   
317        2328        2314       1006  0.07%  0.00%  0.00%   4 Virtual Exec    
108         120        4888         24  0.07%  0.03%  0.02%   0 CEF: IPv4 proces
310         296        2918        101  0.07%  0.01%  0.00%   0 OSPF-1 Hello    
259         184       65554          2  0.07%  0.07%  0.07%   0 PIM Process     
262         720      102393          7  0.07%  0.06%  0.07%   0 PPP Events      
  27         116        1920         60  0.00%  0.00%  0.00%   0 EEM ED Syslog   
  28           0           2          0  0.00%  0.00%  0.00%   0 Serial Backgroun
  26           0           5          0  0.00%  0.00%  0.00%   0 Entity MIB API  
  31           0           2          0  0.00%  0.00%  0.00%   0 SMART           
  29           0           1          0  0.00%  0.00%  0.00%   0 RO Notify Timers
  25           4          39        102  0.00%  0.00%  0.00%   0 DDR Timers      
  34           0           1          0  0.00%  0.00%  0.00%   0 SERIAL A'detect 
  24           0           1          0  0.00%  0.00%  0.00%   0 Policy Manager  
  23           0           1          0  0.00%  0.00%  0.00%   0 AAA_SERVER_DEADT
  30           0           1          0  0.00%  0.00%  0.00%   0 RMI RM Notify Wa
  32           4        3273          1  0.00%  0.00%  0.00%   0 GraphIt         
  39          20        1178         16  0.00%  0.00%  0.00%   0 Logger          
  40          20        3268          6  0.00%  0.00%  0.00%   0 TTY Background  
   4           4           1       4000  0.00%  0.00%  0.00%   0 EDDRI_MAIN      
  33           0           2          0  0.00%  0.00%  0.00%   0 Dialer event    
  43          20         302         66  0.00%  0.00%  0.00%   0 IF-MGR event pro
  22           0           2          0  0.00%  0.00%  0.00%   0 AAA high-capacit
  45           0           1          0  0.00%  0.00%  0.00%   0 IKE HA Mgr      
  46           0           1          0  0.00%  0.00%  0.00%   0 IPSEC HA Mgr    
  47           4           4       1000  0.00%  0.00%  0.00%   0 rf task         
  35           0           2          0  0.00%  0.00%  0.00%   0 XML Proxy Client
  36           0           1          0  0.00%  0.00%  0.00%   0 Critical Bkgnd  
  50        1644          74      22216  0.00%  0.01%  0.00%   0 Per-minute Jobs 
   9           0           2          0  0.00%  0.00%  0.00%   0 ATM VC Auto Crea
  52          12         708         16  0.00%  0.00%  0.00%   0 HC Counter Timer
  38           0           3          0  0.00%  0.00%  0.00%   0 IDB Work        
  42          24          73        328  0.00%  0.00%  0.00%   0 IF-MGR control p
  55           0           1          0  0.00%  0.00%  0.00%   0 SONET alarm time
  56           0           1          0  0.00%  0.00%  0.00%   0 CSP Timer       
  57           0           1          0  0.00%  0.00%  0.00%   0 FPD Management P
  58           0           1          0  0.00%  0.00%  0.00%   0 FPD Action Proce
  59           4          18        222  0.00%  0.00%  0.00%   0 VNM DSPRM MAIN  
  60          32          65        492  0.00%  0.00%  0.00%   0 VMI Background  
  61           0           1          0  0.00%  0.00%  0.00%   0 RF_INTERDEV_DELA
  62           0           1          0  0.00%  0.00%  0.00%   0 RF_INTERDEV_SCTP
  63           0        3276          0  0.00%  0.00%  0.00%   0 ISA Common Helpe
  44           0           1          0  0.00%  0.00%  0.00%   0 Inode Table Dest

But one thing confused me. A new version, 12.2(24)T2, was loaded with no effect. Is it possible for the same bug to persist across different images?

No CPU process above 2% is not very uncommon and doesn't necessarily look like a bug to me. It is definitely possible for a bug to persist between versions. It is also possible for a bug to be re-introduced if source code management hasn't been done properly. In any case, I don't know which bug this might be. It could be a CEF issue with multilink, CEF not working at all (what do you see on other interfaces?), or CEF not working with your specific software/hardware/configuration. Consider contacting Cisco about it.

Hello,

please consider the following document about high CPU usage caused by interrupts; it is not only processes that consume CPU.

http://www.cisco.com/en/US/products/hw/routers/ps359/products_tech_note09186a00801c2af0.shtml

>> CPU utilization for five seconds: 46%/41%; one minute: 48%; five minutes: 50%

the second figure, 41%, should be the amount of CPU used by interrupts.
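To make the arithmetic explicit, a minimal sketch using the two figures from the output above: the first number is total CPU over the last five seconds, the second is the portion spent at interrupt level (fast/CEF switching in hardware interrupt context), and the difference is what the scheduled processes used.

```python
# "CPU utilization for five seconds: 46%/41%" from show processes cpu:
# total CPU = 46%, interrupt-level CPU = 41%, so the processes listed
# in the output account for only the remaining difference.
total_5s, interrupt_5s = 46, 41
process_5s = total_5s - interrupt_5s
print(f"interrupt level: {interrupt_5s}%, process level: {process_5s}%")
```

This is why no single process shows more than about 2%: almost all of the load is at interrupt level, so it never appears against any process ID.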

The considerations reported in the document are similar to those provided by Maria.

Hope to help

Giuseppe

Thank you very much Giuseppe and Maria!

Following the Cisco recommendations, I captured a CPU profile, but I can't interpret the memory addresses. How can I understand which interrupts are being triggered so often?

System Total     = 000142622
Interrupt Total  = 000052341 (36 percent)
Sched Total      = 000087534 (61 percent)

Interrupt [03] = 000048919 (34 percent)
Interrupt [04] = 000000527 (00 percent)
Interrupt [05] = 000003422 (02 percent)
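The percentages in that profile are just each bucket's share of the total sample count; a small sketch (counts copied from the output above) reproduces them and shows that interrupt level [03] dominates:

```python
# Sample counts copied from the CPU profile output above. Each
# interrupt bucket's share of the system total identifies which
# interrupt level is consuming the CPU.
system_total = 142622
interrupts = {3: 48919, 4: 527, 5: 3422}
for level, samples in sorted(interrupts.items()):
    print(f"Interrupt [{level:02d}] = {100 * samples / system_total:.1f}%")
```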

As I said before, I see a large number of Unsupported drops in the output of "sh cef drop":

IPv4 CEF Drop Statistics
Slot  Encap_fail  Unresolved Unsupported    No_route      No_adj  ChkSum_Err
RP             0           0      983264           0           0           0

I can't say that CEF isn't working at all; here are the results of "sh int stats":

Serial5/0:0
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor         34       2822    1132625  685354754
             Route cache    1221707  182975521    2281941  843222791
                   Total    1221741  182978343    3414566 1528577545
Serial5/1:0
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor         34       2822    1096124  661955212
             Route cache    1221896  182961478    2320722  863436856
                   Total    1221930  182964300    3416846 1525392068

Multilink11
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor     151563   14063852     163578   17765152
             Route cache    1409196  335357110    3562446 3036089470
                   Total    1560759  349420962    3726024 3053854622

and the results of "sh ip cef switching statistics":

Path   Reason                       Drop       Punt  Punt2Host
RP LES Packet destined for us        17    1325368          0
RP LES Encapsulation resource         0     873083          0
RP LES Incomplete adjacency        1144          0          0
RP LES TTL expired                    0          0       1721
RP LES Fragmentation failed         165          0        415
RP LES Unclassified reason           56       1712          0
RP LES Neighbor resolution req       11          1          0
RP LES Total                       1393    2200164       2136

RP PAS Packet destined for us         0     319908          0
RP PAS Incomplete adjacency          62          0          0
RP PAS TTL expired                    0          0     130878
RP PAS Routed to Null0             5893          0      11698
RP PAS IP redirects                   0          0        132
RP PAS Unclassified reason        19596       1712          0
RP PAS Neighbor resolution req      200          2          0
RP PAS Total                      25751     321622     142708

All    Total                      27144    2521786     144844
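One way to read the "sh int stats" output above is to compute the processor-switched share of the output traffic; a high share is consistent with the "PPP - unsupported interface" punt seen in "show cef interface". A minimal sketch, using the Serial5/0:0 numbers copied from above:

```python
# Output packet counts for Serial5/0:0 from "show interfaces stats"
# above: "Processor" = process-switched, "Route cache" = fast/CEF
# switched. A large processor share means punting to the slow path.
processor_out, route_cache_out = 1132625, 2281941
total_out = processor_out + route_cache_out
share = 100 * processor_out / total_out
print(f"{share:.1f}% of output packets were process-switched")
```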

The most interesting thing is that there is no problem with CPU utilization on another router with the same configuration and the same hardware, but with the old version 12.4(11)T...

Hello,

>> The most interesting thing is that there is no problem with CPU utilization on another router with the same configuration and the same hardware, but with the old version 12.4(11)T...

unfortunately, newer does not always mean better.

I would consider going back to that IOS release unless there is a new feature you need or a security concern.

Hope to help

Giuseppe

Totally agree with Giuseppe. And the older I get, the more I agree that newer doesn't always mean better!

Hello,
I have the same issue with the 12.4(24)T version on my 7206VXR router.
We can't find 12.4(11)T on the Cisco web site.
Do you still have this version?
Thanks.

I recommend you use 12.2 SRE; it is made for ISPs and properly tested.

Hello,

I can't use this IOS because I have an NPE-400 (Cisco 7206VXR) with only 512 MB of DRAM,

and 12.2 SRE (c7200-adventerprisek9-mz.122-33.SRE.bin) requires a minimum of 512 MB of DRAM.

Consequently, I will try the IOS below instead (minimum memory: 256 MB of DRAM):
Routers > Cisco 7206VXR Router > IOS Software > 12.4.20T
ADVANCED ENTERPRISE SERVICES > file c7200-adventerprisek9-mz.124-20.T.bin
Thanks

T releases are not right for a 7200 at an ISP. Too buggy.

OK, but

can I run a 12.2 SRE version on my 7206VXR router (NPE-400) with a maximum of 512 MB of RAM?

Because the Cisco web site says 512 MB of DRAM is the minimum.

Thank you

Cisco says the minimum is 512 MB and you have 512 MB; I leave the conclusion to you.

There are several memory leaks in SRE1, but it has been stable so far. SRE2 will be released soon; I hope to see the memory leaks fixed.
