3825 issues after IOS upgrade (cpu, netflow)

Nikolaos Milas
Level 1

Hello,

I have just upgraded two 3825 routers from version 12.4(9)T2 to 12.4(24)T8 (Advanced Enterprise).

Things seem to be running smoothly after the upgrade (no serious issues reported), but CPU load has increased significantly. On one of the routers it went from a 15-25% average up to a 60-70% average (in the few hours we have been watching it).

Can anyone tell me whether this is expected and normal behavior?

After the upgrade, I noticed that both routers have issues with netflow exporting.

My configuration was:

ip flow-cache timeout active 1
ip flow-export source Loopback0
ip flow-export version 9
ip flow-export interface-names
ip flow-export destination 195.251.xxx.xxx 9995
ip flow-top-talkers
 top 10
 sort-by bytes
 cache-timeout 3600000
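
For reference, the top-talkers list configured above is displayed with the standard show command:

show ip flow top-talkers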

After the upgrade, the line:

ip flow-export interface-names

was removed automatically, and if I add it back manually it still does not appear in the running configuration.
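
The export state itself can be verified directly on the router; these are standard commands on 12.4T, for anyone who wants to compare:

show ip flow export
show ip cache flow

show ip flow export reports the export version, destination and exported-datagram counters, and show ip cache flow shows whether flows are actually being accounted.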

The routers' interface netflow configuration uses only "ip flow ingress" statements on all interfaces (as is usually suggested).

Also during the upgrade, the automatically modified running configuration lost the following statement on all interfaces:

ipv6 flow ingress

No IPv6 reporting has been produced since then. (I tried to re-enter the same statement manually, but the router did not accept it.)

Can you please suggest a reason for this behavior and indicate possible solutions?

Here are some graphs from one of the two routers (the upgrade point is clearly visible):

[Graph: main interface (connection to ISP)]

[Graphs: some more interfaces]

Please advise.

Nick

5 Replies

Nikolaos Milas
Level 1

Let me add the following info to the high CPU problem:

1. High interrupt percentage (without any particularly significant network load):

noa-pen#show processes cpu sorted
CPU utilization for five seconds: 78%/31%; one minute: 73%; five minutes: 73%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 114     3031176     4953163        611 44.00% 40.95% 40.79%   0 IP Input         
  42      101512        5694      17827  1.83%  1.79%  1.78%   0 Per-Second Jobs  
 178       10772     1160649          9  0.47%  0.45%  0.46%   0 HQF Shaper Backg
  51        1780          96      18541  0.31%  0.03%  0.00%   0 Per-minute Jobs  
   2         544        1134        479  0.07%  0.03%  0.02%   0 Load Meter       
 153         836       26245         31  0.07%  0.03%  0.02%   0 TCP Timer        
 306        4864       27319        178  0.07%  0.07%  0.07%   0 IPv6 Input       
 179         720       55993         12  0.07%  0.02%  0.02%   0 RBSCP Background
 307        2008       34419         58  0.07%  0.07%  0.07%   0 IPv6 ND          
 141        2388        8692        274  0.07%  0.08%  0.07%   0 CEF: IPv4 proces
  19        5960       29324        203  0.07%  0.07%  0.07%   0 ARP Input        
 228         796       17104         46  0.07%  0.02%  0.00%   0 OSPF-120 Hello   
  12          28        5544          5  0.00%  0.00%  0.00%   0 IPC Deferred Por
  11          72        5544         12  0.00%  0.00%  0.00%   0 IPC Periodic Tim
  10           0           1          0  0.00%  0.00%  0.00%   0 IPC Zone Manager
...

Never in the past did we have such loads:

[Graph: CPU load history; the arrow shows the upgrade time]

Here is the primary (ISP-facing) interface config for your reference. (This config is similar to that of the other interfaces.):

interface GigabitEthernet0/0.102
 encapsulation dot1Q 102
 ip address 62.xxx.xxx.xxx 255.255.255.254
 ip access-group 100 in
 ip access-group 101 out
 ip verify unicast reverse-path 103
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip nbar protocol-discovery
 ip flow ingress
 ip nat outside
 ip virtual-reassembly
 no ip mroute-cache
 ipv6 address 2001:648:2FFD:xxx:xxx::2/126
 ipv6 enable
 no keepalive
 no cdp enable
 crypto map vpn

Please advise.

Thanks,
Nick

With regard to the IPv6 issues, I found this nice article:

http://commandline.ninja/2013/10/11/ipv6-flow-export-with-flexible-netflow/

which revived our IPv6 netflow export.

I also found two interfaces (with many subinterfaces each) having CEF disabled ("no ip route-cache cef"). Interestingly, when I enabled CEF on these interfaces, it fixed both the CPU load problem AND the IPv4 netflow export problem!!
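
For anyone hitting the same thing, this is roughly how I checked and fixed it (a sketch; GigabitEthernet0/0 stands in for the affected main interfaces):

noa-pen#show ip cef summary
noa-pen#show cef interface GigabitEthernet0/0
noa-pen#configure terminal
noa-pen(config)#interface GigabitEthernet0/0
noa-pen(config-if)#ip route-cache cef
noa-pen(config-if)#end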

I can understand why this fixes CPU load, but not why it fixes IPv4 netflow export as well!

If someone can provide an explanation/insight on this, I would appreciate it!

All the best,
Nick

In the end I found that IPv4 netflow did not produce correct output (despite my initial impressions) with the configuration described earlier.

I'll have to switch to Flexible IPv4 netflow (FNF) and see if things work properly this way; a sketch of what I have in mind follows.
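
This is roughly the FNF equivalent of the classic configuration above, using the pre-defined IPv4 record (the exporter/monitor names "IPv4" are mine):

flow exporter IPv4
 destination 195.251.xxx.xxx
 source Loopback0
 transport udp 9995
!
flow monitor IPv4
 record netflow ipv4 original-input
 exporter IPv4
!
interface GigabitEthernet0/0.102
 ip flow monitor IPv4 input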

I'll update this thread with any new information.

(I have started questioning my decision to upgrade these routers, even though their IOS was 9 years old. Argh.)

Nick

I am still struggling with this.

I temporarily switched to IPv4 FNF (Flexible NetFlow), but (as I found out later) I ran into this problem:

https://supportforums.cisco.com/discussion/11686271/switching-flexible-netflow-increases-cpu-load

After our routers hit 100% CPU load, I had to switch back to traditional IPv4 netflow (with CEF fully enabled). IPv4 netflow now seems to be working correctly, but currently I am testing with:

ip flow ingress
ip flow egress

...configured ONLY on the ISP interface.
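
That is, on the ISP-facing subinterface shown earlier (sketch):

interface GigabitEthernet0/0.102
 ip flow ingress
 ip flow egress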

However, IPv6 FNF exporting (the only IPv6 netflow export method available on this IOS version) does not seem to be working properly. It magnifies the reported traffic and packet counts by several orders of magnitude: IPv6 traffic that is really around 100 packets/sec and 100 Kbps appears as roughly 1 Gpackets/sec and 10 Gbps, respectively.

I have had to stop IPv6 recording (at least temporarily), because it distorts the real traffic graphs (data is collected and the graphs are generated by nfdump/nfsen).

So far I have not been able to find a solution. If someone can advise, I would appreciate it.

       flow exporter IPv6
         destination 195.251.xxx.xxx
         source Loopback0
         transport udp 9995
       !
       flow monitor IPv6
         record netflow ipv6 original-output
         exporter IPv6

And on the interfaces:

       ipv6 flow monitor IPv6 input

Perhaps the "record" statement should be configured differently?

What may be wrong?
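
For diagnosis I am comparing what the monitor actually caches against what the exporter sends, using the standard FNF show commands:

show flow monitor IPv6 cache
show flow exporter IPv6 statistics
show flow interface GigabitEthernet0/0.102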

Thanks in advance,
Nick

It finally worked (IPv6 FNF). I had to configure a user-defined record, because the pre-defined records would not produce correct results (as described in my earlier posts), for reasons I have not been able to determine.

Here is the working configuration:

   flow exporter IPv6
     destination 10.10.10.10
     source Loopback0
     transport udp 9995
   !
   !
   flow record ipv6_record
     match ipv6 protocol
     match ipv6 source address
     match ipv6 destination address
     match transport source-port
     match transport destination-port
     match flow direction
     collect routing source as
     collect routing destination as
     collect ipv6 dscp
     collect ipv6 source mask
     collect ipv6 destination mask
     collect transport tcp source-port
     collect transport tcp destination-port
     collect transport tcp flags
     collect interface input
     collect interface output
     collect counter bytes
     collect counter packets
     collect timestamp sys-uptime first
     collect timestamp sys-uptime last
   !
   !
   flow monitor IPv6
     record ipv6_record
     exporter IPv6
   !

and on all IPv6-enabled router interfaces:

    ipv6 flow monitor IPv6 input
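
The resulting record and the export counters can be double-checked with the same standard show commands (names as defined above):

    show flow record ipv6_record
    show flow monitor IPv6 statistics
    show flow exporter IPv6 statistics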

All the best,
Nick
