
Switching to Flexible NetFlow increases CPU load

Dear all

We have a couple of 7206VXR NPE-G1 routers running IOS Version 12.4(24)T6.

They have been exporting IPv4 NetFlow data all along with this simple configuration:

ip flow-export source Loopback0
ip flow-export version 5 origin-as
ip flow-export destination x.y.z.1 2190
ip flow-export destination x.y.z.2 2191

interface range GigabitEthernet0/1 - 3
 ip flow ingress

Everything was fine. But now we wanted NetFlow accounting for IPv6 as well, and I realized that I had to deal with the new Flexible NetFlow configuration method in order to get IPv6 flow exports.

Well, it turned out that this is indeed very flexible and not really hard to do.

So we have configured exporters, records and monitors and activated

ipv6 flow monitor IPv6-NETFLOW input

on the three interfaces. Since there is almost no IPv6 traffic yet on the router in question, this configuration did not affect the CPU load.
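For reference, a minimal configuration along those lines might look like the following. The exporter name is a placeholder, and this sketch assumes the builtin "netflow ipv6 original-input" record rather than a user-defined one; the collector address and port are taken from the original ip flow-export lines above:

flow exporter IPv6-EXPORT
 source Loopback0
 destination x.y.z.1
 transport udp 2190
 template data timeout 60

flow monitor IPv6-NETFLOW
 record netflow ipv6 original-input
 exporter IPv6-EXPORT
 cache timeout active 60

interface range GigabitEthernet0/1 - 3
 ipv6 flow monitor IPv6-NETFLOW input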

When we were satisfied with our flow record configuration and everything ran fine with IPv6, we did the analogous configuration for IPv4, switched off the old "ip flow-export" configuration and enabled

ip flow monitor IPv4-NETFLOW input

instead on the three interfaces.

It works as expected, BUT the CPU load increased dramatically from 20 to 40 percent. And where we had 25% peak load so far, we are now seeing almost 70%.

So, are there any issues with Flexible NetFlow that we should know about? Is there anything one can do about the CPU load, or do we have to switch back to the old flow-export style, at least for IPv4?

Any suggestions?



Additional observation:

We have had almost constant traffic of around 120 Mbit/s during the last 6 hours, but the CPU load increased from 45% to 70% within this period. And it looks like it is still increasing.
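When the load keeps climbing like this, it can help to see which process is actually consuming the cycles. A sketch using standard IOS show commands (the monitor name is the one from this thread; exact output formats vary by IOS release):

show processes cpu sorted | exclude 0.00
show processes cpu history
show flow monitor IPv4-NETFLOW cache

If the flow cache keeps growing over time, that would fit the pattern of load increasing under constant traffic.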


I have switched back the IPv4 NetFlow configuration to the old "ip flow-export" method, and the CPU load went down to the usual 20% right away.
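Reverting was just a matter of removing the monitor from the interfaces and restoring the original commands. A sketch, based on the configuration shown at the top of this thread:

interface range GigabitEthernet0/1 - 3
 no ip flow monitor IPv4-NETFLOW input
 ip flow ingress
!
ip flow-export source Loopback0
ip flow-export version 5 origin-as
ip flow-export destination x.y.z.1 2190
ip flow-export destination x.y.z.2 2191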

Flexible NetFlow is still active for IPv6, though... I can only guess what might happen once IPv6 traffic becomes noticeable.


I know this is a very old post, but I recently ran into exactly the same situation. I have two routers that are basically identical, although one moves more traffic than the other. I needed to use Flexible NetFlow to start looking at IPv6 traffic, and I set it up just like you did above. The load jumped from an average of 20% to pegging the router at 100%. It got so bad that it actually caused BGP sessions to fail, and performance was terrible.

I even added sampling, as suggested, to control the CPU load. It did help, but the load held at about 70 to 80 percent, which was not acceptable. I have gone back to the original NetFlow and the load is once again hovering around 20%. The other router, which is not as heavily loaded, seemed to handle Flexible NetFlow with only a couple of percent more load, but I imagine it is just a matter of time before that one starts to have problems too. Did you ever find a solution for this?

Here is the config I used:

flow exporter nfsen
 source GigabitEthernet0/2.12
 transport udp 9992
 template data timeout 60
flow monitor IPv6-nfsen
 record netflow ipv6 original-input
 exporter nfsen
 cache timeout active 60
flow monitor IPv4-nfsen
 record netflow ipv4 original-input
 exporter nfsen
 cache timeout active 60


sampler Sam1in100
 mode random 1 out-of 100


And then on the interfaces:

ip flow monitor IPv4-nfsen sampler Sam1in100 input
ipv6 flow monitor IPv6-nfsen sampler Sam1in100 input
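To check whether the sampler and monitors are actually taking effect, these standard show commands should help (monitor, sampler, and exporter names are the ones from the config above; output details vary by release):

show sampler Sam1in100
show flow monitor IPv4-nfsen cache
show flow monitor IPv4-nfsen statistics
show flow exporter nfsen statistics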






Unfortunately, we have never found a solution for this. Up to now we have kept it as it was: traditional configuration for IPv4 and flexible configuration for IPv6.

It's a pity that this support community provides answers/hints/discussions only in rare cases.



I found one interesting bug: "High CPU observed while enabling Flexible NetFlow" (CSCtx50771).

We will try upgrading our 7206VXR to 15.2.4S6 and will post the results here later.


I can confirm that after the upgrade to 15.2.4S6, CPU usage with FNF (IPv4) increased from 40% to 60% (rather than from 40% to 100%).

Jeff Orr

I agree with all the comments here. I have been running into this on my datacenter routers. We are running GETVPN for MPLS WAN encryption, and with traditional NetFlow I lose all flows relating to encrypted traffic. When switching to FNF, we see a massive CPU spike, which is still occurring as of 15.3(2)T2.

For now I am trying a very stripped-down flow record to see if that helps.

flow record FNF-v9
 match ipv4 tos
 match ipv4 protocol
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 match interface input
 match flow sampler
 collect routing source as
 collect routing destination as
 collect routing next-hop address ipv4
 collect ipv4 dscp
 collect ipv4 source mask
 collect ipv4 destination mask
 collect transport tcp flags
 collect interface output
 collect counter bytes
 collect counter packets
 collect timestamp sys-uptime first
 collect timestamp sys-uptime last
 collect application name
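For completeness, a sketch of how such a record would be tied in, reusing the exporter and sampler names from earlier in this thread (the monitor name here is illustrative, not from any poster's config):

flow monitor FNF-v9-MONITOR
 record FNF-v9
 exporter nfsen
 cache timeout active 60

interface GigabitEthernet0/1
 ip flow monitor FNF-v9-MONITOR sampler Sam1in100 input

One thing to watch: "collect application name" requires NBAR classification, which itself adds CPU load, so it may be worth testing the record with and without that line.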


Looking forward to any comments from anyone who knows of a solution.