When dealing with high CPU on a box where the cycles are spent processing interrupts, there is always the question of what kind of packets are being sent to the CPU. In this case we need a way to look into those packets to see why they are punted to the CPU for processing.
On the Catalyst 6500 and 7600 platforms there is a great tool for troubleshooting high CPU under interrupts: “debug netdr”.
This tool can be used even on a very busy box, at 100% CPU, as long as CLI access is still possible.
It captures the packets going to and from the CPU into a circular buffer (which can store 4K packets) and does not add any noticeable overhead to the system.
There are too many possible reasons for high CPU under interrupts to cover here, so I will just give a short example of how this tool can be used and what kind of information we can get from it.
First of all, once we determine that the CPU is high under interrupts, we can check the output of “show ibc”, which shows the inband channel statistics. The main thing to look at here is the number of packets going to and leaving the CPU.
Interface IBC0/0(idb 0x1CC84028)
5 minute rx rate 14000 bits/sec, 21 packets/sec
5 minute tx rate 91000 bits/sec, 82 packets/sec
From here we can see whether there are many packets coming in but not so many leaving the box, or whether there is some ratio like 1:1 or 1:2; this gives an indication of what is happening. If the ratio is close to 1:1, most likely those are packets that need to be process switched for some reason. If the ratio is 1:2, then maybe we are doing some fragmentation, and so on.
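The ratio check above is easy to script off-box. A minimal sketch, assuming the “5 minute” rate lines keep the format shown in the sample output (exact “show ibc” formatting varies by IOS release):

```python
import re

# Sample rate lines as printed by "show ibc" (illustrative; format varies by release)
sample = """\
5 minute rx rate 14000 bits/sec, 21 packets/sec
5 minute tx rate 91000 bits/sec, 82 packets/sec
"""

def pps(direction, text):
    """Extract the packets/sec figure for 'rx' or 'tx' from show ibc output."""
    m = re.search(r"5 minute %s rate \d+ bits/sec, (\d+) packets/sec" % direction, text)
    return int(m.group(1)) if m else None

rx, tx = pps("rx", sample), pps("tx", sample)
# A tx:rx ratio near 1:1 suggests punted traffic that is process switched and
# forwarded back out; other ratios can hint at fragmentation or locally
# generated traffic.
print("rx=%d pps, tx=%d pps" % (rx, tx))
```

The regex and helper name are made up for this sketch; the point is simply to compare the two packets/sec counters.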
The next step is to start the capture with “debug netdr capture”.
Just type “debug netdr capture”, wait for a second or two, and then stop the capture with “undebug netdr capture”.
To display the captured packets, use “show netdr captured-packets”.
Once we capture the packets, we will see something like this:
tcp src 56662, dst 2525, seq 3178192631, ack 0, win 16384 off 7 checksum 0x327 syn
In this output there are a couple of things we can look at.
First, we can see whether this is an incoming or an outgoing packet, based on the line:
------- dump of incoming inband packet -------
Then we can see the source and destination addresses. This tells us which flow is going to the CPU, and we can then check the routing for the destination to verify that all the routing information is there.
We can also see the incoming interface and VLAN for these packets, and check whether the incoming interface has some configuration on it that would cause the packets to be process switched.
We can see whether the flood bit is set and whether that is the reason the packets are sent to the CPU.
Then we can check the TTL value to see whether the packets are punted to the CPU because of TTL=1. If too many packets with TTL=1 are punted to the CPU, we need to find the reason for it, but in the meantime we can protect the device by configuring a rate limiter for this kind of packet. We can configure this limiter with “mls rate-limit all ttl-failure 100 10”; choose the values that are appropriate for your network.
If this is an MPLS packet, the payload will be shown in the output as well. You then need to decode the hex values to see how many labels are there, and check the label forwarding table to verify that all the forwarding information is present on the box. We can also see the TTL value of those packets.
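Decoding the label stack from the hex payload is mechanical: each label entry is 32 bits, laid out as label (20 bits), EXP (3 bits), bottom-of-stack flag (1 bit), and TTL (8 bits). A small sketch of that decoding (the hex string below is made up for illustration, not real netdr output):

```python
def decode_mpls_stack(hex_payload):
    """Decode 32-bit MPLS label entries from a hex string until the
    bottom-of-stack (S) bit is set. Layout: label(20) | EXP(3) | S(1) | TTL(8)."""
    data = bytes.fromhex(hex_payload)
    labels, offset = [], 0
    while offset + 4 <= len(data):
        entry = int.from_bytes(data[offset:offset + 4], "big")
        labels.append({
            "label": entry >> 12,            # top 20 bits
            "exp": (entry >> 9) & 0x7,       # 3 EXP/TC bits
            "bottom": bool((entry >> 8) & 0x1),
            "ttl": entry & 0xFF,
        })
        offset += 4
        if labels[-1]["bottom"]:             # payload (e.g. the IP header) follows
            break
    return labels

# Made-up two-label stack: outer label 16003 TTL 255, inner label 24 (bottom) TTL 254
stack = decode_mpls_stack("03E830FF000181FE")
for entry in stack:
    print(entry)
```

This also exposes the per-label TTLs, which is useful when checking for TTL-expiry punts on labeled traffic.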
These are only a few general things we can look into, but this command is an excellent starting point for troubleshooting high CPU under interrupts on the 7600 and 6500 platforms.
Use the options under the “debug netdr capture” command to limit the scope of the captured packets; it will make the output easier to analyze.
lan-7600-1#debug netdr capture ?
  acl                     Capture packets matching an acl
  and-filter              Apply filters in an and function: all must match
  destination-ip-address  Capture all packets matching ip dst address
  dstindex                Capture all packets matching destination index
  ethertype               Capture all packets matching ethertype
  interface               Capture packets related to this interface
  or-filter               Apply filters in an or function: only one must match
  rx                      Capture incoming packets only
  source-ip-address       Capture all packets matching ip src address
  srcindex                Capture all packets matching source index
  tx                      Capture outgoing packets only
  vlan                    Capture packets matching this vlan number
When the capture is complete, use the “pipe” to filter the output based on the information you need.
show netdr captured-packets | i ttl
This gives you only the lines with the TTL and the source and destination addresses. This way you can quickly check whether a single flow is going to the CPU or there is a variety of flows.
show netdr captured-packets | i interface
This way you can see whether all the packets are coming from the same interface.
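The same triage can be done off-box by pasting the captured output into a file and counting flows. A rough sketch, assuming lines that carry “src …, dst …” pairs (the sample text is made up; netdr output formatting differs between releases):

```python
import re
from collections import Counter

# Made-up lines in the spirit of "show netdr captured-packets" output
capture_text = """\
ipv4: src 10.1.1.5, dst 192.0.2.9, ttl 1
ipv4: src 10.1.1.5, dst 192.0.2.9, ttl 1
ipv4: src 10.2.2.7, dst 198.51.100.3, ttl 64
"""

def top_flows(text):
    """Count (src, dst) pairs to see whether one flow dominates the punted traffic."""
    flows = Counter(re.findall(r"src ([\d.]+), dst ([\d.]+)", text))
    return flows.most_common()

for (src, dst), count in top_flows(capture_text):
    print("%s -> %s : %d packets" % (src, dst, count))
```

If one (src, dst) pair dominates the counts, you are looking at a single misbehaving flow rather than broadly distributed punted traffic.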
The packets are captured in a circular buffer, so only the latest 4K packets are kept. If the CPU is high under interrupts, running "debug netdr capture" for only a second is enough to capture valuable information.