09-14-2011 06:36 PM
Setup:
Testset A <GigInt ---- link MTU 9000 ---- GigInt> non-Cisco device <GigInt ---- link MTU 4000 ---- GigInt> ASR9K <---- link MTU 9000 ----> Testset B
A bidirectional stream is running through with a packet size of 5000 bytes (fragmentation allowed); the traffic is 900 MB.
The stream from Testset A is fine: all 900 MB reaches Testset B. The non-Cisco device fragments the 5000-byte packets because of its 4000-byte interface MTU, but transmits the traffic with no loss and low CPU usage.
The stream from Testset B is getting dropped at the ASR9K egress interface (the one connected to the non-Cisco device): the ASR9K receives all 900 MB of traffic but sends only about 70 MB out to the MX (the non-Cisco device). We have checked LPTS and no counters are incrementing.
We also tested a 7606 with an ES+ card in place of the ASR9K; the results are similar, with traffic around 70-80 MB.
Has anyone seen this behaviour? Can we modify the configuration to make it work?
Thanks,
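For context, here is a rough sketch of what fragmentation on the 4000-byte link implies, assuming plain IPv4 with a 20-byte header and no options (the exact numbers are illustrative, not from the thread):

```python
import math

# Back-of-the-envelope: IPv4 fragmentation of 5000-byte packets on a 4000-byte MTU link.
# Fragment payloads must be multiples of 8 bytes; assumes a 20-byte IPv4 header.
MTU = 4000
IP_HDR = 20
PKT = 5000

max_frag_payload = (MTU - IP_HDR) // 8 * 8        # 3976 bytes of payload per fragment
payload = PKT - IP_HDR                            # 4980 bytes of original payload
n_frags = math.ceil(payload / max_frag_payload)   # 2 fragments per original packet

rate_bps = 900e6                                  # ~900 Mb of offered load
pps = rate_bps / (PKT * 8)                        # ~22,500 packets/s needing fragmentation

print(n_frags, int(pps))                          # 2, 22500
```

So the router facing the 4000-byte link has to fragment on the order of 22,500 packets per second, which matters later in the thread.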
09-14-2011 09:07 PM
Hello Atif,
Could you provide the following outputs from the ASR so we can better understand the test:
- show install active summary
- show interfaces (for the interfaces involved)
- show running-config
Have a look at the following packet-drop troubleshooting guideline to identify the drop reason:
https://supportforums.cisco.com/docs/DOC-15552
Regards,
/A
09-14-2011 09:51 PM
My packet size is 5000 bytes, generated from the test set, but it allows fragmentation: no DF bit is set. I am attaching the file output as well. Are the counters below showing anything?
RP/0/RSP0/CPU0:ios#show controllers np count np4 location 0/0/CPU0 | i FRAG
Thu Sep 15 04:48:45.364 UTC
167 IPV4_FRAG_NEEDED_PUNT 23014497 0
168 IPV4_FRAG_NEEDED_PUNT_EXCD 131973547 0
265 MPLS_FRAG_NEEDED_PUNT 2135215 1000
266 MPLS_FRAG_NEEDED_PUNT_EXCD 5697694 1910
Thanks,
Atif
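A quick way to read those two IPv4 counters, assuming they are cumulative packet counts and that PUNT_EXCD counts packets dropped by the punt policer (a rough sketch; exact column semantics may vary by release):

```python
# IPV4_FRAG_NEEDED_PUNT      = packets punted to the LC CPU for fragmentation
# IPV4_FRAG_NEEDED_PUNT_EXCD = packets that exceeded the punt policer and were dropped
punted = 23_014_497
dropped = 131_973_547

total = punted + dropped
print(f"{dropped / total:.0%} of fragmentation-needed packets dropped")  # ~85%
```

That is, the large majority of packets needing fragmentation never make it out, which lines up with most of the 900 MB stream being lost.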
09-15-2011 01:23 AM
Thanks for the info.
Fragmentation on the ASR9K is done in the slow path: the packet is punted to the LC CPU for fragmentation and then re-injected into the egress path.
To keep CPU utilization at a reasonable level, LPTS rate-limits these punts to 1000 pps, so packets exceeding that rate are dropped; that is what FRAG_NEEDED_PUNT_EXCD shows us. This can be verified with the show lpts pifib hardware static-police command.
For example:
#show lpts pifib hardware static-police location 0/1/CPU0 | i FRAG
Thu Sep 15 10:15:14.231 CEST
IPV4_FRAG_NEEDED_PUNT NETIO_LO_STREAM_ID 1000 400 217621 0 Local
MPLS_FRAG_NEEDED_PUNT NETIO_LO_STREAM_ID 1000 400 0 0 Local
The rate can be increased, but be aware that this can lead to high LC CPU utilization.
For example:
!
lpts punt police location 0/1/CPU0
exception ipv4 fragment rate 5000
!
#show lpts pifib hardware static-police location 0/1/CPU0 | i FRAG
Thu Sep 15 10:13:54.795 CEST
IPV4_FRAG_NEEDED_PUNT NETIO_LO_STREAM_ID 5000 5000 217621 0 Local
MPLS_FRAG_NEEDED_PUNT NETIO_LO_STREAM_ID 1000 400 0 0 Local
/A
09-15-2011 04:56 AM
OK, we tried adjusting this rate, and CPU utilization went up, but we still did not get our desired 900 MB of throughput; it was around 120 MB (up from 70 MB). It therefore seems unlikely that we would modify the default rate.
Now, in a real production scenario, is this going to be a problem at any point, or do we not expect to handle fragmentation at this rate and volume? This was a test. I see that a Cisco 7606 with RSP720 and ES+ cards shows the same behaviour.
Any insight and advice would be great.
Thanks.
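A back-of-the-envelope calculation shows why the throughput lands roughly where it does. This assumes one punt per original 5000-byte packet, uses the 5000 pps rate from the earlier config example, and ignores policer burst and L2 overhead (illustrative only):

```python
# Rough ceiling on fragmented throughput imposed by the punt policer:
# only punted packets get fragmented and forwarded; the rest are dropped.
PKT_BYTES = 5000

for punt_rate_pps in (1000, 5000):
    max_bps = punt_rate_pps * PKT_BYTES * 8
    print(f"{punt_rate_pps} pps -> ~{max_bps / 1e6:.0f} Mb/s")

# 1000 pps -> ~40 Mb/s, 5000 pps -> ~200 Mb/s:
# the same order of magnitude as the ~70 MB and ~120 MB observed in the thread.
```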
09-15-2011 01:03 PM
The ASR9K itself is protected by LPTS, so it can "handle" fragmentation by dropping traffic once the rate exceeds 1000 pps, but that won't be good for your customers. In such scenarios it is better to have the same MTU on all the involved interfaces.
Reducing the MTU from 9000 to 4000 in your current setup would enormously improve performance: a smaller but consistent MTU is much better than having some interfaces with a bigger MTU.
/A
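A minimal sketch of why a uniform MTU fixes this, assuming the test sets honour the link MTU and send 4000-byte packets end to end (illustrative, not from the thread):

```python
# With a uniform 4000-byte MTU, no hop ever needs to fragment,
# so every packet stays in the hardware forwarding path (no punts).
MTU = 4000
rate_bps = 900e6

pps = rate_bps / (MTU * 8)   # ~28,125 pps, all forwarded in hardware
punts_needed = 0             # no packet exceeds any egress MTU
print(int(pps), punts_needed)
```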
