
Fragmentation causing low throughput on ASR9k

Atif Siddiqui
Level 1

Setup:

Testset A <GigInt------ LINK MTU 9000 ---GigInt> non-Cisco device <GigInt---- LINK MTU 4000---GigInt> ASR9K <---- LINK MTU 9000-----> Testset B

A bidirectional stream is running through with a packet size of 5000 bytes (fragmentation allowed); traffic is 900 MB.

The stream from Testset A is fine: the full 900 MB reaches Testset B. (The non-Cisco device fragments the 5000-byte packets due to its interface MTU of 4000, but transmits the traffic with no loss and low CPU usage.)

The stream from Testset B is being dropped on the ASR9k egress interface (connected to the non-Cisco device). The ASR9K receives all 900 MB of traffic but sends out only 70 MB to the MX. We have checked LPTS; no counters are incrementing.

We also tested a 7606 with an ES+ card in place of the ASR9k; the results are similar, with traffic around 70-80 MB.

Has anyone seen this behaviour? Can we modify the config to make it work?

thanks,


6 Replies

Alexei Kiritchenko
Cisco Employee

Hello Atif,

Could you provide us with the following output from the ASR so we can better understand the test:

  • show ins ac su
  • show int
  • show int
  • show run

Have a look at the following packet-drop troubleshooting guide to identify the drop reason:

https://supportforums.cisco.com/docs/DOC-15552

Regards,

/A

OK, I will go through the link you have provided.

Please have a look at the attached file, it has the show commands that you requested.

My packet size is 5000 bytes, generated from the test set, but it allows fragmentation: no DF bit is set. I am attaching the file output as well.

Thanks

Atif

Are the counters below showing anything?

RP/0/RSP0/CPU0:ios#show controllers np count np4 location 0/0/CPU0 | i FRAG
Thu Sep 15 04:48:45.364 UTC
167  IPV4_FRAG_NEEDED_PUNT                               23014497           0
168  IPV4_FRAG_NEEDED_PUNT_EXCD                         131973547           0
265  MPLS_FRAG_NEEDED_PUNT                                2135215        1000
266  MPLS_FRAG_NEEDED_PUNT_EXCD                           5697694        1910


Thx for the info.

Fragmentation on an ASR is done in the slow path, meaning a packet is punted to the LC CPU for fragmentation and then re-injected back into the egress path.
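To picture what each punted packet costs, here is a minimal sketch of the IPv4 fragment sizes for the 5000-byte test packets at the MTU-4000 egress (an illustration assuming a plain 20-byte IPv4 header with no options):

```python
# Sketch: on-wire IPv4 fragment sizes for one packet at a given egress MTU.
# Assumes a plain 20-byte IPv4 header (no options).
IP_HDR = 20

def fragment_sizes(packet_len, mtu):
    """Return the on-wire sizes of the fragments of a single IPv4 packet."""
    if packet_len <= mtu:
        return [packet_len]  # fits the link: no fragmentation needed
    payload = packet_len - IP_HDR
    # Every non-final fragment must carry a payload that is a multiple of 8 bytes.
    per_frag = (mtu - IP_HDR) // 8 * 8
    sizes = []
    while payload > per_frag:
        sizes.append(IP_HDR + per_frag)
        payload -= per_frag
    sizes.append(IP_HDR + payload)
    return sizes

print(fragment_sizes(5000, 4000))  # [3996, 1024]
```

So every 5000-byte packet arriving from Testset B means one punt to the LC CPU and two re-injected frames.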

To keep CPU utilization at a reasonable level, LPTS polices this punt at 1000 pps; packets exceeding that rate are dropped, and that is what the FRAG_NEEDED_PUNT_EXCD counters show us. This can be verified using the show lpts pifib hardware static-police command.

For example:

#show lpts pifib hardware static-police location 0/1/CPU0 | i FRAG

Thu Sep 15 10:15:14.231 CEST

IPV4_FRAG_NEEDED_PUNT   NETIO_LO_STREAM_ID     1000       400       217621               0                   Local              

MPLS_FRAG_NEEDED_PUNT   NETIO_LO_STREAM_ID     1000       400       0                   0                   Local        

The rate can be increased, but be aware that this can lead to high LC CPU utilization.

For example:

!

lpts punt police location 0/1/CPU0

      exception ipv4 fragment rate 5000

!

#show lpts pifib hardware static-police location 0/1/CPU0 | i FRAG  

Thu Sep 15 10:13:54.795 CEST

IPV4_FRAG_NEEDED_PUNT   NETIO_LO_STREAM_ID     5000       5000       217621               0                   Local              

MPLS_FRAG_NEEDED_PUNT   NETIO_LO_STREAM_ID     1000       400       0                   0                  Local  
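As a rough sanity check of what these policer rates allow, a back-of-the-envelope sketch (taking the quoted 900 MB as roughly 900 Mb/s of 5000-byte packets, and ignoring the policer's burst allowance):

```python
# Back-of-the-envelope: throughput ceiling imposed by the fragmentation punt policer.
PKT_BITS = 5000 * 8  # 5000-byte test packets

def offered_pps(rate_mbps):
    """Packets per second hitting the fragmentation punt path."""
    return rate_mbps * 1_000_000 / PKT_BITS

def policed_mbps(policer_pps):
    """Best-case forwarded rate when every packet must be punted."""
    return policer_pps * PKT_BITS / 1_000_000

print(offered_pps(900))    # 22500.0 pps offered against the punt policer
print(policed_mbps(1000))  # 40.0  Mb/s ceiling at the default 1000 pps
print(policed_mbps(5000))  # 200.0 Mb/s ceiling at the raised 5000 pps
```

The observed 70 MB and 120 MB figures will not match these ceilings exactly (the policer has a burst allowance, and punted packets also compete for LC CPU), but the order of magnitude does.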

/A

OK. We tried adjusting this rate and CPU utilization went up, but we still did not get our desired 900 MB of throughput; it was around 120 MB (up from 70 MB). It therefore seems unlikely that we would modify the default rate.

Now, in a real production scenario, is this going to be a problem at any point, or do we not expect to handle fragmentation at this rate and volume? This was a test. I see that a Cisco 7606 with RSP720 and ES+ cards shows the same behaviour.

any insight and advice will be great.

thanks.

The ASR9k itself is protected by LPTS, so it can “handle” fragmentation by dropping the traffic once the rate exceeds 1000 pps. But that won’t be good for your customers. In such scenarios, it is better to have the same MTU on all the involved interfaces.

Reducing the MTU from 9000 to 4000 in your current setup would enormously improve performance, because no interface along the path would then have to fragment. Hence, in this case, a smaller but uniform MTU is much better than having some interfaces with a bigger MTU.
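The point can be sketched as a simple hop check (hypothetical per-hop egress MTU lists; a hop must fragment only when the arriving packet exceeds its egress MTU, so a uniform MTU with endpoints capping packets at that size keeps forwarding in hardware):

```python
# Which hops along a path must fragment a packet of a given size?
# (Hypothetical egress-MTU lists for illustration.)
def fragmenting_hops(packet_len, egress_mtus):
    return [mtu for mtu in egress_mtus if packet_len > mtu]

print(fragmenting_hops(5000, [9000, 4000, 9000]))  # [4000]: the mixed-MTU path forces a punt
print(fragmenting_hops(4000, [4000, 4000, 4000]))  # []: uniform MTU, no slow-path fragmentation
```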

/A