NCS-57C3 MPLS small frame throughput

pboehmer
Level 1

We have a simple L2VPN xconnect between a new NCS-57C3 and an existing ASR-9906 over a 100G link in a single rack.  We have JDSU 5800 test units attached to both ends on 10G ports.  We have a customer requesting that their RFC 2544 test be able to pass 70-byte frames at 9600 Mbps L1 in order to meet their requirements.

In our test lab we seem to max out at 8398 Mbps with 70-byte frames on the NCS side.  The 9906 has no problem with this rate.  Only when the frame size is set to 350 bytes or higher are we able to achieve over 9600 Mbps.  We stripped the config down to bare bones with no change.
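For reference, the back-of-the-envelope numbers (assuming the 5800's L1 rate counts the 20 bytes of per-frame preamble/SFD/inter-frame gap, which is how these testers normally report L1 Mbps):

\[
\frac{9600\ \text{Mbps}}{(70+20)\times 8\ \text{bits/frame}} \approx 13.33\ \text{Mpps}
\qquad
\frac{10000\ \text{Mbps}}{720\ \text{bits/frame}} \approx 13.89\ \text{Mpps (10GE line rate)}
\]

so the customer is effectively asking for about 96% of line rate, in packets per second, at this frame size.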

 

NCS-57C3:

interface TenGigE0/0/0/41
 description ## MPLS Test
 l2transport
!

l2vpn
 xconnect group MPLS
  p2p test
   interface TenGigE0/0/0/41
   neighbor ipv4 xxx.xxx.xxx.xxx pw-id 20250505
  !

 

ASR-9906:
interface TenGigE0/3/0/15
 l2transport
!

l2vpn
 xconnect group MPLS
  p2p test
   interface TenGigE0/3/0/15
   neighbor ipv4 xxx.xxx.xxx.xxx pw-id 20250505
  !
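For anyone following along, the obvious things to watch while the test runs are the output drops on each attachment circuit and the pseudowire state (standard IOS XR commands, listed from memory rather than pasted from our lab):

show interfaces TenGigE0/0/0/41 | include rate|drops
show interfaces TenGigE0/3/0/15 | include rate|drops
show l2vpn xconnect group MPLS detail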

 

4 Replies

filopeter
Level 1

It's just an idea, but try using an l2transport subinterface with untagged or default encapsulation instead of l2transport on the physical port.

interface TenGigE0/0/0/41
 description ## MPLS Test
!
interface TenGigE0/0/0/41.10 l2transport
 encapsulation untagged
!
l2vpn
 xconnect group MPLS
  p2p test
   interface TenGigE0/0/0/41.10
   neighbor ipv4 xxx.xxx.xxx.xxx pw-id 20250505
  !
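If you try this, the xconnect should come back up with the subinterface as the attachment circuit; a quick check (standard IOS XR command):

show l2vpn xconnect group MPLS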

 

No change, same results.

Is this issue solved?

MHM

pboehmer (Accepted Solution)
Level 1

TAC was able to figure out the issue.  The fix is to create a policy-map and then apply it to the interface with a 22-byte pad:

policy-map BW-TenGig-dn
 class class-default
  queue-limit percent 3 
 ! 
 end-policy-map
! 
interface TenGigE0/0/0/41
 mtu 9126
 service-policy output BW-TenGig-dn account user-defined 22
!
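For what it's worth, account user-defined 22 tells the QoS engine to add 22 bytes to each packet's length when it does shaping/queueing accounting; my reading is that this roughly covers the 20 bytes of per-frame preamble/SFD/inter-frame gap plus a small margin, though the exact value came from TAC.  To confirm the policy and the accounting are in effect (standard IOS XR commands):

show policy-map interface TenGigE0/0/0/41 output
show qos interface TenGigE0/0/0/41 output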