traffic in A9K-2X100GE-TR

Hi,

I'm running a throughput test on an A9K-2X100GE-TR in a 9006 V2 chassis with 2 x A9K-RSP880-SE before shipping it to the customer, and right now I can only push roughly 20 Gbps of traffic per port.

 

The RSPs are running 32-bit IOS-XR 6.4.2 and all FPDs are up to date.

 

 

The configuration is as basic as it gets:

!
interface HundredGigE0/0/0/0
mtu 9216
ipv4 address 1.1.1.2 255.255.255.0
load-interval 30
!
interface HundredGigE0/0/0/1
mtu 9216
ipv4 address 2.2.2.2 255.255.255.0
load-interval 30
!

The testing device is a Viavi MTS5800 with dual 100G ports (on other platforms/hardware it works like a charm).

 

And this is the output of the interfaces:

RP/0/RSP0/CPU0:ios#show inter hu0/0/0/0
Mon Jul 29 23:24:42.024 UTC
HundredGigE0/0/0/0 is up, line protocol is up
Interface state transitions: 11
Hardware is HundredGigE, address is a80c.0d3e.baa6 (bia a80c.0d3e.baa6)
Layer 1 Transport Mode is LAN
Internet address is 1.1.1.2/24
MTU 9216 bytes, BW 100000000 Kbit (Max: 100000000 Kbit)
reliability 255/255, txload 45/255, rxload 254/255
Encapsulation ARPA,
Full-duplex, 100000Mb/s, LR4, link type is force-up
output flow control is off, input flow control is off
Carrier delay (up) is 10 msec
loopback not set,
Last link flapped 00:07:38
ARP type ARPA, ARP timeout 04:00:00
Last input 00:00:00, output 00:00:00
Last clearing of "show interface" counters 00:05:32
30 second input rate 99731005000 bits/sec, 1383002 packets/sec
30 second output rate 17979740000 bits/sec, 249331 packets/sec
459124521 packets input, 4138548435486 bytes, 0 total input drops
0 drops for unrecognized upper-level protocol
Received 0 broadcast packets, 0 multicast packets
0 runts, 0 giants, 0 throttles, 0 parity
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
82772050 packets output, 746107258700 bytes, 0 total output drops
Output 0 broadcast packets, 0 multicast packets
0 output errors, 0 underruns, 0 applique, 0 resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions


RP/0/RSP0/CPU0:ios#show inter hu0/0/0/1
Mon Jul 29 23:25:05.002 UTC
HundredGigE0/0/0/1 is up, line protocol is up
Interface state transitions: 9
Hardware is HundredGigE, address is a80c.0d3e.baa7 (bia a80c.0d3e.baa7)
Layer 1 Transport Mode is LAN
Internet address is 2.2.2.2/24
MTU 9216 bytes, BW 100000000 Kbit (Max: 100000000 Kbit)
reliability 255/255, txload 45/255, rxload 254/255
Encapsulation ARPA,
Full-duplex, 100000Mb/s, LR4, link type is force-up
output flow control is off, input flow control is off
Carrier delay (up) is 10 msec
loopback not set,
Last link flapped 00:08:04
ARP type ARPA, ARP timeout 04:00:00
Last input 00:00:00, output 00:00:00
Last clearing of "show interface" counters 00:05:55
30 second input rate 99731152000 bits/sec, 1383004 packets/sec
30 second output rate 17979742000 bits/sec, 249331 packets/sec
490896704 packets input, 4424942887256 bytes, 0 total input drops
0 drops for unrecognized upper-level protocol
Received 0 broadcast packets, 0 multicast packets
0 runts, 0 giants, 0 throttles, 0 parity
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
88499902 packets output, 797738116628 bytes, 0 total output drops
Output 0 broadcast packets, 0 multicast packets
0 output errors, 0 underruns, 0 applique, 0 resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions

As you can see, the input is receiving almost 100G but the output is only roughly 20G.

 

Any hint on this one?

 

Regards,

 

 


5 Replies

smilstea
Cisco Employee

On the 1x100GE and 2x100GE cards you need a sufficient number of flows to reach line rate; the limit for a single flow is roughly 13G in some releases and 18-20G in later releases.

 

Sam
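As a quick sanity check on the ceiling Sam describes, here is a back-of-the-envelope sketch in Python. The 13G/20G figures come from the reply above; they are rough observations, not a spec.

```python
# Back-of-the-envelope sketch of the per-flow ceiling described above.
# The per-flow cap values are taken from the reply; they are rough, not a spec.

LINE_RATE_GBPS = 100.0

def estimated_throughput(num_flows: int, per_flow_cap_gbps: float) -> float:
    """Aggregate rate when every flow is individually capped."""
    return min(num_flows * per_flow_cap_gbps, LINE_RATE_GBPS)

# A single flow tops out at the per-flow ceiling (matching the ~18-20G seen):
print(estimated_throughput(1, 20.0))   # 20.0
# At a ~20G ceiling you need at least 5 perfectly balanced flows for line rate:
print(estimated_throughput(5, 20.0))   # 100.0
```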

Aleksandar Vidakovic
Cisco Employee

The A9K-2X100GE-TR is a Typhoon-generation line card. You can find the line card architecture diagram in BRKARC-2003 from one of the Cisco Live events before 2015 (i.e. before the introduction of the Tomahawk line cards). The hardware available at the time couldn't support a single 100G flow, hence two NPs per port and two FIAs. Try to disperse the traffic across multiple flows; that should allow you to go up to 100G.

 

However, my sincere recommendation is to go for a Tomahawk line card, because that is a true 100G architecture.

 

[attached image: Screenshot 2019-07-29 at 18.44.36.png — line card architecture diagram]
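The two-NPs-per-port point can be sketched as a toy model. The hash function and the 50G per-NP share used here are illustrative assumptions, not Cisco's actual implementation; the model only shows why a flow pinned to one NP cannot fill the port.

```python
# Toy model of the architecture described above: each 100G port is
# served by two NPs, and a given flow always hashes to the same NP,
# so a single flow is bounded by one NP's share of the port. The hash
# and the 50G per-NP share are assumptions for illustration only.

import zlib

NUM_NPS = 2
NP_CAPACITY_GBPS = 50.0  # assumed per-NP share, for illustration only

def np_for_flow(flow_id: str) -> int:
    # Deterministic hash: the same flow always lands on the same NP.
    return zlib.crc32(flow_id.encode()) % NUM_NPS

def aggregate_rate(flows: dict) -> float:
    """Sum the offered load per NP, capping each NP at its capacity."""
    per_np = [0.0] * NUM_NPS
    for flow_id, rate_gbps in flows.items():
        per_np[np_for_flow(flow_id)] += rate_gbps
    return sum(min(load, NP_CAPACITY_GBPS) for load in per_np)

# One 100G flow is pinned to a single NP and gets capped:
print(aggregate_rate({"1.1.1.1->2.2.2.1:5001": 100.0}))  # 50.0
```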

Hi, and thanks for the answers, Aleksandar and smilstea.

 

I've now created a traffic flow with 10 streams (so 10 streams of 10G each), but it's only doing roughly 40G:

RP/0/RSP0/CPU0:ios#show inter hu0/0/0/0
Tue Jul 30 16:27:04.631 UTC
HundredGigE0/0/0/0 is up, line protocol is up
Interface state transitions: 11
Hardware is HundredGigE, address is a80c.0d3e.baa6 (bia a80c.0d3e.baa6)
Layer 1 Transport Mode is LAN
Layer 2 Transport Mode
MTU 9216 bytes, BW 100000000 Kbit (Max: 100000000 Kbit)
reliability 255/255, txload 110/255, rxload 251/255
Encapsulation ARPA,
Full-duplex, 100000Mb/s, LR4, link type is force-up
output flow control is off, input flow control is off
Carrier delay (up) is 10 msec
loopback not set,
Last link flapped 00:06:44
Last input 00:00:00, output 00:00:00
Last clearing of "show interface" counters 01:25:19
30 second input rate 98436620000 bits/sec, 8127200 packets/sec
30 second output rate 43349086000 bits/sec, 3579020 packets/sec
59823807044 packets input, 9696844908500 bytes, 0 total input drops
68 drops for unrecognized upper-level protocol
Received 506 broadcast packets, 0 multicast packets
0 runts, 0 giants, 0 throttles, 0 parity
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
2295497214 packets output, 2596825425420 bytes, 11130 total output drops
Output 12 broadcast packets, 0 multicast packets
0 output errors, 0 underruns, 0 applique, 0 resets
0 output buffer failures, 0 output buffers swapped out
4 carrier transitions


RP/0/RSP0/CPU0:ios#show inter hu0/1/0/0
Tue Jul 30 16:28:46.186 UTC
HundredGigE0/1/0/0 is up, line protocol is up
Interface state transitions: 3
Hardware is HundredGigE, address is a80c.0d7b.55e8 (bia a80c.0d7b.55e8)
Layer 1 Transport Mode is LAN
Layer 2 Transport Mode
MTU 9216 bytes, BW 100000000 Kbit (Max: 100000000 Kbit)
reliability 255/255, txload 110/255, rxload 251/255
Encapsulation ARPA,
Full-duplex, 100000Mb/s, LR4, link type is force-up
output flow control is off, input flow control is off
Carrier delay (up) is 10 msec
loopback not set,
Last link flapped 00:08:30
Last input 00:00:00, output 00:00:00
Last clearing of "show interface" counters 01:27:01
30 second input rate 98436235000 bits/sec, 8127166 packets/sec
30 second output rate 43349618000 bits/sec, 3579064 packets/sec
3266450957 packets input, 4945406723566 bytes, 4 total input drops
0 drops for unrecognized upper-level protocol
Received 18 broadcast packets, 0 multicast packets
0 runts, 0 giants, 0 throttles, 0 parity
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
1451458789 packets output, 2197508606546 bytes, 0 total output drops
Output 0 broadcast packets, 0 multicast packets
0 output errors, 0 underruns, 0 applique, 0 resets
0 output buffer failures, 0 output buffers swapped out
3 carrier transitions


RP/0/RSP0/CPU0:ios#

 

Is there any command to check where the drop is happening (for my own understanding)?

 

Regards,

 

10 flows are not likely to give you enough variety in the hash calculation.
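A quick Monte-Carlo sketch illustrates the point. Uniform random hashing and a 25G cap on each of 4 internal paths are assumptions for illustration, not the card's real hash or internal topology; the takeaway is only that 10 large flows rarely balance well enough to fill the port.

```python
# Monte-Carlo sketch: 10 flows of 10G hashed into 4 equal internal
# paths of 25G each rarely balance well enough to fill the port.
# Uniform hashing and the 25G-per-path cap are illustrative assumptions.

import random

PORT_GBPS = 100.0
NUM_PATHS = 4
PATH_CAP_GBPS = PORT_GBPS / NUM_PATHS  # 25G per internal path (assumed)
FLOW_RATE_GBPS = 10.0

def achievable(num_flows: int, rng: random.Random) -> float:
    """Throughput after hashing flows to paths and capping each path."""
    loads = [0.0] * NUM_PATHS
    for _ in range(num_flows):
        loads[rng.randrange(NUM_PATHS)] += FLOW_RATE_GBPS
    return sum(min(load, PATH_CAP_GBPS) for load in loads)

rng = random.Random(1)
trials = [achievable(10, rng) for _ in range(10_000)]
avg = sum(trials) / len(trials)
# With 10 x 10G flows even the best possible split (30/30/20/20)
# yields only 90G, and the random average is noticeably lower.
print(f"average achievable with 10 x 10G flows: ~{avg:.0f}G of 100G")
```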

 

Check the NP counters ("sh controllers np counters ...") and FIA stats/drops ("sh controllers fabric fia stats ...", "sh controllers fabric fia drops ingress ...").

 

Also check how well the PHY is load-balancing the flows. Connect to the LC and execute "np_perf -e 1", then look for the section starting with "RFD per-port usage" and see how balanced the RFD usage is across the four SGMII ports.

 

What use are you planning for this line card? I would really recommend using a Tomahawk instead of the 2x100G. This 2x100G line card has reached its end-of-service-attachment date: https://www.cisco.com/c/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/eos-eol-notice-c51-738854.html

 

/Aleksandar

 

PS: additional comments based on a query that I have received since:

  1. RFDs are 256-byte buffers that hold the packet while it's being processed by the NP microcode.
  2. Load balancing over the SGMII connections between the MAC PHY and the NP on the 2x100G Typhoon line card is explained in BRKSPG-2904 from Cisco Live Europe 2017, in the section "ASR9000 Load Balancing".
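For intuition on point 1 above, here is a rough sketch of how RFD consumption scales with frame size. One-RFD-per-256-bytes allocation is my assumption; the thread only states the 256-byte buffer size.

```python
# Rough illustration: if a packet occupies one 256-byte RFD per
# 256 bytes of its length (the exact allocation scheme is an
# assumption; the thread only states the buffer size), large frames
# consume many more RFDs than small ones.

import math

RFD_BYTES = 256

def rfds_needed(packet_bytes: int) -> int:
    return math.ceil(packet_bytes / RFD_BYTES)

print(rfds_needed(64))     # 1  -> minimum-size frame
print(rfds_needed(1500))   # 6  -> standard MTU frame
print(rfds_needed(9216))   # 36 -> jumbo frame at the MTU configured here
```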

Hi Aleksandar,

 

It's for a customer that plans to use it for route aggregation, and not at full 100G, but per our agreement (I work as an external consultant for them) I need to test everything at full line rate (if possible) before even starting to dig into the config side of the equation.

 

They provided the equipment to me, but of course I will pass along what you've explained.

 

Thanks!
