03-23-2023 02:57 AM
Hi All,
I have a Cisco FPR 2130 with eth1/13 and eth1/14 connected via 10G LR SFPs to an IXIA traffic generator, as a point-to-point setup. On the IXIA side I have a transceiver described as "Ixia, QSFP+ 40GBASE-PLR4 40GE pluggable optical transceiver, SMF (single-mode), 1310 nm, 10 km reach", which is a single 40G module with a 4x10G breakout cable.
When firing traffic, increasing the line rate to only 6% already produces 50 percent packet loss. Below is my interface configuration on the FPR 2130. Please help me if I am missing any config on these 10G interfaces.
FPR-ASA-4(config)# show int eth1/13
Interface Ethernet1/13 "IXIA-INSIDE-IN", is up, line protocol is up
Hardware is EtherSVI, BW 1000 Mbps, DLY 1000 usec
Full-Duplex(fullDuplex), 10000 Mbps(10gbps)
MAC address 643a.ea7c.dab0, MTU 1500
IP address 192.168.100.1, subnet mask 255.255.255.0
5340187449 packets input, 7115448968048 bytes, 0 no buffer
FPR-ASA-4(config)# show int eth1/14
Interface Ethernet1/14 "IXIA-OUTSIDE-IN", is up, line protocol is up
Hardware is EtherSVI, BW 1000 Mbps, DLY 1000 usec
Full-Duplex(fullDuplex), 10000 Mbps(10gbps)
MAC address 643a.ea7c.dab1, MTU 1500
IP address 192.168.200.1, subnet mask 255.255.255.0
5738347179 packets input, 7509857397792 bytes, 0 no buffer
Received 6 broadcasts, 0 runts, 0 giants
1 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
03-25-2023 02:50 AM
First, use "show interface detail" to check the "no buffer" and overrun counters on the interfaces, including the Internal-Data interface (it should be Internal-Data0/1, if I remember correctly).
03-26-2023 06:37 AM
It seems that increasing the frame size to 9000 on the 10 Gig interface resolves the issue. But we are running a flat 1500 MTU across the network, and I don't want to increase the MTU. Interface detail is given below. When I reduce the MTU again, the packet loss starts increasing.
Hardware is EtherSVI, BW 1000 Mbps, DLY 1000 usec
Full-Duplex(fullDuplex), 10000 Mbps(10gbps)
MAC address 643a.ea7c.dab0, MTU 9000
IP address 192.168.100.1, subnet mask 255.255.255.0
4453950699 packets input, 12613502869800 bytes, 0 no buffer
Received 56 broadcasts, 0 runts, 0 giants
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
0 pause input, 0 resume input
1506308220 packets output, 6731586126816 bytes, 0 underruns
0 pause output, 0 resume output
0 output errors, 0 collisions, 0 interface resets
0 late collisions, 0 deferred
0 input reset drops, 0 output reset drops
FPR-ASA-4(config)# show run interface Internal-Data0/1
^
ERROR: % Invalid input detected at '^' marker.
FPR-ASA-4(config)# show interface Internal-Data0/1
^
ERROR: % Invalid input detected at '^' marker.
FPR-ASA-4(config)# show interface Internal-Data1/1
^
ERROR: % Invalid input detected at '^' marker.
03-27-2023 03:25 AM
You configured the target traffic rate in IXIA in bps (or Bps), right? In that case you decrease the pps rate when you increase the MTU. Do you generate a single stream from IXIA, or multiple streams? If you generate a single stream, you could be hitting the throughput limit of a single RX ring. Each individual flow always goes through the same RX ring on Internal-Data0/1, and an RX ring can overflow if the pps becomes high.

From "show int detail" (without arguments) you can see that there are 32 RX rings on the FPR 2130, so it is expected that single-RX-ring throughput may be roughly 30 times lower than the aggregated system throughput. If an RX ring overflows, you should see "no buffer" drops and/or "overruns" on Internal-Data0/1 at the interface level. I'm not sure whether newer ASA versions can also display the overrun counter per RX ring; probably not.

Internal-Data0/1 represents the internal Marvell 40G switch port on the FPR 2130, which connects the front-panel Ethernet ports to the Octeon NPU running ASA.
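The pps-vs-MTU relationship described above can be sketched with quick arithmetic. The Ethernet overhead figures below are standard (14-byte header + 4-byte FCS, plus 8-byte preamble/SFD and 12-byte inter-frame gap on the wire); the 32-ring count comes from the "show int detail" output, but the script itself is only an illustration, not measured FPR 2130 behavior:

```python
# At a fixed bit rate, pps falls as frame size grows, so a single flow
# hashed to one RX ring generates far fewer packets per second with
# 9000-byte frames than with 1500-byte frames.

LINE_RATE_BPS = 10_000_000_000   # 10G front-panel interface
L2_OVERHEAD = 18                 # Ethernet header (14) + FCS (4)
WIRE_OVERHEAD = 20               # preamble/SFD (8) + inter-frame gap (12)

def pps_at(mtu: int, rate_fraction: float) -> float:
    """Packets per second for full-size frames at a fraction of line rate."""
    wire_bytes = mtu + L2_OVERHEAD + WIRE_OVERHEAD
    return rate_fraction * LINE_RATE_BPS / (wire_bytes * 8)

# A single stream at 6% of 10G with 1500-byte frames:
print(f"{pps_at(1500, 0.06):,.0f} pps")   # 48,765 pps
# The same 6% of line rate with 9000-byte frames:
print(f"{pps_at(9000, 0.06):,.0f} pps")   # 8,298 pps
```

Since a single stream lands entirely on one of the 32 RX rings, the roughly sixfold pps reduction from jumbo frames can be enough to keep that one ring from overflowing, which matches the observation that MTU 9000 "fixed" the loss.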
ASA 9.12:
Interface Internal-Data0/1 "", is up, line protocol is up
Hardware is , BW 10000 Mbps, DLY 10 usec
(Unknown), (Unknown)
Input flow control is unsupported, output flow control is unsupported
MAC address 000f.b748.4800, MTU not set
IP address unassigned
233676 packets input, 22152323 bytes, 2 no buffer
Received 216337 broadcasts, 0 runts, 0 giants
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
0 pause input, 0 resume input
0 L2 decode drops, 101 demux drops
17605 packets output, 1689879 bytes, 0 underruns
0 pause output, 0 resume output
0 output errors, 0 collisions, 0 interface resets
0 late collisions, 0 deferred
0 output decode drops
0 backplane ingress drops, 0 backplane egress drops
0 input reset drops, 0 output reset drops
input queue (blocks free curr/low): hardware (0/0)
output queue (blocks free curr/low): hardware (0/0)
Queue Stats:
RX[00]: Packets: 215209 Bytes: 13967796
Blocks free curr/low: 4000/3945
RX[01]: Packets: 530 Bytes: 201981
Blocks free curr/low: 4000/3999
RX[02]: Packets: 580 Bytes: 221126
Blocks free curr/low: 4000/3999
RX[03]: Packets: 506 Bytes: 188350
Blocks free curr/low: 4000/3999
RX[04]: Packets: 559 Bytes: 211901
Blocks free curr/low: 4000/3999
RX[05]: Packets: 532 Bytes: 203893
Blocks free curr/low: 4000/3999
RX[06]: Packets: 521 Bytes: 188460
Blocks free curr/low: 4000/3999
RX[07]: Packets: 528 Bytes: 199173
Blocks free curr/low: 4000/3999
RX[08]: Packets: 561 Bytes: 211325
Blocks free curr/low: 4000/3999
RX[09]: Packets: 539 Bytes: 210733
Blocks free curr/low: 4000/3999
RX[10]: Packets: 2113 Bytes: 1120722
Blocks free curr/low: 4000/3999
RX[11]: Packets: 556 Bytes: 198858
Blocks free curr/low: 4000/3999
RX[12]: Packets: 560 Bytes: 211263
Blocks free curr/low: 4000/3999
RX[13]: Packets: 517 Bytes: 190507
Blocks free curr/low: 4000/3999
RX[14]: Packets: 498 Bytes: 188680
Blocks free curr/low: 4000/3999
RX[15]: Packets: 568 Bytes: 215725
Blocks free curr/low: 4000/3999
RX[16]: Packets: 557 Bytes: 205023
Blocks free curr/low: 4000/3999
RX[17]: Packets: 550 Bytes: 202377
Blocks free curr/low: 4000/3999
RX[18]: Packets: 556 Bytes: 210597
Blocks free curr/low: 4000/3999
RX[19]: Packets: 539 Bytes: 197072
Blocks free curr/low: 4000/3999
RX[20]: Packets: 573 Bytes: 221117
Blocks free curr/low: 4000/3999
RX[21]: Packets: 536 Bytes: 193295
Blocks free curr/low: 4000/3999
RX[22]: Packets: 547 Bytes: 204191
Blocks free curr/low: 4000/3999
RX[23]: Packets: 536 Bytes: 201581
Blocks free curr/low: 4000/3999
RX[24]: Packets: 605 Bytes: 218365
Blocks free curr/low: 4000/3999
RX[25]: Packets: 549 Bytes: 208241
Blocks free curr/low: 4000/3999
RX[26]: Packets: 586 Bytes: 218149
Blocks free curr/low: 4000/3999
RX[27]: Packets: 548 Bytes: 206361
Blocks free curr/low: 4000/3999
RX[28]: Packets: 473 Bytes: 177782
Blocks free curr/low: 4000/3999
RX[29]: Packets: 583 Bytes: 223191
Blocks free curr/low: 4000/3999
RX[30]: Packets: 542 Bytes: 203540
Blocks free curr/low: 4000/3999
RX[31]: Packets: 517 Bytes: 196116
Blocks free curr/low: 4000/3999
Control Point Interface States:
Interface number is 2
Interface config status is active
Interface state is active
ASA 9.8 (information not available before 9.12 -- CSCvh31365)
Interface Internal-Data0/1 "", is up, line protocol is up
Hardware is , BW 10000 Mbps, DLY 10 usec
(Unknown), (Unknown)
Input flow control is unsupported, output flow control is unsupported
MAC address 000f.b748.4800, MTU not set
IP address unassigned
5659639895 packets input, 4565384664350 bytes, 0 no buffer
Received 26604312 broadcasts, 0 runts, 0 giants
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
0 pause input, 0 resume input
0 L2 decode drops, 1615 demux drops
1260327135 packets output, 917719098599 bytes, 0 underruns
0 pause output, 0 resume output
0 output errors, 0 collisions, 0 interface resets
0 late collisions, 0 deferred
0 output decode drops
0 backplane ingress drops, 0 backplane egress drops
0 input reset drops, 0 output reset drops
input queue (blocks free curr/low): hardware (0/0)
output queue (blocks free curr/low): hardware (0/0)
Control Point Interface States:
Interface number is 2
Interface config status is active
Interface state is active
03-28-2023 05:17 AM
Thank you so much, excellent explanation. Previously I created a single stream, which was causing the RX ring throughput overrun. I then created multiple streams, which still caused loss at 1500 MTU but looked good with no loss at 9000 MTU.
Now, after multiple tweaks in my testing, I selected the option called "split rate evenly among ports" (screenshot attached), which seems to bring the loss at 1500 MTU down to 0% when the line rate is 90%. Previously I was using "Apply rate on all ports", which was causing packet loss at 1500 MTU.
03-27-2023 03:48 AM
FPR-ASA-4(config)# show int eth1/13 <<- this output is not complete
Interface Ethernet1/13 "IXIA-INSIDE-IN", is up, line protocol is up
Hardware is EtherSVI, BW 1000 Mbps, DLY 1000 usec
Full-Duplex(fullDuplex), 10000 Mbps(10gbps)
MAC address 643a.ea7c.dab0, MTU 1500
IP address 192.168.100.1, subnet mask 255.255.255.0
5340187449 packets input, 7115448968048 bytes, 0 no buffer
03-27-2023 02:10 AM
Hi,
Are you sure that IXIA is not sending packets larger than 1500?
I'm asking because you've tested this using IXIA, and it's easy to mess things up during a test without realizing what you're actually doing. I've used it in the past and it can be very confusing.
Same goes for BreakingPoint.
BR,
Octavian
03-28-2023 05:20 AM
After multiple tweaks in my testing, I selected the option called "split rate evenly among ports" (screenshot attached), which seems to bring the loss at 1500 MTU down to 0% when the line rate is 90%. Previously I was using "Apply rate on all ports", which was causing packet loss at 1500 MTU. I need to do some reading on how splitting the rate evenly solved the issue.
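My reading of those two option names, which should be confirmed against the IXIA documentation, is that "Apply rate on all ports" makes every transmit port fire at the full configured rate, while "split rate evenly among ports" divides the configured rate across them. Under that assumption, the difference for the two ports in this test is simply:

```python
# Hypothetical sketch of the two IXIA rate modes, based only on the
# option names; the exact semantics should be checked in the IXIA docs.

def per_port_rate(total_pct: float, ports: int, split_evenly: bool) -> float:
    """Per-port line-rate percentage under each mode."""
    return total_pct / ports if split_evenly else total_pct

# Two transmit ports, 90% configured rate:
# "Apply rate on all ports": each port fires at 90%, doubling the
#   aggregate pps arriving at the firewall's RX rings.
# "Split rate evenly among ports": each port fires at 45%, so the
#   aggregate stays at the configured 90%.
print(per_port_rate(90, 2, split_evenly=False))  # 90.0
print(per_port_rate(90, 2, split_evenly=True))   # 45.0
```

If that interpretation is right, the old mode was effectively offering far more traffic than intended, which would explain why the 1500-MTU loss disappeared once the rate was split.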