07-31-2023 07:09 AM
Hi all,
We have a Cisco 892 and we are experiencing some performance issues.
When testing with iperf from LAN to WAN at speeds of 750 Mbit/sec (packet size 1024K), we see packet loss when pinging the LAN IP address (Vlan 100) of the router.
I was told the throughput would be around 1.6Gbps.
Below is the config of the router. Any suggestions on how to improve performance?
Best Regards,
Guy
!
! Last configuration change at 13:59:49 UTC Mon Jul 31 2023 by syntigo
!
version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname Router
!
boot-start-marker
boot system flash:c800-universalk9-mz.SPA.154-3.M3.bin
boot-end-marker
!
aqm-register-fnf
!
logging buffered 65536 informational
!
no aaa new-model
memory-size iomem 25
!
!
!
!
!
!
!
!
!
!
!
!
!
!
ip cef
no ipv6 cef
!
!
!
!
!
multilink bundle-name authenticated
!
!
!
!
!
!
!
!
cts logging verbose
license udi pid C892FSP-K9 sn FCZ214992FF
!
!
username cisco privilege 15 secret 5 $1$mWFD$bAdqxlwLF8fIY6Ug1xz.U/
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
interface GigabitEthernet0
description LAN1
switchport access vlan 100
no ip address
!
interface GigabitEthernet1
description LAN1
switchport access vlan 100
no ip address
!
interface GigabitEthernet2
description *** unused ***
no ip address
shutdown
!
interface GigabitEthernet3
description *** unused ***
no ip address
shutdown
!
interface GigabitEthernet4
description *** unused ***
no ip address
shutdown
!
interface GigabitEthernet5
description *** unused ***
no ip address
shutdown
!
interface GigabitEthernet6
description *** unused ***
no ip address
shutdown
!
interface GigabitEthernet7
description *** unused ***
no ip address
shutdown
!
interface GigabitEthernet8
description *** unused ***
no ip address
shutdown
duplex auto
speed auto
!
interface GigabitEthernet9
description WAN
ip address 1.1.1.1 255.255.255.252
duplex auto
speed auto
!
interface Vlan1
description *** unused ***
no ip address
shutdown
!
interface Vlan100
description LAN1
ip address 2.2.2.254 255.255.255.0
!
ip forward-protocol nd
no ip http server
no ip http secure-server
!
!
ip route 0.0.0.0 0.0.0.0 1.1.1.2
!
!
!
control-plane
!
!
mgcp behavior rsip-range tgcp-only
mgcp behavior comedia-role none
mgcp behavior comedia-check-media-src disable
mgcp behavior comedia-sdp-force disable
!
mgcp profile default
!
!
!
!
!
!
banner motd ^C^C
!
line con 0
exec-timeout 5 0
login local
no modem enable
line aux 0
access-class 2 in
exec-timeout 5 0
login local
line vty 0 4
exec-timeout 5 0
login local
transport input all
line vty 5 15
exec-timeout 5 0
login local
transport input all
!
scheduler allocate 20000 1000
ntp access-group peer 10
ntp access-group serve-only 11
ntp access-group query-only 11
ntp server time.google.com
!
!
!
end
07-31-2023 08:11 AM - edited 07-31-2023 08:12 AM
- Use the latest advisory software version for the 892; check if that can help.
M.
07-31-2023 08:23 AM
show interface g0 and g1 <<- share this
07-31-2023 09:49 AM
Who told you the throughput should be 1.6Gbps?
What do the CPU stats look like during this test?
07-31-2023 10:45 AM
Hi @gwijnants
I attached a document with some very interesting tests run on this router, which I believe can be useful for you. That said, I think your expectations for this router are a bit high.
07-31-2023 11:56 AM
Interesting results, but limited by using interfaces running at only 100 Mbps. (?)
I've attached a Cisco white paper, which also measures 890 series throughput performance under various conditions.
Table 1 shows a max throughput rate of 1.4 Gbps for 1500-byte packets. As that particular test appears to represent the best possible throughput, that's why I asked about the source of the 1.6 Gbps figure for 1K-sized packets.
At the end of this Cisco white paper, Cisco only recommends (see figure 1) the 890 series for WAN circuits supporting up to 15 Mbps (that would be for duplex).
Much like Flavio's reference, this Cisco white paper shows the impact of packet size and services.
Software-based routers are generally limited by the performance/capacity of their CPU (which is why I also asked about CPU stats during the test).
Pinging the router itself will also consume its CPU.
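For a sense of scale, here is a rough packets-per-second calculation for the test load (my own back-of-envelope arithmetic, not a figure from either white paper):

```python
# Back-of-envelope packet rate the CPU must switch for 750 Mbit/s of 1500-byte packets.
# On the wire, Ethernet adds 14B header + 4B FCS + 8B preamble/SFD + 12B inter-frame gap.
rate_bps = 750_000_000
wire_bytes = 1500 + 14 + 4 + 8 + 12   # 1538 bytes per frame on the wire
pps = rate_bps / (wire_bytes * 8)
print(round(pps))  # roughly 61,000 packets per second
```

Every one of those packets is a per-packet switching decision for a software-forwarding router, which is why packet size matters so much in the throughput tables.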
08-01-2023 03:09 AM
Hi all,
Thank you so much for the help already.
About the 1.6 Gbps: I found that figure somewhere on the Cisco forums.
When testing with iperf and doing 1 stream of 1 Gbps, I have no packet loss when pinging the LAN (Vlan 100) gateway from the LAN client. When doing, for instance, 4 streams of 250 Mbit, I have packet loss. All tests use a frame size of 1500.
CPU load using 4 streams of 250 Mbit:
Router#sh processes cpu history
Router 09:37:08 AM Tuesday Aug 1 2023 UTC
333333333333333333333333333333333333333333333333333333333333
444888885555544444555554444455555555558888833333444443333355
100
90
80
70
60
50
40 ********** ***** *************** **
30 ************************************************************
20 ************************************************************
10 ************************************************************
0....5....1....1....2....2....3....3....4....4....5....5....6
0 5 0 5 0 5 0 5 0 5 0
CPU% per second (last 60 seconds)
Router#
Interfaces output
Router#sh int gi9
GigabitEthernet9 is up, line protocol is up
Hardware is PQ3_TSEC, address is 005d.7326.1c51 (bia 005d.7326.1c51)
Description: WAN
Internet address is 1.1.1.1/30
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 165/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full Duplex, 1Gbps, media type is RJ45
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 125310
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 3661000 bits/sec, 7584 packets/sec
5 minute output rate 648808000 bits/sec, 53585 packets/sec
10261262 packets input, 621444604 bytes, 0 no buffer
Received 10171 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 9 multicast, 54459 pause input
72783142 packets output, 110154696794 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
8 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
1 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
Router#sh int gi0
GigabitEthernet0 is up, line protocol is up
Hardware is Gigabit Ethernet, address is 005d.7326.1c3f (bia 005d.7326.1c3f)
Description: LAN1
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 167/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:13:46, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 654980000 bits/sec, 53950 packets/sec
5 minute output rate 3934000 bits/sec, 7658 packets/sec
73144774 packets input, 111005224199 bytes, 0 no buffer
Received 471 broadcasts (1350 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
10279741 packets output, 660173999 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
2 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
1 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
Router# sh int vlan 100
Vlan100 is up, line protocol is up
Hardware is EtherSVI, address is 005d.7326.1c3e (bia 005d.7326.1c3e)
Description: LAN1
Internet address is 2.2.2.254/24
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 167/255
Encapsulation ARPA, loopback not set
Keepalive not supported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 658711000 bits/sec, 54402 packets/sec
5 minute output rate 3377000 bits/sec, 7755 packets/sec
73765710 packets input, 111654175313 bytes, 0 no buffer
Received 1344 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
10385959 packets output, 565021337 bytes, 0 underruns
0 output errors, 1 interface resets
465 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
Router#
Is the overhead too big?
Best Regards,
Guy
08-01-2023 08:36 AM
Your CPU stats look good; the CPU does not appear to be overloaded.
Your WAN interface shows drops. As you have two active LAN interfaces, you can overrun your WAN interface. It might help to increase the queue size on the WAN interface. (40 is pretty small for a gig interface.)
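To put that default queue depth in perspective, here is a quick sketch (my own illustrative arithmetic) of how little burst absorption a 40-packet FIFO provides at gig speed:

```python
# How long a full 40-packet output FIFO takes to drain at 1 Gbps.
# 40 matches the "Output queue: 0/40" shown on the WAN interface above.
queue_pkts = 40
wire_bytes = 1538                 # 1500B packet plus on-wire Ethernet overhead
line_rate_bps = 1_000_000_000

drain_ms = queue_pkts * wire_bytes * 8 / line_rate_bps * 1000
print(f"{drain_ms:.2f} ms")       # the whole queue drains in about half a millisecond
```

So any burst from the two LAN ports that exceeds line rate for more than a fraction of a millisecond will overflow the queue and show up as output drops.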
I would suggest you clear the interface stats at the beginning of a test and see what they are at the end. (That way you can check whether drops decrease as you increase the egress queue size. Try 64, 128, 256, 512, 1024. Once drops decrease dramatically, don't increase further.)
You might also reduce your interface load-interval to the minimum value of 30 seconds.
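As a sketch of the two knobs above (the interface name comes from your posted config; 256 is just one of the trial queue depths, not a recommendation):

```
interface GigabitEthernet9
 hold-queue 256 out
 load-interval 30
```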
I'd really like to better understand exactly what you're doing.
You're using iPerf to generate either 1 stream or 4 streams, both now using 1500-byte packets (earlier you were using 1K size?), from LAN to WAN, correct? UDP or TCP? What is the destination of the stream(s)?
Whether 1 stream or 4, you get about the same max throughput, correct?
While the iPerf test is running with only 1 stream, you can ping the LAN interface (from a LAN port?) without ping drops, but when running 4 streams you see ping drops, correct?
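For reference, the two scenarios could be reproduced with something like the following (iperf3 syntax; the server address is a placeholder, and TCP is assumed since the protocol wasn't stated):

```
# single stream at ~1 Gbps
iperf3 -c <wan-server> -b 1000M -t 30
# four parallel streams of ~250 Mbps each
iperf3 -c <wan-server> -b 250M -P 4 -t 30
```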
So far, I'm not seeing a glaring problem that would explain your ping issue.
Do keep in mind that Cisco network devices treat pings as a low-priority function and will sometimes consider a bunch of pings a DoS attack and simply not respond to some of them. (I cannot say that's what's happening here.)
I also looked up your IOS version, 15.4.3M3. It's 8 years old and an ED release; M4 and later are MD. The last release in that train was 15.4.3M10. I would suggest updating to that version if you can, or consider updating to one of the two releases Cisco recommends.
Lastly, if you're pinging the router, I suggest you instead ping something "through" the router, to determine whether transit traffic is being impacted.
08-04-2023 04:18 AM
Hi Joseph,
Thank you for your feedback, much appreciated.
I understand your feedback regarding the WAN interface, but the problem already exists on the LAN side.
When I have a laptop on the LAN side sending 4 streams of 250 Mbit each, the laptop sees ping loss when pinging its default gateway (2.2.2.254). With 1 stream of 1 Gbps there is no ping loss.
I know Cisco deals with pings as low priority, but is this only for its own interfaces, or also for pings towards external IP addresses (8.8.8.8 for example)?
All iperf testing has been done using a packet size of 1500, TCP.
To increase the queue size (I think it's the number of packets?), do I use the following config?
policy-map Queue-size
 class 1
  queue-limit 512
  random-detect
and then link it to the interface.
Pinging destinations through the router also gives ping loss.
Best Regards,
Guy
08-04-2023 08:10 AM
On its face, it's unclear why 4 streams of 250 Mbps would cause ping drops when a single stream of 1 Gbps does not.
Are the iPerf streams always sourced from the same laptop?
If so, there are many factors that can cause this difference, including how accurately your laptop generates four 250 Mbps streams vs. a single gig stream, especially if they're TCP rather than UDP streams.
Although your question/concern is valid, getting to the root cause of what you're seeing might be very time-consuming, and may lead to an explanation that (unfortunately) amounts to "that's just the way it works", i.e. expected behavior.
Heck, even how iPerf works can be a factor. There's a reason professional network device performance reviews use specialized test equipment.
Or, further consider: is what you're seeing even causing a real issue with your actual "normal" traffic?
Per Cisco's recommendations for the 890 series, they're suitable for handling WAN links up to 15 Mbps in almost any situation, and in specific "usual" usage cases the highest throughput listed was only 64 Mbps (aggregate).
So what is your actual goal? You "need" no ping loss when pushing around 750 Mbps using multiple flows?
Personally, I find the results you're obtaining interesting (and would truly like to know the "why"), but I'm unsure it's interesting enough to be worth my time. Is it worth yours?
Anyway, to your question about how to increase the WAN interface's egress queue size, I had in mind the interface-level hold-queue # out command.
If you want to try a CBWFQ policy, perhaps:
policy-map Queue-size
 class class-default
  queue-limit #
  random-detect
! or
policy-map Queue-size
 class class-default
  fair-queue
!
interface g#
 service-policy output Queue-size