05-10-2024 01:10 AM
Hi All!
I have a problem with running Guest Shell on a Nexus 9000. I managed to enable Guest Shell and install iperf3 (a rough sketch of the setup commands is included after the show output below), but when I measure the channel bandwidth it shows tiny values, only a few hundred kilobits per second, while the real link is 25G. The switches run the same software version:
Software
BIOS: version 07.69
NXOS: version 9.3(8)
BIOS compile time: 04/07/2021
NXOS image file is: bootflash:///nxos.9.3.8.bin
Hardware
cisco Nexus9000 C93180YC-EX chassis
Intel(R) Xeon(R) CPU @ 1.80GHz with 24631952 kB of memory.
Processor Board ID FDO250617ZH
GuestShell versions are the same:
show guestshell detail
Virtual service guestshell+ detail
State : Activated
Package information
Name : guestshell.ova
Path : /isanboot/bin/guestshell.ova
Application
Name : GuestShell
Installed version : 2.10(0.0)
Description : Cisco Systems Guest Shell
Signing
Key type : Cisco release key
Method : SHA-1
Licensing
Name : None
Version : None
Resource reservation
Disk : 1000 MB
Memory : 500 MB
CPU : 10% system CPU
Attached devices
Type Name Alias
---------------------------------------------
Disk _rootfs
Disk /cisco/core
Serial/shell
Serial/aux
Serial/Syslog serial2
Serial/Trace serial3
show virtual-service list
Virtual Service List:
Name Status Package Name
-----------------------------------------------------------------------
guestshell+ Activated guestshell.ova
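For reference, the Guest Shell / iperf3 setup on both switches was roughly the following. This is only a sketch: the resize step is optional, and the package install assumes the default CentOS repositories are reachable from the Guest Shell through the management VRF (adjust the VRF name to your environment).
! on the NX-OS side: enable Guest Shell and (optionally) raise its CPU share
guestshell enable
guestshell resize cpu 10
guestshell
# inside the Guest Shell: install iperf3 through the management VRF
sudo chvrf management yum -y install iperf3
iperf3 --version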
Example of an iperf3 measurement:
[root@guestshell admin]# iperf3 -c 10.10.10.2
Connecting to host 10.10.10.2, port 5201
[ 4] local 10.10.10.1 port 29030 connected to 10.10.10.2 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 106 KBytes 868 Kbits/sec 36 4.24 KBytes
[ 4] 1.00-2.00 sec 45.2 KBytes 371 Kbits/sec 32 9.90 KBytes
[ 4] 2.00-3.00 sec 45.2 KBytes 371 Kbits/sec 23 4.24 KBytes
[ 4] 3.00-4.00 sec 65.0 KBytes 533 Kbits/sec 20 4.24 KBytes
[ 4] 4.00-5.00 sec 39.6 KBytes 324 Kbits/sec 17 2.83 KBytes
[ 4] 5.00-6.00 sec 41.0 KBytes 336 Kbits/sec 19 4.24 KBytes
[ 4] 6.00-7.00 sec 43.8 KBytes 359 Kbits/sec 20 4.24 KBytes
[ 4] 7.00-8.00 sec 43.8 KBytes 359 Kbits/sec 12 4.24 KBytes
[ 4] 8.00-9.00 sec 42.4 KBytes 347 Kbits/sec 16 4.24 KBytes
[ 4] 9.00-10.00 sec 46.7 KBytes 382 Kbits/sec 20 4.24 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 519 KBytes 425 Kbits/sec 215 sender
[ 4] 0.00-10.00 sec 471 KBytes 386 Kbits/sec receiver
iperf Done.
[root@guestshell admin]# iperf3 -c 10.10.10.2 -R
Connecting to host 10.10.10.2, port 5201
Reverse mode, remote host 10.10.10.2 is sending
[ 4] local 10.10.10.1 port 29032 connected to 10.10.10.2 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 70.7 KBytes 579 Kbits/sec
[ 4] 1.00-2.00 sec 43.8 KBytes 359 Kbits/sec
[ 4] 2.00-3.00 sec 46.7 KBytes 382 Kbits/sec
[ 4] 3.00-4.00 sec 28.3 KBytes 232 Kbits/sec
[ 4] 4.00-5.00 sec 58.0 KBytes 475 Kbits/sec
[ 4] 5.00-6.00 sec 36.8 KBytes 301 Kbits/sec
[ 4] 6.00-7.00 sec 58.0 KBytes 475 Kbits/sec
[ 4] 7.00-8.00 sec 38.2 KBytes 313 Kbits/sec
[ 4] 8.00-9.00 sec 46.7 KBytes 382 Kbits/sec
[ 4] 9.00-10.00 sec 48.1 KBytes 394 Kbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 506 KBytes 415 Kbits/sec 181 sender
[ 4] 0.00-10.00 sec 475 KBytes 389 Kbits/sec receiver
Has anyone run into the same issue?
05-10-2024 01:42 AM
Based on my testing, I don't believe you get accurate results when you run iperf3 inside the Guest Shell.
I use an external Raspberry Pi, or any other external device, as the server and client and pass the traffic through the switch; that gives much better results.
05-10-2024 02:32 AM
Sure, that's obvious; iperf3 in the Guest Shell is not about perfectly accurate results. But in this case something strange is happening: there is a huge discrepancy between the real bandwidth (25G) and the measured one (<1 Mbit/s).
05-10-2024 03:41 AM
It all depends on the configuration; the maximum I was able to achieve was about 1 Gbit/s, if I remember correctly.
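If the goal is just to see how much a single Guest Shell sender can push, it can also help to run a longer test with several parallel TCP streams. These are standard iperf3 options; the numbers here are only an example:
# 8 parallel streams, 30 seconds, report in Mbits/sec
iperf3 -c 10.10.10.2 -P 8 -t 30 -f m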
05-10-2024 04:22 AM
Could you clarify, please, which configuration you are talking about?
05-10-2024 07:12 AM
I am talking about the configuration of the switch and QoS.
05-10-2024 07:28 AM - edited 05-10-2024 07:29 AM
Thanks for your response!
In my case these two switches are part of a test stand, and the only traffic that runs is the iperf traffic between the two Nexus switches, so there is no competing traffic.
05-12-2024 05:34 AM
I use an external box as a point-to-point test device rather than iperf running in a container.
05-14-2024 03:31 AM
By the way, I managed to find the bottleneck: it was the CoPP policy dropping the iperf traffic destined to the control plane of the switches. After adjusting the policy, I was able to measure about 300 Mbit/s at most on the 25 Gig links. I suppose that is again a question of policy. For test purposes it's OK, but of course this configuration is not for production.
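For anyone hitting the same thing: traffic the Guest Shell sends to or receives on the switch's own interface addresses is handled by the control plane, so it is policed by CoPP. A rough way to confirm and relax this is sketched below; the policy and class names are placeholders (they differ per NX-OS release and CoPP profile), so check the actual names with the show commands first. As noted above, loosening CoPP like this is only acceptable in a lab.
! see which CoPP class is conforming/violating while iperf is running
show policy-map interface control-plane
! clone the active best-practice profile so it can be edited
! (the generated policy/class names and the policer rate below are examples only)
copp copy profile strict suffix CUSTOM
policy-map type control-plane copp-policy-strict-CUSTOM
  class copp-class-management-CUSTOM
    police cir 200000 kbps bc 1280000 bytes conform transmit violate drop
control-plane
  service-policy input copp-policy-strict-CUSTOM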
05-14-2024 07:28 AM
I am talking about the configuration of the switch and QoS.