06-18-2020 08:35 AM
We have Windows VMs running in VMware on B200 M4s with fiber-attached Pure1 flash storage, and 10G connectivity to our Meraki switch stack. All VMs use vmxnet3 adapters and show a 10G link.
What I'm trying to figure out is where I may have a setting wrong somewhere in UCS (possibly) that is capping the VMs at only 1 Gbps. I have run multiple iperf tests between them, even between VMs on the same host, and they all hit a hard ceiling of 900-1000 Mbps.
06-18-2020 08:44 AM
You need to tweak based on the guest OS you're using. Here is a good article if you're using Microsoft:
06-18-2020 08:45 AM - edited 06-18-2020 11:39 AM
If you check your vNICs/vNIC template/connectivity policy, do you have any QoS policies attached to them?
I did a test in the lab and noticed that my Linux guest VMs would hit about 9.86 Gbps between each other, but only about 3.5 Gbps against a Windows Server 2019 VM that had 8 vCPUs, 16 GB RAM, and RSS enabled.
Kirk...
06-18-2020 11:22 AM
Is `iperf` using default TCP?
Or did you change `iperf` to UDP?
If you change to UDP and get near wire speed then likely packet loss (or maybe VM OS buffers) is slowing down TCP connections.
On the Pure storage tests are you doing read tests or write tests? (That will indicate where to look for packet loss.)
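To make the TCP-vs-UDP comparison concrete, here's a sketch of the two `iperf3` runs and how to read the UDP loss figure (assumptions: `iperf3` is installed on both VMs, and `vm-b` is a placeholder for the receiving VM's hostname):

```shell
# On the receiving VM:   iperf3 -s
# TCP test (default):    iperf3 -c vm-b -t 30
# UDP test near line rate: iperf3 -c vm-b -u -b 9G -t 30
#
# The UDP summary line ends with the datagram loss percentage, e.g.:
summary='[  5]   0.00-30.00  sec  31.4 GBytes  9.00 Gbits/sec  0.012 ms  38/2745918 (0.0014%)'

# Pull out the loss percentage from between the parentheses:
loss=$(echo "$summary" | sed 's/.*(\(.*\)%).*/\1/')
echo "UDP loss: ${loss}%"
```

If UDP reaches near wire speed with low loss while TCP stalls around 1 Gbps, that points at packet loss (or guest buffer limits) throttling TCP rather than a hard bandwidth cap.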
Check ESXi rx_no_buf counters before/during/after an `iperf` or Pure storage test to know if ESXi is dropping the packets:
/usr/lib/vmware/vm-support/bin/nicinfo.sh | egrep "^NIC:|rx_no_buf"
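For reference, here's what that filter pulls out of the `nicinfo.sh` output (the sample below is illustrative; exact stat names vary by NIC driver). Snapshot it before, during, and after the test and diff the counters — a `rx_no_buf` value that climbs only while iperf is running means the host is dropping receives:

```shell
# Illustrative sample of nicinfo.sh output (assumption: real output has many
# more stats per NIC); on a live ESXi host use the real pipeline instead:
#   /usr/lib/vmware/vm-support/bin/nicinfo.sh | egrep "^NIC:|rx_no_buf"
sample='NIC: vmnic0
   Link detected: yes
   rx_no_buf: 0
NIC: vmnic1
   Link detected: yes
   rx_no_buf: 4821'

# Keep only the NIC name lines and the rx_no_buf counters:
echo "$sample" | egrep "^NIC:|rx_no_buf"
```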
If those rx_no_buf counters are incrementing during testing, then increase the queue count and ring size, enable RSS, and make sure the guest OS power settings are set to full performance.
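The usual knobs look roughly like this (a sketch, not verified against your UCS/ESXi/driver versions; the adapter names `vmnic0` and `Ethernet0` are placeholders):

```shell
# ESXi host: raise the uplink's rx ring size (the maximum is driver-dependent):
#   esxcli network nic ring current set -n vmnic0 -r 4096
#
# Windows guest (PowerShell, vmxnet3 adapter):
#   Enable-NetAdapterRss -Name "Ethernet0"   # enable RSS on the adapter
#   powercfg /setactive SCHEME_MIN           # switch to the High performance power plan
```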