12-15-2011 03:19 PM - edited 03-01-2019 10:11 AM
We have a new UCS C200 M2 server running ESXi 5.0.0 that will only negotiate a speed of 100 Mbps. We tried connecting it to a GLC-T on a 3750 and also to a copper port on a 2960G, with the same result.
Trying to hard-set the speed to 1000 in the vSphere Client produces an error:
Call "HostNetworkSystem.UpdatePhysicalNicLinkSpeed" for object "networkSystem" on ESXi "172.19.99.103" failed.
Operation failed, diagnostics report: Forcing 1000Mbps or faster may not be supported on this network device, please use auto-negotiation instead Call "HostNetworkSystem.UpdatePhysicalNicLinkSpeed" for object "networkSystem" on ESXi "172.19.99.103" failed.
Operation failed, diagnostics report: Forcing 1000Mbps or faster may not be supported on this network device, please use auto-negotiation instead
Adapter is Intel 82576 Gigabit Network Connection
I have just updated the firmware with the HUU to version 1.4 - no improvement.
Thoughts, suggestions?
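For anyone else hitting this, the same change can be attempted from the ESXi shell with esxcli (a minimal sketch, assuming vmnic0 is the affected port):
esxcli network nic set -n vmnic0 -S 1000 -D full
esxcli network nic set -n vmnic0 -a
esxcfg-nics -l
1000BASE-T requires auto-negotiation per IEEE 802.3, which is why forcing gigabit is typically rejected; the second command reverts the port to auto-negotiation, and the third verifies the negotiated result.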
12-16-2011 06:50 AM
Gordon,
Can you please share the output of the following ESXi commands:
vmware -v
esxcfg-nics -l
ethtool -i vmnicX
Do you observe the same behavior with the other NIC ports on the server?
Padma
01-06-2012 08:02 AM
Hi Padma,
Sorry for not replying sooner, but I was having issues getting the CLI up and running!
We appear to have resolved the issue by connecting the server to the same switchport with a shorter ethernet cable.
The previous one was a 5 meter Cat5e, the new one is a 2 meter Cat5e.
I'm a bit confused as to why that would make a difference, but happy it's now running at Gigabit speed.
Thanks,
Gordon
01-06-2012 08:18 AM
Hello Gordon,
Glad that you responded back on how the issue got resolved in your scenario.
We are never short of surprises when it comes to what ends up resolving a problem :-)
Padma
01-12-2012 05:45 AM
Hi Padma,
It would be interesting to hear how this kind of issue is handled further, because we are seeing similar problems.
We have already installed several C200 M2 servers with CUCM or CUC and CUCX on them, and we have also had the issue that in ESXi we were not able to go from 100 to 1000 Mbps in the network settings for the VMs. The host has 1000 Mbps configured, but the UC VMs have speed issues and stay at 100 Mbps.
Are there differences between the CIMC interface versions?
Or do we need to set static 1000 Mbps settings in the CUCM/CUC/CUCX CLI before we can get the right connection speed on the ESXi side?
Jacky
01-12-2012 09:16 AM
Hello Jacky,
Do you have an issue configuring the speed for the virtual machine NICs, or for the physical NICs as Gordon reported?
I do not have much knowledge about UC applications, but here is a quick thought. Most likely the VM will be using the Flexible adapter type (i.e., one which acts as pcnet32 or vmxnet), which has its own limitations.
Since this virtual adapter is dedicated to the VM's own traffic, 100 Mbps would be sufficient for the application from a "virtual NIC" perspective. The physical NIC parameters and the total VM load would be the more crucial design considerations.
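As a hedged illustration (the exact adapter type depends on how the UC OVA was built; "ethernet0", "datastore1" and "myvm" are placeholders), the vNIC type can be checked in the VM's .vmx file from the ESXi shell:
grep virtualDev /vmfs/volumes/datastore1/myvm/myvm.vmx
A value of "flexible" behaves as pcnet32 until VMware Tools loads the vmxnet driver, and the link speed a vNIC reports inside the guest is largely cosmetic; it does not cap actual throughput.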
Please refer to the following doc on network traffic requirements for UC VMs:
http://docwiki.cisco.com/wiki/QoS_Design_Considerations_for_Virtual_UC_with_UCS#LAN_Traffic_Sizing
If you are receiving a specific error message from ESXi regarding the vNIC or physical NIC configuration, feel free to start a new thread so that we can help you out.
Padma
05-08-2012 01:30 AM
We have the same problems with ESXi 5.0.0 and a UCS C210 M2 server. Interfaces remain down after a disconnect/reconnect or after boot, or negotiate to only 100 Mbps while the switch is 1 Gbps, etc. Very disappointed with this. The server needs to go into production, but we cannot do that while we are having this kind of reliability problem.

I have a strong feeling it is related to a mismatch in communication between the BIOS of the NIC and the ESXi kernel itself. For example, when you boot the server and ESXi is not yet running, the NIC negotiates to 100 full by default; then when the ESXi kernel loads, the link is first brought up at 100 and after a while (seconds) switches to 1G full. But when you start disconnecting/reconnecting interfaces, they sometimes don't come up anymore, or come up at the wrong speed/duplex. We can't yet reproduce the behaviour 100%, but we are working on it.

We don't really have speed problems on the VMs, unless a VM can somehow influence the speed of the physical NIC it is associated with (although that would surprise me A LOT). We have 4x Gig production interfaces and 4 VMs; using "port ID based" load balancing, each VM should be assigned to its own physical NIC.
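For reference, the link transitions are timestamped in the VMkernel log, which can help narrow down when the flaps happen (a sketch; /var/log/vmkernel.log is the default location on ESXi 5.x):
tail -f /var/log/vmkernel.log | grep -i vmnic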
05-08-2012 01:52 AM
Here is some more info.
Note: the switches are all Gigabit C3750E switches, configured to auto-negotiate speed and duplex.
# vmware -v
VMware ESXi 5.0.0 build-469512
# esxcfg-nics -l
Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:0b:00.00 bnx2 Up 1000Mbps Full 00:10:18:c6:70:d0 1500 Broadcom Corporation Broadcom NetXtreme II BCM5709 1000Base-T
vmnic1 0000:0b:00.01 bnx2 Up 1000Mbps Full 00:10:18:c6:70:d2 1500 Broadcom Corporation Broadcom NetXtreme II BCM5709 1000Base-T
vmnic2 0000:0c:00.00 bnx2 Up 1000Mbps Full 00:10:18:c6:70:d4 1500 Broadcom Corporation Broadcom NetXtreme II BCM5709 1000Base-T
vmnic3 0000:0c:00.01 bnx2 Up 100Mbps Half 00:10:18:c6:70:d6 1500 Broadcom Corporation Broadcom NetXtreme II BCM5709 1000Base-T
vmnic4 0000:12:00.00 igb Down 0Mbps Half cc:ef:48:b4:5c:06 1500 Intel Corporation 82576 Gigabit Network Connection
vmnic5 0000:12:00.01 igb Up 1000Mbps Full cc:ef:48:b4:5c:07 1500 Intel Corporation 82576 Gigabit Network Connection
~ # ethtool -i vmnic0
driver: bnx2
version: 2.0.15g.v50.11-5vmw
firmware-version: bc 5.2.3
bus-info: 0000:0b:00.0
~ # ethtool -i vmnic1
driver: bnx2
version: 2.0.15g.v50.11-5vmw
firmware-version: bc 5.2.3
bus-info: 0000:0b:00.1
~ # ethtool -i vmnic2
driver: bnx2
version: 2.0.15g.v50.11-5vmw
firmware-version: bc 5.2.3
bus-info: 0000:0c:00.0
~ # ethtool -i vmnic3
driver: bnx2
version: 2.0.15g.v50.11-5vmw
firmware-version: bc 5.2.3
bus-info: 0000:0c:00.1
~ # ethtool -i vmnic4
driver: igb
version: 2.1.11.1
firmware-version: 1.4-3
bus-info: 0000:12:00.0
~ # ethtool -i vmnic5
driver: igb
version: 2.1.11.1
firmware-version: 1.4-3
bus-info: 0000:12:00.1
05-08-2012 02:17 AM
Hello,
Please open a TAC service request with the above information so this issue can be investigated further.
Also upload the CIMC show tech, the ESXi tech-support bundle (vm-support), and the 3750 show tech.
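For reference, the ESXi tech-support bundle can be generated directly from the shell (a sketch; the output location varies by release but is printed when the command completes):
vm-support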
Padma
06-17-2020 02:01 AM - edited 06-17-2020 02:09 AM
I know I'm working with a much newer unit, but it's the same issue, so I'd be happy to post anew or elsewhere if that's more appropriate, or add to this one if it can help somebody. I just deployed 8 UCS C220 M5s in 4 different cities, and 7 of the 8 1G interfaces came up without incident with default switch settings (115.2, N, 8, 1, no flow, etc.).
On the one that didn't, vmnic1 is for some reason seen at 100 Mbps full no matter whether we set auto-negotiate or force the speed on the switch side. These are the outputs of the commands that Cisco asked of previous posters:
[root@myserver:~] vmware -v
VMware ESXi 6.7.0 build-14320388
[root@myserver:~] esxcfg-nics -l
Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:3b:00.0 ixgben Up 1000Mbps Full d4:e8:80:47:f4:ce 1500 Intel(R) Ethernet Controller 10G X550T
vmnic1 0000:3b:00.1 ixgben Up 100Mbps Full d4:e8:80:47:f4:cf 1500 Intel(R) Ethernet Controller 10G X550T
vmnic2 0000:18:00.0 igbn Down 0Mbps Half d4:78:9b:98:91:0c 1500 Intel Corporation I350 Gigabit Network Connection
vmnic3 0000:18:00.1 igbn Down 0Mbps Half d4:78:9b:98:91:0d 1500 Intel Corporation I350 Gigabit Network Connection
vmnic4 0000:18:00.2 igbn Down 0Mbps Half d4:78:9b:98:91:0e 1500 Intel Corporation I350 Gigabit Network Connection
vmnic5 0000:18:00.3 igbn Down 0Mbps Half d4:78:9b:98:91:0f 1500 Intel Corporation I350 Gigabit Network Connection
Then the really confusing ones:
[root@myserver:~] ethtool -i vmnic0
Can not get control fd: No such file or directory
[root@myserver:~] ethtool -i vmnic1
Can not get control fd: No such file or directory
[root@myserver:~] ethtool -i vmnic2
Can not get control fd: No such file or directory
[root@myserver:~] ethtool -i vmnic3
Can not get control fd: No such file or directory
[root@myserver:~] ethtool -i vmnic4
Can not get control fd: No such file or directory
[root@myserver:~] ethtool -i vmnic5
Can not get control fd: No such file or directory
I'm guessing ethtool is probably defunct and doesn't work with this hardware/software rev. Any thoughts on where I can go with this? Thank you!
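For reference, on ESXi 6.5 and later the bundled ethtool is deprecated in favor of esxcli, which should report the same driver and firmware details (a sketch, assuming vmnic1 is the problem port):
esxcli network nic get -n vmnic1
esxcli network nic list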
06-17-2020 04:10 AM - edited 06-17-2020 04:19 AM
Greetings.
You really need to open a TAC case for this. I have seen similar cases where the resolution involved anything from firmware (HUU) or ixgben driver updates to bad cables, and even system-board replacement for the built-in LOM ports in a few cases.
You can confirm the loaded driver with the command: vmkload_mod -s ixgben
The HCL lists the latest supported HUU/driver combination for 6.7U3.
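A minimal sketch of checking the loaded driver and installing an updated one from an offline bundle (the bundle path is a placeholder; use the exact VIB the HCL lists, and reboot afterwards):
vmkload_mod -s ixgben
esxcli software vib install -d /vmfs/volumes/datastore1/ixgben-offline-bundle.zip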