Hello,
I have installed two Cisco Nexus 4000 I/O modules in an IBM BladeCenter H. Each 4K has six 10G uplink ports and fourteen internal Ethernet ports facing the hypervisors, one for each blade. (The uplink ports connect to upstream Nexus 5Ks.) Here is the interface status from one of the 4Ks:
--------------------------------------------------------------------------------
Port           Name               Status   Vlan      Duplex  Speed   Type
--------------------------------------------------------------------------------
Eth1/1         ESXTest1           up       trunk     full    1000    10g
Eth1/2         ESXTest2           up       trunk     full    1000    10g
Eth1/3         --                 down     trunk     auto    auto    10g
Eth1/4         --                 down     trunk     auto    auto    10g
Eth1/5         --                 down     trunk     auto    auto    10g
Eth1/6         --                 down     trunk     auto    auto    10g
Eth1/7         --                 down     trunk     auto    auto    10g
Eth1/8         --                 down     trunk     auto    auto    10g
Eth1/9         --                 down     trunk     auto    auto    10g
Eth1/10        --                 down     trunk     auto    auto    10g
Eth1/11        --                 down     trunk     auto    auto    10g
Eth1/12        --                 down     trunk     auto    auto    10g
Eth1/13        --                 down     trunk     auto    auto    10g
Eth1/14        --                 down     trunk     auto    auto    10g
Eth1/15        drnx5k1pod2        up       trunk     full    10G     10g
Eth1/16        drnx5k1pod2        up       trunk     full    10G     10g
Eth1/17        drnx5k1pod2        up       trunk     full    10G     10g
Eth1/18        drnx5k2pod2        up       trunk     full    10G     10g
Eth1/19        drnx5k2pod2        up       trunk     full    10G     10g
Eth1/20        drnx5k2pod2        up       trunk     full    10G     10g
In this example there are two ESXi blades, in slots 1 and 2. The problem is that I'm not seeing 10G speeds on Eth1/1 and Eth1/2; they link at 1000 instead. I've tried everything I can think of to get them to come up at 10G, but it's not happening. The blades are running ESXi 4.1, and the Nexus 1000V is version 4.2(1)SV1(4a).
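Among other things I've tried hard-coding the speed on the server-facing ports, along these lines (reconstructed from memory, so treat the exact commands as approximate; speed 10000 is the standard NX-OS syntax, but I'm not certain the 4K honors it on the internal ports):

drnx4k1bc3# configure terminal
drnx4k1bc3(config)# interface ethernet 1/1
drnx4k1bc3(config-if)# speed 10000
drnx4k1bc3(config-if)# shutdown
drnx4k1bc3(config-if)# no shutdown
drnx4k1bc3(config-if)# end
drnx4k1bc3# show interface ethernet 1/1 status

Even after bouncing the port, it comes back up at 1000/full.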
Here's what the ports look like from the 1000v:
drnx1kbc3# sh cdp nei
Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
S - Switch, H - Host, I - IGMP, r - Repeater,
V - VoIP-Phone, D - Remotely-Managed-Device,
s - Supports-STP-Dispute
Device-ID Local Intrfce Hldtme Capability Platform Port ID
drnx4k2bc3(JAF1441CMPH)  Eth3/3   155   S I s   N4K-4005I-XPX   Eth1/1
drnx4k1bc3(JAF1441CMNR)  Eth3/4   177   S I s   N4K-4005I-XPX   Eth1/1
drnx4k2bc3(JAF1441CMPH)  Eth4/3   150   S I s   N4K-4005I-XPX   Eth1/2
drnx4k1bc3(JAF1441CMNR)  Eth4/4   127   S I s   N4K-4005I-XPX   Eth1/2
drnx1kbc3# sh int status
--------------------------------------------------------------------------------
Port Name Status Vlan Duplex Speed Type
--------------------------------------------------------------------------------
mgmt0 -- up routed full 1000 --
Eth3/3 -- up trunk full 1000 --
Eth3/4 -- up trunk full 1000 --
Eth4/3 -- up trunk full 1000 --
Eth4/4 -- up trunk full 1000 --
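The 1000V only reflects what the vmkernel reports for each vmnic, so the driver's own view can be checked from the ESXi tech-support console. Both commands below are standard in ESXi 4.x; vmnic0 is just an example name:

~ # esxcfg-nics -l
~ # ethtool vmnic0

If those also report 1000Mbps, the link is negotiating down between the mezzanine NIC and the 4K, not somewhere in the 1000V.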
I've been told that the mezzanine card in the BladeCenter may be limiting the speed of these links. Has anyone else seen this? Or is this a limitation of vSphere/ESXi 4.x?
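One check that might settle the mezzanine question: on other NX-OS platforms, show interface capabilities lists the speeds a port can actually support. Assuming the 4K implements the same command:

drnx4k1bc3# show interface ethernet 1/1 capabilities

If Speed there only lists 1000, or the port refuses the speed 10000 command outright, that would point at the mezzanine/port hardware pairing rather than at ESXi.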