01-30-2012 01:16 PM
Hello,
I have installed two Cisco Nexus 4000 I/O modules in an IBM BladeCenter H. Each 4K has six 10G uplink ports and fourteen internal Ethernet ports facing the hypervisors, one for each blade. (The uplink ports connect to upstream Nexus 5Ks.) Here is the interface status from one of the 4Ks:
--------------------------------------------------------------------------------
Port           Name               Status   Vlan      Duplex  Speed   Type
--------------------------------------------------------------------------------
Eth1/1         ESXTest1           up       trunk     full    1000    10g
Eth1/2         ESXTest2           up       trunk     full    1000    10g
Eth1/3         --                 down     trunk     auto    auto    10g
Eth1/4         --                 down     trunk     auto    auto    10g
Eth1/5         --                 down     trunk     auto    auto    10g
Eth1/6         --                 down     trunk     auto    auto    10g
Eth1/7         --                 down     trunk     auto    auto    10g
Eth1/8         --                 down     trunk     auto    auto    10g
Eth1/9         --                 down     trunk     auto    auto    10g
Eth1/10        --                 down     trunk     auto    auto    10g
Eth1/11        --                 down     trunk     auto    auto    10g
Eth1/12        --                 down     trunk     auto    auto    10g
Eth1/13        --                 down     trunk     auto    auto    10g
Eth1/14        --                 down     trunk     auto    auto    10g
Eth1/15        drnx5k1pod2        up       trunk     full    10G     10g
Eth1/16        drnx5k1pod2        up       trunk     full    10G     10g
Eth1/17        drnx5k1pod2        up       trunk     full    10G     10g
Eth1/18        drnx5k2pod2        up       trunk     full    10G     10g
Eth1/19        drnx5k2pod2        up       trunk     full    10G     10g
Eth1/20        drnx5k2pod2        up       trunk     full    10G     10g
In this example there are two ESXi blades, in slots 1 and 2.
The problem is that I'm not seeing 10G speeds on Eth1/1-2. I've tried everything to get them to link at 10G, but it's not happening. The blades are running ESXi 4.1; the Nexus 1000V version is 4.2(1)SV1(4a).
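For reference, the negotiated speed can also be read from the host side on ESXi 4.x (a minimal, hedged example using the standard console tooling; vmnic numbering will vary per host):

~ # esxcfg-nics -l

This lists each physical NIC with its driver, link state, negotiated speed, and duplex, which makes it easy to confirm whether the host also sees the link at 1000 Mbps.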
Here's what the ports look like from the 1000v:
drnx1kbc3# sh cdp nei
Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
S - Switch, H - Host, I - IGMP, r - Repeater,
V - VoIP-Phone, D - Remotely-Managed-Device,
s - Supports-STP-Dispute
Device-ID                Local Intrfce  Hldtme  Capability  Platform       Port ID
drnx4k2bc3(JAF1441CMPH)  Eth3/3         155     S I s       N4K-4005I-XPX  Eth1/1
drnx4k1bc3(JAF1441CMNR)  Eth3/4         177     S I s       N4K-4005I-XPX  Eth1/1
drnx4k2bc3(JAF1441CMPH)  Eth4/3         150     S I s       N4K-4005I-XPX  Eth1/2
drnx4k1bc3(JAF1441CMNR)  Eth4/4         127     S I s       N4K-4005I-XPX  Eth1/2
drnx1kbc3# sh int status
--------------------------------------------------------------------------------
Port           Name               Status   Vlan      Duplex  Speed   Type
--------------------------------------------------------------------------------
mgmt0          --                 up       routed    full    1000    --
Eth3/3         --                 up       trunk     full    1000    --
Eth3/4         --                 up       trunk     full    1000    --
Eth4/3         --                 up       trunk     full    1000    --
Eth4/4         --                 up       trunk     full    1000    --
I've been told that the mezzanine card in the BladeCenter may be limiting the speed of these links. Has anyone else seen this? Is this a limitation of vSphere ESXi 4.x?
02-05-2012 01:36 PM
Hello Mark. So the "show cdp neighbors" that you ran was on the Nexus 1000V VSM? When you log into the Nexus 4000, are those ports showing 10G? The Nexus 1000V doesn't do anything about the physical speed of the port (in this case, the blade server ports Eth1/1 and Eth1/2).
May I ask what mezzanine adapter you have installed in your IBM blade server? Is it a QLogic or some other adapter? I would first log in to the N4K switch itself and see whether the ports are coming up at 10G. It may be negotiating down for some reason, or, if the speed was set to 1G, it will come up at 1G; typically, if you leave it set to auto, it will come up at 10G. When the N1KV comes into the picture, you will bind those 10G ports to the uplink port-profile. But at this point, it seems like you are having a physical port speed issue. I would suggest looking at the N4K first (or opening a Cisco TAC case) and they can help from there.
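For example, a minimal sketch of those checks on the N4K (switch and interface names are taken from this thread; the speed command is only needed if the port was hard-set, and I'm assuming the N4K server-facing ports accept "speed auto"):

! Check the negotiated speed and the configured speed first:
drnx4k1bc3# show interface ethernet 1/1 status
drnx4k1bc3# show running-config interface ethernet 1/1
! If the speed was hard-set to 1G, return it to auto-negotiation:
drnx4k1bc3# configure terminal
drnx4k1bc3(config)# interface ethernet 1/1
drnx4k1bc3(config-if)# speed auto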
Cuong
02-06-2012 06:40 AM
Thanks for the response, Cuong.
The Nexus 4K is a high-speed I/O module that gets inserted into one of the four high-speed I/O slots on the media tray (bays 7-10). We have two 4Ks installed, in I/O bays 7 and 9. Each NX4K has six 10G external uplinks and fourteen 10G internal ports, one for each of the 14 blades.
Each blade has its own expansion modules to connect to the media tray; the blades have expansion slots into which you plug the expansion modules/NIC cards. Beware: if you're going to use four 4Ks and fully populate the BladeCenter, you will need four-port expansion cards.
So what you end up with is the NX4K connected to one side of the media tray and the blades connected to the opposite side via their expansion modules, forming the 'pipe' that gets each blade to the external networks. Because the expansion modules currently installed in the blades are only 1G-capable, the blades' internal uplinks to the 4K are limited to 1G. This will work, but it will be limited to the slower 1G speed.
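If anyone wants to verify the same thing, a hedged example of what to look for on the N4K itself (switch and interface names are from this thread; exact output formats vary by platform and NX-OS release):

! The server-facing port negotiates down to what the mezzanine supports (1000 here):
drnx4k1bc3# show interface ethernet 1/1 status
! The port itself should still advertise 10G capability:
drnx4k1bc3# show interface ethernet 1/1 capabilities

If both check out, the 1G ceiling is coming from the blade's expansion module, not from the 4K or the 1000V.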