
Nexus 1000v to 3125 port-channel error

craig.potter
Level 1

Thanks in advance to anyone who can help with this. I have been struggling for some weeks now to get this working, and a large client is about to ditch the 1000v because it just won't work. Below is my design:

HP 7000 blade server chassis with all 8 interconnect bays filled with Cisco 3125X-S blade switches.

5 x VMware ESXi half-height blade servers, each with 8 NICs, in slots 1, 3, 4, 5 and 6.

A 1000v VEM on each server, with NICs 1-7 assigned to it and port-channeled.

Virtual Switch 0 (vSwitch0) has NIC 0 connected and the management VMkernel port (VMK).

Right, when I move the VMK to the VEM and move the NIC, it stays connected and manageable through vCenter for a minute or so, and then I lose connectivity to my servers. It's then a bit of a struggle to get management of the server back to the standard switch (vSwitch0). Does anybody know why this happens, or have any experience with this design scenario? I cannot find a reference guide anywhere detailing a setup like this. It seems pointless to waste two NICs by leaving them on the vSwitch for management only, leaving my port-channel with 6 NICs totalling 6 Gb of throughput, and quite frankly, that's a deal breaker for my client, who wants to run HP blade switches at 10 Gb and no 1000v instead.

I would appreciate it if anyone has a working config or can share a tip on what needs to be done to make this work.

1 Accepted Solution

felixjai
Level 1

Hope you have already figured this out, but if not, you may want to verify your system-uplink port-profile configuration. I've used an HP blade chassis with 3120 blade switches and ESX hosts, each host with 4 NICs connected to the 3120s. I moved all 4 NICs (as one EtherChannel) to the 1000v, including the vmk ports: control, packet, and mgmt. You will have to use the system vlan command to specify these VLANs in your system-uplink port profile.

Example: control is VLAN 2, packet is VLAN 3, and mgmt is VLAN 10; VLANs 100-110 are just data/server VLANs.

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 2-3,10,100-110
  channel-group auto mode on
  no shutdown
  system vlan 2-3,10
  state enabled
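
Once the uplinks are migrated, the standard 1000v show commands can confirm the profile took effect and the bundle formed (profile name as in this example; output will vary):

show port-profile name system-uplink
show port-channel summary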

You also said the ESX host lost connectivity while moving to the 1000v. You might want to check that your control, packet, and mgmt VLANs are allowed on the trunk on your 3125 and through to the ESX host that runs your VSMs. Post your VSM and 3125 configs so we can point out any misconfigurations.
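
For comparison, a minimal sketch of what the matching 3125 side might look like, assuming the same VLANs as the example above; the interface and channel-group numbers here are illustrative, so substitute your own:

! 3125 (IOS) side, trunking the same VLANs toward the ESX host
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 2,3,10,100-110
!
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 2,3,10,100-110
 channel-group 1 mode on

Note that channel-group 1 mode on (a static EtherChannel) is what pairs with channel-group auto mode on in the port profile above; a mismatch here is a common cause of the bundle flapping.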


2 Replies

craig.potter
Level 1

This was what I eventually did to stabilize this link, along with some other small things, like the sequence of events when moving uplinks from the standard vSwitch to the DVS. I was also advised by TAC to add the system vlan command to the management port profile as well. Thanks for the post.
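
For anyone following along, a minimal sketch of that management port profile, assuming mgmt stays on VLAN 10 as in the example above; the profile name here is only illustrative:

! vethernet profile for the management VMK, with system vlan per TAC advice
port-profile type vethernet esx-mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 10
  system vlan 10
  no shutdown
  state enabled

The system vlan line is what keeps the management VMK forwarding before the VEM has fully programmed its ports, which is why adding it here helped stabilize connectivity.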
