10-06-2011 01:49 PM
Hi all,
I have two Cisco 2960 switches connected to each other, and two ESXi 4.1 hosts with one uplink to each switch.
With the VMware Standard vSwitch all VMs work properly, but with the Nexus 1000v only the VMs pinned to the first uplink have access to my gateway.
I tried changing the port-channel configuration to mac-pinning, sub-group manual, and sub-group CDP, but had no success.
I've attached my topology and configuration for more detail about the environment.
Tks
Rodrigo Caldeira
Cimcorp - Brazil
10-06-2011 02:38 PM
First problem: your system uplink is forwarding all your VLANs. Each VLAN can be allowed on only one uplink, never more than one, or the system can't determine which uplink to use.
Also, you have six uplink port profiles, so you must have at least 7 NICs connected - correct? Your diagram only shows a pair of uplinks, which I assume are used for your "system-uplink" PP. With DMZs it makes sense for them to have their own uplinks, as they usually go to separate switches, but 6 uplink groups is quite a lot. Hopefully this was by design or required.
I'd also suggest using MAC pinning. It mirrors how the vSwitch balances traffic. Your "system-uplink" port profile doesn't even have a channel-group command - which is a must whenever more than one physical uplink is used. Add or change the channel-group method to mac-pinning on each Ethernet uplink port profile and you're covered for a single uplink or multiple uplinks (up to 32).
Ensure your upstream Cisco 2960 switches are configured for PortFast and BPDU Filter on every port connected to a VEM uplink. This is required because the 1000v doesn't participate in STP.
Ex.
spanning-tree portfast edge trunk
spanning-tree bpdufilter enable
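For reference, a minimal sketch of the matching upstream switch-port config (the interface name is an assumption; adjust to your cabling). Note that on a Catalyst 2960 running classic IOS the PortFast-on-trunk command is "spanning-tree portfast trunk" rather than the newer "portfast edge trunk" syntax. Also, when the 1000v side uses mac-pinning, the upstream ports stay as individual trunks with no channel-group configured.

```
interface GigabitEthernet0/1
 description To ESXi VEM uplink (interface assumed)
 switchport mode trunk
 switchport trunk allowed vlan 1
 spanning-tree portfast trunk
 spanning-tree bpdufilter enable
```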
port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1-3967,4048-4093 <<< Can't allow any VLANs used on any other uplink. Allow only what's needed.
no shutdown
system vlan 1
state enabled
channel-group auto mode on mac-pinning <<< Add this to your port profile.
port-profile type ethernet core-up
vmware port-group
switchport mode access
switchport access vlan 2
channel-group auto mode on sub-group cdp
no shutdown
description Uplink Rede Core
state enabled
port-profile type ethernet dmz-int-up
vmware port-group
switchport mode access
switchport access vlan 19
channel-group auto mode on sub-group cdp
no shutdown
description Uplink DMZ Internet
state enabled
port-profile type ethernet dmz-spb-up
vmware port-group
switchport mode access
switchport access vlan 21
channel-group auto mode on sub-group cdp
no shutdown
description Uplink DMZ SPB
state enabled
port-profile type ethernet dmz-emp-up
vmware port-group
switchport mode access
switchport access vlan 20
switchport trunk allowed vlan 20
channel-group auto mode on sub-group cdp
no shutdown
description Uplink DMZ Empresas
state enabled
port-profile type ethernet hom-up
vmware port-group
switchport mode access
switchport access vlan 9
channel-group auto mode on sub-group cdp
no shutdown
description Uplink Rede Homologacao
state enabled
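Applying the mac-pinning change described above, each DMZ uplink profile would look something like the sketch below (dmz-int-up shown as one example; repeat for the other profiles with their own VLANs). Remember that VLAN 19 must then not be allowed on any other uplink profile, including the system-uplink trunk.

```
port-profile type ethernet dmz-int-up
  vmware port-group
  switchport mode access
  switchport access vlan 19
  channel-group auto mode on mac-pinning
  no shutdown
  description Uplink DMZ Internet
  state enabled
```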
Make the changes as recommended and let me know how you go.
Regards,
Robert
10-06-2011 02:59 PM
Hi Robert,
Thanks for the reply.
You are correct, the diagram only shows my DMZ connections. The system-uplink, core-up, and hom-up profiles are connected to a Cisco 6513, and those connections are working properly. We've never had a problem with them, only with the DMZ connections.
I applied your suggestions, but they didn't solve the DMZ problem.
The strange thing is that with the VMware standard switch it works well.
Tks
Rodrigo Caldeira
10-06-2011 04:06 PM
Which Port-Profile are you having issues with exactly?
Please provide the following outputs
module vem X execute vemcmd show port (where X is the module # of an affected VEM host)
module vem X execute vemcmd show trunk
Can you also provide a "show run" of your latest config?
Thanks,
Robert
10-07-2011 06:36 AM
10-07-2011 10:14 AM
Rodrigo,
The updated config looks good. Since you say it works fine when using the vSwitch, the upstream configuration should be correct - which suggests a programming issue on the VEM.
Can you grab a couple more outputs, please? We're narrowing in on the issue here.
On the VSM:
show tech-support svs
On VEM 3 & 4
vem-support all << This will create a .tgz file which I'll need you to get to me. Easy way is to copy the file to a VMFS datastore, then pull it off via the vCenter datastore browser.
Ex.
~ # vem-support all
Generated /tmp/cisco-vem-2011-1007-1711.tgz
~ # cp /tmp/cisco-vem-2011-1007-1711.tgz /vmfs/volumes/<Datastore Name>/
(Then you can access it from vCenter)
If you can gather this for VEM 3 & 4, I'll keep investigating for you.
Regards,
Robert
10-09-2011 02:45 PM
Robert,
After I updated and rebooted all VMware hosts and applied your recommendations, the uplinks are now working well.
I don't know why this occurred.
I'll keep monitoring and running some tests to make sure everything is OK.
Thank you for your attention and work to help me.
Best Regards,
Rodrigo
10-10-2011 10:28 AM
Great to hear Rodrigo!
I think there was just stale programming on the VEMs. I can't confirm without the files above, but keep an eye on things over the next week. You should be fine going forward. I think with all the testing & changes things just got out of sync.
Let me know if you have any other issues.
Enjoy!
Robert
10-18-2011 12:58 PM
Robert,
The problem came back; my customer told me the same symptoms are happening again.
I asked him to send the outputs you wanted before (vem-support all and show tech-support svs).
When I get these outputs I'll attach them for you, OK?
Tks,
Rodrigo
10-18-2011 01:13 PM
Sure thing.
Once you gather those outputs, from the CLI of the ESX host issue:
vem restart
This should also recover your host, rather than rebooting it.
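Putting the steps together, the capture-and-recover sequence on the affected host might look like the transcript below (the datastore name is an assumption; the .tgz filename is generated at runtime, so yours will differ):

```
~ # vem-support all                                   # capture VEM state first, for analysis
~ # cp /tmp/cisco-vem-*.tgz /vmfs/volumes/datastore1/ # datastore name is an assumption
~ # vem restart                                       # restart the VEM without rebooting the host
~ # vem status                                        # confirm the VEM came back up
```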
Regards,
Robert
10-19-2011 07:21 AM
Robert,
I attached the output of the command "vem-support all". There are two files: one from host Kursk, which has the problem, and one from host Kharkov, which is working properly.
When the customer sends the output of "show tech-support svs", I'll post it here.
The customer said that when he moved the working VM from Kursk to Kharkov it kept working, but when he moved it back to Kursk the VM's network stopped working.
Thanks for helping me.
Rodrigo