4447 Views, 0 Helpful, 6 Replies

vPC-HM compatibility

nvermande
Level 1

Hi there,

Can we use vPC-HM with the NX1KV connected to small Catalyst switches, or does the upstream switch have to be vPC compliant?

If so, is there a compatibility matrix for vPC-HM?

Why I am asking this question:

I have 2 uplinks for vmkernel, ctrl, packet and mgmt per ESX host, going to 2 upstream switches.

Here is the port profile:

port-profile system-uplink
  description: VEM_ESX_VMK_MGT
  status: enabled
  capability uplink: yes
  capability l3control: no
  system vlans: 10-11,301-302
  port-group: system-uplink
  max-ports: -
  inherit:
  config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 10-11,301-302
    channel-group auto mode on sub-group cdp
    no shutdown
  evaluated config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 10-11,301-302
    channel-group auto mode on sub-group cdp
    no shutdown

CDP is activated everywhere on the physical interfaces and the output of "sh cdp nei" is OK.

However, when I vMotion VMs, I sometimes get a message saying that the vMotion network is not reachable.

It seems that the MAC address of the vmkernel interface doesn't get registered on the physical switch when the connections are load balanced.

If I clear the MAC address on the physical switch, the load balancing can occur and the vMotion can start...
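For example, the workaround on the upstream Catalyst looks like this (the MAC address and VLAN are only placeholders, and depending on the IOS version the command may be the hyphenated "clear mac-address-table dynamic" form):

  clear mac address-table dynamic address 0050.5678.abcd vlan 301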

Is this a known issue?

I don't know what to do...

Any help is welcome.

6 Replies

nenduri
Cisco Employee

Hi,

Yes, you can configure vPC-HM in the topology you mentioned below.

NOTE: when the upstream switches are clustered switches (vPC, VSS, etc.), do NOT configure vPC-HM on the N1K.

For vPC-HM, you need both upstream switches to be L2 connected for all the VLANs you configured on the vPC-HM.

In your example below you have 4 virtual interfaces behind the VEM, corresponding to vmkernel, mgmt, control and packet. With vPC-HM these ports get pinned to either the left or the right switch. You will see the MAC addresses of these ports on only one switch.

Assume that the vmkernel port on HOST1 is pinned to the left upstream switch and the vmkernel port on HOST2 is pinned to the right upstream switch. When you try to vMotion a VM between these hosts, you need L2 connectivity for the vMotion VLAN between the two upstream switches. If you don't have that, you will see connectivity issues.
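For example, the link between the two upstream switches (direct or via the distribution layer) should trunk all the VLANs from your uplink port profile. A minimal sketch on the Catalyst side, assuming an inter-switch link on Gi1/48 (the interface name is only an example, and some platforms also need "switchport trunk encapsulation dot1q"):

  interface GigabitEthernet1/48
    description Inter-switch link carrying N1K VLANs
    switchport mode trunk
    switchport trunk allowed vlan 10-11,301-302
    no shutdown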

Thanks,

Naren

Thank you for your answer.

Both upstream switches are 4948s, themselves connected to a 6509 chassis where VSS is configured with another 6509 on another site.

VSS is not configured on the 4948s.

All the VLANs are only L2 connected.

So could the problem be that the upstream switches are connected to the 6509 where VSS is configured?

What is strange is that, for example, if I migrate 4 machines from esx1 to esx2, the first 2 vMotions will succeed but the other 2 will time out.

So when vPC-HM pins ports to a sub-group, is it done in an active/passive way or in round robin, and in that case how does it decide to go to the other sub-group on the second upstream switch?

I also have another symptom, but I don't know if it's related to the same problem:

The service console is bound to one link on one switch and to one link on the other switch, with vPC-HM. If I shut down one of these interfaces, sometimes I lose between one and 3 pings of the service console, but sometimes 7 or 8 pings, causing HA to declare the host isolated and to shut down its VMs... So is vPC-HM failover normally instant, or is there service disruption?

Thank you!!

We have a very similar issue and have opened a TAC case with VMware/Cisco. I'll let you know what we find out. One interesting thing we have noticed is that in our case the issue shows up mostly on uplink port-profiles that have system VLANs configured. It may just be that those port-profiles are more obvious when they fail, because they carry our service console and VM NFS storage, which, as you pointed out, throw HA, vMotion and other hard-to-miss alerts.

The VSS on the 6509 should not be an issue. The virtual ports get pinned to the sub-groups in round-robin fashion. Only when all the pNICs in a sub-group fail do the virtual ports that were pinned to this sub-group get re-pinned to the other sub-group. At any given time, the virtual port MACs will be learnt on a single switch.

vPC-HM failover is instantaneous. It depends on when the ESX host detects link down after you shut down the port on the upstream switch.
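You can check the channel and the pinning from the VSM with commands along these lines (the module number is only an example, and the vemcmd form assumes your N1KV release supports running it from the VSM):

  show port-channel summary
  show cdp neighbors
  module vem 3 execute vemcmd show port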

nvermande
Level 1

Updated:

After a lot of tests, it seems we must trust the documentation concerning the CDP timer.

Cisco warns that it can take up to 60s to advertise link status.

However, even when setting the CDP timer to the minimum, you can still lose 6-7 pings when a link comes up.
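(For reference, this is what I mean by lowering the CDP timer, set globally on the upstream Catalysts; the values here are just an example:)

  cdp timer 5
  cdp holdtime 15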

So I tried creating the sub-groups manually, and then the problem disappears!

If your uplink switch does not support EtherChannel (or you don't want to configure it), choose the mac-pinning option.

Now everything is stable and seems to be very solid (I've put in place a vPC-HM config with 4 links over 2 EtherChannels).
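For anyone hitting the same thing, here is roughly what my manual sub-group setup looks like (interface numbers and sub-group IDs are only an example, so check the exact syntax against your N1KV release; each sub-group maps to one EtherChannel on one upstream switch):

  port-profile system-uplink
    capability uplink
    switchport mode trunk
    switchport trunk allowed vlan 10-11,301-302
    channel-group auto mode on sub-group manual
    system vlan 10-11,301-302
    state enabled
    no shutdown

  interface Ethernet3/1
    sub-group-id 0
  interface Ethernet3/2
    sub-group-id 0
  interface Ethernet3/3
    sub-group-id 1
  interface Ethernet3/4
    sub-group-id 1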

Thank you for your help

NV

Yes, with the 'sub-group cdp' option of vPC-HM, the port will not be forwarding until the CDP information of the upstream switch is available.

It is recommended to use the 'mac-pinning' option for the vPC-HM configuration.

NOTE: Do not configure any channels on the upstream switch ports that are connected to the N1K VEM for a mac-pinning configuration.
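For example, with mac-pinning the only change in your uplink port profile is the channel-group line (the rest stays as in your original post; this is only a sketch to show the idea):

  port-profile system-uplink
    capability uplink
    switchport mode trunk
    switchport trunk allowed vlan 10-11,301-302
    channel-group auto mode on mac-pinning
    system vlan 10-11,301-302
    state enabled
    no shutdown

With this option each virtual port is pinned to one physical NIC, so the upstream Catalyst ports stay plain trunk ports with no channel-group configured.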

Thanks,

Naren
