1000v VSM doesn't push config changes to vCenter

eric.bauer
Level 1

Team,

My apologies if there are similar posts on this forum, but I'm in a bit of a time crunch.

I have a situation where a client's VSM is no longer pushing any config changes to vCenter.  The following snippet from the config shows 22 VLANs configured for the VM traffic uplink:

port-profile type ethernet 1000v-vm-traffic-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 32,75,120-124,253,255,259-261,263,266-268,270,740,745,785,797,888
  channel-group auto mode on sub-group cdp
  no shutdown
  description 1000v virtual machine traffic uplink
  state enabled

However, the vCenter vDS configuration is missing 6 of the VLANs (120-124, 270):

[Screenshot: vCenter vDS uplink port group VLAN list]

The VSM is connected to vCenter (show svs connections), it can ping vCenter, and I've validated that the XML plugin matches the VSM extension key.  I've tried removing the VLANs in question and re-creating them, but that did not trigger any updates to vCenter.  The control and packet VLANs are configured as system VLANs in the port profiles:

port-profile type ethernet 1000v-system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 3321-3322
  channel-group auto mode on sub-group cdp
  no shutdown
  system vlan 3321-3322
  description 1000v packet control system uplink
  state enabled

port-profile type vethernet 1000v-control
  vmware port-group
  switchport mode access
  switchport access vlan 3322
  no shutdown
  system vlan 3322
  state enabled
port-profile type vethernet 1000v-packet
  vmware port-group
  switchport mode access
  switchport access vlan 3321
  no shutdown
  system vlan 3321
  state enabled
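
For reference, the connectivity checks I ran on the VSM were roughly these (exact output omitted):

show svs connections
show svs domain
show module

The first confirms the vCenter connection is up and enabled, the second shows the domain ID and the control/packet VLANs, and the third confirms the VEMs are inserted and online.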

This was working at the initial install (hence the VLANs present in the vCenter uplinks) but has since stopped syncing.  Any help would be greatly appreciated; let me know if I can provide any additional information.

Thanks in advance!

virtualEB


Sorry for the delay in response, Robert.

I tried 'switchport trunk allowed vlan except 3321-3322' on the original vm-uplink port profile, but the change is not being reflected in vCenter (the VLAN IDs for that uplink port in vCenter did not change at all).  It's been 15 minutes with no update, which I suppose is expected, as this is the root of my problem.

So I'm about to create the new uplinks with the mac-pinning option, but I'm not sure how to migrate a host's VEM to the new uplink when I'm done.

Eric,

Thanks for that test.   Can you try this next:

Create a new test vlan "vlan 999"
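
From the VSM, something like this (the VLAN name is just illustrative):

config t
  vlan 999
    name sync-test
  exit

If the sync is working, the uplink port group's VLAN list in VC should pick it up shortly after.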

If it doesn't immediately push to VC, go to the Ports tab in VC, click the "Start Monitoring Port State" link in the top right, and see if that refreshes your VLAN list.  Shortly after you start monitoring you can stop it.

Let me know if this helps.

Also - you're definitely allowing the missing VLANs on the upstream interfaces, and these VLANs have been created on each switch, correct?

Regards,

Robert

I tried adding test VLAN 999, and the uplink PP in VC doesn't refresh after the add.  "Start Monitoring Port State" didn't work either after about 5 minutes.  I then added a vethernet PP for VLAN 999 and it does show up in VC, but the VLAN list of the uplink ethernet PPs just won't update.

I don't have access to the upstream switch configuration.  We went through this exercise with the customer last week; there were 3-4 VLANs not configured northbound (7 total missing from VC), which have since been added.  I'm going to the customer site today and will verify that they are allowed on both upstream switches.

I've also confirmed everything is configured on my end (although I don't have access to add VLAN 999 to the upstream configuration):

1. VLAN created in UCS

2. Attach VLAN to vNICs and verify UCS service profile vNICs have VLAN associated

3. Add VLAN to 1000v

4. Verify VLAN allowed on uplink PP  (switchport trunk allowed vlan except 3321-3322)

5. Add vethernet PP for VLAN 999 (shows up in VC as Port Group)
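
For what it's worth, the 1000v side of steps 3-5 looked roughly like this (the profile name is just what I used for the test):

vlan 999

port-profile type vethernet vlan999-test
  vmware port-group
  switchport mode access
  switchport access vlan 999
  no shutdown
  state enabled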

I created the new uplink PPs with the mac-pinning option and tried to remove the original PP from one of the VEM's interfaces, but received an error to the effect of 'can't de-inherit profile from an interface that is attached'.  I tried shutting down that interface as well, but received the same error.  How do I migrate a host's VEM to the new uplink PP?

Thanks,

Eric,

This is looking more and more like an L2 problem somewhere northbound with those specific VLANs.  If new PPs show up fine when pushed from the VSM, your VC connection is fine.

In regards to any VEMs on UCS, our suggested uplink method is mac pinning (see the sketch below).  Once you've created the new uplink port profile, do the following for each host, one by one:

1. In VC, go to the Configuration tab - Networking - vDS tab - and select Modify Physical Uplink.

2. Remove one uplink from the old uplink port profile and "add" it to the new one.

3. Click OK and let the system process the command before repeating on any remaining links.

Once ALL hosts have been migrated, you'll be able to delete the old unused PP; you can't delete a PP while it's in use.  To confirm, use "show port-profile usage" from the VSM.
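
For reference, a mac-pinning uplink profile would look roughly like this (the name is just an example; it mirrors your existing VM traffic uplink, swapping only the channel-group mode):

port-profile type ethernet 1000v-vm-uplink-macpin
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan except 3321-3322
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled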

Next time you get access to your customer's switches, trace the path from your VEMs all the way to your vCenter and ensure the VLAN is both created and allowed all the way through.  "show mac address-table vlan x" will quickly show whether a switch is learning any MAC addresses in that VLAN.
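
On each switch in the path, something along these lines (using VLAN 999 as the example) should pinpoint where a VLAN goes missing:

show vlan id 999
show interface trunk
show mac address-table vlan 999

The first confirms the VLAN exists on that switch, the second confirms it's in the allowed list on the trunks, and the third shows whether the switch is learning any MACs in it.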

Can you describe:

-Topology, including all switching devices between your VSM, VEMs & VC?

-Is VC a VM or physical server?

Robert

Robert,

I really appreciate your efforts on this.  And I'm starting to agree there may be a problem elsewhere.

The VC is a VM, and the VSM shows no errors regarding connectivity to VC.  The northbound core is two Nexus 7000 switches.  Each fabric interconnect has one uplink to each 7000 in a port-channel.  Each ESX host is presented 8 vNICs; 6 of those are in the VC 1000v vDS, and 4 of those are used for the paired uplinks (1000v-system-uplink and 1000v-vm-traffic-uplink).  I configured the two uplinks with 'sub-group cdp' because of the port-channel links northbound from the fabric interconnects.

I have created the mac-pinning PPs and have a host at my disposal for testing.  I'll try moving it to see if there's any difference with my test VM's pings.  I'm also meeting with the customer shortly to troubleshoot at the 7000 level.

Thanks again!

Robert,

I am happy to report that moving the VEMs to the new uplink configuration (mac-pinning) worked!!

Thanks very much for your assistance and your tenacity on resolution!

Eric

Great to hear, Eric!

After you mentioned UCS in your last post, I figured that might have contributed to the issues here.

Here's a best practice guide for the 1000v on UCS for your reference, which details uplink configuration (vPC-HM) and other recommendations.

Regards,

Robert

Robert,

Two of six hosts lost their connection to the 1000v.  The VMs that were migrated there after moving the host VEMs show 'invalid backing' for their NICs' port group.  I've tried reverting the hosts' adapters back to the original port profile, to no avail.  Any ideas?