Multiple Nexus 1000V / VSM pairs on the same data-centre / vCenter?

j1mbo78
Level 1

Hi,

I have a data-centre with three clusters configured.  I have installed a Nexus 1000V with Primary / Secondary VSMs in one cluster and all is working well.


I have now installed a new 1000V and VSM on the second cluster and have successfully created an SVS connection.  When I try to add a host to the switch in vCenter it spits out an error at me (see attached).  I don't believe this is a control VLAN issue like normal, as I am able to put VMs on different hosts in the Control VLAN and ping from host to host.


Can anybody confirm whether it is supported to have multiple Nexus 1000Vs / VSM pairs on the same data-centre, registered with the same vCenter Server?

Many Thanks,

James Smith

13 Replies

Robert Burns
Cisco Employee

James,

Yes, you can have multiple VSM instances per vCenter Data Center.  You must have a unique name and domain ID for each VSM instance.
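For example (switch names and domain IDs below are made up for illustration):

```
! VSM pair 1
switchname n1kv-vsm-01
svs-domain
  domain id 100

! VSM pair 2 -- must use a different name and domain ID
switchname n1kv-vsm-02
svs-domain
  domain id 200
```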

Robert

Hi Robert,


Thanks for the prompt reply.

Are you able to point me in the direction of where this is documented on CCO or the VMware website?

I need to show this to my customer as they are clearly going to need to restructure their environment a little.

Cheers,

James Smith

Hey j1mbo78,

Getting started guide (Cisco Nexus 1000V Getting Started Guide, Release 4.0(4)SV1(3)), page 4-10, step 6, in the comment section to the right:

"There can be only one active connection at a time. If a previously-defined connection is up, an error message displays and the command is rejected until you close the previous connection using the no connect command."
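In other words, if a connection is already defined on the VSM you have to close it before bringing up a new one. Roughly like this (connection names, IP address, and datacenter name are placeholders):

```
svs connection oldVC
  ! close the previously active connection first
  no connect
svs connection newVC
  protocol vmware-vim
  remote ip address 192.0.2.10
  vmware dvs datacenter-name DC1
  connect
```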

That should do the trick. It looks like the design needs to be revisited by the integrator, as it appears flawed.

Let me know if you do not have a copy. I can provide if you like.

gd

Sorry James, I stand corrected.  See my updated post above.

You can have multiple VSMs per Datacenter (tested this in my lab).  The VSM names and domain IDs for each instance need to be unique.

While we're on the topic, other than the "DMZ - Corporate Network" reason, are there any other cases where you feel you would need multiple 1000V instances in a single DC?  The whole goal of a distributed switch is to consolidate management and security.

I'm in the process of getting the documentation updated with the actual limitation.  I have to confirm with development on what it actually is.

Robert

Hi Robert,

Thanks for the update.  I did actually manage to add a 3rd host to the data-center, so there must be some other issue with the second host.  I'm going to try uninstalling the switch on that host and putting it back on again; something strange is going on.  Any tips on the uninstall process, in addition to what's in the troubleshooting guide?

In terms of reasons for having multiple 1ks in the same data center, there's definitely a requirement to isolate PCI and Non-PCI environments.

Cheers,

James Smith

James,

If you follow the troubleshooting guide you should be able to isolate the issue.  Re-installing may or may not fix the problem.  Try to find the cause first if possible.

One thing you can do that is less impactful is to reset the VEM service.  From the VEM CLI, issue "vem restart" and see if that magically clears up your issue.  If not, you have bigger issues and you'll need to isolate them by working through the troubleshooting guide.
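Something along these lines from the host CLI (output will vary by host):

```
# Check whether the VEM agent is loaded and running
vem status
# Restart the VEM agent without rebooting the host
vem restart
# Confirm the VEM came back up and is talking to the VSM
vem status
vemcmd show card
```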

As for PCI vs non-PCI, can you elaborate/explain what you're referring to here?

Thanks,

Robert

Hi Robert,

I won't pretend to be a security expert but my take on PCI-DSS (Payment Card Industry Data Security Standard) is that it's a security framework specifying security standards for network environments used for e-commerce.

Cheers,

James Smith

Hi Robert,

Rebuilding the VSM has not solved my issue, it was a last resort and now I'm totally stumped.

I have worked through the troubleshooting guide and tried every single thing I can think of, but I just cannot add a host to this particular DVS (the 2nd one on the data-centre).  I added a third host to the 3rd DVS today with no issue; the problem is always associated with the hosts in the second cluster.

I probably should have mentioned before that the hosts are actually blades in a UCS.  There is a difference between the 2nd cluster and the other two: hosts in the second cluster have 10 vNICs provisioned, while the other working hosts have 8.  This is the only difference I can find.  Do you know of any limitation on vNICs provisioned to a VEM?  Just to recap, when I try to add the host to the DVS in vCenter it creates the task and then, after a delay, spits out an error (attached in my first post).  I also found the following today in the logs.

[2010-07-12 05:38:11.543 1651AB90 info 'TaskManager'] Task Created : haTask--vim.dvs.HostDistributedVirtualSwitchManager.createDistributedVirtualSwitch-538603247
[2010-07-12 05:38:11.543 1651AB90 verbose 'NetworkProvider'] ActionAddDVSwitch: 5b 3f 24 50 94 97 3e c0-5a 83 b0 e2 e4 4a ee 2d
[2010-07-12 05:38:11.614 1651AB90 warning 'NetworkProvider'] Error adding dvs 5b 3f 24 50 94 97 3e c0-5a 83 b0 e2 e4 4a ee 2d : Create DVSwitch failed with the following error message: SysinfoException: Node (VSI_NODE_net_create) ; Status(bad0007)= Bad parameter; Message= Instance(0): Input(3) DvsPortset-3 256 cisco_nexus_1000v
[2010-07-12 05:38:11.614 1651AB90 info 'App'] AdapterServer caught exception: vim.fault.PlatformConfigFault
[2010-07-12 05:38:11.614 1651AB90 info 'TaskManager'] Task Completed : haTask--vim.dvs.HostDistributedVirtualSwitchManager.createDistributedVirtualSwitch-538603247 Status error

Jul 12 05:38:11 Hostd: [2010-07-12 05:38:11.543 1651AB90 info 'TaskManager'] Task Created : haTask--vim.dvs.HostDistributedVirtualSwitchManager.createDistributedVirtualSwitch-538603247
Jul 12 05:38:11 Hostd: [2010-07-12 05:38:11.543 1651AB90 verbose 'NetworkProvider'] ActionAddDVSwitch: 5b 3f 24 50 94 97 3e c0-5a 83 b0 e2 e4 4a ee 2d
Jul 12 05:38:11 vmkernel: 48:18:51:18.586 cpu9:605726)Uplink: 7362: Couldn't find DvsPortset-3. Creating ps DvsPortset-3
Jul 12 05:38:11 vmkernel: 48:18:51:18.586 cpu9:605726)NetPortset: 859: activating portset #6 as DvsPortset-3 (cisco_nexus_1000v) with 256 ports, index mask is 0xff
Jul 12 05:38:11 vmkernel: 48:18:51:18.586 cpu9:605726)WARNING: Net: 1114: DvsPortset-3: can't create device: Bad parameter
Jul 12 05:38:11 Hostd: [2010-07-12 05:38:11.614 1651AB90 warning 'NetworkProvider'] Error adding dvs 5b 3f 24 50 94 97 3e c0-5a 83 b0 e2 e4 4a ee 2d : Create DVSwitch failed with the following error message: SysinfoException: Node (VSI_NODE_net_creat
Jul 12 05:38:11 e) ; Status(bad0007)= Bad parameter; Message= Instance(0): Input(3) DvsPortset-3 256 cisco_nexus_1000v
Jul 12 05:38:11 Hostd: [2010-07-12 05:38:11.614 1651AB90 info 'App'] AdapterServer caught exception: vim.fault.PlatformConfigFault
Jul 12 05:38:11 Hostd: [2010-07-12 05:38:11.614 1651AB90 info 'TaskManager'] Task Completed : haTask--vim.dvs.HostDistributedVirtualSwitchManager.createDistributedVirtualSwitch-538603247 Status error
Jul 12 05:38:11 Hostd: (vim.dvs.HostDistributedVirtualSwitchManager.DVSCreateSpec) {    dynamicType = <unset>,     uuid = "5b 3f 24 50 94 97 3e c0-5a 83 b0 e2 e4 4a ee 2d",     name = "au-pci-vsmp02",     switchIpAddress = <unset>,     uplinkPortgrou
Jul 12 05:38:11 pKey = (string) [       "dvportgroup-1170",        "dvportgroup-1168"    ],     uplinkPortKey = (string) [       "224",        "225",        "226",        "227",        "228",        "229",        "230",        "231",        "232",        "233",        "2
Jul 12 05:38:11 34",        "235",        "236",        "237",        "238",        "239",        "240",        "241",        "242",        "243",        "244",        "245",        "246",        "247",        "248",        "249",        "250",        "251",        "252"
Jul 12 05:38:11 ,        "253",        "254",        "255"    ],     modifyVendorSpecificDvsConfig = true,     vendorSpecificDvsConfig = (vim.dvs.KeyedOpaqueBlob) [       (vim.dvs.KeyedOpaqueBlob) {          dynamicType = <unset>,           key = "com.cisco.svs.switch.cd
Jul 12 05:38:11 p",           opaqueData = "status=listen",        },        (vim.dvs.KeyedOpaqueBlob) {          dynamicType = <unset>,           key = "com.cisco.svs.switch.config",           op
Jul 12 05:38:11 Hostd:
Jul 12 05:38:11 Hostd: [2010-07-12 05:38:11.808 1651AB90 info 'Vmomi'] Throw vim.fault.PlatformConfigFault
Jul 12 05:38:11 Hostd: [2010-07-12 05:38:11.808 1651AB90 info 'Vmomi'] Result:
Jul 12 05:38:11 Hostd: (vim.fault.PlatformConfigFault) {    dynamicType = <unset>,     faultCause = (vmodl.MethodFault) null,     text = "Create DVSwitch failed with the following error message: SysinfoException: Node (VSI_NODE_net_create) ; Status(b
Jul 12 05:38:11 ad0007)= Bad parameter; Message= Instance(0): Input(3) DvsPortset-3 256 cisco_nexus_1000v ",     msg = "",  }

Many Thanks,

James Smith

...one other thing: I can't verify this until tomorrow, but I've been reading more forums and blogs and it seems there can be issues with VUM and VEM installations.

I have installed all of my VEMs manually (using vihostupdate.pl from the Remote CLI), including on the troublesome hosts.  I have a feeling that VUM is running on the 2nd cluster on which I cannot add hosts to the DVS.  Do you think that when I add the host (even though the VEM is already installed) there could be some sort of issue with VUM?
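For reference, the manual installs were along these lines from the Remote CLI (host name and bundle file name below are placeholders):

```
# Install the VEM bundle on the host directly, bypassing VUM
vihostupdate.pl --server esx-host-01 --username root --install --bundle VEM-bundle.zip
# Then confirm what is installed on the host
vihostupdate.pl --server esx-host-01 --username root --query
```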

Cheers,

James Smith

Can you get the following output from the CLI of your host:

vemcmd show card

vem status

esxupdate query

rpm -qa | grep vmkernel | awk  -F. '{print $5}'
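That last pipeline just pulls the fifth dot-separated field out of the installed vmkernel package name (presumably to check the host build against the VEM version). With a made-up package string for illustration:

```shell
# awk -F. splits on '.', so $5 is the fifth dot-separated field
echo "vmware-esx-vmkernel64.4.0.0.261974.x86_64" | awk -F. '{print $5}'
# → 261974
```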

Robert

Hi Rob,

Any ideas on how to do that on an ESXi server?

I have Remote CLI.

Cheers,

James

You'll need console access (KVM) or direct host access.

Alt + F1

(At the blank screen type) "unsupported"

(Then enter your root password)

This will give you a very stripped-down shell to run limited commands, including the ones I listed.

Robert

Hi Robert,

Thanks for the tip.  So the only way to run VEM commands directly on an ESXi host is through a method VMware do not support?

Anyway, when we ran your suggested commands we got a strange error back -

error initialising sf_dpa_api_init

I had also noticed this in one of the VMware logs the previous day but didn't pay much attention to it.

This suggested that the VEM software, and therefore the commands, were there but not quite as they should be, which tied in with the VEM software having been installed with vihostupdate.pl but not functioning properly.  Anyway, we migrated all of the VMs off each host, rebooted each host, and then suddenly the VEMs showed up on the VSM.  I have no idea why this happened on these two particular hosts and not the other six, but it seems the VEM software wouldn't work properly until we rebooted the host.

Incidentally, I couldn't find which directory the rpm command was in, in order to run it; I checked /etc, /bin and /sbin.  Could you let me know where it is for future reference?

Thanks for all of your patient assistance!

Cheers,


James Smith
