I would be very happy if someone could help me.
I am trying to install the Cisco Nexus 1000V on 5 ESXi hosts.
I was able to install the VSM and the VEM on each of the ESXi hosts, but I wasn't able to connect them.
Every time I try to add a host to my Nexus switch, I get this error:
"vDS operation failed on host hostnamexxxx, An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception"
I don't have any SVS neighbors, and 'show module' shows only the VSM.
I am using Nexus 1000V Release 4.2(1)SV2(2.1) and ESXi version 5.1.
I tried to use the installer application that comes with this release, but when I try to deploy the VSM I get an error there as well.
Thanks in advance
Please ensure the VEM is actually installed on the VMware host.
If not, please manually install the VEM on the host and check whether this resolves the problem.
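As a sketch of how to check and install this from the ESXi shell (the VIB path and filename below are placeholders; the exact name depends on your N1KV/ESXi version):

```
# Check whether the VEM is installed and running
vem status -v
esxcli software vib list | grep cisco-vem

# If it is missing, copy the matching VEM VIB to the host and install it
esxcli software vib install -v /tmp/cross_cisco-vem-<version>.vib
```

After installing, 'vem status' should show the VEM modules loaded.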
Great. You said you have manually installed/verified the VEM installation on the server.
If you are using Layer 3 connectivity, can you please verify the VMkernel mapping for the VSM-VEM connectivity?
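For example, on the ESXi host (interface names will differ in your setup):

```
# List the VMkernel NICs and their IPs; one of them should be attached
# to the N1KV port-profile used for L3 control
esxcfg-vmknic -l

# Check whether the VEM sees the VSM over that path
vemcmd show card
```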
How do I do this? How do I configure the VEM to work with the VSM? I haven't found any configuration commands for the VEM.
Please help me with this... Thanks.
In our lab we have one physical server. We have created two ESXi servers on it and are planning to implement the Nexus 1000V in HA on it. Will it work?
Thanks and regards,
Yes. I've created a doc detailing how to set up the N1K in a nested-ESX environment.
I was finally able to add the ESXi host to the Nexus 1000V, but now every port I add shows:
"port blocked by admin"
Does anyone know what to do in this situation?
Do you see the VEM modules inserted into the VSM? You can check using 'show module' on the VSM.
Where are you seeing port blocked by admin?
If it's for VMs, does the port-profile have a 'no shutdown' and are the interfaces enabled on the OS level?
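For reference, a minimal vEthernet port-profile on the VSM would look something like this (the profile name and VLAN are placeholders):

```
port-profile type vethernet VM-DATA
  switchport mode access
  switchport access vlan 100
  vmware port-group
  no shutdown
  state enabled
```

Without 'no shutdown' and 'state enabled', ports attached to the profile stay blocked.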
Thanks for the fast reply.
I don't see the VEM under 'show module'.
I do see it in vCenter under the port-profiles.
The port-profiles have the 'no shutdown' command.
Well, there has to be communication between the VEM and the VSM for your ports to come up. If 'show module' does not show your VEM, that communication is not up. It's a little complicated to troubleshoot this here. To start with:
Are you using the L2 mode or the L3 mode?
If you are using L2 mode, you need to make sure that there are no blockages on the control VLAN between the VSM's NIC 1 and the VEM uplinks.
If you are using L3 mode, the vmk IP address that you chose as l3-capable must be able to reach the VSM interface that you chose as the L3 point of contact.
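In L3 mode, the VSM side would look roughly like this (the domain ID, VLAN, and interface choice are examples, not your values):

```
! SVS domain on the VSM, using mgmt0 as the L3 control interface
svs-domain
  domain id 100
  svs mode L3 interface mgmt0

! Port-profile for the l3-capable vmknic on each host
port-profile type vethernet L3-Control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  state enabled
```

The vmknic on each ESXi host must be attached to a port-profile carrying 'capability l3control', and its IP must be able to reach mgmt0 (or control0, whichever you chose).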
Can you tell me what changes you made to make the uplinks come up?
And do you see the modules showing up on the VSM?
If yes, then you should proceed looking at the Vethernet ports.
Post the output of 'show module' and 'show interface brief' from the VSM, and 'vemcmd show port' and 'vemcmd show card' from the ESXi host.
Like I said, it might be hard to troubleshoot this over here. Get the following outputs and we can give it a shot:
From the VSM:
- show svs domain
- Port-profile that you are using for the uplink interface
- If L3 mode, port-profile that you are using for the l3 capable vmknic
From the VEM:
- vemcmd show card
- vemcmd show port
- esxcfg-vmknic -l (if using L3 mode)
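The key thing to cross-check in those outputs is the domain ID, which must match on both sides (the value below is a placeholder):

```
VSM# show svs domain
  -> note the "Domain id" value (e.g. 100)

ESXi# vemcmd show card
  -> the "Card domain id" must report the same value
```

A mismatched or zero domain ID on the VEM usually means the host never received the opaque data from vCenter.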
The above will help us see whether there is an error in your Nexus 1000V configuration.
Try to describe the topology between the ESXi host and the VSM.
Please see if you can follow the deployment guide:
If you are using the UCS: