
Problem Deploying Nexus 1000v

arikgelman
Level 4

Hi,

I would be very happy if someone could help me.

I am trying to install the Cisco Nexus 1000V on 5 ESXi hosts.

I was able to install the VSM and the VEM on each of the ESXi hosts, but I can't get them to communicate.

Every time I try to add a host to my Nexus switch I get this error:

"vDS operation failed on host hostnamexxxx, An error occurred during host configuration. got (vim.fault.PlatformConfigFault) exception"

I don't see any SVS neighbors, and "show module" shows only the VSM.

I am using Nexus 1000V Release 4.2(1)SV2(2.1) and ESXi version 5.1.

I also tried the installer application that comes with this version, but when I try to deploy the VSM I get some kind of error as well.

Thanks in advance
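As a general first step for this kind of vDS add-host failure (a sketch of checks, not specific to any one setup), it is worth confirming that the VEM VIB installed on each host matches the VSM release, and that the VSM's connection to vCenter is healthy, before retrying the add:

~ # vem status
~ # vem version
~ # esxcli software vib list | grep cisco-vem

Nexus1000V# show svs connections
Nexus1000V# show module

The first three commands run in the ESXi shell and show whether the VEM agent (vemdpa) is running and which VEM build is installed; the last two run on the VSM and should show the vCenter connection as Connected with sync Complete, plus any VEMs that have already registered as modules. A VEM/VSM version mismatch is a common cause of the (vim.fault.PlatformConfigFault) error when adding a host.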

32 Replies

Hi Team,

I have configured the VSM and VEM on the same ESX server (10.10.1.3), but I am unable to get communication between them.

I am not able to see the VEM in the VSM's "show module" output. The VSM and vCenter (10.10.1.12) are communicating properly. The VSM is in L2 mode.

Below are some outputs:

Nexus1000V# sh svs domain
SVS domain config:
  Domain id:    1
  Control vlan: 111
  Packet vlan:  113
  L2/L3 Control mode: L2
  L3 control interface:  NA
  Status: Config push to VC successful.
Nexus1000V# sh svs connections

connection LM_NSK:
    ip address: 10.10.1.12
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: DC1
    admin:
    max-ports: 8192
    DVS uuid: 7c 0d 38 50 5c 0c 2b 26-b0 d9 34 b3 9e df 29 27
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 5.1.0 build-799731
    vc-uuid: 6643B3C4-31A9-4A62-A9D5-6B214FB62CE5
Nexus1000V#
Nexus1000V#

~ # vemcmd show trunk
Trunk port 16 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(3972) cbl 1, vlan(3970) cbl 1, vlan(3969) cbl 1, vlan(3968)                                                                               cbl 1, vlan(3971) cbl 1, vlan(111) cbl 1, vlan(113) cbl 1,
~ # vemcmd show card info
Card UUID type  2: 44454c4c-5900-104b-804d-cac04f393253
Card name: Esxi03
Switch name: Nexus1000V
Switch alias: DvsPortset-0
Switch uuid: 7c 0d 38 50 5c 0c 2b 26-b0 d9 34 b3 9e df 29 27
Card domain: 1
Card slot: 1
VEM Tunnel Mode: L2 Mode
VEM Control (AIPC) MAC: 00:02:3d:10:01:00
VEM Packet (Inband) MAC: 00:02:3d:20:01:00
VEM Control Agent (DPA) MAC: 00:02:3d:40:01:00
VEM SPAN MAC: 00:02:3d:30:01:00
Primary VSM MAC : 00:50:56:b8:68:58
Primary VSM PKT MAC : 00:50:56:b8:38:97
Primary VSM MGMT MAC : 00:50:56:b8:34:7b
Standby VSM CTRL MAC : ff:ff:ff:ff:ff:ff
Management IPv4 address: 10.10.1.3
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Secondary VSM MAC : 00:00:00:00:00:00
Secondary L3 Control IPv4 address: 0.0.0.0
Upgrade : Default
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 111
Card packet VLAN: 113
Card Headless Mode : Yes
       Processors: 4
  Processor Cores: 4
Processor Sockets: 1
  Kernel Memory:   16767124
Port link-up delay: 5s
Global UUFB: DISABLED
Heartbeat Set: False
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: No

~ # vemcmd show port vlans
                          Native  VLAN   Allowed
  LTL   VSM Port  Mode    VLAN    State  Vlans
   18     Eth1/2   A          1   BLK    1
~ #
~ # vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
   18     Eth1/2   DOWN   UP    BLK       0          vmnic1

I am new to this technology, so I just want to know whether there is communication between vCenter and the VEM.

Thanks and Regards,

Surya
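As a general note (not specific to this lab): in L2 control mode the VEM registers with the VSM over the control VLAN, not through vCenter, so the quickest checks are whether the VEM appears as a module on the VSM and whether the control VLAN is actually forwarding on the VEM's uplink:

Nexus1000V# show module
Nexus1000V# show svs neighbors

~ # vemcmd show card
~ # vemcmd show port vlans

If "show module" lists only the VSM, and the control VLAN (111 here) does not appear in the Allowed Vlans column of "vemcmd show port vlans" for the uplink, the VEM cannot reach the VSM even though the VSM-to-vCenter connection is fine; vCenter only pushes the port-group configuration, it does not carry the VEM-to-VSM control traffic.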

Hi Team,

Below are some outputs. Can you help me understand why the first link is shown as blocking? I have configured VLAN 111 as the control VLAN, but it is not showing.


vemcmd show port
  LTL   VSM Port  Admin  Link  State  PC-LTL  SGID  Vem Port  Type
   18             DOWN   UP    BLK         0        vmnic1
   49             UP     UP    FWD         0        Nexus1000V-4.2...2-Primary.et

~ # vemcmd show port vlans
                          Native  VLAN   Allowed
  LTL   VSM Port  Mode    VLAN    State  Vlans
   18              A          1   BLK    1
   49              A        113   FWD    113

Thanks and regards,

surya

Updating the output again, as it did not display correctly in my previous comment.

vemcmd show port

  LTL   VSM Port  Admin   Link    State   PC-LTL   SGID   Vem Port   Type

   18              DOWN    UP     BLK        0             vmnic1

   49                UP    UP     FWD        0           Nexus1000V-4.2...2-Primary.et

~ # vemcmd show port vlans

                          Native   VLAN    Allowed

  LTL   VSM Port  Mode    VLAN     State   Vlans

   18              A          1    BLK     1

   49              A        113     FWD     113

1. Paste your running config from the VSM

2. The upstream switch's interface config (show run interface x/y) where the VEM uplinks connect.

Regards,

Robert

Hi Robert,

Appreciate your prompt response.

Tomorrow I will provide the running config.

Here is my topology

ESXserver(10.10.1.3/24)-----Dlink Switch---Vcenter(10.10.1.12/24)

The VSM (10.10.1.160/24) and VEM are on the same ESX server (10.10.1.3).

I have configured control VLAN 111, management VLAN 1, and packet VLAN 113.

There is no problem with VSM-to-vCenter communication.

My query is: since the VSM and VEM are on the same server, does the upstream switch config really matter? I have implemented L2 mode, and what I have read is that the VSM and VEM communicate over the control VLAN.

What I see on the VEM is the packet VLAN 113, but not the control VLAN 111.

Here are the outputs:

vemcmd show port

  LTL   VSM Port  Admin   Link    State   PC-LTL   SGID   Vem Port   Type

   18              DOWN    UP     BLK        0             vmnic1

   49                UP    UP     FWD        0           Nexus1000V-4.2...2-Primary.et

~ # vemcmd show port vlans

                          Native   VLAN    Allowed

  LTL   VSM Port  Mode    VLAN     State   Vlans

   18              A          1    BLK     1

   49              A        113     FWD     113

- Are you planning to use the VSM's interfaces on the Nexus 1000v or the VMWare vSwitch?

- From what it looks like, one of the interfaces is on the VEM

     - The output is truncated (for vemcmd show port)

     - Can you capture it again?

- If you are using the VSM's interfaces on the Nexus 1000v, you need to create the appropriate port-profiles and have 'system vlan' configured on them as well (see the sketch after this reply). Here the upstream switch config does not matter, since you have only one host.

- If you are planning to keep it on the VMWare vSwitch, then upstream switch config matters

Thanks,

Shankar
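For anyone following the thread, here is a rough sketch of the kind of port-profiles Shankar is describing, using the VLAN numbers mentioned earlier (111 control, 113 packet); the profile names are only illustrative:

port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 111-113
  no shutdown
  system vlan 111,113
  state enabled

port-profile type vethernet CONTROL
  vmware port-group
  switchport mode access
  switchport access vlan 111
  no shutdown
  system vlan 111
  state enabled

The ethernet profile is what the physical uplink (vmnic) is attached to when the host is added to the DVS, and the vethernet profile is the port-group the VSM's control NIC connects to. Marking the control and packet VLANs as system VLANs lets the VEM keep forwarding them even before it has received its full configuration from the VSM.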

Hi Shankar,

Thanks for prompt reply.

I want to use VSM interface.

I will share config tomorrow, as setup is in lab and for config I need to visit lab.

I have created profiles with system vlan.

My understanding is that the VEM replaces the vSwitch; please correct me if I am wrong.

Thanks

Surya

Hi Team,

Please suggest how I can do the setup. I want to integrate the Nexus 1000V switch into VMware.

Thanks ,

Surya

arikgelman
Level 4

OK!

I found the problem:

Because I am using L3 mode, I had to create a new vmkernel interface and add it to the Nexus 1000V for the VEM-to-VSM communication.

Now everything is working great!
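For reference, the L3 control configuration this points to looks roughly like the following on the VSM (a sketch with illustrative names and VLAN numbers), together with a vmkernel interface on each host placed in the l3control port-group:

svs-domain
  domain id 1
  no control vlan
  no packet vlan
  svs mode L3 interface mgmt0

port-profile type vethernet L3-CONTROL
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 111
  no shutdown
  system vlan 111
  state enabled

The key point, as described above, is that in L3 mode the VEM talks to the VSM over routed IP sourced from that vmkernel interface, instead of the control and packet VLANs used in L2 mode, so each host needs a vmk attached to the l3control port-group that can reach the VSM's mgmt0 (or control0) address.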

Hi Robert and Shankar,

We are running VEM version: cross_cisco-vem-v144-4.2.1.1.5.2.0-3.0.1.vib

vCenter version: 5.1.0

VSM: 4.2(1)SV1(5.2)

As requested, please find the config below.

Nexus1000V# sh version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained herein are owned by
other third parties and are used and distributed under license.
Some parts of this software are covered under the GNU Public
License. A copy of the license is available at
http://www.gnu.org/licenses/gpl.html.

Software
  loader:    version unavailable [last: loader version not available]
  kickstart: version 4.2(1)SV1(5.2)
  system:    version 4.2(1)SV1(5.2)
  kickstart image file is: bootflash:/nexus-1000v-kickstart.4.2.1.SV1.5.2.bin
  kickstart compile time:  8/2/2012 23:00:00 [08/03/2012 06:33:37]
  system image file is:    bootflash:/nexus-1000v.4.2.1.SV1.5.2.bin
  system compile time:     8/2/2012 23:00:00 [08/03/2012 07:34:01]


Hardware
  cisco Nexus 1000V Chassis ("Virtual Supervisor Module")
  Intel(R) Xeon(R) CPU         with 2075740 kB of memory.
  Processor Board ID T5056B8347B

  Device name: Nexus1000V
  bootflash:    1557496 kB

Kernel uptime is 0 day(s), 17 hour(s), 58 minute(s), 53 second(s)


plugin
  Core Plugin, Virtualization Plugin, Ethernet Plugin
Nexus1000V#
Nexus1000V# sh running-config

!Command: show running-config
!Time: Wed Aug 14 05:55:03 2013

version 4.2(1)SV1(5.2)
feature telnet
feature lacp

username admin password 5 $1$p4mlwmtY$R7Emo5pDDWEfbrvSzIhrr.  role network-admin

banner motd #Nexus 1000v Switch#

ip domain-lookup
ip host Nexus1000V 10.10.1.160
switchname Nexus1000V
errdisable recovery cause failed-port-state
snmp-server user admin network-admin auth md5 0x34901aae7d141546d58e617f64c9ae19 priv 0x34901aae7d141546d58e617f64c9ae19 localizedkey

vrf context management
  ip route 0.0.0.0/0 10.10.1.1
vlan 1,32,111-113
vlan 1
vlan 32
  name PROFILE_END_SYSTEM
vlan 111
  name CONTROL_111
vlan 112
  name MANAGEMNENT_112
vlan 113
  name packet_113

lacp offload
port-channel load-balance ethernet source-mac
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type ethernet UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1,32,111-113
  channel-group auto mode active
  no shutdown
  system vlan 1,32,111-113
  state enabled
port-profile type vethernet PROFILE_END_SYSTEM
  vmware port-group
  switchport mode access
  switchport access vlan 32
  no shutdown
  system vlan 32
  state enabled
port-profile type vethernet CONTROL_111
  vmware port-group
  switchport mode access
  switchport access vlan 111
  no shutdown
  system vlan 111
  state enabled
port-profile type vethernet MANAGEMNENT_112
  vmware port-group
  switchport mode access
  switchport access vlan 112
  no shutdown
  system vlan 112
  state enabled
port-profile type vethernet packet_113
  vmware port-group
  switchport mode access
  switchport access vlan 113
  no shutdown
  system vlan 113
  state enabled

vdc Nexus1000V id 1
  limit-resource vlan minimum 16 maximum 2049
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource vrf minimum 16 maximum 8192
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 1 maximum 1
  limit-resource u6route-mem minimum 1 maximum 1
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8

interface mgmt0
  ip address 10.10.1.160/24

interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV1.5.2.bin sup-1
boot system bootflash:/nexus-1000v.4.2.1.SV1.5.2.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV1.5.2.bin sup-2
boot system bootflash:/nexus-1000v.4.2.1.SV1.5.2.bin sup-2
svs-domain
  domain id 1
  control vlan 111
  packet vlan 113
  svs mode L2
svs connection LM_NSK
  protocol vmware-vim
  remote ip address 10.10.1.12 port 80
  vmware dvs uuid "7c 0d 38 50 5c 0c 2b 26-b0 d9 34 b3 9e df 29 27" datacenter-name DC1
  max-ports 8192
  connect
vservice global type vsg
  tcp state-checks
vnm-policy-agent
  registration-ip 0.0.0.0
  shared-secret **********
  log-level


Nexus1000V#

~ # vemcmd show port
  LTL   VSM Port  Admin  Link  State  PC-LTL  SGID  Vem Port  Type
   18             DOWN   UP    BLK         0        vmnic1
   49             UP     UP    FWD         0        Nexus1000V-4.2...2-Primary.eth2
~ #

~ # vem status

VEM modules are loaded

Switch Name      Num Ports   Used Ports   Configured Ports   MTU    Uplinks
vSwitch0         128         9            128                1500   vmnic0

DVS Name         Num Ports   Used Ports   Configured Ports   MTU    Uplinks
Nexus1000V       256         13           256                1500   vmnic1

VEM Agent (vemdpa) is running

The VSM VM would have 3 NICs

Assign the first NIC to the CONTROL_111 port-profile

Assign the second NIC to the MANAGEMNENT_112 port-profile

This should enable connectivity between the VSM and the ESX host it is sitting on.

I am assuming that you are using VLAN 112 for the VSM management here.

Thanks,

Shankar
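One small addition (based on the standard Nexus 1000V VSM deployment, where adapters 1, 2, and 3 map to control, management, and packet): the VSM's third NIC would go into the packet_113 port-profile. Once all three NICs are attached, the control VLAN should come up on the VEM and the module should register, which can be verified with:

~ # vemcmd show port vlans

Nexus1000V# show module

At that point VLAN 111 should show a FWD state on the VSM's control vEthernet port, and "show module" on the VSM should list the ESXi host as a VEM module.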

Hi Shankar,

The VSM I have configured already has 3 NICs. Still, I am not able to see control VLAN 111 on the VEM, but I can see packet VLAN 113.

Please suggest.

Is the NIC connected?

Regards,

Shankar

Hi Shankar,

Are you referring to the virtual NIC? I am new to VM technology.

Can you please look at the attached snapshot and guide me on what I did wrong and what needs to be corrected?

Thanks ,

Surya

Hi Team,

I am running the VSM and VEM on the same server, but I am not able to see the VEM from the VSM. The VSM is communicating successfully with the vCenter server.

Can anyone confirm how many physical NICs an ESX server (running both the VSM and VEM) should have?

Thanks,

Surya.
