1279 Views · 4 Helpful · 4 Replies

Need Help to Deploy 1KV

Steven Williams
Level 4

I think a lot of my issues with deploying the 1KV come down to my understanding of the system VLAN and maybe the control VLAN.

Take a look at my diagram. My VSM mgmt0 interface is on the same subnet as my ESXi hosts, which is VLAN 202. From the ESXi hosts I can ping the VSM, and from the VSM I can ping the ESXi hosts, so that's working. But when my VSMs are on two separate hosts, they don't communicate with each other.

Now, the control VLAN is 201. It is created on the Nexus 5Ks and is allowed on both vPC15 and vPC16. It is also created in UCSM, included in each ESXi host's service profile, and carried by vNICs 4 and 5. I have a port group on the current vDS for the control traffic, tagged with VLAN 201.

[Attachment: Capture.PNG]

This is how my current vDS looks. Both VSMs are up and working in active/standby. When I added the hosts to the Nexus 1000V vDS, VUM installed the VEMs, and I verified they are present on the ESXi hosts with the "vem status" command.

"show mod" on the 1000v does not show any VEMs, and "show svs neighbors" only shows the VSM.

Here is the running config of the 1000v, which is more than likely not correct.

show run

!Command: show running-config
!Time: Fri Nov  8 16:50:55 2013

version 4.2(1)SV2(2.1a)
svs switch edition essential

no feature telnet

username admin password 5 $1$pIdF9m7q$PIhIpsr//2BIkySzd5y9r.  role network-admin

banner motd #Nexus 1000v Switch#

ip domain-lookup
ip host N1KV-01 10.170.202.5
switchname N1KV-01
errdisable recovery cause failed-port-state
vem 3
  host id c4b52629-fbe7-e211-0000-000000000005
snmp-server user admin network-admin auth md5 0x7bfb0100d1a2c5faf79c77aad3c8ecec priv 0x7bfb0100d1a2c5faf79c77aad3c8ecec localizedkey
snmp-server community atieppublic group network-operator
ntp server 10.170.5.10

vrf context management
  ip route 0.0.0.0/0 10.170.202.1
vlan 1,5,201-203

port-channel load-balance ethernet source-mac
port-profile default max-ports 32
port-profile type ethernet Unused_Or_Quarantine_Uplink
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type ethernet VM-Sys-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 201
  no shutdown
  system vlan 201
  state enabled
port-profile type ethernet Mgmt1-Uplinks
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 202
  no shutdown
  system vlan 202
  state enabled
port-profile type vethernet N1KV-Control
  vmware port-group
  switchport mode access
  switchport access vlan 201
  no shutdown
  system vlan 201
  state enabled
port-profile type vethernet Mgmt1
  vmware port-group
  switchport mode access
  switchport access vlan 202
  no shutdown
  system vlan 202
  state enabled
port-profile type vethernet vMotion
  vmware port-group
  switchport mode access
  switchport access vlan 203
  no shutdown
  state enabled
port-profile type vethernet Servers-Prod
  switchport mode access
  switchport access vlan 5
  no shutdown
  state enabled
port-profile type vethernet N1kV-Control
  vmware port-group
  switchport access vlan 201
  switchport mode access
  no shutdown
  system vlan 201
  state enabled
port-profile type vethernet N1kV-Mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 202
  no shutdown
  system vlan 202
  state enabled

vdc N1KV-01 id 1
  limit-resource vlan minimum 16 maximum 2049
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource vrf minimum 16 maximum 8192
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 1 maximum 1
  limit-resource u6route-mem minimum 1 maximum 1

interface mgmt0
  ip address 10.170.202.5/24

interface control0
line console
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.2.1a.bin sup-1
boot system bootflash:/nexus-1000v.4.2.1.SV2.2.1a.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.2.1a.bin sup-2
boot system bootflash:/nexus-1000v.4.2.1.SV2.2.1a.bin sup-2
svs-domain
  domain id 202
  control vlan 1
  packet vlan 1
  svs mode L3 interface mgmt0
svs connection vcenter
  protocol vmware-vim
  remote ip address 10.170.5.35 port 80
  vmware dvs uuid "7a 82 10 50 a3 3c 4c fe-df 91 60 28 66 1d 6f 59" datacenter-name Nashville HQ
  admin user n1kUser
  max-ports 8192
  connect
vservice global type vsg
  tcp state-checks invalid-ack
  tcp state-checks seq-past-window
  no tcp state-checks window-variation
  no bypass asa-traffic
vnm-policy-agent
  registration-ip 0.0.0.0
  shared-secret **********
  log-level


N1KV-01#

4 Replies

sprasath
Level 1

Hello,

You are using L3 mode for communication between the VSM and the VEM, so you need a VMkernel interface on the ESXi host chosen for this purpose. To do that, add 'capability l3control' under the corresponding VMkernel's port-profile.
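As a minimal sketch, such a vethernet port-profile might look like the following, assuming the L3 control VMkernel lives on the management VLAN 202 from your config (the profile name here is hypothetical):

```
port-profile type vethernet N1KV-L3Control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 202
  no shutdown
  system vlan 202
  state enabled
```

The VMkernel you attach to this port group then carries the VSM-to-VEM control traffic over IP.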

The following deployment guide provides a lot of useful information when deploying the N1k on the UCS:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-704280.html

Thanks,

Shankar

So would the need for a VMkernel imply that the control VLAN needs an IP address?

The two endpoints in your connection are the VSM's mgmt interface and a VMkernel on the ESXi host. You can use the same VMkernel that you are using for host management, or a separate one. If you use a separate one, it needs its own IP address, it must be on a port-profile that has 'capability l3control', and there must be IP connectivity between this VMkernel and the VSM's mgmt interface.

Also, whatever port-profile you use for the L3 VMkernel, please make sure its VLAN is configured as a system VLAN on both the vEthernet port-profile and the Ethernet (uplink) port-profile.
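To sketch that requirement against your own config: the same VLAN must appear as a system VLAN in both the uplink profile and the vEthernet profile (the vethernet profile name below is hypothetical; the uplink profile is your existing one):

```
port-profile type ethernet Mgmt1-Uplinks
  switchport trunk allowed vlan 202
  system vlan 202

port-profile type vethernet N1KV-L3Control
  capability l3control
  switchport access vlan 202
  system vlan 202
```

If the VLAN is a system VLAN on only one side, the control path will not come up before the VEM is programmed, which matches the symptom of modules never appearing in "show mod".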

Thanks,

Shankar

You can continue to have the vmknic (with capability l3control) communicate with the mgmt0 interface on the VSM. To use the control0 interface instead, do the following before subsequently configuring an IP address on it:

VSM(config-svs-domain)# svs mode L3 interface control0

I suppose you can stick with mgmt0 as the L3 interface unless you have specific requirements.
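If you did switch to control0, a rough sketch of the sequence would be as follows (the IP address on the control VLAN 201 subnet is an assumption for illustration):

```
svs-domain
  svs mode L3 interface control0
interface control0
  ip address 10.170.201.5/24
```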
