VSM cannot see the actions from vCenter

jiangcaixia
Level 1

Hi all,

I ran into a problem when installing Nexus 1000V on my ESX host. The VSM is successfully connected to the vCenter, and the status is:

acl-nexus(config)# show svs connections

connection vc:
    ip address: 10.117.8.38
    remote port: 80
    protocol: vmware-vim https
    certificate: default
    datacenter name: neuxs-acl
    DVS uuid: b9 f9 2f 50 7a 6f a4 ef-1e 55 f6 c2 9f 33 23 90
    config status: Enabled
    operational status: Connected
    sync status: Complete
    version: VMware vCenter Server 4.1.0 build-244350

All the operations, such as creating a port profile, can be pushed to vCenter. This means a port profile created on the VSM shows up in vCenter.

However, after I add a new uplink of one host to the port profile in vCenter, and vCenter shows its status turn green, I still can't see this interface with the "show port-profile name system-uplink" command on the VSM.

acl-nexus(config-port-prof)# show port-profile name system-uplink
port-profile system-uplink
  description:
  type: ethernet
  status: enabled
  capability l3control: no
  pinning control-vlan: -
  pinning packet-vlan: -
  system vlans: 3000,3002
  port-group: system-uplink
  max ports: -
  inherit:
  config attributes:
    switchport mode trunk
    switchport trunk allowed vlan all
    no shutdown
  evaluated config attributes:
    switchport mode trunk
    switchport trunk allowed vlan all
    no shutdown
  assigned interfaces:

The newly connected uplink does not show up under assigned interfaces.

So, what's the problem? Why can't the actions taken through vCenter be pushed to the VSM? The VEM is running successfully on the host.

Any ideas? Thanks in advance!

Thanks,

Caixia

12 Replies

Robert Burns
Cisco Employee

Can you paste the following outputs:


show ver

show run port-profile system-uplink

show int brief

Thanks,

Robert

Thanks, Robert.

The outputs are:

acl-nexus(config)# show ver
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2010, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php

Software
  loader:    version unavailable [last: image booted through mgmt0]
  kickstart: version 4.0(4)SV1(3) [build 4.0(4)SV1(2.175)]
  system:    version 4.0(4)SV1(3) [build 4.0(4)SV1(2.175)] [gdb]
  kickstart image file is:
  kickstart compile time:  2/24/2010 1:00:00
  system image file is:    bootflash:/nexus-1000v-mzg.4.0.4.SV1.2.175.bin
  system compile time:     2/24/2010 1:00:00 [02/24/2010 15:09:05]


Hardware
  Cisco Nexus 1000V Chassis ("Virtual Supervisor Module")
   with 1034668 kB of memory.
  Processor Board ID T5056AF0001

  Device name: acl-nexus
  bootflash:    2332296 kB

Kernel uptime is 0 day(s), 4 hour(s), 37 minute(s), 52 second(s)


plugin
  Core Plugin, Ethernet Plugin


acl-nexus(config)# show run por
port-profile    port-security
acl-nexus(config)# show run port-profile system-uplink
version 4.0(4)SV1(3)
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan all
  no shutdown
  system vlan 3000,3002
  state enabled

acl-nexus(config)# show int brief

--------------------------------------------------------------------------------
Port     VRF          Status IP Address                     Speed    MTU
--------------------------------------------------------------------------------
mgmt0     --           up     10.117.9.186                   1000     1500

--------------------------------------------------------------------------------
Port     VRF          Status IP Address                     Speed    MTU
--------------------------------------------------------------------------------
control0  --           up     --                            1000     1500

acl-nexus(config)# show svs domain
SVS domain config:
  Domain id:    140
  Control vlan: 3000
  Packet vlan:  3002
  L2/L3 Control mode: L2
  L3 control interface:  NA
  Status: Config push to VC successful.

Best,

Caixia

How are you installing the VEM software: with Update Manager or manually?

If you're able to add the host to the DVS in vCenter but it doesn't show up on the VSM, you have a connectivity issue.

Have you gone through the troubleshooting guide to isolate your problem?

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3/troubleshooting/configuration/guide/n1000v_trouble.html


There are many posts from people with the same issue: they add their hosts to the DVS fine in vCenter, but the hosts fail to show up on the VSM. Nine times out of ten it's an issue with the Control VLAN. Either the Control VLAN hasn't been created on some of the upstream switches or it's being blocked somewhere along the way. The troubleshooting guide will help you determine what the problem is.
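
As a quick sanity check along those lines (the VLAN IDs and the VSM prompt come from your earlier outputs; exact commands differ between upstream switch platforms, so treat this as a rough sketch):

On each upstream switch:

show vlan brief          (confirm VLANs 3000 and 3002 exist)
show interfaces trunk    (confirm both VLANs are allowed on the trunk towards the host)

On the VSM:

acl-nexus# show module   (a VEM that can reach the VSM over the Control VLAN registers here as a module; if it never appears, the control path is broken somewhere)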

Have a read through some of these posts and see if they help:

https://communities.cisco.com/message/14655#14655

https://communities.cisco.com/message/51134#51134

If you're still stuck after reviewing the above guides & posts let me know,

Robert

Thanks, Robert,

I have read the troubleshooting guide and the posts, but I still have problems. My environment is a bit different from others. I don't need a physical switch: I want to install the VEM and the VSM on the same host and just keep these two communicating with each other. I don't need other VEMs, so I think the upstream switch configuration shouldn't matter.

I have configured three port groups on the vSwitch on the host: a Control port group with VLAN ID 3000, a Packet port group with VLAN ID 3002, and VM Network with no VLAN ID.
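
(For reference, those port groups can be created or checked from the ESX service console with esxcfg-vswitch, roughly like this; the port-group names are just the ones I described above:

esxcfg-vswitch -A Control vSwitch0
esxcfg-vswitch -v 3000 -p Control vSwitch0
esxcfg-vswitch -A Packet vSwitch0
esxcfg-vswitch -v 3002 -p Packet vSwitch0
esxcfg-vswitch -l

The last command lists the vSwitch and its port groups so the VLAN IDs can be verified.)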

Then I created a VSM VM on this host, and the VSM can connect to vCenter successfully. Most of the information about the VSM and VEM has been pasted above.

I installed the VEM manually and I can see the VEM is running on the host.

[root@localhost ~]# vem status

VEM modules are loaded

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         6           128               1500    vmnic0
DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks
acl-nexus        256         50          256               1500    vmnic9

VEM Agent (vemdpa) is running

So, what are your suggestions? I actually configured such an environment successfully a while ago, but now I can't get it working, so something must be wrong. I would really appreciate your help.

Thanks,

Caixia

Caixia,

The problem is that we are trying to pass all control and packet traffic over vmnic9. Without any physical switch, vmnic9 is not seeing any traffic, hence the VSM and VEM will never talk. Trying to get the N1KV to work without any physical switch is very difficult. You can't create a standalone DVS like you can a vSwitch; it kind of goes against the whole grain of calling it a distributed virtual switch.

If you want to run everything on one host you can try running ESX on ESX. While we and VMware don't support that, I know some customers have been successful in getting it to work for testing in a lab.

louis

Thanks louis,

Yes, it doesn't behave like a real DVS. But I think that since the VSM and VEM are both installed on the same host, the communication between them shouldn't need to go outside through vmnic9; it should go through the vSwitch of the ESX host itself. So I think the outside environment is not important. And I have configured such an environment before, including running ESX on ESX to add those two to the Nexus 1000V, as you pointed out.

So I think something may be missing, but I can't figure out what.

Thanks,

Caixia

Caixia,


Here is the problem. You have the VSM on a vSwitch with vmnic0 as uplink. That's fine and normal. You are adding the same ESX host to the N1KV DVS with vmnic9 as uplink. The VEM module needs to communicate with the VSM's control port-group on the vSwitch. When you add vmnic9 to the N1KV DVS, we shove traffic through that port for the VEM and VSM to communicate. Since you have no physical switch, those packets are just dropping on the line.

You need to find a way to get the vSwitch and the DVS to talk to each other. Even if you find a way to do it internally, I'm not sure we even look for control packets on anything but a physical NIC. I really don't think this will work at all unless you do ESX on ESX or add a physical switch into the mix.
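
One way to see that symptom with the commands used elsewhere in this thread: on the VSM, "show module" will only list the supervisor and "show svs neighbors" will learn no VEM entry, while "vem status" on the host still reports the VEM as loaded. The VEM itself is fine; its control traffic is just being dropped at vmnic9.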

Louis

Thanks, Louis, I get your point.

I thought the VEM could communicate with the VSM through vSwitch0. Now I think what you mean is that control traffic sent from the VSM goes out vmnic0 (vSwitch0) first, then comes back in through vmnic9, and then reaches the VEM, since the VEM is bound to the Nexus 1000V DVS. So the communication needs the upstream switch's support. Am I right?

And I have another question about vemcmd show card:

[root@localhost ~]# vemcmd show card
Card UUID type  2: 44454c4c-4300-1051-8047-b5c04f4e3258
Card name:
Switch name: Nexus1000v
Switch alias: DvsPortset-0
Switch uuid: 4e a6 29 50 0c e7 db 96-7c 24 74 0b d3 c1 e2 56
Card domain: 141
Card slot: 3
VEM Tunnel Mode: L2 Mode
VEM Control (AIPC) MAC: 00:02:3d:10:8d:02
VEM Packet (Inband) MAC: 00:02:3d:20:8d:02
VEM Control Agent (DPA) MAC: 00:02:3d:40:8d:02
VEM SPAN MAC: 00:02:3d:30:8d:02

Management IPv4 address: 10.117.4.181
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Max physical ports: 32
Max virtual ports: 216
Card control VLAN: 3000
Card packet VLAN: 3002
       Processors: 8
  Processor Cores: 8
Processor Sockets: 1
  System Memory:   0

There are so many new MAC addresses. What's the difference between the AIPC and DPA MACs? They are both called control MACs. Are these MAC addresses bound to the Nexus 1000V DVS?

Thanks,

Caixia

Caixia,

There is no way for the VEM to communicate with a vSwitch without an upstream switch present. VMware vSwitches cannot communicate with each other internally in the hypervisor; they have to go upstream (you can bridge through a VM, but that still won't allow the VEM to see the VSM).

Now to your second question. Yes, it's confusing; there are a lot of MAC addresses. As a general rule of thumb, the MAC address of the VEM that you need to track down for connectivity is the VEM Control Agent (DPA) MAC: 00:02:3d:40:8d:02.

This is the MAC address you will see populated in your upstream switch's and the VSM's MAC address tables. This is the MAC you need to look for when troubleshooting connectivity issues.
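
For example, on a Catalyst-style upstream switch you could chase that MAC in the control VLAN with something along these lines (exact syntax varies by platform; the MAC and VLAN are the ones from this thread):

show mac address-table vlan 3000
show mac address-table address 0002.3d40.8d02

If the DPA MAC never shows up in VLAN 3000 along the path between the host and the VSM, the control traffic is getting dropped somewhere on that path.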

Now to find the MAC on the VSM, run show svs neighbors

n1000v-MV# show svs neighbors

Active Domain ID: 1254

AIPC Interface MAC: 0050-56b1-3b90
Inband Interface MAC: 0050-56b1-198c

Src MAC           Type   Domain-id    Node-id     Last learnt (Sec. ago)
------------------------------------------------------------------------

0050-569c-116f     VSM      1254         0101     1191398.95

You want the AIPC Interface MAC. This is the MAC that should match what is displayed in vCenter and what will show up in your upstream switch. Again, the other MACs are used internally.

Hope this helps

louis

Louis,

Thanks for your detailed explanation. But I'm sorry, I still have some trouble understanding what the AIPC MAC address is used for. We can see an AIPC MAC address through the "vemcmd show card" command on the host, and another AIPC Interface MAC through the "show svs neighbors" command on the VSM, but these two addresses are different. What are they used for? How can I see them in vCenter?

Thanks a lot!

Caixia

While the VSM and VEM make up one virtual switch, they are still two individual network components that need to talk to each other. Think of the N1KV as a chassis switch: the VSM is the supervisor module that handles the control plane, and the VEMs are the line cards that the servers plug into. In a normal switch the line cards and supervisor module talk via the backplane of the chassis.

The N1KV uses the network as the backplane, so the VEM and VSM need to be able to communicate with each other over the network to create a virtual backplane. Every network device needs a MAC address.

The AIPC Interface MAC from "show svs neighbors" is the MAC address of the VSM. The VEM Control Agent (DPA) MAC from "vemcmd show card" is the MAC address of the VEM. They are two distinct MAC addresses so the VEM and VSM can communicate with each other over the network to form that virtual backplane.

All the MAC addresses you see with "vemcmd show card" belong to the VEM. Some are internal, and the DPA MAC is the external-facing one.

You can see the MAC address of the VSM in the VI Client by right-clicking the VSM VM and choosing "Edit Settings". From there, click the control NIC and it will display the MAC address.


Hope that helps.

louis

Thanks very much, Louis.

I have learnt a lot!

Regards,

Caixia
