Nexus 1000v - VM fails to ping gateway

heri.sugianto
Level 1

Dear All,

I have an issue with the Nexus 1000v in a VMware environment.

If I try to ping the gateway from a VM, it fails. Pinging between VMs on different ESX hosts is OK.

Could anyone help me with this?

Br,

Heri

10 Replies

Robert Burns
Cisco Employee

Heri,

We'd love to help, but you'll need to provide more details about your environment and the issue.

Let's restate what we understand from your description:

Host1--VM1  <---- Can't Ping Gateway, Can Ping VM2

Host2--VM2  <---- Can Ping VM1

Without knowing your physical topology, we know that VM1 and VM2 can communicate (though you don't mention whether these VMs are on the same host or not).  Assuming they're in the same subnet/VLAN, we know Layer 2 connectivity exists.  If my understanding is incorrect, please correct me.  The Nexus 1000v is just a Layer 2 switch.  Unless you have ACLs blocking ICMP, the fact that these VMs can reach each other means they "should" be able to reach their gateway.  If not, it's possibly an ACL and/or a firewall between your hosts and the VMs, or one of your VEMs has incorrect programming.

Please answer the following questions.

1. You mentioned VM1 can't ping the gateway, but is VM2 (residing on a different host) able to?

2. You say pinging between VMs on different hosts works; should we assume that ping was actually between two VMs on the SAME host?

3. Can you describe your topology, server hardware, switches, routers, etc.?

4. For VM1, which can't ping the gateway, get the VEM # of its host (use "show mod" to map the ESX hostname <--> VEM #).  Then execute the following commands from your VSM:

module vem x execute vemcmd show port  // where "x" is the VEM #

module vem x execute vemcmd show dr y // where "x" is the VEM # and "y" is the VLAN VM1 is in.
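For example, assuming (hypothetically) that "show mod" reports VM1's host as VEM 3 and that VM1 sits in VLAN 13, the sequence on the VSM would look like this:

show module                              // match the ESX hostname to its VEM #

module vem 3 execute vemcmd show port    // port programming on that VEM

module vem 3 execute vemcmd show dr 13   // "13" being VM1's VLAN in this example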

Let's start here and see what you come back with.

Thanks,

Robert

Hi Robert,

Thanks for the response.

Attached are the files you requested, including the topology and the show outputs.

In my environment, none of the VMs can ping the gateway. There are no ACLs or firewalls involved.

The following table describes the ping tests and their results.

No | Local Device  | Location | Remote Device  | Location | Result
1  | VM#1 - Test   | ESX#1    | VM#2 - Test02  | ESX#2    | Fail
2  | VM#1 - Test   | ESX#1    | VM#2 - Test02  | ESX#1    | Success
3  | VM#1 - Test   | ESX#1    | Gateway        | --       | Fail
4  | VM#2 - Test   | ESX#2    | Gateway        | --       | Fail

All VMs are in the same subnet, as shown in the attached topology.

In this environment, I'm using a Cisco Catalyst 4900M as the gateway and HP blade servers with HP Virtual Connect (HPVC) technology.

I hope you can help me with this.

Br,

Heri

Nothing attached.  Please post the outputs requested again.

If you could also post the spreadsheet with your test results, that would help; the format didn't post correctly.

Can you also elaborate on your HPVC setup: are these single full 10G links, or are you segmenting the links (e.g. 2Gb, 4Gb, 6Gb, etc.)?

From your host, also post the following:

esxcfg-nics -l

"show running" from your VSM

Thanks,

Rob

Hi Robert,

Sorry about my previous reply; the formatting broke because I'm using Outlook.

You can find all the information you requested in the attached files.

The HPVC configuration uses segmentation, where each vNIC is assigned a specific amount of bandwidth.

If there is any other information you need to help me, please let me know.

Br,

Heri

Heri,

A couple of problems exist with your 1000v config.

1. No EtherChannel is configured.  Whenever you use multiple physical uplinks in the SAME uplink port profile, you must define a port channel; otherwise your pinning will get messed up, which it clearly is from the outputs you've supplied.

Fix: Add the following line to EACH "ethernet" uplink Port Profile: "channel-group auto mode on mac-pinning"

For more information please refer to the documentation:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3/interface/configuration/guide/n1000v_if_5portchannel.html#wp1265873
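As a minimal sketch (the profile name and VLAN list here are placeholders, not taken from your config), the line sits inside each Ethernet uplink profile like this:

port-profile type ethernet <uplink-profile-name>
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan <vlan list>
  channel-group auto mode on mac-pinning  <== add this line
  no shutdown
  state enabled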

2.  You have too many system vlans defined.  System vlans should ONLY be defined on port profiles for: Control, Packet, Management and IP storage (iSCSI or NFS).  Every other port profile should NOT have any system vlans defined.

In your case, these are the vEthernet port profiles and whether each should have a system vlan defined:

port-profile type vethernet FMgmt - NO
port-profile type vethernet FT - NO
port-profile type vethernet Internet-ASA - NO
port-profile type vethernet LTM-External - NO
port-profile type vethernet LTM-Internal - NO
port-profile type vethernet Unit - NO
port-profile type vethernet VMgmt - YES "system vlan 9"
port-profile type vethernet VSM-Control - YES "system vlan 100"
port-profile type vethernet VSM-Packet - YES "system vlan 110"
port-profile type vethernet vMotion - NO

Remove the "system vlan" config from all other port profiles I've listed above.
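For example, for the FT profile it would be something like this (a sketch; the same applies to the other profiles marked NO above, and the exact "no" form may vary slightly by 1000v release):

config t
port-profile type vethernet FT
  no system vlan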

3. Your Ethernet Uplink port profiles are a mess.


     a. You have two Ethernet Port Profiles with the same name "Internal".  Keep in mind you have to disconnect all adapters belonging to a Port Profile before removing it.  I suggest you create a new Ethernet Port Profile with a new name, migrate the adapters from the old one to the new one, and then remove the old one.

     b. Any single VLAN can ONLY be allowed on ONE uplink, not multiple; otherwise the VSM has no idea which uplink to use.

     c. For each system VLAN in your vEthernet port profiles, you need to also ensure that VLAN is defined as a system vlan on the corresponding uplink (Ethernet port profile).


port-profile type ethernet Internal
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 9
  no shutdown
  system vlan 9
  state enabled
port-profile type ethernet Internet
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,13
  no shutdown
  system vlan 10,13  <== Delete this line completely.  Neither VLAN should be a system vlan.
  state enabled


port-profile type ethernet UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan all <== This should be "switchport trunk allowed vlan except 9-10,13"
  no shutdown
  system vlan 9-14,16,100,110,201-202  <== This should be "system vlan 100,110"
  state enabled
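
Putting points 1 through 3 together, here's a sketch of how I'd expect the Internet and UPLINK profiles to end up (the Internal profile, once the duplicate-name issue from point 3a is sorted out, keeps "system vlan 9" and just gains the channel-group line from point 1):

port-profile type ethernet Internet
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,13
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

port-profile type ethernet UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan except 9-10,13
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 100,110
  state enabled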

Once you've corrected the above, you'll be in better shape.

Regards,

Robert

Hi Robert,

Many thanks for your help. I can now ping the gateway.

But I have another problem: when I test high availability by shutting down one of the active links on the core switch, traffic from the VMs is still pinned to that link (which in this condition should be down). Could you help me again with this new issue?

Also, if you have a document on how to configure the Nexus 1000v with HP VC, could you please share it with me?

I found one post ( https://communities.cisco.com/community/products/nexus1000v/blog/2009/08/27/cisco-nexus-1000v-supports-hp-virtual-connect ), but I could not download the presentation file from that URL.

Br,

Heri

Glad to help Heri.

I've asked the product manager to re-post that doc.

In short, you want to ensure the VC is configured with "SmartLink".  This will bring down the server-facing links if/when all uplinks fail, so your 1000v can fail over as required.

Check to ensure this option is enabled in the VC configuration.

These links might be useful for some of the configuration.

http://blog.michaelfmcnamara.com/2009/08/hp-virtual-connect-smart-link/

http://blog.scottlowe.org/2009/07/09/using-multiple-vlans-with-hp-virtual-connect-flex-10/

Robert

Hi Robert,

I am Heri's partner. I've verified that SmartLink is enabled in the VC.

I've added "channel-group auto mode on mac-pinning" to each uplink.

But when I shut down the active link, the Nexus still does not fail over.

Br,

Vendy

You'll need to explain what you're referring to by "failover".

Are you talking about Primary/Standby VSM or failover in regards to VEM uplinks?

In terms of a VEM uplink failing VM traffic over from one uplink to another, the link has to actually be detected as "down" on the host.  If you're just shutting down your Virtual Connect uplinks on the upstream switch, you might not accomplish anything (unless SmartLink is working as expected).

What you can check: after you shut down all the uplinks from your Virtual Connect module to the core switch, see whether the ESX host reports the adapters as down:


esxcfg-nics -l

If all the vmnics still show as "up", then you have to work with HP to figure out why SmartLink isn't downing the server-facing ports.

If the vmnic is showing as "down", gather the following outputs from the CLI of the VEM host for me:

vemcmd show port

vemcmd show pinning

vemcmd show pc

Thanks,

Robert

Yeah, I mean failover to another uplink.

Name    PCI           Driver      Link Speed     Duplex MAC Address       MTU    Description
vmnic2  0000:02:00.02 bnx2x       Up   5000Mbps  Full   1c:c1:de:17:28:8d 1500   Broadcom Corporation NetXtreme II 57711E/NC532i 10Gigabit Ethernet
vmnic3  0000:02:00.03 bnx2x       Up   5000Mbps  Full   1c:c1:de:17:28:91 1500   Broadcom Corporation NetXtreme II 57711E/NC532i 10Gigabit Ethernet
vmnic4  0000:02:00.04 bnx2x       Up   3000Mbps  Full   1c:c1:de:17:28:8e 1500   Broadcom Corporation NetXtreme II 57711E/NC532i 10Gigabit Ethernet
vmnic5  0000:02:00.05 bnx2x       Up   3000Mbps  Full   1c:c1:de:17:28:92 1500   Broadcom Corporation NetXtreme II 57711E/NC532i 10Gigabit Ethernet
vmnic6  0000:02:00.06 bnx2x       Up   1500Mbps  Full   1c:c1:de:17:28:8f 1500   Broadcom Corporation NetXtreme II 57711E/NC532i 10Gigabit Ethernet
vmnic7  0000:02:00.07 bnx2x       Up   1500Mbps  Full   1c:c1:de:17:28:93 1500   Broadcom Corporation NetXtreme II 57711E/NC532i 10Gigabit Ethernet
[root@cyb2-esx-01 ~]#

All the vmnics show as "up", and the Nexus interfaces also show as up/up.

CYB2-SWN1K-01# sho int stat

--------------------------------------------------------------------------------
Port           Name               Status   Vlan      Duplex  Speed   Type
--------------------------------------------------------------------------------
mgmt0          --                 up       routed    full    1000    --        
Eth3/3         --                 up       trunk     half    10      --        
Eth3/4         --                 up       trunk     half    10      --        
Eth3/5         --                 up       trunk     half    10      --        
Eth3/6         --                 up       trunk     half    10      --        
Eth3/7         --                 up       trunk     half    10      --        
Eth3/8         --                 up       trunk     half    10      --       

Br,

Vendy
