NFS shows inactive but iSCSI works

wpondervmware
Level 1

From a previous post I have all the basic stuff working. Currently I have the following:

ESX host A - running the VSM. Management, control, and packet are all on VLAN 5.

ESX host B - not really relevant here.

ESX hosts C and D both have the VEM installed. Each has 4 pNICs. One is on a vSwitch at the moment for management and vMotion; management is VLAN 1, vMotion is VLAN 10.

Another NIC is dedicated to VM traffic.

Another NIC is dedicated to iSCSI/NFS. There are two separate storage arrays, both serving NFS and iSCSI. At the moment I am trying to move each host's iSCSI/NFS access to the VEM. Before I started, the storage was set up as a port-based VLAN.

The IP address of each vmkernel interface is:

192.168.2.15 ESX host C

192.168.2.20 ESX host D

The storage VLAN is VLAN 3.

The NAS appliance is 192.168.2.6 and has both NFS and iSCSI storage available on that interface.
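
For reference, this is roughly how I check the vmkernel side from each host (classic ESX service console; vmkping sends from the vmkernel stack, which is what the NFS and iSCSI traffic uses):

esxcfg-vmknic -l
vmkping 192.168.2.6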

The VSM / VEM config is as follows.

port-profile storage-net
  vmware port-group
  switchport mode access
  switchport access vlan 3
  no shutdown
  state enabled


port-profile storage-uplink
  capability uplink
  vmware port-group
  switchport mode trunk
  switchport access vlan 3
  switchport trunk allowed vlan 3
  no shutdown
  system vlan 3
  state enabled
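
To sanity-check what the VSM thinks is applied, something like the following shows each profile's settings and which interfaces inherited them (exact output varies by N1KV release):

show port-profile name storage-uplink
show port-profile name storage-net
show port-profile usage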

The pNICs of the ESX hosts and the NAS appliance are all part of a port-based VLAN on the upstream physical switch. When the NAS appliance's port is tagged at the physical switch, none of the hosts can access the NFS-based storage.

When the port is not tagged, all hosts except one, the one using the Nexus, can access the NFS storage. What is strange to me is why that host can access the iSCSI storage on the same NAS appliance/interface with no issue.

Thanks,

WP

P.S. Once it was set up, I migrated the old storage adapters connected to the vSwitch over to the VEM.

10 Replies

lwatta
Cisco Employee

WP,


As a point of reference: whenever you define a system VLAN on an uplink port-profile, don't forget to also define a system VLAN on the VM port-profile. So add

system vlan 3 to port-profile storage-net.

If I understand correctly, your physical switch is setting the VLAN on the pNIC to be VLAN 3? So it's an access port and not a trunk port? You want to make sure your uplink port-profile matches your physical switch port configuration. If that is the case, then port-profile storage-uplink needs to be an access port and not a trunk port.

I've never done it that way, but I think you would do the following:

port-profile storage-uplink
  capability uplink
  vmware port-group
  switchport mode access
  switchport access vlan 3
  no shutdown
  system vlan 3
  state enabled

port-profile storage-net
  vmware port-group
  switchport mode access
  switchport access vlan 3
  system vlan 3
  no shutdown
  state enabled

I'll play with it in my lab and let you know if it works or needs to be changed.

lwatta,

I read this last night and switched it to an access port, not a trunk port. I also did it since it will only be carrying one VLAN. I made two mistakes:

1. I keep having issues with spanning tree or something in the upstream switch, since I am slowly migrating rather than just cutting bait. I get intermittent outages when mixing tagged and non-tagged ports.

2. I changed it to an access port last night but did not copy running to startup. Rebooted this morning and, doh, changes lost.

I will rebuild the access link now.
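
Note to self, for mistake #2: the VSM is NX-OS, so nothing survives a reboot until it's saved:

copy running-config startup-config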

WP

WP,

I'm not sure what vendor your physical switch is, but I have seen issues with our switches where, if the physical switch ports to the ESX hosts are not set up right, spanning tree will block a port. In the Cisco world, check for loopguard:

show run |include loopguard

show spanning-tree vlan "control-vlanid"

and look for ports showing LOOP_Inc and ports that are disabled.

Disable loopguard, or set the individual switch ports for the VSM and VEM with:

spanning-tree bpdufilter enable
spanning-tree portfast
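
If you'd rather disable loopguard than filter BPDUs, it's something like this (the interface name is just an example; use your actual ESX-facing ports):

no spanning-tree loopguard default

or, per port:

interface GigabitEthernet0/1
  spanning-tree guard none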

Try to be consistent in the configuration of the virtual uplinks with the physical ports they are plugged into. If they mismatch, you are going to have problems.

I tested my configuration in the last post and it worked fine.

My physical switch port was set as

interface GigabitEthernet3/10
description rtp9-cae-esx-5-vmnic6
switchport
switchport access vlan 152
switchport mode access
no ip address
spanning-tree portfast trunk
spanning-tree bpdufilter enable

My N1KV port-profiles were

port-profile vm_vlan_152
  vmware port-group
  switchport mode access
  switchport access vlan 152
  no shutdown
  state enabled
port-profile uplink-access
  capability uplink
  vmware port-group
  switchport mode access
  switchport access vlan 152
  no shutdown
  state enabled

Thanks lwatta. Mine, unfortunately, is an Extreme switch at the moment. Will do some reconfiguring and report back.

WP

Looking good so far.

Set it back up as an access link and turned off tagging at the port. Both hosts C and D are moved over and all the storage is working.

I will watch it now.

WP

Ok, I got everything mostly settled down except the storage. I changed my uplink and port profiles to the following.

port-profile storage-uplink
  capability uplink
  vmware port-group
  switchport mode access
  switchport access vlan 3
  no shutdown
  state enabled
port-profile storage-net
  vmware port-group
  switchport mode access
  switchport access vlan 3
  no shutdown
  state enabled

My storage is still intermittently losing connectivity. Again, it only appears to be the NFS storage; iSCSI storage to the same arrays appears to be working fine and not disconnecting. This is only happening on the ESX hosts that have a VEM, where the NIC for the storage is connected through the VEM. The other hosts still using a vSwitch are fine.

Currently, all hosts and storage are connected to the same physical switch and in a port-based VLAN, i.e. none of the ports are tagged at the moment. None of the vSwitches or vmk interfaces on the other hosts are currently tagging either. I have tried with just the ports tagged where the VEM uplink NICs are, but that just blocks all the storage connectivity.
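
To see whether it's raw IP reachability dropping or something NFS-specific, I can leave a ping running from the storage vmknic on the affected host while it happens (classic ESX service console):

vmkping 192.168.2.6

If that stays clean while the NFS datastore shows inactive, the problem is probably not basic layer 2/3 connectivity.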

Here is the spanning tree config of the Extreme switch.

configure mstp region 000496349675
configure mstp revision 3
configure mstp format 0
configure stpd s0 mode dot1d
configure stpd s0 forwarddelay 15
configure stpd s0 hellotime 2
configure stpd s0 maxage 20
configure stpd s0 priority 32768
disable stpd s0 rapid-root-failover
configure stpd s0 default-encapsulation dot1d
enable stpd s0 auto-bind vlan Default
configure stpd s0 tag 0
enable stpd s0
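
If spanning tree on the Extreme side turns out to be the culprit, ExtremeXOS can treat the ESX- and NAS-facing ports as edge ports, or pull them out of the STP domain entirely. Syntax roughly as below; the port list 1-4 is a placeholder for the actual ports:

configure stpd s0 ports link-type edge 1-4
disable stpd s0 ports 1-4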

Specify VLAN 3 as a system VLAN in both the storage-uplink and storage-net profiles.

-Naren

Thanks Naren. I added system vlan 3 to both the uplink and the port group, and the storage still intermittently drops out.

port-profile storage-uplink
  capability uplink
  vmware port-group
  switchport mode access
  switchport access vlan 3
  no shutdown
  system vlan 3
  state enabled
port-profile storage-net
  vmware port-group
  switchport mode access
  switchport access vlan 3
  no shutdown
  system vlan 3
  state enabled
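
For what it's worth, checking that the change actually took can be done from the VSM; the output should list VLAN 3 as a system VLAN for each profile:

show port-profile name storage-uplink
show port-profile name storage-net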

Do you still have the setup in that state? We can have a WebEx session to take a look at the setup and find the root cause.

thanks,

Naren

I do. I had to roll back one of the two hosts for some other testing. I left the other one in place, and I see it is still intermittently losing connectivity to the storage. The host I rolled back to a vSwitch is fine now. Same port config, etc.

WP
