01-18-2021 03:19 PM
Situation: Trying to connect 3 ESXi hosts to two VLANs used for iSCSI traffic. I can ping the storage from the distro switch, but cannot ping the hosts.
I replaced the network adapters in the 3 hosts and moved the new interfaces over to a 10-gig switch, and have since encountered problems attaching the iSCSI volumes on the 3 hosts. VLANs 20 and 21 on the switches are used for iSCSI traffic, subnets 10.255.1.x and 10.255.2.x respectively. The default gateways (10.255.1.1 and 10.255.2.1) for both of these are on Gig0/0/0.20 and .21 of two routers in HSRP. The hosts can ping the default gateway, they can ping each other, and they can also ping the IP address of the iSCSI target. However, they cannot attach the volume in VMware, which reports that there was no connection to the network.
Since this is a new switch, I've pored over the config and can find nothing wrong: the iSCSI interfaces on the servers are trunk ports, and the storage devices are on access ports in the correct VLANs for their IP addresses. What makes me suspect a larger routing issue is that from the iSCSI distro switch I can ping 10.255.1.6 (the iSCSI target), but I cannot ping 10.255.1.12 or .14, which are two of the hosts (the third is offline for now). The vmkernels are addressed correctly, and all devices have the same 9000 MTU setting. To further confuse things, both gateways CAN ping all devices. This made me wonder if it was a gateway issue with the switches, but I did verify that the default gateway is set on the switches. So how can the distro switch ping the storage device but not the vmkernels?
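For reference, the software iSCSI setup can be inspected from the ESXi shell with commands like these (just a sketch; vmk and vmhba names differ per host):

esxcli network ip interface list    # vmkernel ports, their vSwitch and MTU
esxcli iscsi adapter list           # confirms the software iSCSI adapter is enabled
esxcli iscsi networkportal list     # shows which vmkernel ports are bound to the adapter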
01-18-2021 04:31 PM - edited 01-18-2021 04:39 PM
Hello
Disable any IP routing on the distro switch, as it sounds like it isn't required for any L3 routing.
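On IOS that looks like the following (prompt and gateway address are examples only):

switch# configure terminal
switch(config)# no ip routing
switch(config)# ip default-gateway 10.255.1.1
switch(config)# end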
01-18-2021 05:13 PM
Hello,
post the running config of the 10Gig switch.
So a vmkping works?
vmkping -I vmkN -s 8972 xxx.xxx.xxx.xxx
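If jumbo frames are in doubt, add the -d flag as well; it sets the don't-fragment bit, so an 8972-byte payload only gets through if the 9000-byte MTU works end to end:

vmkping -I vmkN -d -s 8972 xxx.xxx.xxx.xxx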
01-18-2021 05:24 PM
IP routing on the distro is disabled. Interestingly enough, I added a management IP to the VLAN 20 and 21 interfaces and now the switch can ping those addresses, but iSCSI still won't attach.
@Georg Pauwen the host can ping the storage device, and now, after adding an address to the VLAN 20 and 21 interfaces, it can also ping the vmkernels, but the iSCSI software adapter still cannot find the volumes.
[root@Enfield:~] vmkping -I vmk1 -s 8972 10.255.1.6
PING 10.255.1.6 (10.255.1.6): 8972 data bytes
8980 bytes from 10.255.1.6: icmp_seq=0 ttl=64 time=0.389 ms
8980 bytes from 10.255.1.6: icmp_seq=1 ttl=64 time=0.375 ms
8980 bytes from 10.255.1.6: icmp_seq=2 ttl=64 time=0.485 ms
--- 10.255.1.6 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.375/0.416/0.485 ms
Attached is the storage switch config.
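Since ping works but the volumes won't attach, the software adapter's port bindings can be checked and a rescan forced with something like this (a sketch, not verbatim from my hosts):

esxcli iscsi networkportal list            # vmkernel ports bound to the software adapter
esxcli storage core adapter rescan --all   # rescan all adapters for devices and VMFS volumes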
01-19-2021 12:35 AM
Hello
Remove the VLAN 21 SVI from the distro switch and move it to the L3 device that performs the inter-VLAN routing for the network. The distro switch is a host switch, not a routing device; you should then be able to attach the storage to the clients.
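In IOS the removal would be along these lines (a sketch covering both SVIs that were added):

switch(config)# no interface Vlan21
switch(config)# no interface Vlan20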
01-19-2021 06:48 AM
I removed the SVIs for VLANs 20 and 21 from the switch, but that made no change. One thing I did notice for the VLANs is that the MTU shows as 1500, while the system MTU is set to 9000.
============================================================================
VLAN Name Status Ports
---- -------------------------------- --------- -------------------------------
1 default active Te1/0/1, Te1/0/7
2 Network_Management active
20 iSCSI_1 active Te1/0/4, Te1/0/5
21 iSCSI_2 active Te1/0/10, Te1/0/11
1002 fddi-default act/unsup
1003 token-ring-default act/unsup
1004 fddinet-default act/unsup
1005 trnet-default act/unsup
VLAN Type SAID MTU Parent RingNo BridgeNo Stp BrdgMode Trans1 Trans2
---- ----- ---------- ----- ------ ------ -------- ---- -------- ------ ------
1 enet 100001 1500 - - - - - 0 0
2 enet 100002 1500 - - - - - 0 0
20 enet 100020 1500 - - - - - 0 0
21 enet 100021 1500 - - - - - 0 0
1002 fddi 101002 1500 - - - - - 0 0
1003 tr 101003 1500 - - - - - 0 0
1004 fdnet 101004 1500 - - - ieee - 0 0
1005 trnet 101005 1500 - - - ibm - 0 0
======================================================================
Archimedes# sh system mtu
Global Ethernet MTU is 9000 bytes.
All adapters on the hosts (vmkernel and vSwitch) and the QNAP storage network adapters are set to 9000 bytes for MTU. The previous interfaces (vmkernels associated with the quad 1-gig card) were configured the same (9000 MTU), connected to a 2960X with jumbo MTU set to 9000 as below.
Andover#sh system mtu
System MTU size is 1500 bytes
System Jumbo MTU size is 9000 bytes
System Alternate MTU size is 1500 bytes
Routing MTU size is 1500 bytes
Perhaps this issue is an incorrect MTU setting somewhere, since I see the VLANs showing an MTU of 1500.
01-19-2021 07:06 AM
Hello,
Maybe a topology diagram showing how everything is connected would help. In any case, make sure the switch is in no way involved in any routing. Only one SVI needs to be up, with an IP address, just for management. That IP address cannot be used for any sort of connectivity (e.g. as a default gateway somewhere).
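A minimal sketch of that management-only setup, using the Network_Management VLAN from your output (the addresses are placeholders):

interface Vlan2
 description Network_Management
 ip address 192.168.2.10 255.255.255.0
!
no ip routing
ip default-gateway 192.168.2.1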
01-19-2021 09:09 AM
Seems strange too: "sh vlan mtu" shows 9000, but "sh vlan" shows an MTU of 1500.
Archimedes#sh vlan mtu
VLAN SVI_MTU MinMTU(port) MaxMTU(port) MTU_Mismatch
---- ------------- ---------------- --------------- ------------
1 9000 9000 9000 No
2 9000 9000 9000 No
20 - 9000 9000 No
21 - 9000 9000 No
1002 - 9000 9000 No
1003 - 9000 9000 No
1004 - 9000 9000 No
1005 - 9000 9000 No
Archimedes#sh vlan
VLAN Name Status Ports
---- -------------------------------- --------- -------------------------------
1 default active Te1/0/1, Te1/0/7
2 Network_Management active
20 iSCSI_1 active Te1/0/4, Te1/0/5
21 iSCSI_2 active Te1/0/10, Te1/0/11
1002 fddi-default act/unsup
1003 token-ring-default act/unsup
1004 fddinet-default act/unsup
1005 trnet-default act/unsup
VLAN Type SAID MTU Parent RingNo BridgeNo Stp BrdgMode Trans1 Trans2
---- ----- ---------- ----- ------ ------ -------- ---- -------- ------ ------
1 enet 100001 1500 - - - - - 0 0
2 enet 100002 1500 - - - - - 0 0
20 enet 100020 1500 - - - - - 0 0
21 enet 100021 1500 - - - - - 0 0
1002 fddi 101002 1500 - - - - - 0 0
1003 tr 101003 1500 - - - - - 0 0
1004 fdnet 101004 1500 - - - ieee - 0 0
1005 trnet 101005 1500 - - - ibm - 0 0
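As far as I can tell, the 1500 in the plain "sh vlan" output is a legacy per-VLAN field that the switch does not apply to Ethernet frames; the values in "sh vlan mtu" and on the ports themselves are what count. The per-port value can be confirmed with, e.g.:

Archimedes# sh interface Te1/0/4 | include MTU

which should report MTU 9000 bytes on these ports.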
I'm just grasping at straws here. It's strange that the hosts cannot "see" the datastores on the iSCSI targets now, but can ping those devices.
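One more thing worth checking from the host side: ping only proves L3 reachability, not that the iSCSI login itself succeeds, so the discovery targets and sessions are worth a look (the adapter name here is a guess):

esxcli iscsi adapter discovery sendtarget list --adapter=vmhba64   # configured dynamic discovery targets
esxcli iscsi session list                                          # active iSCSI sessions, if any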