1801 Views · 0 Helpful · 9 Replies

ESXi Active/Active NIC config issues

Steven Williams
Level 4

It's been a while since I've run into an issue like this. When I enable both NICs on an ESXi host as active/active, vMotion and other traffic is broken or flaky on that host. I drew up a quick diagram. I think the fix is that I need a vPC from the 5Ks down to the chassis, but I'm not sure. Is the problem that, with dual active NICs, the blade sends traffic up both separate port-channels and the return traffic doesn't come back the same way?
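
For what it's worth, this is roughly how I've been checking the teaming setup from the ESXi shell (vSwitch0 and the vMotion port group name are just examples here, substitute your own):

~ # esxcfg-vswitch -l
~ # esxcli network vswitch standard policy failover get -v vSwitch0
~ # esxcli network vswitch standard portgroup policy failover get -p vMotion

That shows the load-balancing mode plus which uplinks are active and which are standby at the vSwitch and port group level.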

 

 

9 Replies

Walter Dey
VIP Alumni

Best practice is indeed to build two vPCs from each fabric interconnect to the northbound N5Ks.

What load-balancing algorithm are you using on the vSwitch (default = route based on originating virtual port ID)?

It could be that the initiator sends traffic on the interface connected to fabric A while the receiver's interface is on fabric B (same VLAN, so pure L2 traffic). In that case the traffic has to exit the UCS domain, get L2-switched upstream, and re-enter the UCS domain on the other fabric.

Without vPC, the link between the two N5Ks could become a bottleneck.
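
For reference, the N5K side of such a vPC would look roughly like this; the domain ID, port-channel numbers, and peer-keepalive addresses are placeholders, not taken from your environment, and the second N5K gets the mirror-image config:

feature vpc
feature lacp
!
vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1
!
interface port-channel10
  description vPC peer-link to the other N5K
  switchport mode trunk
  vpc peer-link
!
interface port-channel20
  description vPC member channel toward the chassis
  switchport mode trunk
  vpc 20
!
interface Ethernet1/20
  description downlink to the chassis switch
  switchport mode trunk
  channel-group 20 mode active

With both N5Ks presenting port-channel 20 as one vPC, the chassis side can run a single LACP bundle across both upstream switches and the return path no longer depends on the inter-switch link.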

Sorry, forgot to mention a few things... It's not UCS, it's an IBM BladeCenter with Cisco ESMs in the chassis. I tried changing the LB method, but vMotion stops at 14 percent every time.

 

I think the LB method matters; originating port ID is a good choice.

Did you try to bind the vMotion vmkernel interface to the same fabric on both hosts? (See the sketch after these questions.)

Does vmkernel ping work OK?

Are the port groups identical on both hosts?
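
Here is a minimal sketch of pinning the vMotion port group to one uplink and leaving the other as standby (port group and vmnic names are examples; use the ones from your hosts):

~ # esxcli network vswitch standard portgroup policy failover set -p vMotion --active-uplinks vmnic0 --standby-uplinks vmnic1
~ # esxcli network vswitch standard portgroup policy failover get -p vMotion

Done the same way on both hosts, with the same uplink active, the vMotion traffic stays on one fabric and never has to cross between the ESMs/N5Ks.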

What exactly are you referring to as a "fabric" in this scenario? The 5Ks or the ESMs in the chassis?

VMkernel ping does not work for the vMotion kernel port.

 

ESM and/or N5K; are the blades dual-homed, i.e. connected to both fabric A and fabric B?

Regarding VMkernel ping, see

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003728

If that doesn't work, you have an IP connectivity problem.

Are both ESXi hosts in the same VLAN and IP subnet?

Can you post the port groups?

Is vMotion enabled on the vmkernel interface?
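
A quick way to collect all of that from the ESXi shell on both hosts (output will of course differ per host):

~ # esxcfg-vmknic -l
~ # esxcli network ip interface list
~ # esxcli network ip interface ipv4 get
~ # esxcli network vswitch standard portgroup list

That shows each vmk interface, its IP and netmask, the port group and VLAN it sits on, and which vSwitch it hangs off, so a subnet or VLAN mismatch should jump out.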

Each ESM has an LACP port-channel to its upstream 5K: ESM01 port-channels to 5K01 and ESM02 port-channels to 5K02 (rough config sketch below).

 

All ESXi hosts are configured the same way.

vMotion is enabled on the kernel port.
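
The ESM side of each uplink is along these lines (IOS; I've substituted placeholder port-channel/interface numbers and VLAN IDs, with 248 standing in for the vMotion VLAN):

interface Port-channel1
 description LACP uplink to N5K01
 switchport mode trunk
 switchport trunk allowed vlan 10,20,248
!
interface range GigabitEthernet0/17 - 20
 description external uplink ports to N5K01
 switchport mode trunk
 switchport trunk allowed vlan 10,20,248
 channel-group 1 mode active

The key point being that the vMotion VLAN has to be allowed on the trunk at both the ESM and the 5K end.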

But you cannot ping the vMotion vmkernel interface between the two ESXi hosts?

Is the vMotion VLAN defined on all the uplink interfaces?

Do you have a VLAN trunk carrying all the VLANs between the two N5Ks?
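
On the N5K side, these show whether the vMotion VLAN exists and is actually forwarding on the relevant trunks (VLAN 248 and the port-channel numbers are placeholders, not from your setup):

show vlan id 248
show interface port-channel 20 trunk
show spanning-tree vlan 248

If the VLAN is missing from the allowed list on any hop between the two blades, the vmkernel ping will fail exactly the way you describe.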

Well, assuming this is the right method, I cannot ping any vMotion IP from one host to another.

 


This system has been migrated to ESXi 5.1.0.
~ # vmkping 192.168.248.226
PING 192.168.248.226 (192.168.248.226): 56 data bytes
64 bytes from 192.168.248.226: icmp_seq=0 ttl=64 time=0.058 ms
64 bytes from 192.168.248.226: icmp_seq=1 ttl=64 time=0.209 ms
64 bytes from 192.168.248.226: icmp_seq=2 ttl=64 time=0.051 ms

--- 192.168.248.226 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.051/0.106/0.209 ms
~ # vmkping 192.168.248.225
PING 192.168.248.225 (192.168.248.225): 56 data bytes

--- 192.168.248.225 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
~ # vmkping 192.168.248.223
PING 192.168.248.223 (192.168.248.223): 56 data bytes

--- 192.168.248.223 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
~ # vmkping -I vmk0 192.168.248.223
PING 192.168.248.223 (192.168.248.223): 56 data bytes

--- 192.168.248.223 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
~ # vmkping -I vmk0 192.168.248.225
PING 192.168.248.225 (192.168.248.225): 56 data bytes

--- 192.168.248.225 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
~ #

 

 

I assume you have several vmkernel interfaces?

Therefore:

  1. In ESXi 5.1 and later, you can specify which vmkernel port to use for outgoing ICMP traffic with the -I option:

    # vmkping -I vmkX x.x.x.x
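
To figure out which vmkX owns the 192.168.248.x address (and therefore which one to pass to -I), the vmk-to-IP mapping helps; vmk1 below is only an example:

~ # esxcfg-vmknic -l
~ # vmkping -I vmk1 192.168.248.225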
