01-26-2017 04:35 PM - edited 03-01-2019 01:02 PM
Hi,
I have recently implemented UCS with dual fabrics and am running into a network issue. Basically, VMs with a vNIC on Fabric A cannot communicate with VMs with a vNIC on Fabric B (same VLAN). I will try to describe the environment and the testing performed below.
It appears that the network is functioning correctly on both FI-A and FI-B, but not between servers on FI-A and FI-B.
I have checked the upstream switch's MAC table and it contains the MAC addresses of all servers, as do the FIs, but for some reason it's still not working.
I believe the issue lies at the UCS layer, which is preventing the traffic from exiting the FI-A uplink and entering FI-B.
Any ideas would be greatly appreciated.
01-27-2017 03:15 AM
It seems to me that the problem is outside of UCS:
"Basically, VMs with a vNIC on Fabric A cannot communicate with VMs with a vNIC on Fabric B (same VLAN)."
Because FI-A cannot L2-switch to Fabric B, it has to send the frame northbound!
Are you sure that this particular VLAN is set up between the A-side and B-side northbound switches, so that frames can be L2-switched from Fabric A to Fabric B? This is mandatory; otherwise it won't work.
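For example, here is a minimal sketch of what should be present on each northbound Nexus; VLAN 100 and the port-channel numbers are placeholders for your environment:
vlan 100
  name VM-Data
interface port-channel10
  description vPC peer-link (example)
  switchport mode trunk
  switchport trunk allowed vlan add 100
  vpc peer-link
interface port-channel11
  description example uplink to FI-A
  switchport mode trunk
  switchport trunk allowed vlan add 100
  vpc 11
interface port-channel12
  description example uplink to FI-B
  switchport mode trunk
  switchport trunk allowed vlan add 100
  vpc 12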
01-27-2017 03:19 AM
Greetings.
Your tests seem to indicate that the various guest VM MACs are being correctly learned upstream, since you can reach the default gateway and the ping responses make it back.
Some additional things to check:
If each FI is connected to both N3Ks, and assuming your vNIC-A <> vNIC-B communication is in the same VLAN, then the switching should occur on a single N3K...
I'm not sure your problem resides in the UCSM.
Thanks,
Kirk
01-28-2017 08:32 PM
Thanks for the replies.
I tested against an IP on the same VLAN that is not on either chassis (a NAS server connected to the N3Ks). VMs on each fabric can reach it, so the VLAN appears to be configured correctly based on this test. Traffic on this VLAN works fine on the N3Ks from both FI-A and FI-B. Traffic only fails on the path FI-A > N3Ks > FI-B and vice versa.
02-09-2017 10:33 PM
Hi Robert
Is there any update from you? How did you get it resolved?
Thanks much
02-10-2017 03:34 AM
It seems like you may have some kind of issue with the vPC and the port-channels on the N3Ks facing the FIs.
Please list the following for each N3K:
#show run int ethx/y (for any FI connected ports)
#show vpc brief
#show vpc
#show port-channel summary
#show vpc consistency-parameters
It would be helpful if you could update your topology diagram to include the N3K ports that are connected to the FIs and indicate which FI and FI port each link connects to, along with the VLAN for the guest VMs.
Then, during a continuous ping test from a guest VM pinned to FI-A to a guest VM pinned to FI-B, list the following:
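For example (assuming the guest VM VLAN is 100 and aaaa.bbbb.cccc is the MAC of the VM behind FI-B; substitute your own values):
#show mac address-table vlan 100
#show mac address-table address aaaa.bbbb.cccc
#show interface port-channel x counters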
Thanks,
Kirk
02-10-2017 03:34 AM
Hi Kirk
I double-checked: the issue in my case was that our engineer forgot to configure disjoint Layer 2.
It works now that we have configured disjoint Layer 2 correctly.
We use physical servers (bare metal), not a VM environment, so I am not sure about your case.
But normally, in my experience with vPC and VMware, we can use IP hash on the vSwitch.
Thanks
02-10-2017 04:11 AM
Greetings.
Robert has confirmed that they don't have a disjoint L2 config or requirement.
UCSM in End-Host Mode (EHM) does not work with IP hash on the blade hosts because the FIs are not directly connected to each other and would not be able to bring up the other end of the port-channel.
Outside of UCSM, the UCS rack servers can accommodate IP hash with a correctly configured switch.
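For example, on a blade host behind the FIs you would typically leave the vSwitch on the default route-based-on-originating-virtual-port policy rather than IP hash. A rough sketch with esxcli, where vSwitch0 is a placeholder name:
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
On a rack server uplinked to a properly configured port-channel/vPC, --load-balancing=iphash would be the corresponding setting.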
Thanks,
Kirk...
02-21-2017 12:23 PM
Hi all,
I got a TAC case going with the Cisco team and we identified the issue.
For those interested, this is a bug in the Fabric Interconnects running releases 3.1(1e) and 3.1(1g).
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCuz46574/?reffering_site=dumpcr
Thanks to all who assisted.
-Robert.