
ACI integration with UCS

joeharb
Level 5

We are migrating services to UCS and deploying ACI. We have had reports of a service (VDI) having issues after being moved to a UCS blade. Looking into ACI, I can see that one of the endpoints is consistently transitioning between attached and detached across the two FI uplinks (see attached image).

 

I also noticed from the APIC that the endpoint was seen from both FIs at the same time (I'm not sure whether I executed the command during a transition):

 

padapic1# show endpoints mac 00:50:56:A9:CF:49
Legends:
(P):Primary VLAN
(S):Secondary VLAN


Dynamic Endpoints:
Tenant : CORE
Application : PAD_VDI
AEPg : PAD_VDI_80

End Point MAC        IP Address   Source    Node      Interface              Encap     Multicast Address   Create TS
-----------------    ----------   -------   -------   --------------------   -------   -----------------   ---------
00:50:56:A9:CF:49    ---          learned   101 102   vpc DC_OPS_CORE_FI_A   vlan-80   not-applicable      ---
00:50:56:A9:CF:49    ---          learned   101 102   vpc DC_OPS_CORE_FI_B   vlan-80   not-applicable      ---

Total Dynamic Endpoints: 2
Total Static Endpoints: 0
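For reference, the same endpoint can also be queried from the object model, which makes it easy to re-run the lookup and compare which vPC is reporting the MAC over time (a sketch; the moquery class/filter syntax here is my best guess and worth double-checking):

padapic1# moquery -c fvCEp -f 'fv.CEp.mac=="00:50:56:A9:CF:49"'

If the learned path keeps alternating between DC_OPS_CORE_FI_A and DC_OPS_CORE_FI_B across runs, that matches the attach/detach transitions seen in the GUI.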

 

I have checked other endpoints (not behind the same UCS) and do not see the same behaviour.

 

Any help would be greatly appreciated,

 

Thanks,

 

Joe
4 Replies

Sergiu.Daniluk
VIP Alumni

Hi @joeharb 

It looks like the VDI host is actively using both vPCs, DC_OPS_CORE_FI_A and DC_OPS_CORE_FI_B, when sending traffic upstream.

You will need to check on your hypervisor whether both vmnics (in the case of ESXi) are being used as active uplinks.

You should change to Active/Standby.
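A quick way to check this from the host CLI (a sketch that assumes a standard vSwitch named vSwitch0; if the uplinks are on a distributed switch, the teaming policy is checked per port group in vCenter instead):

esxcli network vswitch standard policy failover get -v vSwitch0

The Load Balancing, Active Adapters and Standby Adapters fields in the output show which teaming policy is applied and whether both vmnics are currently active.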

 

Stay safe,

Sergiu

Sergiu is on the right track, but I wouldn't recommend changing any of your DVS uplinks to active/standby. The port channel configuration in your VMM vSwitch policy is the likely issue. If your vSwitch policy uses any aggregated channeling (LACP or static), you'll see this behaviour with UCS. The UCS FIs are not aggregation devices, so you can't treat them as such from a host uplink perspective. Instead, you should be using "MAC Pinning" as the vSwitch port channel policy for the interfaces facing UCS hypervisors. Here's a doc I put together that covers teaming options with ACI & UCS VMM; you're most likely interested in Topology 6.
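One way to spot-check whether the hosts are actually running LACP toward the FIs (a sketch; the lacp sub-namespace is only present on newer ESXi releases with a vSphere Distributed Switch, so treat the exact commands as an assumption to verify):

esxcli network vswitch dvs vmware lacp config get
esxcli network vswitch dvs vmware lacp status get

If LACP shows as enabled on the DVS uplinks facing UCS, that lines up with the dual-learning behaviour described above.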

Robert

joeharb
Level 5

We are not using any VMM integration at this point. The vPC to the UCS is configured with CDP/LLDP enabled and LACP active. Should we investigate the ESXi host or the UCS service profile for this?

 

Thanks,

 

Joe

Where you need to fix things is in vCenter. Ensure the uplink teaming on each Port Group is configured for "Route Based on Virtual Port ID"; then you should be fine from there. (You're likely using IP Hash currently.)
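If the port groups live on standard vSwitches, the same change can be made from the host CLI (a sketch; "VDI_PortGroup" is a placeholder name, and on a vSphere Distributed Switch the teaming policy is instead edited per port group in the vSphere client under Teaming and failover):

esxcli network vswitch standard portgroup policy failover set -p "VDI_PortGroup" --load-balancing=portid

Here portid corresponds to the "Route Based on Virtual Port ID" policy, replacing IP Hash, which would require a port channel on the upstream switch.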

(Attached image: UCS DVS Uplinks - No Teaming - No VMM.PNG)

Robert

 
