
UCS Manager 4.1 no VM-FEX support

remi-reszka2
Level 1

Hello everyone,

 

VM-FEX is no longer supported for VMware from UCS Manager 4.1 onwards. How can I achieve dynamic vNICs on UCS passed through directly to the virtual machine, bypassing the VMware virtual switch? Is the only option now to keep using the VMware vSphere Distributed Switch on the ESXi servers? What about users who run the VM-FEX feature on earlier versions of UCS Manager and VMware and want to upgrade? Do they lose the VM-FEX functionality? From the guides, you would previously register the UCS extension in VMware vCenter Server, then install the VEMs on the ESXi servers to get the integration, but now that no longer appears to be possible.

 

I am trying to integrate UCS Manager 4.1(3c) with VMware vCenter Server 7.0.2, and it looks like I can no longer register the UCS Manager extension in vCenter, so I cannot even integrate UCS with VMware. Is virtual machine management from within UCS Manager, and therefore VM-FEX, still possible, or is it gone entirely?

 

Thank you all in advance.

 

Best regards,

Remi

 

 

 

9 Replies

Kirk J
Cisco Employee

https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/release/notes/CiscoUCSManager-RN-4-1.html

"Beginning with Cisco UCS Manager Release 4.1(1), VM-FEX is only supported with Red Hat Enterprise Linux (RHEL) on KVM.VMware VM-FEX on ESX, Windows VM-FEX, and Hyper-V VM-FEX are no longer supported."

 

Unfortunately, your conclusion is correct. UCSM 4.1(1) or higher does not support VM-FEX with VMware.

 

Kirk...

Thanks, Kirk, for taking the time to comment on my post.

 

That being said, the VMware integration section within UCSM is no longer configurable; it is just sitting there. I guess it should have been removed from the menu to avoid confusion. So now the only option left is the Nexus 1000V, but we cannot achieve VMware DirectPath the way we could with VM-FEX. Since VMware no longer supports or permits third-party virtual switches, how can we bypass the VMware-integrated DVS? Is the only remaining option pass-through in VMware for certain VMs only? The dynamic vNICs in UCSM no longer have any use either, correct, since they were only used for VM-FEX?

I have been studying a bit of Cisco ACI, and Cisco says there will be a way to reach the VM directly from the fabric without having to use the VMware DVS; is that correct? How is that achieved? I cannot seem to find any technical paper on it. And finally, what is the use of Cisco VIC adapters on blade servers if VM-FEX can no longer be used? I hope this can be achieved with Cisco ACI.

 

Best regards,

Remy

 

 

          "And finally what is the use of Cisco VIC adapters on blade servers if VM-FEX can no longer be used?"

The majority of customers enjoy the programmability of the VICs/vNICs with service profiles.

In the case of UCSM-integrated UCS rack servers, you have a 2- or 4-port (even more for blades) VIC card running at 10/25/40 or even 100 Gb speeds, but it can logically present dozens of different PCIe NICs to the ESXi host, each with its own set of VLANs.
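As a rough illustration of how those logical NICs surface on the VMware side, the short pyVmomi sketch below lists the vmnics each ESXi host reports; the vCenter address and credentials are placeholders, and this is only a sketch, not anything from Cisco or VMware documentation.

# Sketch: list the physical NICs (vmnics) each ESXi host sees.
# On a UCS server with a VIC, each vNIC defined in the service profile
# appears here as a separate PCIe NIC. Hostname/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for pnic in host.config.network.pnic:
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0
            print(f"  {pnic.device}  driver={pnic.driver}  "
                  f"mac={pnic.mac}  link={speed} Mb")
    view.Destroy()
finally:
    Disconnect(si)

On a UCS blade with a VIC, every vNIC you define in the service profile shows up in that list as its own vmnic.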

In my own personal opinion, VM-FEX support got reduced due to a low adoption rate in combination with VMware killing off third-party dVS support. The vast majority of customers were/are able to use the standard dVS, or the ACI extension of it, and get the speeds/latencies they need without the added complexity that VM-FEX brought.

 

You are asking how to get a dVS style of management for a passthrough/SR-IOV solution while VMware disallows third-party dVS. The answer is, you can't if you are using vSphere. While you can get some NICs set up with passthrough/SR-IOV, without a dVS-style solution it is not a manageable, scalable option.

 

What specific hard minimum requirement for speed and latency are you unable to meet with a dVS layer presenting networking to the guest VMs?

 

Kirk...

Hi Kirk,

Thanks a lot for your comments. I understand the transition away from VM-FEX and all the advantages of the VIC cards. I am mostly thinking about larger VDI deployments, where the vDS has an impact on the CPU performance of the blade servers. VM-FEX was supposed to offload the CPUs and use the VIC entirely for network data processing. Going for passthrough/SR-IOV is not scalable at all; it means lots of manual work and administration. So I guess ACI, using its vDS plug-in on vSphere, should help offload the CPUs, or not really? Is there any documentation on that?

Regards,

Remy

pille1234
Level 3

Have a look at VMware and SR-IOV, a more standardized way to bypass the hypervisor/DVS for network I/O. I am, however, no expert on UCS, so you may need to do some research regarding UCS and Cisco VIC support for it.
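As a starting point for that research, the pyVmomi sketch below reports which PCI devices a host advertises as SR-IOV capable and how many virtual functions they support. The vCenter details are placeholders, and whether a VIC vNIC shows up here at all depends on the UCS adapter policy, so treat it as a rough check rather than a recipe.

# Sketch: report SR-IOV capability of each host's PCI devices via pyVmomi.
# vCenter address/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for info in host.config.pciPassthruInfo or []:
            # SR-IOV capable devices are reported as SriovInfo entries.
            if isinstance(info, vim.host.SriovInfo):
                print(f"  PCI {info.id}: sriovEnabled={info.sriovEnabled}, "
                      f"VFs configured={info.numVirtualFunction}, "
                      f"max VFs={info.maxVirtualFunctionSupported}")
    view.Destroy()
finally:
    Disconnect(si)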

 

ACI is not going to help you with that. There once was the AVS switch (the N1K for ACI), and now there is AVE as a replacement, but it still uses the DVS internally, as there is no other way to get to the VMs.

Thanks for your input. Yeah, I thought of that, but as Kirk mentioned it's not really a scalable solution for a larger VDI deployment. I'm also not sure how vMotion works with passthrough/SR-IOV on vSphere. As you say, even going for ACI we have to use the DVS, but I guess that is based on some Nexus 1000V virtual switch plug-in, correct?

Regards,

Remy

The switch plug-in was called AVS and was a very powerful and great solution. Unfortunately, VMware killed off any third-party switches, so AVE was created, which is nothing but a regular VM running on each ESX server. All VMs on a host can only talk to the AVE VM over host-local VLANs; only AVE has uplink ports to the outside world. If latency and high I/O are your concern, then you want to run away from this solution.

Even if this is not your issue, you still want to run away from AVE, as it is a terribly buggy piece of software. We had lots of outages due to it.

Thanks a lot for sharing your experience on that. I will take it into consideration. So what technology would you suggest using when working with VDI? Thanks again for all your input.

We use the regular VMware DVS with VMXNET3 interfaces for our Citrix-based VDI infrastructure, but honestly haven't given this much thought.

When we see latency-related problems, they are usually caused by our tiny MPLS WAN bandwidth between the offices and our DC.
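If it helps, attaching a VMXNET3 adapter to a DVS port group is also easy to script. The sketch below uses pyVmomi with placeholder VM, port group, and vCenter names, so treat it as an outline rather than a tested recipe.

# Sketch: add a VMXNET3 NIC backed by a DVS port group to an existing VM.
# VM name, port group name, and vCenter details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    # Return the first managed object of the given type with the given name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((o for o in view.view if o.name == name), None)
    finally:
        view.Destroy()

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    vm = find_obj(content, vim.VirtualMachine, "vdi-desktop-001")
    pg = find_obj(content, vim.dvs.DistributedVirtualPortgroup, "VDI-PortGroup")

    nic = vim.vm.device.VirtualVmxnet3()
    nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
    nic.backing.port = vim.dvs.PortConnection(
        portgroupKey=pg.key,
        switchUuid=pg.config.distributedVirtualSwitch.uuid)
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, allowGuestControl=True)

    spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)
    task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
    print("Reconfigure task started:", task.info.key)
finally:
    Disconnect(si)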

 

Just recently we started using SR-IOV in a new big data environment, where the virtual servers are really big in CPUs and memory, and the vendor says vMotion isn't needed or supported anyway because the software stack has internal failover handling.

 

 
