I am trying to move a virtual machine to a VLAN during or after orchestration. The "customer" VLAN does not have direct access to the hypervisor resources, so orchestration fails when the VM is placed on that VLAN, since it has no access. If it orchestrates on the management network, which does have access to the hypervisor resources, it works successfully and can then be moved manually to the customer VLAN. However, we need a way to move the newly created VM to the customer network either during or after orchestration, to make the process seamless for our internal and external resources.
We would be doing this for both VMware and Hyper-V, both of which are running the Nexus 1000v.
At the end of the day, all I am trying to accomplish is the following:
The end user (customer) logs into the self-service portal. (We will manually provision the vDC, the networking components, storage, etc.)
The end user clicks a shared catalog item to provision a server (let's say Windows).
UCS Director spins up the new virtual machine from the catalog, logically places it as a resource in the customer's vDC (which is currently working), and puts the new VM on the customer VLAN/port-profile (which is the part I am struggling with).
I just cannot figure out how to make this work, or what I am doing wrong. It appears the system is trying to orchestrate the new virtual machine directly on the customer VLAN/port-profile, which does not have access to the shared infrastructure. It does work correctly on the shared infrastructure network, but then it is a manual move to the customer VLAN/port-profile.
So imagine a customer already has 10 VMs being managed by UCS Director and now wants to add another 10 from the self-service portal.
Thank you so much for the script(s); that would be the next step of what I would like to do once I get the virtual machine to orchestrate on the customer network.
Basically, the problem I am having is this: when I orchestrate a virtual machine, UCS Director puts it on the customer's VLAN/port-profile pulled from the vDC (which is great). The issue is that the customer/tenant VLAN/port-profile has no access to the shared infrastructure, and thus the orchestration fails. If I leave the VM on the management network, it orchestrates correctly, but the server is not usable since it sits on a shared management network. I would think that a multi-tenant solution needs to be able to orchestrate on the management network and then place the orchestrated VM onto the customer/tenant VLAN/port-profile (as it did when it failed). I guess I just need a way to either orchestrate correctly on the tenant network, or move the virtual machine to the customer VLAN/port-profile post-orchestration.
I would imagine it would be a script of some sort, or I am simply orchestrating incorrectly. However, I would expect that for security reasons, customer/tenant VLANs/port-profiles should not have, nor need, direct access to shared resources, i.e. the VMM, vCenter, hypervisors, etc.
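The two-phase approach described above (provision on the management network, then re-home the vNIC to the tenant port-profile once orchestration finishes) could be sketched as a post-provisioning step. This is only a hypothetical illustration, not a UCS Director API: the `move_nic` hook, the function names, and the port-profile names are all stand-ins for whatever your environment actually exposes (for example a PowerCLI `Set-NetworkAdapter` call on the VMware side, or `Set-VMNetworkAdapterVlan` on the Hyper-V side).

```python
# Hypothetical sketch of the post-orchestration "re-home" step:
# the VM is provisioned on the management port-profile (which can
# reach vCenter/SCVMM), then every vNIC still on that profile is
# moved to the tenant's port-profile. None of these names are a
# real UCS Director or vendor API.

MGMT_PROFILE = "mgmt-uplink"  # assumed management port-profile name


def rehome_vm(vm, tenant_profile, move_nic):
    """Move every vNIC still on the management port-profile to the
    tenant port-profile; return the list of NICs that were moved.

    move_nic(vm, nic, profile) is the environment-specific hook
    (PowerCLI, pyVmomi, an SCVMM cmdlet, a CloupiaScript task, ...).
    """
    moved = []
    for nic, profile in list(vm["nics"].items()):
        if profile == MGMT_PROFILE:
            move_nic(vm, nic, tenant_profile)   # real call happens here
            vm["nics"][nic] = tenant_profile    # record the new mapping
            moved.append(nic)
    return moved


# Dry run with a no-op hook, just to show the intended end state:
vm = {"name": "cust01-web01", "nics": {"nic0": "mgmt-uplink"}}
moved = rehome_vm(vm, "tenant-cust01-vlan110", lambda v, n, p: None)
print(moved)               # ['nic0']
print(vm["nics"]["nic0"])  # tenant-cust01-vlan110
```

The point of the sketch is the ordering: the hook only runs after provisioning and guest customization complete, so the orchestration itself always happens on a network that can reach the hypervisor management plane.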