Please refer to the links below for licensing information on ICF:
Intercloud Fabric will be available through an annual subscription for a pack of "hybrid ports" bought in advance of use. There is no perpetual license.
Additional Hybrid Port capacity can be purchased any time within the year and will co-term with the initial purchase subscription term.
A "Hybrid Port" is considered to be an active, running virtual machine, irrespective of VM size, deployed in a public cloud environment through Intercloud Fabric.
The customer may provision (and retire/suspend) multiple VMs; Hybrid Port capacity counts only the number of concurrently running VMs at any given time.
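To make the counting rule concrete, here is a small sketch (not an ICF API; the function and event format are illustrative) that computes the peak number of concurrently running VMs from provision/retire events, which is what determines the Hybrid Port capacity you would consume:

```python
# Sketch: peak concurrent "Hybrid Ports" from VM lifecycle events.
# The event format is illustrative, not an actual ICF interface.
def peak_concurrent_vms(events):
    """events: list of (timestamp, delta), where delta is +1 for a VM
    start and -1 for a VM retire/suspend. Returns the peak number of
    VMs running at the same time -- the Hybrid Port count that matters."""
    running = peak = 0
    for _, delta in sorted(events):
        running += delta
        peak = max(peak, running)
    return peak

# Four VMs provisioned and retired over time, but at most three at once:
events = [(1, +1), (2, +1), (3, -1), (4, +1), (5, +1), (6, -1), (7, -1)]
print(peak_concurrent_vms(events))  # 3
```

So a customer could cycle through many VMs over the subscription term; only the concurrent peak counts against the purchased pack.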
The supported encryption algorithms are AES-128-GCM, AES-128-CBC, AES-256-GCM (Suite B), and AES-256-CBC.
The supported hashing algorithms are SHA-1, SHA-256, and SHA-384.
Cisco uses self-signed certificates that are generated by Intercloud Fabric Director and distributed to the Intercloud Fabric Extender, the Intercloud Fabric Switch, and the virtual machines running in the public cloud.
The traffic between the Intercloud Fabric Extender and the Intercloud Fabric node is secured using DTLS; this tunnel is referred to as the "site-to-site" tunnel. There is also a DTLS tunnel between the Intercloud Fabric Switch and the virtual machines running in the public cloud, referred to as the "access" tunnel. TCP-based tunneling is also supported for enterprises that prefer to use TLS instead of DTLS.
The Intercloud Fabric VSM (aka cVSM) is a different form of the original VSM.
The cVSM is designed specifically for the cloud (it is built on the same NX-OS, but in a different format).
So, architecturally, the cVSM follows the same principle as the VSM:
the VSM has VEMs as its modules;
the cVSM has ICX and ICS as its modules.
In summary, one cannot use the vanilla flavor of the original N1kv VSM for the cloud; the cVSM must be deployed to work with ICX and ICS.
I have added a new discussion on VMware vCenter <-> VSM (VSB) <-> VEM on the ESX host.
Many thanks for your replies...
You have covered all the initial setup steps and are very close to using the VSM-VEM pair :)
To answer your main question – yes, an ESXi host can have multiple active virtual switches in parallel.
That is to say, you can have VMware’s DVS, the Nexus 1000v VEM, vSwitch 1, vSwitch 2, …, vSwitch X all on at the same time.
The separation at the switching level happens on the basis of which VMs (via port groups) use which virtual switch.
The uplinks (network adapters, i.e. vmnics) of the host are distributed across the virtual switches and CANNOT be shared between them.
So multiple active virtual switches give you the flexibility to segregate your virtual workloads across those uplinks.
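As a rough illustration from the ESXi shell, you could add a second standard vSwitch and give it its own dedicated uplink (the vSwitch and vmnic names here are just examples for your environment):

```
# List the existing virtual switches and their uplinks
esxcli network vswitch standard list

# Create a second standard vSwitch and assign vmnic2 to it exclusively
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch2
```

Once vmnic2 is attached to vSwitch2, it is no longer available to the other virtual switches on that host, which is exactly the "uplinks cannot be shared" behavior described above.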
Now, regarding L3 mode between the VSM and VEM:
You can either use the existing management interface (vmk0) for communication between the ESXi host (VEM) and the Nexus 1000v VSM,
or you can create a dedicated VMkernel interface (say vmk1) on a separate (non-management) IP subnet for VEM-VSM communication.
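A minimal sketch of the L3 control configuration on the VSM side would look like the following (the domain ID, VLAN, and port-profile name are placeholders for your own values):

```
! Put the SVS domain into L3 mode, using mgmt0 for control traffic
svs-domain
  domain id 100
  svs mode L3 interface mgmt0

! Port-profile for the VMkernel interface (vmk0 or a dedicated vmk1)
! that carries VEM-VSM control traffic
port-profile type vethernet L3-control
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  system vlan 10
  state enabled
```

The key pieces are `svs mode L3` under the SVS domain and `capability l3control` on the port-profile attached to the VMkernel interface; marking the VLAN as a system VLAN keeps control connectivity up before the profile is fully programmed.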
Please refer to the document below, which walks through the scenario you have implemented:
We don’t have a specific document comparing the Nexus 1000v with other distributed virtual switches.
But a few of the advantages of opting for the Nexus 1000v are: it is free, it brings the full set of NX-OS features, it is a separate entity that can be owned and managed by the network team, and it offers the other special features which I presented in the webcast recordings above.
A common deployment I have seen in the field is customers using a vSwitch for management (vmk0) and other host-specific functions,
while using the Nexus 1000v VEM for virtual machine traffic and additional NX-OS functionality like LACP, PVLAN, QoS, and ERSPAN.
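For example, enabling LACP on the VEM uplinks is typically done through an Ethernet port-profile like the sketch below (the profile name and VLAN range are illustrative):

```
! Uplink port-profile bundling the host's vmnics into an LACP port channel
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  channel-group auto mode active
  no shutdown
  system vlan 10
  state enabled
```

Here `channel-group auto mode active` negotiates LACP with the upstream switch, something the standard vSwitch cannot do.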
The Cisco Cloud Services Router (CSR) 1000V consists of single-tenant software routers in a virtual form factor that deliver comprehensive WAN gateway functionality to multi-tenant, provider-hosted clouds.
Cisco Intercloud Fabric is a secure Layer 2 extension from a private enterprise data center to a public cloud. The Intercloud Fabric CSR is the base CSR image to which the Intercloud Fabric Driver is added.
Cisco Intercloud Fabric provides a Layer 2 extension: a virtual machine migrated to the public cloud can remain on the same enterprise VLAN and retain its IP address. This eases application migration to the public cloud, as no configuration changes or re-architecting of applications are needed.
A customer has been using the N1Kv for quite some time now. It is running in Advanced mode (version 4.2.1.SV2.1.1a). Due to some security concerns, the customer wants to rebuild the complete environment using their own ESXi image and upgrade the N1Kv to the new version 5.2.1.SV3.1.2. What is the best practice for doing this:
- create a backup of the VSM VMs before destroying the current host, then restore the VMs onto the new host image and proceed with the VSM upgrade, or
- save the current license file, then install the new N1Kv version from scratch and request a new license file for the new host ID (providing the old license file) – is there an online form available for this, or is a TAC case needed?
In addition to the question above, I have another, somewhat related one: how is it possible that the N1Kv is running in Advanced mode while, at the same time, its PAK (found inside the license file that resides on bootflash) still allows creating a new license file on the cisco.com/go/license web page (it behaves as if that PAK was never used)?