Instead of a complete rebuild from scratch, customers can use a simple trick to avoid the extra configuration and licensing steps:
1. Power off the existing secondary N1kv VSM VM and remove it from inventory.
2. Deploy a new secondary N1kv VSM VM and sync it with the existing primary N1kv VSM.
3. Perform a switchover from the primary VSM to the newly created secondary VSM (it will now become the primary).
4. Power off and remove the old VSM VM (the original N1kv primary).
5. Create a new secondary N1kv VSM and sync it with the "new" primary VSM.
By using the method above, you won't have to worry about porting the configuration and licenses.
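The steps above can be checked and driven from the VSM CLI. A sketch follows (exact output wording varies by release, so verify on your own version):

```
! On the current primary VSM, confirm the new secondary is in sync
show system redundancy status   ! expect HA operational mode, standby "hot"
show module                     ! both VSM modules should be listed

! Trigger the switchover to the newly deployed secondary (step 3)
system switchover
```

Only perform the switchover once the standby reports a hot/synchronized state, otherwise the configuration may not carry over cleanly.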
Regarding your other question about the unused PAK, I think there is a registration issue between the PAK and the N1kv's host ID. I highly recommend opening a TAC case with the Nexus 1000V licensing team to look into the issue.
Cool trick! Thank you!
As you are so resourceful, I do have two more questions about licensing:
- If licensing is based on CPU sockets on each VEM, does that mean that if I apply 8 licenses to the VSM and then add 4 ESXi hosts (each a B200 M3 with two CPUs), there will be no licenses left? From a licensing perspective, is there any difference between using one VSM (standalone) or two (active-standby), i.e. will I need more licenses just because I'm using HA for the VSM?
- What is the proper way to request new licenses if both VSM VMs were destroyed during a host (ESXi) upgrade? What data do I have to provide in that case? I assume a TAC case is the way to go, but I'd like to double-check with you.
Thanks a million,
Below is a quick summary of the deployment locations for the ICF infrastructure components:
The Intercloud Fabric Director (ICF-D) and Intercloud Fabric Extender (ICX) virtual machines are deployed in the enterprise data center (private cloud).
The Intercloud Fabric Switch (ICS) is a virtual machine that runs in the provider cloud.
The Intercloud Fabric Provider Platform (ICFPP) is also deployed in the provider cloud.
AVS is the Application Virtual Switch, specific to ACI environments.
VACS is not intended for ACI, nor to run on a network fabric operating in ACI mode.
The Nexus 1000V is the only virtual switch included with VACS.
ASAv will require the following port-profiles for its interfaces:
1. Inside interface (g0/0)
2. Outside interface (g0/1)
3. Failover interface (g0/2)
4. Management interface (m0/0)
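Each of these interfaces is typically attached to a vEthernet port-profile on the Nexus 1000V. A minimal sketch for the inside interface is below; the profile name and VLAN 100 are made-up values:

```
port-profile type vethernet ASAv-Inside
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

The outside, failover, and management interfaces would each get an analogous profile, usually on their own VLANs.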
There is no set order that must be followed when chaining virtual service nodes such as ASAv, VSG, a load balancer, and so on.
Service chaining is flexible and is usually defined based on traffic-flow logic.
It is better to apply intra-segment security (VSG) first, before the inter-segment firewall (ASAv).
It is more appropriate to have VM traffic go through the VSG first and, depending on security compliance, then be allowed to use the load-balancing services of the Citrix NetScaler.
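As an illustration of how such a chain is expressed on the Nexus 1000V with vPath, a sketch is below. All node names, IP addresses, and profile names are made up, and the vservice CLI details vary by release, so treat this as the general shape rather than exact syntax:

```
! Define the service nodes (addresses and profiles are hypothetical)
vservice node VSG1 type vsg
  ip address 10.10.10.11
  adjacency l3
vservice node ASA1 type asa
  ip address 10.10.10.12
  adjacency l3

! Chain them: lower order is traversed first (VSG, then ASAv)
vservice path chain1
  node VSG1 profile vsg-sec-profile order 10
  node ASA1 profile asa-edge-profile order 20
```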
The unit of operation for Intercloud Fabric is currently a virtual machine, so only VMs can be moved across clouds using ICF.
However, northbound APIs are available from the Intercloud Fabric Director to enable integration with cloud-management tools that have application-level visibility.
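To give a feel for what that northbound integration looks like, here is a minimal sketch of building a request against ICF Director's UCS Director-style REST API. The host name, the `userAPIGetVMs` operation name, and the access-key placeholder are illustrative assumptions; check your ICFD version's API guide for the real operation names:

```python
# Sketch: build a request for an ICF Director northbound REST call.
# The API follows the UCS Director style: one /app/api/rest endpoint,
# an opName query parameter, and an API access key passed in the
# X-Cloupia-Request-Key header. opName/host below are assumptions.
from urllib.parse import urlencode


def build_icfd_request(host, op_name, op_data="{}"):
    """Return (url, headers) for a UCS Director-style REST call."""
    query = urlencode({"formatType": "json",
                       "opName": op_name,
                       "opData": op_data})
    url = f"https://{host}/app/api/rest?{query}"
    headers = {"X-Cloupia-Request-Key": "<your-api-access-key>"}
    return url, headers


url, headers = build_icfd_request("icfd.example.com", "userAPIGetVMs")
print(url)
```

From here you would issue the call with any HTTP client (e.g. `requests.get(url, headers=headers, verify=False)` in a lab) and parse the JSON response.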