I am aware of the Cisco vPC recommendation to keep separate VLANs for vPC and non-vPC traffic, and to carry the non-vPC VLANs over a separate L2 trunk.
However, I have a specific migration scenario: we need to migrate to Nexus, and while doing so (in the same downtime window) we would like to activate and enable vPC, even though in the first phase it will not immediately be used. The current setup is a classical L2 trunk between the switches, with hosts in Active/Standby based on port state.
Now when I migrate, I have two options:
1) Keep all the existing VLANs (and ports) on a separate L2 trunk, and enable the vPC peer link on separate interfaces, carrying no VLANs at first since we have no vPC member ports yet after the migration.
The disadvantage here is that once we have vPC member ports, or convert a server from Active/Standby to a vPC LACP bundle, we would need to move that server to a separate vPC-enabled VLAN, which implies re-IPing the server. That will be hard for the server team.
2) I don't see why I wouldn't convert the existing L2 trunk to a vPC peer link and make all existing VLANs on the switch vPC VLANs.
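For what it's worth, the conversion I have in mind looks roughly like this (a sketch only; the domain ID, keepalive addresses, and interface numbers are placeholders for my setup):

```
feature vpc
!
vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
  peer-gateway
  auto-recovery
!
! the existing inter-switch trunk becomes the peer link
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
! spread the members across linecards/ASICs
interface Ethernet1/49, Ethernet1/50
  channel-group 1 mode active
```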
Yes, I know DHCP will become active on both sides, so I need an SVI for each VLAN active on both sides.
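Something like this per VLAN, with HSRP providing the shared gateway (the VLAN number and addresses are just examples; the other peer gets .3 with the same virtual IP):

```
feature interface-vlan
feature hsrp
!
interface Vlan100
  no shutdown
  ip address 192.168.100.2/24
  hsrp 100
    ip 192.168.100.1
```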
Yes, I know the VLANs need to be mirrored on both peers, otherwise vPC will not come up.
Yes, I know that orphan ports' MAC addresses will only be learned on one side, but I don't mind using the vPC peer link to switch that traffic. And, according to all the documentation, the vPC peer link will do exactly that for orphan ports.
I don't mind the bandwidth "waste", as I am planning to build the peer link with 2x 40GE links, which should be more than enough for all inter-Nexus switching.
I am not assuming any "peer link down" scenarios. I know we have a disaster when the peer link goes down, but the same was true before the migration with a classical L2 trunk (when that went down). The link will be set up with the highest level of resilience: dual path, not on the same linecards or ASICs.
The big advantage here is that we can gradually move server ports to LACP-enabled vPC port-channels without re-IPing.
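Per server, that conversion would be something along these lines (the port-channel/vPC number and interface are examples; the same vPC number is configured on both peers):

```
interface port-channel20
  ! trunk or access, matching the existing server port config
  switchport mode trunk
  vpc 20
!
interface Ethernet1/10
  channel-group 20 mode active
```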
So, am I missing something here? Any particular scenarios I need to be afraid of?
Is the only concern the vPC consistency check, which in the worst case could block the peer link entirely and therefore affect ALL VLANs on the system, whereas in scenario 1 the impact would be limited to the vPC-enabled VLANs?
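As I understand it, only Type-1 consistency mismatches suspend VLANs (globally for the peer link, or per vPC on a member port-channel), while Type-2 mismatches only log warnings. These are the checks I would run after each change (vPC 20 is just an example member channel):

```
show vpc
show vpc consistency-parameters global
show vpc consistency-parameters vpc 20
show vpc orphan-ports
```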
I don't know of a reason you wouldn't do #2 either. We run VPC on all our campus and Data Center (5/7/9K) cores and have plenty of "single-legged," non-VPC connections to them without an additional L2, non-VPC trunk link between the VPC peers. The key, as you say, is having SVIs on both peers with L3 addressing and some kind of FHRP running.