We will interconnect three data centers in three different locations over DWDM.
The idea is to use 2x Nexus 5548 in each location to set up a FabricPath cloud between the sites.
But I am not sure whether the NX5500 can be used in both spine and leaf roles,
or whether we need an N7K pair as the spine.
The documentation is not 100% clear on this point.
It would be great if someone could answer that ;-)
We will order the equipment in the next few days and will share our experiences with the FabricPath cloud during the implementation.
I also got the information that the N5500 is valid for designs like this, but you need to keep that in mind if you want to use Layer 3 modules and HSRP in that topology.
HSRP Active-Site / Active-Site won't work with more than two DC sites, or if there are additional FabricPath switches in the topology!
We have now set up our lab for the DCI FabricPath testing!
We configured the FabricPath edge towards the old world with vPC+, as recommended,
with no use of Layer 3.
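For context, the edge looked roughly like the following minimal vPC+ sketch (the domain ID, switch-id, port-channel numbers and keepalive address are placeholders, not our actual configuration):

```
feature-set fabricpath
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.0.2.2
  fabricpath switch-id 100    ! emulated switch-id: this is what turns vPC into vPC+

interface port-channel1
  switchport mode fabricpath  ! the vPC+ peer-link runs as a FabricPath core port
  vpc peer-link

interface port-channel11
  switchport mode trunk       ! classic-Ethernet (CE) trunk towards the old world
  vpc 11
```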
The result is a very slow convergence time of 2.5 s when a vPC+ (port-channel) member link goes down,
and another 2.5 s when the link comes back!
We reconfigured the edge with RSTP towards the old network, and the convergence time was, as expected, sub-second:
around 400-700 ms.
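For comparison, the RSTP variant simply runs the edge port-channel as a classic-Ethernet trunk with no vPC attached (Rapid-PVST is the NX-OS default spanning-tree mode; the interface number is a placeholder):

```
spanning-tree mode rapid-pvst

interface port-channel11
  switchport mode trunk    ! CE trunk towards the old network, participating in RSTP
```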
Why is vPC+ with FabricPath that slow? Does anybody have the same experience?
Hope to get an answer; with a total convergence time of 5 s, we can't implement this in a DCI environment!
We already configured it as an RSTP connection,
but we will soon replace the edge systems in the old world (currently C6500) with N7K, and then we would like to use vPC+ / vPC between the DCI edge switches and the old world; today it is STP only, since we haven't migrated the old world to vPC yet ...
A few months ago I tested vPC and MEC with 2x N5k and 2x C6500 (VSS),
and the convergence time was sub-second!
So we are not very happy with the situation yet, because 2.5 s in a DC environment is too long!
We are implementing here a DCI interconnection between three sites, on Layer 2 only!
It is a Layer 2 disaster recovery design, for vMotion of ESX VMs.
This 2.5 s (unplug) + 2.5 s (plugging back in) would impact our DC environment very heavily.
The old setup with Nexus and STP has sub-second convergence, so we can't implement a new technology that is slower ...
We will see ...
So as a result, RSTP converges much faster than vPC.
I ran a debug on the ports of the N5k DCI switch in vPC mode:
the logging shows a link going down and the time until the port-channel recognizes the link failure.
It takes 3.5 s!
3.5 s elapse between the first and the second %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN message for port-channel11:
2012 Sep 4 11:05:10.927 DCI-FP-001 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel11: first operational port changed from Ethernet1/1 to none
2012 Sep 4 11:05:10.930 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel11: Ethernet1/1 is down
2012 Sep 4 11:05:10.930 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel11: port-channel11 is down
2012 Sep 4 11:05:10.951 DCI-FP-001 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel11 is down (No operational members)
2012 Sep 4 11:05:14.507 DCI-FP-001 %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/1 is down (Link failure)
2012 Sep 4 11:05:14.533 DCI-FP-001 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel11 is down (No operational members)
After reconnecting the link, it takes about 2 s:
2012 Sep 4 11:13:43.517 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel11: Ethernet1/1 is up
2012 Sep 4 11:13:43.523 DCI-FP-001 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel11: first operational port changed from none to Ethernet1/1
2012 Sep 4 11:13:43.652 DCI-FP-001 %ETHPORT-5-IF_UP: Interface Ethernet1/1 is up in mode trunk
2012 Sep 4 11:13:45.415 DCI-FP-001 %ETHPORT-5-IF_UP: Interface port-channel11 is up in mode trunk
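The gaps can be verified straight from the syslog timestamps above; a small sketch (Python, timestamps copied verbatim from the log output):

```python
from datetime import datetime

# NX-OS syslog timestamp format, e.g. "2012 Sep 4 11:05:10.951"
FMT = "%Y %b %d %H:%M:%S.%f"

def delta(t1: str, t2: str) -> float:
    """Seconds elapsed between two NX-OS syslog timestamps."""
    return (datetime.strptime(t2, FMT) - datetime.strptime(t1, FMT)).total_seconds()

# link down: first vs. second "port-channel11 is down (No operational members)"
down = delta("2012 Sep 4 11:05:10.951", "2012 Sep 4 11:05:14.533")

# link up: member PORT_UP vs. the port-channel's own IF_UP
up = delta("2012 Sep 4 11:13:43.517", "2012 Sep 4 11:13:45.415")

print(f"down: {down:.3f} s, up: {up:.3f} s")  # down: 3.582 s, up: 1.898 s
```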
I did the same with RSTP and got the link-down notification within 200 ms!