6366 Views · 0 Helpful · 27 Replies
Beginner

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi all,

we will interconnect 3 datacenters in 3 different locations, connected via DWDM.

The idea was to use 2x NX 5548 in each location to set up an FP cloud between the sites.

But I am not sure if we can use the NX5500 in spine and leaf mode, or whether we need an N7K pair as spine.

The documentation is not 100% clear on this point.

It would be great if someone could answer that ;-)

Ciao Andre
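For context, a minimal FabricPath enablement on a Nexus 5500 looks roughly like the sketch below; the feature-set commands are standard NX-OS, but the switch-id, VLAN, and interface numbers are illustrative assumptions, not taken from this thread:

```
! Assumed minimal FabricPath setup on a Nexus 5500
install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11

! VLANs extended across the FP cloud must run in fabricpath mode
vlan 100
  mode fabricpath

! Core ports towards the DWDM/DCI links carry FabricPath encapsulation
interface Ethernet1/1-2
  switchport mode fabricpath
```

Edge ports towards the classical-Ethernet world stay in normal `switchport mode trunk`/`access`.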

Enthusiast

I want to use exactly the same design.

Just like with spanning-tree, you can have any topology you want.

Leaf-and-spine architectures are simply the state of the art for HPC networks, to get predictable latency from any peer to any peer.

Beginner

We will order the equipment in the next few days and will let you know our experiences with the FP cloud during the implementation.

I also got the information that the N5500s are valid for designs like this, but there are things to keep in mind if you want to use L3 modules and HSRP in that topology.

HSRP Active-Site/Active-Site won't work with more than two DC sites, or if there are additional FP switches in the topology!

Enthusiast

You can use inter-site FHRP isolation if inter-VLAN routing is owned by other devices, and if you apply a PACL or VACL on the Nexus 5500 at the edge ports (classical Ethernet) of the TRILL/FabricPath cloud.
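As a sketch of that isolation (the ACL names, VLAN, and HSRP version are assumptions, and VACL support and limits on the N5500 should be verified for your NX-OS release), HSRP v1 hellos can be dropped at the FP edge like this:

```
! HSRP v1 hellos: UDP/1985 to 224.0.0.2 (HSRP v2 uses 224.0.0.102)
ip access-list HSRP-HELLO
  10 permit udp any host 224.0.0.2 eq 1985
ip access-list ANY
  10 permit ip any any

vlan access-map FHRP-ISOLATE 10
  match ip address HSRP-HELLO
  action drop
vlan access-map FHRP-ISOLATE 20
  match ip address ANY
  action forward

! Apply on the extended VLANs so each site keeps its own local HSRP gateway
vlan filter FHRP-ISOLATE vlan-list 100
```

The same ACL could instead be applied as a PACL (`ip port access-group HSRP-HELLO in`) on the classical-Ethernet edge ports, depending on platform capabilities.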

Beginner

Hi Andre,

What did you end up doing? Did you get any more info on this?

Cheers,

Dion

Beginner

Hi Dion,

I got the information from Cisco that FP is approved with the N5500 only,

so I ordered the hardware for the new DCI between the DCs (6x 5500).

It should arrive in mid-May!

Then we can start with the testing.

Ciao Andre

Beginner

Nice... enjoy those toys.

So the 5548UPs are capable of running as spine?

Most design references are for two DCs; have you found one for three or more DCs?

The concept looks easy, but when you start looking at the detailed connections and try to understand the traffic flows... it gives me a headache... LOL.

Dion

Beginner

Hi all,

we have now set up our lab for the DCI FabricPath testing!

We configured the FabricPath edge towards the OLD world with vPC+ as recommended,

with no use of Layer 3.

The result is a very, very slow convergence time of 2.5 sec when a vPC+ (port-channel) member link goes down,

and again 2.5 sec when the link comes back!

We reconfigured the edge with RSTP towards the old network, and the convergence time was, as expected, sub-second,

around 400-700 msec.

Why is vPC+ with FabricPath that slow? Does anybody have the same experience?

Hoping for an answer; with a total convergence time of 5 sec we can't implement this in a DCI environment!

Ciao Andre
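For reference, a vPC+ edge of this kind is configured roughly as below (domain, port-channel, and switch-id numbers are illustrative assumptions); vPC+ differs from plain vPC mainly in the `fabricpath switch-id` under the vPC domain, which creates the emulated switch, and the peer link running in fabricpath mode:

```
vpc domain 10
  fabricpath switch-id 100   ! this makes it vPC+ (emulated switch-id)

! Peer link runs as a FabricPath core port
interface port-channel1
  switchport mode fabricpath
  vpc peer-link

! vPC+ towards the classical-Ethernet "old world"
interface port-channel11
  switchport mode trunk
  vpc 11

interface Ethernet1/1
  channel-group 11 mode active
```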

Enthusiast

What happens if you try without vPC+?

vPC has longer convergence times than VSS or other technologies.

Beginner

Hi Surya,

we already configured it as an RSTP connection,

but we will soon replace the edge systems in the old world (old C6500) with N7Ks, and for those we would like to use vPC+ / vPC between the DCI edge switches and the old world. For now it is STP only; we haven't migrated to vPC in the old world yet...

A few months ago I tested vPC and MEC with 2x N5K and 2x C6500 (VSS),

and the convergence time was sub-second!

So we are not very happy with the situation yet, because 2.5 sec in a DC environment is too long!

Ciao Andre

Enthusiast

Hi.

2.5 seconds will not impact you that much; best practice for HSRP timers is still hello=1 / hold=3, because nearly all TCP applications won't face any issue with 3 or 4 seconds of disruption.

It will only have an impact if you use a lot of non-TCP apps or very specific TCP-based apps.
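Those timers are set per HSRP group, e.g. as below (the interface, group number, and addresses are placeholders; NX-OS also accepts millisecond timers via `timers msec ...` if faster failover is really needed):

```
feature hsrp
feature interface-vlan

interface Vlan100
  ip address 10.1.100.2/24
  hsrp 1
    ip 10.1.100.1
    timers 1 3    ! hello 1 s, hold 3 s
```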

Beginner

Hi,

we are implementing a DCI interconnection between 3 sites, on Layer 2 only!

It is Layer 2 disaster recovery, for vMotion of ESX VMs.

These 2.5 sec (unplugging) + 2.5 sec (re-plugging) will impact our DC environment very heavily.

The old setup with Nexus and STP has sub-second convergence, so we can't implement a new technology that is slower...

We will see ....

Ciao Andre

Enthusiast

What do the timestamped logs show? Can you investigate where the time is spent during these 2.5 seconds?

Beginner

So as a result, RSTP converges much faster than vPC.

I ran a debug on the ports of the Nexus N5K DCI in vPC mode.

The logging shows a link going down and the time until the port-channel recognizes the link failure.

It takes 3.5 sec!

Between the first and the second "ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel11 is down":

----------

2012 Sep 4 11:05:10.927 DCI-FP-001 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel11: first operational port changed from Ethernet1/1 to none

2012 Sep 4 11:05:10.930 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel11: Ethernet1/1 is down

2012 Sep 4 11:05:10.930 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel11: port-channel11 is down

2012 Sep 4 11:05:10.951 DCI-FP-001 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel11 is down (No operational members)

2012 Sep 4 11:05:14.507 DCI-FP-001 %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/1 is down (Link failure)

2012 Sep 4 11:05:14.533 DCI-FP-001 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel11 is down (No operational members)

Reconnecting the link takes 2 sec:

-----------

2012 Sep 4 11:13:43.517 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel11: Ethernet1/1 is up

2012 Sep 4 11:13:43.523 DCI-FP-001 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel11: first operational port changed from none to Ethernet1/1

2012 Sep 4 11:13:43.652 DCI-FP-001 %ETHPORT-5-IF_UP: Interface Ethernet1/1 is up in mode trunk

2012 Sep 4 11:13:45.415 DCI-FP-001 %ETHPORT-5-IF_UP: Interface port-channel11 is up in mode trunk

I did the same with RSTP and had the link-down notification within 200 msec!

Ciao Andre

Enthusiast

Do you have any carrier-delay configured on some interfaces? Can you give us the config of the DCI-facing ports?

Keep in mind that FabricPath over vPC is not supported, as far as I know. DCI-facing links should not be port-channels in vPC; with vPC+ there is an FP forwarder elected within the vPC+ domain.
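One thing worth checking for the ~3.5 s gap in the logs between the port-channel members going down and the "Link failure" message is the link debounce timer on the member ports. As a sketch (the interface is a placeholder, and the availability of these commands should be verified for your platform and port type):

```
! Inspect the current debounce setting of the member port
show interface ethernet 1/1 debounce

! Reduce (or disable) debounce so link faults are reported immediately
interface Ethernet1/1
  link debounce time 0
```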