FabricPath-Datacenter-Interconnection-only with Nexus 5500

a.schoppmeier
Level 1

Hi all,

we will interconnect 3 datacenters in 3 different locations, connected via DWDM.

The idea was to use 2x Nexus 5548 in each location to set up a FabricPath cloud between the sites.

But I am not sure if we can use the Nexus 5500 in both spine and leaf roles.

Or do we need an N7K pair as the spine?

The documentation is not 100% clear at this point.

It would be great if someone could answer that ;-)

Ciao Andre
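For context, a minimal sketch of what enabling FabricPath on a Nexus 5548 involves (switch-id, VLAN, and interface numbers are placeholders, and it assumes the Enhanced Layer 2 license is installed). Note that FabricPath has no explicit spine or leaf role in the configuration; that distinction is purely a matter of topology:

install feature-set fabricpath
feature-set fabricpath
! every FabricPath switch needs a unique switch-id
fabricpath switch-id 11
! VLANs carried across the fabric must be in fabricpath mode
vlan 100
  mode fabricpath
! core port facing the FabricPath cloud
interface ethernet 1/17
  switchport mode fabricpath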

27 Replies

Hi Surya,

we already tried configuring carrier-delay on the C6500; the N5K does not have a similar command, only the link debounce timer, and that didn't improve the link-down notification.
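For reference, these are the two knobs in question; a minimal sketch with illustrative values (0 ms, i.e. immediate notification), not a tested recommendation:

! C6500 (IOS): carrier-delay in milliseconds
interface TenGigabitEthernet0/1
 carrier-delay msec 0

! N5K (NX-OS): link debounce timer in milliseconds
interface ethernet 1/1
  link debounce time 0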

We don't use FabricPath via vPC; the vPC connection is at the edge of the FabricPath cloud, towards an old STP DC cloud.

We configured the vPC+ with a virtual switch ID, as recommended.

Ciao Andre

This is our setup, maybe that will help ;-)

Did you try playing with the "delay restore" parameter in your vPC domain?

No improvement!

Configured delay restore to 1 sec.

Still the same convergence time: 2.5 sec when unplugging the link and 2.5 sec when plugging it back in.

How do you measure the reconvergence time? Using the timestamps in the logs, or by sending real traffic during your test? Do you see real traffic loss of nearly 3 seconds? I don't have a Nexus 5k on my desk at the moment, but what I find strange is that the interface pushes its state to the STP process immediately when running STP, but not in the case of vPC. Do you see the same delay when running STP over a traditional port-channel?

Can you send the output of "show run int e1/1" and "show run int po11"?

Hi Surya,

we measure with an IXIA traffic generator.

I measure around 2500-2700 msec at link down, and around 2000-2300 msec at link up.

Link down and up with STP is around 200 msec.
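For context on how such figures are typically derived from a traffic-generator test (the rate below is a made-up example, not from this test): at a constant 10,000 frames per second, a count of 25,000 lost frames corresponds to 25,000 / 10,000 = 2.5 s of outage.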

I need to reconfigure it to send logs.

The "show run" output will follow!

Ciao Andre

!Command: show running-config vpc
!Time: Wed Sep  5 10:09:48 2012

version 5.1(3)N2(1a)
feature vpc

vpc domain 10
  role priority 1024
  system-priority 1024
  peer-keepalive destination 10.158.100.101 source 10.158.100.100
  delay restore 1
  auto-recovery
  fabricpath switch-id 10
  ip arp synchronize

interface port-channel10
  vpc peer-link

interface port-channel11
  vpc 11

DCI-FP-001# sh run int eth 1/1

!Command: show running-config interface Ethernet1/1
!Time: Wed Sep  5 10:08:56 2012

version 5.1(3)N2(1a)

interface Ethernet1/1
  description DC1-Backup, Te0/1#U
  switchport mode trunk
  switchport trunk allowed vlan 2-3967,4048-4093
  storm-control broadcast level 3.00
  storm-control multicast level 3.00
  channel-group 11 mode active

DCI-FP-001# sh run int po 11

!Command: show running-config interface port-channel11
!Time: Wed Sep  5 10:09:37 2012

version 5.1(3)N2(1a)

interface port-channel11
  description DC1-Backup, Po10#U
  switchport mode trunk
  switchport trunk allowed vlan 2-3967,4048-4093
  spanning-tree port type normal
  spanning-tree guard root
  spanning-tree bpdufilter disable
  storm-control broadcast level 3.00
  storm-control multicast level 3.00
  vpc 11

DCI-FP-001# sh vpc
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                   : 10
vPC+ switch id                  : 10
Peer status                     : peer adjacency formed ok
vPC keep-alive status           : peer is alive
vPC fabricpath status           : peer is reachable through fabricpath
Configuration consistency status: success
Per-vlan consistency status     : success
Type-2 consistency status       : success
vPC role                        : primary
Number of vPCs configured       : 1
Peer Gateway                    : Disabled
Dual-active excluded VLANs      : -
Graceful Consistency Check      : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po10   up     2-501,2000-3499

vPC status
---------------------------------------------------------------------------
id     Port        Status Consistency Reason       Active vlans     vPC+ Attrib
--     ----------  ------ ----------- ------       ------------     -----------
11     Po11        up     success     success      2-501,2000-3499  DF: Partial

Strange issue; I have a test acceptance plan scheduled for mid-September for a new DC with vPC on Nexus 5596; I'll take a look at the convergence time.

Hi.

Could you find the reason why the link status is reported so slowly? Any TAC case opened?

Hi,

No, we have now raised a case with Cisco's DC Nexus BU.

Hope we will get an answer soon!

Did you test vPC convergence time without vPC+ / FabricPath? That is, by just building a port-channel to another switch with pure Ethernet and traditional vPC.
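For reference, a minimal sketch of such a classic vPC baseline, i.e. the same port-channel but without the fabricpath switch-id in the vPC domain (domain ID, keepalive addresses, and port-channel numbers are placeholders):

feature vpc
vpc domain 20
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
interface port-channel10
  switchport mode trunk
  vpc peer-link
interface port-channel11
  switchport mode trunk
  vpc 11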

Yes and no: I did a test in another environment with N5K and C6500 (Sup720 as VSS).

MEC on the C6500 side and vPC on the N5K side, and we had sub-second convergence time.

Our concern is that we would like to use FabricPath for the DCI connections, so we need a solution for this issue.

Hi Andre,

Did you experience any issues with your previous Layer 2 interconnection between the three datacenters?

Just wondering, because this is an option I'm considering for connecting three DCs ...

Thanks, Luigi.
