03-19-2012 07:03 AM - edited 03-01-2019 07:04 AM
Hi all,
we will interconnect three datacenters at three different locations over DWDM.
The idea is to use 2x Nexus 5548 in each location to build a FabricPath (FP) cloud between the sites.
But I am not sure whether we can use the Nexus 5500 in both spine and leaf roles,
or whether we need an N7K pair as the spine.
The documentation is not 100% clear on this point.
Would be great if someone could answer that ;-)
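For reference, this is roughly what we have in mind for the FabricPath enablement on the N5Ks (a minimal sketch; the switch-id and interface are just examples, and as far as I know the Enhanced Layer 2 license is required on the 5500):

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11
interface Ethernet1/9
  switchport mode fabricpath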
Ciao Andre
09-05-2012 02:20 AM
Hi Surya,
we already tested configuring carrier delay on the C6500; the N5K does not have a similar command,
only the link debounce timer, but that didn't improve the link-down notification.
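For reference, the debounce setting we tried is configured per interface (a minimal sketch; the 0 ms value is just an example, meant to report link-down immediately):

interface Ethernet1/1
  link debounce time 0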
We don't use FabricPath via vPC; the vPC connection is at the edge of the FP cloud towards an old STP DC cloud.
We configured the vPC+ with a virtual switch ID as recommended.
Ciao Andre
09-05-2012 02:30 AM
This is our setup, maybe that will help ;-)
09-05-2012 02:40 AM
Did you try to play with the "delay restore" parameter in your vPC domain?
09-05-2012 02:49 AM
No improvement!
Configured delay restore of 1 sec.
Still the same convergence time: ~2.5 sec when unplugging and ~2.5 sec when plugging the link back in.
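For clarity, the knob we changed sits under the vPC domain (value in seconds), i.e.:

vpc domain 10
  delay restore 1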
09-05-2012 02:56 AM
How do you check the reconvergence time? Using the timestamps in the logs, or by sending real traffic for your test? Do you see a real traffic loss of nearly 3 seconds? I don't have a Nexus 5k on my desk currently, but what I find strange is that the interface pushes its state immediately to the STP process when running STP, yet not in the case of vPC. Do you see the same delay when running STP over a traditional port-channel?
Can you send the output of a "show run int e1/1" and "show run int po11"?
09-05-2012 03:07 AM
Hi Surya,
we measure with an IXIA traffic generator.
I measure around 2500-2700 msec at link down, and around 2000-2300 msec at link up.
Link down and up with STP is around 200 msec.
I need to reconfigure it to send logs.
The show run configs will follow!
Ciao Andre
09-05-2012 03:11 AM
!Command: show running-config vpc
!Time: Wed Sep 5 10:09:48 2012
version 5.1(3)N2(1a)
feature vpc
vpc domain 10
  role priority 1024
  system-priority 1024
  peer-keepalive destination 10.158.100.101 source 10.158.100.100
  delay restore 1
  auto-recovery
  fabricpath switch-id 10
  ip arp synchronize
interface port-channel10
  vpc peer-link
interface port-channel11
  vpc 11
DCI-FP-001#
DCI-FP-001# sh run int eth 1/1
!Command: show running-config interface Ethernet1/1
!Time: Wed Sep 5 10:08:56 2012
version 5.1(3)N2(1a)
interface Ethernet1/1
  description DC1-Backup, Te0/1#U
  switchport mode trunk
  switchport trunk allowed vlan 2-3967,4048-4093
  storm-control broadcast level 3.00
  storm-control multicast level 3.00
  channel-group 11 mode active
DCI-FP-001# sh run int po 11
!Command: show running-config interface port-channel11
!Time: Wed Sep 5 10:09:37 2012
version 5.1(3)N2(1a)
interface port-channel11
  description DC1-Backup, Po10#U
  switchport mode trunk
  switchport trunk allowed vlan 2-3967,4048-4093
  spanning-tree port type normal
  spanning-tree guard root
  spanning-tree bpdufilter disable
  storm-control broadcast level 3.00
  storm-control multicast level 3.00
  vpc 11
DCI-FP-001# sh vpc
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 10
vPC+ switch id : 10
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
vPC fabricpath status : peer is reachable through fabricpath
Configuration consistency status: success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary
Number of vPCs configured : 1
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
vPC Peer-link status
---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po10 up 2-501,2000-3499
vPC status
---------------------------------------------------------------------------
id Port Status Consistency Reason Active vlans vPC+ Attrib
-- ---------- ------ ----------- ------ ------------ -----------
11 Po11 up success success 2-501,2000-3499 DF: Partial
09-05-2012 11:57 AM
Strange issue; I have a test acceptance plan scheduled for mid-September for a new DC with vPC on Nexus 5596; I'll take a look at the convergence time.
09-11-2012 01:30 AM
Hi.
Could you find the reason why the link status is reported so slowly? Any TAC case opened?
09-11-2012 02:18 AM
Hi,
No, we have now raised a case with Cisco's DC Nexus BU.
Hope we will get an answer soon!
09-11-2012 02:26 AM
Did you test vPC convergence time without vPC+/FabricPath? By just building a port-channel to another switch with pure Ethernet and traditional vPC.
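I.e. something along these lines on both N5K peers (a minimal sketch; the domain number, keepalive addresses, and port-channel numbers are just examples):

vpc domain 20
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
interface port-channel20
  switchport mode trunk
  vpc peer-link
interface port-channel21
  switchport mode trunk
  vpc 21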
09-11-2012 02:35 AM
Yes and no; I did a test in another environment with N5K and C6500 (Sup720 as VSS):
MEC on the C6500 side and vPC on the N5K side, and we had sub-second convergence time.
Our concern is that we would like to use FP for the DCI connections, so we need a solution for this issue.
06-04-2014 03:28 PM
Hi Andre,
Did you experience any issues with your previous Layer 2 interconnection between the three datacenters?
Just wondering, because this is an option I'm considering to connect three DCs ...
Thanks, Luigi.