10-29-2018 07:12 AM - edited 03-08-2019 04:29 PM
Hello,
I am trying to set up a vPC domain with two Nexus 5548 switches. We cannot use the copper mgmt0 port, so I would like to run the peer-keepalive over a VLAN interface (SVI); we have no L3 module.
I am able to ping the peer switch, but the vPC doesn't come up. Am I really forced to use the mgmt0 port (with the management VRF)?
Here is the config:
Cisco Nexus Operating System (NX-OS) Software
version 7.3(3)N1(1)
feature interface-vlan
feature lacp
feature vpc
feature lldp
feature fabric access
vlan 31
  name N5K-KeepAlive

vrf context KeepAlive
vrf context management

vpc domain 14
  peer-switch
  peer-keepalive destination 10.31.14.5 source 10.31.14.4 vrf KeepAlive

interface Vlan31
  description N5K-KeepAlive
  no shutdown
  vrf member KeepAlive
  ip address 10.31.14.4/24

interface port-channel14
  description VPC-Peer
  switchport mode trunk
  no lacp suspend-individual
  switchport trunk allowed vlan 2-10
  spanning-tree port type network
  speed 10000
  duplex full
  vpc peer-link

interface Ethernet1/29
  description PEER
  switchport mode trunk
  switchport trunk allowed vlan 2-10
  duplex full
  channel-group 14

interface Ethernet1/30
  description N5K_KeepAlive
  switchport mode trunk
  switchport trunk allowed vlan 31
  speed 1000
  duplex full
  logging event port link-status

interface mgmt0
  vrf member management
# show vpc
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 14
Peer status : peer link is down
vPC keep-alive status : Suspended (Destination IP not reachable)
Configuration consistency status : failed
Per-vlan consistency status : success
Configuration inconsistency reason: Consistency Check Not Performed
Type-2 consistency status : failed
Type-2 inconsistency reason : QoSMgr type-1 configuration incompatible
vPC role : none established
Number of vPCs configured : 1
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Disabled (due to peer configuration)
Operational Layer3 Peer-router : Disabled
Auto-recovery status : Enabled (timeout = 240 seconds)
vPC Peer-link status
---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po14 up -
# show vpc peer-keepalive
vPC keep-alive status : Suspended (Destination IP not reachable)
--Send status : Success
--Last send at : 2018.10.29 12:10:22 63 ms
--Sent on interface : Vlan31
--Receive status : Failed
--Last update from peer : (130) seconds, (491) msec
vPC Keep-alive parameters
--Destination : 10.31.14.5
--Keepalive interval : 1000 msec
--Keepalive timeout : 5 seconds
--Keepalive hold timeout : 3 seconds
--Keepalive vrf : KeepAlive
--Keepalive udp port : 3200
--Keepalive tos : 192
# ping 10.31.14.5 source 10.31.14.4 vrf KeepAlive
PING 10.31.14.5 (10.31.14.5) from 10.31.14.4: 56 data bytes
64 bytes from 10.31.14.5: icmp_seq=0 ttl=254 time=3.176 ms
64 bytes from 10.31.14.5: icmp_seq=1 ttl=254 time=4.877 ms
64 bytes from 10.31.14.5: icmp_seq=2 ttl=254 time=4.98 ms
64 bytes from 10.31.14.5: icmp_seq=3 ttl=254 time=4.978 ms
64 bytes from 10.31.14.5: icmp_seq=4 ttl=254 time=4.979 ms
--- 10.31.14.5 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 3.176/4.598/4.98 ms
10-29-2018 07:45 AM
- What's in show logging when you try to bring up the vPC? Are there any messages indicating that mgmt0 is required?
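Something along these lines should surface the relevant messages (the filter string is an assumption; NX-OS vPC syslog messages are typically prefixed with %VPC):

```
show logging last 50
show logging logfile | include VPC
```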
M.
10-29-2018 09:12 AM
Hello,
when I use the mgmt0 port and move the IP addresses to mgmt0, the vPC comes up without any problems.
But I need to use two SFP ports. I think I have a problem with the spanning-tree setup.
I noticed that on one switch the port of the keepalive link (Eth1/30) goes into BLK mode when I connect the peer link. Which spanning-tree mode should I use?
"log":
#1 Eth1/30 connected on both sides -> ping OK, vPC down
#2 connecting Eth1/29 (peer link) -> ping OK, vPC comes up
#3 Eth1/30 (VLAN 31) goes into BLK mode -> vPC peer unreachable
Of course VLAN 31 (the keepalive VLAN) is only configured on Eth1/30, not on Eth1/29, so there shouldn't be a loop.
Maybe the switch overreacts to the incoming BPDUs? Would a bpdufilter on Eth1/30 help? That doesn't seem like the cleanest solution, though.
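For reference, a BPDU filter on the keepalive port would look roughly like this (a sketch of the workaround being considered, applied on both switches; whether suppressing BPDUs here is safe depends on the rest of the topology):

```
interface Ethernet1/30
  spanning-tree bpdufilter enable
```

Afterwards, show spanning-tree vlan 31 should no longer show Eth1/30 in BLK state.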
10-29-2018 11:19 AM
- Basically I think the problem is more fundamental. From what I found:
Strong recommendations: when building a vPC peer-keepalive link, use the following in descending order of preference:
1. Dedicated link(s) (a 1-Gigabit Ethernet port is enough) configured as L3; a port-channel with 2 x 1G ports is even better.
2. The mgmt0 interface (along with management traffic).
3. As a last resort, route the peer-keepalive link over the Layer 3 infrastructure.
While the 7000 platform is even more advanced than the 5K, it comes down to this: you will have to use mgmt0 if you don't have L3.
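For comparison, option 2 (mgmt0 with the management VRF) would look roughly like this, reusing the addressing from the config above:

```
interface mgmt0
  vrf member management
  ip address 10.31.14.4/24

vpc domain 14
  peer-keepalive destination 10.31.14.5 source 10.31.14.4 vrf management
```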
M.
10-30-2018 12:13 AM - edited 10-30-2018 12:14 AM
Hello,
today I set the bpdufilter on both keepalive ports and rebooted. The vPC came up and stays up. A test ping within VLAN 2 (across the peer link) is successful.
Referring to the quote: for the N5K I also saw a guide for using SVIs (missing link here). The SVI is an L3 port, and a dedicated link is also given. Maybe the spanning-tree problem is a "feature" that is not (yet) covered by the Cisco guidelines?
I don't see why this shouldn't work. I will run some more tests for redundancy and stability.
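To confirm that the keepalive port stays out of BLK state and the keepalive keeps flowing after the change, something like:

```
show spanning-tree vlan 31
show vpc peer-keepalive
```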
# show vpc
vPC domain id : 14
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary
Number of vPCs configured : 0
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Operational Layer3 Peer-router : Disabled
Auto-recovery status : Enabled (timeout = 240 seconds)
vPC Peer-link status
---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po14 up 2-10
# show vpc peer-keepalive
vPC keep-alive status : peer is alive
--Peer is alive for : (960) seconds, (723) msec
--Send status : Success
--Last send at : 2018.10.30 04:57:34 154 ms
--Sent on interface : Vlan31
--Receive status : Success
--Last receive at : 2018.10.30 04:57:34 154 ms
--Received on interface : Vlan31
--Last update from peer : (0) seconds, (868) msec
vPC Keep-alive parameters
--Destination : 10.31.14.5
--Keepalive interval : 1000 msec
--Keepalive timeout : 5 seconds
--Keepalive hold timeout : 3 seconds
--Keepalive vrf : KeepAlive
--Keepalive udp port : 3200
--Keepalive tos : 192
10-30-2018 04:55 AM
Hello,
I can't see why this setup shouldn't work: https://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/mkt_ops_guides/513_n1_1/n5k_L3_w_vpc_5500platform.html#wp999392 ; there is a guide to configuring a VLAN interface as the keepalive interface. Also, even without an L3 module I can configure an active VLAN interface.
I can also bring up a vPC with the keepalive link connected when I set a bpdufilter on the interfaces of the keepalive links (not on the peer link). This works. With the default spanning-tree configuration, one of the ports becomes a BLK port.
Given that the switches act as one logical switch, it makes sense that a switch blocks one of the ports (the same as plugging a loop into a single device). So I need some parameter that says these ports do not belong to the same switch.
The only option I see is the vPC orphan port, but this does not work; however, I do not really know whether there are more steps to take.
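For what it's worth, orphan-port handling on the N5K is configured per interface; a sketch (note that this only suspends the port on the secondary switch when the peer link fails, so it would not change the STP blocking behaviour discussed above):

```
interface Ethernet1/30
  vpc orphan-port suspend
```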