LLDP showing config-mismatch error in leaf (ACI)

mohanadbarahim
Level 1

Hi Everyone, 

 

I need your support to find the root cause of and resolve this issue. We recently connected 8 bare-metal servers to our ACI fabric, and since then we have been receiving the following logs from LF223.

 

Log from LF223:

 

            10/22/2019 8:38:14 AM         172.16.59.18   Alert    LF223-BDC-03-03 %LOG_-4-SYSTEM_MSG [E4209105][config-mismatch][warning][sys/lldp/inst/if-[eth1/25]/adj-1] LLDP neighbor port vlan information is missing

            10/22/2019 8:38:11 AM         172.16.59.18   Alert    LF223-BDC-03-03 %LOG_-3-SYSTEM_MSG [E4209331][config-mismatch][major][sys/lldp/inst/if-[eth1/22]/adj-1] LLDP neighbor is bridge and its port vlan information is missing. If neighbor is running MST(802.1s) protocol, this could result in a layer2 topology with a loop

            10/22/2019 8:38:11 AM         172.16.59.18   Alert    LF223-BDC-03-03 %LOG_-3-SYSTEM_MSG [
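
These faults are raised because the leaf checks incoming LLDPDUs for an IEEE 802.1 Port VLAN ID (PVID) TLV and the servers are not sending one; hosts running an OS-side LLDP daemon typically don't include that TLV by default. To see exactly which TLVs the neighbor is (or isn't) advertising, the adjacency can be inspected from the leaf itself. A sketch, assuming standard NX-OS-style CLI access to the leaf (the interface is taken from the first fault above):

```
LF223-BDC-03-03# show lldp neighbors interface ethernet 1/25 detail
```

If the detailed output shows no VLAN ID for the neighbor, the warning is cosmetic as long as the host genuinely isn't acting as a bridge.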

 

Also, the neighbor's system description and LLDP neighbor port description keep changing every second, alternating between the following two values:

  • Broadcom Adv. Dual 25Gb Ethernet fw_version:AFW_20.8.100.0 / LLDP Neighbor port Description (NIC 1/10/25Gb)
  • SUSE Linux Enterprise Server 12 SP3 Linux 4.4.162-94.72-default #1 SMP Mon Nov 12 18:57:45 UTC 2018 (9de753f) x86_64/ LLDP Neighbor port Description (slave-0)
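
The flip-flopping descriptions suggest that two LLDP agents are transmitting on the same port: the Broadcom adapter's firmware agent (the entry identifying itself by firmware version) and the host's OS-level daemon (the SUSE entry with port slave-0). One way to check what the OS agent is actually sending and receiving, assuming the node runs lldpad and has lldptool installed (a sketch; interface names taken from this post):

```
# TLVs the local lldpad agent transmits on one bond member
admin@lthdwecs11:~> lldptool -t -i slave-0
# TLVs received from the switch on the same interface
admin@lthdwecs11:~> lldptool -t -n -i slave-0
```

If the NIC's firmware agent turns out to be the second talker, disabling it through the adapter's firmware/driver settings usually stops the neighbor entry from flapping.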

 

The bare-metal team troubleshot the issue and provided the following output, but it doesn't seem to be enough information to confirm whether the bare-metal servers have been configured correctly.

 

Quote from VENDOR E-mail: 

After consulting with colleagues, all three warnings are false LLDP interpretations and hence false warnings from your switch.


Those are nodes running SUSE and not acting as bridges, especially since we don’t have network separation configured on this cluster.

So there are no VLANs configured on those interfaces, slave-0 and slave-1. They are just bonded as a public interface.

Below is an example of the config from node 3:

N3:

admin@lthdwecs11:~> cat /etc/sysconfig/network/ifcfg-public
BONDING_MASTER=yes
BONDING_MODULE_OPTS="miimon=100 mode=4 xmit_hash_policy=layer3+4"
BONDING_SLAVE0=slave-0
BONDING_SLAVE1=slave-1
BOOTPROTO=static
IPADDR=10.127.19.161/22
LLADDR=B0:26:28:25:DF:D0
MTU=1500
STARTMODE=auto

admin@lthdwecs11:~> cat /proc/net/bonding/public
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable

Slave Interface: slave-0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: b0:26:28:25:df:d0
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0

Slave Interface: slave-1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: b0:26:28:25:df:d1
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
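
Nothing in the pasted bonding status looks wrong in itself: both members are up, and both report the same aggregator, so the LACP side is healthy. That second point is the one worth scripting as a sanity check, since slaves landing in different aggregators would mean only one link is actually forwarding. A minimal sketch, run against a sample that mirrors the output above:

```shell
# Sample mirroring the relevant lines of /proc/net/bonding/public;
# in practice, pipe the real file instead of this here-string.
sample='Slave Interface: slave-0
Aggregator ID: 1
Slave Interface: slave-1
Aggregator ID: 1'

# Count distinct aggregator IDs across all slaves.
ids=$(printf '%s\n' "$sample" | awk -F': ' '/^Aggregator ID/ {print $2}' | sort -u | wc -l)

if [ "$ids" -eq 1 ]; then
  echo "OK: all slaves share one aggregator"
else
  echo "WARNING: slaves are in different aggregators"
fi
```

A single distinct ID confirms the bond negotiated one LACP bundle, which matches what the switch side should see on its port-channel.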

1 Reply

ncfb-awarsame
Level 1

I have this problem also. After connecting Rubrik servers to a leaf switch, I started receiving these errors from the leaf switch. Does anybody know how to stop them?
