07-28-2023 07:32 AM - edited 10-19-2023 10:51 AM
Hello all,
BACKGROUND: We have a very odd issue that has persisted for months, through many support emails with Cisco and through several IOS upgrades (17.4 through 17.9.3). We have a standard RP+RMI setup between two 9800-L chassis for redundancy. Chassis 1's trunk/management uplink goes to a newer 9300L 48-port PoE switch, and chassis 2's goes to a second 9300L 48-port PoE switch, to isolate each WLC. Both chassis run side by side and are connected back-to-back via Ethernet on the RP ports.
PROBLEM: At random times, chassis 1 will decide it has lost its active gateway and fail over to chassis 2. Eventually chassis 2 will likewise decide it has lost the active gateway and fail back over to chassis 1! These events are not correlated and happen independently in time.
NOTED:
Subject: RESET: Node (RMI) is Up. Node was down for 8 minutes.
In theory the RMI switchover works, but this should not be happening randomly. It is as if the RP ports were flapping. Again, this is random - it may go two days, or a month, or happen every other day.
We added the WLC to our SolarWinds SEM and captured more logging information (CSV attached). I also tried running a port capture on our 9300 switch, but this generates too much data for our 1 TB external drive and the session stops.
Is anyone having issues like we are with this?
07-28-2023 10:17 AM
- Run a checkup of the (primary) 9800 WLC configuration with the CLI command show tech wireless ; feed the output into
https://cway.cisco.com/wireless-config-analyzer/
For the 9800-L controllers, also make sure that the IOS XE Hardware Programmable Devices (FPGA) package is up to date (download from https://software.cisco.com/download/home/286323158/type )
During a debugging period you may increase the chassis peer timeout, as mentioned in https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/213915-configure-catalyst-9800-wireless-control.html#toc-hId-307825303
You could use the maximum value as a test to see whether this is related at all (or not).
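As a minimal sketch, the peer timeout is derived from the keep-alive timer and retries (configuration mode; exact ranges and units vary by release, so verify with "?" on your version - the values below are illustrative only):
conf t
 chassis redundancy keep-alive timer 10
 chassis redundancy keep-alive retries 10
 end
show chassis ha-status active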
Appendix: a number of useful and/or related commands (not all of them may be applicable)
show redundancy | i ptime|Location|Current Software state|Switchovers
show chassis
show chassis detail
show chassis ha-status local
show chassis ha-status active
show chassis ha-status standby
show chassis rmi
show redundancy
show redundancy history
show redundancy switchover history
show tech wireless redundancy
show redundancy states
show logging process stack_mgr internal to-file bootflash:
show platform hardware slot R0 ha_port interface stats
show platform hardware slot R0 ha_port sfp idprom (shows details of the SFP in the RP port)
test wireless redundancy rping
test wireless redundancy packetdump start
test wireless redundancy packetdump start filter port <0-65535>
test wireless redundancy packetdump stop
show platform software stack-mgr chassis active R0 peer-timeout
show platform software stack-mgr chassis standby R0 peer-timeout
show platform software stack-mgr chassis active R0 sdp-counters
show platform software stack-mgr chassis standby R0 sdp-counters
show redundancy config-sync failures {bem|mcl|prc}
show redundancy config-sync historic mcl
show redundancy config-sync ignored failures historic mcl
M.
07-28-2023 11:54 AM - edited 10-19-2023 10:52 AM
removed attachment
07-28-2023 01:20 PM - edited 07-28-2023 01:27 PM
How are the 9300 switches configured? Are they stacked or standalone? What is the default gateway - an SVI on the 9300, a separate router, etc.? Do they share a broadcast domain in the management VLAN? Have you tried a continuous ping from another device connected to the same switch, in the same VLAN as the WLC management VLAN? (PingPlotter is a good tool for visualizing packet loss, latency, and jitter over time.)
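A continuous ping can also be left running from the switch CLI toward the gateway between failover events; a sketch, where the address and SVI are hypothetical placeholders:
ping 10.10.10.1 source Vlan10 repeat 100000 timeout 1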
07-28-2023 02:09 PM - edited 10-19-2023 10:53 AM
How are the 9300 switches configured?
Both switches are 9300L 48-port PoE+, standalone, acting as L3 core switches. We placed one WLC on each core so that if a switch went down, the other WLC would remain reachable.
What is the default gateway - an SVI on the 9300, a separate router, etc.?
The default gateway points to the L3 switch through the management VLAN.
Do they share a broadcast domain in the management VLAN?
Both ports are trunked with no access restrictions or ACLs.
Have you tried a continuous ping from another device connected to the same switch, in the same VLAN as the WLC management VLAN?
Yes. We have also monitored logging through an event management system and on the devices, with no indication of a port going down or loss of connectivity.
07-29-2023 06:35 AM
@frederick.mercado ours are connected to separate ASR9Ks and we have not seen this on the 9800, although we do see it occasionally with the 8540 (TAC never solved it, although there have been some bug fixes addressing these issues in recent releases).
But it is interesting that you're connected to 9300 switches - so you might want to look at this issue, which we have encountered, where random client traffic was getting silently dropped on 9300 switches:
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvt00292
The switch will randomly and silently drop traffic on specific ports/protocols/IPs until it is reloaded. Only when you check the stats mentioned in the bug description will you discover it is doing that. There is no fix for the issue yet - the workaround is to reload the switch. As you can see, there are 69 cases attached to this bug already, so plenty of customers are encountering this problem.
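The counters the bug refers to can be checked from the switch CLI - this is the same command used further down in this thread:
show platform hardware fed switch active fwd-asic drops exceptions | include IGR_MISC_FATAL_ERROR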
I presume you have made sure all firmware is up to date:
https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_upgrade_fpga_c9800.html
Have you been through https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/220277-configure-high-availability-sso-on-catal.html ?
Have you tried increasing the gateway failover interval?
https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/17-9/config-guide/b_wl_17_9_cg/m_vewlc_high_availability.html#task_ED7B5FE608F841E582644C93C66D26F5
The doc says the max is 12 seconds, but https://www.ciscolive.com/c/dam/r/ciscolive/global-event/docs/2022/pdf/BRKEWN-2846.pdf says it's now 20 seconds - yet I just checked on the CLI and it's still max 12 on 17.9.3! Note that ARP and ping must both fail before that timer kicks in.
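For reference, a sketch of the gateway-failover knobs in configuration mode (per the above, the interval tops out at 12 seconds on 17.9.3):
conf t
 management gateway-failover enable
 management gateway-failover interval 12
 end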
As this monitoring is a control-plane function it relies on the WLC CPU - have you checked how the CPU looks on your WLCs? The 9800 is particularly bad at alerting (nothing at all) when a CPU core is getting overloaded. If the stack_mgr or rif_mgr process happens to be on a core at 100%, it is going to drop packets.
Start with "show process cpu platform sorted | incl wncd" and "show process cpu platform sorted"
If you haven't followed the best-practice guide with tags, you might find every AP on a single wncd, which can cause 100% CPU on the core that wncd is running on.
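AP distribution across wncd instances can be checked per instance; a sketch (instance 0 here is just an example):
show wireless loadbalance ap affinity wncd 0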
We actually had it caused by using web-auth with HTTPS redirection. It turns out the 9800 is really inefficient at handling HTTPS redirection (a lot worse than the 8540), so we had to disable it. Again, this causes silent packet drops, resulting in web-auth failures when client redirections fail (even for HTTP-only redirections).
07-31-2023 07:12 AM - edited 10-19-2023 10:54 AM
Hi! Thanks for all the help. I did check both switches... and it would appear there are no flapping ports.
07-31-2023 08:03 AM - edited 10-19-2023 10:56 AM
I also saw your previous post here: WLC 9800 restart and CRC and input errors on Cisco switches - Cisco Community
And thought to check the switches once more... I did see some elevated CRC and input errors; however, the counters have not been cleared in a year or more. I checked CPU and cabling for both, and that looks fine.
lslswmi-mdf-core01#show cable-diagnos tdr interface gi1/0/25
TDR test last run on: July 31 11:02:55
Interface Speed Local pair Pair length Remote pair Pair status
--------- ----- ---------- ------------------ ----------- --------------------
Gi1/0/25 1000M Pair A 14 +/- 10 meters Pair A Normal
Pair B 14 +/- 10 meters Pair B Normal
Pair C 14 +/- 10 meters Pair C Normal
Pair D 14 +/- 10 meters Pair D Normal
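It may be worth clearing the counters and re-checking after a day or two so the error rate is meaningful - assuming Gi1/0/25 is the WLC uplink, as in the TDR test above:
clear counters GigabitEthernet1/0/25
show interfaces GigabitEthernet1/0/25 | include CRC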
07-31-2023 08:06 AM
Forgot to post CPU from both switches:
lslswmi-mdf-core01#show process cpu platform sorted
CPU utilization for five seconds: 7%, one minute: 8%, five minutes: 7%
Core 0: CPU utilization for five seconds: 9%, one minute: 7%, five minutes: 7%
Core 1: CPU utilization for five seconds: 1%, one minute: 7%, five minutes: 7%
Core 2: CPU utilization for five seconds: 9%, one minute: 8%, five minutes: 8%
Core 3: CPU utilization for five seconds: 9%, one minute: 8%, five minutes: 8%
Pid PPid 5Sec 1Min 5Min Status Size Name
--------------------------------------------------------------------------------
12887 12024 14% 15% 14% S 253644 fed main event
9443 9037 5% 6% 5% S 844632 linux_iosd-imag
11569 11281 3% 3% 3% S 80240 cmand
9822 9430 2% 2% 2% S 54828 sif_mgr
 <output truncated - all remaining processes, including stack_mgr and nif_mgr, at 0% CPU>
lslswmi-mdf-core02#show process cpu platform sorted
CPU utilization for five seconds: 13%, one minute: 12%, five minutes: 13%
Core 0: CPU utilization for five seconds: 12%, one minute: 11%, five minutes: 13%
Core 1: CPU utilization for five seconds: 13%, one minute: 12%, five minutes: 12%
Core 2: CPU utilization for five seconds: 15%, one minute: 14%, five minutes: 13%
Core 3: CPU utilization for five seconds: 11%, one minute: 13%, five minutes: 13%
Pid PPid 5Sec 1Min 5Min Status Size Name
--------------------------------------------------------------------------------
14236 13286 20% 20% 20% S 234192 fed main event
9702 9104 20% 20% 20% S 3383544 linux_iosd-imag
12538 11903 3% 3% 3% S 82080 cmand
10223 9586 2% 2% 2% S 54568 sif_mgr
2275 2 1% 1% 1% S 0 lsmpi-xmit
 <output truncated - all remaining processes, including stack_mgr and nif_mgr, at 0% CPU>
07-31-2023 10:53 AM
> Firmware looks good from what I can see:
But you only showed ROMMON - did you check the PHY versions as per the links Marce and I provided?
https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_upgrade_fpga_c9800.html#id_132215
But ...
0 0 IGR_MISC_FATAL_ERROR 0 3257636 3257636
0 1 IGR_MISC_FATAL_ERROR 0 1 1
0 0 IGR_MISC_FATAL_ERROR 0 1077 1077
0 1 IGR_MISC_FATAL_ERROR 0 3 3
I'd say that first switch is being BADLY affected by CSCvt00292 on one ASIC!
You might also want to discuss this with TAC, but I would plan a reload of that switch ASAP - and probably the other one too - and monitor the counters again after the reload.
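A reload can be scheduled for a quiet window from the switch CLI; a sketch (the time is just an example):
reload at 02:00
show reload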
07-31-2023 11:17 AM - edited 07-31-2023 11:29 AM
lslromi-wlc-01#show platform hardware chassis standby qfp datapath pmd ifdev | i FW
FW Version : 0x80000757
FW MDIO : 9.1.2 ID: 43503 vers: 1385
FW Version : 0x80000757
FW MDIO : 9.1.2 ID: 43503 vers: 1385
FW Version : 0x80000756
FW MDIO : 9.1.2 ID: 43503 vers: 1385
FW Version : 0x80000756
FW MDIO : 9.1.2 ID: 43503 vers: 1385
FW Version : 3.1.76
FW Version : 3.1.76
lslromi-wlc-01#
lslromi-wlc-01#show platform hardware chassis active qfp datapath pmd ifdev | i FW
FW Version : 0x80000757
FW MDIO : 9.1.2 ID: 43503 vers: 1385
FW Version : 0x80000757
FW MDIO : 9.1.2 ID: 43503 vers: 1385
FW Version : 0x80000756
FW MDIO : 9.1.2 ID: 43503 vers: 1385
FW Version : 0x80000756
FW MDIO : 9.1.2 ID: 43503 vers: 1385
FW Version : 3.1.76
FW Version : 3.1.76
OK - worth trying out for the ASIC. We will try reloading the cores. We did have this issue with the WLC before utilizing SSO/HA, which is why we felt the need to ensure redundancy: the WLC would become unresponsive to web and SSH, with no core dumps or logging saved, and would still show green lights on the face. We could only get it back online by rebooting the chassis... Fast forward to now - we have them bouncing due to random active-GW failures.
08-01-2023 08:37 AM - edited 08-01-2023 08:39 AM
9.1.2 and 1385 look like they correspond to the 17.03.02 update as shown in the doc example. But there is a newer update, 17.11.1 ( https://software.cisco.com/download/home/286323158/type/283425232/release/17.11.1 ), which presumably has newer revision numbers (I don't have a 9800-L so can't test) and another fix, so I suggest you make sure you have that update too:
Resolved caveat: C9800-L CRC error observed on bay-0 ports bundled in port-channel.
08-01-2023 06:32 AM
So far, after rebooting the switches in the evening, the numbers look OK and there are no indications of WLC events, but time will tell:
lslswmi-mdf-core01#show platform hardware fed switch active fwd-asic drops exceptions | inc IGR_MISC_FATAL_ERROR
0 0 IGR_MISC_FATAL_ERROR 0 0 0
0 1 IGR_MISC_FATAL_ERROR 0 0 0
lslswmi-mdf-core02#show platform hardware fed switch active fwd-asic drops exceptions | inc IGR_MISC_FATAL_ERROR
0 0 IGR_MISC_FATAL_ERROR 97 97 0
0 1 IGR_MISC_FATAL_ERROR 9 9 0