I bought 2 Cisco WAP581 wireless APs. I am having a problem with the wireless going offline after a day or two.
After about 24 hours, my Windows 10 laptop comes up and says there is no internet access. From the laptop I cannot ping the WAP581 over the wireless side. I also have an iPad which states there is no internet. The WAP581 has a DHCP IP address, and I can ping it from a wired PC on the same network. There are 2 SSIDs on the WAP581. If I try to connect to the other SSID, my laptop will not connect, stating there is no internet. I do have internet on the wired PC. If I reboot the WAP581 AP, everything starts working again. What is going on?
I am in the USA with version WAP581-A-K9 V01. I am running firmware 126.96.36.199.
I replaced Cisco WAP371 APs with these WAP581 units and never had this problem. The WAP581s are connected to an SG300-28 switch in L3 mode on a trunk port, with 2 VLANs defined to the WAP581 AP. The 2 WAP581 APs are set up in single point mode, using DHCP from the SG300-28 switch. I am using an SG300-10MPP switch, connected to the SG300-28 via a trunk port, to power the WAP581 APs.
I forgot to mention that I am running both bands with band steering on. If you need me to start a support case, send me contact info. If you want me to try something, I will be happy to, as this is my home network and I have complete access all the time. Let's get this problem fixed.
You should be able to do a TDR test of the cable to find out how long the runs are. Just a shot in the dark here, but it sounds like you have some long runs that may be suffering PoE loss.
Here's a calculator to find out how much you need to supply per port based on the cable run length, type of cable, and standard of 802.3af/at: http://poe-world.com/Calculator/
Here's a quick PDF that tells how to determine the power required from the switch: https://comtrol.com/elements/uploads/fckeditor/file/Calc_PoE_PowerLoss.pdf
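If you'd rather not use the online calculators, the arithmetic behind them is just DC I²R loss over the loop resistance of the run. Here's a minimal sketch; the function name, the ~0.0842 Ω/m conductor resistance (typical 24 AWG copper), and the 50 V / 2-pair delivery assumptions are mine, not from the calculators linked above.

```python
# Rough PoE line-loss estimate: DC I^2 * R loss dissipated in the cable.
# Assumptions (mine, for illustration): 24 AWG copper at ~0.0842 ohm/m per
# conductor, ~50 V at the PSE, power delivered over 2 pairs, DC loss only.

def poe_line_loss_w(pd_power_w, run_m, v_source=50.0, ohm_per_m=0.0842, pairs=2):
    """Estimate watts dissipated in the cable for a given PD draw."""
    # Loop resistance of one pair = out + return conductor; pairs share current.
    r_loop = 2 * run_m * ohm_per_m / pairs
    i = pd_power_w / v_source            # approximate line current
    return i * i * r_loop

# Example: a class 4 PD pulling 25.5 W over a 120 ft (36.6 m) run.
loss = poe_line_loss_w(25.5, 36.6)
```

At full class 4 draw this comes out to well under a watt on a 120 ft run, which matches the intuition that short residential runs rarely starve a PD; long runs at full load are where it starts to matter.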
Look at the class the WAPs fall under. That'll tell you more about their power limit than Cisco's own documentation does. When you do a "show power inline", it's all the way at the right:
Port based power-limit mode
Inrush Test: Enable
Legacy Mode: Enable

Unit  Power  Nominal Power  Consumed Power  Usage Threshold  Traps    Temp (C)
----  -----  -------------  --------------  ---------------  -------  --------
1     On     62 Watts       25 Watts (40%)  95               Disable  51

Port  Powered Device  State  Status  Priority  Class
----  --------------  -----  ------  --------  ------
gi1                   Auto   On      critical  class4
gi2                   Auto   On      critical  class4
gi3                   Never  Off     low       class4
gi4                   Never  Off     low       class4
gi5                   Never  Off     low       class4
gi6                   Never  Off     low       class4
gi7                   Never  Off     low       class4
gi8                   Never  Off     low       class4
Look at this link: https://en.wikipedia.org/wiki/Power_over_Ethernet#Powering_devices
You'll see class4 devices can pull up to 25.5W and the PSE can supply 30W and that keeps them in spec. I would suspect that WAP581s are class4 as well, and with 7 APs, you should budget 7*30, or 210W at the supply. This accounts for line loss due to resistance.
Long story short, don't believe manufacturer's specs. Find out how they classify the device, then find out how much power (per spec) they can draw. Budget off of that. I've had Cisco phones that were supposed to be pulling 6.3W. Take a wild guess how much they *actually* pulled as a class3 device. Hint: 15.4W.
To beat a dead horse into a bloody pulp: always budget for the max amount of power your class can draw, never what a manufacturer says.
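The budgeting rule above can be sketched in a few lines. The class-to-watts table is the PSE-side (switch-side) maximum from IEEE 802.3af/at; the function name is mine.

```python
# PSE-side power limits per PoE class (IEEE 802.3af/at), in watts.
# Budget off the class, never off the manufacturer's datasheet number.
PSE_W_BY_CLASS = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4, 4: 30.0}

def poe_budget_w(device_classes):
    """Total power the supply must budget for a list of PD classes."""
    return sum(PSE_W_BY_CLASS[c] for c in device_classes)

# Seven class 4 APs, as in the example above: 7 * 30 = 210 W.
budget = poe_budget_w([4] * 7)  # -> 210.0
```

The same table also shows why a "6.3 W" phone that classifies as class 3 must still be budgeted at 15.4 W.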
Thanks for the reply, but this is just a Cisco problem that needs to be fixed. I already went over my complete network with Cisco and they found no problems at all with my network infrastructure. My longest run is exactly 120 ft; the other runs are no longer than 100 ft each. My server room is in a centrally located, air-conditioned room in the basement that has straight-shot conduit going to each floor above.
I calculated the line loss on my longest run with the link you provided. Line loss for that device is 0.04 watts, which is negligible. Like I said previously, to make sure my router was not causing any of the issues supplying the power, Cisco sent me an SG350-52MP 52-port Gigabit PoE managed switch to plug all the WAPs into. "MP" stands for max power, and I have the WAPs plugged directly into those ports. With this kind of power, running all the WAPs on it is nothing.
Unfortunately, after my last response, one of the WAPs went down again, proving it to be a WAP581 issue. I will update this thread further in my next reply.
My SA did get back to me about the power reading I pulled for him, and he said:
"I been out of the office but yes I did get it and what I have seen doesn’t worry me. It’s actually reserving the power that’s why it’s show 25 I believe I saw . It’s not affecting the APs "
Sorry for another quick follow up. Looks like the link on the specs I sent for the WAP 581 got messed up.
Here it is:
I stumbled on this post while searching for problems with the WAP581.
I have a customer with the same symptoms: frequent reboots or lost coverage.
Today, 05/05/2021, Cisco released new firmware, 188.8.131.52.
Has anyone tested this one, perhaps as a beta or similar?
At this time the release notes are not available...
The release notes for 184.108.40.206 are now available. There is good news and bad news:
The good news: Cisco recognizes this problem as a known issue under CSCvr13623.
The bad news: The problem is not resolved in this release.
Looks like we'll have to wait for the next release.
After about 2 weeks of uptime with the WAPs on the Cisco-supplied switch, one of the WAPs crashed again with the same result. I rebooted the WAP, grabbed the files requested by Cisco, and sent them. There is no question now that the WAP581 is the culprit. Everything has been ruled out except the WAP581 itself.
I sent my SA an email asking for an update for all of us, but he said:
"At this time the files are being investigated, I will update you once I have information to share with you." I'll respond back once I receive an update.
In the meantime, I see that they released a new FW image and there is some concern that they have closed our bug.
As I've mentioned before, I'm currently on a special debug FW labeled "220.127.116.11-mem." The FW recently released to the public is "18.104.22.168." I would imagine that this FW is similar to mine, without the debug support and maybe with additional bug fixes. My SA said they were regression testing my build with the intention to release it, because they did fix some high-priority bugs. I would recommend that build even though it doesn't fix our issue. I've definitely noticed a performance increase, which I've mentioned in my previous posts.
The closed bug everyone is concerned about can be found here: https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvr13623. Yes, the status has been set to "Terminated," but it seems there are duplicate issues created for this. Unfortunately, they closed the bug referenced in the release notes without saying why, even though it appears to be just a duplicate. I think they closed that one because mine is the one currently being worked on, with all the notes and files attached to it.
The bug they have open and related to my case is https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvv89996. I recommend everyone go to it while logged in and add it to your notifications. Better yet, open a support case on it to give us some more ammo. You can do all that right through the link.
Hope this update helps and eliminates any concern. There's no question that we're all ready for the fix. Cisco now knows the only possible issue is with the WAP 581 itself. All other equipment has been ruled out and eliminated. I'll send an update when I hear back.
Alright, I've stayed quiet long enough. I've been working on this issue for two years and have had a team of Cisco engineers involved for the last year. All these special firmware releases people keep talking about, I've been running them for a while. I even gave Cisco full access to one of my sites and let them do their own data logging and hot-fixing with completely unrestricted access to the site's entire network. The bottom line: we have gotten nowhere.

The WAP581 isn't discontinued, but it no longer shows up in their product catalog. In fact, the entire WAPxxx series is gone, which is interesting because I have some other WAP371 units (a previous generation) that are experiencing the same exact failure as the WAP581s.

So with the WAPxxx series gone, I tried out the new series, specifically the AC240 model. What a piece of junk! The physical build quality is terrible, lightweight plastic, and the specs aren't nearly as good as the 581's. Additionally, this is a product that has been out for years with several firmware updates, and still the pages and prompts have broken English text. I've seen better English inside fortune cookies, yet Cisco didn't seem aware of the issue. These are clearly white-box imports that Cisco throws a sticker on.

I'm done with Cisco; they aren't what they once were. I have lost several clients and thousands of dollars in revenue because Cisco dropped the ball. This is a hardware issue, and unfortunately the only hardware they have to replace it is made-in-China junk (the made-in-China sticker is almost the same size as the Cisco logo on the AC240, haha).
I am new to this thread, but I've been watching it for quite some time hoping for a solution. I noticed the messages below in the Access Point system log.
|2021-May-07, 20:23:15||info||syslog||error occured for query on path cluster.stations-aggregated.session (function: data_service, file: /olddata/sigmund.lou/wap581/ap/broadcom/src/web-ui-framework/data_service.c, line: 481)|
|2021-May-07, 20:23:15||info||syslog||activator.stations-aggregated.session, timed out. (function: activator_execute_buffer, file: /olddata/sigmund.lou/wap581/ap/broadcom/src/web-ui-services/activator_query.c, line: 131)|
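For anyone digging through a lot of these, here's a small Python sketch that splits the pipe-delimited log lines above into fields. The field layout (timestamp, severity, facility, message) is my inference from the two samples, not from any Cisco documentation.

```python
import re

# Parse the WAP581 pipe-delimited system-log lines quoted above.
# Assumed layout (inferred from samples): |time||severity||facility||message|
LOG_RE = re.compile(
    r"\|(?P<time>[^|]+)\|\|(?P<severity>[^|]+)\|\|"
    r"(?P<facility>[^|]+)\|\|(?P<message>.+)\|$"
)

def parse_wap_log(line):
    """Return a dict of log fields, or None if the line doesn't match."""
    m = LOG_RE.match(line.strip())
    return m.groupdict() if m else None

sample = ("|2021-May-07, 20:23:15||info||syslog||error occured for query on "
          "path cluster.stations-aggregated.session (function: data_service, "
          "line: 481)|")
rec = parse_wap_log(sample)
```

This makes it easy to grep out just the `stations-aggregated` errors and correlate their timestamps with the times the APs dropped off the wireless side.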
I started looking at the Ethernet packets and noticed the message "(120 ICMP Destination unreachable (Port unreachable)" associated with both Access Points. On further investigation, the source port was 55940 and the destination port was 137. I created a port forwarding rule between the two ports. Both Access Points have now been up for 3 days, and there are no more "Destination unreachable" messages in the packets. If the problem reoccurs, I will repost. I hope this helps.