BPDU Guard, err-disabled and IP Phone


A computer is connected to a Cisco IP Phone. Once in a while, the port gets err-disabled and we receive a user complaint. Switch logs show the following:

Dec 26 18:10:31.872 CET: %SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU on port Fa1/0/32 with BPDU Guard enabled. Disabling port.

Dec 26 18:10:31.881 CET: %PM-4-ERR_DISABLE: bpduguard error detected on Fa1/0/32, putting Fa1/0/32 in err-disable state
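For reference, you can list which ports are currently err-disabled and why, and bring a port back manually, using standard IOS commands (the interface name here is taken from the log above):

show interfaces status err-disabled
!
interface FastEthernet1/0/32
 shutdown
 no shutdown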

Here's the interface configuration:

interface FastEthernet1/0/32
 description ---UserX---
 switchport access vlan 13
 switchport mode access
 switchport voice vlan 2222
 switchport port-security maximum 2
 switchport port-security
 switchport port-security violation restrict
 switchport port-security mac-address sticky
 switchport port-security mac-address sticky 0022.64c0.259b
 switchport port-security mac-address sticky e804.6212.5e19 vlan voice
 ip access-group ACL_VLAN13 in
 priority-queue out
 mls qos trust cos
 spanning-tree portfast
 spanning-tree bpduguard enable

Any ideas would be appreciated.


Hall of Fame Cisco Employee

Hello Wass,

A few questions:

  1. Is it by any means possible that the user has connected a switch, a wireless access point or any other device to the IP phone's PC port (or in place of the IP phone itself) that emits BPDUs?
  2. Is the IP phone powered by the switch (PoE), or is it using a separate power injector? I am thinking of issues related to the older Cisco-proprietary method of detecting a PoE device, where the device essentially reflects the received signal back to the switch. Under certain circumstances this could cause even BPDUs to be looped back to the port, tripping the BPDU Guard protection.
  3. What is the type of the IP phone and the switch?

Best regards,



Hi Peter,

1. It is not possible for the user to plug in any other device, since port security is already configured. Besides, it's an office user.

2. IP phone is powered through PoE:

Interface Admin  Oper       Power   Device              Class Max
--------- ------ ---------- ------- ------------------- ----- ----
Fa1/0/32  auto   on         5.0     IP Phone 7911       2     15.4


3.  Switch is Catalyst 3750V2 with IOS 122-58.SE1.bin. IP Phone is 7911.

Hall of Fame Cisco Employee


Thank you for your answer.

I wonder: does this issue occur also on other ports of the same switch, especially with the same phone type connected to them? Is there any other indication logged that pertains to the same port?

Also, is it possible to reproduce this problem? Does it happen after the phone has been up and running without interruption for some time, or does this problem occur in a timeframe of a few seconds after the phone is plugged into the switch?

Have you perhaps tried replacing the patch cable or the phone to see if the problem persists?

Best regards,



Hi Peter,

The issue occurred many times for only this 7911 on switchA, and also a couple of times for another 7911 on another switch (switchB). Otherwise, there are no issues of this kind on other switch interfaces.

I don't think it is possible to reproduce the problem, since it happens without any intervention. The IP phones have been plugged into the switch for a while now, and the end users did not change any settings. I cannot tell exactly when it happens; it is arbitrary.

Could it be a patch cable problem, as you mentioned? I thought that if it were a cable problem, no connection would be possible at all.

Hall of Fame Cisco Employee

Hello Wass,

I admit that the possibility of cabling causing this issue is extremely remote, but not impossible. I have seen miswired cables that tripped loopback protection, behaved like unidirectional links, and caused all kinds of weird behavior.

In any case, we can try another experiment: configure the following command at the global configuration level:

spanning-tree portfast bpdufilter default

This configuration applies a "soft" BPDU Filter to all PortFast-enabled ports. The switch will send 11 BPDUs out of each PortFast-enabled port after it transitions from down to up, and will then stop sending BPDUs. BPDUs will still be received and processed normally, and if a BPDU is received on such a port, the BPDU Filter is disabled on that port. The operation of BPDU Guard is not affected; it will continue to behave exactly as it does now.

What I am pursuing with this change is to determine whether BPDU Guard is being tripped by a BPDU that the switch itself sent and that somehow gets looped back to it, or whether the users really are doing something they should not, either by connecting disallowed devices or by running some strange software on their PCs. The soft BPDU Filter will cause the switch port to stop sending BPDUs roughly 20 seconds after the down->up transition. If BPDU Guard then deactivates the port at some later time, it will be obvious that the received BPDU must have originated from some device other than the switch port itself.
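If you go this route, one way to confirm the behavior (a suggestion on top of the above, not from the original post) is to watch the per-port BPDU counters:

show spanning-tree interface FastEthernet1/0/32 detail

The end of that output includes a "BPDU: sent X, received Y" line. Once the soft BPDU Filter kicks in, the "sent" counter should stop incrementing, and any increment of the "received" counter just before the port is err-disabled points to an external BPDU source.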

What do you think?

Best regards,



I have also faced the same problem, but was able to fix it with:

interface ethx/y

spanning-tree bpduguard disable


no shut

That's not really a fix; it's a bit of a vulnerability if you're using PortFast, unless you're completely happy that nothing capable of transmitting BPDUs will ever be able to access that port.
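If the goal is only to avoid bouncing the port by hand, a less drastic option than disabling BPDU Guard is to leave it enabled and let the switch re-enable the port automatically after a timeout, using the standard IOS errdisable recovery feature (the 300-second interval below is just an example value):

errdisable recovery cause bpduguard
errdisable recovery interval 300

The port still goes down when a BPDU is received, but it comes back on its own, and the log messages keep pointing you at the offending port.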


I apologize for bringing back this old post, but have you found a root cause for this?

I'm facing a very similar issue.



This thread is very interesting to me. I'm having a similar problem with a 3560E running the same version of code (122-58.SE1.bin).

This is in a student dormitory setting, and the difference is that the end user has a small Netgear wifi router, which Netgear says is not capable of running STP. The student uses the wifi router to connect her laptop, smartphone, and DVD player. I have three other ports on the same switch that are now in err-disabled state because they received BPDUs. I was having a problem with MAC flapping, so I opened a ticket with Cisco TAC (the MAC that was flapping belonged to the VLAN interface on the core). The engineer configured PortFast and BPDU Guard globally on all switches to prevent any further TCNs. About a month later, the err-disabling started. Thinking there might be a problem with that particular switch, I tried moving the connection to a different switch in the same closet, and that port err-disabled too. I have an open TAC case now and am waiting for the engineer to get back to me. We did a packet capture two days ago (via a monitor session); I didn't see any STP packets in the capture.

I just tried disabling PoE power to the port and shut / no shut. It err-disabled again immediately.

Just wondering if it might be a bug in that version of code.


I am experiencing the same issue: a PC plugged into an IP phone, which is then plugged into the switch.

Mar 23 13:24:45.761: %ILPOWER-7-DETECT: Interface Fa0/37: Power Device detected:
Mar 23 13:24:46.482: %ILPOWER-5-POWER_GRANTED: Interface Fa0/37: Power granted
Mar 23 13:24:51.784: %LINK-3-UPDOWN: Interface FastEthernet0/37, changed state to up
Mar 23 13:24:52.790: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/37, changed state to up
Mar 23 13:25:06.305: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/37, changed state to down
Mar 23 13:25:07.051: %SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU on port FastEthernet0/37 with BPDU Guard enabled. Disabling port.
Mar 23 13:25:07.051: %PM-4-ERR_DISABLE: bpduguard error detected on Fa0/37, putting Fa0/37 in err-disable state

As mentioned, this was occurring randomly for a few minutes, and now it is fine. There was definitely no switch or access point on that port, just the IP phone, which then led to a Dell PC. I replaced the physical cabling during this time and the port still went err-disabled:

Mar 23 13:28:07.055: %PM-4-ERR_RECOVER: Attempting to recover from bpduguard err-disable state on Fa0/37
Mar 23 13:28:07.072: %PM-4-ERR_RECOVER: Attempting to recover from bpduguard err-disable state on Fa0/3
Mar 23 13:28:08.741: %ILPOWER-7-DETECT: Interface Fa0/37: Power Device detected:
Mar 23 13:28:09.202: %ILPOWER-5-POWER_GRANTED: Interface Fa0/37: Power granted
Mar 23 13:28:14.797: %LINK-3-UPDOWN: Interface FastEthernet0/37, changed state to up
Mar 23 13:28:15.804: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/37, changed state to up

I like the idea of using BPDUFILTER to confirm this.

Cisco Employee

Hi Wassim,

What platform and IOS version is the switch running?

I know this is just a workaround, but have you tried disabling err-disable detection on the interface via the no errdisable detect cause x command?

Are you detecting collisions on the interface?

Best Regards!

