PXE boot fails on WS-2960X-48LPD-L

fredrikringberg
Level 1

We have 2 identical stacks. In each we have 2 WS-2960X-48LPD-L and 1 WS-C2960X-48TS-L. Both working in layer 2.

We have PXE boot Lenovo computers in all 6 switches until the newest Lenovo T460/X260. With this PXE boot fails in 2960X-48LPD-L but succeeds in C2960X-48TS-L. We have identical port configuration.

Any ideas?

/Fredrik

15 Replies

pwwiddicombe
Level 4

What is farming out DHCP addresses; and is it one source across all switches?

Also, what kind of error message do you get on PXE boot - can't find the server, no boot image found, etc.?

We have 3 ip helper-addresses on the L3 switch above the L2 switches.

Two of the helper addresses point to DHCP servers and the third to the PXE server.
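For reference, those three helper addresses would sit on the L3 SVI for the client VLAN, something like this (VLAN 628 taken from the access-port config further down the thread; all three server addresses here are made-up placeholders):

!
interface Vlan628
 ip helper-address 10.10.10.11
 ip helper-address 10.10.10.12
 ip helper-address 10.10.10.20
!

The first two would be the DHCP servers and the last one the PXE server.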

Error message: "Check cable" when pressing F12 on the computer.

But only when connected to the 2960X-48LPD-L. And with older Lenovo computers, PXE booting works on all switches.

It could be a port negotiation issue, and the newer computers bring up the physical interface and expect an answer much faster.

On a test port on the switch, make sure you configure:

spanning-tree portfast

switchport mode access

There are about 4 negotiations taking place on port connections - speed, spanning tree, trunking, PoE - and I have a suspicion there might be one other...  Spanning tree in particular can take up to 40 seconds before passing traffic; and in that time the connected laptop still doesn't see a response and never gets connectivity.  Once the OS is built, it will retry; but PXE boot isn't so forgiving.
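One quick way to confirm that portfast is actually active on a test port (Gi1/0/24 is just an example interface here) is:

show spanning-tree interface gigabitEthernet 1/0/24 portfast

which should report portfast as enabled on the port when it is up. If it shows disabled, the port is sitting through the full listening/learning delay.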

The other way to do a quick test for this is to put a mini-hub in place.  That way it won't do most of the negotiations with the workstation on PC startup (only the speed part, which is fast).  Beware that this WILL disable the switchport if the mini-hub is actually a switch with spanning-tree smarts AND the Cisco port has BPDU guard enabled on that port.

We have exactly the same configuration on all user ports in these six switches; both "spanning-tree portfast" and "switchport mode access" are enabled.

Silly question, but...

Is there a possibility that the new computers are gig-capable, but your patch cords are only Cat 5 and don't have all 8 pins wired?  100 Mb connections only needed a lower grade of cable and 4 pins, and I have seen flaky gig connections over longer runs of Cat 5.

The other thing is to test with a hub, as I suggested - a 100 meg one would eliminate the potential Gig failure.
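If you want to rule out a flaky gig negotiation without a hub, you could also pin a test port down to 100 full - test only, and revert with "speed auto" and "duplex auto" afterwards:

!
int gig x/x
 speed 100
 duplex full
!

If PXE then works at a forced 100 Mb, the problem is in the gigabit link negotiation rather than the upper-layer services.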

Do the new workstations work normally on the new switches AFTER you have built them using one of the older switches?

All patch cables are Cat 6.

We tested with a hub between the switch and the computer - PXE worked! What is the problem then?

Hi Fred,

Interesting discussion.

Sounds like the interfaces on the switch don't come up quickly enough for the PCs.

Can you make sure you have configured the PC-facing interfaces as either:

!

interface gig X/X

switchport host

!

or

!

int gig x/x

switchport mode access

spanning-tree portfast

!

Regards

Alex

Regards, Alex. Please rate useful posts.

We have:

int gig x/x

switchport mode access

spanning-tree portfast

/Fredrik

Hi Fredrik,

On a test port, can you try disabling PoE:

!

int gig x/x

shut
power inline never
no shut
!

Then re-test PXE boot on that port

Regards
Alex

Regards, Alex. Please rate useful posts.

Disabling PoE on the port let PXE boot work!?

Interesting.  Does this PC support trickle-charging using PoE, but there was insufficient power budget per port to allow it to continue?  If that were the case, though, I would have thought there were error messages in the switch log to indicate it.

If you're curious, take off the "no logging event link-status" and see if that logs anything.  That may also explain why the intermediate hub worked, as it won't be supplying PoE.
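Since the port config in this thread carries "no logging event link-status", turning logging back on for a test port would just be (again, x/x stands in for the real port):

!
int gig x/x
 logging event link-status
!

Then watch the log ("show logging") while the workstation attempts to PXE boot, to see whether the link flaps or comes up late.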

I have experienced exactly the same problem, and I have logged a TAC case about it (no solution though: "it is a hardware problem"). I have replicated the situation in the lab.

The fact is: a 2960X switch is too slow for fast services. We have had situations where Windows (with autologon) or PXE booted up BEFORE the interface came online (or more specifically, before the interface started forwarding packets, which is another thing). In particular, PoE and port-security are two features that can dramatically delay packet forwarding at link-up, by up to 7-10 seconds. (Note: this was also the case on C3750 switches, but there it was limited to 6 seconds and never caused us problems; the 2960X is slower.)

Disabling PoE or port-security has always worked as a workaround. If you also have this problem, please log a TAC case; maybe with enough cases they will consider reprogramming the software to defer PoE/port-security processing at startup.

NOTE: another workaround is to put an IP phone between the PC and the switch :-) then there is no switch link up/down at boot, and it works as well.
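Combining the two culprits named above (PoE and port-security both delaying forwarding), a test-port workaround could look like the following sketch - test only, since it strips port-security from the port, so don't leave it on a production interface:

!
int gig x/x
 shut
 power inline never
 no switchport port-security
 no shut
!

If PXE then succeeds, you have confirmed which feature is adding the link-up delay and can re-enable them one at a time to narrow it down.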

It does sound like either a negotiation doesn't occur as fast as expected, or the failing units do some additional negotiation when attached directly to the Cisco switch (or there's something that defaults differently on it).

Can we see the full configuration  on one of the ports, or have you ONLY got the portfast and switchport mode access statements on the ports ?

Very interesting that these workstations indicate "check cable...".  Do you see link lights on workstation and/or switch when this occurs?  What is the switchport status ?

Are the workstations set up for ONLY gig connections, and the switch is 10/100 ?

All ports like this:

interface GigabitEthernet1/0/24
 description To Customer Eq;USR;;;
 switchport access vlan 628
 switchport mode access
 switchport nonegotiate
 switchport port-security aging time 1440
 switchport port-security aging type inactivity
 switchport port-security
 no logging event link-status
 srr-queue bandwidth share 10 10 60 20
 queue-set 2
 priority-queue out
 no snmp trap link-status
 mls qos trust dscp
 auto qos voip trust
 storm-control broadcast level 8.00
 storm-control action shutdown
 spanning-tree portfast
 ip dhcp snooping limit rate 10
!
