
Switch ports keep going down/timing out?

terryit31
Level 1

Let me preface my post by saying that I am extremely new at configuring and working on Cisco switches, so I may not be using the correct phrasing or terms to describe my problem.  We were sort of left high and dry by the company that set our network equipment up, as they moved out of state and do not provide service for us anymore.

A few months ago, they switched out our Allied Telesyn switches with PoE Cisco 2960 switches. Ever since they were put in, we have computers on our network that will randomly lose network connectivity. There doesn't seem to be any pattern to most of the computers dropping their connection. Sometimes it's at boot, sometimes it's midday, sometimes in the evening, etc.

I was hoping someone could take a look at the switch config and offer some advice/edits to improve it.  The only thing I have edited in the config is adding a keepalive to some of the ports with devices connected. Interface Fa0/18 connects to one of the computers that keeps dropping its connection. It's an internal security camera computer, so not having security footage could be a problem.
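Roughly what I added is below (the 10-second interval is just the IOS default, and Fa0/18 is the camera computer's port from the config linked below):

conf t
interface FastEthernet0/18
 keepalive 10
end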

Thank you!

http://pastebin.com/b0qwMNic


11 Replies

pwwiddicombe
Level 4

How many switches do you have, and are they all configured similarly to the one you posted?

Do you still have other switches (hubs) in the environment?

Is there any possibility that you have redundant links between the switches that could be causing a loop?  (e.g. meeting rooms with multiple ports, where users can end up connecting two ports together.)  This can cause spanning-tree loops, which can be difficult to track down.  I *do* see bpduguard in place, which will reduce that risk, but the older Allied Telesyn switches might not have an equivalent.
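For reference, the sort of edge-port protection I mean usually looks something like this on the access ports (just a sketch - the interface range is an assumption, and your config may already have the equivalent):

conf t
interface range FastEthernet0/1 - 24
 spanning-tree portfast
 spanning-tree bpduguard enable
end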

We have quite a few, I'd say around 12 switches, and they should all be configured similarly. 2-3 are the Allied Telesyn, and the rest are Cisco. The Ciscos were put in to provide PoE to our Cisco VoIP phones.

The computers plugged into the Allied Telesyn switches have never had a problem dropping their connection. It's only with the Ciscos.

Do you have cabled redundancy in place (i.e. multiple links between switches)?  The Allied Telesyn switches probably don't run PVST, so they won't interoperate cleanly with the Cisco spanning tree.  If each of them has a single link into the Cisco environment (and no links between the AT switches), then they shouldn't be a problem.  If there are redundant links, though, certain conditions can trigger major spanning-tree problems that are difficult to isolate and affect large numbers of users when they occur.

Are your problems widespread when they happen (i.e. multiple stations all affected at the same time), or is it random workstations or areas periodically losing connection?  The widespread case could be a spanning-tree condition (or simply a major uplink failing).  The Cisco logs should show flapping messages if it's spanning tree.
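If you want to check for that, something along these lines from enable mode should show whether the topology is churning (output wording varies a bit by IOS version):

show spanning-tree summary
show spanning-tree detail | include changes

The "changes" line shows how many topology changes each VLAN has seen and roughly when the last one occurred.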

I believe we had a spanning tree issue that caused widespread network failures when they installed one of the new switches, but they were able to get that resolved.

The problem we are still having is just one computer at a time losing its network connection randomly. Sometimes it happens once a week, sometimes we go weeks without a report. However, I am not 100% sure that all outages are being reported to IT.

The one I've noticed having the most trouble is Interface Fa0/18 in the configuration I provided. It seems to drop every 2-3 days and causes video recording to stop. Perhaps adding a keepalive option to it would help?

Keepalives may already be enabled by default (a "sh int fast 0/18" would show that); but it seems the purpose is to detect "unusual" cabling or looping issues on a given port.
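A quick way to confirm, roughly (the exact output wording varies by IOS version, but there should be a line showing whether keepalives are set and at what interval):

sh int fast 0/18 | include Keepalive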

Is there anything unusual in the switch logs?  A simple "sh log" will show the last 20 events or so that have happened on the switch, so doing this during/after the failure occurs may yield some information.  In fact, doing this periodically MAY alert you to problems on your network you weren't aware of.

Your default logging buffer may be quite small, so it won't hold much history.  You might add "logging buffered 8000" to the config to give you a little more.  Note that when you do this it clears the current buffer (which only contains log entries you probably don't care about anyway).
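If it helps, the change would look roughly like this from an enable prompt (8000 bytes is just the value suggested above; adjust to taste):

conf t
logging buffered 8000
end
wr mem

After that, "sh log" should report the larger buffer size.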

Thanks for your help, btw. I appreciate it.

May 13 15:07:15.178: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/9, changed state to up

May 13 16:00:40.725: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/6, changed state to down

May 13 16:00:41.723: %LINK-3-UPDOWN: Interface FastEthernet0/6, changed state to down

May 13 16:00:43.938: %LINK-3-UPDOWN: Interface FastEthernet0/6, changed state to up

May 13 16:00:44.944: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/6, changed state to up

May 13 16:30:35.224: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/3, changed state to down

May 13 16:30:37.237: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/3, changed state to up

May 13 17:38:52.287: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/4, changed state to down

May 13 17:38:53.285: %LINK-3-UPDOWN: Interface FastEthernet0/4, changed state to down

May 14 08:27:43.700: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/4, changed state to up

May 14 08:27:44.790: %LINK-3-UPDOWN: Interface FastEthernet0/4, changed state to up

May 14 16:44:58.748: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/4, changed state to down

May 14 16:44:59.746: %LINK-3-UPDOWN: Interface FastEthernet0/4, changed state to down

May 16 06:58:16.721: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1, changed state to down

May 16 06:58:18.725: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1, changed state to up

May 16 08:10:09.849: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1, changed state to down

May 16 08:10:11.862: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/1, changed state to up

May 16 09:21:24.340: %LINK-3-UPDOWN: Interface FastEthernet0/4, changed state to up

May 16 09:21:27.494: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/4, changed state to up

May 16 09:22:39.787: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/6, changed state to down

May 16 09:22:40.785: %LINK-3-UPDOWN: Interface FastEthernet0/6, changed state to down

May 16 09:22:45.013: %LINK-3-UPDOWN: Interface FastEthernet0/6, changed state to up

May 16 09:22:46.019: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/6, changed state to up

May 16 09:23:00.918: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/6, changed state to down

May 16 09:23:02.931: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/6, changed state to up

May 16 12:24:37.585: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/6, changed state to down

May 16 12:24:38.583: %LINK-3-UPDOWN: Interface FastEthernet0/6, changed state to down

May 16 12:24:40.798: %LINK-3-UPDOWN: Interface FastEthernet0/6, changed state to up

May 16 12:24:41.804: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/6, changed state to up

May 16 19:54:43.927: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/4, changed state to down

May 16 19:54:44.934: %LINK-3-UPDOWN: Interface FastEthernet0/4, changed state to down

May 17 07:20:49.812: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/18, changed state to down

May 17 07:20:50.819: %LINK-3-UPDOWN: Interface FastEthernet0/18, changed state to down

May 17 07:20:53.033: %LINK-3-UPDOWN: Interface FastEthernet0/18, changed state to up

May 17 07:20:54.040: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/18, changed state to up

May 17 08:10:14.774: %SYS-5-CONFIG_I: Configured from console by vty0 (10.1.1.28)

May 17 08:20:50.630: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/3, changed state to down

May 17 08:20:52.644: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/3, changed state to up

May 17 09:21:52.048: %LINK-3-UPDOWN: Interface FastEthernet0/4, changed state to up

May 17 09:21:53.055: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/4, changed state to up

May 17 10:54:02.657: %SYS-5-CONFIG_I: Configured from console by vty0 (10.1.1.28)

OK - nothing interesting there.  Had the camera port experienced any issues in that interval (3 days...)?

No, it was up and functioning. It had just lost its connection.

Do you have any idea what the aggregate distance is on some of your cable runs?  This would be from switchport to the server and workstations in question.  If you exceed 100 meters (including all patch cords, and cabling within the building) you can run into issues.

The previous switches you had - were they 10 or 10/100 ports?  If the distance is longer than the limit (or you have cat 5 cabling on longer runs) then 100 Meg can be flaky, while 10 Meg may operate just fine.

From the logs, it isn't a spanning tree issue - it looks more like a physical link loss, possibly due to cabling issues.  Do a "sh int counters errors" and see if the ports having issues also have other errors.
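For what it's worth, checking a single suspect port would look something like this (Fa0/18 is just the example port here; column names vary slightly by platform):

sh int fa0/18 counters errors
sh int fa0/18

Non-zero and climbing FCS, alignment, or runt counters on the flapping ports would point at the cabling rather than the switch config.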

Switched ports and re-cabled and so far so good. Thanks again for your assistance. I was hoping it would be something more complicated :)

If it doesn't seem to make sense, then it's complicated?

I have lived through both types of scenario (spanning tree/miscabling, and poor-quality cable/long runs).  Believe me, the cable-quality/long-run problems are easier to deal with, although they can take time.

Actually, I stumbled across a semi-useful command a few years ago for cabling issues.  "speed auto 10 100" or "speed auto 10" still allows speed negotiation, but limits the speeds a port will advertise, which helps when the cabling can't handle the higher rate.  I'm not sure how many platforms it's supported on - this was on 4506s at the time.
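If the 2960s accept the same syntax (I haven't verified that; it worked on the 4506s), applying it per port would look roughly like this, with Fa0/18 used only as an example:

conf t
interface FastEthernet0/18
 speed auto 10 100
end

If the auto form isn't accepted there, hard-setting "speed 100" or "speed 10" on the suspect port is a cruder way to test the same idea.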