tmdempsey
Beginner

Line Protocol up/down vs Interface Resets

I've stumbled across something that's a little puzzling, at least to me.  On a 3750E running 12.2 code, I have an interface where the logs show its line protocol has gone up and down several times this morning.  However, when I do a show interface and look at the interface statistics, "interface resets" is still 0.  Shouldn't this stat increment when the line protocol goes down?  I found a description of this stat in a few documents on the support site, and it states what you see below.  I would assume a line protocol down would cause the interface resets stat to increment.

--blip from online docs--

Number of times an interface has been completely reset. This can happen if packets queued for transmission were not sent within several seconds. On a serial line, this can be caused by a malfunctioning modem that is not supplying the transmit clock signal, or by a cable problem. If the system notices that the carrier detect line of a serial interface is up, but the line protocol is down, it periodically resets the interface in an effort to restart it. Interface resets can also occur when an interface is looped back or shut down.

--blip from online docs--

-Tommy
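For reference, the counter I'm talking about shows up near the bottom of the show interfaces output; a quick way to pull out just that line (interface name hypothetical, counter value illustrative):

```
Switch# show interfaces GigabitEthernet1/0/1 | include interface resets
     0 interface resets
```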


Leo L
VIP Community Legend

a 3750e running 12.2 code

Which one?

I have an interface where the logs show its line protocol has gone up and down several times this morning.

If you see both "protocol" and "line" going up and down, then this is fine.  It just means that the clients have powered on their computers.

The specific version is 12.2(44r)SE3.  I'm not really worried about the line going up and down, as I know these clients were rebooting all day; I'm just curious why the interface resets counter didn't behave as I would expect it to. 

boss.silva
Beginner

Hello,

Interface resets are not related to up/down transitions.

They usually indicate a missed keepalive, link congestion, or a hardware issue. The following link contains more information about interface resets for serial lines specifically:

http://www.cisco.com/en/US/docs/internetworking/troubleshooting/guide/tr1915.html#wp1020941

Let me know if that helps you.

Regards,

Bruno Silva.

Is there anything like this out there for gig e interfaces?  Does a gig e interface even send keep-alives if you don't tell it to?  I came across this explanation of resets on a serial interface, but it didn't seem to really answer my question.  And, if there are keep-alives, they should fail when the line protocol goes down, correct?

Hello,

Is there anything like this out there for gig e interfaces?  Does a gig e interface even send keep-alives if you don't tell it to?

Yes. GigabitEthernet will send keepalives every 10 seconds by default and will repeat 5 times.

The following url explains in detail for each technology how to set up the keepalive and how to turn it off:

http://www.cisco.com/en/US/docs/ios/12_3/interface/command/reference/int_i1g.html#wp1154231
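As a quick sketch of what that document describes, assuming a Catalyst switchport (the interface name is hypothetical), the keepalive interval and retry count can be set, or keepalives disabled, under the interface:

```
Switch(config)# interface GigabitEthernet1/0/1
! set the keepalive interval to 10 seconds with 5 retries
Switch(config-if)# keepalive 10 5
! or disable keepalives on the interface entirely
Switch(config-if)# no keepalive
```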

And, if there are keep-alives, they should fail when the line protocol goes down, correct?

The reset counter reflects an operational point of view. It is confusing, I agree. But for instance, if you do a "clear interface X", it will reset the hardware logic, thereby incrementing the "interface resets" counter. A shut/no shut on the interface also increases it.

An interface going down on its own does not increase the counter, even if it goes down and comes right back up again rapidly.
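A minimal sketch of the difference (interface name hypothetical, counter value illustrative):

```
! an administrative shut/no shut increments the counter...
Switch(config-if)# shutdown
Switch(config-if)# no shutdown
Switch# show interfaces GigabitEthernet1/0/1 | include interface resets
     1 interface resets
! ...as does clearing the interface
Switch# clear interface GigabitEthernet1/0/1
! but a link that merely flaps (e.g. the far-end host rebooting)
! leaves the counter unchanged
```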

Is that clear?

Regards,

Bruno Silva.

Thanks for that document.  I think it helps explain a little.  So, now I have one last question.  The document says "An interface is declared down after three update intervals have passed without receiving a keepalive packet unless the retry value is set higher".  So, from this, would I be correct to assume I should see a reset if the line protocol goes down for longer than 30 seconds?

Thanks for your help on this.  It still seems like there should be a document out there that describes how interface resets and all other statistics actually work.

Thanks again.

Hello,

I'm glad that kind of answered your question.

This phrase indicates that the interface will be declared down after three tries, but that doesn't mean it will increment the "interface resets" counter. Theoretically it will not increment; it will just take more time for the line protocol to be marked as down.

Regards,

Bruno Silva.

Peter Paluch
Hall of Fame Cisco Employee

Bruno,

GigabitEthernet will send keepalives every 10 seconds by default and will repeat 5 times.

To my best knowledge, all switchports on Cisco Catalyst switches do that, starting with 10Mbps Ethernet. However, I do not understand the comment about "repeating it 5 times" - can you be more precise about that?

I had quite an extensive - and very inspiring! - discussion about these Ethernet keepalives with Giuseppe Larosa. Catalysts actually seem to be using those keepalives for loop detection - not for keepalive checking.

See the following discussion:

https://supportforums.cisco.com/message/3005684

Standalone keepalive messages that actually check on link liveliness are used, to my best knowledge, in HDLC, PPP, and GRE. These Ethernet keepalives (also called LOOP frames) appear to have been invented for a similar purpose, but have not fulfilled it for a long, long time. Actually, with Catalyst switches, it is expected that you do not receive the LOOP keepalive message back - if you do, the port is declared self-looped and is err-disabled.

Best regards,

Peter

Hello,

The document is kind of confusing now!

It mentions:

retries

(Optional) Specifies the number of times that the device will continue to send keepalive packets without response before bringing the interface down. Integer value greater than 1 and less than 255. If omitted, the value that was previously set is used; if no value was specified previously, the default of 5 is used.

If using this command with a tunnel interface, specifies the number of times that the device will continue to send keepalive packets without response before bringing the tunnel interface protocol down.


AND:

An interface is declared down after three update intervals have passed without receiving a keepalive packet unless the retry value is set higher.

Question now is, whether this is 5 or 3.

Unfortunately I don't have any equipment to test it.

Regards,

Bruno Silva.    

Peter Paluch
Hall of Fame Cisco Employee

Bruno,

The retries count was increased in IOS 12.2(13)T to 5, according to the IOS Interface and Hardware Component Command Reference. It was most probably 3 before that IOS version.

However, these keepalives are probably unrelated to the interface resets seen on Catalyst Ethernet (and faster) switchports.

Best regards,

Peter

Hello,

Ok. Question solved.

Now the document needs to be updated. I'll send a correction request.

And yes, the keepalives are not related to the interface resets.

Nice to have these discussions. It seems to be a simple topic, but like any topic, it can get quite complex.

Regards,

Bruno Silva.

I will run a test to see whether or not the keepalives affect the interface resets statistic - or, more specifically, whether missing keepalives for the max retries causes the interface resets counter to increment.  I'll report back as soon as I have the results.  It may be early tomorrow.

Thank you both for your help.

tmdempsey
Beginner

I tested whether "interface resets" would increment after a time well past keepalive interval * retries.  It did not increment. So, I guess the keepalive is more of a local check, and "interface resets" is incremented only when something local to the switch causes a reset.  This is pretty much how you guys explained it above.  Thanks again for your help. 

Another random question, which started me down the path of looking into this statistic in the first place.  Is there a statistic one can look at to detect flapping?  Sure, you can look through the logs if your logging level is high enough and see protocol up/down messages, but other than that, is there anything else?
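For what it's worth, the best I've found so far is grepping the log buffer for state-change messages, assuming logging is at the default informational level (interface name hypothetical):

```
Switch# show logging | include GigabitEthernet1/0/1.*UPDOWN
%LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/1, changed state to down
%LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/1, changed state to up
```

Repeated down/up pairs in a short window would suggest a flap, but this only works as far back as the log buffer reaches.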