OSPF cost calculation
04-16-2016 07:24 AM - edited 03-05-2019 07:00 AM
Hello,
I know that the formula to calculate the OSPF cost is:
COST = reference bandwidth in bps / interface bandwidth in bps
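For example, with Cisco's default reference bandwidth of 100 Mbps, a 10 Mbps link gets a cost of 100,000,000 / 10,000,000 = 10, and a T1 (1.544 Mbps) gets 100,000,000 / 1,544,000 = 64 (the result is truncated to an integer, with a minimum of 1).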
I also know that it's important that the bandwidth value reflects the actual speed of the link so that the calculation is as accurate as possible and the route has accurate best path information.
What I'm just wondering about is the term "actual speed". I know that the actual speed (I guess they refer to the throughput) is lower than the bandwidth.
So my first question is: what is the approach to defining the actual speed of a link? Or let me ask this: how does IOS determine the BW (bandwidth) value shown by show interface .............. | include BW?
In Cisco materials they say that for Ethernet interfaces the bandwidth usually matches the link speed, but I'm not sure whether this statement is correct. I remember reading white papers where the results were not equal.
They also mention that the actual speed of serial interfaces is often different from the default bandwidth. So again, how would I know which value to set with the bandwidth command? And last, I remember our class instructor once mentioned that serial connections are not really used nowadays; is this true?
P.S. I'm aware that neither changing the reference bandwidth nor the interface bandwidth affects the actual speed or capacity of the link.
I highly appreciate all comments :)
Wishing all of you a great weekend.
Best Regards
Adam
04-18-2016 07:35 AM
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
I know that the formula to calculate the OSPF cost is:
COST = reference bandwidth in bps / interface bandwidth in bps
Yes and no.
Per the OSPF standard, cost is just an arbitrary metric; the standard doesn't dictate how it's derived. However, Cisco, years ago, set up OSPF cost to be auto-calculated from bandwidth, using the formula you note. (BTW, other vendors might not auto-calculate, and those that do often use a different default reference bandwidth.)
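For instance, if you have links faster than 100 Mbps, you can raise the default reference bandwidth so they still get distinct costs. A minimal sketch (the OSPF process ID is just an example); the value is given in Mbps:
router ospf 1
 ! reference bandwidth in Mbps; 100000 = 100 Gbps
 auto-cost reference-bandwidth 100000
IOS will also warn you to apply the same value on all routers in the OSPF domain so costs stay consistent.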
I also know that it's important that the bandwidth value reflects the actual speed of the link so that the calculation is as accurate as possible and the route has accurate best path information.
Most of the time, having cost accurately reflect available bandwidth is good, but since the purpose of OSPF cost is path selection, you might adjust a path's cost so that traffic flows along what you consider the best path.
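If you do want to steer traffic, here's a sketch of overriding the auto-calculated cost on a per-interface basis (interface name hypothetical):
interface GigabitEthernet0/1
 ! static cost; overrides the bandwidth-based auto-calculation for this interface
 ip ospf cost 50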
What I'm just wondering about is the term "actual speed". I know that the actual speed (I guess they refer to the throughput) is lower than the bandwidth.
Usually, if you're costing, and setting bandwidth, for something like OSPF, you ensure the "actual speed" is what's actually available. For example, nowadays you might have a 100 Mbps Ethernet WAN hand-off that's logically capped at some lower rate. So, for example, if your 100 Mbps hand-off is logically capped at 30 Mbps, you could adjust the interface's bandwidth statement to also be 30 Mbps.
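Something like this sketch (interface name hypothetical; note the bandwidth command takes a value in kilobits per second):
interface GigabitEthernet0/0
 ! 30000 kbps = 30 Mbps, matching the carrier's logical cap
 bandwidth 30000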
So my first question is: what is the approach to defining the actual speed of a link? Or let me ask this: how does IOS determine the BW (bandwidth) value shown by show interface .............. | include BW?
Usually IOS just sets the default BW value to match the physical interface speed.
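For example, on a gig port that came up at 1 Gbps, the output would look something like this (interface name hypothetical):
Router# show interface GigabitEthernet0/0 | include BW
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,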
In Cisco materials they say that for Ethernet interfaces the bandwidth usually matches the link speed, but I'm not sure whether this statement is correct. I remember reading white papers where the results were not equal.
They also mention that the actual speed of serial interfaces is often different from the default bandwidth. So again, how would I know which value to set with the bandwidth command? And last, I remember our class instructor once mentioned that serial connections are not really used nowadays; is this true?
You might be confusing physical bit modulation speed with L2 or L3 speed.
Serial ports often detect the physical modulation speed and self-adjust their bandwidth (similar to how a 10/100/1000 Ethernet port sets its bandwidth to 10, 100, or 1000 Mbps, depending on what speed the link came up at).
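If the detected or default value doesn't match what the circuit actually delivers (e.g., a fractional T1), you can set it explicitly. A sketch, with a hypothetical interface name:
interface Serial0/0/0
 ! full T1 = 1544 kbps; for a fractional circuit, use the provisioned rate instead
 bandwidth 1544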
