1773 Views · 5 Helpful · 4 Replies

How does a router ethernet interface determine available bandwidth?

Hello there.  I'm looking at implementing some QoS on carrier ethernet links, most likely using the bandwidth and/or priority commands in a QoS service policy.

If I understand them correctly, these commands can reserve a particular number of kbps of bandwidth, or a percentage of the available bandwidth.  Either way, the traffic would be queued based on congestion on the line.  And I assume knowing the total available bandwidth is an important part of determining that - I could be wrong, please correct me if I am.
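For concreteness, the sort of service policy I have in mind looks something like this (class names, DSCP values, and rates are just placeholders):

```
class-map match-any VOICE
 match dscp ef
class-map match-any BUSINESS
 match dscp af31
!
policy-map WAN-EDGE
 class VOICE
  priority 1000
 class BUSINESS
  bandwidth percent 30
 class class-default
  fair-queue
```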

If the link were a T1 line, this would be straightforward - there's a known bandwidth, and known max-reserved-bandwidth.  But what about an ethernet handoff?  The provider might hand off at 1Gbps, so that's what the router interface sees - but the link itself may only be 10Mbps, or 20Mbps, etc.

I've used some GRE tunnels in the past, and the solution was to shape all traffic to the specific bandwidth of the line, as my understanding was that GRE tunnels are unable to detect congestion like a physical interface.  But I'd like to do away with those, as I'm moving toward more of a meshed WAN.

So my question is, in this case, where the interface might be connected at a particular link speed - say 1Gbps - but the actual available bandwidth of the provider line is something different - say 20Mbps - how does the router determine the available bandwidth?  Or am I asking a question that doesn't matter, because congestion will be determined some other way?

Thank you!

3 Accepted Solutions

vmiller
Level 7

One method is to use a sub interface and set the 'provisioned throughput' there.
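One way to express that (a sketch; VLAN, addressing, and the 20 Mbps rate are placeholders - on most IOS routers the provisioned rate is enforced with a shaping service policy on the subinterface):

```
policy-map SHAPE-20M
 class class-default
  shape average 20000000
!
interface GigabitEthernet0/0.100
 encapsulation dot1Q 100
 ip address 192.0.2.1 255.255.255.252
 service-policy output SHAPE-20M
```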

Hello, Robert.

Let's say you have G0/0 running at 1000/full.

Let's say the provider told you that you have only 20M on the link.

Your router will be able to send traffic out of the interface at link speed... but the provider implements a shaper (or even a policer) on your link and drops all traffic exceeding that bandwidth.

In this case congestion occurs on the provider's device, and (unless the provider offers QoS) you have no way to influence which packets should be dropped.

TCP is designed to deal with packet loss, so your applications will run at the maximum speed they can (let's forget about latency for simplicity) according to the real bandwidth.

What you can do: implement your own shaper in the outbound direction and use a queueing mechanism that prefers important traffic over other traffic in case of congestion.

Answering your question: you are the only one who can let your router know the actual bandwidth (you will configure it under the shaper policy).
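A rough sketch of such a shaper-plus-queueing policy (class names and the 20 Mbps rate are examples only - the child queueing policy takes effect once the parent shaper congests):

```
class-map match-any IMPORTANT
 match dscp ef
!
policy-map CHILD-QUEUING
 class IMPORTANT
  priority percent 20
 class class-default
  fair-queue
!
policy-map PARENT-SHAPE
 class class-default
  shape average 20000000
  service-policy CHILD-QUEUING
!
interface GigabitEthernet0/0
 service-policy output PARENT-SHAPE
```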

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

So my question is, in this case, where the interface might be connected at a particular link speed - say 1Gbps - but the actual available bandwidth of the provider line is something different - say 20Mbps - how does the router determine the available bandwidth?

The router doesn't determine bandwidth.  By default, it will only "know" the bandwidth of the physical port.

I've used some GRE tunnels in the past, and the solution was to shape all traffic to the specific bandwidth of the line, as my understanding was that GRE tunnels are unable to detect congestion like a physical interface.  But I'd like to do away with those, as I'm moving toward more of a meshed WAN.

What you've done for GRE tunnels is also often the (similar) solution for WAN clouds that offer less bandwidth than the physical port, i.e. you need to shape.

How this is done varies greatly with the WAN cloud technology.  For example, on older frame-relay you would shape per PVC, and on ATM, the PVCs often have a "built-in" shaper (on the interface).

For Ethernet, you often need to shape per destination address block.

Dealing with mesh topologies is a problem unless your WAN provider also supports per-site egress QoS.  Generally MPLS cloud vendors do; (Metro)Ethernet cloud vendors do not.  Even when there's cloud egress QoS, you may still need to shape for the local egress bandwidth when it's less than the port bandwidth.

So for example, if you had a gig hand-off, but a logical local cap of 20 Mbps (actually, for gig hand-offs the logical cap would more likely be something like 200 Mbps), you shape your egress traffic for that, but you might also need to shape for the destination's bandwidth if it's less.  Again, with mesh topologies, as multiple sites could be sending to the same destination, you either need cloud egress QoS, or you need to "allocate" a shape bandwidth toward that site on every other site, controlling the aggregate across all your sites.  (The latter is difficult to maintain and balloons as the number of sites increases.)
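As an illustration of shaping per destination address block, something like the following (site subnets and rates below are hypothetical; each remote site's traffic is matched by destination and shaped to that site's bandwidth, with everything else capped at the local logical rate):

```
ip access-list extended TO-SITE-B
 permit ip any 10.2.0.0 0.0.255.255
!
class-map match-any SITE-B
 match access-group name TO-SITE-B
!
policy-map PER-DEST-SHAPE
 class SITE-B
  shape average 20000000
 class class-default
  shape average 200000000
!
interface GigabitEthernet0/0
 service-policy output PER-DEST-SHAPE
```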

4 Replies


These have all been very helpful, thank you.  Looks like I'll probably need to do my own shaping, then - either per subnet, or per subinterface terminating each circuit.  Either way, doing a mesh is going to be complex on carrier ethernet - I can see why MPLS with provider QoS would be preferable.  Perhaps I'll consider sticking to hub-and-spoke as long as I'm on this technology.

Thanks again for everyone's help!
