QoS ASR 1K - asymmetric path bandwidth

Jerome BERTHIER
Level 1

Hi

Question about QOS on ASR1K series.

It supports only egress queuing, so service policies that deal with shaping and queuing are usable only on output.

In a design with asymmetric bandwidth between interfaces (1G to 10G, or 10G to a 2x10G LAG, for example), how do you deal with the difference without limiting everything to the lower bandwidth?

For example, let's say an ASR with:

- one interface ge-0/0/0 1G to service provider

- one Etherchannel Po1 2x1G to the core network, with 802.1Q sub-interfaces Po1.10 / Po1.20 / Po1.30

There are at least three main data paths:

Path1) ge-0/0/0 to Po1 = 1G to 2x1G.

Path2) Po1 to ge-0/0/0 = 2x1G to 1G

Path3) Po1.x to Po1.y = 2x1G (in fact multiple 1G-to-1G paths, due to load balancing on the LAG)

On path 1, it's fine to shape around 1G to match the ingress ge-0/0/0 bandwidth.

On path 2, it's fine to shape around 1G to match the egress ge-0/0/0 bandwidth.

On path 3, we could shape around 2G to match the egress Po1 bandwidth, but by default the service policy applied on output doesn't care about the input interface. So the ~1G shaping applied for path 1 would also apply to path 3, and it would limit the overall Etherchannel bandwidth.
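For what it's worth, MQC does define a `match input-interface` classification criterion, so an input-aware egress policy is at least expressible. Below is a hypothetical sketch (the class/policy names and rates are mine, and whether the ASR1K supports this combination in an egress queuing policy on a Port-channel would need to be verified on the platform):

```
! Hypothetical: classify traffic that entered on the 1G SP-facing port
class-map match-any FROM-SP-1G
 match input-interface GigabitEthernet0/0/0
!
! Child policy: cap the slower path's traffic at its ingress rate
policy-map CHILD-PER-PATH
 class FROM-SP-1G
  shape average 1000000000
!
! Parent policy: shape to the overall Etherchannel bandwidth
policy-map PARENT-PO1-2G
 class class-default
  shape average 2000000000
  service-policy CHILD-PER-PATH
!
interface Port-channel1
 service-policy output PARENT-PO1-2G
```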

So I've got two questions:

- is it necessary and feasible to have two separate service policies on output for the Port-channel interface? One that takes the input interface into account, to shape down to the ingress bandwidth of the slower path, and another for the overall Etherchannel bandwidth.

- about Etherchannel interfaces, the documentation states that we can apply the MQC service policy to the Etherchannel itself. Fine, but if we design the policy with the overall bandwidth in mind, what happens if the Etherchannel loses a member interface? In my example, we would fall back to 1G of available bandwidth but still have a 2G policy. So would it be more appropriate to apply old-style aggregate QoS on the physical member interfaces?

Aggregate Etherchannel QoS (latest evolution): https://www.cisco.com/c/en/us/td/docs/routers/ios/config/17-x/qos/b-quality-of-service/m_aggregate_etherchannel_quality_of_service.html

Old Etherchannel QOS: https://www.cisco.com/c/en/us/td/docs/routers/ios/config/17-x/qos/b-quality-of-service/m_qos-eth-int-1.html#GUID-95630B2A-986E-4063-848B-BC0AB7456C44

Regards

Accepted Solution

"I cannot manage QOS on upstream equipments."

Ah, okay, that I understand.

You cannot fully control that the way you could if you were able to manage upstream QoS.

You can "influence" upstream, but it's very dependent on the kinds of traffic involved, it's imprecise (at least on typical network devices - dedicated specialized device can do much better) and it can have some undesirable side effects.

If you like I can get into details, but understand, from a QoS perspective, it's possibly better than nothing, but also possibly, not much better; again side effects can be nasty.  (Sort of like war surgery over a hundred years ago, your life or your arm/leg.)

"Do you mean that in any case on this platform, if egress bandwidth is higher than ingress then congestion can never happen ?"

Correct, also if same (payload) bandwidth.  If you're unsure, try to find an example where it's not true.

As for Etherchannel, traditionally aggregate bandwidth is used, so assuming the only traffic that ingresses is also egressed, it's a same-bandwidth situation.

However, if you use a feature where you can specify some member links are to be used for specific subsets of traffic, and member link allocation can differ between the two sides/ends, then yes you could have congestion.  Basically, being able to assign member links is a hardware form of QoS, logically inferior because it's very easy to create unexpected situations.



Joseph W. Doherty
Hall of Fame

Sorry, I don't understand your problem.

Slower to faster doesn't cause congestion, so there's no need for QoS.

Faster to slower can cause congestion, and QoS might be used to manage such congestion.

It's possible downstream there's a bottleneck (faster to slower) where you cannot apply QoS.  For such cases, upstream of that bottleneck, you can shape for the downstream bottleneck and manage the congestion there, at the shaper.
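As a sketch of that approach (all names and rates below are illustrative, not from the thread): a parent shaper set to the downstream bottleneck rate, with a child policy supplying the queuing and priorities that take effect once the shaper backs up.

```
! Illustrative class for priority traffic
class-map match-any VOICE
 match dscp ef
!
! Child policy: queuing/priorities, active only when the parent shaper congests
policy-map CHILD-QUEUING
 class VOICE
  priority level 1
 class class-default
  fair-queue
!
! Parent policy: shape to the downstream bottleneck rate (e.g. a 1G link)
policy-map SHAPE-FOR-BOTTLENECK
 class class-default
  shape average 1000000000
  service-policy CHILD-QUEUING
!
interface TenGigabitEthernet0/0/0
 service-policy output SHAPE-FOR-BOTTLENECK
```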

Your examples are slower to faster, correct?

Still trying to understand your issue.

Path 1 slower to faster case - non issue.

Path 2 is faster to slower case - can apply QoS - no need to shape.

Path 3 is same to same, no need for QoS.

 

Sorry, but from my reading, without shaping there is no queuing and scheduling. So how could you manage priorities in case of congestion?

Even with same-to-same bandwidth, you can face congestion and need priorities.

 

Hi Joseph

I cannot manage QOS on upstream equipments.

So in my example, if you look at path 1, yes, it's a path where the egress interface bandwidth is higher than the ingress interface bandwidth. But as ingress queuing doesn't exist on the ASR1K, how do you deal with ingress congestion? I was thinking that queuing on egress was also the way to regulate the ingress of the path. Do you mean that on this platform, in any case, if egress bandwidth is higher than ingress, then congestion can never happen? I'm not sure of that. If true, then setting up QoS will be simpler, no doubt.

Anyway, my question about the impact on QoS when losing a member interface of an Etherchannel still stands.

Regards

First, an Etherchannel traditionally requires the same QoS configuration applied to each port member.

Second, in the end QoS sees one virtual interface, not multiple, and hence QoS is not affected if one port member goes down.

MHM

No. Declaring the same QoS on each port member is no longer needed. You can set it at the Etherchannel level. See "aggregate Etherchannel QoS": https://www.cisco.com/c/en/us/td/docs/routers/ios/config/17-x/qos/b-quality-of-service/m_aggregate_etherchannel_quality_of_service.html
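For reference, per that document, aggregate Etherchannel QoS on the ASR1K is enabled globally before the Port-channel interface is created, and the policy is then attached to the Port-channel itself. A minimal sketch (the policy name is invented):

```
! Enable aggregate QoS support for Port-channel 1 (must be configured
! before the port-channel interface is created)
platform qos port-channel-aggregate 1
!
interface Port-channel1
 service-policy output PO1-AGGREGATE-POLICY
```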

OK, you confirm what I was thinking. With aggregated Etherchannel QoS, if you lose one member interface, QoS can no longer work as intended, unless you shape at a rate below the physical bandwidth of a single member.

I think I see the final use case. QoS on an Etherchannel is more about cutting slices of traffic for sub-interfaces than about managing the overall traffic of the aggregated bandwidth.
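That "slices per sub-interface" pattern would look roughly like the following (rates and names are illustrative): each 802.1Q sub-interface gets its own shaper for its share of the aggregate.

```
! Illustrative per-slice shaper (500 Mb/s per sub-interface)
policy-map SHAPE-500M
 class class-default
  shape average 500000000
!
interface Port-channel1.10
 service-policy output SHAPE-500M
!
interface Port-channel1.20
 service-policy output SHAPE-500M
```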

 

QoS on a Port-channel:

Whether QoS is applied on a physical or virtual interface depends on the QoS type and direction.

[Screenshot attachment: Screenshot (173).png]

All those devices are switches. The OP's device is an ASR1K, a router, and router QoS is usually more capable than a switch's.

Also, is this from a CiscoLive presentation? Regardless, what is the date of the source?

This is for ASR IOS XE. You can see that QoS can be applied on:
A- a member interface
B- the main Port-channel interface
C- a Port-channel sub-interface

I'm sharing two slides for you and others to show that QoS has a direction and a type, and those determine on which interface you can configure it.

MHM

[Screenshot attachments: Screenshot (174).png, Screenshot (175).png]

