Cisco ISR and IP Precedence

Piotr Kowalczyk
Level 1

Hi,

I just wonder if somebody could help me understand IP Precedence configuration on a Cisco ISR. Basically, following the Cisco documentation (https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/qos-plcshp-xe-3s-book/qos-plcshp-two-rate-plcr.html#GUID-26784877-A7A9-4BF1-86E2-A6BDDEDC4B66), I want to set cir and pir with IP Precedence like below:

police cir 1048500

pir 20970000

conform-action transmit

exceed-action set-prec-transmit 2

violate-action drop

which is nice and clear. What is not clear to me is how to make it work. Is the queueing (WFQ or WRED) that will use this marking enabled by default, or do I need to configure it? If I need to configure it, what would be the simplest way to achieve the goal: 10 Mbps of traffic always transmitted with the highest priority, traffic between 10 Mbps and 20 Mbps transmitted with low priority, and traffic above 20 Mbps dropped?

Thank you

1 Accepted Solution

16 Replies

Joseph W. Doherty
Hall of Fame

Firstly, regarding remarking in an egress policy (an egress policy being necessary to support FQ or WRED): I recall (?) the policy bases its actions on the ToS that was passed into it. However, you can police, and remark, in an ingress policy on the same device, and the same device's egress policy can then use the ingress policy's ToS marking.

I.e., to do what you've requested would be something like:

policy-map ingress
class class-default
police cir 1048500 pir 20970000 conform-action transmit exceed-action set-prec-transmit 2 violate-action drop

policy-map egress
class low-priority !needs class-map to match IPPrec 2
bandwidth percent 1
class class-default
bandwidth percent 99

in g1 !ingress
service-policy in ingress

in g2 !egress
service-policy out egress

You have a problem, though, if there are several ingress ports that "feed" the egress port. In such a case, only the egress port would have an accurate accounting of aggregate bandwidth consumption, and so you could not then prioritize conforming traffic over non-conforming traffic. (BTW, this is an unusual requirement; generally, non-conforming traffic is just set to drop sooner, but again, the egress policy, I believe, would use the ToS marking passed into it when making the dropping decision.)
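The cir/pir policer in the ingress policy above can be thought of as two token buckets checked in sequence, in the spirit of RFC 2698's two-rate three-color marker. Below is a toy Python sketch, not ISR code: the burst sizes and the time model are simplifying assumptions, only the classification logic is the point.

```python
# Toy two-rate policer (conform / exceed / violate), loosely modeled
# on RFC 2698's two-rate three-color marker. Rates match the thread;
# burst sizes and the refill model are simplifying assumptions.
CIR = 1_048_500       # committed information rate, bits/sec
PIR = 20_970_000      # peak information rate, bits/sec
BC = CIR // 8         # committed burst, bytes (assumed: 1 second of CIR)
BE = PIR // 8         # peak burst, bytes (assumed: 1 second of PIR)

def make_policer():
    state = {"tc": BC, "tp": BE, "last": 0.0}

    def police(size_bytes, now):
        # Refill both buckets for the elapsed time, capped at their bursts.
        elapsed = now - state["last"]
        state["last"] = now
        state["tc"] = min(BC, state["tc"] + CIR / 8 * elapsed)
        state["tp"] = min(BE, state["tp"] + PIR / 8 * elapsed)
        if size_bytes > state["tp"]:
            return "violate"        # violate-action drop
        state["tp"] -= size_bytes
        if size_bytes > state["tc"]:
            return "exceed"         # exceed-action set-prec-transmit 2
        state["tc"] -= size_bytes
        return "conform"            # conform-action transmit
    return police

police = make_policer()
print(police(1500, 0.0))  # conform: first packet fits the committed bucket
```

A burst sent faster than CIR first drains the committed bucket (packets go from conform to exceed, i.e., remarked to precedence 2), and only once the peak bucket also drains do packets violate and drop.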

Hi Joseph,

Thank you for your reply. From your configuration, it looks like I don't need to enable WFQ or WRED using an additional command?

service-policy 

on the interface is enough? Am I right, or have I missed something?

 

Depends on the platform, but from what you've described, you probably would need only something similar to what I posted. 

Could you tell me how to check if I need additional configuration please?

Possibly - provide specifics for your platform, what you want the QoS to do, and network topology.

Joseph,

Thank you for your help, I really appreciate this.

The platform is Cisco ISR C1111-8p, IOS ver 16.12.4, license ipbase.

Network topology:



300 Mbps MAN connection  <---> GE0/0/0 --NAT-- ISR 1100 --- GE0/0/1 (with 20 subinterfaces GE0/0/1.101 - GE0/0/1.120) <---> Switch with VLANs



This router manages the internet connection for an office-space renting company, to separate their networks, NAT, and manage bandwidth. I'm replacing an old Cisco router which was working well, but I want to improve bandwidth management. Until now I had the following QoS policies (example for 10 Mbps):



policy-map PM-ANY-QoS-10MB
 class CM-ANY-QoS
   police cir 10485500
     conform-action set-dscp-transmit default
     violate-action drop

interface GigabitEthernet0/1.101
 service-policy input PM-ANY-QoS-10MB
 service-policy output PM-ANY-QoS-10MB

interface GigabitEthernet0/0
random-detect precedence



In the new configuration I want to get more flexibility. So each VLAN will have a cir (guaranteed bandwidth) and, on top of this, shared bandwidth which is not guaranteed but can be used if available (pir - cir). On the Cisco link I showed before, I found an example which in theory should do what I want.



policy-map PM-ANY-QoS-20MB-SHARED
 class CM-ANY-QoS
   police cir 1048500
             pir 20970000
     conform-action transmit
     exceed-action set-prec-transmit 2
     violate-action drop

policy-map PM-ANY-QoS-D30MB-S60MB
 class CM-ANY-QoS
    police cir 31455000
              pir 62910000
    conform-action transmit
    exceed-action set-prec-transmit 2
    violate-action drop


interface GigabitEthernet0/0/1.101
 service-policy input PM-ANY-QoS-20MB-SHARED
 service-policy output PM-ANY-QoS-20MB-SHARED

interface GigabitEthernet0/0/1.102
 service-policy input PM-ANY-QoS-D30MB-S60MB
 service-policy output PM-ANY-QoS-D30MB-S60MB


The second policy is supposed to guarantee 30 Mbps and give an additional 30 Mbps (60 Mbps in total) of shared bandwidth, if available, in both directions. My problem is, I'm not quite sure whether this will work as I think it should, and how to manage this IP Precedence so that the interfaces (VLAN or WAN) will prioritize traffic the way I want them to. This should apply to both upload and download speed. Or perhaps I should use DSCP instead of IP Precedence?

The worst thing is, I'm not sure how to test QoS settings in a lab environment, so whatever I set, it may be difficult to make sure it is actually working.

Thanks again

For starters, I'm not a big fan of policers or (W)RED, but policers (or shapers) might especially be used if you truly want to cap the maximum amount of bandwidth a company might use, even if more is available.

"The second policy supposed to guarantee 30MBps and give additional 30MBps . . ."

As you note having 20 companies and 300 Mbps total, you can really only guarantee 15 Mbps each. Beyond the 15 Mbps, you might allow any one company access to all otherwise-unused bandwidth, or further limit their access to that bandwidth.
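As a sanity check on that arithmetic: hard guarantees can simply be summed against the link rate, and whatever is left over is what can be offered as shared, second-priority bandwidth. A trivial Python helper (the function and names are made up for illustration):

```python
# Hypothetical helper: verify per-company guarantees fit the link,
# and report what is left over as shared (second-priority) bandwidth.
def check_guarantees(cirs_mbps, link_mbps=300):
    total = sum(cirs_mbps.values())
    return total <= link_mbps, link_mbps - total

# 20 companies at 15 Mbps each exactly fill a 300 Mbps link.
ok, shared = check_guarantees({f"company{n:02d}": 15 for n in range(1, 21)})
print(ok, shared)  # True 0
```

The same check shows why 20 x 30 Mbps of "guarantees" cannot work on this link: the sum overshoots 300 Mbps by 300 Mbps.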

Assuming you charge for "guaranteed" bandwidth, and assuming you might add additional companies or increase some companies' guaranteed bandwidth, then if you don't want to be forced to immediately increase your MAN bandwidth, provide a lower guarantee, say 10 Mbps (which is what they have now, in theory).

 

class-map match-all company01egress
match qos-group 1
.
.
class-map match-all company##egress
match qos-group #

policy-map MANegressBandwidthAllocations
class company01egress
bandwidth remaining percent 1
fair-queue
!optionally shape [might not be supported, in child policy, on an 1100] or police maximum bandwidth that can be used
.
.
class company##egress
bandwidth remaining percent 1
fair-queue
!optionally shape [might not be supported, in child policy, on an 1100] or police maximum bandwidth that can be used

policy-map MANegressShaper
class class-default
shape average 300000000 !300 Mbps, may need to decrease by about 15% if shaper doesn't account for L2 overhead
service-policy MANegressBandwidthAllocations

policy-map LANingressCompany01
class class-default
set qos-group 1
.
.
policy-map LANingressCompany##
class class-default
set qos-group #

interface g0/0/0
service-policy out MANegressShaper

interface g0/0/1.101
service-policy in LANingressCompany01

Effectively managing return traffic is hit-or-miss. Yes, we can again shape or police subinterface egress, but as we're now downstream of our MAN link, limiting any one company's bandwidth usage might, or might not, ensure available bandwidth for the other companies.

 

policy-map LANegressCompany
class class-default
shape [or police] average 60000000 !just set to the highest rate permitted

interface g0/0/1.#
service-policy out LANegressCompany

 

Hi Joseph,

I believe I missed something in this configuration, so perhaps you will help me to understand this.

First of all, I realise I can't give a guaranteed bandwidth of 30 Mbps to 20 companies if I have a 300 Mbps line. The point is that companies will have different cirs, so some will have 1 Mbps, others 10 Mbps, others 30 Mbps, and so on. The sum of guaranteed bandwidth will be lower than 300 Mbps.

But I also want to give second-priority bandwidth to all companies. So if a company has 1 Mbps guaranteed bandwidth, it has an additional 19 Mbps as second priority (if available, and with a contention ratio), and that is it; everything above is dropped. So the max bandwidth shown in a speed test is 20 Mbps. Unfortunately, I can't see anything related to this in your configuration, unless I completely misunderstood it.

 

For MAN egress, minimum bandwidth guarantees are supported by bandwidth statements. I set all to 1%, so all would have an equal share, but since you now mention you want to be able to assign unequal bandwidth, just change the bandwidth statement, like:

 

class company01egress
bandwidth 1000 !that's in Kbps, so that's 1 Mbps, but you can assign 10 or 30 Mbps same way.
fair-queue
!optionally shape [might not be supported, in child policy, on an 1100] or police maximum bandwidth that can be used

 

For a maximum cap, use a shape or police command (as noted in the prior post), such as:

class company01egress
bandwidth 1000 !that's in Kbps, so that's 1 Mbps, but you can assign 10 or 30 Mbps same way.
fair-queue
!optionally shape [might not be supported, in child policy, on an 1100] or police maximum bandwidth that can be used

 

For bandwidth management, from MAN, i.e. MAN ingress or LAN egress:

policy-map LANegressCompany
class class-default
shape [or police] average 60000000 !just set to the highest rate permitted for the particular company

interface g0/0/1.#
service-policy out LANegressCompany

As you don't control the far side of the 300 Mbps link, you cannot really control traffic prioritization; i.e., for example, you cannot guarantee bandwidth (even policing the aggregate below 300 Mbps does not truly guarantee the far side will not burst to 300 Mbps; again, this is because you're downstream). As you do seem to want to cap the maximum bandwidth reaching your companies, the shaper or policer shown will control that.

If you're still confused, please let me know. What I'm suggesting is different from what you've been doing, and thought to do, but from what you've described, the foregoing better meets your goals.

BTW, when describing goals, for the best recommendation, you need to describe everything you envision. Your latest mention of wanting to offer different bandwidth allocations per company really isn't unexpected, but supporting that vs. offering all your companies the same bandwidth could have made a difference in approach.

Hi Joseph,

Thank you for this explanation and sorry I wasn’t clear enough.

As far as I understand, you said there is no effective way to guarantee bandwidth for each company in the download direction, which is a big problem for me. I just wonder, could you tell me what options I have here? Is it possible, for example, to split the 300 Mbps into 2 x 150 Mbps and allocate 150 Mbps to companies which need dedicated bandwidth (so let's say 10 companies x 15 Mbps each) and 150 Mbps to share between the rest? Or perhaps some other solution?

My problem is that some companies require dedicated bandwidth and some do not. So my idea was to introduce this prioritization, but it looks like it is not going to work and I can't use it to guarantee bandwidth for the companies which need it. So I'm looking for any reasonable solution which would help resolve this issue.

Yea, downstream management of upstream bandwidth is a very common problem.

Although, downstream, you cannot control bandwidth as you could upstream, you can "influence" upstream bandwidth. For example, TCP (i.e., later/recent implementations) might slow when a jump in RTT is detected (such as a downstream shaper may cause). TCP will slow when it detects drops (such as a policer will cause). Non-TCP traffic is hit or miss: some flows might slow if they have their own drop detection. Fortunately, most traffic uses TCP, so using a shaper or policer will likely have some effect in slowing the upstream source's transmission rate.

Again, though, this slowing is imprecise and requires the sender to "get the word" to slow down. I.e., true bandwidth guarantees (to us) cannot be provided.

BTW, possibly the best devices that could "influence" an upstream sender's transmission rate were products like Packeteer's (which, I believe, along with other vendors' versions, is no longer marketed). (NB: a couple of things they did for upstream TCP rate control, I recall, were to spoof the receiver's RWIN size and/or shape returning ACKs.)

For you, perhaps the best way you could truly manage bandwidth coming to you would be either to convince your MAN vendor to provide the control you desire, or for the MAN provider to allow you to co-locate a piece of equipment on their side of your MAN link, through which all your traffic will flow, and upon which you can implement the QoS management you desire. (If the MAN vendor won't provide either, consider shopping for another MAN vendor.)

The only other way I know of to get precise bandwidth control toward us, for varying purposes, was some form of bandwidth channel provisioning, which I don't know is possible on a MAN.

In other words, I can apply those policies in the opposite direction, and this may or may not help with downstream traffic prioritization (using bandwidth and police or shaping), but there is a chance it will? Unfortunately, the other options (MAN support or changing the MAN to another one) are not available to me.

Policies you apply to outbound traffic will provide most anything you want for outbound management.

Policies you apply to inbound traffic might "influence" bandwidth consumption coming to you, but cannot control traffic prioritization for the inbound traffic. You can, though, further limit what bandwidth your downstream clients receive.

Perhaps to make this clear, consider we have four clients, each sending us, outbound, 100 Mbps, but we only have the 300 Mbps link. As we control that outbound 300 Mbps, we can manage what happens to the 100 Mbps of excess, and further decide how to prioritize the clients' usage of the now-congested 300 Mbps.

However, now consider the converse: your four clients attempt to pull down 100 Mbps each. It will also congest on the far side's egress, but how is the excess managed? How can some traffic be prioritized? Basically, we control neither, as we don't manage the far side's egress. (Usually, your far-side service provider will just let your four clients' traffic battle it out among themselves using FIFO and BE "rules", which, on average, would probably account for about a 25% drop rate on each client's 100 Mbps stream.)

Once we receive the 300 Mbps (with potentially 25% dropped from each client's stream), what do you want to do? You can certainly shape/police each client's stream to whatever max rate they are entitled to. Let's say we police all four clients to 50 Mbps each. Okay, we then only forward, locally, 200 Mbps, which fits well into our overall 300 Mbps. Yet does that immediately reduce our overall received rate to 200 Mbps? No, it does not. Does that immediately reduce the far side's 400 Mbps to 200 Mbps? No, it does not. However, if this traffic is TCP, the drops on the far side's egress and our policed drops will both get the TCP senders to slow down such that the drops stop happening. However, once the drops stop happening, TCP will start to increase its transmission rate until drops happen again.
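That "slow on drops, then ramp until drops return" cycle is TCP's additive-increase/multiplicative-decrease behavior. A toy model, with made-up per-RTT units rather than a real TCP implementation, of one sender oscillating against a 50 Mbps policed cap:

```python
# Toy AIMD model: additive increase per RTT; multiplicative decrease
# when the downstream policer drops (rate above the cap). Units are
# illustrative Mbps per RTT; this is not a real TCP stack.
def aimd_rates(cap_mbps=50, start=10, increase=2, rtts=200):
    rate, rates = start, []
    for _ in range(rtts):
        if rate > cap_mbps:
            rate /= 2          # loss detected -> halve the rate
        else:
            rate += increase   # no loss -> probe for more bandwidth
        rates.append(rate)
    return rates

rates = aimd_rates()
tail = rates[50:]                   # skip the initial ramp-up
print(max(rates) > 50)              # True: rate briefly exceeds the cap
print(sum(tail) / len(tail) < 50)   # True: but it averages below it
```

Note the two properties discussed in this thread: the instantaneous rate does briefly exceed the policed cap, while the long-run average settles below it.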

BTW, the just-described over-rate condition actually happens on our side too. The difference is: the turnaround time to inform the sender, on our side, is faster; we selectively queue, allowing transient bursts (far side's queuing setup?); we can selectively drop traffic from some flows and not others (far side?); and we can prioritize when traffic is congested (far side?).

Yes, we can often "influence" traffic coming to us, but the granularity of effective traffic management is hugely different between managing inbound vs. outbound traffic.

Years ago, I was in a situation with a campus of about 2,000 users and a pair of Internet-connected T-3s (45 Mbps each), which were being totally overwhelmed by bandwidth demand. There was much I could do, and did, to deal with outbound traffic, but for inbound, I even went to the extent of shaping returning ACKs. All I can say is, my efforts took Internet usage from useless to merely horrible. We very much needed more bandwidth, or management on the far side, which the SP wouldn't provide (they very seldom do; it's more profitable to "sell" you more bandwidth). We got more bandwidth; problem solved.

Your 300 Mbps, hopefully, isn't being hammered. Most of the time, I suspect, your companies will get the bandwidth they expect. However, again, unless you control the far side, you cannot truly guarantee inbound bandwidth.

Again, let's say you policed all 20 of your customer companies to 10 Mbps, i.e., 300 Mbps not exceeded. True, no customer company will get more than its 10 Mbps, but as the far side can congest with traffic beyond 300 Mbps, what happens then?

Also true: for TCP traffic, the aforementioned policers should cause TCP to average near 10 Mbps, but the TCP transmission rate will, for short periods, exceed the 10 Mbps.

As most traffic uses TCP, effectively managing it requires understanding it. Not knowing how well you understand TCP, I've provided two references. The first has a nice animation in it, too.

https://witestlab.poly.edu/blog/tcp-congestion-control-basics/

https://courses.cs.washington.edu/courses/cse550/16au/notes/lect16.pdf

 

Joseph,

Many thanks for this clear answer. I really appreciate the time you spent writing it and explaining so clearly why we can't fully manage downstream. It is bad news for me, but at least I know what my real options are.

Only one thing is not fully clear to me. I will probably use policing instead of shaping because, if I'm not wrong about this, it will influence downstream faster since it eliminates the buffer (am I right about this?). Is there any disadvantage to applying all policies on the VLAN (in and out) as I have now? I noticed that you apply policies on the MAN interface, but it makes my configuration simpler if I do it on the VLAN instead. A speed test shows the limit exactly as I set it this way.

 

interface GigabitEthernet0/0/1.102
 service-policy input PM-ANY-QoS-D30MB-S60MB
 service-policy output PM-ANY-QoS-D30MB-S60MB
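On the policer-vs-shaper question just above: a policer discards excess immediately rather than buffering it, so the sender "gets the word" sooner, at the cost of losing the burst; a shaper delays the excess instead. A toy per-interval Python model (illustrative units only, not router behavior):

```python
# Toy per-interval model. Both limit traffic to `rate` units per interval;
# the policer discards the excess immediately, the shaper queues it.
def policer(arrivals, rate):
    sent, dropped = [], 0
    for a in arrivals:
        sent.append(min(a, rate))
        dropped += max(0, a - rate)
    return sent, dropped

def shaper(arrivals, rate):
    sent, queue = [], 0
    for a in arrivals:
        queue += a
        out = min(queue, rate)
        sent.append(out)
        queue -= out
    return sent, queue

burst = [100, 0, 0, 0]
print(policer(burst, 60))  # ([60, 0, 0, 0], 40): excess lost at once
print(shaper(burst, 60))   # ([60, 40, 0, 0], 0): excess delayed, not lost
```

The trade-off in miniature: the policer's drops signal TCP sooner, while the shaper smooths the burst through at the cost of added queuing delay.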

 
