WAN Edge - QoS - Queuing and Shaping - General Understanding

twc-admin
Level 1

Hello,  I've been banging my head against the wall figuring out how to properly deploy QoS at our WAN edges for all locations, specifically for controlling both upload and download speeds while still queuing voice/video traffic into an LLQ / priority queue.

At this point I just need some confirmation that I am going about this the correct way and that my understanding of how things work is accurate.  The Cisco guides and CBT Nuggets don't quite give you real-world scenarios.

 

So here it goes.  We have Site A, with a high-speed broadband Internet connection (non-symmetrical).  The ISP promises at least 115 Mbps DOWN / 15 Mbps UP.  Of course, locally in the LAN / switched network, everything is 1 Gbps connections or greater.  Our connection from the switch to the router is 1 Gbps (we'll call this the INSIDE interface) and our connection from the router to the ISP is 1 Gbps (call this the OUTSIDE interface).  Naturally, our congestion point is at the OUTSIDE interface.

 

Cisco vEdge routers are slightly different from IOS, but the concepts are still the same.

 

My understanding is that the shape command applies to traffic in the OUT or Tx direction.  If this is true and we apply shaping to the OUTSIDE interface of the router, it will effectively only control UPLOAD speeds, since traffic comes IN the INSIDE interface and is then sent OUT the OUTSIDE interface.  This could be read as UPLOAD.  Is this correct?
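If it helps, here's a minimal IOS-style sketch of that idea (vEdge syntax differs; the interface name and rate here are assumptions, not your actual config):

```
! Shaping acts on egress (Tx), so a shaper on the OUTSIDE interface
! only limits traffic leaving toward the ISP, i.e. UPLOAD.
policy-map SHAPE-UPLOAD
 class class-default
  shape average 15000000          ! 15 Mbps promised upstream, in bps
!
interface GigabitEthernet0/1      ! OUTSIDE interface (assumed name)
 service-policy output SHAPE-UPLOAD
```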

 

Next, if that previous statement is correct, we can simply apply shaping to the INSIDE interface.  This would effectively control traffic coming IN the OUTSIDE interface and then being sent OUT the INSIDE interface.  This could be read as DOWNLOAD.  Is this correct?
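Continuing the same kind of IOS-style sketch (assumed names again), the download side would be the mirror image; note a shaper here can only influence remote senders indirectly, via the drops and latency they observe:

```
! A shaper on the INSIDE interface acts on traffic egressing toward
! the LAN, i.e. Internet DOWNLOAD traffic.
policy-map SHAPE-DOWNLOAD
 class class-default
  shape average 115000000         ! 115 Mbps promised downstream, in bps
!
interface GigabitEthernet0/0      ! INSIDE interface (assumed name)
 service-policy output SHAPE-DOWNLOAD
```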

 

Moving on, assuming the previous two statements are correct: once we start to apply classification, queuing, and scheduling, how do we allocate the proper bandwidth percentages when the interfaces are 1 Gbps but the ISP is only offering 115 Mbps DOWN / 15 Mbps UP?

 

The major confusion I have is the following:  If we configure shaping rates of 115000 kbps (115 Mbps) on the INSIDE interface and 15000 kbps (15 Mbps) on the OUTSIDE interface, does this effectively shape all traffic at the correct ISP-promised speeds?  Another question:  If we begin applying queuing policies to these interfaces, do the "bandwidth percent" configurations apply to the "shaped" speeds?  Meaning, if we give a REALTIME traffic class (voice/video) 33% in a priority LLQ and apply it to the OUTSIDE interface, will this guarantee voice/video 33% of 15 Mbps?  And in the opposite direction, if we apply a similar or identical policy to the INSIDE interface, does this guarantee 33% of 115 Mbps?
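For reference, on IOS the usual way to make queuing percentages reference the contracted rate rather than the 1 Gbps interface speed is a hierarchical (parent shaper / child queuing) policy.  This is an illustrative sketch only; the class match criteria and names are assumptions, not a recommendation for any particular marking scheme:

```
class-map match-any REALTIME
 match dscp ef                    ! voice (assumed marking)
 match dscp af41                  ! interactive video (assumed marking)
!
policy-map CHILD-QUEUING
 class REALTIME
  priority percent 33             ! 33% of the PARENT shaped rate (15 Mbps here)
 class class-default
  fair-queue
!
policy-map PARENT-SHAPE-UP
 class class-default
  shape average 15000000          ! 15 Mbps
  service-policy CHILD-QUEUING    ! child percentages apply within this shaper
!
interface GigabitEthernet0/1      ! OUTSIDE interface (assumed name)
 service-policy output PARENT-SHAPE-UP
```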

 

Sorry for the long write-up.  I haven't had to do much with QoS in the past, and I want this SD-WAN deployment to make the end-user experience fantastic, especially for voice/video.

 

Thanks!!!!  Any input appreciated!!!

Accepted Solution

Correct: if you're going to use a shaper for ingress, it does need to be "slower" than the inbound bandwidth; otherwise no traffic will congest at the shaper, hence no drops, and possibly no ingress slow-down (NB: some later TCP implementations also monitor jumps in latency to indicate congestion).

Besides using a shaper, though, you might also police ingress traffic. A policer will generally drop traffic sooner than a shaper, but conversely (per the prior mention of latency jumps affecting some TCP implementations), without queuing, a policer might not impact ingress traffic any sooner.

Whichever you use, shaper or policer, the sender has to detect the congestion, and the detection time can be rather slow, especially if you're trying to keep bandwidth "free" for LLQ-type traffic. Remember, since there's no prioritization on the other side's egress, to "guarantee" LLQ-type bandwidth you effectively need to keep sufficient free bandwidth on the link so there's no congestion on the far side's egress at all. Further, remember that something like TCP, with its "slow start," can be very bursty. So, trying to keep TCP from bursting into the bandwidth you're trying to keep free for LLQ-type traffic means you often need to really, really trim back other traffic's bandwidth. Or so has been my experience: once, while waiting for an ISP bandwidth upgrade, I actually used these techniques for a few weeks, i.e. policing ingress and shaping outbound TCP ACKs. They work, but I found I had to trim back "a lot," in the ballpark of 75%; effectively you lose a lot of your ingress bandwidth.
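As a rough IOS-style illustration of the policer alternative (names and the exact rate are assumptions; the point is to police below the contracted 115 Mbps so drops happen where you control them, not in the ISP cloud):

```
policy-map POLICE-DOWNLOAD
 class class-default
  police cir 100000000 conform-action transmit exceed-action drop
!
interface GigabitEthernet0/1      ! OUTSIDE interface (assumed name)
 service-policy input POLICE-DOWNLOAD    ! ingress, from the ISP
```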


5 Replies

twc-admin
Level 1

To add a bit more to my above story, I guess the real confusion is INBOUND / download traffic.  For example, if we go to Cisco.com, there may only be a handful of outbound/inbound packets just to pull the web page in and browse.  But say we then download a file, like an IOS image or something.  At the same time, we have maybe a Webex Teams meeting with a few participants sharing video.  Yes, I know the video quality greatly relies on the meeting host's Internet and the QoS on their end, but how do we ensure that inbound/download on the participant end gets priority (LLQ) over the HTTP file download?

Hope someone takes the time to read all of this haha.

What you understand, and perhaps desire to do for your upload bandwidth management, is correct/fine.

What you understand, and perhaps desire to do for your download bandwidth management, isn't really correct, nor does it really accomplish what you desire. (Understandable that your real confusion is inbound/download.)

Inbound traffic management is just about unmanageable on a Cisco router. For flow-rate-adjusting traffic, like TCP, you might be able to "control" the download rate by dropping ingress packets, as this is often seen as an indication of congestion. And/or, for some protocols, again like TCP, you can shape the egress/return ACKs, which can slow a sender's rate. On a Cisco router these techniques can work, but they are imprecise, and the sender often reacts slowly.

There are 3rd party traffic management appliances designed for this. They are much better in precision, and can do things a Cisco router cannot, such as spoof the size of a TCP receiver's RWIN. That said, they still have issues with ingress traffic.

The best way to deal with ingress traffic, is on the other side of the link managing its egress. Unfortunately, ISPs generally never support QoS on their side of the connection.

A "real world" solution, that I often recommend, is since ISP bandwidth is relatively inexpensive, obtain more than one connection, and dedicate links for certain purposes. E.g. (at least for ingress) LLQ type traffic on one link, non-LLQ type traffic on the other. (Actually my usual recommendation is for business site-to-site VPN on an ISP link different from an ISP link used for general Internet traffic, insuring separation based on traffic type will be more difficult.)

To your other questions:
". . . do the "bandwidth percent" configurations apply to the "shaped" speeds?"
Yes.
"Meaning, if we set a REALTIME Traffic Class (voice/video) to have 33% in a Priority LLQ and it is applied to the OUTSIDE interface, will this guarantee voice/video has 33% of 15mbps?"
Yes, but generally LLQ can use up to 100%, as long as overall bandwidth doesn't try to go beyond 100%. If it does, LLQ will police to its configured rate.
"And in the opposite direction, if we apply similar OR same policy to the INSIDE interface, does this guarantee 33% of 115mbps?"
Likely/probably not (the full explanation is possibly both long and complex).

Thanks for the response. Yes, I did read that inbound traffic from the Internet isn't really possible to control outside of a 3rd-party appliance, only that it might be, and "might" is the key word, slightly controlled by setting the inbound/download shaped speed to slightly less than what the ISP is promising. That way you can at least try to queue traffic, and if it starts to drop packets, TCP may adjust its flow accordingly.
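A sketch of that "shape slightly less than the ISP rate" idea (IOS-style; the interface name is assumed, and 90% is just an example margin):

```
! Shape DOWNLOAD below the promised 115 Mbps so the queue, and any
! drops, form on this router where queuing policy can act, not at the ISP.
policy-map SHAPE-DOWNLOAD-UNDER
 class class-default
  shape average 103500000         ! ~90% of 115 Mbps
!
interface GigabitEthernet0/0      ! INSIDE interface (assumed name)
 service-policy output SHAPE-DOWNLOAD-UNDER
```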

Site-to-site inbound/download will be easily controlled by the outbound/upload shape/queue of the remote site, like you said.

I was always fuzzy on the QoS percentages when you're dealing with mismatched speeds. So that is good confirmation to know that the QoS percentages are based on what you're shaping at.

Definitely cleared up my questions and confirmed what I thought.

Thanks again,


Oh, a couple other things . . .

First, I suspect many Cisco shapers and policers don't account for L2 overhead. If it seems your device does not, just shape/police slower to allow for that overhead, either average or worst case.
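To put a rough number on it: for 1500-byte packets, Ethernet adds about 38 bytes per frame (14 header + 4 FCS + 20 preamble/inter-frame gap), roughly 2.5% overhead, and small packets cost proportionally more. So, as a sketch, shaving ~5% off the contracted upload rate:

```
policy-map SHAPE-UPLOAD-L2
 class class-default
  shape average 14250000          ! 15 Mbps * 0.95, margin for L2 overhead
```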

Second, your OP has "The ISP promises at least 115 Mbps DOWN / 15 Mbps UP." A variation of an old joke: how can you tell an ISP is lying? A: When their lips move.

Variable bandwidth causes its own issues, mostly, of course, if it goes lower than "promised." Providing more than promised means you don't obtain its benefit, and for ingress it would allow more traffic that could possibly impact sending sources (for example, perhaps causing more dup ACKs or forcing a sender into slow start).