10-17-2020 06:18 PM - edited 10-17-2020 06:24 PM
Experts,
While implementing basic QoS, I'm trying to understand a basic concept: whether some traffic might always impose congestion.
1. Understood basic principle: "Queuing policies only engage when the interface is congested" ~Cisco. Got it. Unless there is congestion, the traffic is unregulated by policing.
2. Next basic principle: I believe I correctly understand that, in a specific case like a WAN link (Internet connection), QoS policies/queuing will not be effective if the device is not configured for the contracted service rate (sub-line-rate), because the device assumes full line rate. Example: you connect your 100 Mbps Ethernet Internet connection to a Gigabit Ethernet port. You can hit your ISP's rate limit, but the router only sees roughly 10% utilization on that interface, so it sees no reason to engage its policies, and the ISP starts policing you on their side.
A provided example of accounting for sub-line-rate was:
policy-map WAN-EDGE-QUEUING
 (bunch of classes with priority/bandwidth, etc.)
policy-map INTERFACE-EGRESS
 class class-default
  shape average 100M
  service-policy WAN-EDGE-QUEUING
interface GigabitEthernet0/0
 description Internet connection at 100 Mbps
 service-policy output INTERFACE-EGRESS
3. (TL;DR) My actual question:
Assume a scenario where you have a site-to-site VPN between two sites with the same contracted Internet speed, say 100 Mbps each. A user starts a file transfer of some kind: a download from an internal web server, or another stream whose protocol will go as fast as possible until it is signaled to slow down.
In this case, if you correctly configured your WAN link to match your contracted rate, won't you always end up causing congestion, since the transfer being tunneled to the user at your remote site will go as fast as possible, even if there is hypothetically no other traffic?
Using Cisco's 8-class WAN edge model as an example:
policy-map POLICYMAP_EGRESS-WAN-EDGE
class VOICE
priority percent 10
class INTERACTIVE-VIDEO
priority percent 23
class NETWORK-CONTROL
bandwidth percent 5
class SIGNALING
bandwidth percent 2
class MULTIMEDIA-STREAMING
bandwidth percent 10
fair-queue
random-detect dscp-based
class CRITICAL-DATA
bandwidth percent 24
fair-queue
random-detect dscp-based
class SCAVENGER
bandwidth percent 1
class class-default
bandwidth percent 25
fair-queue
random-detect dscp-based
In this policy-map, if your traffic fell into class-default and there was no congestion, you could exceed the 25% bandwidth and not be policed. In the case where a transfer always tries to go as fast as possible, you'll always cause congestion and then get policed down to 25%, leaving 75% unallocated, right?
Could anyone clarify whether this is correct or totally wrong, and if wrong, how?
Thanks in advance
10-18-2020 08:21 AM
1. Understood basic principle: "Queuing policies only engage when the interface is congested" ~Cisco. Got it. Unless there is congestion, the traffic is unregulated by policing.
I'm unsure you're using the term policing in its usual sense.
Policing is another term for rate limiting, and it doesn't require congestion.
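To illustrate the difference, here's a sketch of a policer attached with MQC (hypothetical policy name and interface; the point is that police acts on the rate at all times, dropping excess whether or not the interface is congested):

```
policy-map POLICE-100M
 class class-default
  police cir 100000000 conform-action transmit exceed-action drop
interface GigabitEthernet0/0
 service-policy output POLICE-100M
```

Contrast with shape average, which buffers and delays excess traffic rather than dropping it outright.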
2. Next basic principle: I believe I correctly understand that, in a specific case like a WAN link (Internet connection), QoS policies/queuing will not be effective if the device is not configured for the contracted service rate (sub-line-rate), because the device assumes full line rate.
Many WAN links are not rate limited. However, even when they are, there might be a link somewhere in the end-to-end path with less bandwidth, either physically or due to congestion (which can also apply to a physical link segment with even more bandwidth).
E.g.
rtr1 <100m> rtr2 <10m> rtr3
rtr41 <100m> rtr5 <1000m> rtr6
rtr42 <100m> rtr5 <1000m> rtr6
.
.
rtr411 <100m> rtr5 <1000m> rtr6
rtr421 <100m> rtr5 <1000m> rtr6
In the above, rtr1 "sees" 100 Mbps, but traffic sent to rtr3 is limited by rtr2's 10m link. For this case, if we shape rtr1's egress to 10 Mbps, we can manage congestion (sort of) as if we were defining the policy on rtr2.
In the above, the rtr4x routers also "see" 100 Mbps to rtr5, but if there's enough concurrent traffic from all the 4x routers, they might not each be able to obtain 100 Mbps. For this case, ideally rtr5 would have a policy we know or control. If not, we might be able to "dynamically" shape our traffic to match what appears to be the available bandwidth.
BTW: these are the two classic cases that create a bottleneck.
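For the first case, the shaping on rtr1 might look something like this (a hypothetical fragment; interface and policy names are made up, and the shape rate matches rtr2's 10m downstream link):

```
policy-map SHAPE-TO-10M
 class class-default
  shape average 10000000
interface GigabitEthernet0/1
 description 100m link to rtr2 (10m bottleneck beyond)
 service-policy output SHAPE-TO-10M
```

A child queuing policy could be nested under class-default here, the same way WAN-EDGE-QUEUING was nested in your INTERFACE-EGRESS example, so that queuing engages against the shaped rate rather than the physical line rate.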
3. (TL;DR) My actual question:
Assume a scenario where you have a site-to-site VPN between two sites with the same contracted Internet speed, say 100 Mbps each. A user starts a file transfer of some kind: a download from an internal web server, or another stream whose protocol will go as fast as possible until it is signaled to slow down.
In this case, if you correctly configured your WAN link to match your contracted rate, won't you always end up causing congestion, since the transfer being tunneled to the user at your remote site will go as fast as possible, even if there is hypothetically no other traffic?
Maybe, maybe not. There are other variables you haven't defined. For example, what if the sending host's port is also 100 Mbps? What protocol is being used? If TCP, what's the receiving host's RWIN vs. the path's BDP? Also, if the transfer uses TCP, are any drops causing TCP to revert to slow start or congestion avoidance? And if TCP, is a later variant being used that slows based on jumps in latency or ECN?
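On the RWIN vs. BDP point, a quick back-of-the-envelope check shows why a transfer may never fill the link even with no competing traffic. The numbers below (100 Mbps path, 40 ms RTT, classic 64 KB unscaled receive window) are illustrative assumptions, not figures from this thread:

```python
def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to fill the path."""
    return link_bps / 8 * rtt_s

def rwin_throughput_bps(rwin_bytes: int, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed receive window."""
    return rwin_bytes * 8 / rtt_s

bdp = bdp_bytes(100e6, 0.040)               # 500000.0 bytes to fill the pipe
ceiling = rwin_throughput_bps(65535, 0.040)  # ~13.1 Mbps with an unscaled RWIN
print(bdp, ceiling)
```

If the receiver's window is smaller than the BDP, "as fast as possible" may still land well under the 100 Mbps contracted rate, so the shaper would never even engage.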
Anyway, what's important to understand is that a shaper provides an egress point with less bandwidth than the physical port. Basically, its behavior should be (mostly) as if you replaced your higher-bandwidth port with a lesser-bandwidth port.
E.g.
given rtr1 <100m> rtr2, the behavior of rtr1 (shape 10 =>) <100m> (<= shape 10) rtr2 should be much like rtr1 <10m> rtr2
In this policy-map, if your traffic fell into class-default and there was no congestion, you could exceed the 25% bandwidth and not be policed. In the case where a transfer always tries to go as fast as possible, you'll always cause congestion and then get policed down to 25%, leaving 75% unallocated, right?
No.
In a regular bandwidth class, the allocation sets a guaranteed minimum, not a cap, and even the minimum isn't always just the specified minimum.
E.g.
policy-map Sample
 class a
  bandwidth percent 50
 class class-default
  bandwidth percent 25
What's the minimum guarantee for class a and class-default? (A: 2/3 for class a and 1/3 for class-default.)
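The arithmetic behind that answer, sketched in Python: when only these two classes compete, the unallocated bandwidth is redistributed in proportion to the configured weights, so the effective minimums come from the 50:25 ratio rather than the absolute percentages:

```python
from fractions import Fraction

def min_guarantees(weights):
    """Effective minimum share per class: bandwidth split in ratio of weights."""
    total = sum(weights.values())
    return {name: Fraction(w, total) for name, w in weights.items()}

shares = min_guarantees({"a": 50, "class-default": 25})
# 50:25 ratio -> class a gets 2/3, class-default gets 1/3
print(shares)
```

This is only the competing-traffic floor; when the other class is idle, either class can use the whole link, which is why bandwidth is a guaranteed minimum and not a cap.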
If this is unclear, or you're still confused, please post additional questions.