Traffic shaping behavior with inbound traffic

david.alfaro1
Level 1

Hello Dears

 

I hope you are doing well

 

To better understand traffic shaping: we know that traffic shaping should be implemented on low-bandwidth links, on outbound traffic, and we know its traffic-smoothing characteristics, the buffers, and so on.

But my question, beyond the above and in light of your experience: how does incoming traffic behave? What happens with the input buffer? Is the same buffer used for incoming traffic as well? What happens if the incoming traffic exceeds the committed rate? Is the inbound traffic buffered with the same configured Bc and Be parameters? Do Bc and Be cover the total inbound + outbound traffic, or just the outbound traffic?

 

Kind regards


9 Replies

Joseph W. Doherty
Hall of Fame

Generally, traffic shaping cannot be applied to ingress traffic.

Hello Joseph

I understand that traffic shaping is not applicable to ingress traffic. My question is not whether it applies to incoming or outgoing traffic; I fully understand that, as a rule, it applies to outgoing traffic, and I also understand how traffic shaping works with outgoing traffic. My original question is, in general (so as not to repeat the whole question): what happens to incoming traffic when traffic shaping is applied on an interface?

 

Generally, nothing different between ingress and egress. I.e., if egress cannot transmit as quickly as ingress is received, traffic is queued at egress. It doesn't much matter whether the slower egress transmission rate is due to a physical interface or to a shaper.
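The arithmetic behind that point can be sketched in a few lines. This is a toy illustration of my own (not Cisco code, and the function name and units are made up): whatever ingress delivers above the egress rate has to sit in the egress queue, and once the queue's (usually logical) limit is hit, the remainder must be dropped.

```python
def queue_depth_after(ingress_mbps, egress_mbps, seconds, queue_limit_mb=None):
    """Return (queued_mb, dropped_mb) after `seconds` of sustained traffic.

    The excess of ingress over egress accumulates in the egress queue;
    anything beyond the queue limit is dropped. Mbit is converted to MB.
    """
    excess_mb = max(0.0, ingress_mbps - egress_mbps) * seconds / 8
    if queue_limit_mb is None or excess_mb <= queue_limit_mb:
        return excess_mb, 0.0
    return queue_limit_mb, excess_mb - queue_limit_mb

# 100 Mbps arriving, 10 Mbps leaving, for 1 second, unlimited buffer:
print(queue_depth_after(100, 10, 1))                    # 11.25 MB queued, 0 dropped
# Same load, but only 1 MB of buffer: the rest must be dropped.
print(queue_depth_after(100, 10, 1, queue_limit_mb=1))  # 1 MB queued, 10.25 MB dropped
```

Note that nothing in the model cares whether the 10 Mbps egress limit comes from a physical interface or a shaper, which is exactly the point being made above.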

Hello Jopseph

Thank you very much for the explanation

Let me see if I understand: both incoming and outgoing traffic, i.e. all throughput, will be determined by the queuing and the traffic shaping on the interface; or, in other words, the "bottleneck" will always be in the outgoing traffic, controlled by traffic shaping. Is that right, or am I wrong?

"Always", actually no, but "often" or "usually", yes.

In some cases there can be ingress queuing, but usually that has nothing to do with any egress bottleneck (including the bottleneck an egress traffic shaper may create, although a traffic shaper can create an ingress issue).

Ingress queuing, or ingress drops, may be caused by insufficient platform forwarding capacity (which a shaper can help you avoid exceeding) and/or an internal bandwidth bottleneck between ingress ports and egress ports (ports, plural, is likely a requirement).

Hello Joseph,

 

Thank your for your patience

I understand. Then, in severe cases on low-speed interfaces, where traffic shaping is commonly applied: if, under severe congestion, the output buffers, which are finite (regardless of what is entering or inbound), are exceeded, i.e. at some bitrate the buffers become completely full and traffic shaping reaches its maximum capacity, then we would be talking about packet drops due to total buffer occupation once that excessive bitrate is exceeded. Is that correct?

 

Kind regards,


Correct, although the buffer-drop issue isn't specifically tied to "low-speed interfaces", "severe congestion" or a traffic shaper (or policer).

The general cause of drops is an ingress rate that exceeds the egress rate: if the excess is being buffered/queued and the storage capacity is exceeded (usually logically, not physically), you're forced to drop packets (incidentally, the victim need not be just the newly arrived packet, although that's the norm).
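A minimal sketch of that drop mechanism (my own toy model, not any vendor's implementation): a FIFO with a finite logical capacity that tail-drops new arrivals once full, which is the "norm" drop policy mentioned above, though not the only possible one.

```python
from collections import deque

class FiniteQueue:
    """Toy FIFO with a finite (logical) capacity and tail-drop."""

    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:  # storage capacity exceeded
            self.drops += 1               # forced to drop (here: the new arrival)
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

# Offer 5 packets to a 3-packet queue while nothing is being drained:
q = FiniteQueue(capacity=3)
for pkt in range(5):
    q.enqueue(pkt)
print(len(q.q), q.drops)  # 3 queued, 2 dropped
```

Other policies (e.g. dropping a packet already in the queue, as in head drop or RED) change which packet is the victim, but not the underlying cause: arrivals outpacing departures with nowhere left to store the difference.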

BTW, my patience is far from being exceeded, but as you seem to be asking the same question and/or focused on traffic shaping, either my prior replies haven't been clear (always a very real possibility, due to my writing deficiencies), or there's an aspect of your question I'm missing and not answering.

Possibly the following will help.

Consider a network device with just two interfaces.

If said device has two interfaces of the same bandwidth, such as two 100 Mbps interfaces, there should be no need for (normal) buffering/queuing.

However, if the above device has one 100 Mbps interface and one 10 Mbps interface, then 10 to 100, like 100 to 100, shouldn't need to buffer/queue. However, 100 to 10 MAY need to buffer/queue. (Do you understand why "may"?)

If, again, the device has two 100 Mbps interfaces, but we traffic shape one at 10 Mbps, the drop effect should be, more or less, like the device with a single 100 Mbps interface and a single 10 Mbps interface. I.e., especially in regards to packet drops from exceeding the available buffer resources, it matters not whether a traffic shaper is being used rather than a physical interface of like bandwidth.

NB: a traffic shaper's behavior, in certain aspects, will never correspond to that of a physical interface of like bandwidth; in other aspects, parameters for a physical interface and a shaper do need to correspond.

Hello Joseph

Truly, I appreciate your explanation. You are right, I was clinging to the idea that traffic shaping had something to do with the management of inbound traffic, but that is down to factors beyond traffic shaping, which you have explained; now it is very clear.

Your patience has a super buffer beyond the unimaginable; again, thanks a lot. I will be keeping this conversation in my library.

Kind regards

I'm pleased to read you found my explanation helpful.

Again, just remember, a traffic shaper is used to simulate a slower/lower-bandwidth physical interface, so when it comes to queuing/buffering, pretty much the same issues arise, for example, for either a 10 Mbps physical interface or a 10 Mbps shaper. (Also again, shapers don't behave exactly like physical interfaces, but for the reasons you would want to traffic shape, the differences usually don't matter, or at least don't much matter.)

As to ingress, for traffic management, you can filter with an ACL or police with a policer.

Devices often do have "ingress" queues, but it's generally an "issue" if ingress traffic is queued, and because it's an unusual issue, you often don't have many options to deal with it beyond increasing the queue depth allocation.

Also, again, when you do run short of buffers, you've often hit a logical allocation limit, not a physical RAM exhaustion issue; but huge queues have their own negative impacts.

Actually, there's a science behind "right sizing" queue/buffer allocations, but it's a subject not well understood by many network engineers, who often think the "ideal" is always zero drops, or that a particular level of bandwidth utilization is inherently good or bad.
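One widely cited rule of thumb from that "right sizing" science (my addition, not something stated in this thread) is to buffer roughly one bandwidth-delay product, so that a single TCP flow can keep the link busy while it recovers from a loss:

```python
def bdp_bytes(link_bps, rtt_seconds):
    """Bandwidth-delay product in bytes: link rate times round-trip time."""
    return link_bps * rtt_seconds / 8

# Example: a 10 Mbps shaped link with a typical 50 ms RTT.
print(bdp_bytes(10_000_000, 0.050))  # 62500.0 bytes, i.e. about 62.5 KB
```

Later research argues this can be reduced substantially when many flows share the link, which is part of why the subject resists simple "zero drops is ideal" intuitions.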
