
QOS - buffers and WRED

eitanbenari4
Level 1

Hi everyone,
I'm currently studying QoS and am a bit confused about buffers and queues. I have a question regarding WRED: if I want to implement it, should I configure it only on the outgoing interface or on each class separately?

From what I understand, each class gets a certain amount of buffer memory allocated from the interface, so I'm wondering if it's enough to apply WRED on the classes only or if it’s also necessary on the interface itself.

Also, someone in a chat suggested using policing, but that doesn’t make much sense to me because I know policing is a harsh tool usually used by service providers, and it doesn’t seem logical to drop your own traffic aggressively inside your network.

I’d appreciate a detailed explanation of the different types of buffers and how they are typically managed in professional environments. Thanks!

9 Replies

Joseph W. Doherty
Hall of Fame

regarding WRED: if I want to implement it, should I configure it only on the outgoing interface or on each class separately?

Depends on your QoS goals.  That said, I'm unsure whether WRED, in later IOS versions, is still directly settable on an interface without being embedded within a service policy.

I'm wondering if it's enough to apply WRED on the classes only or if it’s also necessary on the interface itself.

As classes can cover all traffic, there shouldn't be a need to apply it on the interface too.  Plus, I recall (???) that when interface WRED was supported, it may have been exclusive, i.e. you couldn't use it together with a service policy.  (BTW, such a limitation may not have applied when subinterfaces were being used.)
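For instance (purely illustrative - the class name, DSCP values and percentages are made up, and exact syntax and support vary by platform and IOS version), per-class WRED normally lives inside the egress service policy, something like:

class-map match-any BULK-DATA
 match dscp af11 af12 af13
!
policy-map WAN-EDGE-OUT
 class BULK-DATA
  bandwidth remaining percent 30
  ! WRED operates on this class's queue only
  random-detect dscp-based
 class class-default
  ! and/or on the default class's queue
  random-detect
!
interface GigabitEthernet0/1
 ! the policy, and the WRED within it, is attached at the egress interface
 service-policy output WAN-EDGE-OUT

With the classes covering all the traffic, no separate interface-level random-detect is needed on top of that.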

Also, someone in a chat suggested using policing, but that doesn’t make much sense to me because I know policing is a harsh tool usually used by service providers, and it doesn’t seem logical to drop your own traffic aggressively inside your network.

Don't know the whole context of your chat session, but, perhaps, there are misunderstandings about both policing and WRED (which is common with QoS, as its workings are often not well explained, IMO).
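Rough distinction, for what it's worth: a policer enforces a rate and drops (or re-marks) the excess regardless of whether the egress queue is congested, while WRED drops probabilistically based on a queue's average depth, i.e. only under congestion.  A made-up sketch of each (illustrative values only; syntax varies by IOS version):

policy-map POLICE-EXAMPLE
 class BULK-DATA
  ! hard cap: anything over 10 Mbps is dropped, congested or not
  police cir 10000000
   conform-action transmit
   exceed-action drop
!
policy-map WRED-EXAMPLE
 class BULK-DATA
  bandwidth remaining percent 30
  ! drops ramp up only as this queue's average depth grows
  random-detect

Whether either is appropriate depends on your goals, not on who's using it.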

I'd appreciate a detailed explanation of the different types of buffers and how they are typically managed in professional environments.

I don't believe I fully/correctly understand what you're asking.

Are you actually just asking about using WRED?

Hey, thanks. What I asked about buffers is just for my basic understanding - what types of buffers exist? I've heard terms like interface buffer, input buffer, output buffer, main buffer - it's very confusing. I understand it's a form of dedicated RAM for QoS tasks, and that the RAM of the interface the service-policy is applied on gets divided among the classes. Again - very confusing.

Plus, if there are multiple buffers for each class, I'm guessing Weighted Random Early Detection works on them individually to prevent tail drop.

 

Generically, buffers are just temporary storage.  That temporary storage, across its various uses, might all be of one kind and all managed one way; or, even if of one kind, it might have multiple layers of management due to different usage needs; or there might be different kinds, each with the foregoing management approaches.

The various buffer types you've mentioned are tied to different usages of buffering, which may or may not use the same kind of buffer.

You mention "dedicated RAM for qos tasks" maybe it's all main memory RAM, and maybe it's not, and maybe it's dedicated and maybe it's not.  It would depend on the platform and its IOS.

When it comes to buffers, sometimes you have an option to physically increase the resource, sometimes not.  Sometimes you have settable buffer options, sometimes not.  And, even if you have settable buffer options, values are limited.

 


Thanks for the response.
I was hoping for something a bit more concrete – especially in terms of how these different buffer types (interface, input/output, main) function and relate to QoS mechanisms like classification and WRED.
Your answer was very general and didn’t really clarify those distinctions. If you or someone else can provide a more structured explanation or point me to a clear source, I’d really appreciate it.


 

I was hoping for something a bit more concrete

I completely understand.  Unfortunately, my generic response is not by accident.

This is because you don't, I believe, appreciate the magnitude of your questions, especially when asking for a "detailed explanation".  The magnitude is due to both how generic your questions are and the fact that there are many variations in implementations.

Literally, there are books written on these subjects, so it's rather difficult to provide detailed explanations in a few paragraphs.

The foregoing doesn't mean there's anything wrong with your questions, nor that it's not worthwhile learning this information, but these forums are, perhaps, not the place to learn such detailed information.

Basically, your questions are somewhat like asking for a detailed explanation of routing and, with WRED, also like expanding that explanation with all the details of a particular routing protocol.

I'm unaware of a single good source to answer your questions, in detail.  But, to try to truly convey the magnitude of your questions, let's start with a Cisco article on buffer tuning: https://www.cisco.com/c/en/us/support/docs/routers/10000-series-routers/15091-buffertuning.html

Read it.  After doing so, did you find it all understandable?

That article is rather old and, I believe, doesn't mention the later buffers tune automatic feature.  Given that feature, do you still need to fully understand the earlier reference?
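For orientation, that article is about the global system buffer pools, and tuning them looks roughly like the below - the values are made up purely for illustration, and the automatic option only exists on IOS versions that support it:

! let IOS adjust the public buffer pools itself, where supported
buffers tune automatic
!
! or tune a specific public pool manually, e.g. the "small" pool
buffers small permanent 150
buffers small max-free 300

None of which is the same thing as the per-class queues inside a service policy.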

There's a reason I earlier wrote "Are you actually just asking about using WRED?", as I'm trying to narrow your questions so they can reasonably be answered.  Oh, BTW, unlike the way WRED is usually presented in much QoS literature, using it well (depending on your definition of "well") is, I believe, so difficult that I generally suggest it not be used at all unless you're a QoS expert.

BTW, on these forums, I'm usually considered the resident QoS expert.  So, modesty aside, it's usually rare anyone else will provide "better" answers in the QoS subject area, although, be warned, I often disagree with much QoS "book" information (as I found much of it isn't optimal in the real world, at least as it's often presented).

Also, BTW, I'll gladly assist you in your QoS learning, as I can, but again, I believe your questions are currently too broad for me to reasonably provide detailed answers.

 

Thanks! I understand it’s a broad topic. I’m currently studying for the ENCOR exam, and I’m just not a fan of memorizing commands without really understanding what’s going on under the hood, but I guess I'll get that over time.

 

 

I'm very much an advocate of learning as much as possible about what's going on under the "hood", although exams don't often require that.

Which is why those with just a paper certificate may be unable to solve some, even many, network issues.

The ideal is a true understanding that lets you solve network problems both with stock answers and by figuring out the answer.  Many exams can easily assess the former, but the latter can be difficult to assess, especially as there might be many acceptable answers.

For example, with QoS, when do you need it?  If I tell you one interface never shows more than 5% utilization, one shows 100% utilization, one shows no drops, and one shows drops, can you tell whether QoS should be applied on any of those 4 interfaces, and if so, how?

Fortunately, it's unlikely an exam will ask such a question.

Again, if you can provide a focused question, like an exam question, I likely can provide you an answer.

If you actually want to learn QoS, I might be able to help you there too.  It's actually not too difficult to learn, but it requires a broad base of understanding of traffic before getting into the various QoS approaches.  Usually, most QoS material doesn't start with such basics.

Simple example: take a typical network device interface.  It usually has default software FIFO queues for ingress and egress.  Do you know of the separate physical/hardware rx/tx rings?  Are the queue depth settings, for both the rings and the software queues, optimal for you?
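For reference, on many IOS routers those knobs look something like the below; the numbers are made up, defaults and ranges differ by platform, and tx-ring-limit isn't available on every interface type or platform:

interface GigabitEthernet0/1
 ! software output queue depth - often 40 packets by default
 hold-queue 200 out
 ! software input queue depth
 hold-queue 100 in
 ! hardware transmit ring depth, where supported
 tx-ring-limit 16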

So, without getting into subjects like WRED or policing usage, if you cannot answer the above questions, do you think it's really appropriate to get into them?

So, an exam may accept as a correct answer that WRED is used to avoid global tail drop synchronization in a FIFO queue (true), but it's unlikely to expect you to understand "correct" usage of WRED's multiple parameters.  (For the latter, the Cisco recommendation is likely to work with TAC - laugh.)
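Those parameters being, per precedence or DSCP value, the minimum threshold, maximum threshold, and mark probability denominator, plus the exponential weighting constant for the average queue depth.  Purely as an illustration of what there is to get "correct" - the numbers below are made up, not a recommendation:

policy-map WAN-EDGE-OUT
 class BULK-DATA
  bandwidth remaining percent 30
  random-detect dscp-based
  ! af11: start random drops at average queue depth 24, drop all at 40,
  ! with at most 1 in 10 packets dropped just below the max threshold
  random-detect dscp af11 24 40 10
  ! how heavily the instantaneous queue depth is smoothed into the average
  random-detect exponential-weighting-constant 9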

Again, feel free to post questions, but some can be practically difficult to answer in a few paragraphs.

Yeah I'm not really familiar with those subjects 

I’m still at a relatively early stage – I’ve just recently started getting familiar with concepts like WFQ, CBWFQ, strict priority, token bucket CIR, PIR, BC, BE, and TC. I’m currently working through the CCNP/CCIE Enterprise book, but I’m also looking for additional resources. If you have any recommended books or other solid material on the subject, I’d really appreciate it. Thanks!

Yeah I'm not really familiar with those subjects

Exactly!  Which isn't meant as a ding against you.

I’m still at a relatively early stage

As we all were at some point in time.

I’ve just recently started getting familiar with concepts like . . .

Yup, and whatever you've been studying is probably incredibly useful for passing a Cisco certification exam but, IMO, not so much for using those concepts effectively in the real world.

. . . I’m also looking for additional resources. If you have any recommended books or other solid material on the subject . . .

Alas, none that I've found are truly beneficial for a lot of real-world QoS.  Understand, I extensively studied as much material on QoS, both Cisco and non-Cisco, as I could before embarking on my journey of using QoS effectively.  But I found that, beyond "book" QoS, like placing something like VoIP into a PQ or LLQ, such material left much unaddressed or didn't work very well (assuming you just copied what you read).

So, if you just want to be able to pass Cisco certification questions concerning QoS, the Cisco Press books on QoS would be excellent material.  (That's not to say that material doesn't have other useful nuggets, because it does.  But overall, for useful real-world QoS, it's perhaps only as good as Cisco's AutoQoS, which I'm not a particular fan of either.)

Again, I don't believe learning QoS is really that difficult, except that QoS materials don't, I believe, provide the informational foundation for using the various QoS techniques.

BTW, even what really truly falls under "QoS" is debatable.

For example, I was presented with a (real-world) problem: a weekend database transfer was being done, and it needed not to run into weekday hours.  So additional WAN bandwidth (NB: a link between the USA and Europe) was acquired - enough that there should have been more than adequate bandwidth available to complete the data transfer within 48 hours.  Problem was, the transfer was NOT using all the available bandwidth - but why not?

Is this a QoS issue?  Personally, as the transfer wasn't using all available bandwidth, I thought of it as such, but perhaps such categorization is debatable.

Pretty much as soon as I was made aware of this issue, I thought I knew the underlying cause, and I was correct.  I also mentioned a couple of ways to work around it, one of which was adopted (although, laugh, they were aghast at doing what I suggested), and the transfer rate increased by 5x, which eliminated the problem.  Yet I don't recall the underlying issue ever being addressed in QoS material, though it's often addressed in particular cases (usually LFNs) of slower-than-expected data transfers.

BTW:

Issue (spoiler): BDP - bandwidth delay product

Suggested/adopted solution (spoiler): increase RWIN on the receiving host
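To see why that matters: BDP = bandwidth x round-trip time.  Using made-up figures, say a 100 Mbps path with 100 ms RTT:

100,000,000 bit/s x 0.1 s = 10,000,000 bits, i.e. about 1.25 MB

A single TCP flow can only have one receive window's worth of unacknowledged data in flight, so with a default 64 KB RWIN that flow tops out around 64 KB / 0.1 s, roughly 5.2 Mbps, no matter how much extra bandwidth is purchased.  Raising RWIN toward the BDP removes that ceiling.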