Hold-queue in command on IOS XR

yansenyansen
Level 1

Hi All,

I would like to ask a question regarding a configuration conversion from classic IOS to IOS XR.

I am struggling to find the IOS XR equivalent of the hold-queue ... in interface command.

Here is the command on classic IOS:

int tenx/x

hold-queue 4096 in

I checked the IOS XR documentation, and that command looks similar to queue-limit on IOS XR, which then needs to be configured under a policy-map (i.e. via QoS). I am not sure if that is correct.

Please let me know if you have any ideas about it.

This is the link I found about queue-limit:

http://www.cisco.com/en/US/docs/routers/asr9000/software/asr9k_r4.3/qos/command/reference/b_qos_cr43asr9k_chapter_0100.html#wp1482402264


xthuijs
Cisco Employee

Yeah, that is a good question, but whether this command is useful is heavily platform dependent.

By platform dependent, I mean the way QoS is implemented on a given platform, which in turn is directly tied to the hardware architecture of the forwarder that implements the QoS.

For the ASR9000, that is the Trident or Typhoon NPU, and more specifically the traffic manager (TM) portion of it.

The hold queue in IOS (based platforms) is used to "keep the framer busy" with a pipeline of packets that have been scheduled for transmission. Also, excessive use of the holdQ exerts backpressure that instructs the queueing hierarchy to start buffering or queueing.

While this is a great methodology, it has pros and cons. The cons: FIFO queues add latency, and they consume (possibly unnecessary) memory. In addition, if there is a high-priority packet to be transmitted, it still has to be put in that holdQ. So while holdQ buffering provides good things in terms of performance and queueing strategy, it might affect priority-queue latency at the same time.

Now, the way the A9K (for instance) implements QoS is through that TM scheduler: it makes sure the interface remains busy with packets, and there is always a class-default queue (which then acts as your holdQ of sorts) for any unclassified traffic that has no specific queue.

So the way to mimic the holdQ is by means of a class-default queue on your main interface with a queue-limit setting.
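A minimal sketch of that approach, for illustration only (the policy name and the queue-limit value here are made up for this example, not recommendations):

policy-map HOLDQ-LIKE
 class class-default
  queue-limit 4096 packets
 !
!
interface TenGigE0/0/1/1
 service-policy output HOLDQ-LIKE
!

Note it is applied in the output direction: the original hold-queue 4096 in was an input queue, but on the A9K the equivalent buffering lives in the egress queueing hierarchy.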

thanks

xander

Hi Xander,

Thanks for your great explanation. This is for an ASR9K Typhoon card.

So does that mean the hold queue on IOS (based platforms) affects classes other than class-default?

Is there any way to get a hold-queue equivalent without configuring QoS? The current IOS config has no QoS, and we will be migrating to ASR9K; I just want to know if there is a better option for migrating it.

Many Thanks.

In IOS the holdQ is also used to trigger backpressure so that the queueing system gets activated.

In A9K/XR this is not necessary: Typhoon (or Trident, for that matter) has a traffic manager (TM) that is always active, whether QoS is configured or not. There is internal QoS inside the system already, with backpressure, regardless of any egress queue.

The internal VOQs (virtual output queues) are already shaped, and the traffic manager also shapes the circuits on egress by nature.

For that reason there is no need for a holdQ to get backpressure.

Due to the architecture, this final FIFO queue serves little purpose, and for that reason it does not exist as such (nor is it configurable).

Note that a holdQ is not really the equivalent of a queue-limit buffer, but it is as close as you can get.

In short, you have no need to tune the holdQ, so you can omit this command in your conversion.

If you do want to do something (but monitor the situation first before adding buffers to queues, as buffering increases delay and jitter), a queue-limit on class-default on a main interface might be what you're after.

thanks

xander

I can hardly find any material regarding the Traffic Manager on the Typhoon LC.

I was checking the default queue-limit behavior and found this:

Queue Limit Default Values

The following default values are used when queue-limit is not configured in the class:

  • If QoS is not configured:
    • The queue limit is 100 ms at the interface rate.

The default value is in ms; is there any way to convert it, or to see the default value in packets?

Many Thanks

If you google "asr9000 quality of service architecture" you will find a hopefully decent write-up of mine that might help:

https://supportforums.cisco.com/docs/DOC-15592

The TM is merely the hardware portion of the NPU concerned with shaping and queueing (policing and marking are already done in the pipeline).

I'll see if I can write something more on this. It will also be discussed in more depth during the Orlando A9K architecture and troubleshooting session.

If you do "show qos interface <interface> <input|output>" you can see what the hardware is programming; it also documents the buffer value. (Note that this goes beyond any holdQ stuff.)

In general, when you configure QoS you will want to adjust the queue-limit of your queues. For instance, there is no need to buffer anything on PQs (they are always queued first). And for the shaped classes, you don't want to burst 100 ms unnecessarily either if the speed is huge.

It also depends on whether you have the L/B cards (Trident) or TR cards (Typhoon), as they have more limited buffer space, which is served on a first-come, first-served basis.

Remember that the buffer space is particled, and the time-based value is computed from a standard packet size (which I believe is somewhere between 256 and 512 bytes or so).

regards

xander

Thanks for the great response. I am trying to understand the ASR9K QoS architecture now.

Right on! Thanks for your interest! We are always looking for ideas on topics to document to make things easier to deploy and use. Based on our interaction, I'll start writing some documentation on the holdQ principle and the VOQs of the ASR9K in a bit more detail.

If you are happy with the answer, could you mark this question "answered"? If there is anything left to be clarified, by all means drop a post!

xander

Yes, it would be useful for us to have some docs on the holdQ principle, so we can share them with customers as well.

Many Thanks

Hi Xander,

When I tried "show qos interface <interface> <input|output>" I could not see anything, as I have not applied QoS on the interface. I would like to know the default queue limit on the interface.

Can you please explain more about this statement in the Cisco doc?

Command Default

100 milliseconds: maximum threshold for tail drop

10 milliseconds: maximum threshold for high-priority queues

Maximum threshold units are in packets.

Many Thanks

Queue Limit Default Values

The following default values are used when queue-limit is not configured in the class:

  • If QoS is not configured:
    • The queue limit is 100 ms at the interface rate.

If there is no QoS defined on the interface, that is, there is no shaping queue, then there is no buffering either. Only when you have a queue for the interface in the Traffic Manager are we potentially shaping and buffering.

Formula to get the default queue limit in bytes: bytes = ((100 ms / 1000 ms) × service rate in bps) / 8; if the service rate is given in kbps, multiply it by 1000 first. The service rate is the sum of the minimum guaranteed bandwidth and the bandwidth-remaining assigned to a given class. For example, at a 10-Gbps service rate this gives 0.1 s × 10,000,000,000 bit/s / 8 = 125,000,000 bytes, i.e. roughly 122,070 kbytes, which rounds up to 131072 kbytes.

The queue-limit is rounded up to one of the following values: 8, 16, 24, 32, 48, 64, 96, 128, 192, 256, 384, 512, 768, 1024, 1536, 2048, 3072, 4096, 8192, 16384, 32768, 65536, 131072, or 262144 kbytes.

Hi Xander,

Thank you so much for the explanation. I am looking further into the comparison between classic IOS and IOS XR regarding the hold-queue/queue.

As we know, there is a default input queue value, and we can adjust it by configuring hold-queue xxxx in under the interface:

router#show interfaces ethernet 0/0 ...
Input queue: 30/75/187/0 (size/max/drops/flushes); Total output drops: 0

I tried to translate the config to IOS XR by configuring queue-limit under class-default of a QoS policy, and I got this error message when applying the policy-map:

interface TenGigE0/0/1/1

   service-policy input POLICY-EGRESS

!!% 'prm_ezhal' detected the 'warning' condition 'Ingress queuing not supported for this LC type or MPA-LC combination'

It dawned on me that we cannot apply a queue on an ingress interface; if I apply it on egress, it works fine.

How can we apply a queue-limit on an ingress interface?
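For reference, this is the egress form that applies cleanly (using the policy name from the post; the queue-limit value is just an example):

policy-map POLICY-EGRESS
 class class-default
  queue-limit 4096 packets
 !
!
interface TenGigE0/0/1/1
 service-policy output POLICY-EGRESS
!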

The Traffic Manager in the NPU (aka TM) has 3 portions: 2 are used for egress, 1 is used for ingress.

Every packet, regardless of whether QoS is enabled or not, goes through the TM. The TM is limited in bandwidth.

For high-speed interfaces, where the potential ingress rate can exceed the TM's capability, we have disabled the ingress TM for that reason.

On the side, I think you need to let go of this "hold queue" requirement. These types of next-gen routers don't need it.

The punt path is very well controlled and managed by LPTS; there is no holdQ buffering needed here, nor SPD-like mechanisms.

(There is "EFD" in the NPU's RX queue in case it runs out of packet handles.)

xander

Hi,

Then how does QoS work for MPLS between PE and P, and from P to PE?

Imtiaz Sajid

hi imtiaz,

From PE to P we have access to the IP info, so on the ingress side of the PE you could mark based on anything you like and set a qos-group.

On egress of that PE you can use that qos-group for classification and set the MPLS EXP bits accordingly.

On the P-to-PE side: on ingress of the receiving PE you can take the received EXP and convert it to a qos-group, and on the egress side of that PE you can use the qos-group to apply your desired queueing/policing.

Between P routers, they only have visibility of the EXP, and we can queue based on that.
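A rough configuration sketch of the PE-side flow described above (all class, policy, qos-group and EXP values here are invented for illustration, and exact syntax can vary by release):

class-map match-any VOICE
 match dscp ef
!
class-map match-any QG5
 match qos-group 5
!
policy-map EDGE-IN
 class VOICE
  set qos-group 5
!
policy-map CORE-OUT
 class QG5
  set mpls experimental topmost 5
!
interface GigabitEthernet0/0/0/1
 service-policy input EDGE-IN
!
interface TenGigE0/0/1/1
 service-policy output CORE-OUT
!

On the remote PE you would do the reverse: match mpls experimental topmost on ingress, set a qos-group, and queue or police on that qos-group on egress.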

cheers

xander