queue-limit and tx-ring-limit on ASA5506

Tats0611
Level 1

Hi everyone,

 

I am trying to set up priority queuing on ASA to prioritize voice traffic within my network but I cannot figure out what values should be put for queue-limit and tx-ring-limit. 

I read the below documentation but still cannot figure out how to determine what value is best here.

 

If I leave these parameters at their default values, will that cause problems? What are the best practices for configuring queue-limit and tx-ring-limit?

 

https://www.cisco.com/c/en/us/td/docs/security/asa/asa-command-reference/I-R/cmdref2/qr1.html#:~:text=The%20default%20queue%20limit%20is%201024%20packets.

 

You can apply one priority-queue command to any interface that can be defined by the nameif command.

The priority-queue command enters priority-queue configuration mode, as shown by the prompt. In priority-queue configuration mode, you can configure the maximum number of packets allowed in the transmit queue at any given time (tx-ring-limit command) and the number of packets of either type (priority or best-effort) allowed to be buffered before dropping packets (queue-limit command).

The tx-ring-limit and the queue-limit that you specify affect both the higher priority low-latency queue and the best-effort queue. The tx-ring-limit is the number of either type of packets allowed into the driver before the driver pushes back to the queues sitting in front of the interface to let them buffer packets until the congestion clears. In general, you can adjust these two parameters to optimize the flow of low-latency traffic.

Because queues are not of infinite size, they can fill and overflow. When a queue is full, any additional packets cannot get into the queue and are dropped. This is tail drop. To avoid having the queue fill up, you can use the queue-limit command to increase the queue buffer size.
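For reference, on the ASA both parameters are set in priority-queue configuration mode. The values below are purely illustrative placeholders (valid ranges differ per platform and interface), not recommendations:

----------------------------

ciscoasa(config)# priority-queue outside
ciscoasa(config-priority-queue)# queue-limit 2048
ciscoasa(config-priority-queue)# tx-ring-limit 128

----------------------------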

 

I have already configured the following:

----------------------------

object network ISP_VOICE
 host 50.1.1.50

access-list ISP_VOICE extended permit ip any object ISP_VOICE

class-map ISP_VOICE
 match access-list ISP_VOICE

policy-map QoS
 class ISP_VOICE
  priority

service-policy QoS interface outside

priority-queue outside

 

----------------------------

What I intended to do with this configuration is to prioritize all traffic destined to the voice server 50.1.1.50. It seems to be working, as the show service-policy command shows the aggregate transmit count increasing.

 

-----------------------------------

ciscoasa# show service-policy priority

Interface outside:
Service-policy: QoS
Class-map: VOICE_TRAFFIC
Priority:
Interface outside: aggregate drop 0, aggregate transmit 45

-----------------------------------
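To break those counters down by queue (best-effort versus low-latency) rather than the aggregate numbers shown above, the priority-queue statistics can also be checked; exact output varies by platform:

-----------------------------------

ciscoasa# show priority-queue statistics outside

-----------------------------------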

 

Best regards,

Tats

1 Reply

Joseph W. Doherty
Hall of Fame

In general, tx-ring-limit sets the depth of the interface's hardware FIFO (the transmit ring). As a FIFO, that queue does no prioritization of any traffic, so a larger value is bad for low-latency traffic. However, a larger value is better for platform performance.

In theory, you could calculate the smallest tx-ring-limit that would meet your service needs, but usually I just go with the platform's minimal value.

Queue-limit controls the depth/capacity of one (or more) software queues. Usually, when using CBWFQ, it controls the maximum number of packets per class. If FQ is enabled for the class, the platform might also support a per-flow queue limit.

Too small or too large a value is often bad, and Cisco's defaults are often far from optimal.

For TCP traffic, I've found you may need to buffer up to about half the BDP for all TCP traffic on the interface. Because class queues are not shared, queue length is specified in packets while BDP is in bytes, BDP often varies per flow, and different classes of traffic may have different SLAs, determining optimal (or even good) values for queue-limit can be difficult.

Fortunately, with very recent TCP stacks, values that are too large don't behave as badly as they would have in the past. The reason is that TCP traditionally relied on packet loss to detect path congestion; recent TCP stacks also react to jumps in end-to-end latency.

So, today, it's perhaps better to err on too large rather than too small.

I have no specific formula for computing a good value beyond keeping in mind the possible goal of half the BDP.
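As a rough back-of-the-envelope sketch of that half-BDP rule of thumb (the link speed, RTT, and average packet size below are assumptions chosen for illustration, not ASA defaults):

```python
def queue_limit_packets(link_bps, rtt_s, avg_pkt_bytes, fraction=0.5):
    """Estimate a queue-limit (in packets) from the bandwidth-delay product.

    fraction=0.5 reflects the 'about half the BDP' rule of thumb above;
    all inputs are illustrative assumptions, not platform defaults.
    """
    bdp_bytes = link_bps * rtt_s / 8          # BDP converted from bits to bytes
    return int(bdp_bytes * fraction / avg_pkt_bytes)

# Example: 100 Mb/s link, 50 ms RTT, ~1000-byte average packets
print(queue_limit_packets(100_000_000, 0.050, 1000))  # -> 312
```

Since the real numbers vary per flow and per class, treat the result as a starting point to tune from, not a final answer.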

Again, keep in mind that BDP applies to TCP. For non-TCP traffic, needs vary.

For example, for something like VoIP, assuming its bearer traffic is prioritized (first), there should be almost no need to queue that traffic.

Conversely, a bulk data transfer using TCP, like a database dump, given low priority for obtaining bandwidth, will achieve the best "goodput" with a correctly sized queue-limit.
