Thank you very much, Markus. It does sound like a CS1 scavenger use case. One concern I have is with false positives and wasted buffers: in my use case there are actually many firewalls, one on each switchport. If a different firewall is under attack via (for example) an HTTP CONNECT flood, then legitimate HTTP CONNECT traffic for all firewalls could be erroneously tagged CS1, including traffic to firewalls that are not the attack target.
I don't want any traffic destined for a firewall to be dropped unless its 1 Gbps output port is congested. Today, even the default "threshold 100 100 50 400" with "buffers 25 25 25 25" suffers output drops toward some firewalls. This means that both the CS0 and the CS1 queues would need to be deep enough to handle a reasonable amount of traffic.
I could allocate a larger share of buffers to both CS0 q3 and CS1 q4 ("buffers 5 5 55 35") and rely on the scheduler to service q3 first (bandwidth share 5 5 85 5), like a leaky priority queue. But most of the time some buffers would sit unused in CS1 q4, doing nothing. These switches do not have many output buffers to begin with, so I usually don't want to waste them.
Wasted buffers could be reduced by returning more buffers to the common pool:
- srr-queue bandwidth share 5 5 85 5
- mls qos queue-set output 1 buffers 5 5 55 35 ! <-- give big buffers to both CS0 and CS1
- threshold 3 100 100 30 400 ! <-- ...but give back 70% of the CS0 queue's buffers to the common pool
- threshold 4 100 100 20 400 ! <-- give back 80% of the scavenger queue's buffers to the common pool
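To double-check the give-back arithmetic, here is a quick sketch. The reserved-thresholds for queues 1 and 2 are not set above, so I'm assuming they stay at the default 50 quoted earlier:

```python
# Sanity-check buffer give-back for queue-set 1.
# "buffers" is each queue's allocation as a % of the port's buffer pool;
# "reserved" is the reserved-threshold: the % of that allocation the queue
# actually holds back (the remainder returns to the common pool).
buffers = {1: 5, 2: 5, 3: 55, 4: 35}       # buffers 5 5 55 35
reserved = {1: 50, 2: 50, 3: 30, 4: 20}    # q1/q2 assumed at the default 50

for q in sorted(buffers):
    held = buffers[q] * reserved[q] / 100  # % of all port buffers kept reserved
    returned = buffers[q] - held           # % of all port buffers given back
    print(f"queue {q}: keeps {held:.1f}%, returns {returned:.1f}% to common pool")
# queue 4 keeps 35% * 20% = 7.0% of all port buffers
```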
20% reserved of the 35% allocated buffers (= 7% of all buffers) will usually sit unused in CS1 queue 4. If I instead used the "threshold approach", with a single queue shared by both CS0 and CS1 but "threshold 3 100 100 25 400", there would be fewer buffers sitting around idle. But I think that using a separate scavenger class will be "more standard."
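For reference, the single-queue threshold approach I mean would look roughly like this (a sketch, not tested): map CS1 onto a lower WTD drop threshold of the same queue, so scavenger traffic is dropped first under congestion. The dscp-map lines and the lowered first drop threshold (40 instead of 100) are my assumptions:

```
mls qos srr-queue output dscp-map queue 3 threshold 3 0   ! <-- CS0 (DSCP 0) -> q3, threshold 3 (100%)
mls qos srr-queue output dscp-map queue 3 threshold 1 8   ! <-- CS1 (DSCP 8) -> q3, threshold 1 (drops first)
mls qos queue-set output 1 buffers 5 5 85 5
mls qos queue-set output 1 threshold 3 40 100 25 400      ! <-- CS1 dropped above 40% queue fill; 25% reserved
```

With everything in q3, only one pool of reserved buffers sits idle instead of two.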