I'm placing a C3560X in front of a firewall (between the firewall and the Internet) in order to discard DoS attack traffic at wire speed. Traffic that is possibly DoS attack traffic arrives at my Catalyst already marked with CoS 2 by another device.
If the egress port to my firewall is not congested, I'd like all the traffic to be delivered to the firewall (no policing at the switch; let the firewall do its job).
If the egress port is congested, then I'd like as much CS2 traffic dropped as necessary to let the unmarked traffic through. I could do this a few different ways:
For this scenario I would recommend going with option 2: "separate output queues for marked and unmarked traffic, with a small % of buffers for the CS2 queue".
I would actually treat this traffic like scavenger in normal QoS deployments, so you should think about marking it CS1 instead of CS2.
CS2 is normally used for Network Management.
Assuming your switch has the following egress queuing: 1P3Q3T.
I would mark the DoS traffic CS1 and send it to queue 4 with 5% or so.
Mark the rest default and send it to queue 3.
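On a 3560-X, the queue mapping described above would look something like this (CS1 is DSCP 8, default is DSCP 0; this is a sketch, adjust to your platform/IOS version):

```
mls qos
! Map default (DSCP 0) traffic to egress queue 3
mls qos srr-queue output dscp-map queue 3 threshold 3 0
! Map scavenger (CS1 / DSCP 8) traffic to egress queue 4
mls qos srr-queue output dscp-map queue 4 threshold 3 8
```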
Many thanks, Markus. It does sound like a CS1 scavenger use case. One concern I have is with false positives and wasted buffers: in my use case there are actually many firewalls, one on each switchport. If a different firewall is under attack with (for example) an HTTP connect flood, then it's possible that legitimate HTTP connect traffic for all firewalls is erroneously tagged CS1, including traffic to firewalls which are not the attack target.
I don't want any traffic to a firewall dropped unless the 1 Gbps output port is congested. Today, I see that even the default "threshold 100 100 50 400" with "buffers 25 25 25 25" suffers from output drops to some firewalls. This means that both the CS0 and the CS1 queues would need to be deep enough to handle a reasonable amount of traffic.
I could allocate a larger share of buffers to both CS0 queue 3 and CS1 queue 4 ("buffers 5 5 55 35"), and rely on the scheduler to service queue 3 first ("bandwidth share 5 5 85 5"), like a leaky priority queue. But some buffers would sit around unused in CS1 queue 4 most of the time, doing nothing. These switches do not have many output buffers, so I usually don't want to waste them.
Wasted buffers could be reduced by returning more buffers to the common pool:
- srr-queue bandwidth share 5 5 85 5
- mls qos queue-set output 1 buffers 5 5 55 35 ! <-- give big buffers to both CS0 and CS1
- threshold 3 100 100 30 400 ! <-- ...but give back 70% to the common pool
- threshold 4 100 100 20 400 ! <-- give back 80% of the scavenger-queue buffers to the common pool
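Expanded into full 3560-X syntax, the shorthand above would correspond to something like the following (the interface name is a placeholder):

```
mls qos queue-set output 1 buffers 5 5 55 35
! threshold <queue> <drop-thr1> <drop-thr2> <reserved> <maximum>
mls qos queue-set output 1 threshold 3 100 100 30 400
mls qos queue-set output 1 threshold 4 100 100 20 400
!
interface GigabitEthernet0/1
 queue-set 1
 srr-queue bandwidth share 5 5 85 5
```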
The 20% reserved out of the 35% of buffers allocated to CS1 queue 4 (= 7% of all buffers) will usually sit unused. If I were using the "threshold approach" instead, with one single queue for both CS0 and CS1 and "threshold 3 100 100 25 400", then fewer buffers would sit around idle. But I think that using a separate scavenger class is "more standard."
I don't know where the marking comes from, but if you have false positives you have a problem anyway.
What I meant in my last reply is that you should use CS1 and assign it 5% of the buffers, not bandwidth. That should give you the behavior you described: CS1 is transmitted as long as there is bandwidth, but it is the first to be dropped in case of congestion.
I don't know how many other classes you use, but if the rest is default, I would configure the queues something like this: 5 5 85 5.
So you use most of the buffers for "default" traffic and 5% for scavenger (CS1).
If you ever implement other classes, you would probably need to resize these queues, but with two classes that should work.
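Read as buffer percentages (rather than bandwidth shares), that recommendation would be roughly:

```
! 5% to queues 1 and 2 (unused), 85% to default (queue 3), 5% to scavenger (queue 4)
mls qos queue-set output 1 buffers 5 5 85 5
```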
Whether you use CS1 or CS2 is probably not that important. From my point of view, CS1 is the right class according to most standards, whereas CS2 is normally used for network management.
My answer was more about your initial question of which mechanism to use, and there I would definitely say option 2: separate output queues for marked and unmarked traffic, with a small % queue for the bad traffic.