Hi everyone,
I have a 3850 with 16.6.x software which I'm testing.
I configured the 3850 for a QoS shaper test using the following policy-map:
===================
policy-map 1M_Test
 class RTP
  priority level 1 500
 class UserTraffic
  bandwidth 300
  queue-limit dscp af31 percent 80
  queue-limit dscp af32 percent 80
 class class-default
  bandwidth 200
  queue-limit dscp af33 percent 80
policy-map 1M_Shaper
 class class-default
  shape average 1000000
  service-policy 1M_Test
===================
I've noticed something interesting when generating traffic for the RTP class with a traffic generator: the priority queue handles large frames much better than small ones.
With a fixed frame size of 1400 bytes of UDP traffic, the RTP class can reach roughly 470kbps without packet loss (not quite 500kbps, but close). With 160-byte fixed UDP frames, however, it only reaches roughly 250kbps before frames start dropping in the queue, and at 470kbps the loss climbs to roughly 50%. That is a massive drop in performance.
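Converting the two cases to packets per second (a rough back-of-envelope calculation that ignores L2/L1 framing overhead, so the real packet rates are somewhat higher, especially for small frames) shows the small-frame test pushes far more packets at a lower bitrate, which is why I suspect a per-packet rather than per-bit bottleneck in the priority queue:

```python
# Rough packet-rate comparison for the two RTP test cases.
# Ignores preamble, inter-frame gap, and Ethernet headers.
def pps(rate_bps: float, frame_bytes: int) -> float:
    """Packets per second for a given offered rate and frame size."""
    return rate_bps / (frame_bytes * 8)

large = pps(470_000, 1400)  # 1400-byte frames at 470kbps
small = pps(250_000, 160)   # 160-byte frames at 250kbps

print(f"1400B @ 470kbps: {large:.0f} pps")  # ~42 pps
print(f" 160B @ 250kbps: {small:.0f} pps")  # ~195 pps
```

So the small-frame run generates roughly 4-5x the packet rate at about half the bitrate.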
I didn't notice the same issue when using "bandwidth" for all classes rather than a priority queue for one class and shared bandwidth for the rest. The problem seems reproducible only with a priority queue.
1) Any ideas why and how this can be fixed?
2) The "queue-limit" command doesn't seem to work the way I expected. I assumed it would drop via Weighted Tail Drop as a percentage of the class's allocated bandwidth; for example, "percent 80" on a class with "bandwidth 200" would start dropping frames once the class's traffic exceeds 160kbps. That isn't what happens. Is anyone familiar with the correct way to configure tail drop relative to bandwidth?
Thanks a lot for your time reading, and even more so for your time answering!