Hello,
We have a couple of 5th Gen (Lightspeed Plus) line cards with 400GE/100GE combo ports coming our way. I've been reading the xrdocs.io article on dynamic packet buffering (http://xrdocs.io/asr9k/blogs/2018-09-06-dynamic-packet-buffering/) used on the Lightspeed series LCs for output buffering.
Is there any specific configuration we need to apply on the interface (such as applying an output service-policy to assign a queue) to take advantage of the HBM output buffering on the LS+ 100G/400G ports?
On the EZchip-based 2nd Gen (Typhoon) and 3rd Gen (Tomahawk) forwarding engines, if you do not configure any output policy-map on a 10GE port, the TM appears to provide only a small default queue of a few packets on the tx-ring.
I'm not sure exactly how many packets the TM provides on the tx-ring queue by default when there is no policy-map configured, but it's shallow enough to cause output drops when Nx10GE or 100GE of ingress traffic tries to exit a 10GE port. Because of this, on every edge router 10GE port we configure the following output service-policy (attached per interface, as shown after the policy-map) to minimize output drops without queueing so deeply that we add bufferbloat for the end-user experience:
policy-map q-port-50ms
class class-default
bandwidth percent 100
queue-limit 50 ms
!
end-policy-map
!
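For reference, here is how we attach it on each edge 10GE port (the interface name below is just an illustrative example; substitute your own ports), and the resulting queueing and drop counters can then be checked with "show policy-map interface <intf> output":
! example attach point, interface name is illustrative
interface TenGigE0/0/0/0
 service-policy output q-port-50ms
!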
So, my question: how does the TM behave for tx-ring queueing by default on the Lightspeed series line cards with HBM dynamic buffering, when a 100G or 400G port has no service-policy configured (a default, naked port configuration)? Will it provide buffering of only a few packets, so that the user still has to configure a 50 ms policy-map on output like the one above?
Thank you!
James