
HQoS issue

Hello
I have come across an issue which I cannot fathom. It pertains to a hierarchical QoS (HQoS) policy running on a Cisco Catalyst 8300 CE router.

As soon as this HQoS policy is applied, I see vast amounts of output drops on the WAN interface.
However, looking at the applied policy map, I see no drops being reported against any of the classes!

What is strange is that it is the exact same policy running on the same circuit, but now applied on a new router (C8300, 17.06.03a).
Unfortunately the old router died, so I cannot rule out that this issue was inherited, and having no historic data to cross-reference and validate is a pain. However, we run the same policy (same classification) on other 8300s with no issue: same make, model, feature set, etc.

I have now narrowed the issue down to the default-class shaping in the parent policy: if I do not apply the parent policy and instead apply the child policy on its own, I do not incur any drops:

policy-map CHILD
 class EF
  priority level 1
  police cir 7.5m conform-action transmit exceed-action drop
 class AF4x
  priority level 2
  police cir 79m conform-action transmit exceed-action set-dscp-transmit cs1
 class MGMT
  bandwidth 256
  police cir 256k conform-action transmit exceed-action set-dscp-transmit cs1
 class class-default
  fair-queue

policy-map PARENT
 class class-default
  shape average 130560000
  service-policy CHILD
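
For completeness, the parent policy is attached outbound on the WAN (sub)interface along these lines (the interface name below is illustrative, not taken from the actual config):

interface GigabitEthernet0/0/1.100
 service-policy output PARENT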

Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.

Kind Regards
Paul

9 Replies

We are all here, and we will do our best to help you.
You mention output drops; did you check the error counters on the interface?

Hello
No errors on the interface whatsoever, and not even drops reported in the QoS classes within the policy. I also tried tweaking various things, including interface and class queue sizes, but it didn't resolve anything; drops continued to increment at a high rate until I either removed the whole policy or just the parent policy (default-class traffic shaping), at which point the drops stopped completely. This was an evening change, so traffic utilisation was minimal (max tx/rx 6%).


Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.

Kind Regards
Paul

". . . we run the same policy (same classification) on other 8300’s and have no issue, same make/model feature set etc…all the same."

Including the same port bandwidths and shaping values?

Could you post 

show interface 

and

show policy-map interface?

When the hierarchical policy was being used, what "tweaks" did you try, specifically?

What's the port bandwidth?  You're shaping for 130 Mbps, correct?

"traffic utilization was minimal (max tx/rx 6%)"

Over what time interval?

Hello Joseph


@Joseph W. Doherty wrote:

". . . we run the same policy (same classification) on other 8300’s and have no issue, same make/model feature set etc…all the same."

Including the same port bandwidths and shaping values? - No, they are specific to each site's agreed bearer/CDR

Could you post 

show interface

and

show policy-map interface? 

See attached for a current readout, but I'm not sure it will help, as unfortunately it doesn't reflect the policy at the time of the change: it is now minus the parent policy and applied to the main interface rather than the subinterface. But, as stated previously, I could see no drops anywhere at that time.

When the hierarchical policy was being used, what "tweaks" did you try, specifically?
- I tried to piecemeal the child policy, adding a single class and then another, etc., to see if the issue was specific to a certain class
- Increased the queue limit for the default class (to 256 per flow) and the interface input hold-queue to 1500 (roughly as sketched below)
- Applied the policy on the main interface instead of the subinterface
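
For reference, a minimal sketch of those two queue tweaks, reusing the policy name from the OP (the interface name is illustrative); with fair-queue configured, the queue-limit applies per flow queue:

policy-map CHILD
 class class-default
  fair-queue
  queue-limit 256 packets

interface GigabitEthernet0/0/1
 hold-queue 1500 in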


What's the port bandwidth?  You're shaping for 130 Mbps, correct? - No, it should be 150; that was a pasting mistake in the OP

"traffic utilization was minimal (max tx/rx 6%)"

Over what time interval? - The load interval was set to 30 sec; default Tc 0.125 ms



Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.

Kind Regards
Paul

(queue depth/total drops/no-buffer drops/flowdrops) 0/111/0/111

Only the default traffic shows drops.

Are you sure the traffic is being classified?

Hello
The default class has no classification; it simply catches whatever doesn't match the other classes.


Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.

Kind Regards
Paul

Yes, I know. What is not clear to me is that the MGMT queue is not showing any packets, while the default class shows packets and drops.

When a shaper is used alone, it would be expected to manage its own queue (or queues; in some past instances I suspect it manages its own multiple flow queues).

When a shaper is used in a parent policy, it would be expected that its shaped traffic is pushed down into its child policy, but, in the past, I've suspected the shaper might sometimes still have its own queue(s).  (In concept, perhaps much like how an interface's TX FIFO queue is used to feed the software queues.)

If the foregoing is happening, microbursts might overrun the parent shaper's queue before being pushed into the child queues.  If that is happening, a possible mitigation would be to increase the shaper's queue size (which might be possible; advanced queue buffer sizing commands seem to vary a lot by platform - the best way to find out is to see what configuration options are accepted in the parent's shaped class).
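
As a hedged sketch of that mitigation, this is roughly what it could look like if the platform accepts queue-limit alongside the child service-policy under the shaped class (it may well be rejected, and the values are illustrative only):

policy-map PARENT
 class class-default
  shape average 150000000
  queue-limit 512 packets
  service-policy CHILD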

As an alternative, still assuming the shaper is the problem, tweaking its Bc (and possibly Be) might decrease the shaper's queuing needs for microbursts (downside: the microbursts get passed to your provider - which can be happening now anyway, as there's no shaper in place).
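
As a hedged sketch of the Bc/Be tweak: shape average accepts optional Bc and Be values in bits after the CIR, and a larger Bc/Be lets more of a burst through per interval instead of queuing it (the values below are illustrative only):

policy-map PARENT
 class class-default
  shape average 150000000 3000000 3000000
  service-policy CHILD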

As to comparing this to other devices and/or other interfaces, you've noted that shaping values differ (to be expected), and traffic behaviors can certainly differ too, i.e. an apples-to-oranges comparison.

Cheers @Joseph W. Doherty, your input is invaluable as ever.
TBH I won't be tweaking any buffers, mate; due to restrictions it would be denied anyway.
Personally, I've never experienced this before, especially seeing no drops whatsoever in any class within a policy. What you stated makes absolute sense, and in fact I've since verified it against Cisco documentation. However, given that this is the only such instance in what is a fairly large migration, I have pushed it towards the ISP with regard to the expected QoS we deliver to them.
If there is any resolution I'll come back and post a response.


Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.

Kind Regards
Paul