QoS on MPLS Network

jasonsalomons
Level 1

I've labbed up a basic MPLS VPN environment, running a single VRF over MPLS with MP-BGP.

The basic scenario is: CE router 1 -> PE router 1 -> PE router 2 -> CE router 2

I'm trying to accomplish two things: first, I want to shape or rate-limit the traffic over the connection, and second, I want to prioritize VoIP packets coming from the customer (CE1) that are already marked DSCP EF.

When I use the router's extended ping from CE1 to CE2 (setting up multiple pings), with one ping using the ToS 184 option (EF), I can see the traffic being rate-limited to 18 kbps, but I'm not seeing the 70% priority (I see about 55% DSCP EF packets and 45% non-EF packets).

This is my QoS config; the router otherwise has a very basic config, with no other filtering anywhere. I've applied the config below to the PE router 2 interface.

class-map match-any DSCP46
 match ip dscp ef
!
policy-map PM_DSCP
 class DSCP46
  priority percent 70
!
policy-map PM_DSCP_18K
 class class-default
  shape average 18000
  service-policy PM_DSCP

This being my first real QoS config, can someone please let me know what I am doing wrong?


12 Replies

Joseph W. Doherty
Hall of Fame


When I use the router's extended ping from CE1 to CE2 (setting up multiple pings), with one ping using the ToS 184 option (EF), I can see the traffic being rate-limited to 18 kbps, but I'm not seeing the 70% priority (I see about 55% DSCP EF packets and 45% non-EF packets).

You should only see priority traffic at 70% if the link is pushed to 100% and the priority traffic itself offers 70% or more.
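To put rough numbers on that (just illustrative arithmetic using your configured rates): 70% of the 18 kbps shape rate is 0.70 x 18 kbps = 12.6 kbps, so the EF stream would only settle near 70% of the output once the shaper is congested and the EF stream alone is offering at least that much.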

Hello Joseph,

With the extended ping, I increased the size of each ping so that each stream was generating about 40 kbps of traffic, so together both pings were generating around 80 kbps, with one ping stream marked EF and the other non-EF. Without the rate-shaping command I could see 80 kbps of traffic on the interface.

That being said, I still wasn't seeing 70% of the traffic as DSCP EF with the rate shaping enabled, and I really wanted to know if my QoS config looks OK.


Do you have a snapshot of the policy stats from while you were pushing the two ping streams (about 80 kbps combined)?

As to whether your policy looks "OK", it does, as far as what you describe wanting it to accomplish. Although, Cisco recommends not setting LLQ higher than about 1/3 of the bandwidth.

Hi Joseph,

I've made a few changes... I've set the rate shaping to 128k.

I've now got three ping streams going, each generating around 80 kbps of traffic, and two of the streams are marked DSCP EF.

Looking at Wireshark on the receiver (the hop after the shaping/policy maps), I see about 50% of the packets are DSCP EF?

Here is the policy-map output as the traffic was flowing:

Router#sh policy-map interface gi 0/0
 GigabitEthernet0/0

  Service-policy output: PM_DSCP_81

    Class-map: class-default (match-any)
      165307 packets, 122524982 bytes
      30 second offered rate 248000 bps, drop rate 119000 bps
      Match: any
      Traffic Shaping
           Target/Average   Byte   Sustain   Excess    Interval  Increment
             Rate           Limit  bits/int  bits/int  (ms)      (bytes)
           128000/128000    1984   7936      7936      62        992

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping
        Active Depth                         Delayed   Delayed   Active
        -      66        91948     39299176  62738     27183468  yes

      Service-policy : TEST

        Class-map: DSCP46 (match-any)
          79600 packets, 65831404 bytes
          30 second offered rate 165000 bps, drop rate 107000 bps
          Match: ip dscp ef (46)
            79600 packets, 65831404 bytes
            30 second rate 165000 bps
          Queueing
            Strict Priority
            Output Queue: Conversation 24
            Bandwidth 70 (%)
            Bandwidth 89 (kbps) Burst 2225 (Bytes)
            (pkts matched/bytes matched) 66082/55354236
            (total drops/bytes drops) 36124/52464880

        Class-map: class-default (match-any)
          85707 packets, 56693578 bytes
          30 second offered rate 83000 bps, drop rate 9000 bps
          Match: any


From what you're doing, and the stats, it appears the LLQ policer is dropping too much traffic. Notice how it lists your LLQ bandwidth as 89 kbps, and 70% of 128 kbps is 89.6 kbps, so that looks about right; yet with 165 kbps offered, it drops 107 kbps, leaving only 58 kbps!

If we add the two class drop rates, 9 kbps and 107 kbps, we get 116 kbps, yet the shaper's drop rate lists 119 kbps!

What I suspect might be happening: the shaper (especially if this is pre-HQF CBWFQ) maintains, I believe, its own queues using fair-queue, and may be dropping EF packets in addition to the LLQ policer dropping packets. The shaper's fair-queue drops might be skewing the drop rates to be more even, so your 70% isn't being obtained.

You could, perhaps, experiment with the shaper's Bc, and some CBWFQ versions will accept queue-depth parameters (see the rough sketch below). The purpose of such experimentation would be to try to determine whether you can achieve your 70% expectation; or . . .

If the real purpose of your LLQ is to guarantee low latency for your VoIP traffic, i.e. you don't want to drop it, try just overloading the link with non-EF traffic, and see if the EF traffic, kept below the 70% policy rate limit, maintains lower latency and isn't dropped.
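As a very rough sketch of the Bc/queue-depth experiment mentioned above (the explicit Bc of 8000 bits and the queue-limit of 128 are only illustrative assumptions, and queue-limit may not be accepted under a shaping class on every IOS version):

policy-map PM_DSCP_18K
 class class-default
  shape average 128000 8000
  queue-limit 128
  service-policy PM_DSCP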

Hi Joseph, many thanks for the reply. I will play around with the shaper's Bc, etc. (I'll have to research it, as QoS is new to me).

JosephDoherty wrote:

If the real purpose of your LLQ is to guarantee low latency for your VoIP traffic, i.e. you don't want to drop it, try just overloading the link with non-EF traffic, and see if the EF traffic, kept below the 70% policy rate limit, maintains lower latency and isn't dropped.

Yes, my underlying goal is to prioritize voice. Regarding your statement above about checking whether the EF traffic maintains lower latency - how do you test that exactly?

I've read many whitepapers about QoS configuration, and my exact example is listed a few times, yet it doesn't work? Could it be something to do with the extended ping (as that is how I am testing)?

Maybe I could try rate-limiting the connection instead of shaping it and see how that goes.


Depending on the IOS version, I've suspected (and TAC has reported) some issues when a shaper is used. My understanding is that the later 15.x versions work as they should, but 12.x versions may not.

As to how to check latency: your pings should show an increase in latency for the non-LLQ traffic as the link is loaded. The LLQ traffic's latency, compared to an unloaded link, should show only a small increase. (There will be some increase in latency even for LLQ, both due to the hardware's FIFO queue [which you can minimize by setting tx-ring-limit smaller] and due to the shaper, if it does maintain its own queues.)
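For reference, a minimal sketch of the tx-ring-limit tweak (the interface is taken from your output, and 3 is just an illustrative value): shrink the hardware ring on the shaped egress interface, then run two long extended pings at the same time, one with ToS 184 (EF) and one left at the default ToS 0, and compare their min/avg/max round-trip times while the link is loaded.

interface GigabitEthernet0/0
 tx-ring-limit 3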

Rate limiting should only be used as a last resort. For example, you could rate-limit all non-LLQ traffic to 30%, which would work in the sense of leaving 70% of the bandwidth available, but then that traffic could never use more than 30% even when more is available; also, policers allow bursts, so without LLQ you might still see variable latency/jitter; and the non-LLQ traffic, using default rate-limiter values, if TCP, probably won't even reach 30% without some rate-limiter parameter tuning.
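Just to illustrate that last-resort approach, a minimal sketch (the policy name is made up, and 38400 bps is simply 30% of the 128 kbps rate used in this lab):

policy-map PM_POLICE_NONEF
 class class-default
  police cir 38400 conform-action transmit exceed-action drop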

Hi Joseph,

I wasn't having any success with the shaping and prioritizing, so I've allocated a CIR using the police command as below, but still left the priority command in:

class-map match-any DSCP46
 match ip dscp ef
!
policy-map PM_DSCP
 class DSCP46
  police cir 89000 bc 16687 be 22250
  priority percent 80 16000
!
policy-map PM_DSCP_18K
 class class-default
  shape average 128000
  service-policy PM_DSCP

And here's the output:

Service-policy output: PM_DSCP_18K

    Class-map: class-default (match-any)

      99430 packets, 80110307 bytes

      30 second offered rate 265000 bps, drop rate 79000 bps

      Match: any

      Traffic Shaping

           Target/Average   Byte   Sustain   Excess    Interval  Increment

             Rate           Limit  bits/int  bits/int  (ms)      (bytes)

           128000/128000    1984   7936      7936      62        992

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping

        Active Depth                         Delayed   Delayed   Active

        -      72        68678     50934223  66927     49554970  yes

      Service-policy : PM_DSCP

        Class-map: DSCP46 (match-any)

          26022 packets, 22664168 bytes

          30 second offered rate 146000 bps, drop rate 56000 bps

          Match: ip dscp ef (46)

            26022 packets, 22664168 bytes

            30 second rate 146000 bps

          police:

              cir 89000 bps, bc 16687 bytes

            conformed 14283 packets, 11983862 bytes; actions:

              transmit

            exceeded 2161 packets, 3136550 bytes; actions:

              drop

            conformed 90000 bps, exceed 56000 bps

          Queueing

            Strict Priority

            Output Queue: Conversation 24

            Bandwidth 80 (%)

            Bandwidth 102 (kbps) Burst 16000 (Bytes)

            (pkts matched/bytes matched) 5304/4670232

            (total drops/bytes drops) 0/0

        Class-map: class-default (match-any)

          73408 packets, 57446139 bytes

          30 second offered rate 119000 bps, drop rate 79000 bps

A Wireshark capture shows I am getting around 74% DSCP EF-marked packets on the receiver.

I'm running 12.4(25d) on 7200s with this config in GNS3, by the way, but our production routers are NPE-G1s. I'm not sure if that would cause any issues with prioritization and shaping.

How do those numbers compare? Thanks for all your assistance, by the way; it's always appreciated!


These numbers are as one would expect, even though your policed throughput is a tad higher than the actual limit.

Most interesting about these numbers: the explicit policer is using a much larger Bc than the implicit LLQ policer had in your prior posting, and I also notice you've explicitly increased the implicit LLQ's Bc, also making it much larger than the default. With the increased Bc, we can't really tell whether it's the policer or the changed Bc that's changed performance so much.
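(To put a number on "much larger than the default": the priority command's default burst is roughly 200 ms worth of the priority rate, which at 89 kbps works out to 89000 x 0.2 / 8 = 2225 bytes, the Burst figure shown in your earlier output, versus the 16000 bytes configured now.)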

JosephDoherty wrote:

Most interesting about these numbers: the explicit policer is using a much larger Bc than the implicit LLQ policer had in your prior posting, and I also notice you've explicitly increased the implicit LLQ's Bc, also making it much larger than the default. With the increased Bc, we can't really tell whether it's the policer or the changed Bc that's changed performance so much.

OK, I reran the test and removed the explicit burst (Bc) I had previously set on the priority command (the police command is still in place). The config is:

policy-map PM_DSCP
 class DSCP46
  police cir 89000 bc 16687 be 22250
  priority percent 80
!
policy-map PM_DSCP_18K
 class class-default
  shape average 128000
  service-policy PM_DSCP

Below are the results. Now it looks like the shaping is happening, but the packet distribution for the priority queue is messed up: 64 kbps of DSCP EF traffic is offered, yet with a 45 kbps drop rate only about 19 kbps of it is actually forwarded. None of this should be dropped.

  Service-policy output: PM_DSCP_18K

    Class-map: class-default (match-any)

      6688 packets, 5730772 bytes

      30 second offered rate 347000 bps, drop rate 220000 bps

      Match: any

      Traffic Shaping

           Target/Average   Byte   Sustain   Excess    Interval  Increment

             Rate           Limit  bits/int  bits/int  (ms)      (bytes)

           128000/128000    1984   7936      7936      62        992

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping

        Active Depth                         Delayed   Delayed   Active

        -      64        2413      2048366   2409      2042406   yes

      Service-policy : PM_DSCP

        Class-map: DSCP46 (match-any)

          800 packets, 1186576 bytes

          30 second offered rate 64000 bps, drop rate 45000 bps

          Match: ip dscp ef (46)

            800 packets, 1186576 bytes

            30 second rate 64000 bps

          police:

              cir 89000 bps, bc 16687 bytes

            conformed 749 packets, 1116202 bytes; actions:

              transmit

            exceeded 51 packets, 70374 bytes; actions:

              drop

            conformed 59000 bps, exceed 2000 bps

          Queueing

            Strict Priority

            Output Queue: Conversation 24

            Bandwidth 80 (%)

            Bandwidth 102 (kbps) Burst 2550 (Bytes)

            (pkts matched/bytes matched) 749/1116202

            (total drops/bytes drops) 508/758348

        Class-map: class-default (match-any)

          5888 packets, 4544196 bytes

          30 second offered rate 284000 bps, drop rate 179000 bps

Next, I re-enabled the burst (Bc) on the priority line and removed the policing command. The config is:

policy-map PM_DSCP
 class DSCP46
  priority percent 80 16000
!
policy-map PM_DSCP_18K
 class class-default
  shape average 128000
  service-policy PM_DSCP

It looks to me like the priority is working (remember, I'm still new to QoS!).

I'm asking it to prioritize 102 kbps of traffic, and it looks like it's sending 93 - 3 = 90 kbps. The config is as shown above, and here is the result:

Service-policy output: PM_DSCP_18K

    Class-map: class-default (match-any)

      45703 packets, 39955068 bytes

      30 second offered rate 389000 bps, drop rate 262000 bps

      Match: any

      Traffic Shaping

           Target/Average   Byte   Sustain   Excess    Interval  Increment

             Rate           Limit  bits/int  bits/int  (ms)      (bytes)

           128000/128000    1984   7936      7936      62        992

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping

        Active Depth                         Delayed   Delayed   Active

        -      69        13729     14044700  13721     14035644  yes

      Service-policy : PM_DSCP

        Class-map: DSCP46 (match-any)

          6412 packets, 9501300 bytes

          30 second offered rate 93000 bps, drop rate 3000 bps

          Match: ip dscp ef (46)

            6412 packets, 9501300 bytes

            30 second rate 93000 bps

          Queueing

            Strict Priority

            Output Queue: Conversation 24

            Bandwidth 80 (%)

            Bandwidth 102 (kbps) Burst 16000 (Bytes)

            (pkts matched/bytes matched) 6059/9011222

            (total drops/bytes drops) 1412/2101548

        Class-map: class-default (match-any)

          39291 packets, 30453768 bytes

          30 second offered rate 298000 bps, drop rate 258000 bps

After this test I changed the priority to 50% and left the burst at 16000, and below is the important part: 95 - 31 = 64 kbps, which is half of the configured 128 kbps!

Class-map: DSCP46 (match-any)

          10449 packets, 15487758 bytes

          30 second offered rate 95000 bps, drop rate 31000 bps

          Match: ip dscp ef (46)

            10449 packets, 15487758 bytes

            30 second rate 95000 bps

          Queueing

            Strict Priority

            Output Queue: Conversation 24

            Bandwidth 50 (%)

            Bandwidth 64 (kbps) Burst 16000 (Bytes)

Joseph,

Do you think this issue could be caused by the packet size being very large (it's causing fragmentation at the receiver)?

Each ping stream was generating around 80 kbps of traffic.

If I were to set this up on a live network, is there a recommended program to use to generate marked traffic? From memory, I thought a Windows 7 PC won't mark packets even when using applications that allow it?


Do you think this issue could be caused by the packet size being very large (it's causing fragmentation at the receiver)?

Hmm, dunno. You might try a payload size of 1K or 1400 bytes and see if there's any significant difference.
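For example, a 1400-byte EF-marked test stream could be set up from the extended ping dialog roughly like this (the target address is a placeholder; the remaining prompts can be left at their defaults):

CE1#ping
Protocol [ip]:
Target IP address: <CE2 address>
Repeat count [5]: 500
Datagram size [100]: 1400
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface:
Type of service [0]: 184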