on 01-16-2014 08:00 AM
This document describes how to work with and implement QoS Policy Propagation via BGP, or QPPB for short.
QPPB is the concept of marking prefixes received from BGP with a QoS group assignment; this qos-group can then be used
in egress policies (or even in ingress policies, after marking) to perform certain actions.
While this seems simple, and technically it is, achieving this functionality in hardware is quite complex.
In this article I hope to unravel some of these gotchas and explain how QPPB operates.
QPPB with an egress QoS policy is supported on Trident, Thor and Typhoon. Both IPv4 and IPv6 are supported;
the ASR9000 is the first XR platform to support IPv6 QPPB.
Based on the interface QPPB settings, the IP precedence and/or qos-group is set on the ingress linecard for the source/destination prefix, and a policy can be attached on the egress linecard to act on the modified IP precedence and/or qos-group.
In short the purpose of QPPB is to allow packet classification based on BGP attributes.
In the following example, Router A learns routes from AS 200 and AS 100. QoS policy is applied to all packets that match the defined route maps. Any packets from Router A to AS 200 or AS 100 are subject to the appropriate QoS policy.
The show commands below follow the application of QPPB on Router "A", focusing on the interface connecting to Router "B".
RPL configuration:

route-policy qppb
  if destination in (60.60.0.0/24) then
    set qos-group 5
  elseif source in (192.1.1.2) then
    set qos-group 10
  else
    set qos-group 1
  endif
  pass
end-policy
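The decision logic of this route policy can be sketched in plain Python to make the precedence of the matches explicit (destination match wins, then source, then the default). This is a conceptual sketch only: in the real router the marking is applied per prefix when the table policy installs routes into the FIB, and the interface configuration selects whether the source or destination lookup is used.

```python
import ipaddress

def qppb_qos_group(dest_ip, src_ip):
    """Mimic the qppb route-policy: destination match first,
    then source match, else the default qos-group of 1."""
    if ipaddress.ip_address(dest_ip) in ipaddress.ip_network("60.60.0.0/24"):
        return 5
    if ipaddress.ip_address(src_ip) == ipaddress.ip_address("192.1.1.2"):
        return 10
    return 1

print(qppb_qos_group("60.60.0.7", "10.0.0.1"))   # destination match -> 5
print(qppb_qos_group("8.8.8.8", "192.1.1.2"))    # source match -> 10
print(qppb_qos_group("8.8.8.8", "10.0.0.1"))     # default -> 1
```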
After defining the route policy, apply it as the table policy under BGP so that the marking happens as the routes are being received.
The interface then needs the matching QPPB configuration, along with the right QoS policy, to leverage this marking and act upon it.
BGP configuration:

router bgp 300
 bgp router-id 19.19.19.19
 address-family ipv4 unicast
  table-policy qppb
 !
 neighbor 192.1.1.2
  remote-as 200
  address-family ipv4 unicast
   route-policy pass-all in
   route-policy pass-all out
 !
Interface level configuration:

interface TenG 0/0/0/0
 ipv4 address 192.1.1.1 255.255.255.0
 ipv4 bgp policy propagation input qos-group destination
 keepalive disable
!
The following commands give an overview of how to verify the QPPB configuration.

RP/0/RP0/CPU0:pacifica#show cef 60.60.0.0/24 detail
60.60.0.0/24, version 0, internal 0x40000001 (0x78419634) [1], 0x0 (0x0), 0x0 (0x0)
 Prefix Len 24, traffic index 0, precedence routine (0)
 QoS Group: 5, IP Precedence: 0
 gateway array (0x0) reference count 100, flags 0x80200, source 3,
<snip>
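When auditing many prefixes, it can be handy to scrape the marking out of `show cef ... detail` output programmatically. A minimal sketch, assuming the "QoS Group: N, IP Precedence: N" line format shown above (the surrounding fields may vary by release):

```python
import re

# Sample taken from the 'show cef 60.60.0.0/24 detail' output above.
sample = """60.60.0.0/24, version 0, internal 0x40000001 (0x78419634) [1], 0x0 (0x0), 0x0 (0x0)
 Prefix Len 24, traffic index 0, precedence routine (0)
 QoS Group: 5, IP Precedence: 0
"""

# Extract the QPPB-derived qos-group and IP precedence for the prefix.
m = re.search(r"QoS Group:\s*(\d+),\s*IP Precedence:\s*(\d+)", sample)
qos_group, ip_prec = int(m.group(1)), int(m.group(2))
print(qos_group, ip_prec)  # 5 0
```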
uIDB is a micro interface descriptor block, it is a data structure that represents the features that are applied to an interface.
In this case this command explains what features are applied to the interface specific to QPPB.
RP/0/RP0/CPU0:pacifica#show uidb data TenG 0/0/0/0 location 0/0/cpu0
<snip>
  IPV4 QPPB En             0x1
  IPV6 QPPB En             0x1
  IPV6 QPPB Dst QOS Grp    0x0
  IPV6 QPPB Dst Prec       0x1
  IPV6 QPPB Src QOS Grp    0x1
  IPV6 QPPB Src Prec       0x0
  IPV4 QPPB Dst QOS Grp    0x0
  IPV4 QPPB Dst Prec       0x1
  IPV4 QPPB Src QOS Grp    0x0
  IPV4 QPPB Src Prec       0x0
<snip>
The following debugging examples, accurate to the best of my knowledge, are not necessarily related to the configuration example above.
RP/0/RSP0/CPU0:public-router#show route 120.120.120.120 de
Wed Feb 23 15:20:01.263 UTC
Routing entry for 120.120.120.0/24
Known via "bgp 500", distance 200, metric 0, type internal
Installed Feb 20 23:58:13.096 for 2d15h
Routing Descriptor Blocks
10.1.7.2, from 10.1.7.2
Route metric is 0
Label: None
Tunnel ID: None
Extended communities count: 0
Route version is 0x1 (1)
No local label
IP Precedence: 6
QoS Group ID: 5
Route Priority: RIB_PRIORITY_RECURSIVE (7) SVD Type RIB_SVD_TYPE_LOCAL
No advertising protos.
RP/0/RSP0/CPU0:public-router#show cef 120.120.120.120 hardware ingress location 0/0/CPU0 | inc QPPB
    QPPB Prec: 6
    QPPB QOS Group: 5
CEF command output on Thor:
Hardware Extended Leaf Data:
fib_leaf_extension_length: 0 interface_receive: 0x0
traffic_index_valid: 0x0 qos_prec_valid: 0x1
qos_group_valid: 0x1 valid_source: 0x0
traffic_index: 0x0 nat_addr: 0x0
reserved: 0x0 qos_precedence: 0x6
qos_group: 0x5 peer_as_number: 0
path_list_ptr: 0x0
connected_intf_id: 0x0 ipsub_session_uidb: 0xffffffff
Path_list:
urpf loose flag: 0x0
List of interfaces:
RP/0/RSP0/CPU0:public-router#show uidb data location 0/0/CPU0 gigabitEthernet 0/0/0/4 ingress | inc QPPB
Wed Feb 23 15:44:33.154 UTC
  IPV6 QPPB Src Lkup       0x0
  IPV4 QPPB Src Prec       0x1
  IPV4 QPPB Dst Prec       0x0
  IPV4 QPPB Src QOS Grp    0x0
  IPV4 QPPB Dst QOS Grp    0x0
  IPV6 QPPB Src Prec       0x0
  IPV6 QPPB Dst Prec       0x1
  IPV6 QPPB Src QOS Grp    0x0
  IPV6 QPPB Dst QOS Grp    0x0
  IPV4 QPPB En             0x1
  IPV6 QPPB En             0x1
  IPV4 QPPB Src Lkup       0x1
RP/0/RSP0/CPU0:public-router#show run interface gigabitEthernet 0/0/0/4
Wed Feb 23 15:44:52.313 UTC
interface GigabitEthernet0/0/0/4
service-policy output qppp-qos-pol
ipv4 bgp policy propagation input ip-precedence source
ipv4 address 12.12.12.1 255.255.255.0
ipv6 bgp policy propagation input ip-precedence destination
ipv6 address 12:12:12::1/64
!
RP/0/RSP0/CPU0:public-router#show uidb data location 0/2/cpu0 gigabitEthernet 0/2/0/0 ingress | inc QPPB
Wed Feb 23 15:47:32.436 UTC
  IPV4 QPPB Src Lookup En  0x0
  IPV6 QPPB Src Lookup En  0x1
  IPV4 QPPB En             0x1
  IPV6 QPPB En             0x1
RP/0/RSP0/CPU0:public-router#show uidb data location 0/2/cpu0 gigabitEthernet 0/2/0/0 ing-ex | inc QPPB
Wed Feb 23 15:49:19.118 UTC
  IPV6 QPPB Dst QOS Grp    0x0
  IPV6 QPPB Dst Prec       0x1
  IPV6 QPPB Src QOS Grp    0x1
  IPV6 QPPB Src Prec       0x0
  IPV4 QPPB Dst QOS Grp    0x0
  IPV4 QPPB Dst Prec       0x1
  IPV4 QPPB Src QOS Grp    0x0
  IPV4 QPPB Src Prec       0x0
(This may be a bit too deep for general troubleshooting, but since I had this info I thought it would be nice to share for those interested.)
Another gotcha in the example below is the use of a Gig interface on a SIP-700: while this works, it is officially not supported. It is shown purely for illustrative purposes, to demonstrate how to pull internal info out to the surface and how to interpret it.
/*
* Flags bit field definitions for BGP_PA and QPPB.
*/
BF_CREATE(bgp_pa_flag_qppb_enabled, 6, 1); // QPPB
BF_CREATE(bgp_pa_flag_src_qos_group, 5, 1); // QPPB
BF_CREATE(bgp_pa_flag_dst_qos_group, 4, 1); // QPPB
BF_CREATE(bgp_pa_flag_src_prec, 3, 1); // QPPB
BF_CREATE(bgp_pa_flag_dst_prec, 2, 1); // QPPB
BF_CREATE(bgp_pa_flag_src, 1, 1); // BGP_PA
BF_CREATE(bgp_pa_flag_dst, 0, 1); // BGP_PA
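Given the bit positions defined by the `BF_CREATE` macros above, the flag byte can be decoded by hand. A small sketch (the flag names and the example value here are illustrative, derived only from the bit layout shown):

```python
# Bit positions taken from the BF_CREATE definitions above.
FLAG_BITS = {
    "qppb_enabled": 6,
    "src_qos_group": 5,
    "dst_qos_group": 4,
    "src_prec": 3,
    "dst_prec": 2,
    "src": 1,
    "dst": 0,
}

def decode_bgp_pa_flags(value):
    """Return the set of BGP_PA/QPPB flags asserted in a flag byte."""
    return {name for name, bit in FLAG_BITS.items() if (value >> bit) & 1}

# Example: QPPB enabled, destination precedence, destination lookup.
print(sorted(decode_bgp_pa_flags(0b1000101)))
# ['dst', 'dst_prec', 'qppb_enabled']
```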
The uidb offset is 780 for the v4 and 2000 for the v6 QPPB flags:
RP/0/RSP0/CPU0:public-router#show controllers pse qfp infrastructure gigabitEthernet 0/1/1/4 input config raw loc 0/1/CPU0 | inc 780
Wed Feb 23 15:52:21.259 UTC
00000780 00 00 00 48 00 00 00 00
00001780 00 00 00 00 00 00 00 00
00002780 00 00 00 00 89 87 54 00
RP/0/RSP0/CPU0:public-router#show controllers pse qfp infrastructure gigabitEthernet 0/1/1/4 input config raw loc 0/1/CPU0 | inc 2000
Wed Feb 23 15:53:49.236 UTC
00002000 00 00 00 48 e7 ff 36 f0
RP/0/RSP0/CPU0:public-router#show run interface gigabitEthernet 0/1/1/4
Wed Feb 23 16:04:34.892 UTC
interface GigabitEthernet0/1/1/4
description TGN PORT 7 3
service-policy output qppp-qos-pol
ipv4 bgp policy propagation input ip-precedence source
ipv4 address 10.1.7.1 255.255.255.0
ipv6 bgp policy propagation input ip-precedence source
ipv6 address 10:1:7::1/64
ipv6 enable
!
(applies to both Trident and Typhoon)
RP/0/RSP0/CPU0:ASR-PE3#show uidb data location 0/0/cpu0 gigabitEthernet 0/0/0/4.1 ingress | inc QOS
Mon Feb 20 10:57:22.781 UTC
QOS Enable 0x1
QOS Inherit Parent 0x0
QOS ID 0x2004
AFMON QOS ID 0x0
  IPV4 QPPB Src QOS Grp    0x1
  IPV4 QPPB Dst QOS Grp    0x0
  IPV6 QPPB Src QOS Grp    0x0
  IPV6 QPPB Dst QOS Grp    0x0
QOS grp present in policy 0x1
QOS Format 0x1
Verification of the NP counters: we need to see loopbacks happening ("feedback" in PXF terminology, or "recirculation" in 7600 terminology).
RP/0/RSP0/CPU0:ASR-PE3#show controllers np counters np3 location 0/0/CPU0
Mon Feb 20 11:08:49.262 UTC
Node: 0/0/CPU0:
Show global stats counters for NP3, revision v3
Read 33 non-zero NP counters:
Offset Counter FrameValue Rate (pps)
22 PARSE_ENET_RECEIVE_CNT 6861706 45574
23 PARSE_FABRIC_RECEIVE_CNT 6927056 45574
24 PARSE_LOOPBACK_RECEIVE_CNT 6818799 45573
29 MODIFY_FABRIC_TRANSMIT_CNT 6709882 45574
30 MODIFY_ENET_TRANSMIT_CNT 6929272 45574
34 RESOLVE_EGRESS_DROP_CNT 5 0
38 MODIFY_MCAST_FLD_LOOPBACK_CNT 6818940 45574
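In the counters above, note that the loopback receive rate tracks the Ethernet receive rate, which is the signature of QPPB recirculation. A rough scripted sanity check of that relationship might look like the sketch below. The 5% tolerance is an assumption, and this heuristic only holds in a clean lab setup where QPPB traffic dominates; on a busy NP other features also loop packets.

```python
def qppb_loopback_active(rates, tolerance=0.05):
    """Heuristic: QPPB recirculation is happening when the loopback
    receive rate (pps) tracks the ethernet receive rate (pps)."""
    rx = rates["PARSE_ENET_RECEIVE_CNT"]
    loop = rates["PARSE_LOOPBACK_RECEIVE_CNT"]
    if rx == 0:
        return False
    return abs(rx - loop) / rx <= tolerance

# Rates taken from the NP counter output above.
rates = {
    "PARSE_ENET_RECEIVE_CNT": 45574,
    "PARSE_LOOPBACK_RECEIVE_CNT": 45573,
}
print(qppb_loopback_active(rates))  # True
```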
If you want a policy that acts only on the precedence value modified by QPPB, you still need a dummy qos-group in the policy for the loopback to happen, so that the new precedence value takes effect. This requires a configuration like this:
router bgp 1
address-family ipv4 unicast
table-policy QPPB-PREC
route-policy QPPB-PREC
if (community matches-any GOLD) then
set ip-precedence 5
set qos-group 1 <== dummy qos-group
endif
end-policy
!
And at the interface level you need both ip-precedence and qos-group:
interface TenGigE0/3/0/5
cdp
mtu 9194
ipv4 bgp policy propagation input ip-precedence destination qos-group destination
ipv4 address 192.1.22.14 255.255.255.254
load-interval 30
!
Alternatively, if the ingress QoS policy should only match on the IP precedence set by QPPB, you can put the dummy "match qos-group" class in the input QoS policy instead of setting a dummy qos-group in the route policy. See the example below.
router bgp 1
address-family ipv4 unicast
table-policy QPPB-PREC
route-policy QPPB-PREC
if (community matches-any GOLD) then
set ip-precedence 5
endif
end-policy
!
The QoS policy will then look like this:
class-map match-any qos-grp-dummy
match qos-group 20
end-class-map
!
class-map match-any qos-prec
match precedence 5
end-class-map
!
policy-map qppb-input-pref
class qos-grp-dummy <--------dummy “qos group”
police rate percent 20
exceed-action drop
!
!
class qos-prec
police rate percent 10
exceed-action drop
!
!
class class-default
!
end-policy-map
!
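The `police rate percent ... exceed-action drop` actions in the policy-map above are single-rate policers. As a conceptual aside, their behavior can be sketched as a token bucket (this is an illustrative model only; the actual NP policer implementation, rates, and burst defaults differ):

```python
import time

class Policer:
    """Minimal token-bucket sketch of 'police rate ... exceed-action drop'.
    rate_bps and burst_bytes are illustrative parameters."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes        # bucket depth in bytes
        self.tokens = burst_bytes       # start with a full bucket
        self.last = time.monotonic()

    def conform(self, pkt_len):
        """True -> transmit; False -> exceed-action drop."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return True
        return False

# A tiny 8 bps policer: the first full-size packet conforms (initial burst),
# an immediate second packet exceeds and is dropped.
p = Policer(rate_bps=8, burst_bytes=1500)
print(p.conform(1500), p.conform(1500))  # True False
```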
And at the interface level you don't need the qos-group option on the QPPB command:
interface TenGigE0/3/0/5
cdp
mtu 9194
ipv4 bgp policy propagation input ip-precedence destination
service-policy input qppb-input-pref
ipv4 address 192.1.22.14 255.255.255.254
load-interval 30
!
Xander Thuijs CCIE #6775
Principal Engineer IOS-XR; ASR9000, CRS and NCS6K
Hello Xander,
I hope you are doing alright!
Is it possible to classify IP subscriber traffic? I mean:
ipv4 bgp policy propagation input qos-group destination
It is not available under the dynamic template; I've applied it to the parent port-bundle, but it does not seem to classify incoming traffic.
Thank you
Regards
Elnur
p.s. it works otherwise, I've checked it over regular L3 interface.
qppb is mainly useful on the ingress side for classification, and generally (yeah...) BGP doesn't run over the subscriber interface (it requires static adds etc.), so we didn't make it a priority to enable QPPB on the subscriber interface today.
Hence it is not available/visible on the dyn tpl.
cheers
xander
Hi Alexander,
Can you provide a simple diagram showing how the traffic flows? It is hard to understand the interface direction of the flow.
Regards, Pok.
I think your config is ok Pok, we could do a few verifications:
- Check whether the qos-group is correctly set on ingress: apply a QoS policy outbound on the te0/x interface to see if the egress shaper/policer works properly.
- If that test succeeds, it is likely an issue with the inbound matching on the set qos-group, for which we need to verify that the packets receive the loopback as required. The tests for that are described above: check the UIDB programming, and the NP packet counter for the MDF LOOP should increment.
regards
xander
Hi Xander,
Is it possible in XR to apply QPPB on egress for MPLS VPN? In regular IOS this is a limitation because of the LFIB lookup on egress (PE->CE), since QPPB relies on CEF.
hi mauricio,
you'd want the packets to be marked with a special qos group on ingress via qppb and use that qos group on egress in a regular policy-map.
The QPPB action(s) are only in effect in the ingress direction.
cheers!
xander
Thanks Alexander, maybe leaking MPLS L3 routes into VRF-lite solves the egress match on qos-group.
regards,
Hi Xander, Can I apply qppb in PWHE interfaces ingress and egress direction?
thanks,
that is a great question mauricio, I have never tried that to be honest. I am verifying with our test department whether this is part of the original delivery and whether it is meant to work; somehow it sounds "dangerous" to me...
QPPB requires a feedback loop, and PWHE isn't the smallest feature either, hence...
will confirm and get back.
cheers
xander
oh sorry let me make sure you have it straight in your mind:
qppb means marking based on a (bgp) prefix.
then you obviously want to leverage that marking in a policy. Using that remarked value can be done in either an ingress policy (requires the feedback loop) or an egress policy (easier/lighter on the forwarding path).
But the marking by QPPB is an ingress feature only.
making sense? :)
cheers!
xander
Ok, I understand; I need a route lookup based on BGP inside the VRF.
thanks,
Hi Xander,
So under the interface, we have "ipv4 bgp policy propagation input". Can you please tell me why we do not have an "ipv4 bgp policy propagation output" ? Thank you!
hi adrian,
to date there hasn't been a true requirement to implement this. What qppb really is, is marking the traffic based on some bgp parameters; this marking can then be used in a policy for policing or shaping etc.
So technically, if you want to do something in the egress direction, you could have already marked the traffic on ingress, right, so there is really no need to tag it on egress. Doing it on egress would also impact some forwarding capability: we'd need to tag the traffic, recirculate it, and apply the qos policy to it. For that reason it is better to tag on ingress, so we save that recirc/feedback on egress.
cheers
xander