
ASR9000/XR Feature Order of operation




This document provides an overview of the order in which features and forwarding are applied in the ASR9000 ucode architecture.

After reading this document you will better understand when packets are subject to a particular type of classification or action.

You will also better understand how PPS rates are affected when certain features are enabled.


Feature order of operation


The following picture gives a (simplified) overview of how packets are handled from ingress to egress:


[Figure: simplified overview of packet handling from ingress to egress]



The following main building blocks compose the ingress feature order of operation:


I/F Classification

When a packet is received, the TCAM is queried to see which (sub)interface this packet belongs to.

The flexible VLAN matching rules apply here; you can read more about them in the EVC architecture document.

Once we know which (sub)interface the packet belongs to, we derive the uIDB (micro IDB, Interface Descriptor Block) and know which features need to be applied.
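The I/F classification step can be sketched as a simple lookup from (port, VLAN) to a uIDB record carrying the feature flags. This is only an illustrative Python model: the table layout, field names, and the `classify_interface` helper are invented for the example, not the real uIDB format.

```python
# Illustrative sketch of I/F classification: a (port, outer-VLAN) match
# yields a uIDB record that tells the rest of the pipeline which
# features apply. The layout here is hypothetical.

UIDB_TABLE = {
    # (physical port, outer VLAN) -> uIDB record
    ("Te0/0/0/0", 100): {"uidb": 1, "acl": True,  "qos": True},
    ("Te0/0/0/0", 200): {"uidb": 2, "acl": False, "qos": True},
}

def classify_interface(port, vlan):
    """Return the uIDB record for the (sub)interface, or None if no match."""
    return UIDB_TABLE.get((port, vlan))
```

In the real hardware this match is done in TCAM using the flexible VLAN matching rules, so a single lookup resolves the subinterface and its feature set at once.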

ACL Classification

If there is an ACL applied to the interface, a set of keys is built and sent to the TCAM to find out whether this packet is a permit or deny on the configured ACL. Only the result is received; no action is taken yet.

QOS Classification

If there is a QOS policy applied, the TCAM is queried with the set of keys to find out whether a particular class-map is matched. The returned result is effectively an identifier for the class of the policy-map, so we know what functionality to apply to this packet when the QOS actions are executed.



As you can see, enabling either ACL or QOS results in a TCAM lookup for a match. ACL alone results in an X percent performance degradation; QOS alone results in a Y percent degradation (and X is not equal to Y).

Enabling both ACL and QOS does not give you an X+Y PPS degradation, because the TCAM lookup for both is done in parallel, so we save that overhead. It is not the case that two separate TCAM lookups are done.
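The shared-lookup point above can be sketched as follows: the key is built once and a single classification pass returns both the ACL verdict and the QOS class. This Python model is purely conceptual; the table contents and the `classify` helper are invented, and the hardware does the two matches in parallel rather than sequentially as Python must.

```python
# Sketch of why ACL + QOS together cost less than the sum of the parts:
# one key is built, and one classification pass yields both the ACL
# verdict and the QOS class id. Entry formats are invented.

def build_key(pkt):
    return (pkt["src"], pkt["dst"], pkt["dscp"])

ACL_TCAM = {("10.0.0.1", "10.0.0.2", 0): "permit"}
QOS_TCAM = {("10.0.0.1", "10.0.0.2", 0): "class-voice"}

def classify(pkt):
    key = build_key(pkt)                        # key built once
    acl_verdict = ACL_TCAM.get(key, "deny")     # both matches happen in
    qos_class = QOS_TCAM.get(key, "class-default")  # the same TCAM pass
    return acl_verdict, qos_class
```

The key-build cost is paid once, which is why the combined degradation is less than X+Y.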

BGP flowspec, OpenFlow, and CLI-based PBR use the PBR lookup, which logically happens between the ACL and QOS lookups.


Forwarding lookup

The ingress forwarding lookup is rather simple: we don't traverse the whole forwarding tree here, but only try to find out which egress interface, or better put, which egress LC is to be used for forwarding. The reason for this is that the 9k is distributed in its architecture, so the ingress linecard has to do some sort of FIB lookup to find the egress LC.

Also, when bundles are in play and members are spread over different linecards, we need to compute the hash to identify the member the egress LC will choose. This way we forward the packet only to the linecard that serves the member that is actually going to forward the packet.
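The member selection above amounts to hashing the flow and indexing into the member list, so every LC picks the same member for the same flow. A minimal sketch, with invented member names and a CRC32 stand-in for the real hash function:

```python
# Sketch of ingress member selection on a bundle: the ingress LC computes
# a flow hash so the packet crosses the fabric only to the LC that hosts
# the chosen member. Hash inputs and member names are illustrative.
import zlib

MEMBERS = ["LC0/Gi0/0/0/1", "LC1/Gi0/1/0/3", "LC2/Gi0/2/0/5"]

def pick_member(src_ip, dst_ip, proto):
    """Deterministically map a flow to one bundle member."""
    flow = f"{src_ip}:{dst_ip}:{proto}".encode()
    return MEMBERS[zlib.crc32(flow) % len(MEMBERS)]
```

Because the hash is deterministic per flow, packets of one flow always land on the same member and ordering is preserved.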



If uRPF is enabled, a full ingress FIB lookup (the same as on egress) is done. This is intensive, so uRPF has a relatively larger impact on the PPS.

IFIB Lookup

In this stage we determine whether the packet is for us and, if it is, where it needs to go. For instance, ARP and netflow are handled by the LC, but BGP and OSPF are handled by the RSP. The iFIB lookup gives us the direction as to where the packet needs to go internally when we are the recipient.
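Conceptually the iFIB lookup is a protocol-to-destination map, as in this sketch; the table below only covers the examples named in the text and is not the full iFIB.

```python
# Sketch of the iFIB decision: "for-us" packets are steered either to the
# LC CPU or to the RSP depending on the protocol. Mapping is illustrative.
PUNT_TABLE = {
    "arp": "LC CPU",
    "netflow": "LC CPU",
    "bgp": "RSP",
    "ospf": "RSP",
}

def ifib_lookup(proto):
    """Return the internal destination, or 'transit' when not for us."""
    return PUNT_TABLE.get(proto, "transit")
```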

Security ACL action

If the packet is subject to an ACL deny, for instance, it is dropped during this stage.

QOS Policer action

Any policer action is done during this stage, as well as marking. QOS shaping and buffering are done by the traffic manager, which is a separate stage in the NPU.

L2 rewrite

During the L2 rewrite in the ingress stage we apply the fabric header.

QOS Action

Any other QOS actions, such as queuing, shaping, and WRED, are executed during this stage. Note that packets that were previously policed or dropped by an ACL are no longer seen in this stage. It is important to understand that dropped packets are removed from the pipeline, so when you think there are counter discrepancies, the packets may have been processed/dropped by an earlier feature.


See also the next section with more details on the different QOS actions.


iFIB action

Either the packet is forwarded over the fabric or handed to the LC CPU here. If the packet is destined for the RSP CPU, it is forwarded over the fabric to the RSP. Remember that the RSP is a linecard from a fabric point of view: the RSP requests fabric access to inject packets in the same fashion an LC would.



The ingress linecard also decrements the TTL on the packet; if the TTL is exceeded, the packet is punted to the LC CPU for an ICMP TTL-exceeded message. The number of packets we can punt is subject to LPTS policing.
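The TTL handling step can be sketched as follows. The LPTS policer is modeled here as a bare token budget purely for illustration; real LPTS policing is considerably more elaborate.

```python
# Sketch of TTL handling on the ingress LC: decrement, and punt for an
# ICMP TTL-exceeded message when the TTL hits zero, subject to a punt
# budget standing in for the LPTS policer (a deliberate simplification).

def ttl_step(ttl, punt_tokens):
    """Return (action, new_ttl, remaining_punt_tokens)."""
    ttl -= 1
    if ttl <= 0:
        if punt_tokens > 0:
            return ("punt-icmp-ttl-exceeded", ttl, punt_tokens - 1)
        return ("drop", ttl, punt_tokens)  # LPTS budget exhausted: silent drop
    return ("forward", ttl, punt_tokens)
```

The point of the budget is the same as LPTS policing: a traceroute storm cannot overwhelm the LC CPU with punted packets.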



The following section describes the forwarding stages of the egress linecard.

Forwarding lookup

The egress linecard does a full FIB lookup down to the leaf to get the rewrite string. This full FIB lookup provides everything needed for packet routing, such as the egress interface, encapsulation, and adjacency information.

L2 rewrite

With the information received from the forwarding lookup we can rewrite the packet headers, applying the Ethernet header, VLANs, etc.

Security ACL classification

During the FIB lookup we determined the egress interface, so we know which features are applied.

The same as on ingress, we build the keys and query the TCAM for an ACL result, based on the ACL applied to the egress interface.

QOS Classification

The same as on ingress, the TCAM is queried to identify the QOS class-map and matching criteria for this packet in an egress QOS policy.


The same note as above applies when it comes to ACL and/or QOS application to the interface.

ACL action

The ACL action is executed.

QOS action

The QOS action is executed; see the next section for more details on the QOS actions.



MTU verification is done only on the egress linecard, to determine whether fragmentation is needed.

The egress linecard punts the packet to its own LC CPU when fragmentation is required.

Remember that fragmentation is done in software, and no features are applied to these packets on the egress linecard.

The number of packets that can be punted for fragmentation is NPU bound and limited by LPTS.
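The egress MTU decision above can be sketched as a three-way check. The DF-bit branch and the return strings are illustrative labels, not actual platform action names:

```python
# Sketch of egress MTU verification: only the egress LC checks the MTU.
# Oversized packets are punted to the LC CPU for software fragmentation,
# unless DF is set, in which case the sender must be signaled instead.

def mtu_check(pkt_len, egress_mtu, df_bit):
    if pkt_len <= egress_mtu:
        return "forward"
    if df_bit:
        return "drop-icmp-frag-needed"   # can't fragment; notify sender
    return "punt-for-fragmentation"      # software path on the egress LC CPU
```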


QOS action notes


[Figure: the different QOS actions and how they intersect]



The above picture expands on the different QOS actions that can be taken and how they intersect. What is important to understand from this picture is that WRED (whether precedence or DSCP aware) uses the rewritten values from the packet headers.


Almost the same applies on egress: shaping, queuing, and WRED reuse the rewritten PREC/DSCP values from either ingress or the egress policer/remarker.


Packets that were subject to a police drop action are no longer seen by the shaper.
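The "WRED sees rewritten values" point can be made concrete with a small sketch: the remark happens first, and WRED then selects its drop profile from the new precedence, not the arriving one. Profile thresholds and the region labels are invented for illustration:

```python
# Sketch showing that WRED uses the REWRITTEN precedence: the policer
# remarks first, then WRED picks its profile from the new marking.
# Thresholds (in packets of queue depth) are invented values.

WRED_PROFILES = {  # precedence -> (min_threshold, max_threshold)
    0: (10, 30),
    5: (30, 60),
}

def police_and_wred(prec_in, remark_to, queue_depth):
    """Return (precedence WRED actually used, WRED verdict)."""
    prec = remark_to if remark_to is not None else prec_in  # remark first
    lo, hi = WRED_PROFILES[prec]
    if queue_depth < lo:
        return prec, "enqueue"
    if queue_depth >= hi:
        return prec, "drop"
    return prec, "random-drop-region"
```

A packet arriving as precedence 5 but remarked to 0 is judged against the precedence-0 thresholds, exactly the behavior the figure describes.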




From the above a few conclusions can be drawn:


1) Ingress processing is more intensive than egress processing.

2) Enabling more features affects the total PPS performance of the NPU, as more cycles are needed in a particular stage of forwarding.

3) Packets denied on ingress will not go over the fabric

4) Packets permitted by QOS or ACL on ingress go over the fabric and might get dropped by an egress ACL or QOS. This means there is a potential for wasted fabric bandwidth if there is a very restrictive ACL or a low-rate policer/shaper on egress and a high input rate on ingress. (Note that oversubscription results in back pressure, so the ingress FIA prevents the packets from being sent over the fabric; this is the head-of-line blocking described in the ASR9000 QOS architecture document.)

5) Packets that are remarked on ingress get their QOS (or ACL) applied based on the REWRITTEN values from ingress.

6) Netflow will account for ACL denied packets (not specifically called out in the feature order of operation)

7) WRED uses the rewritten values from ingress or egress policers

8) The ingress linecard is not aware of features configured on the egress linecard (remember the QOS priority propagation from the ASR9000 quality of service architecture document).




Xander Thuijs CCIE #6775

Principal Engineer ASR9000

Rajat Chauhan
Cisco Employee

Thanks much, where does SPAN fit in this order?

Cisco Employee

Good question! Considering that SPAN is invoked by an ACL (by means of the "capture" keyword on an ACE), that is where SPAN replication will happen.




Community Member

Hello Xander! 

What if SPAN configured via monitor-session ? 




Cisco Employee

Hi Max, that is the IOS way of doing it :).

If you define monitor interface acl, then it expects an ACL with the capture keyword on the ACEs that you are interested in spanning.

I think this is the best approach to filter down whatever you want to see out of the SPAN interface (and make sure that the SPAN interface is an l2transport kind).



Community Member


On the topic of  "Netflow will account for ACL denied packets (not specifically called out in the feature order of operation)"

Does this mean that netflow will still report flow records for an ACL-denied packet as if it had transited the router?  Ex: if I saw a multi-GB NTP reflection attack hit a netflow-enabled interface and put an ACL in place to filter it, would a collector still display the massive flood of traffic to the interface or would it drop to steady-state levels?

Cisco Employee

Correct indeed! If those packets are sampled and denied, or policed, or whatever, then they are still reported by netflow, albeit with an extra flag stating that they were denied/policed, etc.





Community Member

Perfect, thank you very much!


Excellent details Alexander.


I do have one question - if you don't mind:  Where does 'qos pre-classify' fit into the order you have outlined?

(assuming I'm using the ASR w/DMVPN config across sites and applying the 'qos pre-classify' on the tunnel interface)


Thanks in advance for your feedback.

Cisco Employee

Ah thanks! :) Yeah, that pre-classify command doesn't exist in XR/a9k per se, but it is naturally there already. The pre-classify in IOS helped with SW-based forwarding, so that an egress policy matching on something would look at the inner header as opposed to matching on the outer header of a tunneled packet.

Now, since the a9k does 2-stage forwarding, you have that matching on inner vs outer header there naturally.

On ingress, you could match on the values you like, set a qos-group, and leverage that qos-group on egress.




Thank you Sir.

Adam Vitkovsky

Hi Xander,


First of all, I'd like to thank you for your exceptional work on documenting the ASR9k and sharing all this information publicly.


I’d like to ask regarding the order of operation:

I'd like to understand the reasoning behind putting ACL Classification and QOS Classification so far apart from the Security ACL and QOS Policer actions, respectively.

As it stands, it seems like the NPU wastes cycles on QOS Classification, the Forwarding lookup, and uRPF just for the packet to be dropped anyway.


Regarding ACLs:

Are ACLs compiled into a set of lookup tables (while maintaining the first-match requirements), with packet headers used to access these tables in a small, fixed number of lookups, independent of the number of ACL entries?

Or are ACLs searched sequentially to find a matching rule, resulting in variable search time and adding variable latency to packet forwarding?


Thank you



Cisco Employee

hey adam, thank you also, very nice to hear!

Yeah, so the reasoning for that is that the NPUs have a set of TOPs (task-optimized processors).

One stage is very good at parsing/classification, one is very good at searching, another is good at applying, and the final one is very good at modifying.

And there you go :) So in QOS, we need to classify (class match), apply (police/drop), and modify the packet (remark, header update, etc.). QOS is therefore spread over 3 different stages for optimized processing.

It is true that a dropped packet, although already classified in PARSE, only gets effectively tossed in RESOLVE.

For ACL:

We use a TCAM for ACL matching. TCAM is really "reverse" memory. Normally with memory we say: give me or store this data at this address. With TCAM we say: here is the data (or KEY); tell me if you find it in this memory address space.

TCAM provides a very deterministic result, because it will ALWAYS respond within the same time to say match or no match. TCAM is therefore perfect for ACL matching, because no matter how long the ACL is, the lookup time is always the same. However, TCAM is power hungry and has limited space. So while we have a deterministic lookup for any size of ACL, that ACL needs to fit in TCAM. On TR cards you have 24k entries, on SE cards 96k.

We also have hybrid ACL (compression), which leaves some entries in search and some in TCAM; that combines the deterministic TCAM lookup with a linear search on source/dest. Performance is worse, but it doesn't constrain the ACL size.
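A toy model of the TCAM behavior described above: every entry is a (value, mask) pair, all entries are compared at once in hardware, and the highest-priority hit wins, so lookup time does not depend on ACL length. The entries and key encoding (IPv4 addresses as 32-bit integers) are invented for the sketch; Python has to iterate where the hardware matches in parallel.

```python
# Toy TCAM: each entry is (value, care-mask, result) in priority order.
# A key matches an entry when the bits the mask cares about are equal.
# Hardware evaluates all entries simultaneously; the loop here is only
# a software stand-in for that parallel compare.

TCAM = [
    (0x0A000001, 0xFFFFFFFF, "deny"),    # host 10.0.0.1
    (0x0A000000, 0xFFFFFF00, "permit"),  # subnet 10.0.0.0/24
]

def tcam_lookup(key):
    for value, mask, result in TCAM:     # parallel in real hardware
        if key & mask == value:
            return result
    return "implicit-deny"
```

Note how the /32 host entry sits above the /24 subnet entry: priority order, not specificity, decides the winner, which is why ACE order matters.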

Check Cisco Live ID 2904 from Orlando (for ACL compression details) and San Diego (for TCAM utilization details) if you'd like to know more about it.



Adam Vitkovsky

Aaah I see :) so that's why the order is like that, makes sense.


Regarding the ACLs

I see; if I think about it, TCAM kind of resembles the approach of lookup tables and a hash of the header/key (done in SW), and it has the same capacity constraint.

Thank you very much, I'll definitely check out the presentations




Dear Alex

In the discussion, you said that even sampled, policed, or ACL-denied packets will be passed on via netflow to the collector with an extra flag.

How can we limit the netflow stats sent to a collector?

Secondly, can you elaborate on the 'extra flag'?