
ASR9000/XR ABF (ACL Based Forwarding)


 

Introduction

 

In today’s high-performance internetworks, organizations need the freedom to implement packet forwarding and routing according to their own defined policies, in a way that goes beyond traditional routing protocol concerns. On the XR platform this is achieved using ACL Based Forwarding (ABF). ABF is a subset of the PBR (Policy Based Routing) infrastructure in XR. In summary, this feature allows traffic matching a specific ACL rule to be forwarded to a user-specified nexthop instead of the route selected by a routing protocol. The ASR9K supports ABFv4 and ABFv6 for forwarding IPv4 and IPv6 traffic, respectively, using ABF rules. Note that ABF is an ingress-only feature, i.e., only traffic matching an ingress ACL can be forwarded using ABF.

 

The Benefits of ACL Based Forwarding

 

  • Source-Based Transit Provider Selection – Internet service providers and other organizations can use ABF to route traffic originating from different sets of users through different Internet connections across the policy routers.
  • Cost Savings – Organizations can achieve cost savings by distributing interactive and batch traffic among low-bandwidth, low-cost permanent paths and high-bandwidth, high-cost switched paths.
  • Load Sharing – In addition to the dynamic load-sharing capabilities offered by the destination-based routing that Cisco IOS XR software provides, network managers can implement policies to distribute traffic among multiple paths based on the traffic characteristics.

ABF Configuration

 

On XR platforms ABF is supported using the ACL infrastructure. Enhancements have been made to allow specification of an ABF nexthop in the ACE rule. There are several variants of specifying an ABF nexthop in an ACE; the following illustrates the different options.

 

ABF Nexthop:

We can specify a list of up to three IP addresses (each optionally with a VRF) in the ACE rule. Each nexthop IP address specifies a next-hop router on the path towards the destination to which the packets must be forwarded. The first IP address associated with a currently “up” connected interface will be used to route the packets. The following is an example.

ABFv4 Example:

 

ipv4 access-list abf-1
 10 permit ipv4 any 100.100.100.0/24 nexthop1 vrf RED ipv4 1.1.1.1 nexthop2 vrf BLUE ipv4 2.2.2.2 nexthop3 ipv4 3.3.3.3

 

ABFv6 Example:

 

ipv6 access-list abf-2
 10 permit ipv6 any 2010::/64 nexthop1 vrf A ipv6 2020::1 nexthop2 vrf B ipv6 2030::1 nexthop3 vrf C ipv6 2040::1

 

In the above example, three nexthops are specified in the ACE rule; nexthop1 has the highest priority and nexthop3 the lowest. If nexthop1 is reachable, i.e., a route exists for nexthop1, it will be selected for forwarding the traffic matching the ACE. nexthop2 is selected only if nexthop1 is not reachable (there is no route present for nexthop1) and nexthop2 is reachable (nexthop2 has a route), and so on.
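The priority-ordered selection described above can be sketched in plain Python. This is an illustrative model only, not XR code: the RIB is simplified to a per-VRF set of resolvable addresses, and the nexthop list is assumed to be ordered nexthop1 first.

```python
def select_abf_nexthop(nexthops, rib):
    """Return the highest-priority reachable ABF nexthop, or None.

    nexthops: list of (vrf, ip) tuples in priority order (nexthop1 first).
    rib: dict mapping VRF name -> set of IPs that have a route (simplified RIB).
    """
    for vrf, ip in nexthops:
        if ip in rib.get(vrf, set()):
            return (vrf, ip)      # first nexthop with a route wins
    return None                   # no ABF nexthop reachable: regular forwarding

# Model of the ABFv4 example: nexthop1 in VRF RED has no route,
# so nexthop2 in VRF BLUE is selected.
rib = {"BLUE": {"2.2.2.2"}, "default": {"3.3.3.3"}}
nexthops = [("RED", "1.1.1.1"), ("BLUE", "2.2.2.2"), ("default", "3.3.3.3")]
print(select_abf_nexthop(nexthops, rib))  # ('BLUE', '2.2.2.2')
```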

 

ABF Default Route

 

A list of up to three default IP addresses can also be specified for ABF. Each IP address specifies a next-hop router on the path towards the destination to which the packets must be forwarded, used only if there is no explicit route for the destination address of the packet in the routing table, i.e., the packet DA lookup results in the default route. Under this condition ABF will be used to select the nexthop; the first IP address associated with a currently “up” connected interface will be used to route the packets. The following is an example; note the “default” keyword in the ACE rule.

 

ABFv4 Example:

 

ipv4 access-list abf-1
 10 permit ipv4 any 100.100.100.0/24 default nexthop1 vrf RED ipv4 1.1.1.1 nexthop2 vrf BLUE ipv4 2.2.2.2 nexthop3 vrf GREEN ipv4 3.3.3.3

 

ABFv6 Example:

 

ipv6 access-list abf-2
 10 permit ipv6 any 2010::/64 default nexthop1 vrf A ipv6 2020::1

 

In the above example, if the matching traffic has no specific route for the packet DA and only the default route is present, ABF nexthop1 is selected for forwarding. If none of the ABF nexthops are UP, the packet is routed using the default route, as expected. Note that if the packet DA has a route learnt by a routing protocol, the ABF default nexthops are not selected; instead, the packet is forwarded using the route for the packet DA in the incoming interface VRF. Let's illustrate this with an example:

 

In the above ABFv4 example, let's say the incoming packet DA is 100.100.100.100:

 

  • If 100.100.100.100 lookup results in default path, then ABF nexthop is selected in the order of priority when UP, where nexthop1 has highest priority and nexthop3 has lowest priority.

 

  • If 100.100.100.100 has a non-default route, then packet will be forwarded using packet DA lookup, ABF path is not taken in this case.

 

 

  • If 100.100.100.100 has only the default route and none of the ABF nexthops are UP, then the packet will be forwarded using the default route.
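The three cases above can be condensed into a small decision sketch (illustrative Python under simplified assumptions, not actual XR behavior code; the function names and inputs are made up for the example):

```python
def default_abf_decision(da_has_specific_route, up_abf_nexthops):
    """Forwarding decision for an ACE that uses the 'default' keyword.

    da_has_specific_route: True if the packet DA resolves to a
        non-default (protocol-learnt or connected) route.
    up_abf_nexthops: ABF nexthops that are UP, in priority order.
    """
    if da_has_specific_route:
        return "DA lookup"          # specific route wins; ABF skipped
    if up_abf_nexthops:
        return up_abf_nexthops[0]   # highest-priority UP nexthop
    return "default route"          # no ABF nexthop UP

# DA 100.100.100.100 hits only the default route and nexthop1 is UP:
print(default_abf_decision(False, ["1.1.1.1", "2.2.2.2"]))  # 1.1.1.1
```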

 

ABF VRF select

 

This variant allows specifying only the nexthop VRF instead of a specific nexthop IP address. The packet DA lookup is then performed in the ABF nexthop VRF: if a route is present the packet is forwarded, otherwise it is dropped. This allows a packet arriving on an ingress VRF to be routed out of an egress interface that is in a different VRF.

 

ABFv4 VRF select Example: Ingress vrf BLUE , Egress vrf RED

 

ipv4 access-list abf-1
 10 permit ipv4 any 100.100.100.0/24 nexthop1 vrf RED

 

interface TenGigE0/1/0/0
 vrf BLUE
 ipv4 address 30.30.30.1/24
 ipv4 access-group abf-1 ingress

 

In this case a packet arriving in VRF BLUE will be forwarded to a nexthop in VRF RED. Note that the VRF select feature can be used to send a packet arriving on the default VRF to a non-default VRF nexthop. Similarly, a packet arriving on a non-default VRF interface can be forwarded to a nexthop in the default VRF. Please see the following example.

 

Example: Ingress vrf BLUE, and egress vrf DEFAULT.

 

ipv4 access-list abf-default

10 permit ipv4 any 100.100.100.0/24 nexthop1  

 

interface TenGigE0/1/0/0
 vrf BLUE
 ipv4 address 30.30.30.1/24
 ipv4 access-group abf-default ingress

 

In the above case, a packet arriving on VRF BLUE will be forwarded to a nexthop in the default VRF; the packet DA lookup will be performed in the default VRF table for forwarding.

ABF with track option

 

The track option in ABF enables a track object to be specified along with the nexthop VRF and IP address. The status of the track object is used together with the reachability of the ABF nexthop to determine whether the corresponding nexthop should be selected for forwarding. At present ABFv4 is supported with the track option; the ABFv6 track option is supported from release 5.1.0 onwards.
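The combined check (track state AND nexthop reachability) can be sketched as follows. This is an illustrative Python model with made-up field names, not XR code:

```python
def select_tracked_nexthop(nexthops):
    """Pick the first nexthop whose track object is Up AND whose
    address is reachable (has a route).

    nexthops: list of dicts in priority order with hypothetical
    'ip', 'track_up', and 'reachable' keys (simplified model).
    """
    for nh in nexthops:
        if nh["track_up"] and nh["reachable"]:
            return nh["ip"]
    return None  # no eligible nexthop: fall back to regular forwarding

# IGW1's track object is Down (e.g. IPSLA probe timed out),
# so the second nexthop is selected even though IGW1 has a route.
nhs = [
    {"ip": "10.10.10.1", "track_up": False, "reachable": True},
    {"ip": "10.20.20.2", "track_up": True,  "reachable": True},
]
print(select_tracked_nexthop(nhs))  # 10.20.20.2
```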

 

The following simple topology will be used to show this interaction:

 

Untitled.png

 

Description

 

From the above diagram, the requirement is to have traffic coming into the RED interfaces/VLANs/VRFs on A9K-PE1 forwarded with ABF to IGW1; if IGW1 fails or becomes unreachable, the traffic should then be forwarded to IGW2. The same requirement is present on A9K-PE2. There might be multiple Internet gateways in this network, so it's important to associate each PE with its primary and secondary IGW. The examples below use the loopback addresses of the IGWs; equally, one can use any IP address on the GW. In some designs it might be more suitable to use the IP address of the Internet-facing interface.

 

Solution

 

 

 

Configuring IPSLA

 

A prerequisite for IPSLA is the MGBL pie; without this pie the IPSLA CLI will not be available. It is only required on the PEs, as the IGW acts as a responder. Step one is to configure IPSLA tracking towards the IGWs from the PEs. This is an example from A9K-PE1.

 

RP/0/RSP0/CPU0:A9K-PE1#sh run ipsla
Thu Mar 28 21:54:21.934 UTC
ipsla
 operation 1
  type icmp echo
   destination address 10.10.10.10
   frequency 5
  !
 !
 operation 2
  type icmp echo
   destination address 10.20.20.20
   frequency 5
  !
 !
 reaction operation 1
  react timeout
   action logging
  !
 !
 reaction operation 2
  react timeout
   action logging
  !
 !
 schedule operation 1
  start-time now
  life forever
 !
 schedule operation 2
  start-time now
  life forever
 !
!
RP/0/RSP0/CPU0:A9K-PE1#

 

 

 

Verify the IPSLA operation with show ipsla statistics

 

 

 

RP/0/RSP0/CPU0:A9K-PE1#show ipsla statistics
Thu Mar 28 21:54:53.052 UTC
Entry number: 1
    Modification time: 21:19:42.937 UTC Thu Mar 28 2013
    Start time       : 21:16:55.158 UTC Thu Mar 28 2013
    Number of operations attempted: 456
    Number of operations skipped  : 0
    Current seconds left in Life  : Forever
    Operational state of entry    : Active
    Operational frequency(seconds): 5
    Connection loss occurred      : FALSE
    Timeout occurred              : FALSE
    Latest RTT (milliseconds)     : 2
    Latest operation start time   : 21:54:50.367 UTC Thu Mar 28 2013
    Next operation start time     : 21:54:55.367 UTC Thu Mar 28 2013
    Latest operation return code  : OK
    RTT Values:
      RTTAvg : 2         RTTMin: 2         RTTMax : 2
      NumOfRTT: 1        RTTSum: 2         RTTSum2: 4

Entry number: 2
    Modification time: 21:46:26.132 UTC Thu Mar 28 2013
    Start time       : 21:46:26.116 UTC Thu Mar 28 2013
    Number of operations attempted: 102
    Number of operations skipped  : 0
    Current seconds left in Life  : Forever
    Operational state of entry    : Active
    Operational frequency(seconds): 5
    Connection loss occurred      : FALSE
    Timeout occurred              : FALSE
    Latest RTT (milliseconds)     : 5
    Latest operation start time   : 21:54:51.325 UTC Thu Mar 28 2013
    Next operation start time     : 21:54:56.325 UTC Thu Mar 28 2013
    Latest operation return code  : OK
    RTT Values:
      RTTAvg : 5         RTTMin: 5         RTTMax : 5
      NumOfRTT: 1        RTTSum: 5         RTTSum2: 25
RP/0/RSP0/CPU0:A9K-PE1#

 

Configuring Tracking

 

 

 

The next step is to configure tracking, so as to associate the IPSLA operation with a tracker. Tracked objects can be line protocol, routes, IPSLA reachability, lists, etc. In the configuration below we associate the above IPSLA operation 1 through rtr 1.

 

 

 

RP/0/RSP0/CPU0:A9K-PE1#sh run track
Thu Mar 28 21:33:44.466 UTC
track IGW1
 type rtr 1 reachability
!
track IGW2
 type rtr 2 reachability
!
RP/0/RSP0/CPU0:A9K-PE1#

 

 

 

Verify the track operation status:

 

 

 

RP/0/RSP0/CPU0:A9K-PE1#show track brief
Thu Mar 28 21:55:53.004 UTC
Track        Object                  Parameter       Value
----------------------------------------------------------------
IGW2         IPSLA Operation 2       reachability    Up
IGW1         IPSLA Operation 1       reachability    Up
RP/0/RSP0/CPU0:A9K-PE1#

 

Configuring ABF on IOS XR

 

 

 

The next step is to associate the tracker with the ABF statement; this is done through the standard ABF CLI. The nexthop IP address does not need to be the same as the tracked IP address, as shown in this example.

 

 

 

RP/0/RSP0/CPU0:A9K-PE1#sh run ipv4 access-list
Thu Mar 28 21:50:23.672 UTC
ipv4 access-list ABF_IGW
 10 permit ipv4 any any nexthop1 track IGW1 ipv4 10.10.10.1 nexthop2 track IGW2 ipv4 10.20.20.2
!

 

Bind the ABF statement to an ingress interface.

 

 

 

RP/0/RSP0/CPU0:A9K-PE1#sh run int bundle-e1
Thu Mar 28 21:50:28.751 UTC
interface Bundle-Ether1.100
 ipv4 address 192.168.100.1 255.255.255.0
 ipv4 access-group ABF_IGW ingress
!
RP/0/RSP0/CPU0:A9K-PE1#

 

Note: the above configuration can be applied to an interface in a VRF or in the global routing table.

 

 

Null0 to drop traffic

 

The current functionality of ABF does not allow us to use recursive static routes to blackhole traffic locally. In the example given below, if nexthop1 fails the suitability criteria, the packet is forwarded based on the DA (normal forwarding) rather than using the recursive route pointing to Null0.

 

RP/0/RSP0/CPU0:asr9010-e11-1#sh access-list ABF-TEST

ipv4 access-list ABF-TEST

10 permit ipv4 host 109.159.249.66 host 10.10.10.10 nexthop1 ipv4 109.159.249.74 track NH1 nexthop2 ipv4 169.254.0.254

20 permit ipv4 any any

RP/0/RSP0/CPU0:asr9010-e11-1#

 

RP/0/RSP0/CPU0:asr9010-e11-1#sh route 169.254.0.254

Routing entry for 169.254.0.254/32

  Known via "static", distance 1, metric 0 (connected)

  Installed Oct 10 13:28:57.613 for 01:35:15

  Routing Descriptor Blocks

    directly connected, via Null0

      Route metric is 0

  No advertising protos.

 

RP/0/RSP0/CPU0:asr9010-e11-1#sh track

Track NH1

        Response Time Reporter 61 reachability

        7 changes, last change 14:57:09 BST Thu Oct 10 2013

        Latest operation return code: TimeOut

        Latest RTT (millisecs) : 0

 

RP/0/RSP0/CPU0:asr9010-e11-1#sh access-lists ABF-TEST hardware ingress location 0/0/CPU0

ipv4 access-list ABF-TEST

10 permit ipv4 host 109.159.249.66 host 10.10.10.10 (50 hw matches)

20 permit ipv4 any any (23559 hw matches)

 

A possible workaround for this is to set a nexthop VRF instead of a specific IP address and configure a static default route (or the destination prefix itself) pointing to Null0 within that VRF.

 

ipv4 access-list ABF-TEST
 10 permit ipv4 host 109.159.249.66 host 10.10.10.10 nexthop1 ipv4 109.159.249.74 track NH1 nexthop2 vrf DROP_VRF
!
router static
 vrf DROP_VRF
  address-family ipv4 unicast
   0.0.0.0/0 Null0

 

RP/0/RSP0/CPU0:asr9010-e11-1#sh track                                                     

Track NH1

        Response Time Reporter 61 reachability

        16 changes, last change 21:30:59 BST Thu Oct 10 2013

        Latest operation return code: TimeOut

        Latest RTT (millisecs) : 0

 

RP/0/RSP0/CPU0:asr9010-e11-1#sh access-lists ABF-TEST hardware ingress location 0/0/CPU0  

ipv4 access-list ABF-TEST

10 permit ipv4 host 109.159.249.66 host 10.10.10.10 (20 hw matches) (next-hop: addr=0.0.0.0, vrf name=DROP_VRF)

20 permit ipv4 any any (146412 hw matches)

RP/0/RSP0/CPU0:asr9010-e11-1#

 

ABF Support Matrix

 

 

The following provides the support matrix for ABF IPv4 and ABF IPv6 on different linecards on the ASR9K platform. Classification is based on the combination of interface types on the ingress and egress linecards for ABFv4 and ABFv6. As mentioned, ABF is an ingress-only feature, i.e., the ABF rule is always applied on the ingress interface. TCAM resources are allocated only on ingress line cards where ABF is configured and are used for classification of incoming traffic. For virtual interfaces such as GRE, TCAM resources are allocated on all LCs where the virtual interface is present. In the following figure the ingress LC determines that the packet will be routed using an ABF rule. The packet is sent to the egress LC with the ABF type in the fabric header, which the egress LC uses to route the packet based on the ABF nexthop instead of the packet DA. As a result, both ingress and egress LCs need to be aware of the ABF feature for it to be supported on a particular LC.

 

Line Cards Support

 

ABFv4 is supported on Ethernet, Enhanced Ethernet, and SIP-700 linecards. ABFv6 is supported only on Enhanced Ethernet and SIP-700. The following support matrices give the combinations of ingress and egress LCs that are supported for ABFv4 and ABFv6 with different interface types.

  • Ethernet: ETH
  • Enhanced Ethernet: E-ETH
  • SIP-700: SI

 

 

Untitled1.png

 

Untitled3.png

 

Ethernet as Ingress LC

ABFv4

 

abfv4.jpg

 

 

ABFv6

Untitled5.png

 

SIP-700 as Ingress LC

ABFv4

 

Untitled6.png

 

ABFv6

 

Untitled7.png

 

Scale Support

 

We have tested 1000 unique ABF nexthops (IPv4 and IPv6 combined). Note that this count does not include duplicates: the same nexthop can be used across multiple ACLs and ACEs and is counted only once towards the supported scale. The system does not enforce this limit through configuration; a user can configure more than 1000 unique nexthops, but any such use case should be tested thoroughly by the customer before deployment.
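As a sketch of how nexthops count towards this limit, the same (VRF, address) pair reused in several ACEs is one unique nexthop. The Python below is purely illustrative and the ACL contents are made up:

```python
# Each entry: (acl_name, sequence, vrf, nexthop_ip) -- hypothetical values.
configured_nexthops = [
    ("abf-1", 10, "RED", "1.1.1.1"),
    ("abf-1", 20, "RED", "1.1.1.1"),   # same nexthop reused: counted once
    ("abf-2", 10, "BLUE", "2.2.2.2"),
    ("abf-3", 10, "RED", "1.1.1.1"),   # reused again across another ACL
]

# Only distinct (vrf, ip) pairs count towards the tested 1000-nexthop scale.
unique = {(vrf, ip) for _acl, _seq, vrf, ip in configured_nexthops}
print(len(unique))  # 2
```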

Load Balancing

 

If the ABF nexthop has multiple ECMP paths, then multiple streams matching the corresponding ACE rule will get load balanced across the ECMP paths. The raw hash is computed on the ingress LC from the incoming packet and is used to select the outgoing path for the ABF nexthop, as in regular IPv4 or IPv6 forwarding. This ensures that all packets of a given stream always take the same path when routed using ABF. As a result, multiple streams using the same ABF nexthop should be load balanced across the ECMP paths. Figure 2 shows load balancing for three incoming streams matching the same ACE rule with an ABF nexthop.
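The per-stream stickiness can be sketched as follows. This is an illustrative Python model: the LC computes its own raw hash in hardware, so a generic hash over an assumed flow 5-tuple stands in here:

```python
import hashlib

def ecmp_path_index(src, dst, proto, sport, dport, num_paths):
    """Pick an ECMP path index from the flow 5-tuple.

    Deterministic per flow, so every packet of a stream takes the
    same path, while different streams spread across the paths.
    """
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Three streams towards the same ABF nexthop with 3 ECMP paths:
# each stream hashes to one stable path.
for sport in (49152, 49153, 49154):
    idx = ecmp_path_index("10.0.0.1", "100.100.100.100", 6, sport, 80, 3)
    print(sport, "->", "path", idx)
```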

 

Untitled8.png

 

Troubleshooting

 

Verify ABF is programmed correctly in the hardware.

 

RP/0/RSP0/CPU0:ios#show access-lists ipv4 abf-1 hardware ingress location 0/1/cpu0

Mon May 13 06:06:58.609 UTC

ipv4 access-list abf-1

10 permit ipv4 any 20.20.20.20 (next-hop: addr=40.40.40.2, vrf name=default)

If the nexthop is programmed in the hardware, then verify that the nexthop is resolved; you can confirm this by pinging the nexthop.

Verify that traffic is matching the ACL for which the ABF rule needs to be applied.

 

The ABF rule is applied only if the traffic matches the corresponding ACE entry:

RP/0/RSP0/CPU0:ios#show access-lists ipv4 abf-perf-1 hardware ingress location$

Mon May 13 06:12:16.203 UTC

ipv4 access-list abf-perf-1

10 permit ipv4 any 20.20.20.20 (294096734 hw matches) (next-hop: addr=40.40.40.2, vrf name=default)

Note that only incoming IPv4 or IPv6 traffic is matched against the ACL rules. Labeled (MPLS) traffic does not match the ACL rules, and hence no ABF forwarding takes place in that scenario.

  • Verify that the ingress and egress interfaces and line cards are supported as per the support matrix given above.

 

For example, ABFv6 is not supported on the Ethernet linecard: neither the ingress nor the egress LC can be an Ethernet LC for ABFv6.

 

  • Verify that ABF NH is pingable and is in resolved state.

 

If ABF is programmed correctly in the hardware and traffic is matching the ACE, verify that the ABF NH is in the resolved state. Confirm that you are able to ping the ABF nexthop that is programmed in the hardware.

 

  • Verify path using traceroute

 

Traceroute can be used to determine the end-to-end path taken by the packet stream. If an ABF match happens for the traceroute probe packet, you should expect to see the ABF nexthop in the traceroute output.

 

Author: Tarun Banka

Comments
Cisco Employee

hi Khaled,

as the new user interface doesn't allow a reply on a comment, this is in response to your comment from "09-19-2017 01:40 PM".

Your expectation is correct. If none of the next-hop address is “UP”, packets go through the standard routing path based on destination address.  

/Aleks

Beginner

Aleks wrote: I don't think that pinging a local IP address will work this way. To understand what really happened, can you run the pings with timeout zero and see which NP counter on the ingress NP matches the pings.

 

I ran pings with timeout=0 and what I saw with

show cef ipv4 drops location

was that “No route drops” was the only counter incrementing.

 

I also checked

show controller np counters

for the ingress NP and found the following fields incrementing continuously based on the number of ping pkts:

PARSE_ENET_RECEIVE_CNT

PARSE_LOOPBACK_RECEIVE_CNT

PARSE_SVC_LPBK_RECEIVE_CNT

PARSE_ABF_LPBK_RECEIVE_CNT

RESOLVE_INGRESS_DROP_CNT

MODIFY_SVC_LOOPBACK_CNT

RESOLVE_INGRESS_L3_PUNT_CNT

IPV4_PLU_DROP_PKT

 

In addition for every 10 increase in the counters logged above, the following two counters increased by one:

PARSE_INGRESS_DROP_CNT

UNKNOWN_L2_ON_L3_DISCARD

 

Thanks,

Hank

Cisco Employee

hi Hank,

 

yes, the "RESOLVE_INGRESS_DROP_CNT" indicates that the NP (happens to be Trident in this case) couldn't resolve the destination. I did some verification in the background and confirmed that ABF is not supported for for-us packets.

/Aleks

Beginner

Thanks Aleks.  This just proves the value of this forum which gives us insight to features that are not documented.

 

Regards,

Hank

Cisco Employee

hi Hank,

that goal is not exactly what we had in mind when we started the forum. ;) I will take the necessary steps to update the configuration guide on CCO.

/Aleks

Beginner

I posted a similar request to the route-policy forum but perhaps the ABF forum is more appropriate.

I am in need of diverting udp/53 for a number of specific IPs to a different next-hop.  I had hoped to do it via ABF but from my understanding it only works on ingress, not egress.  I need to capture the exiting udp/53 requests for only specific IPs and divert them to a different next-hop.  How can I do this with ABF?  If not with ABF or route-policy, then how can I accomplish this?

 

Thanks!

Cisco Employee
hi Hank,

is there a reason why you can't do it on ingress? ABF indeed works only on ingress traffic.

/Aleksandar

Beginner

I want to catch the DNS request as it exits our network and not the response when it returns.

 

-Hank

Beginner

I managed to place the ABF on ingress.  But I also wanted to set up IPv6 ABF but your examples show like this:

 

           ipv6 access-list abf-2

     10 permit ipv4 any 2010::1/64 default nexthop1 VRF A ipv6 2020::1

 

whereas what worked for me was:

ipv6 access-list filters-IPv6
10 permit udp any host 2607:f205:201::e eq domain nexthop1 ipv6 2002:be3:900:1::53

 

When I tried a "permit ipv4 any" as your example suggests inside an ipv6 access-l - the syntax wasn't accepted.

Beginner

I have an ABF as follows:

ipv4 access-list filter
  10 permit udp any host 2.2.2.2 eq domain nexthop1 ipv4 1.1.1.1

  20 permit ipv4 any any

 

But I do not want 1.1.1.1 to be affected by the ABF.  Can I add in:

   5 permit udp 1.1.1.1 any

which would cause packets sourced from 1.1.1.1 to not hit the ABF in line 10?

 

Thanks,

Hank

Beginner

Aleksander,
Back a few months ago you stated:
"is there a reason why you can't do it on ingress? ABF indeed works only on ingress traffic."

Suppose I wish to capture all inbound and outbound udp/53 and only have one interface available to me.  ABF ingress works great.  What alternative solution would you recommend for capturing outbound udp/53?

 

Thanks,

Hank

Beginner

Hello,

 

I have several questions regarding ABF feature - it's a little bit confusing me after reading this article:

 

1) ABF checks Next-Hop reachability via RIB or FIB?

2) What happens in case i configure ABF with Next-Hop without default option and destination hits default route entry in RIB, will ABF take place in this case and forward traffic towards the configured Next-Hop or it will forward traffic via default route?

3) In case i configure ABF with Next-Hop with default option - from the example above: If RIB has default route and none of the ABF nexthops are UP then packet will be forwarded to the default route - how it's possible? If i have default route in RIB, then Next-hop can be resolved in this case as they will be hitting the default route entry - confusing. Does it relate to directly connected Next-Hops or indirect Next-Hops?

4) Does ABF supports recursive lookup when specified Next-Hop is not directly connected (loopback of 2-hops away router)?

5) Does it support to resolve Next-Hop with RSVP tunnel and send actually traffic within RSVP tunnel towards that Next-Hop or ABF only works on plain (non-mpls) network segment? Does it work with Next-Hops which are resolved by MPLS (LDP/RSVP-TE)?

6) How does it work with applied BGP Flow-Spec when BGP flow-spec installs dynamic ACLs on the same ingress interface where ABF is applied? Who is taking preference in this case?

7) If i have 2 reachable Next-hops configured in ABF and traffic is using first Next-Hop which is failed at some point, in this case will i see traffic loss or it will be hitless scenario as traffic will be forwarded straight away towards the next reachable Next-Hop?

8) If i have ABF applied on interface (without default option) with 1 configured reachable Next-Hop then all the matched traffic will be counted in that ABF and forwarded towards the mentioned Next-Hop. What happens in case that Next-Hop fails, will this traffic be still matched and counted in the ABF statistics despite of the fact that forwarding decision will be made upon RIB resolution or that ABF will not be taken in place for match statistics at all and traffic will be using only RIB/FIB decisions?

 

Thanks in advance,

Vladimir

Cisco Employee

@voland84 

 

1) In IOS XR architecture RIB is never used for packet forwarding. Packets forwarded in fast path (ASIC or microcode) are forwarded based on the information available in the HW FIB. Packets punted to CPU for slow path forwarding (e.g.: some fragmentation scenarios, packets that trigger adjacency resolution, etc.) are forwarded using the SW FIB.

 

2) ABF will not be used in this case. NP will use the HW FIB entry to forward the packet.

 

3) In above example next-hops were in a different VRF. You have to take that into account when you analyse the explanation. 

 

4) the next-hop configured in ABF doesn't have to be directly attached. As long as there's a prefix in the given VRF that covers the next-hop IP address, ABF should be able to forward the packet.

 

5) if clear IPv4 packets are received on the interface where ABF is configured, you should be able to inject them into MPLS-TE tunnel using ABF.

 

6) if an ABF entry and a BGP flowspec entry describe the same flow, BGP flowspec overrides the ABF.

 

7) you shouldn't see any drops. 

 

8) if ABF next-hop is not reachable, NP should default to HW FIB for forwarding. 

 

/Aleksandar
