xthuijs
Cisco Employee

 

 

Introduction

In this document we'll discuss how the route scale of the ASR9000 linecards works. There is a significant difference between the first-generation linecards, which are Trident based, and the next-generation linecards, which are Typhoon based.

 

In this article you'll find guidance on how to identify whether you have a Trident or Typhoon linecard, what the scale type really means and what it affects, how the route scale parameters work on Trident, and how the route scale is different on Typhoon.

 

Do I have a Trident or a Typhoon linecard?

The following linecards are Trident based:

 

40G series:

A9K-4T

A9K-8T/4 (8 port 10GE oversubscribed linecard)

A9K-2T20G

A9K-40GE

 

80G series:

A9K-8T

A9K-16T/8 (16 port 10GE oversubscribed linecard)

 

Regardless of the scale version denoted by the suffix -L, -B, -E

 

The following linecards are Typhoon based:

A9K-24x10GE

A9K-100G

A9K-MOD80

A9K-MOD160

and the ASR9001

 

Regardless of the scale version denoted by the suffix -TR, -SE

 

SIP-700 is CPP based

 

What is the difference between the L/B/E type of the cards?

All ASR9000 linecards come in different scale versions with different price points. The scale version does NOT affect the route scale.

Also, the architectural layout of the linecard is the same, and all features supported on one scale version of a linecard are supported on the other scale versions as well.

 

So what precisely is different, then, between these linecard scale types?

The following picture gives a layout of the Network Processor (NP) and the memory that is attached to it.

 

[Figure: Network Processor (NP) and its attached memories: Lookup/Search memory, Stats memory, Frame memory and TCAM]

The lookup or Search memory is used for the L2 MAC table (in IOS terms "CAM" table) and by the FIB (Forwarding Information Base, eg where the CEF table puts the forwarding info).

 

The L/B/E cards change the Stats, Frame and TCAM size and therefore derive a different scale based on:

 

Stats memory:

     Interface counters, QOS counters, EFP counters.

     The more stats memory I have, the more QOS Policies I can have, the more interfaces/EFP's/Xconnects etc I can support on this card.

 

Frame memory:

     Used for packet buffering by QOS

     The more frame memory I have, the more packets I can buffer in the queues.

 

TCAM:

     Used for vlan-to-subinterface matching (eg a vlan/combo comes in and we determine which subinterface it matches most closely), ACL scale and QOS matching scale.

 

The SEARCH memory does not change between L/B/E, hence the Route and MAC scale remains the same between these cards.

 

To Sum up: The difference between L/B/E (for Trident) or TR/SE (for Typhoon) mainly affects the:

  • QOS scale (Queues and Policers)
  • EFP scale (L2 transport (sub)interfaces and cross connects)

 

What does not change between the types is:

MPLS label scale, Routes, MAC, ARP, Bridgedomain scale

 

What about these hw-module scale profile commands on Trident?

The Trident linecards provide a great amount of flexibility based on the deployment scenario you have.

As you can see from the above description, the search memory is not affected by the scale type of the linecard.

Considering that the ASR9000 was originally developed as an L2 device, that search memory, shared between MAC and Route scale, was divided in "favor" of the MAC scale, leaving a limited route capability.

 

With the ASR9000 moving into the L3 space we provided scale profiles to effectively adjust the sharing of the Search memory between L2 and L3 in a more user defined manner. So by using the command

 

RP/0/RSP1/CPU0:A9K-BOT(admin-config)#hw-module profile scale ?

  default  Default scale profile

  l3       L3 scale profile

  l3xl     L3 XL scale profile

 

You can move that search memory in favor of L2 or L3:

 

       "default" or L2 mode                l3xl mode

[Figures: Search memory allocation in "default"/L2 mode (left) vs l3xl mode (right)]

This inherently means that the increased FIB scale comes at the cost of the MAC scale, in the following manner:

 

[Table image: Trident scale per profile (default/l3/l3xl), including route, MAC, EFP and subtree figures]

Notes:

1) This scale table is Trident specific.

2) Some values are testing limits (eg IGP routes), some are hardware bound.

3) The EFP number is dependent on the scale type of the linecard (E/B/L); what this tries to show is that the EFP scale is not affected by the HW profile scale command.

 

Typhoon Specific

Typhoon has a FIB capability of 4M routes. Typhoon uses separate memory for L2 and L3 and therefore the profile command discussed above is not applicable to the Typhoon based linecards.

 

 

Understanding IPv4 and IPv6 route scale

As you can see in the scale table above, the number of IPv6 routes is half the number of IPv4 routes. v6 routes consume more space in the FIB structures, and the system accounts for this by counting each v6 route as twice a v4 route.

 

Now when we state that we have 1M FIB scale in the L3 mode, we should read it as 1M credits.

Then knowing that a v4 route consumes 1 credit and an ipv6 route consumes 2 credits, we can compile the following formula:

 

Number of IPv4 routes + 2 * the number of IPv6 routes <= Number of credits as per scale profile
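
As a hypothetical worked example: 600k IPv4 routes plus 150k IPv6 routes consume 600k + 2 x 150k = 900k credits, which still fits within the 1M credit budget of the Trident L3 profile.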

 

Typhoon Specific

This logic of v4/v6 scale is the same for Typhoon, but with the notion that Typhoon has 4M credits.

 

Understanding Subtrees

One concept that was referenced in the scale table is the SUBTREE. Subtree is a method of implementing a Forwarding Information Base. Trident uses this implementation methodology.

 

While the route scale in, say, the L3 profile is 1M ipv4 routes, it also depends on which VRF the routes are in (based on their tableID) and on the subtree size.

 

Table ID's 0 to 15 have a subtree assigned per /8. That means that they can reach a 1M route scale individually as long as you don't exceed the number of routes per subtree. In L3 mode the subtree size is 128k, so in order to reach the 1M route scale I need at least 8 /8's filled with 128K routes each.
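
As an illustration with hypothetical numbers: 256k routes in one of these tables can only be programmed if they are spread over at least two /8 subtrees, since a single subtree holds at most 128k routes in L3 mode.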

 

Note that the route scale mentioned is the sum of all routes combined across all vrf's.

 

VRF table ID's higher than 15, up to the max vrf scale, only have one subtree in total, which means that the route scale for those VRF's is 128k tops in L3 mode.

 

IPv6 routes have one subtree, period, meaning that v6 cannot hold more routes than the subtree size as dictated by the configured scale profile.

 

The following picture visualizes the subtree.

Each subtree can point to either a non recursive leaf (NRLDI) or a recursive leaf (RLDI)

 

  1. You can have 4 (or 8, requires admin config profile and has some pps implications) recursive ECMP (eg BGP paths)
  2. Each of those recursive paths can point to 32 non-recursive paths (eg IGP loadbalancing)
  3. Which in turn can be a bundle path with 64 members max.

[Figure: subtree entries pointing to recursive leaves (RLDI) and non-recursive leaves (NRLDI), which in turn resolve to bundle members]
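
As a hypothetical example of that hierarchy: a prefix with 4 recursive BGP paths, each resolving via 2 non-recursive IGP paths, where each IGP path exits over a bundle of 4 members, is still a single FIB entry; the subtree leaf simply points down through those levels.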

 

How to find the tableID of a vrf

 

Note that tableID's are assigned only when you enable an IP address on an interface that is a member of the vrf.

 

RP/0/RSP0/CPU0:Viking-Top#show uid data location 0/0/CPU0 gigabitEthernet 0/0/0/2 ingress | i Table
Fri Apr  9 16:24:32.878 EDT
  Table-ID:           256

 

RP/0/RSP0/CPU0:Viking-Top(config)#vrf GREEN

 

RP/0/RSP0/CPU0:Viking-Top(config-vrf)#address-family ipv4 unicast
RP/0/RSP0/CPU0:Viking-Top(config-vrf-af)#commit
RP/0/RSP0/CPU0:Viking-Top(config-vrf-af)#int g0/0/0/12
RP/0/RSP0/CPU0:Viking-Top(config-if)#vrf GREEN

RP/0/RSP0/CPU0:Viking-Top#show uid data location 0/0/CPU0 gigabitEthernet 0/0/0/12 ingress | i Table
Fri Apr  9 16:22:40.263 EDT
  Table-ID:           512
RP/0/RSP0/CPU0:Viking-Top#

 

Table ID assignment:

 

1) When an RSP boots, tableID assignments start at 0 (verified in labs).

2) ID zero is reserved for the global routing table (given).

3) The first 15 tableID's can carry > 128K* routes, provided that no more than 128k* routes fall within any one subtree (/8) (given limitation).

4) Reconfiguring a VRF increments the tableID values (verified in lab), which might eventually push a vrf out of the preferred table space.

5) No more than a total of 1M routes per system (or as defined by the scale profile), regardless of which tableID these routes are in.

6) In order to reach that 1M route scale, the command hw-module profile scale l3 or l3xl needs to be configured.

 

If you have fewer than 15 VRF's configured, reloading will still result in these tableID's landing in the larger table space (tableID assigned < 15).

A vrf may not get assigned the same tableID value after a reload, but this is not interesting from a user perspective.

 

NOTE1: the table ID is a 16-bit value that is displayed byte-swapped. So tableID 1 (0x0001) shows as 0x0100 = 256, tableID 2 (0x0002) shows as 0x0200 = 512, etc.

 

NOTE2: We're working on an enhancement to make a vrf "sticky" to a particular tableId so you can make sure that this vrf will always get the higher route scale. Track CSCtg2546 for that.

 

*NOTE3: 128k or 256k depending on the scale profile used. Some older (pre-4.0.1) releases had a smaller subtree size of 64k.

Typhoon Specific

Typhoon uses the MTRIE implementation for the FIB and therefore the above Subtree explanation and its associated restrictions do NOT apply to linecards using the Typhoon forwarder.

 

 

Monitoring L3 Scale

You can use SNMP for this by pulling the route summaries for EACH vrf or using the CLI command as follows:

 

RP/0/RSP0/CPU0:A9K-TOP#show route vrf all sum

 

VRF: RED

 

Route Source    Routes    Backup    Deleted    Memory (bytes)

connected       1         1         0          272

local           2         0         0          272

bgp 100         0         0         0          0

Total           3         1         0          544

 

VRF: test

 

Route Source    Routes    Backup    Deleted    Memory (bytes)

connected       0         1         0          136

local           1         0         0          136

bgp 100         1         0         0          136

Total           2         1         0          408

 

VRF: private

 

Route Source    Routes    Backup    Deleted    Memory (bytes)

static          0         0         0          0

connected       1         0         0          136

local           1         0         0          136

bgp 100         0         0         0          0

dagr            0         0         0          0

Total           2         0         0          272

 

The number of routes is provided per source (IGP/BGP etc); for the FIB the source doesn't matter.

Also the memory that is presented is XR CPU memory and is not the memory that is used by the hardware.

 

Because of the Trident subtree implementation, if you want to be accurate you need to count the number of routes in the VRF table ID's 0-15 (with tableID 0 being the global routing table) on a per-/8 basis.

 

Show commands

 

This is the global view as to how things are implemented.

 

  • The RIB, routing information base, solely resides on the RSP and is fed by all the routing protocols you have running.
  • The size of the RIB can grow as far as memory scales.
  • The RIB compiles a CEF table, which we also call the FIB (forwarding information base), which is distributed to the linecards.
  • The linecards complete the FIB entries with the L2 Adjacencies (eg ARP entries, which are isolated to the linecards only, UNLESS you have BVI's, in which case those L2 ADJ's are shared on all LC's).
  • The complete entry is then programmed into the NPU (see the example below).
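
As a rough sketch of how to trace that chain for a single prefix (the prefix, hostname and location here are placeholders, not taken from a validated setup), you could use something like:

RP/0/RSP0/CPU0:router#show route 192.0.2.0/24
RP/0/RSP0/CPU0:router#show cef 192.0.2.0/24 detail location 0/1/CPU0
RP/0/RSP0/CPU0:router#show adjacency summary location 0/1/CPU0

The first command shows the RIB entry on the RSP, the second the FIB entry as programmed on that linecard, and the third the L2 adjacencies held by that linecard.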

 

 

[Screenshots: example show command outputs illustrating the RIB, FIB and linecard programming described above]

 

Summary view

The easiest way to verify and validate resources for the linecard is via this command:

 

RP/0/RSP1/CPU0:A9K-BOT#show cef resource loc 0/1/CPU0

Thu Jun 28 11:02:41.855 EDT

CEF resource availability summary state: GREEN

CEF will work normally

  ipv4 shared memory resource: GREEN

  ipv6 shared memory resource: GREEN

  mpls shared memory resource: GREEN

...

 

 

 

 

Using the ASR9000 as a Route Reflector

This can be done no problem. A route reflector is generally never in the forwarding path of the traffic. This means that we can put all the routes in the RIB and not install them in the FIB based on a policy.

We can use the table-policy under the BGP config to pull in an RPL that denies the installation of routes into the FIB.

Then we can use the RP CPU memory for reflecting routes as far as memory scales.
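
A minimal sketch of such a config (the policy name and AS number are placeholders, not a validated configuration):

route-policy RR-NO-FIB
  drop
end-policy
!
router bgp 65000
 address-family ipv4 unicast
  table-policy RR-NO-FIB
 !
!

With this table-policy in place, BGP keeps the routes for reflection but does not install them toward the forwarding path on the linecards.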

How far can we go?

Depends on the Paths, attributes, size of the attributes and whether you have the high or low scale RSP memory version.

Numbers can be anywhere from 7M to 20M, depending.

ARP vs MAC scale and understanding adjacencies

This is a topic I noticed leading to some confusion. It is important to understand the difference between MAC and ARP scale.

MAC vs ARP

MAC scale refers to what we know in switches as the "CAM" table, basically the mac learning forwarding table, whereas ARP scale refers to the number of ARP adjacencies that an L3 router can hold to complete the mac rewrite string used to forward traffic to its L3 peers.

MAC scale is defined by the L2 bridging tables and is subject to scale differences on Trident LC's depending on the hw-module scale profile used, whereas on Typhoon the MAC scale is in a separate forwarding table (switching table) that is limited to 2M macs tops.
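
As a quick way to get a feel for the current MAC usage (a sketch; the location is a placeholder), the L2 forwarding table can be inspected with something like:

RP/0/RSP0/CPU0:router#show l2vpn forwarding bridge-domain mac-address location 0/0/CPU0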

ARP scale

The ARP scale is a bit tricky: while the hardware forwarder has plenty of memory available to complete the forwarding adjacency, the software may not. A few details:

ARP is local to the linecard: it serves no purpose for an ingress linecard to know the ARP entry for a destination that lives on another linecard. Since the ingress linecard just forwards the traffic over to the destination LC, it doesn't need to know the ARP/mac address of the destination interface.

This allows for a great scale improvement! The ASR9000 has been tested with 128k of those ARP adjacencies per LC! That is massive already, and it is not hard-bound by that number.

There are a few stipulations when it comes to:

-BVI

-Bundle/Etherchannel

You can figure that if the egress interface is a Bundle-Ether or a BVI, the ARP adjacency needs to be present on multiple linecards: specifically those that hold members of that bundle or, in the BVI scenario, those that carry EFPs (Ethernet flow points, ie l2transport interfaces in a bridge domain).

SW processing of ARP

XR software stores the ARP table in what we call shared memory; this is basically a RAM disk (if you're familiar with the old DOS technology) that can technically grow as far as memory goes on the LC. It means that the limit of 128k ARP entries is a "soft limit" and can go further, but that requires the right amount of memory to be available on the LC. Some deployment profiles take more LC memory and therefore may not grow as far as we'd like.

ARP is received by the NPU as a "for me" packet and punted by LPTS to the LC CPU for processing.

There is a single queue which holds all of the arp requests and replies.

The CPU utilization is also a factor in determining the processing capability and scale for ARP.

Packet forwarding with BVI and Bundle

When forwarding over an L3 (routed) adjacency, we obviously need to know the destination mac address to complete the layer 2 header. When the router port is connected to a server, the dmac will be the server's mac address; in the case of a peering router, it will be the mac address of the remote interface. That is all standard and simple. Because an egress linecard does all the encap for this layer 2, there is no need for an ingress linecard to know the ARP entry for a destination interface, and that increases the scale.

When it comes to bundle and bvi, the ARP entry has to be replicated to those locations that have members in the bundle for instance. The following pictures explain:

BVI processing with EFP's on different linecards:

Here you can see that every linecard that has a routed port needs to know the ARP entry, so with BVI all linecards will see the ARP entry tied to a BVI (that means L2 destinations -> EFP's).

Bundle forwarding with members on multiple linecards

With a bundle running as an L3 routed interface, the ingress linecard does the processing as normal, determines the egress interface (which is a bundle in this example), and then determines that the member to be selected can be on either one LC or the other. It forwards the packet over the fabric to the destination linecard that holds the member of the bundle.

In this case, all linecards that have a member in the bundle need to know the ARP entry to complete the forwarding mac rewrite and L2 headers.

How to verify

To verify how many ARP entries a linecard holds we can use this command:

RP/0/RSP0/CPU0:A9K-BNG#show adj summary location  0/0/cPU0
Wed Jan 27 17:10:56.465 EDT

Adjacency table (version 1284) has 66 adjacencies:
     28 complete adjacencies <<<< learnt adj's from remote destinations
      0 incomplete adjacencies
     38 interface adjacencies <<<< local adj's that are bound to my interfaces
      0 deleted adjacencies in quarantine list
     19 adjacencies of type IPv4 <<< by AFv4 address family
         19 complete adjacencies of type IPv4
          0 incomplete adjacencies of type IPv4
          0 deleted adjacencies of type IPv4 in quarantine list
         14 multicast adjacencies of type IPv4
      4 adjacencies of type IPv6 <<<< and here for ipv6
          4 complete adjacencies of type IPv6
          0 incomplete adjacencies of type IPv6
          0 deleted adjacencies of type IPv6 in quarantine list
          4 multicast adjacencies of type IPv6

Ultimately

Ultimately the ARP scale for an asr9000 device or XR in general is dependent on a few things:

  • the number of L2 devices attached (which will grow in bridge domains)
  • if there are bundle or BVI in play (that will need to hold AND replicate entries across linecards)
  • how much free memory is available on the linecard

ARP scale *can* grow, but there is another factor: LPTS. LPTS defines a policer for ARP requests (and learning). XR does passive learning, meaning that if we see a request FROM a host, we "learn" that host and add it to our table. This is awesome because now we don't need to ask for the hw-addr of a host that we already learnt, which saves messaging. However, that will increase the table size. And then we're back to the stipulation mentioned earlier: how much memory do we have available.

Although ARP has significant improvements in XR 5.3.1, it is still a single-threaded process and more can be done, which we are planning for. For instance, making ARP multi-threaded (tough work), or applying queue management to better handle the volume of incoming requests.

Remember that LPTS policer values are set per NPU, so if you have an LC with multiple NPU's, the total forwarded rate can be the configured value multiplied by the number of NPUs on that LC. For example, a policer of 1000 pps on a linecard with 4 NPUs can punt up to 4000 pps of ARP in aggregate.

Tuning of LPTS policers may be necessary to help XR deal better with the volume of ARP requests.
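
A minimal sketch of such tuning, assuming the lpts pifib hardware policer configuration (the rate value is purely illustrative and should be chosen per deployment):

lpts pifib hardware police
 flow arp rate 1000
!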


 

 

Comments
xthuijs
Cisco Employee

Hi Deniz,

because the ucode space in the Trident NPU's is limited and people requested too many features :) we had to rip something out in favor of another feature.

So the hw module profile feature l2 (as opposed to default) gives you PBB (legacy PBB) at the cost of netflow, v6 urpf and ipv6.

There are also other feature or profiles that provide for some code optimization for instance as an LSR router, but coming at the cost of something then also.

In general, except for the hw-module profile scale l3xl (to make sure Trident has enough route scale when running as an inet edge), you don't need to tune these.

cheers!

xander

Deniz AYDIN
Level 1

Hi Xander,

We are using IOS XR version 5.1.3, enhanced line cards and the l3 scale profile. We need full route table support (internet in a vrf). But it says depricated for l3 scaling! I can't find any information in the docs, neither in the configuration nor the command reference guide. I guess default scaling will not be enough as the current internet table is more than 500K.

hw-module profile scale ?
  bng-max  BNG max scale profile
  default  Default scale profile
  l2       L2 scale profile
  l3       L3 scale profile (depricated)
  l3xl     L3 XL scale profile
  lsr      LSR scale profile
  sat      nV Satellite scale profile

Best Regards,

Deniz.

xthuijs
Cisco Employee

Hi Deniz, the profile scale l3/l3xl settings are for Trident linecards only. The enhanced ethernet cards are Typhoon, which have 4M fib routes and don't have the subtree restriction discussed above.

The only thing to note is that the l3xl changes the memory heap size a bit for Intel Processors (eg RSP440), so BGP has a bit more scale in terms of its table (eg when you have millions of paths).

But the l3/l3xl to reach 1M or 1.3M FIB is for Trident only, Typhoon has 4M regardless of that command.

xander

Deniz AYDIN
Level 1

Ahh I missed it, sorry for that. You have already explained this in the doc :(

Thanks a lot.

Won Lee
Level 1

Hi Xander,

How does PIC (Prefix Independent Convergence) Edge affect the FIB scaling? (ASR9k RSP2/Trident LC/L3 profile in an SP network.) Would each PIC backup path be counted as a separate FIB entry? If it does, that means the FIB entry max will be lower than the listed max.

 

Thanks in advance!

-Won 

xthuijs
Cisco Employee

hey won,

PIC does not affect the prefix scale; due to the way the fib is implemented, this is by its very nature.

A route itself, when installed in the FIB, costs you a credit, but PIC does not amplify or multiply that. When you have a route that has multiple next hops (eg BGP paths), or a next hop has multiple exit points (eg IGP routes to that next hop), it is still just that number of routes. PIC doesn't multiply it. PIC just shifts a pointer from one path or next hop to another if they are available, allowing you to converge a number of prefixes that leverage that path/hop in one go.

cheers

xander

Won Lee
Level 1

Thanks a lot for details again. 

Please allow me to ask one more question. In our implementation of PIC Edge, using add-path with "best 3" to advertise, the ASR9k BGP session to the IPv4 RR shows 1.4 million routes received (3 x full internet table). In my understanding this seems to be because each path with a path-id counts as one route. If this is the case, does this 1.4 million count against the ASR9k BGP table limit? I heard the 32bit XR BGP table can take only up to around 3 million at max, if I am correct. We have more transit peering sessions and are wondering if it is ok to advertise all route paths (add-path advertise all) instead of the current setting (add-path advertise best 3).

Regards,

-Won

xthuijs
Cisco Employee

BGP uses a concept called path lists and attribute sets. So multiple prefixes using the same nexthop or same attribute set can "share" that path list, that saves memory and is very fast to update (prefix independent convergence).

With a 32bit OS, which XR is today, you can't access more than 4G of mem, so a process itself cannot access more than 4G. With 4G of mem your heap is about 2.1-2.7G. With this much memory available you can easily get to 17M prefixes. Now if there is a wide variety of path attributes and next hops etc, then you consume more memory, so how far you can go is not easy to say.

In your example 3M is well below what BGP can handle in XR.

Note however that the FIB, what gets installed in HW, is max 1.3M for Trident and 4M for Typhoon. So if you have a Trident based RR, you will need to use table-policy to filter what gets installed in the FIB. An RR generally doesn't need the full table in hw since it is not in the forwarding path.

regards

xander

Emperor2000
Level 1

Hello Xander.

We have now managed to get more than 256k routes in on a Trident linecard but are instead running into an issue where the router is complaining that it cannot allocate enough labels for all the prefixes.

We tried to extend the label range from 24 000 - 289 999 to 24 000 - 650 000 but we then get this in our logs. 

LC/0/0/CPU0:Feb 18 15:33:10.517 : fib_mgr[175]: %PLATFORM-PLAT_FIB-3-ERROR_NOT_SUPP : Out of range label key:290000, H/W supported max label key:289999  : pkg/bin/fib_mgr : (PID=471146) :  -Traceback= 4a297140 4c33417c 4c342f94 4bf079cc 4bf41550 4bf14044 4bf029f4 4bba4810 4009d0ac 400a131c 400a2238 4018a128 4018c5e0 4018e234 401938f0 4019c8a8
LC/0/0/CPU0:Feb 18 15:38:29.870 : fib_mgr[175]: %PLATFORM-PLAT_FIB-3-ERROR_NOT_SUPP : Out of range label key:344704, H/W supported max label key:289999  : pkg/bin/fib_mgr : (PID=471146) :  -Traceback= 4a297140 4c33417c 4c342f94 4bf079cc 4bf41550 4bf14044 4bf029f4 4bf1f5ac 4bf25b14 4befc27c 4bef95cc 4bbbc320 40095758 400a376c 400a4140 4018cd70

 

However when checking the table summary it seems that the labels are assigned. 

#sh mpls label table summary
Wed Feb 18 15:40:33.958 CET
Application                  Count
---------------------------- -------
LSD(A)                       4
LDP(A)                       36
BGP-VPNv4(A):bgp-default     542993
---------------------------- -------
TOTAL                        543033

 

What is the expected behaviour of this? It seems that the labels are assigned but the router is unhappy about it?

 

xthuijs
Cisco Employee

hi there,

yup you are exhausting the table range of 290k.

If you have Typhoon cards then you can use the full 1M range; in this case your local applications want more labels and they can't be handed out anymore.

You can check your label range with "show mpls label range"; if you have Typhoon you need to configure the expanded range.

If you have Trident, then this is the max you'll ever get, and you will need to resort to per-vrf/per-ce label allocation under the router bgp config section.

xander

a.niko
Level 1

Hello Xander,

Arnold (kpn) here. I was reading up on route scaling (RIB/FIB) for IOS XR platforms as we are currently deploying here, and I was wondering if the same rules you describe here also apply to other platforms running IOS XR, such as the Carrier Routing System. I realise the hardware is of course different to Trident/Typhoon gear, but can you provide similar information for the CRS systems (CRS3 for example)? We deploy ASR's as network mpls/vrf PE edges, and currently we run CRS without VRF's but are considering this for the future.

 

regards

Arnold

xthuijs
Cisco Employee

hey arnold,

this info above is very specific to the Trident/Typhoon hardware and as such doesn't pertain to CRS. Let me have the CRS escalation folks connect with you, and/or we will see if they can write a similar overview for the CRS-based hardware.

cheers

xander

harindhafdo
Level 1

Hi Xander,

Is there any difference between RSP440-SE and RSP440-TR in the number of BGP peers and the number of IPv4/IPv6 prefixes? If yes, can we have the difference in numbers?

Rgds

Harin

xthuijs
Cisco Employee

hi harin, the SE RSP version has more RP memory (12G vs 6G on the TR).

What BGP can allocate is still 4G (32bit OS, but 64bit memory mapper). So the BGP scale doesn't go higher with the SE card, but the multi-dimensional scale will increase. So if you have high bgp, high qos, many acl, lots of EFP's etc etc, or when you do BNG, then more processes want to have more memory, which is when the SE card comes into play.

cheers!

xander

harindhafdo
Level 1

Thanks Xander for your quick response as always!!!

Rgds

Harin
