ASR 512K License

foad jalali
Level 1

Thank you for all your help and support.

 

I have a big question about the ASR 9000 family routers that I can't find any answer to.

 

I have an ASR 9006 router on which I want to terminate 512K subscribers, but unfortunately I can't find any license for 512K subscribers.

 

The only license that Cisco offers is A9K-BNG-LIC-8K, per slot or line card.

 

If I have 2 line cards that are A9K-MOD80-SE, should I pre-order only 2 licenses, or more?

 

I also want to know what the difference is between asr9k-bng-px.pie-5.1.1 and A9K-BNG-LIC-8K.

And if I don't want to get a license, does my router still work as a BNG, and how many subscribers does it support without any license?

 

Thank you again

50 Replies

xthuijs
Cisco Employee

Hi Foad, in that hardware configuration you won't be able to reach 512k subs, here is why:

On the NPU we have a hard limit of 32k subs (this is because the uidb table, that is, the interface descriptor table, is 32k max). The MOD80 has 2 NPUs, so on the MOD80 you can never go beyond 64k subs.

On the 9006 you have 4 LC slots: 4 x 64k = 256k. And that then leaves no room for uplink interfaces (and no redundancy on access either).

For this scale you probably want to consider the 9010 or 9922 that provide more slots.
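As a quick sanity check of those numbers, here is a minimal Python sketch. It uses only the figures quoted in this thread (32k subs per NPU, 2 NPUs per MOD80, 4 LC slots in a 9006); whether "32k" is exactly 32,000 or 32,768 isn't stated here, so the round number is an assumption.

```python
# Sanity check using only the numbers from this thread:
# 32k subs per NPU (uidb / interface-descriptor table limit),
# 2 NPUs per MOD80 (one per MPA bay), 4 LC slots in an ASR 9006.

SUBS_PER_NPU = 32_000
NPUS_PER_MOD80 = 2
LC_SLOTS_9006 = 4

max_per_mod80 = SUBS_PER_NPU * NPUS_PER_MOD80   # 64,000
max_per_9006 = max_per_mod80 * LC_SLOTS_9006    # 256,000, with nothing left for uplinks

print(f"Max subs per MOD80: {max_per_mod80:,}")
print(f"Max subs on a 9006 full of MOD80s: {max_per_9006:,}")
print(f"Shortfall against a 512k target: {512_000 - max_per_9006:,}")
```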

 

regards

xander

"so on the MOD80 you can never go beyond 64k subs"

Xander, are you talking about 64K single stack or 64K dual stack sessions per MOD80?

 

Regards,

Dimitris

64k dual stack, Dimitris, on the MOD80. That is the max that card can ever carry (because it only has 2 NPUs).

Here is a good overview I think

Thanks for the quick answer!

Based on your answer, I assume that the term "sessions" in the scaling matrix refers to dual-stack sessions.

Hi Xander,

Sorry for coming back to this, but we need some additional clarification regarding ASR9K BNG scalability.
Our main question is whether there are any session limits per 10GE port or per MOD80 bay.

To fully understand the scalability options, I would like to ask you the following questions, if possible:

1. I have one MOD80 with one MPA-4x10GE. Can I reach 64K sessions, or is each NPU assigned to one bay, so 32K sessions per MOD80 bay?
2. I have one MOD80 with one MPA-4x10GE. Can I distribute the supported sessions unequally between the 10GE ports and still reach the maximum supported sessions (e.g. if the max is 64K, can I distribute them like this: 40K on the first, 24K on the fourth and 0 on the second/third)?
3. I have one MOD80 with one MPA-4x10GE. Assuming I can reach 64K sessions (question 1), can I use 2 x 10GE as access/customer-facing interfaces (terminating the PPPoE sessions) and the other 2 x 10GE as uplink interfaces?
4. I have one MOD80 with two MPA-4x10GE. Can I distribute the 64K sessions unequally between them and still reach the maximum supported sessions (e.g. 20K in the first MPA and 44K in the second)?
5. I have a full 9006 (4 x MOD80, 8 x MPA-4x10GE) and I want to reach 256K sessions. Why are you saying that I don't have room for uplink interfaces? Can't I use some of the 10GE interfaces for uplink?

When I say "sessions" I always mean dual-stack PPPoE sessions :)

Thank you very much in advance,
Dimitris

Hi Dimitris:

Each NPU has a limit of 32k. A MOD80 has 2 NPUs, one per bay.

1) So the total sum of sessions on that MPA cannot exceed 32k in a MOD80. There is no per-port limit. However, if you have shaping QOS, you need to configure the QOS resources so they sit where you need them, on the right port, because the parent shapers come in 8k chunks: if you have one interface with 10k sessions, you need to assign 2 chunks to that interface. This chunking only applies to sessions that have shaping needs (policing is not affected by it); there is a small worked sketch of this after answer 5 below.

2) Yes you can (but still 32k per NPU, right). Especially when there is no QOS need, there is no additional config needed. If shaped QOS is needed, "1" above applies.

3) Yes you can! Say you have 16k sessions EACH on ports 0 and 1; you can use ports 2 and 3 for uplinks, no problemo!

4) Yes you can! :) The LC can hold 64k max, which means 2 NPUs fully loaded with sessions; but if you, say, only need 40k, then the first NPU can hold 32k and the other one 8k, not a problem.

5) 4 MOD80s, with 2 NPUs per slot accounting for 32k each. This is 256k total. If you have the 4x10 MPA and you're not using ALL your interfaces for BNG access, then yeah, you have some to spare for uplinks, sure thing. To reach this scale you need LC-based subs, so you can't use bundle access (as that would pull them to the RSP).
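Here is the worked sketch of the shaper-chunk arithmetic mentioned in answer 1 (a minimal Python sketch; the only figure taken from this thread is the 8k chunk size, and the interface names and session counts are purely illustrative):

```python
import math

CHUNK_SIZE = 8_000   # parent shapers are allocated in 8k chunks (per answer 1)

def shaper_chunks(shaped_sessions_per_port):
    """Chunks to assign per access port; pass counts of *shaped* sessions only,
    since policed-only sessions do not consume shaper chunks."""
    return {port: math.ceil(count / CHUNK_SIZE)
            for port, count in shaped_sessions_per_port.items()}

# Illustrative example: 10k shaped sessions on one port needs 2 chunks,
# 6k needs 1, and a port with no shaped sessions needs none.
print(shaper_chunks({"TenGigE0/0/0/0": 10_000,
                     "TenGigE0/0/0/1": 6_000,
                     "TenGigE0/0/0/2": 0}))
# -> {'TenGigE0/0/0/0': 2, 'TenGigE0/0/0/1': 1, 'TenGigE0/0/0/2': 0}
```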

Having 16k subs over a 10G is about 625 kbps (roughly 80 kB/s) each; doesn't seem like a lot of BW, but hey, that is a different story.

Dual-stack: see the chart above; yes, we can do 128k today, and the target is to go higher and higher! (But that likely means you need more LCs, because we need to keep distributing them to the LC processing, as you can understand.)

cheers!

xander

Thank you very much Xander. You were very enlightening once again :)

To sum up, in case I have a 9006 with 2 x MOD80 + 2 x MPA (one MPA in each MOD80), I can reach up to 64K dual-stack PPPoE sessions (RP or LC based).

In order to reach 128K dual-stack PPPoE sessions, I need 2 additional MPAs and the corresponding licenses, but the sessions must be LC based.
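A quick check of that summary arithmetic (a plain Python sketch; it only assumes, as described above, one NPU per populated MPA bay at 32k subs each, and says nothing about licensing):

```python
SUBS_PER_NPU = 32_000   # uidb limit per NPU, as stated earlier in the thread

def mod80_subs(populated_mpa_bays):
    """Dual-stack session capacity of one MOD80: one NPU per populated MPA bay (max 2)."""
    return SUBS_PER_NPU * min(populated_mpa_bays, 2)

# 2 x MOD80 with one MPA each -> 64k; add an MPA to the second bay of both -> 128k
print(2 * mod80_subs(1))   # 64000
print(2 * mod80_subs(2))   # 128000
```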

 

Haha, thanks Dimitris :) And you are correct, that summary is accurate!

xander

http://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r5-1/bng/configuration/guide/b_bng_cg51xasr9k/b_bng_cg51xasr9k_chapter_0101.html#concept_0847714569CA4261B47679B263C0E974

Benefits and Restrictions of Line Card Subscribers

 

Benefits of line card subscribers

These are some of the benefits of LC subscribers:

  • Subscribers built on bundle interfaces and line card physical interfaces can co-exist on the same router.

 

Does this mean that I can have 64K sessions on the RP and an additional 64K on 1 x MOD80 + 2 x MPA, so 128K in total???!!!

Haha, no, you can't do that, Dimitris :) You still have the limit of 32k (uIDBs) per NPU.

So the MOD80 with 2 NPUs is always limited to 64k max total, ever.

 

regards

xander

But if we have another LC, say a 9010 + 36x10G, can we get 128K DS sessions (64K LC-based subs + 64K RP-based subs) from such a box?

We currently support 64k subs per LC. The limit on the 36x10 is a testing (not technical) one: although you can theoretically support more than 64k subs, we haven't validated that.

This is regardless of whether the subscriber is on the RP or LC.

In order to go beyond 64k subs, you need 2 LCs at minimum.

If the LC used is a MOD80, it is capped at 64k due to the explained hw limit; however, an LC with more than 2 NPUs can go beyond the 64k (e.g. 36x10, 24x10, MOD160).
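A tiny illustration of that point (a sketch only, in plain Python, using just the figures from this reply: 32k per NPU, the 64k-per-LC validated figure, and the fact that going past 64k needs at least two LCs; the function names are mine, not anything from IOS XR):

```python
import math

SUBS_PER_NPU = 32_000            # hard uidb limit per NPU
VALIDATED_SUBS_PER_LC = 64_000   # testing (not technical) limit quoted above

def lc_sub_ceiling(npu_count):
    """Per-LC ceiling: the NPU hardware cap or the validated figure, whichever is lower."""
    return min(npu_count * SUBS_PER_NPU, VALIDATED_SUBS_PER_LC)

def min_line_cards(target_subs, per_lc=VALIDATED_SUBS_PER_LC):
    """Minimum number of LCs needed to spread target_subs across."""
    return math.ceil(target_subs / per_lc)

print(lc_sub_ceiling(2))         # MOD80 (2 NPUs): 64,000 -> hard-capped
print(lc_sub_ceiling(6))         # 36x10 (more NPUs): still 64,000 as validated today
print(min_line_cards(128_000))   # going beyond 64k subs needs at least 2 LCs -> 2
```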

regards

xander

Dear Xander,

Your comments were really helpful. 

Looking into the hardware architectures of the 36x10G, 24x10G and MOD160, I realized they have 6, 8 and 4 NPUs respectively. However, it is a bit strange that the 36x10G has fewer NPUs than the 24x10G. (Source: Cisco Live! BRKARC-2003, 2013)

1- Can I simply conclude that they support 192k, 256k and 128k subs? Do the tests confirm that?

2- I want to put two ASR9006s as BNGs in two different sites and connect them via nV, each supporting 512K subs, fully redundant. Having obtained the cluster licenses, do I need to worry about anything else?

 

Regards

Amin  

 

The 24x10 has 3x10G per NPU and the 36x10 has 6x10G per NPU, so it is effectively double loaded. The choice here is between price and performance.

The 24x10 has more horsepower, but is price-wise more expensive per port. The 36x10 is denser, with a lower price per port, but it also has less horsepower, as the full 45 Mpps per direction is now shared over 6 instead of 3 interfaces.

A linecard supports 64k subs total, regardless of the number of NPUs it has. The limitation is not the NPU (an NPU can hold max 32k subs), but the LC CPU power, which has been tested up to 64k but can go to 128k. So to reach, say, 512k subs, you need to spread the subs over 4 LCs.
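Putting those numbers side by side (a rough Python sketch; the 45 Mpps per NPU per direction and the ports-per-NPU counts are the figures quoted just above, and 128k per LC is the "can go to" LC CPU figure, not the validated one):

```python
import math

MPPS_PER_NPU = 45                        # per direction, as quoted above
PORTS_PER_NPU = {"24x10": 3, "36x10": 6}

for card, ports in PORTS_PER_NPU.items():
    print(f"{card}: ~{MPPS_PER_NPU / ports:.1f} Mpps per direction per 10G port")
# 24x10: ~15.0 Mpps per port; 36x10: ~7.5 Mpps per port (half the headroom)

# The per-LC ceiling is the LC CPU, not the NPU: tested to 64k, can go to 128k.
MAX_SUBS_PER_LC = 128_000
print(math.ceil(512_000 / MAX_SUBS_PER_LC), "LCs to spread 512k subs over")   # 4
```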

Cluster wouldn't increase the scale; it only provides stateful redundancy. Note that the 512k subs is an RP scale limitation in that case, and since the control plane is fully shared between the 2 nodes in a cluster, you don't have more control-plane power.

Note also that with 512k subs you will want LC-based subscribers (e.g. terminated on gig or 10gig subinterfaces and not bundles). Bundle interfaces will pull control of the session to the RP; physical (sub)interfaces will leave the control on the LC CPU.

Cluster is particularly useful when you have bundles whereby one member is on one node and the other member is on the other node.

So what I am trying to say is that at 512k scale, you don't want to use cluster technology anymore; you resort to stateless redundancy instead.

regards

xander