
ASR9000 Architecture

k_abuasal
Level 1

Hi,

I have the following questions regarding the ASR9000 family architecture:

 

1- As per Cisco documentation and presentations, the ASR9922 has 1.2 Tbps per slot (bidirectional), meaning the full chassis has 48 Tbps. How can this capacity be achieved if each line card is connected to each fabric plane with 110 Gbps links? Each line card then has 770 Gbps to the fabric, so the total device capacity should be 770 Gbps * 20 = 15.4 Tbps (bidirectional).

 

2- As per Cisco presentations, the ASR9000 and ASR9900 AC power supply configuration is N+N and the DC power supply configuration is N+1. Does that mean the chassis can only tolerate one DC power supply failure, but half of the AC power supplies failing?

 

3- How can the ASR9904 chassis achieve 770 Gbps per slot, when the RSP440 and the 2nd generation line cards it uses are the same ones used on the ASR9010 and ASR9006, which can only provide 440 Gbps per slot? Does that mean the limitation is in the ASR9010 and ASR9006 chassis, and that the RSP440 and 2nd generation line cards can support 770 Gbps per slot?

 

4- A Cisco presentation says that multicast traffic passes through a different plane than unicast traffic. Does that mean multicast traffic passes through other fabric chips (Sacramento) which are on the RSP440?

 

5- For multicast traffic on the ASR9K series, what is the difference between the MGID on the line cards and the FGID on the fabric?

 

 

7 Replies

smilstea
Cisco Employee

Hello,

 

1.

I recommend taking a look at this document

http://www.cisco.com/c/en/us/support/docs/routers/asr-9000-series-aggregation-services-routers/117718-technote-asr9000-00.html

 

The 1.2 Tbps is the chassis limit; as newer LCs and FCs come out, we will be able to utilize more of the potential bandwidth.
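To put that in numbers, here is a quick back-of-the-envelope sketch in Python using only the figures quoted in the question (110 Gbps per fabric link, 7 links per line card, 20 LC slots); these are the question's assumptions, not official specifications.

# Rough sketch only; the per-link and per-slot figures come from the question above.
gbps_per_fabric_link = 110        # one link from a line card to one fabric plane
fabric_links_per_lc = 7           # 7 x 110 Gbps = 770 Gbps per line card today
lc_slots = 20                     # ASR9922 line card slots

per_slot_now = gbps_per_fabric_link * fabric_links_per_lc   # 770 Gbps
chassis_now = per_slot_now * lc_slots / 1000                # 15.4 Tbps
chassis_ceiling = 1.2 * 2 * lc_slots                        # 48 Tbps if 1.2 Tbps/slot is counted per direction

print(per_slot_now, chassis_now, chassis_ceiling)
# 48 Tbps is the chassis/backplane ceiling; 15.4 Tbps is what the fabric links
# available at the time of this thread actually deliver.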

 

 

2.

For power supplies you need N power modules to power all the HW in a particular router.

 

What if a single power supply fails? In order to prevent the system from shutting down a card due to lack of power, it's recommended to implement N+1 power supplies.

 

What if your A or B feed fails?
For DC you have two power connectors on each power supply, so you can connect both feeds; if feed A goes out, feed B can handle the load, and therefore N+1 is okay for DC.
For AC you can only use a single feed, so if feed A goes, all power supplies connected to feed A will no longer work. For feed-level protection you therefore need N+N protection.
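A tiny sketch of that sizing logic; the wattage numbers below are placeholders purely for illustration, only the N / N+1 / N+N relationships come from the explanation above.

import math

# Placeholder figures for illustration only.
chassis_load_watts = 6000     # hypothetical total draw of the installed HW
module_output_watts = 2100    # hypothetical output of one power module

n = math.ceil(chassis_load_watts / module_output_watts)  # N: just enough modules to power the box
dc_modules = n + 1    # DC: each module takes both feeds, so N+1 covers a module or a feed failure
ac_modules = n * 2    # AC: each module takes one feed, so feed-level protection needs N+N

print(n, dc_modules, ac_modules)   # 3 4 6 with these placeholder numbers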

 

3.

Correct, the 9904 has an enhanced backplane and a higher theoretical limit than the 9010 or 9006, which have been out for considerably longer.

 

4.

I am not sure where the term plane comes into effect here; can you share a link to the presentation that shows this? Unicast and multicast packets all go through the same ASICs, but we have different queues for these traffic types.

 

5.

You should never need to worry about the MGID and FGID; these are calculated automatically.

 

HTH,

Sam

xthuijs
Cisco Employee

A bit more detail, if useful, on a few of the questions.

FGID is the fabric group ID; it tells the fabric which line cards to replicate the mcast traffic to.

MGID is the multicast group ID; it tells the forwarding ASICs on the LC which NPUs should receive the replicated traffic. Cisco Live session ID 2904 from both Orlando/2013 and San Francisco/2014 has some interesting detail on that if you would like to follow that programming.
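As a mental model only (the IDs and tables below are made up for illustration and are not how the hardware actually encodes them), the two levels of replication look like this:

# Conceptual illustration: FGID drives fabric-level replication (which LC slots),
# MGID drives LC-level replication (which NPUs on the receiving card).
fgid_to_lc_slots = {101: [0, 2, 5]}     # fabric copies this mcast flow to LC slots 0, 2 and 5
mgid_to_npus = {2001: [1, 3]}           # the receiving LC then copies it to NPUs 1 and 3

# Both tables are derived and programmed automatically by the system,
# which is why (as Sam notes) you never need to manage them yourself.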

The enhanced backplane that Sam is referring to on the 9904 exists simply because the RSPs in the 9904 have many "unused" fabric ports, since the number of LCs is limited. So what we did is wire those extra fabric ports to the 2 slots of the 9904; that is why we can offer more BW per slot on the 9904 vs the 9006/9010.
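In numbers, reusing the 110 Gbps per fabric link figure from the original question (an assumption here, not a datasheet value): the 9904 slots do not get faster links, they simply get more of them.

# Sketch only: link counts are inferred by dividing the quoted per-slot figures
# by the assumed 110 Gbps per-link rate from the question above.
gbps_per_link = 110
links_per_slot_9006_9010 = 440 // gbps_per_link   # 4 links -> 440 Gbps per slot
links_per_slot_9904 = 770 // gbps_per_link        # 7 links -> 770 Gbps per slot
print(links_per_slot_9006_9010, links_per_slot_9904)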

The DC modules have dual feeds; that is why there is N+1 redundancy there, as opposed to the AC supplies, which are single feed and therefore called 1:1 in case you are running 2 feeds of AC from 2 different sources.

 

xander

Hello,

I just need confirmation about the switching capacity per slot for the ASR 9010: if I insert two RSP440s into the chassis, the active-active switch fabric setup is done by default and doesn't need any extra configuration, and my 36x10G line cards should work at full switching capacity. Can you confirm?

Many thanks,

Sarmed

Hi Sarmed,

 

An ASR-9010 equipped with 2 x RSP440s will give you 440 Gbps (full-duplex) per line card slot.

The active-active switch fabric is enabled by default, and no configuration is required to achieve this.

The 36x10G line card (A9K-36X10GE-SE/TR) installed in the above configuration should give you line-rate performance per port.
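The line-rate claim is easy to sanity-check against the per-slot figure above:

# Sanity check: a fully loaded A9K-36X10GE needs less than the 440 Gbps slot budget.
ports, port_gbps = 36, 10
lc_demand = ports * port_gbps      # 360 Gbps of front-panel capacity
slot_budget = 440                  # Gbps per slot with 2 x RSP440 (from the reply above)
print(lc_demand <= slot_budget)    # True -> line rate per port is achievable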

 

Regards.

Note that the fabric chips of the RSP are always enabled and operate independently of the CPU/control plane. That means that even if the RSP falls back to rommon, the fabric chips are initialized and will still be forwarding. Only when you OIR the actual RSP is the fabric disabled (obviously).

When that happens, the FIAs on the LC all get an equal amount of back pressure to throttle the total BW per card back to 220G max.

Note that in the 99xx chassis we have extra fabric channels wired to the line cards, which means that you'll have 550G per slot per direction.
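Putting those two notes into one small sketch; the per-RSP figure is inferred from the 440G / 220G numbers quoted in this thread, not taken from a datasheet.

# Per-slot fabric bandwidth on a 9010 as a function of how many RSP440 fabric stages are forwarding.
per_rsp440_gbps = 220
print(2 * per_rsp440_gbps)   # 440G per slot with both RSPs' fabrics active
print(1 * per_rsp440_gbps)   # 220G per slot after one RSP is OIR'd (FIAs back-pressured)
# On the 99xx the extra fabric channels raise this to 550G per slot per direction.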

regards

xander

Many thanks for the clarification, straight to the point indeed.

Regards,

Sarmed

Hi xthuijs,

Thanks. What will the bandwidth per slot be if we use an ASR 9910 with 2 x A99-RSP-SE? And do we need to add more fabric cards to achieve line rate for new LCs such as the 4 x 100 GE or 12 x 100 GE? If we have all 5 x SFC-S cards, does that mean each line card will get 1150 Gbps?

 

The Cisco datasheet has the details below, but not much explanation of how to calculate them; my own attempt at the math follows the list.

 

  12 Tbps (non-redundant) switching capacity per ASR 9910 router
  11 Tbps (N+1 redundant) switching capacity per ASR 9910 router
  1610 Gbps (non-redundant) switching capacity per ASR 9910 line card slot
  1380 Gbps (N+1 redundant) switching capacity per ASR 9910 line card slot
  230 Gbps bidirectional switching capacity per RSP to each line card slot
  Control of up to seven switch fabrics (two located on the RSP and five on dedicated switch fabric cards)
  Offers traffic load balancing simultaneously across up to seven fabrics
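My back-of-the-envelope attempt at reconciling those figures, assuming the 230 Gbps per-fabric number is the building block (please correct me if this is wrong):

# Attempted reconciliation using only the datasheet figures quoted above.
per_fabric_per_slot = 230      # Gbps from each fabric (RSP or SFC) to a line card slot
fabrics_total = 7              # 2 on the RSPs + 5 dedicated switch fabric cards

print(fabrics_total * per_fabric_per_slot)         # 1610 Gbps per slot, non-redundant
print((fabrics_total - 1) * per_fabric_per_slot)   # 1380 Gbps per slot, N+1 redundant
print(5 * per_fabric_per_slot)                     # 1150 Gbps -- the 5 x SFC-only figure I asked about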

 

Regards,

Sumanta.
