I have the following questions regarding the ASR9000 family architecture:
1- As per Cisco documentation and presentations, the ASR9922 has 1.2 Tbps per slot (bidirectional), meaning the full chassis has 48 Tbps. How can this capacity be achieved if each line card connects to each fabric plane with 110 Gbps links? Each line card then has 770 Gbps to the fabric, so the total device capacity should be 770 Gbps * 20 = 15.4 Tbps (bidirectional).
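The questioner's arithmetic can be laid out explicitly. This is a minimal sketch using only the numbers stated in the question (7 fabric links of 110 Gbps per line card, 20 line-card slots), not authoritative Cisco figures:

```python
# Numbers taken from the question above, not from a Cisco spec sheet.
LINKS_PER_LC = 7       # fabric links per line card
GBPS_PER_LINK = 110    # bandwidth of each fabric link
LC_SLOTS = 20          # line-card slots in an ASR9922

per_slot_gbps = LINKS_PER_LC * GBPS_PER_LINK   # 770 Gbps per slot today
chassis_gbps = per_slot_gbps * LC_SLOTS        # 15,400 Gbps = 15.4 Tbps
```

This shows why current hardware delivers 15.4 Tbps rather than the 48 Tbps (1.2 Tbps x 2 directions x 20 slots) chassis ceiling quoted in the datasheet.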
2- As per Cisco presentations, the ASR9000 and ASR9900 AC power supply configuration is N+N and the DC configuration is N+1. Does that mean the chassis can only tolerate one DC power supply failure, but half of the AC power supplies?
3- How can the ASR9904 chassis achieve 770 Gbps per slot, when the RSP440 and the 2nd-generation line cards it uses are the same ones used on the ASR9010 and ASR9006, which only provide 440 Gbps per slot? Does that mean the limitation is in the ASR9010 and ASR9006 chassis, and that the RSP440 and 2nd-generation line cards can support 770 Gbps per slot?
4- As per a Cisco presentation, multicast traffic passes through a different plane than unicast traffic. Does that mean multicast traffic passes through other fabric chips (Sacramento), which are on the RSP440?
5- For multicast traffic on the ASR9K series, what is the difference between the MGID on the line cards and the FGID on the fabric?
I recommend taking a look at this document
The 1.2 Tbps is the chassis limit; as newer LCs and FCs come out, we will be able to utilize more of the potential bandwidth.
For power supplies, you need N power modules to power all the hardware in a particular router.
What if a single power supply fails? To prevent the system from shutting down a card due to lack of power, it's recommended to implement N+1 power supplies.
What if your A or B feed fails?
For DC you have two power connectors on each power supply, so you can connect both feeds; if feed A goes out, feed B can handle the load, and therefore N+1 is okay for DC.
For AC you can only use a single feed, so if feed A goes down, all power supplies connected to feed A will no longer work. For feed-level protection you therefore need N+N.
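The redundancy rules described above can be summed up in a small sketch. This is illustrative only (the function name and structure are mine, not a Cisco tool): DC modules are dual-feed, so one spare module covers both a module failure and a feed failure, while single-feed AC modules must be duplicated per feed:

```python
def power_modules_needed(n_required: int, supply_type: str) -> int:
    """Recommended number of installed power modules, given that
    N modules are needed to power all the hardware (illustrative)."""
    if supply_type == "DC":
        # Dual-feed modules: one spare covers a module or feed failure (N+1).
        return n_required + 1
    elif supply_type == "AC":
        # Single-feed modules: duplicate the full set across feeds (N+N).
        return 2 * n_required
    raise ValueError(f"unknown supply type: {supply_type}")
```

So a chassis that needs 3 modules to power its hardware would install 4 DC modules but 6 AC modules for equivalent feed-level protection.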
Correct, the 9904 has an enhanced backplane and a higher theoretical limit than the 9010 or 9006, which have been out for considerably longer.
I am not sure where the term "plane" comes into effect here; can you share a link to the presentation that shows this? Unicast and multicast packets all go through the same ASICs, but we have different queues for these traffic types.
You should never need to worry about the MGID and FGID; they are calculated automatically.
A bit more detail, if useful, on a few of the questions:
The FGID is the fabric group ID; it tells the fabric which line cards to replicate the multicast traffic to.
The MGID is the multicast group ID; it tells the forwarding ASICs on the LC which NPUs should receive the replicated traffic. Cisco Live session 2904 from both Orlando/2013 and San Francisco/2014 has some interesting detail on that, if you'd like to follow that programming.
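A conceptual model of the two-stage replication described above may help. This is purely illustrative (the dictionaries and function are mine, not actual IOS XR data structures): the FGID picks destination line cards at the fabric stage, then the MGID picks NPUs within each card:

```python
# Hypothetical tables for illustration only, not real XR state.
fgid_table = {100: {0, 3, 5}}                  # FGID -> destination LC slots
mgid_table = {                                 # (slot, MGID) -> NPUs on that LC
    (0, 200): {0, 1},
    (3, 200): {2},
    (5, 200): {0},
}

def replicate(fgid: int, mgid: int):
    """Yield (slot, npu) copies produced for one multicast packet."""
    for slot in fgid_table[fgid]:                          # fabric-stage copy
        for npu in mgid_table.get((slot, mgid), set()):    # LC-stage copy
            yield slot, npu

copies = sorted(replicate(100, 200))   # [(0, 0), (0, 1), (3, 2), (5, 0)]
```

The key point is the division of labor: the fabric only needs to know card-level destinations (FGID), and each card independently fans out to its own NPUs (MGID).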
The enhanced backplane that Sam is referring to on the 9904 exists simply because the RSPs in the 9904 have many "unused" fabric ports, since the number of LCs is limited. So what we did is wire those extra fabric ports to the 2 slots of the 9904; that is why we can offer more BW per slot on the 9904 vs the 9006/9010.
The DC modules have dual feeds, which is why there is N+1 redundancy there, as opposed to the AC supplies, which are single-feed and therefore called 1:1 in case you are running 2 feeds of AC from 2 different sources.
Just need a confirmation about the switching capacity per slot for the ASR9010: if I insert two RSP440s into the chassis, is the active-active switch fabric enabled by default with no extra configuration, and will my 36x10G line cards work at full switching capacity? Can you confirm?
An ASR9010 equipped with 2 x RSP440s will give you 440 Gbps (full duplex) per line card slot.
The active-active switch fabric is enabled by default and no configuration is required to achieve the same.
The 36 10G line card (A9K-36X10GE-SE/TR) installed in the above configuration should give you line rate performance per port.
Note that the fabric chips on the RSP are always enabled and operate independently of the CPU/control plane. That means that even if the RSP falls back to ROMMON, the fabric chips are initialized and will still be forwarding. Only when you OIR the actual RSP is the fabric disabled (obviously).
When that happens, the FIAs on the LC all get an equal amount of backpressure to throttle the total BW per card back to 220G max.
Note that in the 99xx chassis we have extra fabric channels wired to the line cards, which means you'll have 550G per slot per direction.
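The 220G figure above follows from a simple assumption, sketched here for clarity: the 440 Gbps per-slot fabric bandwidth is spread evenly across the two RSPs, so pulling one RSP halves what the FIAs can push:

```python
# Illustrative only: assumes fabric bandwidth splits evenly across both RSPs.
DUAL_RSP_GBPS = 440                    # ASR9010 with 2 x RSP440, per slot

single_rsp_gbps = DUAL_RSP_GBPS // 2   # 220G: what the FIAs throttle back to
```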
Thanks. What will the bandwidth per slot be if we use an ASR9910 with 2 x A99-RSP-SE? Accordingly, do we need to add more fabric cards to achieve line rate for newer LCs such as the 4x100GE or 12x100GE? If we have all 5 x SFC-S cards, does that mean each line card will get 1150 Gbps?
The Cisco datasheet has the details below, but not much explanation of how to calculate them.