05-28-2013 01:52 PM - edited 08-29-2017 12:19 PM
With Xander Thuijs
During the event, Cisco expert Xander Thuijs provides an in-depth overview of the Cisco ASR 9000 Series Aggregation Services Routers. He also walks through the packet path and explains troubleshooting best practices and tips, and discusses quality of service (QoS) implementation and forwarding architecture.
Xander Thuijs is a principal engineer for the Cisco ASR 9000 Series and Cisco IOS XR product family at Cisco. He is an expert and advisor in many technology areas, including IP routing, WAN, WAN switching, MPLS, multicast, BNG, ISDN, VoIP, Carrier Ethernet, system architecture, network design, and many others. He has more than 20 years of industry experience in carrier Ethernet, carrier routing, and network access technologies. Xander holds a dual CCIE certification (number 6775) in service provider and voice technologies. He has a master of science degree in electrical engineering from the Hogeschool van Amsterdam.
The following experts helped Xander answer some of the questions asked during the session: Aleksandar Vidako, Sadananda Phadke, and Krishna Eranti. Aleksandar, Sadananda, and Krishna are members of the ASR9000 Escalation team and have vast knowledge of the platform.
Webcast-related links:
A. This is answered in the Ask the Expert Event.
A. No, super-framing is implemented in the hardware of the Fabric Interface ASIC (FIA). Also, there is no show command that provides the number of packets aggregated into super-frames. The question has been raised before whether it makes sense from a troubleshooting point of view to create counters. Cisco determined that this was not value-added, so that ability is not available. Super-framing improves the efficiency of fabric forwarding and does not have an impact on performance.
A. That depends on the QoS configuration. Each network processing unit (NPU) has frame memory attached to it, and this frame memory is where packet buffering occurs. The Trident -L and -B cards can buffer a 50ms burst of traffic; the Trident -E card can buffer a maximum 150ms burst of traffic. The Typhoon base line card has about three times as much as the Trident, so it can buffer 300ms per NPU, served first-come, first-served. So, if on a Trident card there is one interface on the NPU, that interface can use the full 50ms or 150ms of buffering. If you create two sub-interfaces, then the 150ms is shared by those two sub-interfaces, and it then depends on the queue-limit configuration in the QoS policy how much of that buffer is assigned to each interface. You can allow oversubscription, but then you run into packet anarchy.
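As a quick sanity check, the per-NPU burst figures above (50ms for Trident -L/-B, 150ms for Trident -E, 300ms for Typhoon) can be converted into bytes of frame memory needed at line rate. This is just the arithmetic implied by the answer; the 10 Gbps port speed is an illustrative assumption.

```python
# Convert a buffering window (ms) at a given port speed (Gbps) into bytes.
# Burst-duration figures come from the answer above; the port speed is
# only an assumed example value.

def buffer_bytes(burst_ms: float, port_gbps: float) -> int:
    """Bytes of frame memory needed to absorb burst_ms of traffic at line rate."""
    return int(burst_ms / 1000 * port_gbps * 1e9 / 8)

for card, ms in [("Trident -L/-B", 50), ("Trident -E", 150), ("Typhoon", 300)]:
    print(f"{card}: {buffer_bytes(ms, 10) / 1e6:.1f} MB at 10 Gbps")
```

For example, 150ms at 10 Gbps works out to 187.5 MB of buffering, which shows why the frame memory has to be shared carefully once you configure multiple sub-interfaces.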
A. This is answered in the Ask the Expert Event.
A. NBAR is not supported.
A. Yes, both fabric connections from the LC to the RSPs are used to send data at the same time in a load-balancing fashion.
A. This gives 1:1 redundancy on feed failure (like the AC modules). You need to ensure in this mode that if you lose one feed, the available power bricks can still provide enough power for the cards.
A. This is answered in the Ask the Expert Event.
A. There are no plans for the EOL of the RSP2 as of now. The extra RAM is for control plane processing, the routing table, and so on.
A. Yes. The Fabric of both RSPs can be used simultaneously to forward traffic. It is active/active Fabric.
A. Multicast is replicated on the MOD160 the same as on other types of cards. Modular port adapters (MPAs) do not make L2/L3 forwarding decisions. In general, multicast is replicated by the fabric (to egress LCs) and, within the egress LC, by the Fabric Interface ASIC (FIA), the bridge (in the case of Trident cards), and the network processor for egress ports.
A. This is not required. If a packet needs to be dropped due to virtual queue index (VQI) overflow (or flow control from the egress LC), it will be dropped in the ingress FIA itself. High-priority traffic is always preserved as marked on the ingress interface. VQI flow-off happens on a per-priority basis.
A. The SIP-700 type of line card supports shared port adapters (SPAs).
A. The decision is made on the ingress LC based on a hash computation derived from packet header contents.
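The principle can be illustrated with a small sketch: hashing header fields keeps all packets of one flow on the same path while spreading different flows across the available paths. This is not the actual ASR 9000 microcode hash; the field selection and CRC function here are assumptions for illustration only.

```python
# Illustrative ECMP/bundle member selection: hash the 5-tuple, then take
# it modulo the number of available paths. NOT the real platform hash.
import zlib

def pick_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: int, num_paths: int) -> int:
    """Return the index of the egress path chosen for this flow."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % num_paths

# Packets of the same flow always hash to the same member,
# which preserves packet ordering within a flow.
a = pick_path("10.0.0.1", "10.0.0.2", 1234, 80, 6, 4)
b = pick_path("10.0.0.1", "10.0.0.2", 1234, 80, 6, 4)
print(a == b)
```

The key property is determinism per flow: reordering is avoided because the hash inputs are identical for every packet of the flow.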
A. In per-prefix allocation, the label is directly associated with a forwarding adjacency, which cannot be on a BVI. Therefore, a lookup must be enforced after the MPLS label is popped; for that reason, per-CE or per-VRF labels are needed to force that extra lookup.
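A hedged IOS XR sketch of setting the per-VRF label allocation mode described above (the AS number and VRF name are placeholders; verify the exact command availability for your release):

```
router bgp 65000
 vrf CUSTOMER_A
  address-family ipv4 unicast
   ! Allocate one label per VRF so the egress PE performs an IP lookup
   ! after the label pop, which is required when the CE faces a BVI.
   label mode per-vrf
  !
 !
!
```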
A. FAT PW is more useful in IP routers, since without FAT PW, even if there are multiple ECMP links, the L2 traffic uses only one ECMP link.
Note that on a PE router, even if you use FAT PW, the egress path is still selected based on the inner label before the FAT label is inserted, so the decision is made on the PW label.
A. Cisco does not plan to allow a CLI command to disable any drop counter.
A. This is answered in the Ask the Expert Event.
A. This is answered in the Ask the Expert Event.
A. No. ISSU is supported starting with XR 4.3.0.
A. The biggest super-frame size is 9K.
A. No, Address Resolution Protocol (ARP) tables, and hence the adjacency tables, are local to a line card. The ingress card needs to know only that the prefix is associated with a certain output interface. Adjacency lookup and L2 rewrites are performed on the egress card. MAC addresses have the egress LC/ports on which they were learned associated with them.
For L2VPN, all learned MACs in every bridge domain are sent to all NPUs' L2 tables.
A. This is answered in the Ask the Expert Event.
A. No, each LC only keeps the ARP and Adjacency entries of addresses attached to the line cards. This is not exchanged between line cards.
A. It will be line-rate if it is put into the MUX160, because the MUX160 provides two NPUs. You can then do one NPU per 4x10G out of that MPA. This MPA is scheduled to be released along with XR 4.3.1 in May 2013.
A. In the ASR 9001, there are two NPUs, and each NPU serves two of the four on-board 10G ports. Since Typhoon NPU can do about 16G and 44 million packets per second, that basically leaves each NPU, beyond its two fixed 10G ports, with 40G of bandwidth still available for the modular bay. In the bay you can use 1x40G, 4x10G, 2x10G, or 20x1G, and you would not oversubscribe the NPUs.
A. It is not oversubscribed, but just like the Mod80 LC, it is dependent on the features you enable; you may not get the same performance.
With a 4-port 10G MPA you have 6x10G per NPU, the same as on the 36x10G LC.
A. Yes, you can use different generation of line cards in the same chassis.
A. Cluster is a system-wide functionality. From a logical perspective it is still a single router. VRFs are configured in the same way as on a single chassis configuration.
A. The 8T/L card is an oversubscribed card; it will be line-rate for larger packet sizes and limited features with dual RSPs.
The second RSP can be used to reduce the oversubscription level to 1:2. With a single RSP the bandwidth is limited to 46G; with dual RSPs, to 92G.
The NPU is limited to about 15G, so with a single RSP the bottleneck is the fabric links, while with dual RSPs the limit is caused by the NPU.
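The oversubscription figures above can be checked with some back-of-envelope arithmetic. The values (8x10G front panel, 46G/92G fabric bandwidth) are taken from the answer; the calculation itself is just illustrative.

```python
# Back-of-envelope oversubscription math for the 8-port 10G card,
# using the bandwidth figures quoted in the answer above.
front_panel_gbps = 8 * 10   # 8 x 10G front-panel ports
fabric_single_rsp = 46      # usable fabric bandwidth with one RSP (from answer)
fabric_dual_rsp = 92        # usable fabric bandwidth with two RSPs (from answer)

print(f"single RSP: {front_panel_gbps / fabric_single_rsp:.2f}:1 oversubscribed")
print(f"dual RSP:   {front_panel_gbps / fabric_dual_rsp:.2f}:1 oversubscribed")
```

With a single RSP, 80G of front-panel capacity against 46G of fabric gives roughly 1.74:1, so the fabric is the bottleneck; with dual RSPs the fabric (92G) exceeds the front panel, which is why the per-NPU limit becomes the constraint instead.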
A. This is answered in the Ask the Expert Event.
A. Yes, all the line cards support IP over DWDM. CWDM can also be accomplished, since there are different optics: you can use colored optics with a fixed wavelength, or tunable optics for which you can configure the wavelength. We also have the ability to do G.709 FEC; however, that requires a software license. It is supported on all Typhoon line cards, but only on the Trident line cards A9K-2T20G and 8x10.
A. This is answered in the Ask the Expert Event.
A. This is answered in the Ask the Expert Event.
A. Generally, the two nodes of the cluster do what we call rack locality: the local member of the bundle is preferred, or the local ECMP path down to the CE. In such cases, the inter-chassis bandwidth requirements are very minimal. Only single-homed devices use the Inter Rack Link (IRL), when packets are received on the peer node. So the sizing depends on the number of CEs and their bandwidth requirements, and on how many of them are single-homed, which basically constitutes your IRL requirement. It is advisable to have a minimum of two 10G links for redundancy, because the IRL also carries keepalives used to detect liveness.
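The sizing logic described above can be sketched as a small helper: dual-homed CEs stay rack-local, so the IRL mainly has to carry single-homed traffic that lands on the peer chassis, with a floor of two links for redundancy. The traffic figures in the usage lines are invented placeholders, not recommendations.

```python
# Hedged sketch of IRL sizing per the answer above: size for the
# single-homed traffic that must cross chassis, but never fewer than
# two members (redundancy + keepalives).
from math import ceil

def irl_links_needed(single_homed_gbps: float, link_gbps: float = 10,
                     min_links: int = 2) -> int:
    """Number of IRL members for a given single-homed traffic load."""
    return max(min_links, ceil(single_homed_gbps / link_gbps))

print(irl_links_needed(35))   # 35G of single-homed traffic -> 4 x 10G links
print(irl_links_needed(5))    # light load -> still 2 links for redundancy
```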
A. There is a tremendous amount of interest in nV edge clustering, as it is a very popular concept. The clustering is rather new, but a lot of customers are deploying it, especially in the US region, where many providers leverage this capability. This is definitely something worth considering and is embraced by the community.
A. Inter-chassis traffic for downstream: no.
A. All ASR 9000 Series chassis, ASR 9922, ASR 9010, ASR 9006, and ASR 9001, can work as an nV host.
When you said "Since Typhoon NPU can do about 16G and 44 million packets per second," I guess you meant 60G. Is that correct?
Thanks