With Xander Thuijs
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn about the Cisco ASR 9000 Series Aggregation Services Routers with Cisco expert Xander Thuijs. The Cisco ASR 9000 Series Aggregation Services Routers product family offers significant added value compared to prior generations of carrier Ethernet routing offerings. The Cisco ASR 9000 Series is an operationally simple, future-optimized platform using next-generation hardware and software. The ASR 9000 platform family is composed of the Cisco ASR 9010 Router, the Cisco ASR 9006 Router, the Cisco ASR 9922 Router, the Cisco ASR 9001 Router, and the Cisco ASR 9000v Router.
This is a continuation of the live Webcast.
Xander Thuijs is a principal engineer for the Cisco ASR 9000 Series and Cisco IOS-XR product family at Cisco. He is an expert and advisor in many technology areas, including IP routing, WAN, WAN switching, MPLS, multicast, BNG, ISDN, VoIP, carrier Ethernet, system architecture, network design, and many others. He has more than 20 years of industry experience in carrier Ethernet, carrier routing, and network access technologies. Xander holds a dual CCIE certification (number 6775) in service provider and voice technologies. He has a master of science degree in electrical engineering from the Hogeschool van Amsterdam.
Remember to use the rating system to let Xander know if you have received an adequate response.
Xander might not be able to answer each question because of the volume expected during this event. Remember that you can continue the conversation on the Service Providers community XR OS And Platforms shortly after the event. This event lasts through Friday, May 24, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
I have noticed a distinct lack of data from Cisco about the performance of the ASR 9000 routers. This has posed a challenge for me, as I do not really know at what kinds of loads I should be looking to step up from something like an ASR 1000 or 7600 to the ASR 9000.
Could you give us some hard numbers regarding how much data the router can really push?
Gregory, I assume you mean pps performance, right? OK, that number is hard to give, as it is very much feature-set dependent. I have a hopefully helpful write-up here: https://supportforums.cisco.com/docs/DOC-32025
The pps performance per NPU also depends on the software release in question, because we are constantly fixing up software paths for certain switching scenarios where we can.
In any case, to give you some ballpark numbers:
Trident NPU: ~17 Mpps per direction, ~15G bandwidth limitation.
Typhoon NPU: ~44 Mpps per direction, ~60G bandwidth limitation.
For instance, in terms of pps hits:
An ingress ACL on Typhoon gives about a 28% performance hit.
ABF (access-list based forwarding) gives about a 32% performance hit.
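To put those ballpark numbers together: assuming the feature hits compound multiplicatively (an assumption; the thread does not say how multiple features combine), a rough back-of-the-envelope estimate looks like this:

```python
# Rough effective-throughput estimate per Typhoon NPU under feature load.
# The baseline (~44 Mpps) and hit percentages (~28% ingress ACL, ~32% ABF)
# are the ballpark numbers quoted above, not datasheet values.

TYPHOON_BASE_MPPS = 44.0

def effective_mpps(base_mpps: float, *hits: float) -> float:
    """Apply each feature's fractional performance hit multiplicatively."""
    rate = base_mpps
    for hit in hits:
        rate *= 1.0 - hit
    return rate

print(round(effective_mpps(TYPHOON_BASE_MPPS, 0.28), 1))        # ingress ACL only: 31.7
print(round(effective_mpps(TYPHOON_BASE_MPPS, 0.28, 0.32), 1))  # ACL + ABF: 21.5
```

So even two heavy features still leave the NPU north of 20 Mpps in this rough model; always verify against the write-up linked above for your release.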
The capacity for ASR9000 with the RSP440 is 440G per slot.
Next-generation NPUs and fabric will go to the extent of providing an easy 6x100G card at line rate.
Choosing between the 7600 and A9K? You'll definitely want to pick the ASR 9000 when you have plans for high-density 10G aggregation, or 40GE or 100G needs. IOS-XR also provides a lot of robustness and improvements over classic IOS.
If Watts per Gig is a concern, then A9K is also the right choice.
Did I address the main concerns/questions in terms of performance or did I leave something open?
This is a crosspost from a different forum, but I'll give it a try.
We are building a DCI with VPLS. I was wondering how MAC address learning works after a vMotion. The minimum MAC aging timeout that can be configured on the VPLS devices (ASR 9000) is 120 seconds, so only after 120 seconds does the original MAC entry time out and the MAC address get switched to the correct site. I know VMware sends a gratuitous ARP after a vMotion, but I can't find confirmation that the VPLS routers will update their MAC entries based on this gratuitous ARP. Can someone confirm? I can't be the first to do a vMotion on a VPLS network.
When we see a packet, whatever it is, with a source MAC that we know on a different port, we instantiate a MAC move.
This operation can be policed and prevented if necessary.
The gratuitous ARP, if it carries the right source MAC, will do that for us.
If there is an event that causes a MAC flush, e.g. an STP convergence, we withdraw all the MACs from the bridge domain and send a VPLS MAC withdrawal out the pseudowires. This results in (temporary) flooding until the MACs are learned again.
Xander, thanks for the quick response. The following was confusing me:
A MAC address in the MAC table is considered valid only for the duration of the MAC address aging time. When the time expires, the relevant MAC entries are repopulated. When the MAC aging time is configured only under a bridge domain, all the pseudowires and attachment circuits in the bridge domain use that configured MAC aging time.
A bridge forwards, floods, or drops packets based on the bridge table. The bridge table maintains both static entries and dynamic entries. Static entries are entered by the network manager or by the bridge itself. Dynamic entries are entered by the bridge learning process. A dynamic entry is automatically removed after a specified length of time, known as aging time, from the time the entry was created or last updated.
If hosts on a bridged network are likely to move, decrease the aging-time to enable the bridge to adapt to the change quickly. If hosts do not transmit continuously, increase the aging time to record the dynamic entries for a longer time, thus reducing the possibility of flooding when the hosts transmit again.
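For reference, the aging time the quoted documentation describes is set per bridge domain in IOS-XR l2vpn configuration mode; a minimal sketch (bridge group and domain names are hypothetical, and the allowed minimum depends on release):

```
l2vpn
 bridge group DCI
  bridge-domain VPLS-BD
   mac
    aging
     time 300
    !
   !
  !
 !
!
```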
The bold sentences made me think the MACs first had to time out on the old PW/AC before they could be learned via a different PW/AC.
Ah, I see where the confusion comes from! Yeah, that is correct, but not the complete picture, because what this document is not talking about is the "mac move" concept. This can be controlled by "mac security". If we see a known MAC in a BD on a different EFP (l2transport interface in the same BD), then we can either relearn the MAC on the new port and flush the old "binding", shut down the EFP, or drop the packet. That is configured with mac security under the bridge-domain config, placed in the l2vpn config mode. By default we will allow the MAC move, and that can be triggered by that gratuitous ARP, so you should be fine.
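As a sketch of that knob (bridge group/domain names are hypothetical, and the exact action keywords can vary by release), MAC security is enabled under the bridge domain in l2vpn configuration mode, e.g.:

```
l2vpn
 bridge group DCI
  bridge-domain VPLS-BD
   mac
    secure
     action shutdown
     logging
    !
   !
  !
 !
!
```

With "action shutdown" a violating MAC move shuts the offending EFP; other actions allow logging or dropping instead, matching the relearn/shutdown/drop choices described above.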
You will want to control MAC moves, however. Every time we learn a MAC, we send a "copy" to ALL NPUs in the system to inform them of the new MAC (as seen in the MAC_NOTIFY NP counter). These packets are processed and dropped, but obviously consume pps. Since all NPUs have the same FIB and MAC table, regardless of whether they need it or not, such updates can affect performance unnecessarily; hence the need to control MAC moves.
(The concept described here is known as hardware MAC learning, which is awesome and fast, but has a gotcha too, as described.)
Thanks for clarifying this. One last question: we use ingress HSRP filtering on the N7K devices at the different end sites. This will result in the same HSRP MACs being learned by the ASR devices on both ends, which, based on your post, is an unwanted situation. Would an ACL filtering the HSRP messages on the incoming EFP prevent the ASR from learning the MAC and copying it to all other ASR devices?
The virtual MAC will be programmed as active only in the active HSRP router's MAC filtering table. That is from an L3 perspective, so I am assuming you have a Bridge Domain with a BVI on which you run HSRP.
The EFP's that are in that BD, part of that hsrp enabled BVI, will have that (L3) mac filter for the vmac of HSRP.
(This is, by the way, why you can't reuse HSRP group IDs on the same NPU: overlapping groups use the same vMAC, and if they fail over independently of each other, the MAC filtering becomes inconsistent. The group that was still active on the node that had the failover to the peer node, and hence had the vMAC removed, will fail.)
I think I am getting carried away here with my answer, but the short answer to your question directly is that the ACL comes BEFORE MAC learning. So if we deny a packet with an ACL, then even though that MAC is new to us, we will not learn it in the BD. The ACL is applied to the EFP in the bridge domain. This is irrespective of whether an L2 or L3 ACL is used.
You are indeed getting a bit carried away. The ASR is only responsible for L2 transport with VPLS; the HSRP/L3 function is on the Nexus 7000 devices. So I will simply add an ACL on the incoming EFP to filter out the HSRP messages so they will not be flooded over the PWs.
Thanks for the extensive replies and the enthusiasm!
Haha, sorry about that! Yeah, you can definitely add an ACL to filter the HSRP messages, and that will prevent the MAC from being learned based on those filtered packets. All good!
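A sketch of such a filter, assuming HSRP version 1 (hellos go to 224.0.0.2, which maps to multicast MAC 0100.5e00.0002; HSRPv2 uses 224.0.0.102 / 0100.5e00.0066 instead). The ACL name and interface are hypothetical, and the exact syntax may differ per release:

```
ethernet-services access-list DENY-HSRP
 10 deny any host 0100.5e00.0002
 20 permit any any
!
interface GigabitEthernet0/1/0/0.100 l2transport
 encapsulation dot1q 100
 ethernet-services access-group DENY-HSRP ingress
!
```

Because the ethernet-services ACL is evaluated before MAC learning on the EFP, the HSRP vMAC is never learned in the bridge domain and never flooded over the pseudowires.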
When a packet goes from one port to another port on the same LC, even on the same NPU, why does this packet have to travel to the fabric on the RSP? Can't the NPU do this task itself?
Theoretically, yes, that is possible; however, we want to use the central fabric because then the arbiters know the load per entity. If traffic were locally switched, the fabric would have no notion of it and might send more traffic down to an interface or NPU than it can handle.
I already posted the question in Other Service Provider Subjects, but now I see that this is the place for ASR questions...
I will copy/paste my questions here:
I have two questions regarding the ASR 9010 in a dual-chassis configuration, with dual RSPs per chassis and fully redundant connections between them.
The first question: when I upgraded the ASR from 4.3.0 to 4.3.1 and activated the installation packages:
RP/0/RSP0/CPU0:router (admin)#install activate disk0:*4.3.1* sync
the router went down for about ten minutes... The upgrade procedure states:
Note: The router will reload at the end of activation to start using the new packages. This operation will impact traffic. Typically this operation may take at least 20 minutes to complete.
Is there any way to upgrade the router in a dual-chassis configuration without any impact on traffic forwarding?
The second question: when can we expect an upgrade that will enable EtherChannel on the 9000v satellites, given that we are using EtherChannel from the ASR to the 9000v for redundancy?
Cluster ISSU (in-service software upgrade) is being worked on. One of the interim steps will be to separate/isolate one of the nodes of the cluster, upgrade it, bring it back into service, and have the other one upgraded by the now-new active node.
This still results in downtime, but not as much as right now.
More work is being done on this, by the way.
As for bundle-over-bundle on the satellite, that is at this point an IOS-XR 5.1 deliverable. It's definitely on the roadmap.