This document provides a guide on how to use the satellite (ASR9000v) with the ASR9000 and ASR9900 series routers. It discusses what you can and cannot do, how to verify the satellite operation, and common use cases.
This document is written assuming that software release 5.1.1 or later is used.
Satellite is a relatively new technology that was introduced in XR 4.2.1. It provides a great and inexpensive way to extend your 1G ports by using a port extender that is completely managed from the ASR9000. The advantage is that while you may have 2 devices, there is 1 single entity to manage: all the satellite ports show up in the XR configuration of the ASR9000.
Another great advantage of the Satellite is that you can put it in remote locations, miles away from the ASR9000 host!
Although there is a limit to the number of satellites you can connect to an ASR9000 (cluster), the general Satellite concept on the ASR9000 is shown in this picture:
The physical connections are very flexible. The link between the Satellite and the ASR9000 is called the ICL or "Inter Chassis Link".
This link transports the traffic from the satellite ports to the 9000 host.
In the ASR9000 host configuration you define how the satellite ports are mapping between the ICL and the satellite ports.
You can statically pin ports from the Satellite to a single uplink (meaning there is no protection: when that uplink fails, those satellite ports become unusable), or you can bundle uplinks together and assign a group of ports to that bundle. This provides redundancy, but with the limitation that satellite ports using an uplink bundle can't be made part of a bundle themselves.
We'll talk about restrictions a bit later.
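As a minimal sketch of static pinning (the interface numbers, satellite ID 100 and Loopback10 are placeholder assumptions, following the same shape as the examples later in this document), the first 22 access ports are pinned to a single dedicated 10G uplink:

interface TenGigE0/2/0/0
 ipv4 point-to-point
 ipv4 unnumbered Loopback10
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-21
  !
 !
!

If TenGigE0/2/0/0 fails, the host-side ports GigabitEthernet100/0/0/0-21 go down with it; only a bundle ICL avoids that.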
In the picture below you see Access Device A1 connecting with a bundle that uses an uplink (green) to LC-2 of the 9k host.
A second satellite has all of its ports mapped to a bundle ICL.
Note that no bandwidth constraints are enforced, so theoretically you can have a 2-member ICL bundle with 30 Satellite ports mapped to it, but that would mean there is oversubscription.
While the ASR9000v/Satellite is based on the Cisco CPT-50, you cannot convert between the 2 devices by loading different software images.
You can't use the 9000v as a standalone switch; it needs the ASR9000 host.
Visual differences include that the 9000v starts the port numbering at 0, where the CPT starts at 1. Also, the CPT has 4 different power options
and the ASR9000v only 3: AC-A, DC-A and DC-E (A for ANSI, E for ETSI).
Satellite packet format over L1 topologies looks like this; there is a simple sneaky dot1q tag added which we call the nV tag:
In L2 topologies, such as simple ring, we use dot1ad.
There is a license required to run the ASR9000v. There are 3 licenses, for 1, 5, or 20 Satellites per 9k host, named A9K-NVSAT1-LIC, A9K-NVSAT5-LIC and A9K-NVSAT20-LIC respectively.
Licenses are not hard enforced, meaning the system will still work even though a license may not be present; however, you are urged to obtain the proper license, as syslog messages will report the "violation of use".
Note when using simple ring, a host license for each satellite is needed on each host. E.g. a simple ring with three satellites requires six A9K-NVSAT1-LIC licenses.
A variety of optics are supported on the ASR9000v; they may not always be the same as on the ASR9000. Reference this link for the supported optics for the ASR9000/9000v.
When using Tunable optics for the 9000v, pay attention to the following:
(*) Note: for the tunable optic on the ICL you need to set the wavelength the first time via the 9000v shell, on insertion of the optic, before shipping it to the destined location.
Handling of Unsupported Optics
For the 9000v ports we do not support the 'service unsupported-transceiver' or 'transceiver permit pid all' commands.
The satellite device simply flags an unsupported transceiver without disabling the port or taking any further action. As long as the pluggable is readable by the satellite, the SFP may work, but there are no additional 'hacks' such as hidden commands beyond what is shown as supported in the tables at the supported-optics reference link.
The following software and hardware requirements exist for the ASR9000v. Although support started in XR 4.2.1, my personal recommendation is to go with XR 4.3 (the latest at the time of writing), as many of the initial restrictions from the first release have been lifted:
Minimum version is XR 4.2.1
Note: If the wrong port is used for the ICL, the link will stay down on the 901. Once the correct ICL port is used and the 9K is configured, the 901 needs to be reloaded for the link to come up and the 901 to become recognized as a satellite.
Generally speaking, all features supported on physical GigE ports of the ASR9K are also automatically supported on the GigE ports of the satellite switch, such as L2, L3, multicast, MPLS, BVI, OAM … (everything that works on a normal GigE).
- L1 features: applied on the satellite chassis
1) The following features are not supported on satellite ports in 5.1.1
*Need to update this*
2) When the ICL is a link bundle, there are some restrictions:
QoS can be applied on the ASR9000 host (it runs on the NP where the satellite ports have their interface descriptors) or it can be offloaded to the satellite.
When you have oversubscription, that is, more aggregate 1G access port bandwidth than the total ICL speed, there could be a potential issue. However, there is an implicit trust model for all high-priority traffic.
Automatic packet classification rules determine whether a packet is a control packet (LACP, STP, CDP, CFM, ARP, OSPF etc), high-priority data (VLAN COS 5, 6, 7; IP precedence 5, 6, 7) or normal-priority data, and it is queued accordingly.
For the downstream direction, that is, from the 9000 host to the Satellite, the "standard" QoS rules and shaping are sufficient to guarantee the delivery of high-priority packets to the satellite ports (e.g. inject to wire etc).
As the ICL link between the satellite and host may be oversubscribed by access interfaces, configuring QoS on the satellite itself is optimal for avoiding the loss of high-priority traffic due to congestion. This feature (QoS offload) was introduced in 5.1.1.
3 steps to configuring QoS offload
INPUT access interface (CLI config) example:
class-map match-any my_class
 match dscp 10
 end-class-map
!
policy-map my_policy
 class my_class
  set precedence 1
 !
 end-policy-map
!
interface GigabitEthernet100/0/0/9
 ipv4 address 10.1.1.1 255.255.255.0
 nv
  service-policy input my_policy
 !
!
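For the other offload direction, here is a hedged sketch (the policy name, interface, and satellite ID are placeholders) mirroring the Policy4 pattern discussed later in this document: an output service-policy under the satellite-fabric-link is offloaded to the satellite and executed before packets are sent over the ICL towards the host.

interface TenGigE0/1/0/5
 nv
  satellite-fabric-link satellite 100
   service-policy output my_fabric_policy
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!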
Traffic is hashed across members of this ICL LAG based on the satellite access port number. No packet payload information (MAC SA/DA or IP SA/DA) is used in the hash calculation. This ensures that QoS applied on the ASR9K for a particular satellite access port works correctly over the entire packet stream of that access port. The current hash rule is simple: access port number modulo the number of active ICL members.
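For example (hypothetical numbers, just to illustrate the rule): with 2 active members in the ICL bundle, traffic from access port GigabitEthernet100/0/0/5 always uses member 5 mod 2 = 1, while port 6 uses member 6 mod 2 = 0.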
Plug-and-play installation: No local config on satellite, no need to even telnet in!
It is recommended to use the auto-IP feature; no loopback or VRF needs to be defined. A VRF **nVSatellite will be auto-defined and does not count towards the number of VRFs configured (for licensing purposes).
Optionally, configure a secret password for satellite login. Note that the username is 'root'.
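A minimal hub-and-spoke satellite definition would then look like this sketch (the satellite ID, address and secret are placeholder assumptions; per my reading of the auto-IP note above, the 'ipv4 address' line can be omitted when auto-IP is used):

nv
 satellite 100
  type asr9000v
  ipv4 address 10.0.0.100
  secret <password>
 !
!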
There are two options for the ICL:
1. Static pinning: designate some ports from the satellite to use a dedicated uplink.
2. A bundle ICL, which provides redundancy when one uplink fails.
All interfaces mapped to an ICL bundle:
The ASR9000 TenGigE interfaces are put into the bundle with mode "on" (no LACP support).
Define the bundle ethernet on the ASR9000 host, and designate which ports will be mapped to the bundle:
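A hedged sketch combining both steps (the bundle number, member interfaces, satellite ID and Loopback10 are placeholders):

interface TenGigE0/1/0/0
 bundle id 100 mode on
!
interface TenGigE0/2/0/0
 bundle id 100 mode on
!
interface Bundle-Ether100
 ipv4 point-to-point
 ipv4 unnumbered Loopback10
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!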
Because of the order and batching in which things get applied in XR, there are some things you need to know when negating certain configuration while adding other configuration in the same commit.
Examples of this are:
In such cases, failures are expected to be seen; generally speaking, the failures are deterministic, and a workaround is available (re-apply the configuration in two commit batches).
The recommendation is to commit ICL configuration changes in a separate commit from Satellite-Ethernet configuration changes.
Starting in 5.1.1 many new features were added to expand upon the basic single host hub-and-spoke model. These features take more configuration than the base satellite configuration and will be discussed below.
Starting in 5.1.1, the ability was added for a satellite (hub-and-spoke) or a ring of satellites (simple ring) to be dual-homed to two hosts. (nV Edge acts as one logical router, so a cluster counts as a single host.)
With this configuration, one ASR9K host is the active and the other is the standby. Data and control traffic from the satellite flows to the active host, but both hosts send and receive management traffic via the SDAC protocol; this is used to determine connectivity, detect failures, synchronize state, etc.
The two hosts communicate management information via ICCP, using a modified version of SDAC called ORBIT.
Supported Topologies:
- Hub-and-spoke dual hosts
  - 9000v with 10G ICL or bundle ICL
  - 901 with 1G ICL
  - 9000v (10G) or 901 (1G) using L2 fabric sub-interfaces
  - Satellites may be partitioned
- Simple ring dual hosts
  - 9000v with 10G ICL
  - 901 with 1G ICL
  - Satellites may not be partitioned
Note: Partitioning is when you carve out certain access ports to be used by certain ICL interfaces
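As a hedged illustration of partitioning (all interface numbers and the satellite ID are placeholders), satellite 100's access ports are carved out across two static ICLs:

interface TenGigE0/1/0/0
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-21
  !
 !
!
interface TenGigE0/2/0/0
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/22-43
  !
 !
!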
Current limitations:
- Must be two single chassis, no clusters
- Load balancing is active/standby per satellite; per-access-port load balancing is planned
- No user configuration sync between hosts
Configuration Differences:
The most notable changes compared to a simple hub-and-spoke design are the ICCP configuration and the addition of the satellite serial number.
Example
Router 1
redundancy
 iccp
  group 1
   member
    neighbor 172.18.0.2
   !
   nv satellite
    system-mac <mac>   (optional)
   !
  !
 !
!
nv
 satellite 100
  type asr9000v
  ipv4 address 10.0.0.100
  redundancy
   host-priority <priority>   (optional)
  !
  serial-number <satellite serial>
 !
!
vrf nv_mgmt
!
interface Loopback10
 vrf nv_mgmt
 ipv4 address 10.0.0.1 255.255.255.255
!
interface Loopback1000
 ipv4 address 172.18.0.1 255.255.255.255
!
interface GigabitEthernet0/1/0/4
 ipv4 address 192.168.0.1 255.255.255.0
!
interface TenGigE0/0/0/0
 ipv4 point-to-point
 ipv4 unnumbered Loopback10
 nv
  satellite-fabric-link [network satellite <> | satellite <>]
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
mpls ldp
 router-id 172.18.0.1
 discovery targeted-hello accept
 neighbor 172.18.0.2
!
router static
 address-family ipv4 unicast
  172.18.0.2/32 192.168.0.2
 !
!
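Router 2 (a sketch by symmetry; since there is no configuration sync between hosts, the nv satellite definition, serial number and fabric-link configuration repeat on Router 2 exactly as on Router 1, and only the pieces with mirrored addresses are shown here):

redundancy
 iccp
  group 1
   member
    neighbor 172.18.0.1
   !
   nv satellite
   !
  !
 !
!
interface Loopback1000
 ipv4 address 172.18.0.2 255.255.255.255
!
router static
 address-family ipv4 unicast
  172.18.0.1/32 192.168.0.1
 !
!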
Starting in 5.1.1 we have the ability to support more than just simple hub-and-spoke. The ring topology allows for satellite chaining, cascading, and in general a more advanced satellite network.
Requirements and Limitations:
Configuration:
This is essentially the same as the dual-host setup, but the 'network' option must be used when entering 'satellite-fabric-link', as shown in the sketch below.
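A hedged sketch of that (the interface, satellite ID and port range are placeholders), following the same shape as the QoS discussion at the end of this document:

interface TenGigE0/0/0/0
 nv
  satellite-fabric-link network
   redundancy
    iccp-group 1
   !
   satellite 100
    remote-ports GigabitEthernet 0/0/0-43
   !
  !
 !
!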
Cascading is treated as a special ring and works the same way as a simple ring.
The biggest difference is that in 5.1.1 cascading supports a single host while simple ring does not.
Starting in 5.1.1 we have the ability to extend the ICL across an EVC (Ethernet Virtual Circuit); normally an ICL is an L1 connection. This increases the flexibility of satellite by allowing for greater distances between the ASR9K host and the satellite device.
Requirements and limitations:
Configuration:
On Active-Host:
interface TenGigE0/1/0/23.200
 encapsulation dot1q 200
 nv
  satellite-fabric-link satellite 200
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
On Standby-Host:
interface TenGigE0/1/1/0.200
 encapsulation dot1q 220
 nv
  satellite-fabric-link satellite 200
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
Note: L2 cloud configuration not shown
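The original note leaves the L2 cloud out. Purely as a hedged illustration (not from the original document; interface and group names are invented placeholders), one possible realization on an intermediate IOS-XR device is a point-to-point xconnect stitching the active host's VLAN 200 EVC towards the satellite (the standby host's VLAN 220 path would be a second, similar circuit):

interface TenGigE0/0/0/1.200 l2transport
 encapsulation dot1q 200
!
interface TenGigE0/0/0/2.200 l2transport
 encapsulation dot1q 200
!
l2vpn
 xconnect group SAT
  p2p SAT-200
   interface TenGigE0/0/0/1.200
   interface TenGigE0/0/0/2.200
  !
 !
!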
'show nv satellite status'
Checking Version: The version of the software running on the satellite is checked for compatibility with the running version of IOS-XR on the host
Configured Serial Number: (If configured) the serial number configured for the satellite, checked against that presented by the satellite during control protocol authentication
Configured Satellite Links: One entry for each of the configured satellite-fabric-links, headed by the interface name. The following information is present for each configured link:
Discovered Satellite Fabric Links: This section is only present for redundant satellite-fabric-links. This lists the interfaces that are members of the configured link, and the per-link discovery state.
Conflict: If the configured link is in conflict, the satellite discovered over the link is presenting data that contradicts that found over a different satellite-fabric-link.
'show nv satellite protocol discovery'
Host IPv4 Address: The IPv4 address used by the host to communicate with this satellite. It should match the IPv4 address on all the satellite-fabric-links.
For Bundle-Ether satellite-fabric-links, there are then 0 or more 'Discovered links' entries; for physical satellite-fabric-links, the same fields are present but just inline.
'show nv satellite protocol control'
Authenticating: The TCP session has been established, and the control protocol is checking the authentication information provided by the Satellite
Connected: The SDACP control protocol session to the satellite has been successfully brought up, and the feature channels can now be opened.
For each channel, the following fields are present:
Open (In Resync - Awaiting Client Resync End): The Feature Channel Owner (FCO) on the host has not finished sending data to the FCO on the Satellite. If this is the state, the triage should typically continue on the host; the owner of the Feature Channel should be contacted.
Open (In Resync - Awaiting Satellite Resync End): The FCO on the Host is awaiting information from the FCO on the Satellite. If this is the state, the triage should typically continue on the satellite.
Notes:
icpe_gco[1148]: %PKT_INFRA-ICPE_GCO-6-TRANSFER_DONE : Image transfer completed on Satellite 101
A few issues can cause this:
Conflict Messages
Examples:
BNG access over satellite is only qualified over bundle access and isn’t supported over bundle ICLs.
BNG access over ASR9k host and NCS5k satellite specifically is in the process of official qualification in 6.1.x. Please check with PM for exact qualified release.
Access bundles across satellites in an nV dual-head solution are generally not recommended. The point is not to bundle services across satellites in a dual-head system: if the satellites align to different hosts, the solution breaks without an explicit redundant path. An MC-LAG over satellite access is a better solution there.
Bundle access over a bundle fabric/ICL requires 5.3.2 and above on the ASR9k. For the NCS5k satellite, bundle ICL (including bundle over bundle) is supported from 6.0.1, and nV dual-head topologies are planned to be supported only from 6.1.1.
MC-LAG over satellite access might be more convergence friendly and feature rich than nV dual head for BNG access from previous case studies. For non BNG access, nV dual head and MC-LAG are both possible options with any combinations of physical or bundle access and fabric.
In an MC-LAG with satellite access, the topology is just a regular MC-LAG system with the hosts syncing over ICCP but with satellite access as well. Note that the individual satellites aren’t dual homed/hosted here so there is no dual-host feature to sync over ICCP beyond just MC-LAG from CE.
As a deployment recommendation, unless ICL links (between satellite and host) are more prone to failure than access links, MC-LAG might be preferable over the nV dual-head solution. However, if ICL links have a higher failure probability, and a member going down reduces bandwidth in bundle-ICL cases, note that MC-LAG may not switch over unless the whole link or the access side goes down.
Xander Thuijs CCIE#6775
Principal Engineer, ASR9000
Sam Milstead
Customer Support Engineer, IOS XR
I believe the VOQ is per satellite access port, not per ICL, but I don't work on that project any more, so @xthuijs can double check and confirm for you. From my recollection, if you send > 1 Gbps to a satellite access port, it should not impact other ports; further, even if you send > 10 Gbps to a satellite access port, it should still not impact other ports, but this needs to be double checked to be sure... In case you see it having an impact, you may still be able to play with the egress port buffer to drop packets only of that access port and not trigger VOQ for the entire ICL. Since I am not currently working on that project, I'll let xthuijs double check and respond. If you have a testbed where you can try it out, please feel free to do so to confirm the behavior.
assuming this: 10g/core---ingress(a9k)egress---ICL---satellite
the VOQ is held for the satellite ICL interface on the ingress LC side.
it will allow 10G towards the egress LC.
if there is a single 10G stream targeting a satellite 1G port, the egress NPU will start to drop packets on xmit as it can only drain 1G.
however this is a very artificial scenario because windowing and the like would never allow a transmitter to send at a constant 10G rate towards a 1G receiver.
xander
Does it have to be a single large stream to cause VOQ drop, or an aggregated >10G to a satellite port would have the same effect?
If VOQ uses WRED, the impact might not be as severe. Is there anything configurable on ingress side to relieve VOQ congestion due to a single satellite access port receiving too much traffic (especially malicious traffic, which could be a lot more than 1G). Thanks.
a single large stream can do it. there is no WRED on the FIA/VOQ scheduler; it is merely a shaper with a bit of buffering. a VOQ represents a 10G entity in the system.
under normal circumstances a sender can't send 10G down to a 1G receiver because of that windowing etc. if there is a UDP attack of some sort, of course, then we can have that scenario. But this is no different than any other DoS situation, whereby netflow, policers, ACLs, and flowspec will find their purpose/use.
there is no single knob on an ingress LC to protect a satellite port by nature, as it sounds like you were hoping for :)
xander
Just to clarify a couple of things,
In the outlined scenario:
10g/core---ingress(a9k)egress---ICL---satellite
A 10Gbps NTP-reflection DDoS landing on the egress LC NPU is hardly any pps load, even on a Trident-based NPU.
So as Xander said, the egress NPU will happily process all packets arriving from the fabric (DDoS traffic as well as regular traffic) and place them in their respective WAN delay-bandwidth-buffer queues, and only the egress queue where the DDoS traffic lands will drop packets, in a WRED fashion if configured to use WRED.
So only the one queue of the customer under DDoS will be affected. No other queues or customers will be affected in this case.
Only in scenario:
Xg/core---ingress(a9k)egress---ICL---satellite
Where X amount of traffic coming from all ingress core-facing cards is too much pps or bps for the egress LC's NPU to cope with and classify correctly, or is more than the ICL link BW (10GE entity), only then does the egress NPU initiate backpressure that results in ingress LCs not getting fabric grants and eventually running out of fabric buffer in the VOQ holding the NTP-reflection DDoS traffic.
But one important thing to note is that there are multiple VOQs per 10GE entity (or multiple priority levels in one VQI, depending on which presentation you're looking at: 10GE entity = VQI = 4 VOQs: Priority1, Priority2, Normal, Multicast). So suppose one of the customers hanging off of the satellite/10GE entity is under an NTP-reflection DDoS so powerful that it is actually overloading the egress NPU. This internet traffic will be classified as fabric low-priority, while the VoIP calls destined to all the customers hanging off of the satellite (or 10GE entity) will be marked as fabric high-priority1 and their business-critical traffic as fabric high-priority2. Hence only the low-priority traffic of the customers sharing a common 10GE entity will be affected by this DDoS attack.
adam
netconsultings.com
::carrier-class solutions for the telecommunications industry::
Hi Adam,
The concern here is bps, not pps. We had to move some customer ports around to avoid drops on other customer ports pinned on the same ICL when there's a >10Gbps attack.
Hmm that doesn’t seem right, unless there’s oversubscription on the ICL link or on the egress NPU.
Where did you see the drops happening? On ingress/core-facing card or on the egress/ICL facing card please?
If on egress NPU,
How many customers do you serve via that ICL link? What is the sum of shape rates of each of the customers on the ICL link? Is it more than 10Gbps?
If on ingress NPU,
Then the egress NPU probably got oversubscribed, i.e. all the other traffic that it processes normally + the 10Gbps DDoS was more than the NPU can process, resulting in backpressure towards the ingress NPU(s).
But with correct QOS only low priority traffic should have been affected.
adam
netconsultings.com
::carrier-class solutions for the telecommunications industry::
As Xander pointed out, VQI is per ICL, not per satellite access port. Ingress NP can send more than 1Gbps destined to a single access port across fabric to egress NP. When the 10G VQI for ICL fills up, default queue would start to drop, including traffic to other access ports on the same ICL.
This is not constrained by egress NP's pps performance.
Yup, agree, VQI is per 10GE entity; my mistake, I got confused swinging between MX and ASR architectures.
Yes, backpressure will kick in if ingress is sending more than 10Gbps towards a 10GE entity on egress; then ingress drops are based only on coarse fabric priorities.
If however ingress is sending less than or around 10Gbps (I think it allows a little over 10Gbps in short spikes to allow for egress port shaping buffering), then all that traffic is allowed through and any drops are done on egress based on granular egress QOS.
Though now I’m not sure if it’s possible to have per-customer QOS on ICL links (the same way as I’d have it on a port to an aggregation switch)?
adam
Egress QoS configured on satellite (sub)interfaces is executed on the NP (i.e. before the traffic is sent over the ICL). If this is what you consider as "per customer" QoS, then yes, this is possible.
Yes that’s exactly what I meant, thank you Aleksandar,
So just to confirm, then the config would look like the one below?
interface Bundle-Ether10
 ipv4 point-to-point
 ipv4 unnumbered Loopback123
 nv
  satellite-fabric-link satellite 1
   service-policy output test-icl-policy      <<<<<<<<<<<<<<< global icl policy right?
   redundancy
    iccp-group 10
   !
   remote-ports GigabitEthernet 0/0/0
    service-policy output test-host-policy    <<<<<<<<<<<< per nv-port/customer policy right?
   !
  !
adam
For service-policy configured on satellite access port, is there any difference between service-policy under nv and service-policy directly under gi10x/0/x (without nv)?
hi Adam,
not really. QoS policy on the ICL interface is not supported. This is how you can do it:
interface GigabitEthernet200/0/0/1
 service-policy output Policy1
 service-policy input Policy2
 nv
  service-policy input Policy3
 !
!
interface Bundle-Ether10
 nv
  satellite-fabric-link network
   redundancy
    iccp-group 1
   !
   satellite 200
    service-policy output Policy4
    remote-ports GigabitEthernet 0/0/1-2
   !
  !
 !
Policy1: executed on the NP (before packets hit the ICL)
Policy2: executed on the NP (when packets have already traversed the ICL)
Policy3: executed on the satellite on packets received from the wire
Policy4: executed on the satellite before sending the packets over ICL towards the host
please see the post below:
https://supportforums.cisco.com/document/9868421/asr9000xr-using-satellite-technote#comment-11963776
Hi Alex,
Is BFD included in control protocol in auth-qos?
Thanks,
Mei