This document provides a guide to using the satellite (ASR9000v) with the ASR9000 and ASR9900 series routers. It discusses what you can and cannot do, how to verify satellite operation, and typical use cases.
This document is written assuming that software release 5.1.1 or later is used.
Satellite is a relatively new technology that was introduced in XR 4.2.1. It provides a great and cheap way to extend your 1G ports by using a port extender that is managed entirely from the ASR9000. The advantage is that while you may have 2 devices, there is only 1 entity to manage: all the satellite ports show up in the XR configuration of the ASR9000.
Another great advantage of the Satellite is that you can place it in remote locations, miles away from the ASR9000 host!
Although there is a limit to the number of satellites you can connect to an ASR9000 (or cluster), the general Satellite concept on the ASR9000 is shown in this picture:
The physical connections are very flexible. The link between the Satellite and the ASR9000 is called the ICL or "Inter Chassis Link".
This link transports the traffic from the satellite ports to the 9000 host.
In the ASR9000 host configuration you define how the satellite ports map to the ICL.
You can statically pin ports from the Satellite to a single uplink (which means there is no protection: when that uplink fails, those satellite
ports become unusable), or you can bundle uplinks together and assign a group of ports to that bundle. This provides redundancy,
but with the limitation that satellite ports using an uplink bundle can't be made part of a bundle themselves.
We'll talk about restrictions a bit later.
In the picture below you see Access Device A1 connecting with a bundle that uses an uplink (green) to the 9k host LC-2.
A second satellite has all of its ports in a bundle ICL.
Note that no bandwidth constraints are enforced, so theoretically you can have a 2-member ICL bundle with 30 Satellite ports mapped to it, but that would mean oversubscription.
While the ASR9000v/Satellite is based on the Cisco CPT-50, you cannot convert between the 2 devices by loading different software images.
You can't use the 9000v as a standalone switch; it needs the ASR9000 host.
Visual differences include that the 9000v starts the port numbering at 0, whereas the CPT starts at 1. Also, the CPT has 4 different power options
while the ASR9000v has only 3: AC-A, DC-A, DC-E (A for ANSI, E for ETSI).
Satellite packet format over L1 topologies looks like this; there is a simple sneaky dot1q tag added which we call the nV tag:
In L2 topologies, such as simple ring, we use dot1ad.
There is a license required to run the ASR9000v. There are 3 licenses, for 1, 5, or 20 Satellites per 9k host, named A9K-NVSAT1-LIC, A9K-NVSAT5-LIC, and A9K-NVSAT20-LIC.
Licenses are not hard enforced, meaning the system will still work even when a license is not present; however, you are urged to obtain the proper license, as syslog messages will show the "violation of use".
Note when using simple ring, a host license for each satellite is needed on each host. E.g. a simple ring with three satellites requires six A9K-NVSAT1-LIC licenses.
A variety of optics are supported on the ASR9000v, though they may not always be the same as on the ASR9000. Reference this link for the supported optics for the ASR9000/9000v.
When using Tunable optics for the 9000v, pay attention to the following:
(*) Note: for the tunable optic on the ICL you need to set the wavelength the first time via the 9000v shell, upon insertion of the optic, before shipping it to its destination location.
Handling of Unsupported Optics
For the 9000v ports we do not support the 'service unsupported-transceiver' or 'transceiver permit pid all' commands.
The satellite device simply flags an unsupported transceiver without disabling the port or taking any further action. As long as the pluggable is readable by the satellite, the SFP may work, but there are no additional 'hacks' such as hidden commands beyond what is shown as supported in the tables in the supported optics reference link.
The following software and hardware requirements exist for the ASR9000v. Although support started in XR 4.2.1, my personal recommendation is to go with XR 4.3 (latest), as many initial restrictions from the first release have been lifted:
Minimum version is XR 4.2.1
Note: If the wrong port is used for the ICL, the link will stay down on the 901. Once the correct ICL port is used and the 9K is configured, the 901 will need to be reloaded for the link to come up and the 901 to be recognized as a satellite.
Generally speaking, all features supported on physical GigE ports of the ASR9K are also automatically supported on the GigE ports of the
satellite switch, such as L2, L3, multicast, MPLS, BVI, OAM … (everything that works on a normal GigE); see the sketch after the next line.
L1 features: applied on the satellite chassis
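To illustrate: a satellite access port takes ordinary configuration just like a local GigE port. A minimal sketch, assuming satellite ID 100 and hypothetical addressing (the GigabitEthernet100/0/0/x naming follows the satellite ID, as in the QoS example later in this document):
interface GigabitEthernet100/0/0/2.10
 encapsulation dot1q 10
 ipv4 address 192.0.2.1 255.255.255.0
!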
1) The following features are not supported on satellite ports in 5.1.1
*Need to update this*
2) When the ICL is a link bundle, there are some restrictions:
QoS can be applied on the ASR9000 host (it runs on the NP where the satellite ports have their interface descriptors) or offloaded to the satellite.
When you have oversubscription, that is, more aggregate 1G access bandwidth than the total ICL speed, there could be a potential issue. However, there is an implicit trust model for all high priority traffic.
Automatic packet classification rules determine whether a packet is a control packet (LACP, STP, CDP, CFM, ARP, OSPF etc), high priority data (VLAN COS 5, 6, 7; IP precedence 5, 6, 7) or normal priority data, and the packet is queued accordingly.
For the downstream direction, that is, 9000 host to the Satellite, the "standard" QoS rules and shaping are sufficient to guarantee the delivery of high priority packets to the satellite ports (e.g. inject to wire etc).
As the ICL link between the satellite and host may be oversubscribed by access interfaces, configuring QoS on the satellite itself is optimal for avoiding the loss of high priority traffic due to congestion. This feature was introduced in 5.1.1.
3 steps to configuring QoS offload
INPUT access interface (CLI config) example:
class-map match-any my_class
 match dscp 10
end-class-map
!
policy-map my_policy
 class my_class
  set precedence 1
 !
end-policy-map
!
interface GigabitEthernet100/0/0/9
 ipv4 address 10.1.1.1 255.255.255.0
 nv
  service-policy input my_policy
 !
!
Traffic is hashed across the members of this ICL LAG based on the satellite access port number. No packet payload information (MAC SA/DA or IP SA/DA) is used in the hash calculation. This ensures that QoS applied on the ASR9K for a particular satellite access port works correctly over the entire packet stream of that access port. The current hash rule is simple (port number modulo the number of active ICLs).
Plug-and-play installation: No local config on satellite, no need to even telnet in!
It is recommended to use the auto-IP feature; no loopback or VRF needs to be defined. A VRF **nVSatellite will be auto-defined and does not count towards the number of VRFs configured (for licensing purposes).
Optionally, configure a secret password for satellite login. Note that the username is 'root'.
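A minimal sketch of a plug-and-play single-host setup with auto-IP (satellite ID, ICL interface, and port range are illustrative; the optional login secret is not shown):
nv
 satellite 100
  type asr9000v
 !
!
interface TenGigE0/1/0/0
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
With auto-IP, no addressing needs to be configured for the satellite; discovery and management addressing happen automatically over the ICL in the auto-created **nVSatellite VRF.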
There are two options for the ICL (both are sketched in the example below):
Static pinning: designate some ports from the satellite to use a dedicated uplink.
A bundle ICL, which provides redundancy when one uplink fails.
All interfaces mapped to an ICL bundle:
The ASR9000 TenGigE interfaces are put into bundle mode ON (no LACP support).
Define the Bundle-Ether interface on the ASR9000 host, and designate which ports will be mapped to the bundle:
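A hedged sketch covering both ICL options (interface numbers and port ranges are illustrative): access ports 0-9 are statically pinned to a dedicated uplink, while ports 10-43 map to a redundant 2-member bundle ICL in mode ON:
! Static pinning: access ports 0-9 use this uplink only
interface TenGigE0/1/0/1
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-9
  !
 !
!
! Bundle ICL: access ports 10-43 survive a single member failure
interface Bundle-Ether50
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/10-43
  !
 !
!
interface TenGigE0/1/0/2
 bundle id 50 mode on
!
interface TenGigE0/2/0/2
 bundle id 50 mode on
!
Remember the restriction noted earlier: access ports mapped to the bundle ICL cannot themselves be members of an access bundle.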
Because of the order and batching in which configuration gets applied in XR, there are some things you need to know when negating certain config while adding other config.
Examples of this are:
In such cases, failures are expected to be seen; generally speaking, the failures are deterministic, and workarounds are available
(re-apply the configuration in two commit batches).
The recommendation is to commit ICL configuration changes in a separate commit from Satellite-Ether configuration changes.
Starting in 5.1.1 many new features were added to expand upon the basic single host hub-and-spoke model. These features take more configuration than the base satellite configuration and will be discussed below.
Starting in 5.1.1, the ability was added for a satellite (hub-and-spoke) or a ring of satellites (simple ring) to be dual-homed to two hosts (nV Edge acts as one logical router).
With this configuration one ASR9K host is active and the other is standby. Data and control traffic from the satellite flows to the active host, but both hosts send and receive management traffic via the SDAC protocol. This is used to determine connectivity, detect failures, sync the configuration, etc.
The two hosts communicate management information via ICCP, with a modified version of SDAC called ORBIT.
Supported Topologies:
Hub-and-spoke dual hosts
9000v with 10G ICL or bundle ICL
901 with 1G ICL
9000v (10G) or 901 (1G) using L2 fabric sub-interfaces
Satellites may be partitioned
Simple ring dual hosts
9000v with 10G ICL
901 with 1G ICL
Satellites may not be partitioned
Note: Partitioning is when you carve out certain access ports to be used by certain ICL interfaces, as in the sketch below.
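A hedged partitioning sketch (illustrative interfaces and ranges): two fabric links to the same satellite, each carving out half of the access ports:
interface TenGigE0/1/0/0
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-21
  !
 !
!
interface TenGigE0/1/0/1
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/22-43
  !
 !
!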
Current limitations:
Must be two single chassis, no clusters
Load balancing is active/standby per satellite (per access port is planned)
No user configuration sync between hosts
Configuration Differences:
The most notable changes when coming from a simple hub-and-spoke design are the ICCP configuration and adding the satellite serial number.
Example
Router 1
redundancy
 iccp
  group 1
   member
    neighbor 172.18.0.2
   !
   nv satellite
    system-mac <mac>   (optional)
   !
  !
 !
!
nv
 satellite 100
  type asr9000v
  ipv4 address 10.0.0.100
  redundancy
   host-priority <priority>   (optional)
  !
  serial-number <satellite serial>
 !
!
vrf nv_mgmt
!
interface Loopback10
 vrf nv_mgmt
 ipv4 address 10.0.0.1
!
interface Loopback1000
 ipv4 address 172.18.0.1 255.255.255.255
!
interface GigabitEthernet0/1/0/4
 ipv4 address 192.168.0.1 255.255.255.0
!
interface TenGigE0/0/0/0
 ipv4 point-to-point
 ipv4 unnumbered Loopback10
 nv
  satellite-fabric-link [network satellite <> | satellite <>]
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
mpls ldp
 router-id 172.18.0.1
 discovery targeted-hello accept
 neighbor 172.18.0.2
 !
!
router static
 address-family ipv4 unicast
  172.18.0.2/32 192.168.0.2
 !
!
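Router 2 (a hedged sketch, mirrored from the Router 1 example: its own loopback is 172.18.0.2 and Router 1 is the ICCP neighbor; addresses are illustrative):
redundancy
 iccp
  group 1
   member
    neighbor 172.18.0.1
   !
   nv satellite
   !
  !
 !
!
nv
 satellite 100
  type asr9000v
  ipv4 address 10.0.0.100
  serial-number <satellite serial>
 !
!
interface Loopback1000
 ipv4 address 172.18.0.2 255.255.255.255
!
interface TenGigE0/0/0/0
 ipv4 point-to-point
 ipv4 unnumbered Loopback10
 nv
  satellite-fabric-link satellite 100
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
router static
 address-family ipv4 unicast
  172.18.0.1/32 192.168.0.1
 !
!
Since there is no user configuration sync between hosts, the satellite definition (including the serial number) must be configured on both routers.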
Starting in 5.1.1 we have the ability to support more than just simple hub-and-spoke. The ring topology allows for satellite chaining, cascading, and in general a more advanced satellite network.
Requirements and Limitations:
Configuration:
This is essentially the same as the dual-host setup, but the network option must be used when entering 'satellite-fabric-link'; see the sketch below.
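A hedged sketch of a ring-facing fabric link, following the bracketed syntax from the dual-host example (satellite ID and port range are illustrative):
interface TenGigE0/0/0/0
 nv
  satellite-fabric-link network satellite 100
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!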
Cascading is treated as a special ring and works the same way as a simple ring.
The biggest difference is that in 5.1.1 cascading supports a single host, while simple ring does not.
Starting in 5.1.1 we have the ability to extend the ICL across an EVC; normally the ICL is an L1 connection. This increases the flexibility of satellite by allowing for greater distances between the ASR9K host and the satellite device.
Requirements and limitations:
Configuration:
On Active-Host:
interface TenGigE0/1/0/23.200
 encapsulation dot1q 200
 nv
  satellite-fabric-link satellite 200
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
On Standby-Host:
interface TenGigE0/1/1/0.200
 encapsulation dot1q 220
 nv
  satellite-fabric-link satellite 200
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
Note: L2 cloud configuration not shown
'show nv satellite status'
Checking Version: The version of the software running on the satellite is checked for compatibility with the running version of IOS-XR on the host.
Configured Serial Number: (If configured) the serial number configured for the satellite, checked against that presented by the satellite during control protocol authentication.
Configured Satellite Links: One entry for each of the configured satellite-fabric-links, headed by the interface name. The following information is present for each configured link:
Discovered Satellite Fabric Links: This section is only present for redundant satellite-fabric-links. It lists the interfaces that are members of the configured link, and the per-link discovery state.
Conflict: If the configured link is flagged as conflicting, the satellite discovered over the link is presenting data that contradicts that found over a different satellite-fabric-link.
'show nv satellite protocol discovery'
Host IPv4 Address: The IPv4 address used by the host to communicate with this satellite. It should match the IPv4 address on all the satellite-fabric-links.
For Bundle-Ether satellite-fabric-links, there are then 0 or more 'Discovered links' entries; for physical satellite-fabric-links, the same fields are present but inline.
'show nv satellite protocol control'
Authenticating: The TCP session has been established, and the control protocol is checking the authentication information provided by the Satellite.
Connected: The SDACP control protocol session to the satellite has been successfully brought up, and the feature channels can now be opened.
For each channel, the following fields are present:
Open (In Resync - Awaiting Client Resync End): The Feature Channel Owner (FCO) on the host has not finished sending data to the FCO on the Satellite. If this is the state, triage should typically continue on the host; the owner of the Feature Channel should be contacted.
Open (In Resync - Awaiting Satellite Resync End): The FCO on the Host is awaiting information from the FCO on the Satellite. If this is the state, triage should typically continue on the satellite.
Notes:
icpe_gco[1148]: %PKT_INFRA-ICPE_GCO-6-TRANSFER_DONE : Image transfer completed on Satellite 101
A few issues can cause this:
Conflict Messages
Examples:
BNG access over satellite is only qualified over bundle access and isn’t supported over bundle ICLs.
BNG access over ASR9k host and NCS5k satellite specifically is in the process of official qualification in 6.1.x. Please check with PM for exact qualified release.
Access bundles across satellites in an nV dual-head solution are generally not recommended. The emphasis is not to bundle services across satellites in a dual-head system: if they align to different hosts, the solution breaks without an explicit redundant path. An MC-LAG over satellite access is a better solution there.
Bundle access over a bundle fabric / ICL requires 5.3.2 and above on the ASR9k. For the NCS5k satellite, bundle ICL (including bundle over bundle) is supported from 6.0.1, and nV dual-head topologies are planned to be supported only from 6.1.1.
MC-LAG over satellite access may be more convergence-friendly and feature-rich than nV dual head for BNG access, based on previous case studies. For non-BNG access, nV dual head and MC-LAG are both possible options with any combination of physical or bundle access and fabric.
In an MC-LAG with satellite access, the topology is just a regular MC-LAG system with the hosts syncing over ICCP, but with satellite access as well. Note that the individual satellites aren't dual-homed/hosted here, so there is no dual-host state to sync over ICCP beyond regular MC-LAG from the CE; a sketch follows below.
As a deployment recommendation, unless ICL links (between satellite and host) are more prone to failure than access links, MC-LAG might be preferable over the nV dual-head solution. However, if ICL links have a higher failure probability, keep in mind that member failures reduce bandwidth in bundle ICL cases, and MC-LAG may not switch over unless the whole link goes down or the access goes down.
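A minimal MC-LAG sketch on one host (the mLACP node/system values, neighbor address, and interfaces are illustrative; this is generic IOS-XR mLACP configuration, the only satellite-specific part being that the access port happens to live on satellite 100):
redundancy
 iccp
  group 10
   mlacp node 1
   mlacp system mac 0200.0000.00aa
   mlacp system priority 10
   member
    neighbor 172.18.0.2
   !
  !
 !
!
interface Bundle-Ether10
 mlacp iccp-group 10
 lacp switchover suppress-flaps 300
!
! A satellite access port joins the bundle like any local port
interface GigabitEthernet100/0/0/5
 bundle id 10 mode active
!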
Xander Thuijs CCIE#6775
Principal Engineer, ASR9000
Sam Milstead
Customer Support Engineer, IOS XR
Hi Xander,
Thank you for this very nice technote on ASR9K satellites. I have some followup questions:
-) Can you please elaborate on the roadmap regarding CDP, BFD and EOAM support?
-) You wrote that there is no LAG-behind-LAG support on the satellites. Nevertheless, in your figure you show an end device using a LAG connected to both satellites. Does this mean that LAG behind LAG is supported when using more than one satellite? Please explain this restriction in more detail.
-) Can you please elaborate on the roadmap regarding future satellite architectures like rings and chaining?
Regards,
Florian
Hi Florian,
ah I just noticed that that picture is a bit misleading indeed. I'll update that shortly to remove that ambiguity.
The cdp/bfd etc should be in 5.2 (tentatively, but let me confirm).
thanks
xander
Hi Xander,
Is there a plan to lift the LAG behind LAG restriction in the future?
Regards,
Florian
Hi Florian, yeah, although not officially committed to that release, it is being looked at for XR5.2
regards
xander
Hi Alexander.
Does the ASR9000v support GRE packets? I have a GRE tunnel configured and I'm getting the message below.
LC/0/3/CPU0:Jul 26 00:44:01.896 : tunl_gre_ea[329]: %PLATFORM-TUNL_GRE_EA_PD-4-UNSUPPORTED_CONFIG : GRE Tunnel not supported on this linecard. Set the network topology to avoid this line card for GRE Tunnel packets
Thank you !
Renato
Hi Renato, I had it checked out and it seems that our GRE development folks are not putting any restrictions on this.
The case is that the satellite doesn't do anything feature-wise and merely puts an nV tag on the packet between access and host. So theoretically this can be supported transparently.
What RSP version are you using? What LC type is the Satellite connected to? And what version are you running?
Satellite can only run with the RSP440 and Typhoon LCs; that may be part of the problem?
Test scenario verified was:
Remote-RTR--------SAT-----------Host
Tunnel between Remote-Rtr and Host.
interface tunnel-ip901
 ipv4 address 98.10.10.1 255.255.255.0
 tunnel mode gre ipv4
 tunnel source GigabitEthernet901/0/0/4   --> GRE source is a sat-ether port
 tunnel destination 98.3.3.2
regards
xander
Hi Alexander,
I'm using the RSP440. I didn't realize the log message was related to the ISM module; I guess it has nothing to do with the satellite module. I have a GRE tunnel UP on both sides but I can't ping even my own IP address.
Tks
Renato Reis
Ah haha, yeah, the ISM doesn't do that
very cool glad to hear it is working!
xander
Hi Alexander,
I know this topic is not related to my question, but can you help me with this issue?
https://supportforums.cisco.com/message/3998784#3998784
Thanks
Renato Reis
I am not big on ISM, but I will forward this off to a few folks to have a look along for you.
regards
xander
Thank you !
Xander,
Can you comment on the availability of satellite ICLs over L2 links? It is mentioned in one of the docs with the following verbiage "The connection between satellite and host is called “nv fabric link”, which could be L1 or over L2 virtual circuit (future)". Specifically, I am looking to identify whether support for the nV satellite will be supported over something like a QinQ link where the service provider doesn't own the transport across the entire network.
Thanks-
Philip
Hi Philip,
I am checking for the precise test cases that would make this officially supported.
Technically there is no restriction preventing this. The ICL is really like a simple dot1q link whereby the VLAN is the nv-tag, as we call it, so there should be no bearing on whether this is an L1 or L2 (e.g. VPWS) link.
Since the Sat is configured via this link, the specific test requirements surround negative test cases, specifically to make sure the sat and host see the same config/state in case the VPWS has delays or hiccups...
regards
xander
Xander,
Thanks. Please let me know what comes of your internal checks. I have a customer we need to communicate with and the only verbiage I am able to go off of at present is that which I posted previously. Please keep me apprised. Thanks
Philip
Philip, it occurred to me that I hadn't updated you on the investigation I did over the last few days.
So here is the deal:
Satellite uses LLDP for discovery and untagged frames for control traffic on the ICL.
Now we don't have official dev test cases for this particular model, using E-LINE (better stated than PW), yet.
For sure the 9k cannot terminate the PW as an ICL uplink, so the E-LINE service needs to start and terminate between the PE nodes attaching the a9k and the satellite.
If the E-LINE service cannot guarantee proper delivery of the control and LLDP traffic for whatever reason (loss, oversubscription etc), then the satellite solution will be unusable and unstable.
When the LLDP discovery fails, both state and forwarding will get a hiccup!
So if you use that deployment model, make super duper sure that this service is reliable, as otherwise you will see unstable satellite operation.
We are adding test cases for this as we speak, btw, to make sure recovery is fine etc. (although it was part of the regular testing also without an E-LINE ICL)
regards
xander