on 07-14-2013 05:11 AM
This document provides a guide on how to use the satellite (ASR9000v) with the ASR9000 and ASR9900 series routers. It covers what you can and cannot do, how to verify satellite operation, and common use cases.
This document is written assuming software release 5.1.1 or later.
Satellite is a relatively new technology, introduced in XR 4.2.1. It provides an inexpensive way to extend your 1G port capacity with a port extender that is managed entirely out of the ASR9000. The advantage is that while you may have two devices, there is a single entity to manage: all the satellite ports show up in the XR configuration of the ASR9000.
Another great advantage of the Satellite is that you can put it on remote locations, miles away from the ASR9000 host!
Although there is a limit to the number of satellites you can connect to an ASR9000 (cluster), the general Satellite concept is shown in this picture:
The physical connections are very flexible. The link between the Satellite and the ASR9000 is called the ICL or "Inter Chassis Link".
This link transports the traffic from the satellite ports to the 9000 host.
In the ASR9000 host configuration you define how the satellite ports map to the ICL.
You can statically pin ports from the Satellite to a single uplink (meaning there is no protection: when that uplink fails, those satellite ports become unusable), or you can bundle uplinks together and assign a group of ports to that bundle. This provides redundancy, but with the limitation that satellite ports using an uplink bundle can't be made part of a bundle themselves.
We'll talk about restrictions a bit later.
In the picture below you see Access Device A1 connecting with a bundle that uses uplink (green) to the 9k host LC-2.
A second satellite has all of its ports mapped to a bundle ICL.
Note that no bandwidth constraints are enforced, so theoretically you can have a 2-member ICL bundle with 30 Satellite ports mapped to it, but that means oversubscription.
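To put a number on that, the oversubscription ratio is simply total access bandwidth divided by total ICL bandwidth. A quick sketch (the port and member counts are only illustrative):

```python
# Rough oversubscription check for an ICL bundle (illustrative numbers,
# not taken from any specific deployment).
def oversubscription_ratio(access_ports_1g, icl_members_10g):
    """Total access bandwidth divided by total ICL bandwidth."""
    access_bw = access_ports_1g * 1   # Gbps, 1G access ports
    icl_bw = icl_members_10g * 10     # Gbps, 10G ICL members
    return access_bw / icl_bw

# 30 satellite GigE ports mapped to a 2-member 10G ICL bundle:
print(oversubscription_ratio(30, 2))  # 1.5 -> 1.5:1 oversubscribed
```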
While the ASR9000v/Satellite is based on the Cisco CPT-50, you cannot convert between the two devices by loading different software images.
You can't use the 9000v as a standalone switch; it needs the ASR9000 host.
Visual differences: the 9000v starts port numbering at 0, where the CPT starts at 1. Also, the CPT has 4 different power options while the ASR9000v has only 3: AC-A, DC-A, DC-E (A for ANSI, E for ETSI).
Satellite packet format over L1 topologies looks like this; there is a simple sneaky dot1q tag added which we call the nV tag:
In L2 topologies, such as simple ring, we use dot1ad.
There is a license required to run the ASR9000v. There are 3 licenses, for 1, 5, or 20 Satellites per 9K host, named:
Licenses are not hard enforced: the system will still work even if a license is not present. However, you are urged to obtain the proper license; syslog messages will flag the violation of use.
Note when using simple ring, a host license for each satellite is needed on each host. E.g. a simple ring with three satellites requires six A9K-NVSAT1-LIC licenses.
A variety of optics are supported on the ASR9000v; they are not always the same as on the ASR9000. Reference this link for the supported optics for the ASR9000/9000v.
When using Tunable optics for the 9000v, pay attention to the following:
(*) Note: for a tunable optic on the ICL, you need to set the wavelength once via the 9000v shell when the optic is first inserted, before shipping it to its destined location.
Handling of Unsupported Optics
For the 9000v ports we do not support the 'service unsupported-transceiver' or 'transceiver permit pid all' commands.
The satellite simply flags an unsupported transceiver without disabling the port or taking any further action. As long as the pluggable is readable by the satellite, the SFP may work, but there are no additional 'hacks' (such as hidden commands) beyond what is shown as supported in the tables at the supported-optics reference link.
The following software and hardware requirements exist for the ASR9000v. Although support started in XR 4.2.1, my personal recommendation is to go with XR 4.3 (the latest), as many of the initial restrictions from the first release have been lifted:
Minimum version is XR 4.2.1
Note: If the wrong port is used for the ICL, the link will stay down on the 901. Once the correct ICL port is used and the 9K is configured, the 901 will need a reload for the link to come up and for the 901 to be recognized as a satellite.
Generally speaking, all features supported on physical GigE ports of the ASR9K are automatically supported on the GigE ports of the satellite switch: L2, L3, multicast, MPLS, BVI, OAM ... (everything that works on a normal GigE).
L1 features: applied on the satellite chassis
1) The following features are not supported on satellite ports in 5.1.1
*Need to update this*
2) When the ICL is a link bundle, there are some restrictions:
QoS can be applied on the ASR9000 host (runs on the NP where the satellite ports have their interface descriptors) or offloaded to the satellite
When you have oversubscription, that is, more 1G port capacity than the total ICL speed, there is a potential for congestion. However, there is an implicit trust model for all high-priority traffic.
Automatic packet classification rules determine whether a packet is a control packet (LACP, STP, CDP, CFM, ARP, OSPF, etc.), high-priority data (VLAN COS 5, 6, 7; IP precedence 5, 6, 7) or normal-priority data, and it is queued accordingly.
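Those classification rules can be sketched roughly as follows. This is a simplification in Python; the real rules run in the satellite's forwarding path, and the protocol list here is only the sample set named above:

```python
# Hedged sketch of the automatic classification described above; the actual
# satellite rules are not implemented in Python, and this protocol list is
# only the example set from the text.
CONTROL_PROTOCOLS = {"LACP", "STP", "CDP", "CFM", "ARP", "OSPF"}

def classify(protocol=None, vlan_cos=None, ip_prec=None):
    """Return the queueing class for a packet: control, high, or normal."""
    if protocol in CONTROL_PROTOCOLS:
        return "control"
    if (vlan_cos in (5, 6, 7)) or (ip_prec in (5, 6, 7)):
        return "high"
    return "normal"

print(classify(protocol="LACP"))  # control
print(classify(vlan_cos=6))       # high
print(classify(ip_prec=0))        # normal
```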
For the downstream direction, that is, 9000 host to Satellite, the "standard" QoS rules and shaping are sufficient to guarantee delivery of high-priority packets to the satellite ports (e.g. inject to wire, etc.).
Because the ICL link between the satellite and host may be oversubscribed by access interfaces, configuring QoS on the satellite itself is optimal for avoiding the loss of high-priority traffic due to congestion. This feature was introduced in 5.1.1.
3 steps to configuring QoS offload
INPUT access interface (CLI config) example:
class-map match-any my_class
match dscp 10
end-class-map
!
policy-map my_policy
class my_class
set precedence 1
!
end-policy-map
!
interface GigabitEthernet100/0/0/9
ipv4 address 10.1.1.1 255.255.255.0
nv
service-policy input my_policy
!
Traffic is hashed across members of this ICL LAG based on the satellite access port number. No packet payload information (MAC SA/DA or IP SA/DA) is used in the hash calculation. This ensures that QoS applied on the ASR9K for a particular satellite access port works correctly over the entire packet stream of that access port. The current hash rule is simple: port number modulo the number of active ICLs.
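That hash rule can be sketched as follows; this is only a minimal illustration of the modulo rule described above, not the actual implementation:

```python
# Sketch of the ICL bundle hash rule described above: the member link is
# chosen from the satellite access port number only, so one access port
# always maps to the same ICL member regardless of packet contents.
def icl_member(access_port, active_icl_count):
    """Select an ICL bundle member: port number modulo active members."""
    return access_port % active_icl_count

# With 2 active ICL members, ports 0..3 map as 0,1,0,1:
print([icl_member(p, 2) for p in range(4)])  # [0, 1, 0, 1]
```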
Plug-and-play installation: no local config on the satellite, no need to even telnet in!
It is recommended to use the auto-IP feature; no loopback or VRF needs to be defined. A VRF **nVSatellite will be auto-defined and does not count towards the number of configured VRFs (for licensing purposes).
Optionally, configure a secret password for satellite login. Note that the username is 'root'.
There are two options for ICL:
Static pinning: designate some ports from the satellite to use a dedicated uplink.
A bundle ICL, which provides redundancy when one uplink fails.
All interfaces mapped to an ICL bundle:
Put the ASR9000 TenGig interfaces into bundle mode 'on' (no LACP support).
Define the bundle ethernet on the ASR9000 host, and designate which ports will be mapped to the bundle:
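For illustration, the two mapping styles could look roughly like this (a sketch only; the interface numbers, bundle ID, and satellite ID are hypothetical):

```
! Static pinning: one physical ICL carries satellite ports 0-19
interface TenGigE0/2/0/0
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-19
  !
!
! Bundle ICL: members in mode 'on' (no LACP), carrying ports 0-43
interface TenGigE0/2/0/1
 bundle id 100 mode on
!
interface TenGigE0/2/0/2
 bundle id 100 mode on
!
interface Bundle-Ether100
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-43
```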
Because of the order and batching in which configuration gets applied in XR, there are some things you need to know when negating certain configuration while adding other configuration in the same commit.
Examples of this are:
In such cases failures are expected; generally speaking, the failures are deterministic, and workarounds are available (re-apply the configuration in two commit batches).
The recommendation is to commit ICL configuration changes in separate commits from Satellite-Ether configuration changes.
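As an illustration of that recommendation (a hypothetical sequence; the interface numbers and the specific config lines are made up), the two kinds of changes would be committed separately rather than in one batch:

```
! Commit 1: Satellite-Ether (access port) configuration changes only
configure
 interface GigabitEthernet100/0/0/0
  no ipv4 address
 commit
! Commit 2: ICL mapping changes in their own commit batch
 interface TenGigE0/2/0/0
  nv
   satellite-fabric-link satellite 100
    remote-ports GigabitEthernet 0/0/0-19
 commit
```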
Starting in 5.1.1 many new features were added to expand upon the basic single host hub-and-spoke model. These features take more configuration than the base satellite configuration and will be discussed below.
Starting in 5.1.1 the ability for a satellite (hub-and-spoke) or a ring of satellites (simple ring) to be dual-homed to two hosts was added. (nV Edge acts as one logical router)
With this configuration one ASR9K host is the active and the other is standby. Data and control traffic from the satellite will flow to the active host, but both hosts will send and receive management traffic via the SDAC protocol. This is used to determine connectivity, detect failures, sync the configuration, etc.
The two hosts communicate management information via ICCP with a modified version of SDAC called ORBIT.
Supported Topologies:
Hub-and-spoke dual hosts
9000v with 10G ICL or bundle ICL
901 with 1G ICL
9000v (10G) or 901 (1G) using L2 fabric sub-interfaces
Satellites may be partitioned
Simple ring dual hosts
9000v with 10G ICL
901 with 1G ICL
Satellites may not be partitioned
Note: Partitioning is when you carve out certain access ports to be used by certain ICL interfaces
Current limitations:
Must be two single chassis, no clusters
Load balancing is active/standby per satellite; per-access-port load balancing is planned
No user configuration sync between hosts
Configuration Differences:
The most notable changes coming from a simple hub-and-spoke design are ICCP and adding the satellite serial number.
Example
Router 1
redundancy
 iccp
  group 1
   member
    neighbor 172.18.0.2
   !
   nv satellite
    system-mac <mac> (optional)
   !
  !
 !
!
nv
 satellite 100
  type asr9000v
  ipv4 address 10.0.0.100
  redundancy
   host-priority <priority> (optional)
  !
  serial-number <satellite serial>
 !
!
vrf nv_mgmt
!
interface Loopback10
 vrf nv_mgmt
 ipv4 address 10.0.0.1
!
interface Loopback1000
 ipv4 address 172.18.0.1 255.255.255.255
!
interface GigabitEthernet0/1/0/4
 ipv4 address 192.168.0.1 255.255.255.0
!
interface TenGigE0/0/0/0
 ipv4 point-to-point
 ipv4 unnumbered Loopback10
 nv
  satellite-fabric-link [network satellite <> | satellite <>]
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
 !
!
mpls ldp
 router-id 172.18.0.1
 discovery targeted-hello accept
 neighbor 172.18.0.2
 !
!
router static
 address-family ipv4 unicast
  172.18.0.2/32 192.168.0.2
 !
!
Starting in 5.1.1 we have the ability to support more than just simple hub-and-spoke. The ring topology allows for satellite chaining, cascading, and in general a more advanced satellite network.
Requirements and Limitations:
Configuration:
This is essentially the same as the dual-host setup, but the 'network' option must be used when entering 'satellite-fabric-link'
This is treated as special ring and works the same way as simple ring.
The biggest difference is that in 5.1.1 cascading supports single host while simple ring does not.
Starting in 5.1.1 we have the ability to extend the ICL across an EVC. Normally an ICL is an L1 connection; this increases the flexibility of satellite by allowing greater distances between the ASR9K host and the satellite device.
Requirements and limitations:
Configuration:
On Active-Host:
interface TenGigE0/1/0/23.200
 encapsulation dot1q 200
 nv
  satellite-fabric-link satellite 200
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
!
On Standby-Host:
interface TenGigE0/1/1/0.200
 encapsulation dot1q 220
 nv
  satellite-fabric-link satellite 200
   redundancy
    iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
  !
!
Note: L2 cloud configuration not shown
'show nv satellite status'
Checking Version: The version of the software running on the satellite is checked for compatibility with the version of IOS-XR running on the host
Configured Serial Number: (If configured) the serial number configured for the satellite, checked against that presented by the satellite during control protocol authentication
Configured Satellite Links: One entry for each of the configured satellite-fabric-links, headed by the interface name. The following information is present for each configured link:
Discovered Satellite Fabric Links: This section is only present for redundant satellite-fabric-links. This lists the interfaces that are members of the configured link, and the per-link discovery state.
Conflict: If the configured link is conflicted, the satellite discovered over the link is presenting data that contradicts that found over a different satellite-fabric-link.
'show nv satellite protocol discovery'
Host IPv4 Address: The IPv4 address used by the host to communicate with this satellite. It should match the IPv4 address on all the satellite-fabric-links
For Bundle-Ether satellite-fabric-links, there are then 0 or more 'Discovered links' entries; for physical satellite-fabric-links, the same fields are present but just inline.
'show nv satellite protocol control'
Authenticating: The TCP session has been established, and the control protocol is checking the authentication information provided by the Satellite
Connected: The SDACP control protocol session to the satellite has been successfully brought up, and the feature channels can now be opened.
For each channel, the following fields are present:
Open (In Resync - Awaiting Client Resync End): The Feature Channel Owner (FCO) on the host has not finished sending data to the FCO on the Satellite. If this is the state, triage should typically continue on the host; the owner of the Feature Channel should be contacted.
Open (In Resync - Awaiting Satellite Resync End): The FCO on the host is awaiting information from the FCO on the Satellite. If this is the state, triage should typically continue on the satellite.
Notes:
icpe_gco[1148]: %PKT_INFRA-ICPE_GCO-6-TRANSFER_DONE : Image transfer completed on Satellite 101
A few issues can cause this:
Conflict Messages
Examples:
BNG access over satellite is only qualified over bundle access and isn’t supported over bundle ICLs.
BNG access over ASR9k host and NCS5k satellite specifically is in the process of official qualification in 6.1.x. Please check with PM for exact qualified release.
Access bundles across satellites in an nV dual-head solution are generally not recommended. The emphasis is not to bundle services across satellites in a dual-head system: if the bundle members align to different hosts, the solution breaks without an explicit redundant path. An MC-LAG over satellite access is a better solution there.
Bundle access over a bundle fabric/ICL requires 5.3.2 and above on the ASR9K. For the NCS5k satellite, bundle ICL (including bundle over bundle) is supported from 6.0.1, and nV dual-head topologies are planned to be supported only from 6.1.1
MC-LAG over satellite access might be more convergence friendly and feature rich than nV dual head for BNG access from previous case studies. For non BNG access, nV dual head and MC-LAG are both possible options with any combinations of physical or bundle access and fabric.
In an MC-LAG with satellite access, the topology is just a regular MC-LAG system with the hosts syncing over ICCP but with satellite access as well. Note that the individual satellites aren’t dual homed/hosted here so there is no dual-host feature to sync over ICCP beyond just MC-LAG from CE.
As a deployment recommendation, unless ICL links (between satellite and host) are more prone to failure than the access links, MC-LAG might be preferable over the nV dual-head solution. However, if ICL links have a higher failure probability, note that a member link going down only reduces bandwidth in bundle ICL cases, and MC-LAG may not switch over unless the whole link or the access goes down.
Xander Thuijs CCIE#6775
Principal Engineer, ASR9000
Sam Milstead
Customer Support Engineer, IOS XR
Yeah we need to work on that documentation, but in the interim, let me try and answer those questions.
Because this is ICCP based, it functions the same way as MCLAG.
interface TenGigE0/1/0/2
 nv
  satellite-fabric-link network
   satellite 100
    remote-ports GigabitEthernet 0/0/0-43
   !
   satellite 200
    remote-ports GigabitEthernet 0/0/0-43
regards
xander
Hi, Xander:
On the note you write (HW/SW reqs):
Can you explain this a little further, please? Will I be able to offer 8 queues to a port-based customer when using a TR host card?
What does TM queue mean?
If time allows, can you please offer any advice/consideration we should take into account when thinking of provisioning customers on an 9000v connected to a TR host-card? Obviously the cost difference between TR and SE is considerable. No chance we get an all copper LC for the 9000?
:D
Thanks,
c.
I saw that exact line in a presentation regarding nV. TR is supposed to give 8 queues per 9000v access port. I started testing that yesterday and I have different port-shaper speeds on 17 different ports right now and it seems OK. I tested a few of the ports and they seem to be working. Going to try child policies next, but it looks like they apply to the interfaces. Need to push some real traffic across though. 5.1.1 on 9000vs.
Hi Xander,
I am looking at the ICL LAG picture, specifically A2, the LAG between two 9000vs. I am trying that now on a set of 9000vs: one 9000v has a bundle ICL and the other a single non-bundled ICL. I am not able to bundle ports across 9000vs like the diagram suggests. Is there a restriction I am missing? I can bundle both ports on the non-bundled 9000v.
We still have the restriction that we cannot do bundle-on-bundle (ICL and access bundles) at the same time. This and other exceptions are listed in section 6.
You can still partition the satellite to where ports 0-20 are associated with a bundle ICL and ports 21-40 are associated with a non-bundle ICL. If you do this then you cannot have an access bundle on 0-20 but you can have an access bundle on 21-40.
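That partitioning could be expressed roughly like this (a sketch only; the interface numbers, bundle ID, and satellite ID are made up):

```
! Ports 0-20 mapped to a bundle ICL (no access bundles allowed on these)
interface Bundle-Ether100
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/0-20
  !
!
! Ports 21-40 mapped to a physical, non-bundle ICL (access bundles OK)
interface TenGigE0/2/0/1
 nv
  satellite-fabric-link satellite 100
   remote-ports GigabitEthernet 0/0/21-40
```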
Thanks,
Sam
So the A2 configuration on the picture above is not possible yet?
Correct.
A2 shows a physical ICL to satellite 1 and only a bundle ICL to satellite 2. LAG on LAG is not yet supported.
Thanks,
Sam
Hello,
Since with 5.1.1 there is "no user configuration sync between hosts" in the dual-host topology, when can we expect that feature?
Kind regards,
Ivan
Hi Ivan,
This was at one time targeted for 5.3.1 but it is now off the roadmap due to other more pressing features. Your account team can work with marketing to tell you what is coming and prioritize what features you and the rest of the satellite community need/want to see.
Thanks,
Sam
Hello
We are considering evaluating the nV satellite units in our network (already using the ASR9000). Can you help me better understand the dual-homed topology for the satellites?
The scenario we are interested in resembles the dual-host one: a satellite connected to two ASR9000 hosts for redundancy. From what I understand, the two ASR9000s must be configured in a cluster setup (nV edge, single control plane) for this to work?
Is there any simple dual-home setup for satellite units that does not involve clustering? (More specifically, is a common control plane for the two ASR9Ks a requirement for the dual-homed setup?) E.g. the satellite directly connecting to both hosts but having only one uplink in the active state?
Thanks Victor
Hi Victor,
You can go 2 ways about that. Either you dual-home the satellite to each rack member of the cluster; this is active/active, whereby all fabric links are used. Or you can dual-home to 2 separate devices not part of a cluster, and this will give you active/standby operation, handled via ICCP.
So a common control plane is not necessary for the dual home, but it will affect the loadbalancing operation on the fabric uplinks (ICL).
cheers
xander
Thanks for the prompt reply Xander
Now that I have taken a closer look at your original post, I see you had already mentioned in paragraph 13, Current Limitations: "Must be two single chassis, no clusters".
One last clarification, again for the scenario with 2 separate devices: is the ICCP link used just to synchronize management information, or is it also used to synchronize state for a limited set of control protocols (e.g. ARP entries, MAC addresses received through the satellite unit, PPPoE session state; I realize especially the last is a long shot)?
Thanks again Victor
No prob Victor. A single satellite can be dual-homed to racks of a cluster (fully stateful) or homed to 2 separate nodes, but those need to be linked via ICCP, which is only used for mgmt. PPPoE state is not synced, so if there is a fabric link failover those sessions need to restart.
Good news is that we are getting functionality called GEO redundancy, which will leverage ICCP and then some to sync the state of the sessions between the 2 separate nodes. This is 5.2 functionality.
regards
xander
Thank you Karwan and also for the detailed description which makes it easier to give an assessment of the situation!
We've had some satellite operational issues in 5.1.1 that have been addressed in 5.1.2. Based on the info you have here I am suspecting CSCuo11506 primarily.
XR 5.1.2 is out today; 5.1.3 is coming in August. If it is easy for you to upgrade, and this issue is a biggy for you, then 5.1.2 is the best play right now, and potentially consider 5.1.3 also at some point.
regards
xander
Hi, ouch, I didn't expect that! Can you share the config of the loop101 you are using? I want to make sure it is in the same subnet and that the satellite is pingable. It likely is, because the control channel says stable.
Also try to see if the commit goes through when you configure 5 or 10 ports on this fab link.
If still no dice, then I think we need to investigate this deeper, and for that a TAC case may be best, as we likely need to pull some details from the box, like debugs and techs etc.
regards!
xander