Introduction
As a comprehensive ASR9000 nV Edge (Cluster) deployment guide already exists, this document serves as an addendum. It does not repeat the material in the original deployment guide; it covers only the steps where the ASR9001 (a.k.a. Iron Man) chassis differs. The original nV Edge deployment guide is here:
1. Glossary
- nV - Network Virtualization
- nV Edge (a.k.a. Cluster) - Cisco's virtual chassis solution in which a pair of ASR9K-class routers forms a virtual chassis that acts as a single unit and provides Active/Active AC failover between the primary and secondary units.
- Control Plane – the hardware and software infrastructure that deals with messaging / message passing across processes on the same or different nodes (route processors)
- EOBC - Ethernet Out of Band Channel - the ports used to establish the control plane extension between chassis
- Data Plane – the hardware and software infrastructure that deals with forwarding, generating and terminating data packets.
- DSC – Designated Shelf Controller (the Primary RSP for the nV edge system)
- Backup-DSC – Backup Designated Shelf Controller
- Iron Man - Nickname for the ASR9001
2. Converting Single ASR9001 Units to nV Edge
As previously stated, this is an addendum to the original ASR9000 NV Edge Deployment guide so the steps there are still applicable (i.e., the software configuration and bring-up steps). However, when wiring up the control plane between chassis, the following is specific to ASR9001 units:
2.1 ASR9001 EOBC (Control Plane) Wiring
The standard ASR9000 design allows for up to four processors which are used in classic nV Edge (cluster) solutions. The Iron Man chassis, however, has a single fixed processing design and, thus, differs in connection requirements.
- Ports to Use - The EOBC ports for nV Clustering are labeled "Cluster 0" and "Cluster 1"
- Port Types - These ports are 1Gig SFP ports (not SFP+)
- How to Connect - These are directly connected as follows:
- Cluster 0 port of Rack 0 is directly wired to Cluster 0 port of Rack 1
- Cluster 1 port of Rack 0 is directly wired to Cluster 1 port of Rack 1
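Once both Cluster ports are cross-wired as described above and both racks are up, the control plane pairing can be checked with the admin-mode DSC command that appears later in this document. As a sketch (hostname is illustrative):

    RP/0/RSP0/CPU0:im_cluster#admin show dsc

Both 0/RSP0/CPU0 and 1/RSP0/CPU0 should be listed, one as PRIMARY-DSC and the other as BACKUP-DSC. If one rack is missing, recheck the Cluster 0-to-Cluster 0 and Cluster 1-to-Cluster 1 cabling.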

2.2 Supported Hardware and Caveats
- EOBC Ports - SFP only (1 Gbit/s). 10 Gbit/s ports are not supported in the EOBC role as of this writing.
- Supported Modular Port Adapters (MPA):
- A9K-MPA-20X1GE
- A9K-MPA-2X10GE or A9K-MPA-4X10GE
- Cluster Chassis Restrictions - Only chassis of the same type can be clustered together. An Iron Man chassis will therefore not work properly if paired with a classic ASR9000 chassis.
2.3 ASR9001 Inter Rack Link (Data Plane) Wiring
The Inter Rack Links (or IRL) are the data plane extension between nV Edge cluster systems.
- Ports Used - Any available 10 Gbit/s links on the chassis (minimum of 2)
- This includes any combination of 10 Gbit/s ports either on modular port adapters or any of the four SFP+ ports built into the front panel of the chassis
- Other than these Iron Man clarifications, the original ASR9K nV Edge Deployment Guide covers the remaining instructions/tasks. Please refer to the original document for the remaining IRL instructions.
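As a hedged sketch of what the IRL configuration looks like, the 10 Gbit/s links are declared in admin-config mode using the nv edge data interface command from the original deployment guide. The interface names below are placeholders; substitute the ports you actually cabled between the racks:

    (admin-config)# nv edge data interface TenGigE0/0/2/0
    (admin-config)# nv edge data interface TenGigE1/0/2/0
    (admin-config)# commit

Remember the minimum of two IRL links noted above, with one interface from each rack's side of every link pair.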
3. ASR9001 nV Edge Redundancy Model
Non-ASR9001 clusters will typically have a total of four processors (two in each chassis). The Iron Man, however, has a single built-in processor per chassis.
What do these differences mean for Iron Man?
- ASR9001 will only have Active RPs (and will not have Standby RP)
- One in Rack 0 and the other in Rack 1
- The lack of standby redundancy can lead to slightly longer traffic outages.
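To confirm this redundancy model on a live system, the standard IOS XR redundancy and DSC commands can be used. This is a sketch; exact output wording varies by release:

    RP/0/RSP0/CPU0:im_cluster# show redundancy summary
    RP/0/RSP0/CPU0:im_cluster# admin show dsc

On an Iron Man cluster, expect no local Standby RP to be reported, and expect admin show dsc to list exactly two ACTIVE nodes: one PRIMARY-DSC and one BACKUP-DSC.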
Processor Roles Explained (for all chassis types)
- Active - The Active RP in a single chassis
- Standby - The Standby RP in a single chassis (not applicable to Iron Man)
- Primary DSC - When clustered together, this is the Active RP in the chassis that is in control of the entire cluster system
- Backup DSC - When clustered together, this is the Active RP in the chassis that is NOT in charge of the cluster (but waiting to take over if anything goes wrong)
ASR9001 (Iron Man) Output
RP/0/RSP0/CPU0:im_cluster#admin show dsc
---------------------------------------------------------
         Node  (      Seq)    Role       Serial        State
---------------------------------------------------------
  0/RSP0/CPU0  (        0)  ACTIVE  FOC1710N0YE  PRIMARY-DSC
  1/RSP0/CPU0  (  1166830)  ACTIVE  FOC1710N0YA   BACKUP-DSC
RP/0/RSP0/CPU0:im_cluster#
Non-ASR9001 (Classic ASR9000) Output
RP/0/RSP0/CPU0:Valkyrie3#admin show dsc
---------------------------------------------------------
         Node  (      Seq)     Role       Serial        State
---------------------------------------------------------
  0/RSP0/CPU0  (        0)   ACTIVE  FOX1228GOVJ  PRIMARY-DSC
  0/RSP1/CPU0  (     9860)  STANDBY  FOX1228GOVJ      NON-DSC
  1/RSP0/CPU0  (     9106)   ACTIVE  FOX1438GTQR   BACKUP-DSC
  1/RSP1/CPU0  (     9065)  STANDBY  FOX1438GTQR      NON-DSC
RP/0/RSP0/CPU0:Valkyrie3#
- Other than these Iron Man clarifications, the original ASR9K nV Edge Deployment Guide covers the remaining instructions/tasks. Please refer to the original document for the remaining redundancy instructions.
4. Reverting the ASR9001 nV Edge Cluster to Single Chassis System
(Note: console access is required to both racks)
- Prepare the system to stop at the ROMMON prompt after the reload performed in a later step
- This can be done by sending a break signal to the RP after it reloads
- But the preferable method is to configure it in admin mode as follows:
- (admin)#config-register 0x0
- Remove all "nv edge" commands in admin-config mode
- Remove cables (in the following order)
- Shut down all IRL links
- (Optional) Remove the working Exec-level configuration using:
- Configure the following in admin-config mode
- (admin-config)#nv edge control control-link disable
- Reload all RPs in the system
- (admin)#reload location all
- If you configured the config-register to 0x0 in the first step, you should be at the ROMMON prompt
- If not, send the break signal on each console in order to reach the ROMMON prompt
- From the ROMMON prompt type the following on each chassis:
- unset CLUSTER_RACK_ID
- sync
- Unplug all EOBC and IRL cables running between the two chassis
- Reset the configuration registers to 0x102
- Reload the systems from ROMMON by typing:
- The systems will now come up as individual systems
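Putting the steps above together, a minimal revert sequence looks like the following sketch. The ROMMON confreg and reset commands are assumptions based on standard Cisco ROMMON behavior; verify them against your ROMMON version, and note the placeholder for removing the rack-specific "nv edge" lines:

    (admin)# config-register 0x0
    (admin-config)# no nv edge ...                    ! remove all "nv edge" commands
    (admin-config)# nv edge control control-link disable
    (admin-config)# commit
    (admin)# reload location all
    rommon 1 > unset CLUSTER_RACK_ID                  ! on each chassis
    rommon 2 > sync
    ! unplug all EOBC and IRL cables between the chassis
    rommon 3 > confreg 0x102                          ! assumed ROMMON command
    rommon 4 > reset                                  ! assumed ROMMON command

Perform the ROMMON steps on both racks; each then boots as an independent single-chassis system.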
5. ASR9001 Software Installations
Iron Man software installations do NOT vary from any other ASR9K system. Please visit the general information page here for installation instructions (Note: may require cisco.com login):
-= End of Document =-