Tech-Talk: NCS 6008 Software and Hardware Introduction with Demo

I am Rahul Rammanohar (CCIE R&S and SP, #13015), a Technical Leader with Cisco Technical Services. My colleague Hitesh Kumar (CCIE SP #38757), who works in Cisco Sales as a Systems Engineer, and I have created this blog and the tech-talk video, which provide a software and hardware introduction to the NCS 6008, followed by a demo.


Watch the Tech-Talk Video


The Internet of Everything is an explosion of connectivity among people, processes, data, and things on the network. This explosion of traffic increases the need for more and more bandwidth in our core devices too. Last year Cisco introduced the Network Convergence System (NCS), a network family designed to serve as the foundation of a massively scalable, smarter, and more adaptable Internet. There are three platforms in this family: the NCS 2000, a next-generation ROADM; the NCS 4000, a converged optical service platform; and the NCS 6000, the next-generation IP transport router and one of the industry's most powerful routers. The NCS 6008 includes the industry's first 1 Tbps line card, with the capability of scaling up to 5 Tbps per slot in the future. The NCS 6008 runs virtualized XR, where XR runs as a VM, and it can provide nonstop operations during software image upgrades or module changes.


Software Introduction – Virtualized XR

Cisco has been in the business of developing routers for more than 20 years. The initial IOS-based architectures used a monolithic OS design, meaning the OS runs as a single image. In such a system, processes share the same memory space and there is no memory protection between them, so an IOS bug can potentially corrupt data used by other processes and bring the entire router down.
Next, Cisco introduced IOS XR, which is no longer a single image but a modular system divided into software packages, so you can select which features run on your router. It has a QNX microkernel-based architecture in which software processes run in their own protected memory address spaces. A failure in one application is contained and does not bring the entire system down, and most processes can be restarted at will without affecting the rest of the system. Additional packages can be installed on a running router using PIE files, and bug fixes can be applied to a running router via PIE files called SMUs.
We have now brought virtualization into the operating system. The proven XR operating system runs in a virtualized environment in which many software entities can be maintained individually, which increases stability and scalability. Virtualized XR uses a 64-bit Linux kernel as the host OS, which supports large amounts of addressable memory and uses the virtualization mode of modern CPUs. On top of the host OS sits a KVM-based hypervisor that allows multiple operating systems to run together in separate virtual machine contexts. The router system administration from IOS XR now runs on Linux in a separate VM, called the admin VM. The proven XR applications, with the same commands and MIBs, run in another VM called the XR VM.
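You can see this split on a live system: from the admin VM, the `show vm` command lists the VMs hosted on a node (the same command is captured in the demo section at the end of this post). The prompt shown here is illustrative:

```
sysadmin-vm:0_RP0# show vm location 0/rp0
```

You should see entries for both the admin VM and the XR VM on that node, along with their status.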
Virtualized XR is also key to providing ISSU with zero packet loss and zero topology loss. While the router is running one version of code, another VM can run the new version, and we can then perform a switchover from one VM to the other. The line card hardware resources can be carved up so that the new version of the software programs one set of hardware resources while the old version actively forwards traffic. This allows ISSU to proceed while the platform continues to forward traffic without dropping any control plane or data plane packets.


NCS 6008 Chassis, Route Processor and Line Cards

The NCS 6008 is an 8-slot, 48 RU chassis with wider slots that help with cooling. The box has been designed to meet the demands of future expansion, as it is very difficult for a service provider to replace a device in its network once it is in place. With multi-chassis, the system is capable of going beyond 1 petabit per system; multi-chassis will be able to scale up to 128 line cards per system, which will be supported in the future.
The system supports 24 2.1 kW DC power modules in a 12+12 redundant configuration, or 18 3 kW AC power modules. A fully loaded chassis with its first-generation line cards and route processors can operate on 14 kW. The system has been designed to support future generations of line cards and route processors, so you can add more power to the system as required; power can be upgraded in service without shutting the system down. Each of the power modules connects to a power control module, and the PCMs in turn connect to the RPs. The minimal configuration is 4+4 power modules if you want redundancy in the power modules.
The craft panel is the only non-redundant component of the chassis, as it is not critical for chassis operation; it provides status display for some of the components.
The chassis has two fan trays, each with two fans for the fabric/RP card cage and four fans for the line card cage. Fan speed is determined using temperature sensors on every line card. The system can run on a single fan tray, but it will be very loud: if a fan tray fails or is removed from the system, the remaining tray is commanded to full speed. Air flow is from front to back. The system has six power trays. Power trays are also FRUs, but the system must be shut down to replace one.
There are eight slots for line cards on the front and eight slots on the rear for the RPs and fabric cards. The system supports six fabric cards with one fabric card as redundancy, i.e. in 5+1 redundancy mode; the system can operate on five fabric cards without any issue.
The RP features a high-performance 8-core Intel Sandy Bridge processor with 48 GB of fast memory. It has two solid-state drives for fast software installation and fast logging. As seen earlier, the chassis has dedicated slots for the RPs, and two RPs can be placed in the system. Each route processor has dedicated point-to-point bus connections to the craft panel, power shelf, fan trays, each line card, each fabric card, and the peer RP slot.
On the RP we have multiple ports. Starting from the left of the image:
  1. Ports to connect this chassis to other chassis to create a multi-chassis system.
  2. Three console ports: port 0 is the console to the admin VM, port 1 is the console to the XR VM, and port 2 is unused.
  3. A USB port, which can be used for file storage and for loading software during installation.
  4. A copper management port, MGMT0, accessible from the admin VM, and an optical management port, MGMT1, accessible from the XR VM.
  5. Ports for system and network timing.
  6. A software-controlled alarm port.
The line cards of the NCS 6008 are powered by the X1 and X1e versions of the nPower ASIC. The ASIC has 4 billion transistors on a single chip and is the world's first true 400 Gbps integrated forwarding ASIC; the X1e adds enhanced scale for QoS and interfaces. Multiple X1 ASICs power the 10-port 100 Gig line cards, and each card supports up to 700 million packets per second. On the top right we have the 10-port 100 Gig CPAK-optics-based line card. This card comes in two flavours, label switch router (LSR) and multi-service (MS); the difference between the two is the number of queues and the amount of memory, with the LSR card optimized for a lean core. The 10-port 100 Gig card also comes with CXP optics, again in LSR and MS flavours. The 60-port SFP+ 10 Gig line card uses the nPower X1e ASIC; it also comes in LSR and MS flavours, and the card supports up to 480 million packets per second.


NCS 6008 Breakout Option

One of the major advantages of the 100 Gig NCS line cards is the breakout option. A 100 Gig interface can be configured through a single configuration line to operate as 10, 40, or 100 Gig interfaces, and on the same line card you can have a mix of 10, 40, and 100 Gig interfaces; in some cases you don't have to change the optics either. Earlier products had cards for every Ethernet speed, and customers had to certify each type of card; now it's a lot easier, as they have to certify just one card. A 100 Gig interface is split into 10 Gig interfaces by separating the signal onto individual fiber pairs. These fiber pairs can be connected to far-end devices via two methods.
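As an illustration of how compact this is, the fragment below sketches breaking one 100 Gig port out into ten 10 Gig interfaces. The exact command syntax varies by platform and release, so treat this as an assumption patterned on IOS XR breakout configuration and consult the NCS 6000 configuration guide for the authoritative form:

```
RP/0/RP0/CPU0:router(config)# hw-module location 0/0/CPU0 port 3 breakout 10xTenGigE
RP/0/RP0/CPU0:router(config)# commit
```

After the breakout, the member interfaces pick up a fifth index identifying the breakout port, e.g. TenGigE0/0/0/3/0 through TenGigE0/0/0/3/9 — the same five-part interface naming you will see in the demo outputs (e.g. TenGigE0/0/0/4/0).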

The first method is by using a special cable that has an MPO24 connector at one end and 12 individual LC connectors on the other end. The cable has 24 individual fiber strands, which are connected to the MPO24 on one end and split in pairs on the LC connector. Two of the 12 LC connectors are not used, while the remaining ten can be used as individual 10G connections.

In the case of short reach, the multimode version of this cable connects to the CXP SR10 or the CPAK SR10; in the case of long reach, the single-mode cable connects to the CPAK 10x10G LR transceiver. The LC ends of the cables can connect to SFP+ transceivers.

The second method is by using a breakout patch panel. This device is 3 RU in size and is passive. The patch panel is bought separately and comes in short reach and long reach versions. The CPAK SR10, CXP SR10, and CPAK 10x10G LR support this method. These transceivers are connected via special cables that have an MPO24 connector on both ends. These cables connect to the breakout patch panel, where the MPO24 connectors are internally mapped to 10 LC connectors, each of which can be connected via an LC-to-LC cable to an SFP+ transceiver.

NCS 6008 Demo

We capture the following outputs in the demo, which is at the end of the video.

From the XR VM

show version

show install active

show platform vm

show interface summary

show ip interface brief

show uidb index tenGigE 0/0/0/0/0 location 0/0/CPU0

show uidb index location 0/0/CPU0


From Admin VM

show platform

show inventory

show vm location 0/rp0

show runn sdr

show environment power

show environment fan

show environment temperature


For Packet Capture from the XR VM

show controller TenGigE0/0/0/4/0 stats

clear controller TenGigE0/0/0/4/0 stats

show controllers plim asic statistics interface TengigE0/0/0/4/0

clear controller plim statistics location 0/0/CPU0

show interface bundle-ether100

show interface bundle-ether100 acc

show controllers pse statistics summary instance 2 location 0/0/CPU0

debug icmp ipv4 location 0/0/CPU0

show controllers fia statistics instance 2 location 0/0/CPU0

clear controller fia statistics instance 2 location 0/0/CPU0


We hope this blog and tech-talk video on the NCS 6008 software and hardware introduction was useful to you, and we look forward to your feedback. You can send in your feedback using the comments section below.

Thank you!
