With Vivek Baveja and Shawn Wargo
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Catalyst 6800 VSS Quad-Sup SSO and Instant Access with Cisco experts Vivek Baveja and Shawn Wargo. During the live event, Cisco subject matter expert Vivek Baveja provided an overview of the key components, design details, and operational benefits of using a Cisco Catalyst 6800 Virtual Switching System (VSS) with quad-sup SSO, along with the new campus FEX technology Cisco Catalyst Instant Access (CIA).
This is a continuation of the live webcast.
Vivek Baveja Vivek Baveja brings more than 17 years of networking technology and management experience across enterprise and service provider verticals. He is currently a technical marketing engineer with Cisco Catalyst Series switching products with a focus on backbone, core, and distribution across Layer 2, Layer 3, Multiprotocol Label Switching (MPLS), DCN, telecommunications, and newer technologies across hardware and software, enabling the enterprise for the next generation of networks. He holds a bachelor of electronics engineering, CCIE, and management degree from the Wharton School of Business.
Shawn Wargo has been working at Cisco for more than 14 years (since 1999). Shawn has been a Cisco Catalyst 6500/6800 technical marketing engineer (TME) since 2010, with special focus on hardware architecture and design, IPv4/IPv6 routing and switching, and IP multicast technologies, among other things. Previously, Shawn worked within BU Engineering, with the Customer Operations Group (systems testing and design verification) and also with BU Development Testing (Cisco Catalyst 6500 and VSS), and started out in Customer Advocacy (CA) in the Cisco TAC (core LAN and LAN switching).
Remember to use the rating system to let Vivek and Shawn know if you have received an adequate response.
Vivek and Shawn might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation in Network Infrastructure community, sub-community, LAN, Routing and Switching discussion forum shortly after the event. This event lasts through February 14, 2014. Visit this forum often to view responses to your questions and the questions of other community members.
In very simple terms, VSS and vPC both achieve the same goal of one logical switch; however, the way they achieve this is different. Let's break it down into the control plane and data plane implementations of the two.

VSS has a single control plane across the two C6k chassis; the control plane of the second chassis sits in standby SSO mode over the VSL link. This means you configure and manage a single logical switch, with a single point of management and operations.

In the Nexus vPC implementation, an active control plane is maintained on both chassis, and the two control planes stay in sync with each other over a vPC peer link. From a configuration and management point of view, both switches need to be configured and managed.

With regard to the data plane, in both the VSS and vPC implementations the data plane is active on both chassis.
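To make the single-control-plane model concrete, here is a minimal sketch of a VSS domain configuration on the first chassis. The domain number, switch numbers, priority, and interface names below are illustrative assumptions, not values from this discussion; always check the configuration guide for your software release:

```
! Illustrative VSS configuration on chassis 1 (all numbers hypothetical)
switch virtual domain 100
 switch 1
 switch 1 priority 110
!
! The VSL is carried over a dedicated port-channel
interface Port-channel10
 switchport
 switch virtual link 1
!
interface TenGigabitEthernet1/5
 channel-group 10 mode on
!
! Convert the chassis to virtual mode (this reloads the switch)
switch convert mode virtual
```

The second chassis is configured the same way with `switch 2` and its own VSL port-channel; after conversion, both chassis are managed as one logical switch.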
Hi Vivek, thanks for your reply. Just to be 100% sure, you just confirmed that vPC is NOT supported on Cat6800, right? Following that thought, will Cat 6800 be able to understand third party devices that speak LACP?
I appreciate this is not directly related to quad-sup, but looking at the slides I have a few questions:
1) The new 10Gbps linecards: the 32 x 10Gbps linecard gets 160Gbps per slot in a 6807 chassis. Is this with the current Sup2T?
2) Whether it is or not, why does the 16 x 10Gbps linecard only get 80Gbps per slot in the 6807 chassis? It's a new linecard, so why not give it the connectors for 160Gbps and have no oversubscription?
3) The 6807 can apparently scale to 880Gbps per slot. How is this going to be achieved, i.e. will there be a new supervisor to do this in the future?
4) As far as I know, with the 6500 the chassis and supervisor determine how much capacity each slot gets; it is then up to the connectors on the linecards as to how much of that capacity can be used. Is this going to be the same with the new 6800s, i.e. you buy a 32 x 10Gbps linecard and you can use 160Gbps per slot? If the slot capacity is then upgraded (because of a new supervisor), is there no way to upgrade the linecard to take advantage of the increased slot capacity, or do you then have to purchase new linecards?
Many thanks for the answers. A quick follow-up question:

So with a single Sup2T in the 6807 chassis, 220Gbps of per-slot capacity is available now. Is the limitation on linecards in terms of slot capacity then down to the limitations of ASIC design as it currently stands?

Again, with the 6500 the connections to the backplane are made via ASICs, i.e. port groups are tied to a specific ASIC, which in turn has a certain amount of throughput to the backplane (switch fabric). Is there a limitation on how many of these ASICs you can actually get onto a card, or is it more to do with cost, i.e. it would just become too expensive to have more ASICs per port?

Apologies for continuing to ask about this, but I've always wondered about these sorts of things when using the 6500, so I thought I'd take the chance to ask.
Hi Jon, good questions!
To your first question: yes, the Sup2T will operate at 220Gbps on the C6807 with the current release, so the answer is an absolute yes, it works. Now, to really take full advantage of the higher throughput made available by the Sup2T on the C6807, you need linecards whose backplane speed goes above the current 80Gbps, and those are coming, as I shared earlier.
Your understanding is correct; throughput is really a function of the hardware (ASICs) and clock speed. You cannot put more bits on the wire than it is wired and clocked to carry. Here is a way to think about it that will help complete the picture. From a hardware perspective, a couple of things determine the throughput on a modular chassis like the C6k:

A) The fabric backplane speed for inter-card (card 1 to card 2) traffic.

B) The port ASICs at which the front-panel ports (port groups) terminate, and the switching ASICs, which do the switching for a set of port ASICs and then, based on the switching decision (CAM table lookup, FIB, etc.), put the packet onto the backplane or back out another local port on the card.

Together these determine the total throughput for a linecard, and their sum total for a chassis. The total number of ASICs that can be put on a linecard is a function of market requirements, cost, and performance (oversubscription); it is a balance of the three. On the newer linecards you will see higher fabric ASIC throughput to the backplane (as enabled by the Sup2T) and higher-capacity switching ASICs able to do lookups and switch more packets per second (pps), giving you higher overall throughput. Hope this helps. Cheers, Vivek
For some reason, I have not been receiving notifications about new posts...
Anyway... your answers are spot on. I would like to add some more background.
First, regarding 220G/slot with Sup2T in the C6807-XL: the simplest explanation is that the C6807-XL provides 4 fabric channels to each supervisor (8 total), and the Sup2T is capable of running at speeds up to 55Gbps across 1 to 4 channels.

Thus, with no change of supervisor (fabric ASIC): 4 x 55G = 220G.
Second, regarding linecard architecture (and related throughput capabilities): I would add that another important consideration is the evolution of the ASICs themselves.
Increasingly, chip manufacturers are able to shrink transistors, as well as produce lower-defect ASICs, which allows them to reduce the overall die size of the ASIC itself while also boosting its overall performance.
This, in turn, allows vendors such as Cisco to (1) increase port count and/or channel count, because each ASIC is smaller, (2) combine multiple ASIC functions into a single ASIC, and (3) reduce the cost of materials.

Thus, we are able to double the number of ASICs (and performance) for almost the same cost.
I have a design question regarding migration from HSRP to VSS.
I would like to know your thoughts on a migration process that minimises disruption.

We are currently running HSRP on 6500s, and they are coming to EoL. We plan to upgrade to the 6800 and introduce VSS.

What is the best practice for the migration without much disruption?

I understand VSS requires a reboot after applying the configurations.

Is the safest option to trunk the 6500 to the 6800, slowly move services to the new switch, and then apply VSS, or is there a better, best-practice way?
First of all, I would like to put to rest the confusion about the Catalyst 6500E series chassis coming to EoL. The Catalyst 6500E series is not going EoL and will continue its journey alongside the newer Catalyst 6807 chassis. Both the C6500E and C6807 chassis are available and will continue to be; there are NO plans to EoL the Catalyst 6500E for the foreseeable future.
Regarding the design recommendations for converting your existing HSRP/standalone switches to VSS: I do not think I can do justice here without knowing all the aspects of your network. The overall approach to this migration would be to enable VSS on one switch, then transition the configuration and policies of the interfaces to the VSS-converted switch before enabling VSS on the other switch; this minimizes downtime and ensures a smooth migration. I found a document on Cisco.com which is not the latest in terms of supervisor and other hardware specs, but it does make a good point on how to achieve a migration from standalone chassis running HSRP to a VSS pair. Check it out.
Ultimately, the best way to get a design recommendation is to have a word with the Technical Services/Advanced Services team, who can look at your network and implementation details and suggest the best methodology for the migration. Hope this helps.
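As a rough, hedged outline of the staged approach described above (the domain number and commands below are illustrative; verify the exact syntax and sequence against the configuration guide for your software release), the migration looks something like:

```
! Stage 1 (illustrative): convert the first new chassis to VSS
switch virtual domain 200
 switch 1
exit
switch convert mode virtual
!
! ... transition interface configuration, SVIs, and gateway
! ... addressing from the HSRP pair onto the VSS-converted switch ...
!
! Stage 2: convert the second chassis with switch number 2
switch virtual domain 200
 switch 2
exit
switch convert mode virtual
!
! Verify the pair after both chassis join
show switch virtual
show switch virtual role
```

Each conversion triggers a reload of that chassis, which is why staging the work one switch at a time keeps the disruption window small.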
Vivek is right! It is very difficult & risky to provide recommendations w/o clearly understanding the environment.
That said... I can provide some personal experience, that may help guide you!
First, there are 2 questions to ask yourself, and the next steps depend on the answers.
This is important to determine whether it's possible to build up the entire C6807-XL (and VSS) separately and then introduce it into the network, or whether you are taking down a working C6500-E setup and migrating it to the C6807-XL.
Below you will find a basic sequence of events, but (per Vivek's caution) you will want to carefully consider each & every step, and modify the details according to your own environment!
Case1: New C6807-XL + Sup2T, alongside Existing C6500-E + Sup720 (or other system)
NOTE: This is the easiest approach (in my opinion)...
Case2: Migrating from an existing C6500-E + Sup2T (x 2) system to a new C6807-XL & VSS
NOTE: This is similar to Case1, but reusing the existing Cards...
Hello Vivek & Shawn
I have one question regarding Instant Access (6800ia):
In the presentation you said that other switches can be cascaded downstream of the 6800IA FEX.

That is great news, because the Nexus FEX solution lacked this feature, with BPDU Guard enabled by default on FEX ports.

Do you need some special configuration (enable STP, BPDU propagation) on the 6807/6800IA to get this design to work, or is it simply plug & play?

Also, in a design where I have a stacked pair of 6800IAs, can I use a port channel from different stack members towards a downstream switch, with support for LACP?
No special configuration. In fact, the default mode for the 6800IA ports is "switchport" (aka L2) mode.
In this mode, you can do any & all of the standard switchport configurations, such as switchport mode trunk, native vlan, allowed vlan, etc. Note that the same best-practice L2 design principles still apply, such as Root Guard, Loop Guard, Portfast, etc.
switchport trunk native vlan 101
switchport trunk allowed vlan 1,101,1000,2000
switchport mode trunk
logging event link-status
spanning-tree guard root
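Putting the commands above together, here is an example of how they might be applied under a 6800IA host port. The interface numbering below is hypothetical (Instant Access ports take a FEX-style identifier, shown here as FEX id 101); consult the Instant Access configuration guide for the numbering on your system:

```
! Hypothetical 6800IA host-port interface (FEX id 101)
interface GigabitEthernet101/0/1
 switchport mode trunk
 switchport trunk native vlan 101
 switchport trunk allowed vlan 1,101,1000,2000
 logging event link-status
 spanning-tree guard root
```

Root Guard on the downstream-facing trunk protects the STP topology if the cascaded switch ever tries to claim root, in line with the best-practice L2 design principles mentioned above.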
We currently do not support EtherChannel (LACP or PAgP) connected to the FEX. This capability is scheduled for an IOS software release in Q4 CY14. Once that is available, you will be able to have EC members on any of the FEX stack members.