
Facebook Forum - Catalyst 6500 Switch Architecture

ciscomoderator
Community Manager

Live chat with Cisco expert Akshay Balaganur on Catalyst 6500 Switch Architecture

July 17th, 2012

The Cisco Catalyst 6500 is one of the most widely deployed switches in the world. Often called the "Swiss Army knife of the network," it can do routing, switching, security, wireless, and almost everything else you would want your core switch to do.

In this forum we will cover topics like 6500 architecture, capacity, oversubscription, copper/fiber modules, and Supervisors (Sup 720 and Sup 32). Please note that we will NOT be covering the Supervisor 2T in this discussion. We request that you post your LAN Switching questions during the forum event.

Our expert is Akshay Balaganur, an engineer at the Cisco Technical Assistance Center. Akshay has been with the LAN Switching team for the last 2 years, working extensively on the Cisco Catalyst 6500 platform. He can answer any questions that you may have; harder questions are welcome!

When: July 17

9:00 AM PDT (San Francisco; UTC -7 hrs)

12:00 PM EDT (New York; UTC -4 hrs)

6:00 PM CEST (Paris; UTC +2 hrs)

9:30 PM IST (India; UTC +5:30 hrs)

To RSVP Click Here

What is Facebook Forum?

Facebook forums are online conversations held at a pre-arranged time on our Facebook page. They give you an opportunity to interact with a live Cisco expert and get more information about a particular technology, service, or product.

How do I participate?

On the day of the event, go to our Facebook page http://www.facebook.com/CiscoSupportCommunity

Like us on Facebook


ciscomoderator
Community Manager

Here's a condensed summary of our July Facebook forum in a Q&A format.

What is the difference between classic linecards and fabric linecards?

The basic difference is the medium they use to move data. Classic line cards use the 32-Gig bus, whereas fabric line cards use the 8/20-Gig fabric channels to move data traffic.

Also note that the 32-Gig bus is a shared medium.

More information can be found here: https://supportforums.cisco.com/thread/2146410
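If you want to see which of your modules are bus-connected versus fabric-connected on a running chassis, a few show commands help. This is just a quick sketch; the output format varies by supervisor and software release, and the lines starting with "!" are only annotations:

! List each slot, its module type, and any CFC/DFC sub-module
Switch#show module
! Show which slots have fabric channels and their status
Switch#show fabric status
! Show per-channel ingress/egress fabric utilization
Switch#show fabric utilization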

What are some of the key features of the Supervisor 2T as compared to Supervisor 720?

Kindly note that we are not covering SUP2T in this discussion, as it is a huge topic in itself. We will probably come up with a new discussion entirely focused on SUP2T soon. For now, I hope this link will do: http://www.cisco.com/en/US/products/ps11878/index.html

What is the backplane capability of Sup2T, and how is it different from Sup720?

In Sup2T, as the name describes, we have 2 Terabits of fabric. We basically have 26 channels of 40 Gig each, which gives us 26x40 = roughly 1 Terabit. It is full duplex, meaning traffic can go both ways at the same time, so it effectively gives you 2 Tbps of fabric switching capacity. Kindly note that we are focusing only on SUP720 and SUP32 in today's discussion.

Since Sup720 supports VSS, does it require special hardware to support VSS?

You would need a Virtual Switching Supervisor 720 10GE (VS-S720-10GE) for VSS.

The switch fabric has 26 channels of 40 Gig each? Could you please explain, or point me to some documentation?

The switch fabric on Sup2T has 26 channels. Each slot in the chassis gets two channels. So if you have a 6513-E chassis (non-E chassis don't work with SUP2T), each of the 13 slots has 2 fabric channels. Each fabric channel is capable of 40 Gig, so in total you have 26x40 = roughly 1 Tbps; full duplex makes it 2 Tbps. We have the new 69XX series line cards, which have 2x40-Gig fabric channel connectors.

Also note that you can use your existing 67XX line cards with SUP2T, but they will operate at 20 Gig per channel. The fabric has the intelligence to sync the clock rate based on the type of line card installed.

Why does the 6500 support GRE, but other switches don't?

That's not entirely true. The 4500 switch also supports GRE, but in software. If you are talking about GRE support in hardware, then yes, you are right.

It all boils down to the architecture of these switches. When the 6500 was designed, we put in a piece of hardware called the EARL (Enhanced Address Resolution Logic) with the intention of supporting multiple features in hardware: L2/L3 lookups (IPv4 and IPv6), ACL, QoS, NetFlow, MPLS (from Sup720-3B onwards), GRE, and so on. The EARL chip resides on your PFC/DFC cards, and that is what enables you to do all these fancy features in hardware.

Can you tell me whether the CFC cards do fewer features in hardware because they lack this EARL processor?

That's a very good point. The CFC just adds the bus connector capability to the 67XX line cards. The CFC doesn't have the EARL chip and the TCAMs, so it does not have forwarding capability like the DFCs do. If someone wants to install a DFC on their 67XX line card, they have to remove the CFC daughter board first. Once the DFC is installed, there is no connectivity to the bus.

How can I detect an ARP spoofing attack?

A very good question. Cisco has a feature called "Dynamic ARP Inspection" which helps you mitigate ARP spoofing / man-in-the-middle attacks. Please go through the following link; it has a detailed explanation of the feature and also the steps to configure it.

http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/dynarp.html
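To give a rough idea, here is a minimal sketch of what a DAI configuration usually looks like. The VLAN number and interface below are only placeholders; DAI validates ARP packets against the DHCP snooping binding table, so snooping is enabled as well (statically addressed hosts would need ARP ACLs instead):

ip dhcp snooping
ip dhcp snooping vlan 10
!
! Inspect ARP packets received on access ports in VLAN 10
ip arp inspection vlan 10
!
interface GigabitEthernet1/1
 description Uplink toward the distribution layer
 ! Do not inspect ARP/DHCP on the trusted uplink
 ip arp inspection trust
 ip dhcp snooping trust

You can then watch the drop counters with "show ip arp inspection statistics" to see whether spoofed ARP packets are being caught.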

We have a couple of 6500s with all classic line cards in them. Whenever we had performance/oversubscription issues, we were advised to add some DFC-capable line cards and DFCs. I want to understand: what are the benefits of using a DFC?

Performance is the biggest and most obvious reason for implementing DFCs. You move from a 30 Mpps centralized forwarding system to anywhere up to a 400 Mpps distributed forwarding system; each DFC gives you up to 48 Mpps per slot.

Visit the link for more details.

http://www.cisco.com/en/US/products/hw/switches/ps708/products_qanda_item09186a00809a7673.shtml#

What are the software/hardware requirements to be able to terminate IPSec tunnels directly into the 6500?

I am really not a VPN guy, but I hope this link answers your question.

http://www.cisco.com/en/US/docs/interfaces_modules/shared_port_adapters/configuration/6500series/76ovwvpn.html

For more details, I would suggest posting the query to the Security section of the Cisco Support Community.

https://supportforums.cisco.com/community/netpro/security/vpn

Here are some bonus hints and tips prepared by our expert on this topic in a Q&A format


I have a system running a SUP720 with PFC3C and line cards with DFC3Cs. One of our newly installed line cards doesn't have a DFC yet. We have a spare DFC3C-XL lying around; can I mount that on the line card and use it in the same chassis? Will I have any issues?

A system with a mixture of forwarding engines only operates with the capabilities of the least-capable forwarding engine in the chassis. We cannot allow each forwarding engine to operate independently in its own mode; letting the BXL run in BXL mode, the B run in B mode, and so on within the same chassis is not allowed. This is because the forwarding tables cannot be synchronized if each PFC3/DFC3 has different capabilities within the same system.

Visit this link for more details.

http://www.cisco.com/en/US/products/hw/switches/ps708/products_qanda_item09186a00809a7673.shtml#qa1
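If you do end up mixing them, a quick sanity check after the card comes online is to confirm which mode the whole system settled on. These are just illustrative commands (the second one also appears later in this summary), and the "!" lines are annotations:

! See every installed module and its PFC/DFC/CFC sub-module
Switch#show module
! Confirm the system-wide forwarding-engine operating mode
Switch#show platform hardware pfc mode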

Do you have a document or a link that explains troubleshooting steps for high CPU issues on the 6500?

Yes, we have plenty!

http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a00804916e0.shtml

https://supportforums.cisco.com/docs/DOC-15602

https://supportforums.cisco.com/docs/DOC-22037
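Before diving into those documents, these are the first commands we usually run. Remember the 6500 has two control-plane CPUs, and the SP is reached with "remote command switch"; this is only a sketch and the exact output varies by release:

! Which processes are busy on the RP right now
Switch#show processes cpu sorted
! CPU utilization history over the last 60 seconds / 60 minutes / 72 hours
Switch#show processes cpu history
! The same process view from the Switch Processor (SP)
Switch#remote command switch show processes cpu sorted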

We see some unicast traffic on ports where it is not intended. It looks like flooding, but we don't understand why it is flooding!

You are most likely running into the "unicast flooding" scenario. Basically, the switch unicast-floods when it has aged out a particular MAC address table entry but still retains the ARP entry.

There are multiple causes for this. I would suggest going through the following link; let me know if you still have any queries.

http://www.cisco.com/en/US/customer/products/hw/switches/ps700/products_tech_note09186a00801d0808.shtml

And especially with the 6500 and DFC line cards, you might run into this problem if you don't have "mac-address-table synchronization" enabled. Please refer to the link for more details.

http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a00807347ab.shtml
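As a rough sketch of the two knobs usually involved here: the 14400-second value simply matches the default ARP timeout of 4 hours, and the exact command form (with or without the hyphen after "mac") varies a bit between 12.2SX releases, so verify it on your box:

! Keep MAC entries around at least as long as the ARP entries that point to them
mac address-table aging-time 14400
!
! On chassis with DFC line cards, keep the MAC tables on all DFCs in sync
mac address-table synchronize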

When you go to order software for, let's say, the Supervisor 720 with the 10G uplinks, which comes with a 1G internal CompactFlash by default, it'll say that the minimum required flash is 1G even though the image is only 90M or whatever. Why am I required to have 1G?

Well, because the SUP-720-10G comes with 1G of flash by default. You have a 1G switch processor flash; you can't make it any smaller. Regardless of the size of the image, we're just going to say it requires that 1G, because that's all that comes with it. Even though the image might be small, that requirement on the software download page is really more a representation of what comes by default.

Generally speaking, what we recommend for bootflash size is at least enough to hold two images, so that if you're upgrading and something goes bad you still have the old image to downgrade to, plus extra space for crashinfo files and other such files. We also have the system event archive now, which takes 32M by default and can be made larger, and there is other information you can store on the bootflash. These days we're generally recommending 512M: even at 90 to, say, 100M per image, two images put you right at about 200M; add the system event archive and that's about 232M; add a few other files and you're right around 256M. So we recommend 512M and larger even though the image might be a lot smaller than that. That's where those recommendations come from.

I am always confused between bootflash: and bootdisk:, and I can't find any documentation. What really is the difference?

There is a small difference. Initially we shipped a 64MB bootflash for both the SP and the RP; we called it bootflash. But with SUP720 (starting with Sup720-3B, I think) we added a CompactFlash adapter internally for the SP CPU, and it ships with a 512MB CompactFlash card. That's where "bootdisk:" came into the scene.

But this is not something you need to be bothered about; the CLI lets you use both bootflash: and bootdisk:.
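For example, from the RP console you can list both devices. The exact filesystem prefixes differ a little by supervisor and release, but on a SUP720 they typically look like this:

! RP bootflash
Switch#dir bootflash:
! SP internal CompactFlash (the "bootdisk"), as seen from the RP
Switch#dir sup-bootdisk:
! List all filesystems the box knows about
Switch#show file systems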

Can all the uplink ports on the Supervisor be active at the same time, or do we have a command to activate them, like on the 4500 switches?

First thing: we don't have a CLI command to activate the ports. Whether you can use all the uplink ports or not depends on which supervisor you are using. Sup32? Yes. On both the Sup32-8GE and Sup32-10GE, all uplink ports can be active at the same time.

On Sup720? No. On Sup720 modules there are three GE front ports: two GE SFP and one GE-TX. Port 1 is designated as a GE SFP port. Port 2 can be either the GE SFP or the GE-TX port; activating one of these ports disables the use of the other. In this regard, the Sup720 can have either 2 x GE SFP ports active, or 1 x GE SFP + 1 x GE-TX active. The VS-S720-10GE has two additional 10 Gig ports, which can also be active at the same time.

I have heard TAC tell me not to use the 6148/6348 for server farms, especially whenever we had output drop issues. Can you explain?

Sure. The 6148-GE-TX and 6548-GE-TX are positioned for the wiring closet, i.e., gigabit-to-the-desktop deployments. These cards are not positioned for the data center or other high-performance applications.

They are oversubscribed 8:1, i.e., 8 front-panel 10/100/1000 ports share 1 Gbps of bandwidth into the system, regardless of whether you are using the bus or the fabric. So if you are pumping anything more than a Gig of traffic into each ASIC (a set of 8 ports in this case), you are bound to see output drops (a quick way to check for these drops is sketched after the list below).

These cards have the following key characteristics that also need to be considered:

  • They do not support jumbo frames
  • They do not support broadcast/multicast/unicast suppression
  • They do not support ISL trunk encapsulation
  • They do support Cisco inline power & (future) 802.3af inline power w/the appropriate daughter card
  • They do not support distributed forwarding
  • They do support the time-domain reflectometry (TDR) function
  • They are priced for access-layer deployments
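A quick way to confirm you are hitting this oversubscription is to look for output drops on the ports of the affected ASIC group. The interface below is only a placeholder and output details vary by release:

! Per-port error/discard counters (look at the output discard column)
Switch#show interfaces GigabitEthernet3/1 counters errors
! Per-queue drop counters on the same port
Switch#show queueing interface GigabitEthernet3/1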

I have read that we have two CPUs inside the Supervisor? Are they for separate functions or for internal redundancy?

Yes, we have two CPUs inside the SUP720 and all its predecessors. The RP (Route Processor) is one of the two CPUs on the Supervisor and performs Layer 3 control plane functions on the switch; a control plane function is a feature that runs in software. The RP is responsible for running Layer 3 control plane features such as routing protocols (BGP, OSPF, EIGRP, etc.), logging, Data Link Switching, multicast routing protocols, NetFlow Data Export, SNMP, and more. The SP (Switch Processor) is the second of the two CPUs. The SP is responsible for running Layer 2 control plane features such as Spanning Tree protocols (PVST+, MST, 802.1s, etc.), VLAN Trunking Protocol (VTP), Cisco Discovery Protocol (CDP), and more.

(Although we are not discussing SUP2T here, just FYI: in SUP2T we have a single dual-core processor!)
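You can see the two CPUs from the CLI yourself: the RP is the console you normally land on, and the SP is reached through "remote command switch" (or an interactive "remote login switch"). A small illustration:

! Runs on the RP (Route Processor)
Switch#show version
! The same command executed on the SP (Switch Processor)
Switch#remote command switch show version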


I have some confusion with the way the "mls cef maximum-routes ip" command works. Whenever I try to set an allocation for the IPv4 TCAM with this command, I also see an increase in the size of the MPLS table. What adds to the strangeness is that the increase in the IP + MPLS table does not equate to the decrease in the IPv6 table. Look at the following example.

BGL.Q.05-6500-4#show mls cef maximum-routes

FIB TCAM maximum routes :

=======================

Current :-

-------

IPv4 + MPLS         - 192k (default)

IPv6 + IP Multicast - 32k (default)

BGL.Q.05-6500-4#show platform hardware pfc mode

PFC operating mode : PFC3B

BGL.Q.05-6500-4(config)#mls cef maximum-routes ip 200

BGL.Q.05-6500-4#show mls cef maximum-routes

FIB TCAM maximum routes :

=======================

Current :-

-------

IPv4                - 200k

MPLS                - 8k (default)

IPv6 + IP Multicast - 24k (default)

Notice that I took 8K entries from the IPv6 space and put them into the IPv4 TCAM space, but I also ended up with an extra 8K of space for MPLS???

This is a very good question, I would say! Let's first look at how the FIB TCAM is designed. Each FIB TCAM entry is 72 bits in size, but an IPv6 address is 128 bits, so we can't fit it in one FIB TCAM entry. So we use two FIB TCAM entries for each IPv6 address.

Whenever you try to increase the IPv4 space by borrowing some from IPv6, this is what happens:

a) For each entry you take out of the IPv6 space, you actually free up two TCAM entries.

b) The extra entries are added to the MPLS space.

In your case you chose to make the IPv4 table 200K in size, so you borrowed 8K entries from IPv6, which in reality frees up 16K. The extra 8K entries are added to the MPLS space.

Let’s look at another example here.

Switch#show mls cef max

FIB TCAM maximum routes :

=======================

Current :-

-------

IPv4 + MPLS         - 192k (default)

IPv6 + IP Multicast - 32k (default)

So, let's say I increase the IP entries to their maximum value of 239k. 

Core-1(config)#mls cef maximum-routes ip 239

Upon reload, I see the following:

Core-1#show mls cef maximum-routes

FIB TCAM maximum routes :

=======================

Current :-

-------

IPv4                - 239k

MPLS                - 1k (default)

IPv6 + IP Multicast - 8k (default)

Let's break this down:

I need to increase the IP entry count by 47k, so we need to borrow 24k entries from the IPv6 + Multicast count (remember, those use 2 TCAM entries each, which is why 47 is divided by 2 and rounded up to the whole number 24).

Since we borrowed 24k (which gave us 48k entries) and only used 47k of them, there is still 1k left over. This is handed over to the MPLS entries, which become 1k in total.
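Two related points worth keeping in mind when you resize the partitions (both commands already appear in this discussion): the change only takes effect after a reload, as in the example above, and it helps to check how full the FIB TCAM actually is before you move entries around.

! Current FIB TCAM partition sizes
Switch#show mls cef maximum-routes
! Actual FIB TCAM usage per protocol
Switch#show platform hardware capacity forwarding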

We see this problem with multicast traffic only. I would like to get some clarification on how having DFCs on CEF720 line cards would increase/help the multicast replication capacity of the switch.

Topology

------------

Multicast feed 1 -------------------LC2--[6509]----LC7-------------------Multicast feed 2

I have a 6509 with a Sup720 running 12.2(33)SXH1 and centralized forwarding. All slots have 67XX cards with CFCs (show module attached).

Currently I have two multicast sources/feeds. One goes to LC 2 and the other goes to LC 7.

The current replication mode is ingress. We see a lot of drops on the multicast stream. If we turn off feed no. 2, everything is fine and no drops are seen.

We were advised either to change to egress replication mode OR to get DFCs for the line cards in slots 2 and 7, at least for one of them.

But I would like to clearly understand how a DFC would help.

You are probably dropping packets at the replication engine (ASIC). That explains why you don’t see issues with the Unicast traffic.

With centralized forwarding, the replication engine on the line card has to hold on to a copy of the packet until it receives the lookup result from the Supervisor.

And if we have a chassis loaded with 67XX cards, we might easily oversubscribe the Supervisor's EARL lookup capacity of 30 Mpps.

When you use DFCs, the forwarding lookups are fully distributed, which certainly reduces that load.

The best multicast performance (both in forwarding and in fabric bandwidth usage) is with fully distributed DFCs and egress replication mode.

You can check forwarding and multicast performance (I believe this is available in the SXI train) using:

"show platform hardware capacity forwarding"

"show platform hardware capacity rewrite-engine performance"

The second command can be used to check whether we are dropping packets in the fabric path at the rewrite engine.
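If you decide to go the egress-replication route, the change itself is a single global command. This is only a minimal sketch assuming native IOS 12.2SX; the exact keyword set for the replication-mode command has changed between releases, so please verify it against the command reference for your train before using it:

! Switch the multicast replication mode from ingress to egress
! (command keywords from memory; verify for your release)
Switch(config)#mls ip replication-mode egress
!
! Then re-check replication-engine drops with the command mentioned above
Switch#show platform hardware capacity rewrite-engine performance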

----------------------------------------------------------------------------------------------------------------------------------------------------

Here's a link to the forum archive on Facebook:

http://www.facebook.com/notes/cisco-online-support-community-netpro/facebook-forum-summary-catalyst-6500-switch-architecture/427449180627024

Here are the links to the forum question threads on our wall:

http://www.facebook.com/CiscoSupportCommunity/posts/10151109761421412

http://www.facebook.com/CiscoSupportCommunity/posts/10151109754716412

http://www.facebook.com/CiscoSupportCommunity/posts/10151109677536412

http://www.facebook.com/CiscoSupportCommunity/posts/10151109549316412

http://www.facebook.com/CiscoSupportCommunity/posts/10151109529751412

http://www.facebook.com/CiscoSupportCommunity/posts/10151109643821412

http://www.facebook.com/photo.php?fbid=10151109673586412&set=a.419766491411.208334.133380531411&type=1

Here's the link to the event announcement page on Facebook

http://www.facebook.com/events/393294577384618/
