6500 Backplane

Adnan Khan
Level 4

Hi, 

In a 6513, if we have two SUPs, one will be active and the other will be standby. In 6500 switches the backplane (switch fabric) lives on the SUP engine, unlike the Nexus. So if we have two SUPs, one active and one standby, will only one SUP provide the backplane capacity?

KR,


Mark Malone
VIP Alumni

Hi

If one sup is active, you get the throughput from just that sup, if that's what you're asking. The two sups don't combine their throughput, as one is in standby - that's my understanding from how I've read the docs, anyway.

http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/prod_white_paper0900aecd80673385.html

Cisco Catalyst 6500 Backplane

The Cisco Catalyst 6500 incorporates two backplanes. From its initial release in 1999, the Cisco Catalyst 6500 chassis has supported a 32-Gbps shared switching bus, a proven architecture for interconnecting line cards within the chassis. The Cisco Catalyst 6500 chassis also includes a second backplane that allows line cards to connect over a high-speed switching path into a crossbar switching fabric. The crossbar switching fabric provides a set of discrete and unique paths for each line card to both transmit data into and receive data from the crossbar switching fabric. The first-generation switching fabric was delivered by the switch fabric modules (WS-C6500-SFM and WS-C6500-SFM2), each providing a total switching capacity of 256 Gbps. More recently, with the introduction of the Supervisor Engine 720, the crossbar switch fabric has been integrated into the Supervisor Engine 720 baseboard itself, eliminating the need for a standalone switch fabric module. The capacity of the integrated crossbar switch fabric on the Supervisor Engine 720 has been increased from 256 Gbps to 720 Gbps. The Supervisor Engine 720-3B and Supervisor Engine 720-3BXL maintain the same 720 Gbps fabric capacity.
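
As a sanity check on the 720 Gbps figure: it is commonly explained as 18 fabric channels of 20 Gbps each, counted full duplex. A minimal sketch of that arithmetic, assuming the widely cited 18 x 20 Gbps channel breakdown (not stated in the quoted passage itself):

```python
# Commonly cited breakdown of the Sup720 integrated crossbar fabric:
# 18 fabric channels at 20 Gbps each, counted in both directions.
CHANNELS = 18
GBPS_PER_CHANNEL = 20

capacity_gbps = CHANNELS * GBPS_PER_CHANNEL * 2  # full duplex (in + out)
print(f"Sup720 fabric capacity: {capacity_gbps} Gbps")  # -> 720 Gbps
```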

Depending on the Cisco Catalyst 6500 chassis, the crossbar switching fabric maps out a series of fabric channels (otherwise known as paths into the crossbar) to each line-card slot in a slightly different layout. Each chassis fabric layout is detailed in Table 1.

Table 1. Chassis Slot Options

| Chassis | Supervisor Engine 32 / Supervisor Engine 720 Slots | Classic Line-Card Slots | Single Fabric Connected Line Cards | Dual Fabric Connected Line Cards |
| --- | --- | --- | --- | --- |
| Cisco Catalyst 6503 | 1 and 2 | 2 and 3 | 2 and 3 | WS-X67XX line cards not supported |
| Cisco Catalyst 6503-E | 1 and 2 | 2 and 3 | 2 and 3 | 2 and 3 |
| Cisco Catalyst 6504-E | 1 and 2 | 2 through 4 | 2 through 4 | 2 through 4 |
| Cisco Catalyst 6506 | 5 and 6 | 1 through 6 | 1 through 6 | 1 through 6 |
| Cisco Catalyst 6506-E | 5 and 6 | 1 through 6 | 1 through 6 | 1 through 6 |
| Cisco Catalyst 6509 | 5 and 6 | 1 through 9 | 1 through 9 | 1 through 9 |
| Cisco Catalyst 6509-E | 5 and 6 | 1 through 9 | 1 through 9 | 1 through 9 |
| Cisco Catalyst 6509-NEB | 5 and 6 | 1 through 9 | 1 through 9 | 1 through 9 |
| Cisco Catalyst 6509-NEB-A | 5 and 6 | 1 through 9 | 1 through 9 | 1 through 9 |
| Cisco Catalyst 6513 | 7 and 8 | 1 through 13 | 1 through 13 | 9 through 13 |


In all but the thirteen-slot chassis, each line-card slot has two channels in and out of the switching fabric. The thirteen-slot chassis has one fabric channel to each of slots 1 through 8 and two fabric channels to each of slots 9 through 13. The crossbar switching fabric allows each line card to send data to and receive data from every other line card over a unique set of transmission paths.
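
That layout also lines up with an 18-channel fabric: a nine-slot chassis uses 9 x 2 = 18 channels, and the thirteen-slot chassis uses 8 x 1 + 5 x 2 = 18. A short sketch of the per-slot math for the 6513, using the channel counts from Table 1 and assuming 20 Gbps per Sup720 fabric channel (the per-channel rate is not stated in the quoted passage):

```python
# Fabric channels per slot in a Catalyst 6513 with a Sup720,
# per Table 1: one channel for slots 1-8, two for slots 9-13.
GBPS_PER_CHANNEL = 20  # assumed per-channel rate, per direction

channels_per_slot = {slot: 1 for slot in range(1, 9)}
channels_per_slot.update({slot: 2 for slot in range(9, 14)})

print("total fabric channels used:", sum(channels_per_slot.values()))  # 18
for slot, n in sorted(channels_per_slot.items()):
    print(f"slot {slot:2d}: {n} channel(s) -> {n * GBPS_PER_CHANNEL} Gbps per direction")
```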

Thanks Mark, really helpful. One more thing: on the Nexus we have up to 48 10G interfaces on the F2e line cards. Do we have 10G support on the 6500 with fully dedicated 10G interfaces, or is their bandwidth shared based on port grouping?

It's shared. Take a look at this doc for the 6500 blades, last updated at the start of January; most are ratio-based. I don't see any 48-port blades available though, max is 16. Maybe there are some in the pipeline, but I think the current sups would struggle with a full 48x10G, which may be why they're not available for the 6500.

http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/product_data_sheet09186a00801dce34.html

Main Features and Benefits

Table 1 summarizes the primary features and benefits of the Cisco Catalyst 6500 Series 10 Gigabit Ethernet modules.

Table 1. Cisco Catalyst 6500 Series 10 Gigabit Ethernet Modules Primary Features Comparison

| Feature | 4-Port 10GbE Fiber Module | 8-Port 10GbE Fiber Module | 16-Port 10GbE Fiber Module | 16-Port 10GbE Copper Module |
| --- | --- | --- | --- | --- |
| Ports | 4 | 8 | 16 | 16 |
| Optics | XENPAK | X2 | X2 | No optics; copper (RJ-45) connectors |
| Switch fabric connection | 40 Gbps (80 Gbps full duplex) | 40 Gbps (80 Gbps full duplex) | 40 Gbps (80 Gbps full duplex) | 40 Gbps (80 Gbps full duplex) |
| Oversubscription | 1:1 | 2:1 | 4:1 | 4:1 |
| Forwarding engine | Default: centralized forwarding card (CFC); optional: distributed forwarding card (DFC3A, DFC3B, DFC3BXL, DFC3C, or DFC3CXL) | WS-X6708-10G-3C: DFC3C for distributed forwarding, 256,000 routes; WS-X6708-10G-3CXL: DFC3CXL, 1 million routes | WS-X6716-10G-3C: DFC3C for distributed forwarding, 256,000 routes; WS-X6716-10G-3CXL: DFC3CXL, 1 million routes | WS-X6716-10T-3C: DFC3C for distributed forwarding, 256,000 routes; WS-X6716-10T-3CXL: DFC3CXL, 1 million routes |
| Queues | Receive: 8q8t; transmit: 1p7q8t | Receive: 8q4t; transmit: 1p7q4t | Oversubscription mode: receive 1p7q2t per port, transmit 1p7q4t per port group; performance mode: receive 8q4t per port, transmit 1p7q4t per port | Oversubscription mode: receive 1p7q2t per port, transmit 1p7q4t per port group; performance mode: receive 8q4t per port, transmit 1p7q4t per port |
| Queuing mechanisms | Class of service (CoS)-based queue mapping | CoS-based and differentiated services code point (DSCP)-based queue mapping | CoS-based and DSCP-based queue mapping | CoS-based and DSCP-based queue mapping |
| Scheduler | Deficit Weighted Round Robin (DWRR); Weighted Random Early Detection (WRED) | DWRR; WRED; Shaped Round Robin (SRR) at egress | Oversubscription mode: DWRR, WRED; performance mode: DWRR, WRED, SRR at egress | Oversubscription mode: DWRR, WRED; performance mode: DWRR, WRED, SRR at egress |
| Port buffers | 16 MB per port | 200 MB per port | Oversubscription mode: 90 MB per port group; performance mode: 200 MB per port | Oversubscription mode: 90 MB per port group; performance mode: 200 MB per port |
| Hardware-based multicast replication (Layer 2) | Ingress and egress; approximately 20 Gbps per replication engine; 2 replication engines per module | Ingress and egress; approximately 20 Gbps per replication engine; 2 replication engines per module | Ingress and egress; approximately 20 Gbps per replication engine; 2 replication engines per module | Ingress and egress; approximately 20 Gbps per replication engine; 2 replication engines per module |
| Hardware-based multicast replication (Layer 3) | Ingress and egress; approximately 10 Gbps per replication engine; 2 replication engines per module | Ingress and egress; approximately 20 Gbps per replication engine; 2 replication engines per module | Ingress and egress; approximately 20 Gbps per replication engine; 2 replication engines per module | Ingress and egress; approximately 20 Gbps per replication engine; 2 replication engines per module |
| Jumbo frame support for bridged and routed packets | Up to 9216 bytes | Up to 9216 bytes | Up to 9216 bytes | Up to 9216 bytes |
| Maximum port density per chassis | 34 ports (9-slot chassis) | 66 ports (9-slot chassis) | 130 ports (9-slot chassis) | 130 ports (9-slot chassis) |
| Maximum port density per VSS | 68 ports | 132 ports | 260 ports | 260 ports |
| Can be used to form virtual switch link | No | Yes | Performance mode: yes (supported in a subsequent software release); oversubscription mode: no | Performance mode: yes (supported in a subsequent software release); oversubscription mode: no |
| Supervisor engines supported | Cisco Catalyst 6500 Series Virtual Switching Supervisor Engine 720 with 10GE uplinks, or Supervisor Engine 720 with any policy feature card (PFC); chassis will work in lowest-common-denominator mode | Same as 4-port module | Same as 4-port module | Same as 4-port module |
| Chassis supported | Any Cisco Catalyst 6500 E-Series chassis, C6509-NEB-A chassis, non-E-Series chassis with fan tray 2, or Cisco 7600 Series or 7600-S Series chassis (NEBS compliant: operating temperature up to 55°C); not supported in the Cisco Catalyst 6503 non-E-Series chassis | Any Cisco Catalyst 6500 E-Series chassis (6503-E, 6504-E, 6506-E, 6509-E, 6509-V-E) or C6509-NEB-A chassis with dual fan tray, the Cisco 7604 or 7609 with dual fan tray, or a 7600-S Series chassis (NEBS compliant: up to 55°C); or a non-E-Series chassis with fan tray 2 (Cisco Catalyst 6506, 6509, 6513, or C6509-NEB-A with single fan tray, or Cisco 7606, 7613, or 7609 with single fan tray; non-NEBS compliant: up to 40°C); not supported in the Cisco Catalyst 6503 non-E-Series chassis | Any Cisco Catalyst 6500 E-Series chassis (6503-E, 6504-E, 6506-E, 6509-E, 6509-V-E) or C6509-NEB-A chassis with dual fan tray (NEBS compliant: up to 55°C); or a non-E-Series chassis with fan tray 2 (Cisco Catalyst 6506, 6509, 6513, or C6509-NEB-A with single fan tray; non-NEBS compliant: up to 40°C); not supported in the Cisco Catalyst 6503 non-E-Series chassis or 7600 Series chassis | Same as 16-port fiber module |
| Slot requirements | Any slot in a Cisco Catalyst 6503-E, 6504-E, 6506, 6506-E, 6509, 6509-E, 6509-V-E, or 6509-NEB-A chassis, or a Cisco 7604, 7607, 7609, or 7600-S Series chassis; slots 9 through 13 only in a Cisco Catalyst 6513 or Cisco 7613 chassis | Any slot in a Cisco Catalyst 6503-E, 6504-E, 6506, 6506-E, 6509, 6509-E, 6509-V-E, or 6509-NEB-A chassis, or a Cisco 7604, 7606, 7609, or 7600-S Series chassis; slots 9 through 13 only in a Cisco Catalyst 6513 or Cisco 7613 chassis | Any slot in a Cisco Catalyst 6503-E, 6504-E, 6506, 6506-E, 6509, 6509-E, 6509-V-E, or 6509-NEB-A chassis; slots 9 through 13 only in a Cisco Catalyst 6513 chassis | Same as 16-port fiber module |
| Onboard memory | 256 MB default, upgradable to 512 MB or 1 GB | 1 GB default | 1 GB default | 1 GB default |
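
The oversubscription row follows directly from the fixed 40 Gbps per-module fabric connection: each port is 10 Gbps, so the ratio is simply aggregate port bandwidth over fabric bandwidth. A quick sketch of that arithmetic (the module shorthand names are mine, not from the data sheet):

```python
from fractions import Fraction

FABRIC_GBPS = 40  # per-module switch fabric connection, per direction
PORT_GBPS = 10

modules = {
    "4-port fiber (WS-X6704)": 4,
    "8-port fiber (WS-X6708)": 8,
    "16-port fiber/copper (WS-X6716)": 16,
}

for name, ports in modules.items():
    ratio = Fraction(ports * PORT_GBPS, FABRIC_GBPS)
    print(f"{name}: {ratio.numerator}:{ratio.denominator} oversubscription")
```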

Thanks Mark

The 6500E chassis is limited to 80 Gbps per slot, i.e., in theory you can support up to eight 10G ports without sharing bandwidth.

Some (all?) of the higher-density 6500 10G line cards support an optional "performance mode". This mode deactivates ports to bring the oversubscription ratio down, sometimes to 1:1. When it's not active, yes, ports on the higher-density 10G line cards are in groups that share bandwidth, as the sketch below illustrates.
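
To make that concrete, here is a small sketch of the 16-port modules' two modes. The assumption that performance mode leaves one active port per four-port group (16 ports arranged as 4 groups of 4) is inferred from the 4:1 ratio in the data sheet above, not stated outright in this thread:

```python
FABRIC_GBPS = 40   # per-module fabric connection, per direction
PORT_GBPS = 10
PORTS = 16
PORT_GROUPS = 4    # assumed layout: 4 groups of 4 ports

def oversubscription(active_ports: int) -> str:
    return f"{active_ports * PORT_GBPS / FABRIC_GBPS:g}:1"

print("oversubscription mode:", oversubscription(PORTS))        # 4:1, all ports share the fabric
print("performance mode:     ", oversubscription(PORT_GROUPS))  # 1:1, one active port per group
```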

BTW, the newer 6807 has a much higher-bandwidth design for its line-card slots, but no current supervisor, even the 6T, supports the maximum bandwidth the chassis allows.

Thanks Joseph

"In all but the thirteen-slot chassis, each line-card slot has two channels in and out of the switching fabric. The thirteen-slot chassis has one fabric channel to each of slots 1 through 8 and two fabric channels to each of slots 9 through 13."

Actually, the 6513E supports two fabric channels to all slots. The fabric channel limitation you describe isn't a limitation of the chassis; it's a limitation of the sup720 fabric. The sup2T supports the 6513E's extra fabric paths, and it also supports double the bandwidth, i.e. 80 Gbps to all 6513E slots.
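
Extending the earlier slot sketch under the same style of assumptions (here, the sup2T's commonly cited 40 Gbps per fabric channel, with the 6513E wired for two channels to every slot, per the correction above):

```python
# Catalyst 6513E with a Sup2T: two fabric channels to all 13 slots,
# at an assumed 40 Gbps per channel (not stated in this thread).
GBPS_PER_CHANNEL = 40
SLOTS = 13

print(f"per slot: {2 * GBPS_PER_CHANNEL} Gbps per direction")  # 80 Gbps, matching the post above
print(f"fabric channels used: {SLOTS * 2}")                    # 26
```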

That's good to know, thanks. I didn't even see anything about the E-Series for that model in that doc when I was reading it.

Joseph W. Doherty
Hall of Fame

When you say "backplane", that's usually taken to mean just the chassis connections that "glue" inserted line cards together. As far as I know, there's no (6500) redundancy for that. Mark's post explains it better.

When a supervisor (or supervisors) is inserted, it supports operating part of the chassis backplane as a system bus.

Later supervisors (i.e. the sup720 and later) provide the switch fabric. However, switch fabric cards (one or two) could optionally be used with the sup2.

With fabric, whether provided by optional fabric cards or supervisors, you may have one or two, but with two, only one is actively used. The other is in standby.
