01-23-2014 07:27 AM - edited 03-07-2019 05:45 PM
Hi,
I am hoping someone will be able to clear this up with a definite answer, since I keep finding contradictory information online. This concerns 6500 modules, namely WS-X6148-GE-TX, WS-X6148A-GE-TX and WS-X6748-GE-TX.
How do you find out how many ASICs there are on a card? I have two commands for this:
=WS-X6148-GE-TX
access#show interfaces capabilities module 2
GigabitEthernet2/1
Dot1x: yes
Model: WS-X6148-GE-TX
Type: 10/100/1000BaseT
Speed: 10,100,1000,auto
Duplex: half,full
Ports on ASIC: 1-24 <-------------------
Port-Security: yes
access#show asic-version slot 2
Module in slot 2 has 3 type(s) of ASICs
ASIC Name Count Version
PINNACLE 2 (4.2)
LEMANS 6 (0.3)
REVATI 6 (0.3)
=WS-X6148A-GE-TX
access#show interfaces capabilities module 1
GigabitEthernet1/1
Dot1x: yes
Model: WS-X6148A-GE-TX
Type: 10/100/1000BaseT
Speed: 10,100,1000,auto
Duplex: half,full
Ports on ASIC: 1-48 <-------------------
Port-Security: yes
access#show asic-version slot 1
Module in slot 1 has 2 type(s) of ASICs
ASIC Name Count Version
VISHAKHA 6 (0.1)
DHANUSH 1 (3.0)
If I look at data sheets for these cards I see:
•Port groups
–WS-X6148-GE-TX—2
–WS-X6148A-GE-TX—6
•Port ranges per port group
–WS-X6148-GE-TX: 1-24, 25-48
–WS-X6148A-GE-TX: 1-8, 9-16, 17-24, 25-32, 33-40, 41-48
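To make the data-sheet mapping concrete, here is a quick Python sketch I put together (my own illustration, not output from any Cisco tool) that maps a port number to its port group, given the group size from the data sheet:

```python
# Group sizes taken from the data sheets quoted above:
#   WS-X6148-GE-TX  -> 24 ports per group (2 groups)
#   WS-X6148A-GE-TX -> 8 ports per group (6 groups)

def port_group(port, group_size):
    """Return the inclusive 1-based port range of the group containing `port`."""
    first = ((port - 1) // group_size) * group_size + 1
    return (first, first + group_size - 1)

print(port_group(30, 24))  # WS-X6148-GE-TX: port 30 falls in group 25-48
print(port_group(30, 8))   # WS-X6148A-GE-TX: port 30 falls in group 25-32
```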
Why do the two commands show conflicting details on the number of ASICs/ports per ASIC? Which should you trust?
I assume that "show asic-version slot" is the correct one, so is it the "PINNACLE" and "VISHAKHA" that are in charge of the switching? What are the other ASICs listed, and what are they for?
Also I see that it says "Module oversubscription rate 8:1", so this is 1Gb per 8 ports. This is totally independent of the number of ASICs, is that correct?
If I compare this to a different card on another switch, it doesn't add up to any of my previous theories.
=WS-X6748-GE-TX
access#show interfaces capabilities module 9
GigabitEthernet9/1
Model: WS-X6748-GE-TX
Type: 10/100/1000BaseT
Speed: 10,100,1000,auto
Duplex: half,full
Ports-in-ASIC (Sub-port ASIC) : 1-24 (1-12) <-------------------
Remote switch uplink: no
Dot1x: yes
Port-Security: yes
access#show asic-version slot 9
Module in slot 9 has 3 type(s) of ASICs
ASIC Name Count Version
PINNACLE 2 (4.2)
LEMANS 6 (0.3)
REVATI 6 (0.3)
•4 port groups
•Port ranges per port group: 1-12, 13-24, 25-36, 37-48
Module oversubscription rate 1.2:1
So really what I want to know is how many ASICs each card actually has. If I know this, I can work out how many ports there are per ASIC.
Is there a data sheet of the capabilities of each ASIC?
Are there any other commands that will give me a better output on the ASICs?
Is the module oversubscription rate dependent on the number or type of ASICs, and if not, what is it dependent on?
Thanks
Solved!
01-24-2014 05:38 AM
Stephen
I had to edit this post because I managed to confuse myself, and 6500 linecards/oversubscription is something I know relatively well (or thought I did).
I think trying to look at ASICs etc. as you are will not help, because there is no direct mapping between port groupings/ASICs and throughput (or at least you can't always get at that information via the CLI). So I won't be able to answer all of your questions, but I can help with the oversubscription issue. In this particular respect the best place for information is the 6500 release notes, where they specify each linecard's port groupings and throughput. The figures for your linecards were taken from them, and I have included a link at the bottom for the full 12.2SX release notes.
Apologies if I am telling you things you already know, but with your linecards you are not comparing like with like. The WS-X6148 linecards are classic linecards, whereas the WS-X6748 is a fabric only linecard. This has a bearing on throughput/oversubscription. In simple terms there are three types of linecard with the 6500 -
classic linecards - these only have a connection to the 32Gbps shared bus. They cannot use a dedicated connection to the switch fabric. So all classic linecards in your chassis have to share a single bus across the whole chassis.
fabric enabled linecards - these support connections to the shared bus and dedicated connections to the switch fabric if available.
fabric only linecards - these support only dedicated connections to the switch fabric, i.e. they cannot use the shared bus.
The sup32 only provides a connection to the shared bus, so you can use classic and fabric enabled cards with it but not fabric only cards. The sup720 provides dedicated connections to the switch fabric as well as connections to the shared bus, so pretty much all cards are supported. The sup2T supports fabric only cards and a small subset of non fabric only cards.
So just knowing how much throughput the linecard has is not the only consideration, because classic and fabric enabled cards also have to contend with any other linecards in the chassis that are using the shared bus. That can complicate things, although it is not to say the shared bus will be overutilised, as that depends on the other linecards in the chassis. The WS-X6748 is a fabric only card, so it does not have to share its connection to the switch fabric, which means that for data transfer the only thing you need to take into account is the amount of throughput coming from end devices connected to the linecard.
So from the release notes -
WS-X6148-GE-TX has two port groups (1-24, 25-48). Each set of 8 ports (1-8, 9-16, etc.) is limited to 1Gbps. So you have an 8:1 oversubscription ratio, and 48 ports / 8 ports per set = 6 sets x 1Gbps gives you a potential throughput of 6Gbps. If you wanted to try to balance ports, you could spread them across the 1Gbps-per-8-port ranges. This may help with throughput, depending on how utilised the shared bus is.
WS-X6148A-GE-TX has 6 port groups (1-8, 9-16, etc.) and each set of 8 ports is limited to 1Gbps, so again 6 x 1Gbps = 6Gbps. With this linecard there is a direct mapping between the port groups and the actual throughput per port group. The same considerations about balancing ports etc. as above apply.
WS-X6748-GE-TX has 4 port groups (1-12, 13-24, 25-36, 37-48). It also has 2 x 20Gbps dedicated connections to the switch fabric. Ports 1-24 map to one of the 20Gbps connections and ports 25-48 map to the other. With this linecard oversubscription is relatively easy to work out, because it is not sharing its connection to the switch fabric with any other linecards in the chassis, so it really is just a question of -
48Gbps in (in = from end devices connected to the linecard)
40Gbps out (out = to the switch fabric)
If you wanted to ensure there was absolutely no chance of oversubscription, you would only use 20 ports from 1-24 and 20 ports from 25-48. It is however unlikely that all 48 ports would be transmitting 1Gbps concurrently, so you are unlikely to see oversubscription with this module.
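The arithmetic above can be sketched in a few lines of Python (a rough illustration of my own; the figures are the release-note numbers quoted above, and `oversubscription` is just a name I made up):

```python
# Oversubscription = ingress capacity from end devices / egress capacity
# out of the card (shared-bus port-group limits or fabric channels).

from fractions import Fraction

def oversubscription(ports, port_speed_gbps, uplink_gbps):
    """Ratio of total port bandwidth to the card's total uplink bandwidth."""
    return Fraction(ports * port_speed_gbps, uplink_gbps)

# WS-X6148(A)-GE-TX: 6 sets x 1Gbps = 6Gbps out of the card
print(oversubscription(48, 1, 6))   # 8 -> the 8:1 ratio above
# WS-X6748-GE-TX: 2 x 20Gbps fabric channels = 40Gbps
print(oversubscription(48, 1, 40))  # 6/5 -> the 1.2:1 ratio above
```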
Some of the later linecards though, e.g. the 10Gbps linecards, can easily oversubscribe the switch fabric connection, so with those it is more important to balance correctly if you need to avoid oversubscription. For these linecards Cisco introduced a new command to put the linecard into performance mode (as opposed to oversubscription mode). The command simply disables certain ports so that the aggregate bandwidth of the remaining ports per connection to the switch fabric equals that connection's bandwidth, i.e. if there were 8 x 10Gbps ports using a 40Gbps connection to the switch fabric, then performance mode would shut down four of the ports. This can be useful if you need to run all ports with no oversubscription, but it can be a bit of a blunt tool if you only need to ensure some ports don't get oversubscribed; we had a recent thread in this forum that dealt specifically with that issue.
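As a rough sketch of what performance mode is doing (my own illustration of the arithmetic, not Cisco's actual implementation):

```python
# Performance mode disables ports until the remaining ports' aggregate
# bandwidth no longer exceeds the fabric connection they share.

def ports_to_disable(ports, port_speed_gbps, fabric_gbps):
    """Ports that must be shut down so the rest can run at line rate."""
    usable = fabric_gbps // port_speed_gbps  # ports the fabric can carry at full speed
    return max(0, ports - usable)

# 8 x 10Gbps ports sharing one 40Gbps fabric connection:
print(ports_to_disable(8, 10, 40))  # 4 ports shut down
```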
I hope some of this helps and I haven't just bored you stupid. Here's the link to the release notes for 12.2SX -
http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/release/notes/hardware.html
Edit - I should also point out that port groupings are not just about throughput; they can also be about port buffers, as some linecards share memory between port groups, and again this can have a big impact on the performance of the linecard.
There are also whole areas I have not touched on, i.e. CFC vs DFC, certain fabric only cards using the shared bus for packet lookups in CFC mode, and the mixing of classic and fabric only cards, which can degrade the forwarding capacity of the entire chassis. If you need to know more about this, just let me know.
Jon
02-05-2014 05:01 AM
Hi Jon,
Thanks for the detailed response; it answers the part that was confusing me and is a big help in my understanding of the cards. I appreciate the time spent explaining this.
Stephen