
Data centre design query using the 4948 switch

cbeswick
Level 1

Hi,

Our network backbone is built entirely on 6500 switches with Sup720s. The core and distribution switch blocks are all dual 6500s, and the bulk of the access layer consists of 2950 / 2960G switches.

We need to provide an increased level of performance in our data centre but only have a limited budget. I have looked at the 2350, 3560, 3750 and 4948. Initially the 2350 looked ideal, but Cisco, quite annoyingly, have omitted support for single-mode SFPs, so the switch cannot support fibre runs longer than a couple of hundred metres, which isn't far enough to provide cross-site connections within the switch block.

The 3560 and 3750 switches do not appear to have enough switching backplane fabric. I know there are a few new variants out, but initial pricing suggests that these are out of our range. We cannot afford to go to 10Gig at the moment due to the cost of the 6500 10Gig modules.

So I am left with the 4948, which seems to fit the bill perfectly. The design will rely on a tried and tested "triangle" approach with layer 2 only at the data centre aggregation layer, with each 4948 switch dual-homed to the 6500 distribution switch block. Because the 4948 comes with 4 SFP uplinks, I will EtherChannel the uplinks in pairs to provide 2 Gbps to each distribution switch.
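As a rough sketch of what I mean (interface and channel-group numbers are just placeholders, and the 6500 ends would need matching config), on each 4948 I would bundle the four SFP uplinks in pairs:

```
! 4948: two SFP uplinks bundled towards distribution switch A
interface range GigabitEthernet1/49 - 50
 description Uplink to 6500-dist-A
 switchport mode trunk
 channel-group 1 mode desirable
!
! ...and the other two towards distribution switch B
interface range GigabitEthernet1/51 - 52
 description Uplink to 6500-dist-B
 switchport mode trunk
 channel-group 2 mode desirable
```

(PAgP "desirable" shown here; LACP "active" on both ends would do the job equally well.)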

My only issue with this approach is this: assuming I populate the switch up to 44 ports (I have read that only 48 ports in total can forward at wire rate at any one time) with high-throughput servers (e.g. iSCSI SAN), how much of a danger is there in oversubscribing the uplinks, and how much of a performance hit will this create?

I have thought about investigating the 10Gig variant of the 4948 for future scalability, but again, Cisco have decided to remove support for the TwinGig converter, so it cannot support 1 Gbps uplinks!

The only alternative to the above is to deploy 2960G switches, which would reduce costs significantly, but I don't even want to think about the problems we will face if we want to deploy iSCSI.

Many thanks in advance,

Chris.

Accepted Solution

Jon Marshall
Hall of Fame

Chris

The 3560 and 3750 switches do not appear to have enough switching backplane fabric. I know there are a few new variants out, but initial pricing suggests that these are out of our range. We cannot afford to go to 10Gig at the moment due to the cost of the 6500 10Gig modules.

So I am left with the 4948, which seems to fit the bill perfectly. The design will rely on a tried and tested "triangle" approach with layer 2 only at the data centre aggregation layer, with each 4948 switch dual-homed to the 6500 distribution switch block. Because the 4948 comes with 4 SFP uplinks, I will EtherChannel the uplinks in pairs to provide 2 Gbps to each distribution switch.

My only issue with this approach is this: assuming I populate the switch up to 44 ports (I have read that only 48 ports in total can forward at wire rate at any one time) with high-throughput servers (e.g. iSCSI SAN), how much of a danger is there in oversubscribing the uplinks, and how much of a performance hit will this create?

When you talk about the 3560 and 3750 new variants, do you mean the E versions? The E versions can actually run at pretty much wire rate, e.g. the 3750-E supports a 106Gbps switch fabric whereas the 4948 supports a 96Gbps switch fabric. It's been a while since I priced up these switches, so I'm assuming that the E versions are more expensive than the 4948; correct me if I'm wrong.

So, assuming the above is true and you go with 4948 switches, the danger in oversubscription depends on:

1) how much traffic can be kept local to the switch

2) how much traffic 44 ports will actually generate - this is important because 44 ports running at 1Gbps don't generate 44Gbps all at the same time.

However, if much of the traffic is inter-switch then you are obviously limited by a 2Gbps etherchannel, although with (Rapid) PVST you could load-balance the VLANs across both uplinks. Still, this only gives you 4Gbps at best.
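To illustrate the VLAN load-balancing (VLAN numbers are just examples), you make each distribution switch the root bridge for half the VLANs, so each uplink ends up forwarding for its half:

```
! Distribution switch A: root bridge for VLANs 10,20; backup for 30,40
spanning-tree mode rapid-pvst
spanning-tree vlan 10,20 priority 4096
spanning-tree vlan 30,40 priority 8192
!
! Distribution switch B: the mirror image
spanning-tree mode rapid-pvst
spanning-tree vlan 30,40 priority 4096
spanning-tree vlan 10,20 priority 8192
```

If one uplink fails, all VLANs fall back onto the surviving uplink, so you still only want to size the design around the 2Gbps single-uplink case.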

If you are limiting each switch to one VLAN then there are two other possible designs:

1) have an L3 connection between your distribution pair; then both uplinks can be forwarding because there is no L2 loop.

2) use routed links from the 4948 to the distribution pair

However, in a DC both have their drawbacks, especially 2), because it removes the flexibility of VLAN allocation across switches.
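For completeness, option 2) would look something like this on the 4948 (addresses, VLAN and process numbers are made up, and it assumes an IOS image with routing support). Note the server VLAN now exists only on this one switch, which is exactly the flexibility you lose:

```
! 4948: uplinks become routed point-to-point links instead of trunks
interface Port-channel1
 no switchport
 ip address 10.0.1.1 255.255.255.252
!
interface Port-channel2
 no switchport
 ip address 10.0.1.5 255.255.255.252
!
! the server VLAN gateway moves down onto the access switch
interface Vlan100
 ip address 10.100.0.1 255.255.255.0
!
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
```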

Regardless of which option you choose, 44Gbps does not go into 4Gbps, so yes, there is a danger of oversubscription, and for high-bandwidth apps this can degrade performance. At worst that is an 11:1 oversubscription ratio, which is quite bad. But it really does depend on what I outlined above.

One other thing though. If you only purchase the 4948, i.e. no 10Gbps uplinks, there is no future-proofing, which I am sure you are aware of. If you then find that you are seriously overloading the uplinks and QoS cannot help because all traffic is equal, then you may need to go 10Gbps anyway, which may make the 3750-E with the TwinGig converter, for example, a better bet now.

My recommendation would be to investigate just how much of the traffic per switch would stay local, and so can be switched at wire speed, before making a final decision.

Jon



Hi Jon,

Many thanks for your swift response.

It all boils down to the best switch we can put in with a 1Gbps uplink limitation due to our budget constraints. I would love to have the ability to upgrade to 10Gig, but the 3750-E switches cost twice as much as the standard 4948.

Anything, in my opinion, is better than a 2960G, which simply wouldn't provide enough horsepower for local switching, let alone switch-to-switch comms.

Provided I raise the issue of potential switch-to-switch bottlenecks (even with a 4Gig load-balanced design using RPVST), there is little else I can recommend if management are unwilling to dig a little deeper into their pockets.

Thanks again.

Chris.

Chris:

You may want to look into the 4900M 2RU data center switch. It is specifically designed to support future 1G to 10G migrations due to its semi-fixed architecture. Its backplane is 320 Gbps and 250 Mpps for IPv4, which is more than twice that of the 4948 10G model. See if its cost is within your budget, and if it isn't, you can sell it to your management as a switch that will support future needs and thereby reduce your overall capex.

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps6021/ps9310/Data_Sheet_Cat_4900M.html

Victor

This switch is way out of our budget range.

Alrighty then... you're going to go iSCSI, so you will need the 10G port density that the 4900M offers. It solves all your problems, and I think in the long run it will save you money.
