
new rack room

Hi,

My company is going to make some changes to its interior, so we are forced to build a new rack room.

The old one will stay where it is, but we will connect the two rack rooms.

My question is: which two switches would be the best solution to connect the two rack rooms?

Our first thought is that there must be two separate cable paths connecting those two switches (one in each rack room).

Each cable must pass through a lot of walls, and there is a possibility of EM interference on the cables.

We want two separate paths so that if something ever happens to one cable connecting the two switches, we wouldn't lose the connection between floors.

What would be the better solution between those two switches: a 10 Gb Ethernet uplink or 8 Gb fibre? Or maybe both, if that is possible?

And how do we make those two separate cable paths redundant?

Any answer, suggestion, or tip would be very helpful!

6 Accepted Solutions


Can I, in this example, have 10 Gb as the active link between rack rooms and 1 Gb as the backup?

Well, you can't form an EtherChannel from two links with different speeds.

Would Cisco, in this case, be smart enough to figure out that all packets should go over the 10 Gb link?

STP is going to be a b@stard of a case!

You can use two different approaches, as I see it:

you can use Spanning Tree (or PVST+ and so on),

or you can use IP SLA to bring up the 1 Gig link if the 10 Gig link goes down.

However, I would like to stress that if you use a 1 Gig link as backup for a 10 Gig link, you might run into trouble with the 1 Gig link being saturated.

You might at least want to try to build an EtherChannel out of a couple of 1 Gig links.

If you do not want to do that, you should not push more than 1 Gig over the 10 Gig link.

Since you are writing in Word, I suspect you do not like Visio for some reason.

Have you tried Dia? It's a freeware alternative to Visio.

http://live.gnome.org/Dia

Good luck

HTH
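The IP SLA approach can be sketched roughly as follows. This is a hedged sketch, not a tested configuration: the probe target 10.0.0.2, SVI Vlan10, and backup port Gi1/0/1 are all hypothetical, and the 1 Gig backup link is assumed to be kept shut down until needed.

```
! Probe the far-end switch across the 10Gig path every 5 seconds
! (target address and source interface are examples)
ip sla 1
 icmp-echo 10.0.0.2 source-interface Vlan10
 frequency 5
ip sla schedule 1 life forever start-time now
!
! Track object follows the probe's reachability state
track 1 ip sla 1 reachability
!
! EEM applet brings the 1Gig backup link up when the probe fails
event manager applet BACKUP-LINK-UP
 event track 1 state down
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface GigabitEthernet1/0/1"
 action 4.0 cli command "no shutdown"
```

A matching applet watching for `state up` would be needed to shut the backup link again once the 10 Gig path recovers, otherwise STP ends up deciding which link forwards.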

21 Replies

Leo Laohoo
Hall of Fame

Connect the two rack rooms using fibre-optic cables.  Depending on the distance, you can use either multi-mode fibre (short haul/range) or single-mode fibre (long haul/range).

A 10Gb uplink is possible.  Here are your options:

Layer 2-only switches:

2960S "D" series and 2360 (top-of-rack server switch)

Layer 2/Layer 3 switches:

3560E/3560X and 3750X/3750E

If you have more budget, you can go with the 6500 (Sup32 or Sup720) or the 4500E (with Sup7).

Thanks a lot leolaohoo!

hobbe
Level 7

Hi

Redundant paths are always nice!

Make sure that they do not cross each other or run in parallel anywhere in the building.

I agree with leolaohoo, but I would like to stress that, depending on how you feel about security between the two server rooms, there is a solution to the problem of someone eavesdropping on or tapping into the cables along the way: 802.1AE, link encryption at wire speed.

The 3750X can do that; however, it cannot currently do it towards another 3750X or 3560X, but that is changing in the next release.

That feature is perfect between server rooms.

The 3750X also has a nice power-stacking feature, and the "normal" stacking feature makes it a nice candidate as well.

If you need higher speeds, e.g. for moving VMware servers or just data-centre capacity in and between the server rooms, then I would look at the Nexus 5000 series.

Good luck

HTH

I have one more question.

Should I maybe think about buying four switches?

I am wondering: if something happens to the switch in one rack room, we will obviously be disconnected.

How likely is that situation? Should I really be concerned enough to buy four of these switches?

Yes, four switches is a good idea, as it removes single points of failure. A switch is an active component and is far more likely to fail in service than a passive fibre cable.

I would put two 3750Xs forming a stack in each server room and create a 20Gb port channel between the two stacks, with one 10 Gig link into each 3750. Then you can lose one link or one switch with no loss of service (except a reduction in bandwidth).

Cheers
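A minimal sketch of that design, to be applied on the stack in each room. The interface numbers (Te1/0/1 on stack member 1, Te2/0/1 on member 2) are examples, not taken from the thread:

```
! Bundle one 10Gig port from each stack member into a
! cross-stack EtherChannel (interface names are examples)
interface TenGigabitEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
interface TenGigabitEthernet2/0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
! The bundle comes up as a single logical trunk
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

With `channel-group ... mode active` both ends negotiate LACP, so losing one fibre or one stack member leaves the channel running at a reduced 10Gb rather than dropping the connection.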

Yes, I agree that four switches is a good solution.

However, that said:

You need to be aware that if you buy four 3750X switches, connect two in one room and two in the other, join the ones in the same room via the stack cable, and then set up an EtherChannel between the stacks, you have a very good chance of riding out a hardware failure such as a broken link or a failed power supply. But, and this is a big one: the two switches connected in a stack are logically one device, i.e. when you upgrade the IOS, or if the IOS bugs out for some reason, you will have an outage of approximately 5 minutes while the switches reload.

That said, if your company is OK with that small risk, then I would go with this solution.

And if you do buy the 3750X, a good piece of advice would be to buy four 715 W power supplies instead of the 350 W ones.

That will give you hot-standby power through power stacking.

The other way you can build it is with spanning tree: four switches in a box shape,

using spanning tree to block the redundant links.

There is a third way, with non-blocking ports, but I am not sure whether the 3750X supports it,

so we'll save that for another day.

Good luck

HTH

Hi hobbe,

Can I connect four 3560-Xs with a stack cable and then set up an EtherChannel between them?

I see that you have described this possibility for the 3750X, but is the 3560X capable of it?

No, sorry, the 3560 does not support stacking,

so they are not capable of that type of EtherChannel between physical devices that are logically one unit.

But you can still trunk to one 3560, or trunk to two different 3560s and have spanning tree turn the trunk link off on one of them.

Good luck

HTH
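The spanning-tree variant can be sketched like this, assuming the 10 Gig trunk is Te0/1 and the 1 Gig trunk on the second 3560 is Gi0/2 (example names, not from the thread). STP already prefers the faster link through its default port cost, but the preference can be pinned explicitly:

```
! On the 10Gig trunk: low cost, preferred forwarding path
interface TenGigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 spanning-tree cost 2
!
! On the 1Gig trunk: high cost, blocked while the 10Gig trunk is up
interface GigabitEthernet0/2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 spanning-tree cost 100
```

If the 10 Gig trunk fails, STP unblocks the 1 Gig trunk after reconvergence; with Rapid PVST+ that is typically a matter of seconds.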

One more question:

I am also considering the 2960S-24TD-L because of the price.

Can it use both of its 10Gb SFP+ modules at the same time?

I am thinking of buying two 2960S-24TD-Ls and connecting them with four 10Gb cables.

I don't have personal experience with the 2960, but from what I can see, yes, both of them should work.

But if you are looking at the 2960S, why don't you stack them?

Check out the price difference between a 2960 with modules and a 3750X LAN Base (if you only want L2), or even IP Base.

There are some nice features that might be worth the price difference.

Good luck

HTH

We are going to buy two 3560X switches and use two of our existing 2960-series switches to make the connection between the rack rooms with four switches.

And later on, when our budget grows again, we will buy two more 3560X switches.

I don't know if this is possible:

the connection between rack rooms will be configured to go over a 10Gb link through the 3560Xs, with a backup 1Gb link over the 2960 switches.

Will this work? I mean, how will a client computer know to send packets over the 10Gb link and not over the 1Gb one?

Can those links be prioritized? What is best practice in this case?

connection between rack rooms will be configured to go through 10gb link over 3560x, and backup link 1gb over 2960 switches to be made.
Will this work? I mean how will client computer know to send packets over 10 gb link, and not over 1gb?

This is doable, Dusan. Whether it will hold up in the "wild" is another matter, and my answer depends on the size and shape of the network involved. If your network does not completely fill the 10Gb link, then it's possible to fall back to 1Gb. But if your network uses the full 10Gb, like a 10-year-old with a large Coke from McDonald's on a hot summer day, then my answer is no.

In your case you are pairing a 3560X with a 2960 and doing "redundancy".

What we've done, in some parts of our network, is get a stack of 3750Es (3750Xs will also do) and run an EtherChannel to a stacked 2960S. The EtherChannel goes from the first member of the 3750 stack to the first member of the 2960 stack for one link, and between the second stack members on each side for the second link, and so on. That way, if one physical link goes down, I've got redundancy; if one of the switches on either side goes down, I have an alternative uplink.

OK, here's the exact network layout that we will have  when we connect two rack rooms:

[URL=http://img641.imageshack.us/i/ciscorack.jpg/][IMG]http://img641.imageshack.us/img641/7987/ciscorack.jpg[/IMG][/URL]

Uploaded with [URL=http://imageshack.us]ImageShack.us[/URL]

Two things are bothering me:

1. How will I connect the access switches to the core switches?

The core uplinks will be busy with the interconnection between the core switches.

Can I, in this case, use the access ports of the core switches? Do I need to put any special configuration on those ports other than making them trunks?

2. What will be the best way to connect those four "core" switches?

We are pretty low on budget, so we couldn't go for four 3750Xs, which was my first choice. We are going to upgrade the two 2960S-24TD-Ls to WS-C3560X-24T-S switches in a year or two. So we will have only WS-C3560X-24T-S switches as the "core" between rack rooms.

Please tell me what would be the best way to organize this kind of network. I could get some other switch or even change the network layout.

Thanks in advance!

2. What will be the best way to connect those 4 "core" switches?

You can get up to two 10Gb SFP+ ports on each 3560X and 2960S "D" series.  You can use those.

You can also stack the 2960S units (up to four per stack) so the stack shares one configuration.

See the attachment to check whether it is good enough for you.
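Stacking the 2960S is mostly plug-and-play once the FlexStack modules and cables are installed; about the only thing worth configuring is the member priority, so the intended switch wins the stack-master election. The member numbers below are examples:

```
! Higher priority wins the stack-master election (range 1-15)
switch 1 priority 15
switch 2 priority 10
```

After a reload the stack elects the member with the highest priority as master, and the whole stack is then managed through that one configuration.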
