nagel
Beginner

Increasing Bandwidth - Ether Channel Question

I have 2 - 45xx switches and 1 - 65xx switch at my core/distribution layer for the campus.  These 3 switches are removed from STP and each has 2 - 1 gig (gbic)  Etherchannel connections to the other (triangle configuration).

If I'm not mistaken this gives me a 2gig backbone across the 3 switches. 

The Server farm runs off of the 65xx switch.

The campus is approx 1250 PCs

We are running into a latency problem during peak hours (10 a.m. - 2 p.m.) in getting back to the servers.

So, correct me if I'm wrong - theoretically, 20 users each running at 100 meg could saturate our 2 gig backbone - is that correct?

I realize that this will probably never happen, as 20 users will not be sending/receiving 100 meg for sustained periods of time.
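The back-of-the-envelope math here can be sketched out quickly. The link counts and port speeds below are from the post; the average per-user utilization is a made-up figure purely for illustration.

```python
# Rough saturation check for the 2 Gb/s backbone described above.
# BACKBONE_GBPS and USER_NIC_MBPS come from the post; avg_utilization
# is a hypothetical figure for illustration only.

BACKBONE_GBPS = 2.0      # 2 x 1 Gb/s EtherChannel between switch pairs
USER_NIC_MBPS = 100      # 100 Mb/s access ports

# Worst case: users driving their NICs at full line rate.
users_to_saturate = (BACKBONE_GBPS * 1000) / USER_NIC_MBPS
print(users_to_saturate)   # 20 such users fill the channel

# More realistically, each user averages a small fraction of line rate.
avg_utilization = 0.05     # hypothetical 5% average per user
sustained_users = (BACKBONE_GBPS * 1000) / (USER_NIC_MBPS * avg_utilization)
print(sustained_users)     # ~400 such users saturate the backbone
```

So yes, 20 flat-out users is the theoretical saturation point, but at more typical average loads it takes a few hundred active users - which a 1250-PC campus can plausibly produce at peak.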

I believe I can increase the Etherchannel connections to 8 (?) per switch - Is that correct?

Am I correct in thinking that, provided there are no underlying issues, increasing the number of fibers will effectively reduce latency issues across the backbone?

OR - Am I barking mad and need to start looking for a different solution?

Thanks for any assistance you can provide.

4 REPLIES
cadet alain
Advisor

Hi,

I believe I can increase the Etherchannel connections to 8 (?) per switch - Is that correct?

This is hardware specific so you'll have to see the max number of ports in a Bundle for your platforms.

But putting more ports into a port-channel won't guarantee you higher bandwidth if the load-balancing algorithm is not a good fit for your traffic flows. You can configure a different algorithm on each side so that as many physical links as possible are used. It should differ between the side with the few servers and the side with the many clients: if the load balancing is src-mac, then on the server side all return traffic will choose only one link.
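This effect is easy to demonstrate with a toy simulation. The hash below is hypothetical (a simple XOR over address bytes, not the actual Catalyst hardware hash), but the behaviour it shows is the point: src-mac hashing collapses all of a single server's return traffic onto one member link, while hashing on source and destination together spreads it out.

```python
# Toy illustration of EtherChannel member-link selection. The hash is
# hypothetical, NOT the real Catalyst algorithm - it only demonstrates
# why src-mac balancing pins one server's traffic to one link.

N_LINKS = 2
SERVER_MAC = "00:11:22:33:44:55"
client_macs = [f"00:aa:bb:cc:dd:{i:02x}" for i in range(20)]

def pick_link(*fields, n_links=N_LINKS):
    """Hypothetical hash: XOR all the bytes of the chosen header fields,
    then take the result modulo the number of member links."""
    h = 0
    for field in fields:
        for byte in field.split(":"):
            h ^= int(byte, 16)
    return h % n_links

# Server-to-client traffic balanced on source MAC only: every frame has
# the server's MAC as source, so every frame hashes to the same link.
links_src_mac = {pick_link(SERVER_MAC) for _ in client_macs}
print(len(links_src_mac))   # 1 -- all return traffic on one link

# Same traffic balanced on source+destination MAC: the varying client
# addresses pull different flows onto different links.
links_src_dst = {pick_link(SERVER_MAC, c) for c in client_macs}
print(len(links_src_dst))   # 2 -- both links carry traffic
```

Whatever the real hash on your platform is, the lesson is the same: pick a balancing input that actually varies in the direction of the traffic you care about.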

Regards.

Alain.

Don't forget to rate helpful posts.

Hi,

Configuring EtherChannel will provide more bandwidth, but at the same time we need to consider oversubscription as well.

++ All the interfaces connect to a port ASIC at the back end.

++ So if you configure the EtherChannel with ports that all sit in the same port-ASIC group, there is no guarantee you will get the entire bandwidth, depending on the amount of traffic flowing.

++ Check the datasheet of the linecard as well.
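The oversubscription point reduces to simple arithmetic. The ASIC group size and backplane figure below are hypothetical examples, not taken from any datasheet - check the numbers for your actual linecard.

```python
# Hypothetical oversubscription math for an EtherChannel whose member
# ports all share one port-ASIC group. All figures are illustrative,
# not from any real linecard datasheet.

PORT_GBPS = 1.0
PORTS_PER_ASIC = 8         # hypothetical front ports sharing one ASIC
ASIC_UPLINK_GBPS = 6.0     # hypothetical backplane capacity of that ASIC

# How oversubscribed is the group when every port runs at line rate?
oversub_ratio = (PORTS_PER_ASIC * PORT_GBPS) / ASIC_UPLINK_GBPS
print(round(oversub_ratio, 2))   # 1.33 -- i.e. 1.33:1 oversubscribed

# An 8-port channel placed entirely in this one group is capped by
# the shared ASIC, no matter how many members it has:
channel_ports = 8
usable_gbps = min(channel_ports * PORT_GBPS, ASIC_UPLINK_GBPS)
print(usable_gbps)               # 6.0, not 8.0

# Spreading the same 8 members across two ASIC groups raises the cap:
usable_spread = min(channel_ports * PORT_GBPS, 2 * ASIC_UPLINK_GBPS)
print(usable_spread)             # 8.0
```

This is why the datasheet matters: knowing which front ports map to which ASIC lets you spread the channel members so the bundle isn't silently capped.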

Hope this helps

Cheers

Somu

Rate helpful posts

darren.g
Contributor


nagel.

Latency like this is not necessarily the fault of your network backbone.

You've also got to consider that your servers probably only have 1 gig NICs available to them - so in that case it doesn't matter what your backbone is, you can only get 1 gig (well, probably 950 meg, given overheads etc.) into and out of each server.

If your campus is 1250 PCs, and a significant percentage of them are trying to access servers across the 2 gig backbone at the same time, you could definitely run into bandwidth issues. You also have to consider whether you may be overloading the switching capacity of the component switches in pushing this much data across your links.

You don't say what kind of data is moving (file and print? video? web?), but a 2 gig backbone isn't that big in this day and age.

If you investigate your servers and they're not overloaded (Windows Task Manager will show a network trace for each server as an indication), and you're satisfied that your switches aren't being overloaded by the switching demands put on them (I doubt this in the 65xx, and only slightly more allow the possibility in the 45xx - both are good series switches with the right Sup modules), it may be more cost effective to put 10 gig interfaces into the 65xx and 45xx devices than to run more links in the EtherChannel. The maximum number of links per EtherChannel is, as someone else pointed out, hardware dependent, but is usually up to 8. You can run more bandwidth across a single pair of fibre than across all 8 pairs in an 8-port EtherChannel - using your existing fibre you could get a 20 gig backbone just as easily (although without knowing the exact model, I'm not sure whether you could get 4 x 10 gig ports into a 45xx).

Hope this helps.

Cheers.

cashqoo
Beginner

You may want to check the bandwidth utilisation on your backbone GE interfaces.