10-15-2004 05:48 AM - edited 03-02-2019 07:17 PM
Hello,
Normally we use a daisy chain to interconnect our 2950 switches in the wiring closet: for example, Switch-1 G0/1 links to the backbone, Switch-1 G0/2 to Switch-2 G0/1, Switch-2 G0/2 to Switch-3 G0/1, and so on. I'm just wondering: is there a limit to how many switches can be daisy-chained before we run into performance trouble?
Thanks
10-15-2004 05:59 AM
Check out some previous threads on this issue.
10-15-2004 06:23 AM
Hi,
there is a limit regarding the number of switches daisy-chained in a stack. I can't remember exactly, but I think it's 9 switches.
But there is another more serious limit:
The STP network diameter should not exceed 7.
Regards,
Milan
10-16-2004 02:04 AM
Actually, there's no official limit to how many switches you can daisy-chain in the manner you describe, IF there's no redundant connection from the end of the chain back to your backbone.
If you did have a redundant connection from the end of the chain back, then Spanning Tree Protocol issues would come into play. Default STP timers are based on a network diameter of 7. In smaller-diameter networks, some people shorten the timers (Hello Time, Max Age, Forward Delay) to optimize the network for faster STP reconvergence.
You can also go the other way, and lengthen the STP timers to allow for a larger network diameter than 7. But you do this at the expense of slower STP reconvergence.
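For reference, IOS can derive a consistent set of timers from a target diameter, so you don't have to work out Hello Time, Max Age, and Forward Delay by hand. A rough sketch (VLAN 1 and a diameter of 5 are just placeholder values for illustration):

```
! On the switch you want as the root bridge: make it root for VLAN 1
! and let IOS compute matching timers for a network diameter of 5.
Switch(config)# spanning-tree vlan 1 root primary diameter 5
!
! The timers can also be set individually on the root bridge
! (defaults: hello 2, max-age 20, forward-delay 15 seconds).
! The shortened values shown here are illustrative only.
Switch(config)# spanning-tree vlan 1 hello-time 2
Switch(config)# spanning-tree vlan 1 max-age 14
Switch(config)# spanning-tree vlan 1 forward-time 10
```

Whichever way you set them, configure the timers on the root bridge; it propagates them to the rest of the spanning tree in its BPDUs.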
If you don't have a redundant connection from the end of the chain back, then the practical limit on how many switches you daisy-chain will depend on how much traffic they generate, because the further away you get from the backbone, the more users you will have contending for access to the shared uplinks.
In a busy network, it is conceivable that users on the switch closest to the backbone will enjoy faster performance and responsiveness than users downstream on the chain, who will encounter more and more contention for uplink bandwidth the further down the chain you go. From this perspective, the uplinks are a potential bottleneck, and a flatter switched network design would be preferable. But even a well-designed, flattened-out network will not rule out performance trouble if everyone's trying to use one server and it's only connected at Fast Ethernet speed. Even a Gigabit Ethernet server connection can be a bottleneck.
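To make the uplink-contention point concrete, here's a back-of-the-envelope model (the switch counts and per-user traffic figures are made up for illustration):

```python
# Rough contention model for a daisy chain of access switches.
# Each uplink toward the backbone carries the backbone-bound traffic
# of its own switch plus everything chained behind it.

def uplink_load_mbps(users_per_switch, per_user_mbps):
    """Load on each uplink toward the backbone, per switch position
    (index 0 = the switch closest to the backbone)."""
    loads = []
    remaining = sum(users_per_switch)
    for users in users_per_switch:
        loads.append(remaining * per_user_mbps)
        remaining -= users
    return loads

# Three switches of 20 users each, 1 Mb/s of backbone traffic per user:
print(uplink_load_mbps([20, 20, 20], 1.0))  # [60.0, 40.0, 20.0]
```

With those made-up numbers, the first uplink carries 60 Mb/s while the last carries only 20 Mb/s, which is why the link nearest the backbone saturates first in a chain.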
Performance trouble all depends on where the traffic is bottlenecking. If it's on a server NIC, well, then there's not much you can do about it beyond Gigabit speeds at this point. You don't even get full GigE throughput with most NICs anyway. Spreading the workload across several servers is about all you can do to improve that situation. So in this scenario, you can possibly get away with daisy-chaining your switches, and save some money in the process.
But if the bottleneck is on the chained uplinks and the server connections are not being fully utilized, then the network design is the problem. Time to spend a little more money on equipment and connections and flatten the network out, to give your users a more equal chance at network access. Or, rearrange your users' work patterns so that the network load is spread out over time, and keep your switches daisy-chained.
Use MRTG or PRTG to monitor your link utilization on key connections over time, to get a sense of where you may have (or soon develop) performance bottlenecks.
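Under the hood, that kind of monitoring boils down to sampling the interface octet counters (e.g. SNMP ifInOctets/ifOutOctets) at a fixed interval and converting the delta to percent utilization. A minimal sketch of the math, with illustrative numbers:

```python
# MRTG-style utilization calculation: sample an interface's octet
# counter twice and turn the delta into percent utilization.
# (Ignores counter wrap for simplicity.)

def utilization_pct(octets_t0, octets_t1, interval_s, link_mbps):
    """Percent utilization of a link between two counter samples."""
    bits = (octets_t1 - octets_t0) * 8
    return 100.0 * bits / (interval_s * link_mbps * 1_000_000)

# 30 MB transferred during a 5-minute polling interval on a
# 100 Mb/s uplink:
print(utilization_pct(0, 30_000_000, 300, 100))  # 0.8
```

Graphing this value over days or weeks on the uplinks nearest the backbone is what tells you whether the chain is approaching saturation.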