I have to create a new switching-infrastructure design with a 6500 as a collapsed distribution/core and an access layer. Traditionally, if enough fiber strands exist, I connect two fiber strands from each access switch (3560s) directly to the core. This provides redundancy.
However, should I be looking at 3750s? What are their advantages, apart from managing multiple switches using one IP address? If I go the 3750 route, can I connect one fiber strand to the topmost switch in the stack and another to the bottommost switch for redundancy? Can I do an EtherChannel between these two switches and the core? Any links would be appreciated too...
There are two ways:
You can use SFP interconnect cables on 3560s to combine a few of them into a single stack/cluster and then manage them using a single IP. However, connecting more than 3 switches with the SFP interconnect cable is not recommended. This also will not let you configure an EtherChannel across the 2 links.
You can use 3750s, which give you true stacking capabilities, and you can configure a cross-stack EtherChannel to a single chassis. You cannot configure a cross-stack EtherChannel to two different chassis in the core.
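For reference, a cross-stack EtherChannel on a 3750 stack can be sketched roughly like this. The port numbers and channel-group number are made up for illustration; also note that early 3750 IOS releases only supported `mode on` for cross-stack bundles (LACP support for cross-stack EtherChannels was added in a later release), so check your software version.

```
! Hypothetical sketch: one member port on each stack member,
! bundled into a single cross-stack EtherChannel
interface GigabitEthernet1/0/25
 channel-group 1 mode on
!
interface GigabitEthernet2/0/25
 channel-group 1 mode on
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

Both physical links land on the same core chassis, which is why this works on a stack but not split across two separate cores.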
HTH. Please rate if it does.
So using (3) switches in the 3750 stack in my IDF, I could connect a fiber cable from each switch in the stack back to three different blades on the core, right? This would enable full redundancy between access and core and also let me manage the multiple access switches in one closet using a single IP address. Is this a good way to proceed?
My other design would be to connect each of (3) 3560s individually to the core. With this design, redundancy is a little lacking: if a fiber connection fails, that switch loses connectivity, whereas with the 3750s it would retain connectivity to the core through the other switches in the stack.
Your thoughts on the advantages & disadvantages of either design? If you have a white paper with such access<->core design examples, that would be appreciated too.
Good day! Just a suggestion: a stack of multiple 3750s (using stacking cables) logically becomes a single switch. To get full redundancy without the effects of STP, try connecting each 3750 to a corresponding port on the core and grouping them as an EtherChannel (the same should be configured on the core switch as well). This way you will have a logical 3-Gbps link between your core and your stacked 3750s. Configure this EtherChannel as a trunk, then just create VLAN interfaces if you are to do L3. If one 3750 fails, your EtherChannel just loses 1/3 of its capacity, though of course equipment connected to the failed 3750 would also lose connectivity to the network.
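A rough sketch of that suggestion, assuming a 3-member stack with one gig uplink per member (interface numbers, VLAN, and addressing are hypothetical):

```
! On the 3750 stack: bundle one uplink from each stack member
interface range GigabitEthernet1/0/1 , GigabitEthernet2/0/1 , GigabitEthernet3/0/1
 channel-group 1 mode on
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! Optional L3: an SVI for a user VLAN (addressing is made up)
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
```

A matching Port-channel/trunk configuration would be needed on the core side; with `mode on` there is no PAgP/LACP negotiation, so mismatched ends can cause loops — configure both sides before bringing the links up.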
Hope this helps.
We use 3750s exclusively for access-layer connectivity with GLBP in a dual-homed configuration to two 6509s in the distribution layer. If the closet/stack has one 3750, we connect Dist-R to Gi1/0/1 and Dist-L to Gi1/0/2. If there are 2 or more 3750s in the stack, then Dist-R goes to Gi1/0/1 and Dist-L to Gi2/0/1. With proper StackWise cabling, you can add or remove a 3750 with zero service interruption. The key is to always power off a 3750 before you remove it from the stack ring, and to connect the stack ring before powering up a new 3750 being added to the stack. A single IP for stack management simplifies management greatly. We used to have multiple 3550s in the closet daisy-chained together, but management and redundancy were much more challenging.
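For anyone unfamiliar with GLBP in this dual-homed setup, here is a minimal sketch on the two distribution 6509s (VLAN number, addresses, priorities, and group number are all hypothetical):

```
! Dist-R: higher priority with preempt, so it leads the GLBP group
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 glbp 10 ip 10.1.10.1
 glbp 10 priority 110
 glbp 10 preempt
!
! Dist-L: default priority, shares the same virtual IP
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 glbp 10 ip 10.1.10.1
```

Unlike HSRP, GLBP load-balances by handing out different virtual MAC addresses for the same virtual gateway IP, so both 6509s forward traffic while still covering for each other on failure.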
This is what I finally plan to do. Please let me know if it makes sense.
First, one cannot establish a cross-stack EtherChannel to (2) different switching cores, CoreA and CoreB. Right?
Hence, what I plan to do is this:
Given 2 or more 3750s in my IDF closet, I'll have one 10-Gb L2 connection back to CoreA and another 10-Gb L2 connection back to CoreB. Of course, CoreA and CoreB are themselves interconnected via L2 10-Gb links. This way, if one of the fiber strands goes bad, the 3750s in the stack still maintain connectivity. Of course, I'll have to rely on spanning tree so that only a single 10-Gb connection from the IDF closet to the core is in use; an EtherChannel would not be possible.
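Since that design leans on spanning tree to pick the active uplink, it's worth pinning the root deterministically rather than letting STP elect it. A minimal sketch, assuming the VLAN range 1-100 is hypothetical:

```
! On CoreA: make it the root bridge for the user VLANs
spanning-tree vlan 1-100 root primary
!
! On CoreB: backup root, so a CoreA failure has a predictable outcome
spanning-tree vlan 1-100 root secondary
!
! On the 3750 access stack: speed up convergence on end-host ports
spanning-tree portfast default
```

With CoreA as root, the stack forwards on its CoreA uplink and blocks the CoreB uplink, and failover behavior is predictable instead of depending on default bridge priorities.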
Design looks good?
L3 between Core and Dist is the better design, but only if you have a subnet/VLAN per closet/stack. If you span VLANs across closets, then you need L2 between Dist. Build the fiber topology from Dist to Access shaped as a V, not a triangle, for best convergence without needing STP.
Guys, I had a follow-up question to this design I'm doing but instead of continuing here, I opened a new thread. Simply search the forums for: CORE 6509 design question
Hope to see you there ;)