08-09-2007 09:56 PM - edited 03-05-2019 05:49 PM
We have been given a rather complex proposal by our 'independent' system integrators.
I call it complex because they have incorporated a distribution layer, a core layer, and a separate set of server farm switches into the design of our LAN, which serves a 17-floor building with about 250 nodes per floor.
I understand this is the generic Cisco SME/Enterprise design these guys have put forward, but our requirements are simple.
As mentioned above, it is a single building with 17 floors, each floor having about 250 ordinary Windows XP Pro users. I don't foresee the need for more than a gigabit uplink from each IDF to the core, let alone a distribution tier of switches.
These guys, on the contrary, are proposing 10G uplinks from all IDFs.
Are 10G uplinks and a distribution layer the norm these days?
Disregarding the need for VoIP and wireless (we might not be considering them at this stage), please take another quick look at the diagram and advise on any anomalies you might see.
Minimizing oversubscription and securing the server farm are among my chief concerns.
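For perspective, a rough back-of-the-envelope calculation of the uplink math, assuming every one of the 250 users per floor has a 1 Gbps access port (the ~20:1 guideline mentioned in the comments is a commonly cited campus design rule of thumb, not a figure from the proposal):

```python
# Back-of-the-envelope oversubscription check for one IDF.
# Assumed: 250 users per floor, each on a 1 Gbps access port.

users_per_floor = 250
access_port_gbps = 1

for uplink_gbps in (1, 10):
    ratio = (users_per_floor * access_port_gbps) / uplink_gbps
    print(f"{uplink_gbps} Gbps uplink -> {ratio:.0f}:1 oversubscription")

# A commonly cited access-layer guideline is around 20:1.
# 250:1 on a single gig uplink only works if most users are idle
# most of the time; 25:1 on a 10G uplink sits near the guideline.
```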
Any and all comments that might help us arrive at a more economical but equally efficient solution would be much appreciated.
10-22-2007 11:37 AM
"In light of this new bit of info, I might need to reconsider our decision to go with stacking 5 x 3750Es within each IDF as compared to 4506."
What information is causing you to reconsider?
"You mentioned that 3750E-48PD Stackwise+ is listed at 64 Gbps but it's really 32 Gbps (full duplex) despite having 128-Gbps switching fabric; when we compare against 4506 with SUP II + 10GE, I get 108 Gbps but according your analysis, I am only effectively getting 6 Gbps connectivity per line card (X4548-GB-RJ45; Five of 'em) to fabric. "
Correct, although the 128 Gbps switching fabric is per individual 3750-E, while the 4500's 108 Gbps fabric is for the whole chassis. (A similar issue applies to the pps ratings.)
"Can I safely deduce that without considering uplinks to core, which will be same in either case, I am about 2Gbps better (throughput wise - 30Gbps vs 32 Gbps) with stacked 3750E vis a vis 4506/7R with 48port, 1gig Line cards in five slots?"
No, because comparing bandwidth between the 4500 fabric and the 3750-E ring is a bit like comparing apples and oranges. The 4500 line card supports 6 Gbps to a fabric, while the 3750-E supports 32 Gbps to a ring. From this perspective, the 3750-E provides roughly 5 times the bandwidth to each stack member vs. what the 4500 does to a line card.
On the other hand, since the 4500 provides a fabric, the full 6 Gbps is dedicated to each line card, whereas on the 3750-E, traffic between non-adjacent stack members must transit intermediate members and share the ring bandwidth with them. The more members in the stack, the more likely the effective ring bandwidth will be reduced.
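To make the comparison concrete, here is a small sketch using the figures from this thread (the "shared vs. dedicated" caveat is the key point; these are the numbers discussed above, not official sizing figures):

```python
# Comparing the two architectures with the figures from this thread.

# 4506 with Sup II-Plus-10GE: each 48-port gig line card gets a
# dedicated 6 Gbps connection to the chassis fabric.
ports, card_gbps = 48, 6
print(f"4506 line card: {ports / card_gbps:.0f}:1 oversubscription, "
      f"{card_gbps} Gbps dedicated per card")

# 3750-E stack: StackWise+ is listed at 64 Gbps, i.e. 32 Gbps full
# duplex, but that ring is shared by all members, including transit
# traffic between non-adjacent members.
ring_gbps, members = 32, 5
print(f"3750-E stack: up to {ring_gbps} Gbps per member, shared "
      f"across all {members} members and their transit traffic")
```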
For an edge device, assuming most traffic will transit the uplinks, the 3750-E appears to be the better fit, since the bottleneck will likely be the uplinks rather than bandwidth to/from the stack members.
For a distribution or core device, or an edge device with an unusually high many-to-many traffic distribution, a fabric architecture will likely be better (e.g. 4500, 6500 with sup 2 and SFM, or sup 720).
PS:
Another possible contender for an edge device is a 6500 with a sup32/sup32-PISA. This architecture uses the 6500's 32 Gbps shared bus; compare its bus architecture against a 3750/3750-E stack's ring.
Since the 3750-E supports local switching and each member has a 128 Gbps switch fabric, placing hosts that communicate heavily with each other on the same 3750-E would help maximize their performance. Since the 3750-E ring also supports spatial reuse, placing such hosts in adjacent stack members might also help performance.
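As a toy illustration of why adjacency matters, here is a simplified hop-count model of the stack ring (the shorter-arc behavior is a simplifying assumption for illustration, not a precise description of StackWise internals):

```python
# Toy model of a stack ring: traffic between two members traverses
# the shorter arc of the ring, so hosts on the same or adjacent
# members keep their traffic local or one hop away.

def ring_hops(src: int, dst: int, members: int) -> int:
    """Hops along the shorter direction around the ring."""
    d = abs(src - dst) % members
    return min(d, members - d)

members = 5  # e.g. a 5-unit 3750-E stack
print(ring_hops(1, 1, members))  # 0: local switching, full 128 Gbps fabric
print(ring_hops(1, 2, members))  # 1: adjacent members, spatial reuse helps
print(ring_hops(1, 4, members))  # 2: transits an intermediate member
```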
10-22-2007 10:23 PM
Joseph, thanks.
Now the cost factor comes into play. I've realized that a stack of five 3750Es in each IDF costs me about $200K more overall than using a 4506 on every floor. Until now I was under the impression that a chassis is costlier than a stack.
How about using three 3750Gs and two 3750Es (at the top and bottom of the stack) to provide the 10G uplinks to the core within each of my IDFs?
Unlike the 3750E, listed at 64 Gbps with a forwarding rate of 101.2 Mpps, the 3750G is listed at 32 Gbps and 38.7 Mpps.
Now, if I stack these two models together with the 3750Es handling the uplinks, what would be the net drop in efficiency?
Do you foresee any other future problems with such a mix and match?
PS: I'm afraid that if I consider a 6500 at the user edge, as per your alternative suggestion, the price will shoot up even further.
10-23-2007 04:19 AM
For the user edge, although the 4506 offers less raw bandwidth to its line card ports than a 3750-E stack, I think it would be good enough. I don't see the 8:1 line card oversubscription (48 gigabit ports sharing 6 Gbps to the fabric) as a real problem. One major issue is the loss of redundancy with a 4506, which lacks the 4507R's option of a second sup, vs. the stack. The second major issue is the choice, when using 4500s, between the sup II-Plus-10GE and the sup V-10GE. How does the 4500 sup choice impact your cost?
With regard to a server edge, I think the raw bandwidth becomes more of a concern.
With regard to mixing 3750Gs and 3750-Es in the same stack, this can be done, although you lose the StackWise+ bandwidth and its spatial reuse (and thus some of the 3750-E's performance). However, since a user IDF usually isn't as demanding as a server edge, 3750Gs alone are probably good enough. The problem is how to provide the 10 gig uplink. There was the 3750G-16TD switch, but it's end-of-sale. That leaves only the 3750-E to obtain 10 gig, which is more expensive. (So was the 3750G-16TD.) Using one at the top and one at the bottom of the stack is one method. (It's a design I suggested, and had accepted, at a current customer, although their IDF stacks run up to the maximum of 9 units.)
With a mixed stack, you can also use just one 3750-E as a head switch (i.e. one 3750-E and four 3750Gs) and run the dual 10 gigs from it. If you worry about its failure, it's a situation similar to a 4506 losing its sup. However, if you have a third fiber run, you can run a "failover" gig connection from the bottom of the stack. (Another option is to run just one "primary path" 10 gig from the head 3750-E and one "failover path" gig from the bottom 3750G. You give up a 10 gig, but for a user IDF that's probably not really necessary, and it saves on cost.)
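To summarize the uplink options just described, a quick sketch of the capacity each leaves you with before and after a head-switch failure (the labels and failure figures are an illustrative reading of the options above, not a Cisco sizing tool):

```python
# Uplink capacity (Gbps) for the mixed-stack options discussed,
# normally and after the head 3750-E fails. Labels are illustrative.

options = {
    # option: (normal Gbps, Gbps after head 3750-E failure)
    "2 x 3750-E, one 10G each (top/bottom)": (20, 10),
    "1 x 3750-E, dual 10G, no failover":     (20, 0),
    "1 x 3750-E, dual 10G + 1G failover":    (20, 1),
    "1 x 3750-E, one 10G + 1G failover":     (10, 1),
}

for name, (normal, failed) in options.items():
    print(f"{name}: {normal} Gbps normal, {failed} Gbps after failure")
```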
PS:
The mention of the 6500 with sup32 for IDFs was to contrast its bus architecture with the stack ring, i.e. the bandwidth of a full fabric isn't always necessary.