Stacked 3750's problematic?

coreywilson
Level 1

Hi Everyone,

We have an ESX environment of 24 hosts, 900 VMs, six EqualLogic SAN arrays, and a few 48-port 1 Gb 3750 switches. Four 3750s handle our iSCSI network and four handle our virtual infrastructure networking: two in our SAN rack, two in our ESX host rack, and the other four in our VM host rack.

We currently have both the iSCSI switches stacked and the network switches stacked, since we were previously using EtherChannel. We have since moved away from EtherChannel on the iSCSI network (it used to carry NFS as well as iSCSI), but the switches are still in stack mode. Likewise for the network switches: we are no longer using EtherChannel there either.

We have experienced some problems with this stacked configuration that I do not recall seeing before we had stacked switches (we were previously running 1 Gb 2950s).

1) On two occasions we have experienced a loop (or something like it) where there was MAC flapping, and somehow a communication error rolled through each VM switch one by one, causing VMs to become randomly inaccessible. We were basically forced to shut down every switch in the stack and power them back up one at a time, a few minutes apart. This appeared to clear the ARP cache and things were okay again. The first case seemed to be caused by a stack cable being removed from a powered-on switch, though I have been told this should not cause a problem. The second case was two switches (one in each of two different racks) losing power due to a circuit breaker tripping.
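For reference, this is roughly the kind of IOS command sequence I understand can be used to diagnose a loop/MAC flap without power cycling the whole stack (exact syntax varies a bit by IOS release, and the MAC address below is just a placeholder):

```
! Check STP state and the most recent topology change per VLAN
show spanning-tree summary
show spanning-tree detail | include occurr
! Watch whether a given MAC keeps moving between ports (placeholder MAC)
show mac address-table address 0011.2233.4455
! Flush dynamically learned MACs instead of rebooting the switches
clear mac address-table dynamic
! Verify the stack ring itself is healthy on every member
show switch stack-ports
```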

2) If one switch in the stack is restarted or unplugged (for whatever reason), traffic going over that switch seems to die and sometimes takes 30 seconds to a minute to recover, which obviously causes connectivity issues. I was under the impression that failover/rerouting in a stack should be nearly instantaneous (a couple of seconds at most), since it's really one large logical switch.
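From what I have read, part of the long outage on a 3750 stack can come from stack-master re-election when the member that fails happens to be the master; pinning the master role to a stable member is supposed to make that more predictable. A sketch (the member number and priority here are assumptions, not our actual config):

```
! Show member number, role (Master/Member), priority and state
show switch
! In global config: prefer member 1 as master (priority 1-15, highest wins)
switch 1 priority 15
```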

The issue here is that our environment is extremely critical, but I am not the network engineer. I have a large say in how the configuration goes, but I am not a Cisco expert, so I can only go by what I read.

So really my question is: is there any real advantage to remaining in a stacked configuration in this situation? We really don't make changes to the switches, so management is a non-issue IMO. The iSCSI switches are isolated and all access ports, and the VM switches are all trunk ports, so we control everything from our host networking stack. The only advantage I really see is the increased throughput between the SAN-attached iSCSI switches and the host iSCSI switches (the 32 Gbps StackWise ring?) vs. two or three copper trunk links between each switch. With the random I/O on iSCSI it is very hard to saturate even a single 1 Gbps link anyway.
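For comparison, my understanding is that the unstacked alternative is just a plain LACP EtherChannel trunk between two standalone 3750s, which gives 2-3 Gbps between switches instead of the stack ring. A rough sketch (the port numbers are made up for illustration):

```
! Bundle the last two copper ports into an LACP channel
interface range GigabitEthernet1/0/47 - 48
 channel-group 1 mode active
!
! The logical port-channel carries the VM trunk
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
```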

Secondly, can anyone explain what may have caused the rolling blackouts between the VM networking switches, and why dropping one stack member would interrupt service for so long?

Thanks

1 Reply

paolo bevilacqua
Hall of Fame

A chassis-based, mid-range or high-end switch designed for high availability would probably work better for you.
