Slow performance through Catalyst 3560 w/SFPs

lfarago1
Level 1

I have some WS-C3560-48PS-S switches with 1000BaseT and fiber SX gigabit SFPs connected to a server-farm switch, a WS-C3750G-24TS-E. Servers are attached with gigabit-capable NICs.

I found extremely sluggish performance when installing Windows XP on new PCs over the network. The PCs first boot a DOS-based Windows 98 TCP/IP stack from floppy and then start copying the necessary files from the gigabit-attached Windows 2000 servers. So the client PCs are on the 3560's 100Mbps ports, and the backbone is 1000BaseT or fiber SX (tried both, no difference) to the C3750G-24TS-E, where the server is attached and set to gigabit speed.

Cabling is fine, and no port errors occur.

I managed to speed up the installation process (it took 4 hours and became 15 minutes!!!) with any one of the following three measures:

1. Lowered the server connection speed to 100Mbps full duplex on both the switch port and the NIC (a quick config sketch follows this list).

2. Set the backbone to 100Mbps full duplex.

3. Relocated the server to a 100Mbps port on the 3560 switch.
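
For reference, measure 1 is just the standard speed/duplex commands on the server-facing port; the interface name below is only a placeholder for the actual server port:

! sketch of measure 1 - force the server-facing port to 100 Mbps full duplex
interface GigabitEthernet0/1
 description server port forced to 100/full for testing
 speed 100
 duplex full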

I could exclude the C3750G-24TS-E from the suspect list by moving the server directly to the 3560's 1000BaseT SFP; the same phenomenon could still be seen.

I suspect the Windows 98 TCP/IP stack is to blame for the sluggish performance.

Has anyone come across a similar problem, or does anyone have a better idea?

7 Replies

Hi,

Do you have "mls qos" enabled. If so.. I that might be the issue. I was having a similar problem on my 3750's. Once I turned off mls qos, the problem corrected itself.

-- Dominique

Hello,

Yes, I have entered the "mls qos" command on both the 3750 and 3560 switches, with no other QoS measures. I also run IP telephony (CallManager) on the network. Thank you for the advice; I will try the scenario without "mls qos", and I may be able to do so without compromising voice QoS since I run voice traffic on a separate voice VLAN.

I will let you know the outcome.

Thanks.

Does anyone know if this is a bug?

We have had the same thing with some 3560s running 12.2(20)EX: no QoS config, but we found that mls qos was on by default and doesn't show in the config. Issuing the no mls qos command fixed the throughput problems.

Here is what we did...

Auto-QoS was set on the 48-port switches. Any device on the 24TS connecting to another device on the 24TS works fine. Devices connecting from the 48P to the 24TS, with Auto-QoS on the 48P, experience severe performance degradation. Below is the global command to fix this.

**** Global Command to Fix the Issue

mls qos queue-set output 1 threshold 2 400 400 100 400
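
For anyone decoding that line, the four numbers map to the threshold1, threshold2, reserved, and maximum rows of the queue-set output shown below, so as I read the syntax it says:

! mls qos queue-set output <queue-set> threshold <queue> <thr1> <thr2> <reserved> <maximum>
! i.e. for queue-set 1, egress queue 2:
!   drop threshold 1 = 400 (percent of the queue's allocated buffer)
!   drop threshold 2 = 400
!   reserved         = 100
!   maximum          = 400
mls qos queue-set output 1 threshold 2 400 400 100 400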

****** Queue Setting Before Fix **********

MID3750Stack#sh mls qos queue-set 1
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      20      20      20      40
threshold1:     100      50     100     100
threshold2:     100      50     100     100
reserved  :      50     100      50      50
maximum   :     400     400     400     400

********** Queue Setting After Fix *************

MID3750Stack#sh mls qos queue-set 1
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      20      20      20      40
threshold1:     100     400     100     100
threshold2:     100     400     100     100
reserved  :      50     100      50      50
maximum   :     400     400     400     400

Notice in the output above that the thresholds for queue 2 are, by default, configured to allow only 50% of the queue's buffer to be used before traffic is dropped. Queue 2 is the lowest-priority queue and services the type of traffic that your switch has been dropping. We changed this queue to accommodate a higher amount of the lower-priority traffic, which resolves the issue.

As you can see, there are core differences in the architecture of the two switches which account for the different behaviors when it comes to buffer management.

Since posting the question, I have found the following bug, which looks similar to your changes. Thanks.

CSCeg29704 Bug Details

Headline: Need to change 3750/3560 default QOS egress buffer allocation
Product: 3750
Model: all
Component: qos
Severity: 3
Status: Verified
First Found-in Version: 12.2(20)SE3
First Fixed-in Version: 12.2(25)SEB, 12.2(25)SEC

Release Notes

After enabling QoS on 3750 and 3560 switches, certain applications (mostly bursty and TCP-based) experience significant performance degradation due to unexpected packet drops on some of the egress queues.

This is because the initial default egress queue threshold settings (when QoS is enabled) are not optimized for this type of traffic pattern.

These initial default queue threshold settings therefore need to be changed to accommodate this traffic.

Workaround:

Tune the egress queue threshold parameters to allocate more to the affected queues.

Specifically, egress queue 2 thresholds need to have the following settings:

Threshold1 = 200
Threshold2 = 200
Reserved   = 50
Maximum    = 400
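
Expressed with the same global command used earlier in this thread, those workaround values would be (my translation of the bug's numbers, not text from the bug itself):

! queue-set 1, egress queue 2: threshold1 200, threshold2 200, reserved 50, maximum 400
mls qos queue-set output 1 threshold 2 200 200 50 400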

Great post!

With only 20 buffers for queue 2 and imaging-type behavior, I can see those buffers being constantly above 50% full (as in the original poster's observation: 1000 going to 100).

Buffer drops = TCP window starts closing = poor performance.
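
To put rough numbers on that (purely illustrative, assuming a full 64 KB sender window and 1500-byte frames):

traffic arrives at 1 Gb/s but drains at 100 Mb/s  ->  roughly 90% of each burst must be queued
0.9 x 64 KB ≈ 57 KB ≈ 38 full-size frames waiting in that port's egress queue
queue 2 drops once its threshold is crossed  ->  the TCP window closes  ->  throughput collapses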

Are there any show commands on the 3560 switch that would display the actual usage and overflow of that queue on a port, and show that drops are taking place? I could not find any.

Thanks
