
100 Mbits/sec vs 1 Gbits/sec ... urban legend ???

EricOuellet
Level 1

Hi,

I work at a very large company with many experts in every domain.

Many of them maintain an urban legend (a real myth) that switching from 100 Mb/s Ethernet to 1 Gb/s Ethernet would slow users down due to a server bottleneck.

I totally disagree with them, and I maintain that, in the worst case, the speed will simply stay the same as it is now.

In the better case, the speed would probably come close to 10x faster.

We only have 1 Gbit/s switches (I mean no hubs) with Category 5 cables.

-------------------------

Question:

-------------------------

Is there any way that a network could slow down if everybody is raised from 100 Mb/s to 1 Gb/s?

Thanks a lot,

Eric


11 Replies

sdheer
Cisco Employee

Hi Eric,

First of all, the bottleneck would depend on many factors, such as the hardware you are using, the rate of traffic, the configuration on the ports, the type of traffic, etc.

In what scenario are you talking about a slowdown of the network? Could you provide a few more specifics on the issue?

Regards,

Swati

It could be really hard to find, but if absolutely needed, I can dig up hardware/stats details... but...

Could you give me just one scenario where a general network could suffer from switching from 100 Mbits/s to 1 Gbit/s (without hubs)?

Thanks

Eric

Eric,

If I understood the question correctly, then the answer depends.

If your hosts are currently at 100 Mb while your servers are already at 1 Gb, and you then upgrade the hosts, you would likely see worse performance.  The main reason for this is the TCP slow-start mechanism.  If you give your hosts faster access while keeping the same bandwidth to your servers, you are indeed likely to congest your server ports.

Congestion drops are fairly random (at least within a class/queue), and this packet loss can cause the TCP session to throttle itself and then slow-start.   You can test this yourself fairly easily with Wireshark: looking at the I/O graphs, you will see your Tx rate drop suddenly and take quite some time to get back up to speed again.  The easiest way to picture it is as a tree: the hosts are the branches, so not much of the network's traffic needs to go to each of them, but the servers sit near the thicker trunk, so much more traffic converges on them.

Of course, if your hosts are already the same speed as your servers, then increasing the bandwidth everywhere is not likely to hurt performance (provided you can accommodate the additional bandwidth in your core).
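
The point that loss, not access-link speed, ends up governing throughput can be illustrated with the well-known Mathis approximation (throughput ≈ MSS / (RTT × √p)). The MSS, RTT, and loss-rate values below are illustrative assumptions, not measurements from any real network:

```python
# Back-of-the-envelope TCP throughput under loss, using the Mathis
# approximation: throughput ≈ MSS / (RTT * sqrt(p)).
# All values below (MSS, RTT, loss rates) are illustrative assumptions.
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput in bits per second."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

mss = 1460    # typical Ethernet MSS, in bytes
rtt = 0.001   # 1 ms LAN round-trip time

for p in (0.0001, 0.001, 0.01):
    gbps = mathis_throughput_bps(mss, rtt, p) / 1e9
    print(f"loss rate {p}: ~{gbps:.2f} Gbps achievable")
```

Even a modest loss rate caps a flow well below 1 Gbps, regardless of how fast the access link is.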

HTH

Chris.

Thanks Christopher,

I do not understand.

Whatever speed exists between the server and its switch, as long as it is higher than or equal to the rest of the network (the usual configuration), to me there can only be potential advantages. I can't see why it would slow down more than at 100 Mbits; it could only be equal or better. How, logically, could it be worse? What phenomenon would cause a slowdown?

In the worst case:

Everybody wants to access the server at the same time. All the switch buffers will fill up, and the switch will simply make the hosts wait until buffer space is available.

Data will go to the server in a FIFO fashion (according to the algorithm in the switch).

In the best case:

The server will serve the first host faster; that host will finish faster and leave room for the second.

The second will come in, be served faster, and get out of the traffic faster to let the others be served.

And so on...

Where am I wrong ?

Eric,

You are thinking with a Layer 2 mindset and not allowing for the behavior of the Layer 4 protocols.

If you wanted to upload a file to an FTP server (a TCP-based application), for example, it does not throw the packets out at line rate; remember that in TCP the receiver needs to acknowledge the packets.  The transfer begins slowly with a small window, and as successful ACKs start to come back, the window size increases.

If your TCP flow starts to see a lot of loss, the flow is throttled and drops back to a small window, much to the detriment of the transfer rate.

There are a ton of resources on the internet discussing this topic, but a good place to start, if you are interested, would be Wikipedia: http://en.wikipedia.org/wiki/Slow-start
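
The window behavior described above can be sketched as a toy model (not any real TCP implementation; the RTT count and loss threshold are made up for illustration):

```python
# Toy model of TCP slow start: the congestion window (cwnd) doubles each
# round trip until a loss occurs, then collapses back to 1 segment.
# All numbers are illustrative, not taken from a real TCP stack.

def simulate(rtts, loss_at_cwnd):
    """Return cwnd per RTT; a loss occurs whenever cwnd reaches loss_at_cwnd."""
    cwnd, history = 1, []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd >= loss_at_cwnd:   # congestion: a drop is detected
            cwnd = 1               # fall back to slow start
        else:
            cwnd *= 2              # exponential growth phase
    return history

# A flow that keeps hitting congestion spends most RTTs with a tiny window.
print(simulate(rtts=12, loss_at_cwnd=16))
```

A flow that keeps hitting congestion spends most of its round trips with a tiny window, which is exactly the transfer-rate penalty described above.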

HTH

Chris

Thanks a lot Christopher.

My ignorance in the domain has probably led me down the wrong path.

I will read your link, which I really appreciate.

But reading your comments does not convince me (sorry, I'm a little deaf). Although the flow is throttled down in your example, I think I would get the exact same behavior at 100 Mbits? (But I will read first and come back after!)

I will read the article and get back to you. It could take a day, because it is a little away from my main task.

I know for sure that with a hub, collisions at the server would cause an amazing slowdown with higher network speed at the hosts, but I was sure it was a completely different story with switches.

...I come back soon...

Hello Christopher,

I read the article twice. I'm happy to know that information.

But I can't see why "TCP slow start" would affect a 1 Gbit connection more than a 100 Mbit connection?

To me, I'm still at the same point. The reasons given don't affect speed differently between 100 Mbits and 1 Gbit (they affect both equally).

I don't see any reason a 1 Gbit connection could be slower than a 100 Mbit connection in any scenario with no hubs (where we can abstract away collisions)?

I will be eternally grateful to anybody who can give me a real scenario, with a comprehensive explanation, where a 1 Gbit connection to a server (through 1 Gbit switches) could be slower than a 100 Mbit connection to the same server!!! (Knowing that the server's connection to its switch is at 1 Gbit.)

Eric

Ok,

You have 5 clients, all connected at 100 Mbps, and the server at 1 Gbps:

  • All 5 of those clients can hammer away at the server as fast as they want. There's no way for five 100 Mbps hosts to overwhelm the server's port, because the server has more aggregate bandwidth: the hosts can generate UP TO 500 Mbps of traffic, which is less than the 1 Gbps the server has.
  • TCP plays nice. ACKs will flow, windows will expand, and the transfers will fly.

Alternately:

You have 5 clients, all connected at 1 Gbps each, and the server at 1 Gbps:

  • All 5 of those clients will try to connect and transfer at 1 Gbps to the server. This COULD cause the server's switchport to become overwhelmed and packets to be dropped (5 Gbps aggregate > 1 Gbps server). That would cause the TCP windows to shrink and the transfers to slow down.
  • Yes, each host could theoretically finish its transfer from the server more quickly and make room for another one to connect. But the thing is, the hosts don't play nice with each other. PC2 doesn't know (or even care) that PC1 is trying to transfer a file from the server; it's still going to try to transfer its own file as fast as it can. This will overwhelm the switchport, causing frames to be dropped, ACKs to be missed, and connections to be throttled, and it will impact ALL connections going through that switchport. The switch, at Layer 2, doesn't care about the Layer 4 info; it just passes the frames to the server.

Keep in mind, this example only uses 5 hosts. The problem is compounded when you get hundreds of clients talking to a busy server.
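
The arithmetic behind the two scenarios above can be written out as a minimal sketch, using the host counts and speeds from the example:

```python
# Aggregate offered load vs. server port capacity.  If the hosts together
# can offer more traffic than the server port carries, the excess must be
# buffered or dropped.  Host counts and speeds follow the example above.

def oversubscription(num_hosts, host_gbps, server_gbps):
    """Return (offered load, oversubscription ratio, excess) in Gbps."""
    offered = num_hosts * host_gbps
    ratio = offered / server_gbps
    excess = max(0.0, offered - server_gbps)
    return offered, ratio, excess

for host_speed in (0.1, 1.0):   # 100 Mbps hosts vs 1 Gbps hosts
    offered, ratio, excess = oversubscription(5, host_speed, 1.0)
    print(f"{host_speed} Gbps hosts: offer {offered} Gbps "
          f"({ratio:.1f}:1), excess {excess} Gbps")
```

With 100 Mbps hosts the server port is undersubscribed (no excess), while with 1 Gbps hosts there are up to 4 Gbps of excess load that can only become queueing and drops.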

HTH

Thanks a lot rtjensen4

It is very clear. I also think Christopher had the right answer, but I didn't understand it.

From what I understand, the degradation under heavy load would happen with older/cheaper switches (Layer 2 or 3 switches).

But with newer/better switches (Layer 4 and up), the problem should not appear at all? Am I right?

I wondered what HTH was and looked it up! Yes, it helps a lot! Very clear!

TVMFYVGH

It may not happen as much with higher-end switches, because of congestion-avoidance mechanisms like WRED, but it will still occur because of the basic capacity issue on the port:

(n x 1 Gbps) hosts > 1 Gbps server

WRED works by watching the queues and dropping a few frames here and there when it starts to see congestion, rather than waiting for the queues to fill and overflow. It works best with TCP traffic, because the dropped frames will be retransmitted at a slower speed. By dropping some packets at random, it can avoid congestion/overflow. If something like tail drop is used instead, where all new packets are dropped once the queues are full, you can end up with big TCP connection waves: all the flows speed up, then, when all their packets are dropped, they all slow down. Then, once their packets are getting through again, they all speed up, and the cycle repeats. Here's a good link about WRED vs. tail drop.

Edit: I suppose I should include the link:

http://www.cisco.com/en/US/customer/docs/ios/12_2/qos/configuration/guide/qcfconav_ps1835_TSD_Products_Configuration_Guide_Chapter.html
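
A rough sketch of the WRED drop decision described above (the threshold values and maximum probability here are made up for illustration; real platforms expose these as configurable parameters):

```python
# Sketch of WRED's drop decision: between a minimum and maximum average
# queue-depth threshold, the drop probability ramps up linearly; past the
# maximum threshold, every arriving packet is dropped (i.e., tail drop).
# Threshold values below are invented for illustration.

def wred_drop_probability(avg_queue, min_th, max_th, max_p):
    """Probability of dropping an arriving packet, given average queue depth."""
    if avg_queue < min_th:
        return 0.0   # queue is short: never drop
    if avg_queue >= max_th:
        return 1.0   # queue is full: drop everything
    # Linear ramp from 0 to max_p between the two thresholds.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

for depth in (10, 25, 35, 45):
    p = wred_drop_probability(depth, min_th=20, max_th=40, max_p=0.1)
    print(f"avg queue depth {depth}: drop probability {p:.3f}")
```

Because only a few random flows lose packets at a time, they back off at different moments, avoiding the synchronized speed-up/slow-down waves that tail drop produces.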

Thanks a lot rtjensen4.

I tried to follow the link, but since I'm not a Cisco customer, I can't. I'm using this site to find my answer because I haven't found a site that would explain all these things to me. It also appears to me to be a complex domain, where experience seems to help a lot. The field evolves fast enough that it is hard to follow for somebody who isn't in it all the time.

I found some information on the web about WRED (I didn't know anything about it). Thanks!

I'm a little less ignorant in the domain now (probably 1% of what I should know). At least enough not to claim that switching from 100 Mbits to 1 Gbit could only improve performance. I also know a bit more, which helps me build a better general understanding.

Thanks a lot!

Eric
