607 Views · 1 Helpful · 6 Replies

Speed for each interface in Cisco switch

13jobsp90
Level 1

If I have a Cisco switch with a 1G SFP uplink connected, and all the copper (RJ-45, 1G) interfaces are connected to PCs, how will the bandwidth be shared between the connected end devices (PCs), since all the RJ-45 ports are also 1G capable? How much speed will each PC get?

6 Replies

Joseph W. Doherty
Hall of Fame

Insufficient information to answer your questions.

"how will the speed be shared between the connected end devices (workstations)?"

Yup, I got that from your OP.

Still, insufficient information.

I would need answers to a very, very long list of questions, many of which you would likely find very difficult to answer, and even then the data, and the answer, change millisecond by millisecond.

Generally, if there's no oversubscription, then congestion issues are very rare.

If there is oversubscription, some very old rules of thumb recommended no more than 8:1 for server hosts, and 24:1 or 48:1 for user hosts.  But those "rules" can be far off.
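(For example, 48 gig-attached user ports behind a single gig uplink is 48:1, right at that guideline's limit; the same 48 ports behind a 10 gig uplink would be 4.8:1.)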

Deriving the answer to your questions isn't well addressed in the literature.

Without using QoS, if you find you have performance issues, the usual answer is to increase bandwidth (which can be rather expensive, especially on WANs, if you need to eliminate oversubscription everywhere).

Fulvio Ferreira
Level 1

Traffic will be prioritized and shared via the QoS settings in your network config.

Royalty
Level 1

Hi @13jobsp90,

It is really hard to understand the context, situation, and meaning behind your question. I am taking a stab in the dark and assuming that you are asking in relation to uplink bandwidth management/oversubscription. Are you asking how regular access ports that are connected to e.g. workstations share the bandwidth of an uplink to an upstream device? If so, hypothetically, take this scenario:

  • 24x 1000Mbps RJ-45 switchports that are connected to workstations, phones, printers, wireless access points, etc.
  • 1x 1000Mbps SFP switchport that is used as an uplink, connected to the core of the network, leading to e.g. the internet.

This is an uplink oversubscription ratio of 24:1. This is because 24 devices are connected at Gigabit speeds, which could all use the single uplink at the exact same time. However, the uplink is only 1Gbps (1000Mbps) capable.
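If it helps, that arithmetic is trivial to restate in code. This is a hypothetical helper of my own for illustration, not anything from Cisco:

```python
# Worst-case oversubscription: total potential edge demand vs uplink capacity.
def oversubscription_ratio(edge_ports: int, edge_mbps: int, uplink_mbps: int) -> float:
    """Ratio of total potential edge bandwidth to uplink bandwidth."""
    return (edge_ports * edge_mbps) / uplink_mbps

# 24 gig access ports behind a single gig uplink -> 24.0, i.e. 24:1.
print(oversubscription_ratio(edge_ports=24, edge_mbps=1000, uplink_mbps=1000))
```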

To answer how the traffic is shared: by default, switches use queueing and internal buffers. If multiple devices send data upstream (transiting the uplink), the switch will try to forward the frames (traffic) in the order they were received. This is called First In, First Out (FIFO) and is the default queueing mechanism. The buffer is a place in memory where frames (traffic) are stored; a queue is the grouping and ordering/prioritisation of frames within that buffer.

In this case, if the workstations transmit more data than the SFP uplink can handle, the uplink becomes 'oversubscribed' or 'saturated'. When saturation happens, packets are queued, and if the queues fill up, packets are dropped.

As for your question 'how much speed will each PC take': potentially (if unrealistically) all of it, since the uplink and any given workstation are both connected at 1Gbps and it is first come, first served. QoS could be a determining factor, but it is not enabled by default.
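If you want to visualise that, here is a minimal sketch of FIFO queueing with tail drop (my own toy model in Python, not how any real switch is implemented; the frame-count buffer depth is an assumption, real buffers are sized in bytes/cells):

```python
from collections import deque

BUFFER_FRAMES = 100   # assumed buffer depth, for illustration only

queue = deque()       # the FIFO: first frame in is first frame out
dropped = 0

def enqueue(frame):
    """A frame arrives destined for the uplink."""
    global dropped
    if len(queue) >= BUFFER_FRAMES:
        dropped += 1            # buffer full: the new frame is tail-dropped
    else:
        queue.append(frame)

def dequeue():
    """The uplink can serialise one more frame."""
    return queue.popleft() if queue else None

# Frames arrive twice as fast as the uplink can drain them (saturation):
for t in range(300):
    enqueue(f"frame-{t}")
    if t % 2 == 0:              # uplink drains at half the arrival rate
        dequeue()

print(len(queue), dropped)      # the queue has filled; later arrivals dropped
```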

Feel free to ask further if anything is confusing, but please try to provide more clarity if you can.

As @Royalty correctly notes, any one gig edge port can fully consume the gig uplink port.

So, what happens if two gig ports want to use the gig uplink port at the same time, since together they can send 2 gig toward the 1 gig port?

Well, that's very much an "it depends" answer.  Assuming two edge ports each want to use their full gig, with "classical" TCP, the uplink port would queue packets, the queue would fill, the overflow would be dropped, and TCP would detect the drops and effectively reduce its transmission rate.  In theory, each edge port would average about 500 Mbps, but TCP would keep trying to use a full gig for each, so drops would be on-going.  Actual usage would be highly variable in the short term, and in the long term it depends on multiple factors.
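To make that concrete, here is a toy sketch of TCP's additive-increase/multiplicative-decrease behaviour, with made-up step sizes of my own rather than real TCP parameters. It shows why two flows gravitate toward equal shares while drops remain on-going:

```python
LINK_MBPS = 1000
rates = [900.0, 50.0]                    # deliberately unfair starting point

for _ in range(10_000):
    if sum(rates) > LINK_MBPS:           # uplink queue overflows -> drops
        rates = [r / 2 for r in rates]   # both flows back off (halve)
    else:
        rates = [r + 1.0 for r in rates] # both flows probe upward

print([round(r) for r in rates])
# The two rates converge toward an equal share of the link, but keep
# oscillating below ~500 Mbps each: every flow keeps probing for more,
# sees drops, backs off, and probes again -- the on-going drops above.
```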

To avoid the queuing on the uplink interface, you could run the edge interfaces at 100 Mbps or 10 Mbps, but that only relocates the underlying issue.  What if two edge ports want to transmit to the same third edge port?  What if the uplink port, still set at gig, wants to send to an edge port running at 10 or 100 Mbps?

What if the edge ports are still running at gig, but you have multiple apps on the workstation that want to transmit?  I.e., how fast can the workstation send the various apps' data to its own NIC?  (Hint: probably faster than gig.)

In general, fixing network performance issues by just adding bandwidth doesn't always succeed.  When it does succeed, it's often not for the reasons assumed.

Personally, one of the things that amazes me about networks: for half a century there's been much development on how to "share" a CPU or disk drive beyond FIFO, yet now, a quarter of the way into the 21st century, how do we share network bandwidth?  Often, the default is still FIFO!