4419 Views · 5 Helpful · 17 Replies

Very Basic Ethernet/Hub Question

visitor68
Level 5

Here's something that most of us probably haven't thought about in years.

Some Ethernet basics...

We all know that when deploying a hub, even though the physical topology may be hub-and-spoke (just like a switch), the logical topology is still a bus. Therefore, end stations must run in half-duplex mode so they can leverage CSMA/CD - you know, listen and send if the medium is clear; if a collision occurs, start a randomized backoff timer and retransmit when the timer expires.
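For reference, the backoff rule described above (truncated binary exponential backoff, per 802.3) can be sketched in a few lines - a simplified model, with the 10 Mbps slot time assumed:

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_delay(attempt: int) -> float:
    """Truncated binary exponential backoff: after the Nth collision,
    wait a random number of slot times in [0, 2^k - 1], where k is
    capped at 10; give up after 16 attempts."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)
    slots = random.randrange(2 ** k)  # 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME_US
```

So after the first collision a station waits 0 or 1 slot times, and the possible wait doubles with each further collision, which is what spreads the retransmissions out.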

What I have never seen explained is what it is about a hub that presents a bus topology. If coaxial cable is deployed, it is easy to visualize why it leads to a bus topology and why collisions will occur: there is one wire upon which each station both receives data from and sends data to the others. But if UTP cable is deployed (Cat 5, etc.), there are separate Rx and Tx pairs, so what is it about the hub that creates a bus topology?

Note, a hub does indeed use flooding, but so does a switch sometimes (unknown unicast, for example), yet even when a switch floods, there is still a hub and spoke topology - physically, of course, and logically - hence, no collisions.

Thoughts?

Thank you

17 Replies

Leo Laohoo
Hall of Fame
Note, a hub does indeed use flooding, but so does a switch sometimes (unknown unicast, for example), yet even when a switch floods, there is still a hub and spoke topology - physically, of course, and logically - hence, no collisions.

When a switch powers up, its CAM table is empty. This is the only time a Layer 2 switch behaves like a hub.
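That behaviour - flood while the CAM table has no entry for the destination, then forward out a single port once the source MAC has been learned - can be modelled with a toy sketch (the class and names here are invented purely for illustration):

```python
class Switch:
    """Toy model: learn source MACs per port; flood unknown unicast."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.cam = {}  # MAC -> port; empty at power-up, like a real switch

    def forward(self, in_port, src_mac, dst_mac):
        self.cam[src_mac] = in_port          # learn where the sender lives
        out = self.cam.get(dst_mac)
        if out is None or out == in_port:    # unknown unicast: flood
            return self.ports - {in_port}
        return {out}                         # known destination: one port

sw = Switch({1, 2, 3, 4})
print(sw.forward(1, "A", "B"))  # "B" unknown: floods to ports 2, 3, 4
print(sw.forward(2, "B", "A"))  # "A" was learned on port 1: {1}
```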

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

A hub just repeats whatever comes in on one port out all the other ports. Physically it is not a shared medium; logically it is.

This regenerates the original frame, which is also why you can extend Ethernet distance using hubs.

However, unlike a switch, there are timing dependencies which restrict how many hubs you can cascade.
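As a toy model (purely illustrative - a real hub does this at the electrical level, with no software and no frame awareness at all), the repeat-to-all behaviour is just:

```python
def hub_repeat(ports, in_port, signal):
    """Repeat whatever arrives on one port out every other port,
    with no buffering, no addresses, and no notion of frames."""
    return {p: signal for p in ports if p != in_port}

# A 4-port hub: anything received on port 1 appears on ports 2, 3 and 4.
print(hub_repeat({1, 2, 3, 4}, 1, "...line symbols..."))
```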

Jon Marshall
Hall of Fame

Note, a hub does indeed use flooding, but so does a switch sometimes (unknown unicast, for example), yet even when a switch floods, there is still a hub and spoke topology - physically, of course, and logically - hence, no collisions.

Not necessarily. There could be collisions, i.e. the switchport is running at half-duplex, but the collisions will only be seen by the switchport and the end device connected to that switchport.

That said, this question has made me rethink a lot of what I thought I knew, i.e. we all know the basic fact that a switch has a collision domain per port, but how exactly does it do this, and how do broadcasts/unknown unicasts fit into the picture? So this is what I have come up with so far -

A coaxial cable could be thought of as a path that only one device can use at a time, with no directions, i.e. a packet being sent on the path has to go all the way along it because the destination could be anywhere; there is no way of knowing. A hub simply replicates this. There is no intelligence in the hub that adds directions; it still has to send the packet all the way along. So functionally they may look different, but they are doing exactly the same thing.

Because there is only one path, only one device can be using it at any one time. Because there are no directions, the packet must go to all destinations along the path - except the one it came from, obviously.

A switch on the other hand does 2 things very differently -

1) it creates multiple paths within the switch fabric between each and every port. So if port A wants to go to port B that is one path. If port A wants to go to port C that is another path. It's technically called microsegmentation.

2) not only does it create multiple paths, it also adds directions to these paths. It does this by using Layer 2 MAC addresses.

It's important to distinguish between the 2 things. A lot of material you read on switches and collision domains explains that a switch implements collision domains simply by filtering packets based on their MAC addresses, but this would then imply that a broadcast, even on a switch, extends the collision domain, because obviously you can't filter a broadcast. However, even a broadcast is still using these dedicated paths between ports; it's just using all of them except one specific one.

So which of the 2 above actually provides a collision domain per port? Personally I would say 1 does. 2 means that the packet does not have to be sent down every path, but those paths are still dedicated paths between ports.

Having said that, a collision domain on a switch is defined as the switchport to the end device, whereas I seem to have described it as a path between 2 ports, which is not quite the same thing.

So I would be very interested to hear others' interpretations of this, as I accept I may be missing something really obvious.

Jon

Jon, thank you for that detailed answer... I still don't see it, though.

I just don't buy the "hubs flood packets out all ports" argument for why there would be collisions. As long as there are separate Rx and Tx paths, the flooding in and of itself should not be a cause of collisions. There are separate paths - so what collisions?

Let's walk it through....

There are 10 hosts connected to a hub using UTP cable with separate Rx and Tx paths. Computers send traffic on pins 1 and 2 and receive on pins 3 and 6. The hub does the opposite. OK, so host 1 will send a packet, the hub will flood that packet to hosts 2-10 on pins 3 and 6, and they all get the packet. Meanwhile, they can send traffic back to the hub on pins 1 and 2. So, where are the collisions?

The answer has to be that it is the internal architecture of a hub that is different from a switch.

[EDIT] By the way, it has nothing to do with MAC learning or the fact that switches forward based on MAC addresses either. While a switch is first populating its CAM table, as far as flooding is concerned, it is acting exactly like a hub. [/EDIT]

Gentlemen,

My two cents about this.

When I explain the nature of a hub to my students, I always tell them to think of a hub as a shared segment inside the hub box. Instead of BNC and T connectors to this segment, there are RJ-45 ports, but within the hub, the circuitry still creates the electrical environment of a shared medium - simply taking signals on the Rx pair of the incoming port, amplifying them - without even knowing what they mean! - and sending them out the Tx pairs of all other ports. This process happens in real time, without any buffering whatsoever, because you cannot simply buffer an electrical impulse.

It is true that Rx and Tx paths are separate, even within a hub. However, think of three stations on a hub, say A, B, and C. Now imagine that both A and B start talking. What will the Tx pair towards C be filled with? Naturally, with a totally unintelligible superposition of the signals coming into the hub from A and B. That is the collision you are looking for - and it is where the collision really starts destroying data.
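Numerically, that superposition can be pictured like this (the symbol values are hypothetical, chosen only to show why the sum is undecodable):

```python
# Two stations transmitting at once: the hub's output toward C is the
# sum of both signals, which no longer decodes as valid symbols.
signal_a = [+1, -1, +1, +1, -1]   # hypothetical line symbols from A
signal_b = [-1, -1, +1, -1, +1]   # hypothetical line symbols from B
toward_c = [a + b for a, b in zip(signal_a, signal_b)]
print(toward_c)  # [0, -2, 2, 0, 0] -- not valid +/-1 symbols: a collision
```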

It is very important that with hubs we do not talk about flooding packets or frames, because hubs know about neither. They know bits or signals or symbols, but nothing about units of data.

With switches, such electrical collisions can never happen, because when a NIC sends frames it does not talk over a physical medium (including hubs) to the receiving station directly; rather, it talks - unknowingly - to the port buffer on the switch it is connected to. The entire frame is stored into the buffer on the switch and is sent out the destination port in its entirety only if the destination port is free. Otherwise, it waits. Hence, a switched environment is in fact a bunch of point-to-point connections - between NICs and switch port buffers - and the delivery of data is not done just as it comes, possibly creating a collision on some egress port as happened with hubs. Rather, with switches, the process can always be visualised as two steps:

  1. from the sending party to the switch port (or shared) buffer
  2. from the switch buffer over its port to the connected party

Both these processes are point-to-point in nature, and are performed only if the egress port is clear. This creates a non-shared medium.
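The two-step delivery can be sketched as a toy egress-port buffer (illustrative only; real switch buffering is far more involved):

```python
from collections import deque

class Port:
    """Toy store-and-forward egress port: frames queue in a buffer and
    go out one at a time, so they never overlap on the egress wire."""
    def __init__(self):
        self.buffer = deque()

    def enqueue(self, frame):      # step 1: sender -> switch port buffer
        self.buffer.append(frame)

    def transmit(self):            # step 2: buffer -> connected party
        return self.buffer.popleft() if self.buffer else None

p = Port()
p.enqueue("frame-from-A")
p.enqueue("frame-from-B")   # arrives while A's frame waits: no collision
print(p.transmit())         # frame-from-A
print(p.transmit())         # frame-from-B
```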

Best regards,

Peter


Peter Paluch wrote:

unknowingly - to the port buffer on the switch it is connected to. The entire frame is stored on a switch into the buffer and is sent out the destination port in its entirety only if the destination port is free.

"The entire frame is stored . . ."

True for a "store-and-forward" only switch, not always true for "cut through" switches.

One of the subtle differences between a hub and a switch: the hub imposes less latency.

Because of the additional latency imposed on a switch - both waiting for the whole frame to arrive and then checking the CRC (a store-and-forward switch will not normally forward a corrupted frame; a hub will) and deciding what to do with it (what port[s] to send the frame to) - some (cut-through) switches could actually begin transmitting the frame before it had been completely received. This would reduce the switch's additional frame-forwarding latency compared to a hub.

With the advent of 100 Mbps Ethernet, "cut through" switches seemed to fall out of favor, since less time was spent waiting for the frame to be received and the switch's forwarding processing time had also usually been reduced.

With "new" high speed LAN networking, i.e. ultra low latency, "cut through" has reappeared to once again reduce a switch's frame forwarding latency. If I remember correctly, it's a feature now found on the Nexus 7000.
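The latency difference described here is mostly serialization delay, and a rough back-of-the-envelope comparison (assumed frame and header sizes, processing time ignored) shows why cut-through mattered less at 100 Mbps:

```python
# A store-and-forward switch must receive the whole frame before
# forwarding; a cut-through switch needs only enough to read the header.
FRAME_BITS = 1518 * 8    # maximum standard Ethernet frame
HEADER_BITS = 14 * 8     # Ethernet header; the destination MAC is in
                         # the first 6 bytes, 14 bytes covers it all

for rate_bps, name in [(10e6, "10 Mbps"), (100e6, "100 Mbps")]:
    sf = FRAME_BITS / rate_bps * 1e6    # store-and-forward wait, in us
    ct = HEADER_BITS / rate_bps * 1e6   # cut-through wait, in us
    print(f"{name}: store-and-forward {sf:.1f} us, cut-through {ct:.1f} us")
```

At 10 Mbps the full-frame wait is over a millisecond, while at 100 Mbps it shrinks by a factor of ten, so the relative payoff of cut-through drops accordingly.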

PS:

Peter also describes a hub taking the incoming signal and amplifying it in real time. I recall, perhaps incorrectly, that the signal is regenerated. If I do recall this correctly, again there's a subtle difference. The two most significant might be that a regenerated signal would be a perfect copy of what was received (not always the same as what was sent), and some additional latency (usually minute).

Hello Joseph,

"The entire frame is stored . . ."

True for a "store-and-forward" only switch, not always true for "cut through" switches.

I was sure somebody was going to point out that I have described only the store-and-forward process, and that it does not apply to cut-through switching.

Yes, absolutely, that is correct. Then again, I was trying to explain the fundamental basics, not all the flavors and specifics of diverse switching methods.

Peter also describes a hub taking the incoming signal and amplifying it in real-time. I recall, perhaps incorrectly, the signal is regenerated. If I do recall this correctly, again there's a subtle difference. The two most significant might be a regenerated signal would be a perfect copy of what was received (not always the same as what was sent) and some additional latency (usually minute).

Correct. Again, I provided a simplified explanation. Truly, the signal is not just amplified but also regenerated, i.e. the hub does not plainly amplify any signal it receives; it should actually recognize the symbols of the line code and reproduce them on the egress ports. There is indeed a latency incurred. If my memory serves, for FastEthernet there were two classes of hubs depending on their latency, and depending on the class, you could have only one hub, or two connected in a daisy-chain fashion.

Best regards,

Peter


Peter Paluch wrote:

I was sure somebody was going to point out that I have described only the store-and-forward process, and that it does not apply to cut-through switching

[...] If my memory serves, for FastEthernet, there were two classes of hubs depending on their latency, and depending on the class, you could have only one hub, or two connected in a daisy-chain fashion.

Yea, I'm finally a somebody.

What you're recalling for FastEthernet is Class I vs. Class II.

PS:

Wiki on Ethernet hubs: http://en.wikipedia.org/wiki/Ethernet_hub

Peter

Was hoping you would get involved.

I was missing something obvious, and that was port buffers on switches - the ability to store and forward the packet until access to the switch fabric is available.

It is surprising how such a straightforward question can make you think! And it is also surprising how much misinformation there is out there on such a basic issue, e.g. switches create a collision domain per port because they can do L2 lookups on the frames being received. Yes, they can do this, and this does allow them to send the frame to the correct port only, but that does not explain, as I said previously, how a switch still has a collision domain per port when a broadcast is received.

It has also got me thinking about wire-speed switches and the role port buffers play in this (thanks for that, Peter).

Jon

Hi Jon,

switches create a collision domain per port because they can do L2 lookups on the frames being received. Yes they can do this and this does allow them to send the frame to correct port only but that does not explain, as i said previously, how a switch still has a collision domain per port when a broadcast is received.

Actually, I would argue some of that. Switches do create a collision domain per port but it is not related in any way to any L2 lookups.

The first reason is that you can have a hub connected to a switch. The hub creates a shared segment where collisions may occur, but even if they do, they will propagate at most to the switch port. A switch port will never extend the collision further, because it does not just take the electrical signals and regenerate them on all other switch ports in real time; rather, it expects an intelligible frame to arrive - and if one does not, nothing is forwarded. Hence, the switch port indeed creates a boundary for a collision domain.

The second reason is concerned with the duplex setting. If a device connected to a switch port operates with a different duplex setting than the switch port itself, situations may arise where a device set to half duplex starts receiving data while sending data itself. Although these transmissions are carried by separate wire pairs in the TP cabling, and hence this occurrence does not electrically destroy data, it is still a violation of the protocol from the viewpoint of the half-duplex-operating device, and is also considered a collision - a logical one, as opposed to a collision where the electrical signals are summed together, but still a collision. Therefore, even a pair of interconnected switch ports, or a NIC and a switch port, do create an isolated collision domain, with collisions in this case referring to the occurrence of a duplex operation violation.
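This "logical collision" can be reduced to a one-line condition (a deliberate simplification of what a half-duplex MAC actually signals):

```python
def half_duplex_collision(transmitting: bool, receiving: bool) -> bool:
    """On a half-duplex link, simultaneous send and receive counts as a
    collision even though separate wire pairs carry the two directions."""
    return transmitting and receiving

# Duplex mismatch: the full-duplex side sends whenever it likes, so the
# half-duplex side counts a collision whenever the two overlap.
print(half_duplex_collision(True, True))    # True  -> logged as collision
print(half_duplex_collision(True, False))   # False -> normal transmission
```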

Best regards,

Peter

Hi Peter

Actually, I would argue some of that. Switches do create a collision domain per port but it is not related in any way to any L2 lookups.

I totally agree, which is what I hope I was getting across. I was pointing out that a lot of the documentation I have seen explains collision domains as being implemented on switches by using L2 lookups. But I think it's become obvious that the collision domain is implemented in the actual hardware of the switch rather than by the L2 lookups.

The L2 lookups enable the switch to forward packets more efficiently, but they do not in and of themselves create the collision domain.

Jon

Hi Jon,

Agree 100%.

Best regards,

Peter

Folks:

Isn't it interesting how a fundamental topic that we long ago abandoned comes back to haunt us? I think what happens is that we learn these basic concepts at a time when our overall understanding is limited, and therefore we do not question certain aspects of what we are told. As time passes, we learn about much more advanced topics and forget to worry about the details of the basics, which may have been presented to us incorrectly when we first learned them.

This is why it is essential to start off reading books that explain "why," not just "what." Books that are too simplified can cause more damage than help. How many times have we heard that hubs force end stations to operate in half-duplex mode because they simply "repeat out all ports"? While it is true that hubs behave this way, it is not at all the reason why there are collisions. If we understand what is really meant by "each end station listens for a carrier" before transmitting, it is telling us that there is no electrical isolation between the stations. What the end stations are doing when they are "listening" is detecting an increase in the DC voltage level present on the wire. They would not be able to do that if there were electrical isolation of the kind typically provided by capacitors.

So, the end station and the input circuitry on the receiving switch port create isolated segments - electrical circuit constructs that are isolated from the others. This is why an end station connected to a switch port does not have to "listen" - in fact, it can't hear anything even if it tried to listen, because everyone is electrically isolated from everyone else.

Therefore, it can safely be said that collisions are a function of Layer 1 activity, not Layer 2. So MAC-address learning, targeted transmissions to a specific port, and whatever other control- and data-plane activity Ethernet has have nothing to do with why there are collisions.

Buffers were mentioned, so I won't get into that.

Hope this Helps

Hello Victor,

Thanks so much for joining the discussion! Long time no see - how are you?

Thank you for providing a fresh view on this. I was myself reminded of several important things while reading your post. Thanks again!

There are a couple of things, though, that I would like to discuss in more detail.

If we understand what is really meant by "each end station listens for a carrier" before transmitting, it is telling us that there is no electrical isolation between the stations.

Not necessarily. It first and foremost means that each end station receives the transmission of a (possibly single) sending device just as the transmission occurs, i.e. in the same instant as the signals are sent by a sender, they are sent towards, and are detectable by, all possible receivers. How this multiplication is performed is another story - it may be via electrically shared circuitry, or by point-to-multipoint replication.

What the end stations are doing when they are "listening" is detecting an increase in the DC voltage level present on the wire. It would not be able to do that if there was electrical isolation that is typically provided by capacitors.

I cannot agree with this, for several reasons:

  • Line codes are balanced so that their DC component is 0. Sure, the instantaneous voltage may not be zero, but over longer periods of time, the DC component must be kept at 0.
  • There is no need for a station to perform "listening for carrier" differently than listening to an incoming frame.
  • Ethernet uses baseband transmission, i.e. digital signals immediately encoded into a suitable line code with discrete levels. They are not modulated onto a (harmonic) carrier as in broadband carrier systems (apart from 10Broad36). Hence, it would actually be futile to look for a particular "carrier" on most Ethernet variants, although the CSMA/CD method calls it that. Listening for a "carrier" on Ethernet is simply the same as listening for whether any frame is being transmitted at all.
  • Most importantly, there is galvanic insulation of a NIC from the cabling. Either there is inductive coupling by a miniature transformer in the NIC's RJ45 connector or close to it, or an optical coupler (opto-isolator) is used. See http://upload.wikimedia.org/wikipedia/commons/9/9e/Network_card.jpg - the YCL 16PT-005B circuit is a LAN isolation transformer, the 20F001N is a 10BaseT filter, and the DC-101 is a DC/DC converter. Note that this is a card with BNC/RJ45 connectors, clearly from the age when hubs were predominantly used. With galvanic separation of the actual NIC's circuitry from the cable, it is impossible to measure an elevated DC level at all - just as you suggested for capacitive coupling.

So I would not personally explain things from the viewpoint of pure electrical isolation, as it is actually present even in hubbed networks.

it can safely be said that collisions are a function of layer 1 activity

I wholeheartedly agree with this.

Best regards,

Peter