1401 Views · 5 Helpful · 6 Replies

Ethernet payload size

trane.m
Level 1

Hello,

As far as I understand, the minimum payload size of an Ethernet frame is 46 bytes. I read that the reason for this is so that collisions can be reliably detected by devices. I guess that in case of a collision, a frame's size would change and thereby show that something is wrong. This makes sense to me: the frame should be 46 bytes; if it's not 46 bytes, something is wrong.

 

Then I found out that there is also a maximum payload size of 1500 bytes. This does not make sense to me. Assuming I'm right about a collision altering the size of an Ethernet frame, how does a device know the difference between e.g. a 100-byte frame and the result of two frames colliding to create a 100-byte mixed frame?

Accepted Solution

The original poster believes that a collision will alter the size of an Ethernet frame. That is not the case. A collision does not change the size at all.

To understand the concept of the minimum size of an Ethernet frame, perhaps we should start by remembering that when Ethernet was developed we were not using twisted pair cables or fiber cables; we were using different cables (10Base2 and 10Base5), and there was a definite limitation on the length of those cables. One important aspect of this cable arrangement was the time that it would take for a signal to get from one end of the cable to the other end.

In these Ethernet implementations, if a device wanted to transmit it would listen on the wire. If there was an active signal on the wire then it would wait to transmit. If it was quiet then the device could begin to transmit. While it was transmitting it would listen for any incoming signal. If it detected an incoming signal then that indicated a collision, and the device would stop transmitting, send a signal indicating a collision, wait for some interval, and attempt the transmission again.

So think about it this way: if deviceA is at one end of the cable and deviceB is at the other end, and deviceA begins to transmit a frame, and while deviceA is transmitting deviceB also begins to transmit - if deviceA is transmitting a very short frame it might complete transmitting before the first bit from deviceB arrives. So there really was a collision on the wire, but deviceA did not detect it. To reliably detect the collision, deviceA needs to transmit long enough to be sure that its signal reaches the other end before it stops transmitting. The calculation was that a minimum payload of 46 bytes (a 64-byte frame including headers and FCS) would accomplish that.
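The reasoning above can be sketched as a back-of-the-envelope calculation. The numbers below (2500 m maximum network diameter with repeaters, a propagation speed of roughly 2/3 the speed of light in coax) are illustrative assumptions, not exact figures from the 802.3 standard:

```python
# Rough check of the minimum-frame reasoning: the sender must still be
# transmitting when a collision signal from the far end gets back to it.
BIT_RATE = 10_000_000    # 10 Mbps classic Ethernet
MAX_DIAMETER_M = 2500    # assumed max end-to-end distance with repeaters
PROP_SPEED = 2.0e8       # assumed ~2/3 c propagation speed in coax (m/s)

# Worst case: the signal travels to the far end, collides there, and the
# "garbled" signal must travel all the way back - a round trip.
round_trip_s = 2 * MAX_DIAMETER_M / PROP_SPEED
min_bits = BIT_RATE * round_trip_s
print(f"round trip: {round_trip_s * 1e6:.1f} us -> at least {min_bits:.0f} bits")
# The standard's slot time of 512 bit times (64 bytes) exceeds this raw
# propagation figure, leaving headroom for repeater and interface delays.
```

With these assumptions the round trip alone demands about 250 bit times; the standardized 512-bit (64-byte) slot time comfortably covers it plus equipment delays.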

HTH

Rick


6 Replies

46 bytes is the minimum payload size, not a fixed size. The maximum can go up to 1500 bytes for standard (non-jumbo) frames. For error detection, the frame uses a CRC.

https://en.wikipedia.org/wiki/Ethernet_frame
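To illustrate the CRC point: Ethernet's Frame Check Sequence is a CRC-32 computed over the frame contents, and Python's `zlib.crc32` happens to use the same polynomial (0x04C11DB7), so it can demonstrate the idea. The frame bytes below are a made-up example; real NICs compute and strip the FCS in hardware.

```python
import zlib

# Toy "frame": 12 bytes of MAC addresses, a 2-byte EtherType, a short
# payload, and zero padding - purely illustrative contents.
frame = b"\x00" * 12 + b"\x08\x00" + b"hello, ethernet" + b"\x00" * 31
fcs = zlib.crc32(frame)

# The receiver recomputes the CRC over the received bytes; a mismatch
# means the frame was damaged (e.g. by a collision or line noise) and
# is silently discarded.
assert zlib.crc32(frame) == fcs             # intact frame passes
corrupted = frame[:20] + b"\xff" + frame[21:]
assert zlib.crc32(corrupted) != fcs         # damaged frame fails
print("CRC check distinguishes intact from corrupted frames")
```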

Please rate this and mark it as the solution/answer if it resolved your issue.
Good luck
KB


Joseph W. Doherty
Hall of Fame

As I recall, the minimum size for Ethernet is, as you noted, based on collision detection.  Basically, it allows time for the signal to get across the maximum allowed distance for Ethernet, i.e. two hosts at opposite ends of a shared Ethernet cable will detect the collision against what they are currently transmitting.  If the frame were shorter, transmission would complete before the transmitting station "knew" there was a collision.  (BTW, in theory, an interface could "detect" a possible collision after finishing transmitting, if another frame was received within a specific time window, but the hardware to support that would be much more complex.  Often it's difficult to appreciate the capabilities of hardware, and possible costs, when many of today's existing standards were defined.)

Regarding the maximum size: basically, at the time the standard was being defined, the idea was to pick a maximum size that all vendors could support using (then) current technology, and at a reasonable cost.

Other network technologies supported larger L2 frames, such as token ring and some WAN media too (as I recall), but those technologies' interfaces were often more expensive too.

I remember when whole (business) PCs, with embedded Ethernet, cost less than a token ring card.  (One reason why we're all using Ethernet variants [including even Ethernet WAN hand-offs], and not token ring.)

Oh, and a collision doesn't alter the size of frames (nor, apart from the minimum, does frame size matter for collision detection, for the reason explained above).  On the wire, two overlapping frames "garble" each other.  Transmitting hosts, as soon as they detect the "garble" (i.e. collision), stop transmitting (and back off retransmitting for a "random" amount of time).

A couple additional thoughts about how Ethernet has changed.

In the early days when the Ethernet standards were being developed it was a shared medium and it was important to be able to detect collisions. Basically everything operated as half duplex (you could transmit or you could receive, but you could not do both at once). With the advent of twisted pair wiring (and especially with fiber) it became possible to transmit and to receive at the same time (physically different wires for each). So now the norm is for interfaces to operate full duplex, where collisions cannot occur. But collision detection, and the minimum frame size that supports it, are still part of the standard.

The standard specifies the maximum payload size as 1500 bytes (1518 bytes for the complete frame). Part of the reason for that was that you did not want some device to monopolize the medium by transmitting very big frames. Specifying a maximum frame size was a way to be sure that other devices would get a turn transmitting. In today's technology that is no longer important, and many modern switches implement jumbo frames without any impact on the network.
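One reason jumbo frames are attractive can be shown with a quick efficiency calculation. The overhead figure below is a rough illustrative assumption (18 bytes of header plus FCS, 8 bytes of preamble, and a 12-byte minimum inter-frame gap on the wire):

```python
# Rough illustration: fixed per-frame overhead is amortized over a
# larger payload, so jumbo frames waste less wire time on framing.
OVERHEAD = 18 + 8 + 12   # header/FCS + preamble + min inter-frame gap (bytes)

def wire_efficiency(payload_bytes: int) -> float:
    """Fraction of wire time spent carrying actual payload."""
    return payload_bytes / (payload_bytes + OVERHEAD)

print(f"1500-byte payload:         {wire_efficiency(1500):.1%}")
print(f"9000-byte jumbo payload:   {wire_efficiency(9000):.1%}")
```

With these assumed overheads, a standard 1500-byte payload uses roughly 97.5% of the wire time for data, and a 9000-byte jumbo payload pushes that above 99%.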

HTH

Rick

I am glad that our explanations have been helpful. Sometimes it is good to look back and to remember how our technologies have evolved. Thank you for marking this question as solved. This will help other participants in the community to identify discussions which have helpful information. This community is an excellent place to ask questions and to learn about networking. I hope to see you continue to be active in the community.

HTH

Rick

"A couple additional thoughts about how Ethernet has changed."

Indeed!

"The standard specifies maximum frame size as 1500. Part of the reason for that was that you did not want some device to monopolize the medium by transmitting very big frames."

That's true, too, because, again, as Rick mentions, original Ethernet used a "shared medium".   The first/early wired variant (mid '70s) didn't run at 10 Mbps but at 2.94 Mbps, i.e. bigger frames would consume even more time on the wire, although (if my math is correct) even at that speed a 1500-byte frame would only take about 4 ms to transmit.
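That "about 4 ms" figure checks out with a one-line serialization-time calculation (the helper name below is just for illustration):

```python
# Time to serialize a payload onto the wire at a given bit rate,
# checking the ~4 ms figure for 1500 bytes at 2.94 Mbps.
def tx_time_ms(payload_bytes: int, bit_rate: float) -> float:
    return payload_bytes * 8 / bit_rate * 1000

print(f"{tx_time_ms(1500, 2.94e6):.2f} ms")   # about 4.08 ms
```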

Again, though, I recall (?) from reading IEEE journals in the early '80s, hardware capabilities, and their costs, very much impacted hardware design decisions.

Just to give an idea about hardware capabilities and costs (then), consider that I purchased my first "microcomputer" in '79 (prior to IBM's PC), which had a 1.7 MHz Z80 processor (superior to the 8080 or 6502), 32 KB of RAM (i.e. not TB, GB, or even MB), two 89 KB 5 1/4 inch floppy disc drives, and a 300 baud modem (alone, that modem cost something like $400, then).  All of that "only" cost me about 1/3 of my (then) yearly salary, or about the price of a mid-size new automobile.
