Why is the MTU size 1500?

nadeem-akhtar
Level 1

Can somebody explain why the Ethernet Maximum Transmission Unit was chosen to be 1500 bytes (why exactly 1500)? It might have some historical reason.

Thanks,

Nadeem

11 Replies

jeffrey.zhou
Level 1

The standard set by IEEE 802.3 specifies that the MTU of Ethernet is 1500 bytes.

Thanks for the reply. However, what I want to know is why 1500? Why didn't they pick 1400 or 1600 or any other value for the MTU size? Just as there is a reason for the minimum frame size of 64 bytes (related to the round trip and the maximum cable length), there has to be some reason for the MTU being 1500.

Thanks,

Nadeem

MTU is sized at OSI layer 3.

So at layer 2, the maximum Ethernet frame size (without VLAN tagging) is 1518 bytes; the header and trailer account for 18 bytes.

So when you consider what needs to be routed at layer 3, those 18 bytes of L2 framing are not part of the count...
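To make that concrete, here is a minimal Python sketch of the arithmetic: the 18 bytes of layer-2 framing (MAC addresses, EtherType and FCS) sit outside the 1500-byte layer-3 MTU, which is why the maximum untagged frame on the wire is 1518 bytes. The field names are just labels for illustration.

```python
# Minimal sketch: L2 framing overhead around the 1500-byte L3 MTU (untagged Ethernet).
L3_MTU = 1500                      # maximum IP packet carried as the frame payload

l2_fields = {
    "destination MAC": 6,
    "source MAC": 6,
    "EtherType/length": 2,
    "FCS (CRC-32)": 4,
}

l2_overhead = sum(l2_fields.values())          # 18 bytes of header + trailer
max_untagged_frame = L3_MTU + l2_overhead      # 1518 bytes

print(f"L2 overhead: {l2_overhead} bytes, max untagged frame: {max_untagged_frame} bytes")
```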

thisisshanky
Level 11

There is an obvious reason why the frame payload size was chosen to be 1500 bytes: a 1500-byte payload offers maximum efficiency, or throughput.

As you know, an Ethernet frame has an 8-byte preamble, a 6-byte source and a 6-byte destination MAC address, a 2-byte MAC type (EtherType/length) field, and a 4-byte CRC. Assuming the payload (MTU) is 1500 bytes, the total number of bytes comes to 1500 + 8 + 6 + 6 + 2 + 4 = 1526 bytes. Between frames there is also an inter-frame gap of 12 byte times, which at 10 Mbps amounts to a 9.6-microsecond gap between frames; this is essential so that frames don't run into each other. So the total wire time consumed by each frame going out of a host is equivalent to 1538 bytes.

So at a 10 Mbps line rate, the frame rate is 10,000,000 bps / (1538 bytes × 8 bits per byte) ≈ 812.74 frames per second.

Now we can find the throughput, or efficiency, of the link when carrying 1500-byte payloads by multiplying the frame rate by the number of payload bits.

So efficiency = 812.74 × 1500 × 8 ≈ 9,752,926 bps, which is about 97.5 percent efficient (compared with 10 Mbps).

I may have gone a bit far into the mathematics of Ethernet, but the interesting thing to notice is that as the number of bytes in the payload increases, the frame rate decreases. For a 1500-byte payload, the frame rate has already dropped to about 812 frames per second; if you increased the payload beyond 1500 bytes, the frame rate would fall below 812.
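For anyone who wants to replay those numbers, here is a small Python sketch of the same calculation (the constants are the per-frame overheads listed above; the inter-frame gap is counted in byte times):

```python
# Sketch of the frame-rate and efficiency arithmetic above, at a 10 Mbps line rate.
LINE_RATE = 10_000_000          # bits per second
PREAMBLE_SFD = 8                # preamble + start-of-frame delimiter, bytes
MAC_HEADER = 6 + 6 + 2          # destination MAC + source MAC + EtherType/length
FCS = 4                         # CRC-32 trailer
IFG = 12                        # inter-frame gap, in byte times

def frame_rate_and_efficiency(payload_bytes):
    wire_bytes = PREAMBLE_SFD + MAC_HEADER + payload_bytes + FCS + IFG
    frames_per_second = LINE_RATE / (wire_bytes * 8)
    efficiency = payload_bytes * 8 * frames_per_second / LINE_RATE
    return frames_per_second, efficiency

fps, eff = frame_rate_and_efficiency(1500)
print(f"1500-byte payload: {fps:.2f} frames/s, {eff:.2%} payload efficiency")
# -> about 812.74 frames/s and ~97.5% efficiency, matching the figures above
```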

There is also a minimum limit for the payload, which is 46 bytes. If you calculate the size of the frame for a 46-byte payload, it comes to 12 + 8 + 6 + 6 + 2 + 46 + 4 = 84 bytes. Calculating the frame rate, we get:

10,000,000 bps / (84 bytes × 8 bits per byte) ≈ 14,880 frames per second. We could have gone to an even smaller frame size, which would push the frame rate even higher, but I guess that back when the IEEE made the standards, routers didn't have that much frame-forwarding capability.
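The same arithmetic for the 46-byte minimum payload, as a standalone sketch:

```python
# 46-byte minimum payload: 84 bytes on the wire, including the 12-byte inter-frame gap.
LINE_RATE = 10_000_000                       # bits per second
wire_bytes = 12 + 8 + 6 + 6 + 2 + 46 + 4     # IFG + preamble/SFD + MACs + type + payload + FCS
frames_per_second = LINE_RATE / (wire_bytes * 8)
print(f"{wire_bytes} bytes on the wire -> about {int(frames_per_second)} frames per second")
```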

So I think, for the above reasons and considering maximum efficiency, the IEEE would have fixed the minimum and maximum payload sizes at 46 bytes and 1500 bytes.

Sankar Nair
UC Solutions Architect
Pacific Northwest | CDW
CCIE Collaboration #17135 Emeritus

Thanks for the detailed and comprehensive reply.

You have mentioned in your reply that a 1500-byte packet size gives an efficiency of 97.5% and a frame rate of 812 frames/sec. Is 97.5% efficiency or 812 frames/sec a standard to comply with? Because if I used 2000-byte payload frames, I would get 98.13% efficiency, which is higher than 97.5%.
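(For reference, the 98.13% figure comes from the same formula: a hypothetical 2000-byte payload would occupy 2000 + 26 framing bytes + 12 gap bytes = 2038 byte times on the wire. A quick check in Python:)

```python
# Quick check of the efficiencies quoted above (2000 bytes is hypothetical, not a legal Ethernet payload).
OVERHEAD = 8 + 6 + 6 + 2 + 4 + 12            # preamble/SFD + MACs + type + FCS + inter-frame gap
for payload in (1500, 2000):
    print(f"{payload}-byte payload: {payload / (payload + OVERHEAD):.2%} efficient")
# -> roughly 97.5% vs 98.1%
```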

well explained...

Great explanation!!!

DAVE GENTON
Level 2

And there are underlying operational reasons on top of what shanky has written. If you take all of his data and frame transmit times, you can then look at bit times on the wire. This matters because of Ethernet's CSMA/CD architecture, and it determines the maximum total distance between two hosts. In other words, without defined times to wait and listen for someone else transmitting on the wire, there would be chaos and a lot of collisions. Imagine two PCs placed on an Ethernet segment two miles long (exaggerated to make a point): PC 1 starts transmitting a frame, and at just about the same time PC 2 listens on the wire, hears nothing, and starts transmitting its own frame, because PC 2 has not yet seen the voltage change from PC 1, the distance limit having been exceeded. It all boils down to timing and sizing.
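To put rough numbers on that timing argument: at 10 Mbps the 64-byte minimum frame is 512 bit times, i.e. 51.2 microseconds of transmit time, and a collision must be able to propagate back to the sender within that window. The sketch below assumes a signal propagation speed of about 2x10^8 m/s (typical for copper) and ignores repeater delays, so it only illustrates the order of magnitude.

```python
# Illustrative only: why the minimum frame size bounds the collision-domain diameter (CSMA/CD).
BIT_TIME = 1 / 10_000_000              # seconds per bit at 10 Mbps
SLOT_TIME = 512 * BIT_TIME             # 64-byte minimum frame = 512 bit times = 51.2 microseconds
PROPAGATION_SPEED = 2e8                # assumed signal speed in copper, roughly 2/3 of c (m/s)

round_trip_budget_m = SLOT_TIME * PROPAGATION_SPEED      # distance the signal covers out and back
print(f"Slot time: {SLOT_TIME * 1e6:.1f} us, "
      f"one-way distance budget (ignoring repeater delays): ~{round_trip_budget_m / 2:.0f} m")
```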

All well and good, but as the clock rate goes up (say to 10 Gbps, which is more common than 10 Mbps these days), the significance of sticking to 1500 bytes diminishes. So within DCs it's more common to see 9K jumbo frames (9182), since all intermediate interfaces are apt to be 10GE or higher these days. For Internet access, 1500 is still valid though.

"All good and well but if your clockrate goes up (say 10Gbps which is more common than 10Mbps these days) the significance of sticking to 1500 byte is diminishing."

Yea, 10G is likely now found more often than 10 Mbps, but as most hosts still don't run at 10G, and WAN connections often don't either (and often don't support jumbo Ethernet sizes), I believe the need to stick to 1500, except between specific hosts (such as in a DC), isn't diminishing all that quickly.

One issue with using jumbo Ethernet is that it's non-standard. There are other potential issues with jumbo Ethernet as well, and they aren't always offset by increasing the payload efficiency by a few percent (especially given the increased bandwidth of 10G [or 25G, 40G, 100G]).
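As a rough illustration of that "few percent", assuming a 9000-byte payload as a representative jumbo size (the efficiency ratio itself is independent of link speed):

```python
# Payload efficiency for standard vs. jumbo payloads; the ratio is the same at any link speed.
OVERHEAD = 8 + 6 + 6 + 2 + 4 + 12          # preamble/SFD + MACs + type + FCS + inter-frame gap
for payload in (1500, 9000):
    print(f"{payload:>5}-byte payload: {payload / (payload + OVERHEAD):.2%} efficient")
# -> about 97.5% vs 99.6%: a gain of a couple of percent in raw throughput
```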

Joseph W. Doherty
Hall of Fame
Yea, there are probably several reasons why 1500 was chosen. One, I suspect, was whatever the NIC vendors felt they could build (and at what cost; early NICs could equal the cost of the whole PC). Another might have been related to the CRC algorithm being used (as the MTU increases, some errors are more likely to slip by; communication standards take into account the type of corruption expected on the media). Some of it might have been just "yea, that sounds good" (i.e. settling on a round decimal value), etc.
