12-05-2017 10:17 PM - edited 03-05-2019 09:36 AM
How does UDP manage to preserve message boundaries? I am guessing that UDP can preserve message boundaries because it doesn't care about RWND/CWND or congestion when it transmits data. Right?
Lastly, my understanding is that:
TCP data gets segmented at L4 within the buffer according to many variables such as RWND, CWND, and additional mechanisms. UDP data also has to be packaged into datagrams somehow. I am wondering: according to which size does UDP data get segmented and become a datagram?
Accepted Solutions
12-07-2017 04:59 PM
Hello,
UDP preserves message boundaries through a set of design traits:
- UDP never groups several blocks of data received from the application into a single segment, and it never splits a single block of data received from the application into multiple segments. There is always a 1:1 relation between a block of data received from the application and the resulting UDP segment. Whatever comes from the application, UDP just puts its header on it and passes it to IP.
- After the UDP segment is delivered to the destination host, the receiving application is supposed to call an OS function (typically read() or recv(), depending also on the programming language) to receive the payload of the segment. Every recv() dequeues exactly one received UDP segment. Even if the application reads fewer bytes than the size of the segment, the whole segment is considered to be dequeued; any unread bytes from the dequeued segment will be irretrievably lost (see the sketch below).
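To make the second point concrete, here is a minimal sketch, not part of the original answer, using Python's standard socket module (the OS picks the port; the sizes are purely illustrative). Each recv() returns exactly one datagram, and on most Unix-like systems a recv() with a too-small buffer silently discards the rest of that datagram (Windows raises an error instead):

```python
import socket

# Receiver: one recv() per datagram; excess bytes are discarded (Unix-like behaviour)
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # let the OS pick a free port
addr = rx.getsockname()

# Sender: each sendto() becomes exactly one UDP datagram
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"A" * 500, addr)          # datagram #1: 500 bytes
tx.sendto(b"B" * 500, addr)          # datagram #2: 500 bytes

print(len(rx.recv(1000)))            # 500 -- boundaries are preserved, never 1000
print(len(rx.recv(100)))             # 100 -- the other 400 bytes of datagram #2 are lost

tx.close()
rx.close()
```

Running it prints 500 and then 100; the missing 400 bytes of the second datagram are gone for good, matching the point above.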
Best regards,
Peter
12-07-2017 06:37 PM - edited 12-07-2017 08:43 PM
My understanding:
TCP -> There are myriad possibilities for how the receiver's TCP stack dequeues received data to the application. For example, if A sends 2000 B to B, B might read 1000 B + 1000 B, or 500 B + 1000 B + 500 B, or 100 B + 200 B + 700 B + 1000 B (countless combinations).
UDP -> There is only one case. Since UDP never coalesces or splits application data, the sender's UDP driver just adds the UDP header and passes the resulting segment to L3. The receiver's application calls an OS function, and the receiver's UDP driver passes the received datagram to the application. If A sends 500 B + 500 B, B will get 500 B + 500 B (only one combination).
-> Assuming UDP and TCP are each sending 50,000 B, is it fair to say that IP fragmentation would happen more often with UDP than with TCP, because UDP never groups or splits data received from the application at L4?
-> If I try to send more than 65,507 B of data in a single UDP datagram, an error will occur. Right? (See the sketch at the end of this post.)
Thank you very much for the additional explanation :) Your explanation is the best!
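As a side note on the 65,507 B question above: that figure is the 65,535 B IPv4 total-length maximum minus the 20 B IP header and 8 B UDP header. Here is a minimal sketch, not from the thread, using Python's standard socket module (the loopback address and port are arbitrary), showing that the OS typically refuses an oversized datagram at send time with "Message too long" (EMSGSIZE), before it ever reaches L3:

```python
import socket

# Max UDP payload over IPv4: 65,535 (IP total length) - 20 (IP header) - 8 (UDP header)
MAX_UDP_PAYLOAD = 65507

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # One byte over the limit -- the OS refuses to build the datagram at all
    sock.sendto(b"x" * (MAX_UDP_PAYLOAD + 1), ("127.0.0.1", 9999))
except OSError as exc:
    print("send failed:", exc)   # typically EMSGSIZE, "Message too long"
finally:
    sock.close()
```

At exactly 65,507 B the sendto() succeeds, although the IP layer will then fragment the datagram heavily on a typical 1500 B MTU link, which relates to the fragmentation question above.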
12-07-2017 06:45 PM
Ten PCs (1 to 10) are connected to a switch.
1. PC1's ARP Aging Time < SW's MAC Address Aging Time
-> PC1's ARP entry expires first, so PC1 sends an ARP request and the SW floods it. The SW also refreshes PC1's entry in the MAC address table, restarting its aging timer.
2. PC1's ARP Aging Time > SW's MAC Address Aging Time
-> PC1 still has an ARP entry, so it sends a unicast packet. Since the SW's MAC table entry has already aged out and it doesn't know which port to forward that packet to, it floods that unicast packet.
Am I right? If so, I think the ARP aging time must be <= the MAC address aging time, since a unicast flood might carry a sensitive payload (see the sketch below).
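Purely as an illustration of the second scenario, here is a toy sketch, not from the thread, of a switch MAC address table with aging (Python; the port numbers and timers are made up): once the destination's entry has aged out, a unicast frame gets flooded out every other port.

```python
import time

class MacTable:
    """Toy model of a switch MAC address table with aging."""

    def __init__(self, aging_time):
        self.aging_time = aging_time      # seconds
        self.entries = {}                 # mac -> (port, last_seen)

    def learn(self, mac, port):
        # Learning (or re-learning) a source MAC refreshes its aging timer
        self.entries[mac] = (port, time.monotonic())

    def lookup(self, mac):
        entry = self.entries.get(mac)
        if entry is None:
            return None
        port, last_seen = entry
        if time.monotonic() - last_seen > self.aging_time:
            del self.entries[mac]         # entry has aged out
            return None
        return port

def forward(table, src_mac, dst_mac, in_port, all_ports):
    table.learn(src_mac, in_port)         # source learning on every frame
    out_port = table.lookup(dst_mac)
    if out_port is None:
        # Unknown unicast: flood out every port except the ingress port
        return [p for p in all_ports if p != in_port]
    return [out_port]

# PC1 (port 1) sends to PC2 (port 2), first with a fresh entry, then an aged-out one
table = MacTable(aging_time=300)
table.learn("PC2", 2)
print(forward(table, "PC1", "PC2", 1, range(1, 11)))    # [2] -- known unicast
table.entries["PC2"] = (2, time.monotonic() - 600)       # simulate the entry aging out
print(forward(table, "PC1", "PC2", 1, range(1, 11)))    # ports 2-10 -- unicast flood
```

The first lookup forwards only to port 2; after the entry ages out, the same frame is flooded to every port except the one it came in on, which is exactly why the flood could expose a sensitive payload.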
