The beauty of TCP/IP


Roedy Green

I was thinking about a pleasing feature of TCP/IP.

The throughput does not depend on how many hops it takes to get to
you. It depends on the speed of the slowest hop, which would normally
be the one next to you or next to the server. TCP/IP is like a very
long train of packets with many in flight at once. There is no fixed
path for packets.

A high-speed middle link could be the bottleneck if it were the most
congested.

On the other hand, if you send a datagram and get a datagram in
return, the response time depends on the number of hops.

So even though the Internet is designed at the fundamental level
around delivery of individual packets, it is most efficient when
delivering streams.
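
To put rough numbers on that distinction, here is a minimal Python
sketch of the two cases. The per-hop delays and link speeds below are
invented; it is only a toy model, not a measurement.

# Toy model: all figures are hypothetical.
hop_delay_ms = [5, 10, 15, 10, 5]              # one-way delay of each hop
link_rate_mbps = [100, 1000, 10000, 1000, 8]   # capacity of each hop; last mile is slowest

# Streaming (TCP-style, many packets in flight): once the pipeline is full,
# steady-state throughput is set by the slowest link, not by the hop count.
stream_throughput = min(link_rate_mbps)        # 8 Mbit/s

# Single request/response (datagram-style): the reply cannot come back
# until the request has crossed every hop, so every hop adds to the delay.
round_trip_ms = 2 * sum(hop_delay_ms)          # 90 ms

print(stream_throughput, "Mbit/s, limited by the bottleneck hop")
print(round_trip_ms, "ms round trip, the sum of all the hops")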
 

Luc The Perverse

Roedy Green said:
I was thinking about a pleasing feature of TCP/IP.

The throughput does not depend on how many hops it takes to get to
you. It depends on the speed of the slowest hop, which would normally
be the one next to you or next to the server. TCP/IP is like a very
long train of packets with many in flight at once. There is no fixed
path for packets.

A high-speed middle link could be the bottleneck if it were the most
congested.

On the other hand, if you send a datagram and get a datagram in
return, the response time depends on the number of hops.

So even though the Internet is designed at the fundamental level
around delivery of individual packets, it is most efficient when
delivering streams.


Yes - now come up with a way for me to explain to my grandma the difference
between latency and throughput - because thus far people just don't get it.
 

Roedy Green

Luc The Perverse said:
Yes - now come up with a way for me to explain to my grandma the difference
between latency and throughput - because thus far people just don't get it.

The difference between one guy with a bucket and a bucket brigade. The
bucket does not get there much faster by brigade, but more buckets per
minute do.
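
To put invented numbers behind the analogy (a back-of-the-envelope
sketch in Python; every figure below is made up for illustration):

distance_m = 100        # well to fire
speed_m_s = 1.0         # how fast a bucket moves, carried or handed along
spacing_m = 2.0         # gap between people in the brigade

first_bucket_s = distance_m / speed_m_s                     # 100 s either way (latency)
solo_buckets_per_min = 60 / (2 * distance_m / speed_m_s)    # one carrier, round trip: ~0.3/min
brigade_buckets_per_min = 60 / (spacing_m / speed_m_s)      # one handoff per gap: ~30/min

print(first_bucket_s, solo_buckets_per_min, brigade_buckets_per_min)

The first bucket takes the same 100 seconds either way; only the
buckets-per-minute figure changes.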
 

Luc The Perverse

Roedy Green said:
The difference between one guy with a bucket and a bucket brigade. The
bucket does not get there much faster by brigade, but more buckets per
minute do.

Ah thanks - that works well.
 

E.J. Pitt

Roedy said:
On the other hand, if you send a datagram and get a datagram in
return, the response time depends on the number of hops.

Not really. Datagrams are routed the same way as TCP segments, so
consecutive datagrams can take different paths through the network, just
as TCP segments can.
 

E.J. Pitt

Luc said:
Yes - now come up with a way for me to explain to my grandma the difference
between latency and throughput - because thus far people just don't get it.

The difference between a truckload of tapes and a modem. The truck has
high bandwidth, long latency, the modem has low bandwidth, short latency.
 

Luc The Perverse

E.J. Pitt said:
The difference between a truckload of tapes and a modem. The truck has
high bandwidth, long latency, the modem has low bandwidth, short latency.

While I appreciate your effort - I have more faith in the bucket brigade
idea :) Maybe it's just because I know my grandma.
 

Roedy Green

E.J. Pitt said:
Not really. Datagrams are routed the same way as TCP segments, so
consecutive datagrams can take different paths through the network, just
as TCP segments can.

Even if each packet goes a different route, by slightly different
numbers of hops, the round-trip time for the entire packet bundle
depends on the number of hops and the time of each hop, or, more
precisely, on the worst-case subpacket.
 

Roedy Green

Luc The Perverse said:
While I appreciate your effort - I have more faith in the bucket brigade
idea :) Maybe it's just because I know my grandma.

Why does Grandma want to know this?
 

Luc The Perverse

Roedy Green said:
Why does Grandma want to know this?


She doesn't - she was never interested, and that is likely why she just
couldn't grasp it.

It came up once a long time ago in a conversation with her.

The question has come up multiple times though; I remember once trying to
explain to a friend why a modem might be superior to a satellite connection
for playing games. I got through to him, but it was harder to explain than
I expected. The concept seems intuitive to me, so it is my belief that an
example to which someone can relate should help make it intuitive to other
people.
 

E.J. Pitt

Roedy said:
On the other hand, if you send a datagram and get a datagram in
return, the response time depends on the number of hops.
[I wrote:]
Not really. Datagrams are routed the same way as TCP segments, so
consecutive datagrams can take different paths through the network, just
as TCP segments can.
[Roedy Green wrote:]
Even if each packet goes a different route, by slightly different
numbers of hops, the round-trip time for the entire packet bundle
depends on the number of hops and the time of each hop, or, more
precisely, on the worst-case subpacket.

(a) Subpackets have nothing to do with it. A fragmented UDP datagram
will never be reassembled.

(b) Even if it were, why is this different from TCP/IP? RTT depends on
the total latency between endpoints. It doesn't really depend on the
number of hops; just one hop with a large latency will mostly determine
the RTT.
 

Roedy Green

E.J. Pitt said:
(b) Even if it were, why is this different from TCP/IP? RTT depends on
the total latency between endpoints. It doesn't really depend on the
number of hops; just one hop with a large latency will mostly determine
the RTT.

Performance in TCP/IP depends on throughput: packets per second
arriving. That roughly depends on the time to traverse the worst hop.
One hop becomes the bottleneck in the train. Speeding up the other
hops won't help, the same as with road traffic. So, oddly, having many
hops won't necessarily hurt performance.

Performance in UDP depends on end-to-end time. It is the sum of all
the hops. The more hops you have, the slower it will be.

Here is a typical local tracert:


  1    <10 ms   <10 ms   <10 ms  router [192.168.0.1]
  2     43 ms    23 ms    23 ms  shawgateway [24.69.120.1]
  3     25 ms    23 ms    23 ms  rd1cv-ge3-1-2.gv.shawcable.net [64.59.166.98]
  4     27 ms    24 ms    23 ms  rd2cv-pos0-0.gv.shawcable.net [66.163.72.1]
  5     27 ms    29 ms    30 ms  rc1wt-pos2-1.wa.shawcable.net [66.163.77.21]
  6     28 ms    29 ms    29 ms  rc2wt-pos1-0.wa.shawcable.net [66.163.68.2]
  7     27 ms    29 ms    29 ms  rx0wt-abovenet.wa.shawcable.net [66.163.68.22]
  8     29 ms    29 ms    30 ms  209.249.11.173.data-fortress.com [209.249.11.173]
  9     31 ms    35 ms    29 ms  a.cust.65-110-0-2.van.data-fortress.com [65.110.0.2]
 10     43 ms    38 ms    34 ms  mindprod.com [65.110.20.44]
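
Reading the trace: tracert reports a round trip from the source to each
hop, so the last line (~38 ms to mindprod.com) is roughly the end-to-end
RTT. A quick Python sketch over the median figures above (successive
probes are separate measurements, so treat the differences as rough):

# Median RTTs (ms) taken from the tracert above; hop 1 (<10 ms) is treated as ~5.
rtt_ms = [5, 23, 23, 24, 29, 29, 29, 29, 31, 38]

# Rough extra delay contributed by each successive hop.
per_hop_ms = [later - earlier for earlier, later in zip(rtt_ms, rtt_ms[1:])]
print(per_hop_ms)   # most hops add only a millisecond or two; the first real hop dominates

Note that a tracert shows only latency; it says nothing about which hop
would limit throughput.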
 

Roedy Green

E.J. Pitt said:
(a) Subpackets have nothing to do with it. A fragmented UDP datagram
will never be reassembled.

You and Gordon disagree on this. Where is the definitive
documentation?
 

Scott W Gifford

Roedy Green said:
You and Gordon disagree on this. Where is the definitive
documentation?

The definitive source is RFC 791. My reading is that fragmentation
and reassembly happen at the IP layer, below UDP, and therefore
fragmented UDP packets will always be reassembled. Some features
present in the UDP protocol (RFC 768), such as checksums, wouldn't
work without reassembly, suggesting this is a correct interpretation.

What UDP/IP stacks do in real life may be another story, so the real
definitive answer might come from sending some test packets across a
network and seeing what comes out the other side. Or from a network
guru, if any are handy. :)

----Scott.
 

E.J. Pitt

Scott said:
The definitive source is RFC 791. My reading is that fragmentation
and reassembly happen at the IP layer, below UDP, and therefore
fragmented UDP packets will always be reassembled. Some features
present in the UDP protocol (RFC 768), such as checksums, wouldn't
work without reassembly, suggesting this is a correct interpretation.

What UDP/IP stacks do in real life may be another story, so the real
definitive answer might come from sending some test packets across a
network and seeing what comes out the other side. Or from a network
guru, if any are handy. :)

Well I thought I was one ;-) but Gordon is correct. Reassembly takes
place at the IP layer as the RFC and Stevens vol I & II make clear. So
UDP datagrams can be reassembled. What *won't* happen is retransmission
of a component part if it is lost, unlike TCP.
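
A minimal way to see the application-level consequence is a Python
sketch over loopback. Caveat: on loopback the MTU is usually too large
for real fragmentation to occur, so this only shows that the socket API
deals in whole datagrams; on a real 1500-byte link the IP layer would
fragment and reassemble invisibly.

import socket

# A UDP payload larger than a typical 1500-byte Ethernet MTU.  Any
# fragmentation and reassembly happens at the IP layer, below the socket
# API: the receiver gets the whole datagram or nothing at all.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"x" * 4000, ("127.0.0.1", port))

data, _ = recv_sock.recvfrom(65535)
print(len(data))   # 4000: one complete datagram, never a fragment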
 

E.J. Pitt

Roedy said:
Performance in TCP/IP depends on throughput: packets per second
arriving. That roughly depends on the time to traverse the worst hop.
One hop becomes the bottleneck in the train. Speeding up the other
hops won't help, the same as with road traffic. So, oddly, having many
hops won't necessarily hurt performance.

Performance in UDP depends on end-to-end time. It is the sum of all
the hops. The more hops you have, the slower it will be.

I'm sorry, I still don't understand what the difference you are talking
about actually is. Performance *is* throughput, surely? Bytes per
second? And surely this is a linear function of the delay at each hop
and the number of hops, in both cases?
 

Roedy Green

E.J. Pitt said:
I'm sorry, I still don't understand what the difference you are talking
about actually is. Performance *is* throughput, surely? Bytes per
second? And surely this is a linear function of the delay at each hop
and the number of hops, in both cases?

No. Let me try once again. What counts with TCP/IP doing a long file
download is how many bytes per second come through the spigot. It
would not matter if each individual packet took 60 seconds to wend its
way through the network. Thus the number of hops is not critical.
What is critical is the bottleneck hop (which varies, since packets
don't all take precisely the same route). The response time of the
server is not critical either, so long as it can keep the pipeline filled.

For a datagram, what counts is the round-trip time for one packet to
make it to the server, for a response to be formulated, and for a packet
to be sent back. Thus every hop is critical, and the number of hops is
critical.
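
As a rough illustration of what each figure of merit actually measures,
here is a toy Python sketch over loopback. The absolute numbers are
meaningless on loopback; the point is only what is being timed.

import socket, time

echo = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo.bind(("127.0.0.1", 0))
addr = echo.getsockname()
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Response time: the round trip of a single request and its reply.
t0 = time.perf_counter()
client.sendto(b"ping", addr)
pkt, peer = echo.recvfrom(64)
echo.sendto(pkt, peer)
client.recvfrom(64)
print("one round trip:", time.perf_counter() - t0, "seconds")

# Throughput: bytes per second over a long run of packets; the timing of
# any single packet barely matters, only the sustained rate.
count, size = 1000, 1400
t0 = time.perf_counter()
for _ in range(count):
    client.sendto(b"x" * size, addr)
    echo.recvfrom(2048)
print("sustained rate:", count * size / (time.perf_counter() - t0), "bytes/second")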
 

E.J. Pitt

Roedy said:
No. Let me try once again. What counts with TCP/IP doing a long file
download is how many bytes per second come through the spigot. It
would not matter if each individual packet took 60 seconds to wend its
way through the network. Thus the number of hops is not critical.
What is critical is the bottleneck hop (which varies, since packets
don't all take precisely the same route). The response time of the
server is not critical either, so long as it can keep the pipeline filled.

For a datagram, what counts is the round-trip time for one packet to
make it to the server, for a response to be formulated, and for a packet
to be sent back. Thus every hop is critical, and the number of hops is
critical.

So you are comparing apples and oranges: bandwidth in TCP and latency in
UDP. So?

I also don't see how this justifies your original statements that the
Internet 'is most efficient when delivering streams' or that the number
of hops is the critical element in UDP timings.
 

Chris Smith

E.J. Pitt said:
So you are comparing apples and oranges: bandwidth in TCP and latency in
UDP. So?

I also don't see how this justifies your original statements that the
Internet 'is most efficient when delivering streams' or that the number
of hops is the critical element in UDP timings.

It was phrased in a confusing way. However, I'd agree that the Internet
is most efficient when delivering "streams"... that is, when delivering
large amounts of one-way data whose transfer time with the available
bandwidth dwarfs the communication latency. Whether these streams are
delivered via TCP or UDP is, of course, pretty much irrelevant to the
fact being discussed. I think that's led to a little confusion here.

I think Roedy's logic is that streams of large amounts of data are most
likely to be done via TCP rather than UDP, and therefore there exists
some correlation between TCP and more efficient applications. I'm not
so sure that correlation is real, though. Downloading of FILES for
local storage and later use would be done with TCP. However, online
viewing of real-time media such as video or audio is more commonly done
with UDP (and, confusingly, in a way that's also known as "streaming")
because a late packet is no good... if I miss a frame of the video,
that's fine; and I don't want to delay the receipt of the later frames
until the server can retransmit that frame that was missed. Hence,
streaming live audio/video is a counterexample to pushing this into the
TCP/UDP division.

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 

Roedy Green

Chris Smith said:
I think Roedy's logic is that streams of large amounts of data are most
likely to be done via TCP rather than UDP, and therefore there exists
some correlation between TCP and more efficient applications.

I was not trying to make any such statement. I was trying to point
out that UDP and TCP/IP stream performance depend on different
characteristics of the connection. Either you have a continuous
stream of data or you have intermittent packets. Even if you were to
use TCP/IP for exchanging intermittent packets, it would behave, from a
performance point of view, like UDP.

I thought I was stating something almost too obvious to mention, but
apparently it is not.
 
