The beauty of TCP/IP

Roedy Green

if I miss a frame of the video,
that's fine; and I don't want to delay the receipt of the later frames
until the server can retransmit that frame that was missed. Hence,
streaming live audio/video is a counterexample to pushing this into the
TCP/UDP division.

In that case, you have a stream, a bucket brigade of packets, so your
performance behaves for all practical purposes much as if you had used
TCP/IP for the stream.
 
Chris Smith

Roedy Green said:
I was not trying to make any such statement.

I'm still not convinced. You say you're not making any such statement,
yet you go on to make such a statement.

I was trying to point
out that UDP and TCP/IP stream performance depend on different
characteristics of the connection.

But they don't. Both UDP and TCP have latency and bandwidth. Latency
depends on the sum of the processing at each node and communication
times between hops, and bandwidth depends on the width of the narrowest
communication channel or the speed of the slowest processing node. Some
applications depend mainly on latency, and some on bandwidth; so the two
will perform differently, regardless of the choice of TCP or UDP.

The TCP/UDP choice is almost completely orthogonal to the question of
bandwidth or latency. The ONLY difference is that under TCP,
transmission errors can cause future data to be buffered at the target
node, and buffers have limited size... so in a path with very high
latency and occasional errors, TCP can fail to meet the bandwidth
potential of UDP over a similar path. Of course, the up-side is
reliable messaging, which is not available with UDP.
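To put rough numbers on that (the figures below are invented, purely to
show the shape of the effect): with a fixed-size buffer, a TCP sender can
have at most one buffer's worth of data unacknowledged, so its throughput
is capped at roughly window / RTT no matter how fat the pipe is. A quick
Java sketch:

// Illustrative only: a limited TCP window plus high latency caps
// throughput below the raw link bandwidth. All numbers are made up.
public class WindowLimit {
    public static void main(String[] args) {
        double windowBytes     = 64 * 1024;  // a typical default receive buffer
        double rttSeconds      = 0.2;        // 200 ms round trip
        double linkBytesPerSec = 10e6 / 8;   // a 10 Mbit/s path

        // With at most one window outstanding, TCP moves at most
        // window / RTT bytes per second, regardless of link capacity.
        double tcpCap = windowBytes / rttSeconds;

        System.out.printf("link capacity : %.0f KB/s%n", linkBytesPerSec / 1024);
        System.out.printf("TCP window cap: %.0f KB/s%n", tcpCap / 1024);
        // UDP has no such cap (and no such reliability): the sender can keep
        // the pipe full -- or flood it -- as it pleases.
    }
}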

So...

Either you have a continuous stream of data or you have intermittent
packets.

Yes.

Even if you were to
use TCP/IP for exchanging intermittent packets, it will behave from a
performance point of view like UDP.

There is no such thing as performing "like UDP". It can perform like a
continuous stream, or like intermittent packets with frequent round-
trips... but neither is more or less UDP.

As I said before, since many applications of UDP are for streaming large
amounts of data with no round trips (e.g., video streaming, multicast
broadcasting) and many applications of TCP are for intermittent
communication with frequent round-trips (e.g., telnet, ssh, ftp control,
most smtp and pop3 and imap, etc.), most people with experience in
network protocols will think that you've got it quite backwards. You
haven't really got it backward; there's really no significant
relationship at all.

I thought I was stating something almost too obvious to mention, but
apparently it is not.

I think your talk about TCP and UDP is confusing people. That's all.
Latency and bandwidth are not at issue.

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
E.J. Pitt

I've re-read all the above and it seems to me that Roedy is arguing in a
circle. If what you are interested in is stream performance, of course
bandwidth is the dominant characteristic of interest. If you are
interested in response times then of course latency is the dominant
characteristic of interest.

However, this is an *analytical* truth, not an insight about the
Internet: it flows logically from the definitions of the units
concerned. It has nothing specifically to do with TCP or UDP or IP; it
is equally true of a garden hose. If you want to fill a bucket quickly,
choose a large-bandwidth pipe, i.e. one with a large cross-section. If
you care about the time it takes a drop of water to traverse the hose,
choose a low-latency hose, i.e. a short one.

As for the statement that the number of hops is what determines UDP
performance, this is simply false. Total latency is the sum of the
individual latencies, no getting away from it. Similarly total effective
bandwidth is the minimum of the bandwidths of the individual segments,
nothing you can do about it.
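If it helps, here is the same arithmetic as a toy Java fragment (the
per-hop numbers are invented):

// Toy illustration of the two statements above: end-to-end latency is the
// sum over the hops, effective bandwidth the minimum. Values are invented.
public class PathMath {
    public static void main(String[] args) {
        double[] hopLatencyMs  = { 1, 1, 227, 12, 30 };        // latency per hop
        double[] hopMbitPerSec = { 1000, 100, 8, 100, 1000 };  // capacity per hop

        double totalLatencyMs = 0;
        double bottleneckMbit = Double.MAX_VALUE;
        for (int i = 0; i < hopLatencyMs.length; i++) {
            totalLatencyMs += hopLatencyMs[i];
            bottleneckMbit = Math.min(bottleneckMbit, hopMbitPerSec[i]);
        }
        System.out.println("total latency       = " + totalLatencyMs + " ms");
        System.out.println("effective bandwidth = " + bottleneckMbit + " Mbit/s");
    }
}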
 
Roedy Green

I'm still not convinced. You say you're not making any such statement,
yet you go on to make such a statement.

I am surely the expert on what I meant to say. I won't argue about how
many different ways there are to interpret or twist my words. I was
not writing a legal contract.

My best wording of what I mean to say is at
http://mindprod.com/jgloss/tcpip.html#PERFORMANCE

If that is still unclear, please suggest a better wording.
 
Chris Smith

Roedy Green said:
I am surely the expert on what I meant to say.

I'm just trying to communicate. I tried to sum up your statement in a
certain way... but then you disagreed, and then went on to (apparently)
say exactly what I had summed up earlier. Clearly, one of us is not
understanding the words of the other.

My best wording of what I mean to say is at
http://mindprod.com/jgloss/tcpip.html#PERFORMANCE

Ah. We still disagree, then. I guess the essence of our disagreement
can be summed up by replacing your statement:

"... TCP/IP is like a bucket brigade to put out a fire, where a
Datagram is like a single person running with the bucket back and
forth ... TCP/IP does not wait for a packet to be delivered before
sending the next."

with this expanded version:

"... TCP/IP is like a bucket brigade to put out a fire, where a
Datagram is exactly like a bucket brigade, except that we're only
looking at one bucket at a time. The actions of all parties are
identical; TCP just provides the terminology and administrative
capability to think about the whole brigade as a single thing.
Neither TCP nor datagrams (UDP) necessarily wait for a packet to be
delivered before sending the next. However, if I were writing an
application that needed to wait for acknowledgement from the other
side, I would almost certainly be using TCP. Hence, on average,
UDP (datagram) applications probably get better performance
(in terms of bandwidth) than TCP applications... but because of the
nature of the problems they solve, not because TCP is somehow
inherently inferior. The median is probably about the same."

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
E.J. Pitt

Roedy Green said:
However, if I were writing an
application that needed to wait for acknowledgement from the other
side, I would almost certainly be using TCP. Hence, on average,
UDP (datagram) applications probably get better performance
(in terms of bandwidth) than TCP applications...

Roedy, I don't follow the logic here at all. (a) The 'hence' does not
follow from what precedes; (b) if you needed to wait for an
acknowledgement you would probably not be taking advantage of windowing,
so why would you 'almost certainly be using TCP'?
... but because of the
nature of the problems they solve, not because TCP is somehow
inherently inferior. The median is probably about the same."

I don't know who said TCP is 'inferior'; I still don't know who is
comparing apples and oranges; I don't know what this 'median' could
possibly be measured with; and I still don't know why you refuse to add
up latencies over a path. Your oft-repeated statement that the number of
hops in UDP is critical assumes that all hops have the same latency,
which is untrue.
 
Chris Smith

No, he didn't. That was me, suggesting to Roedy how I would change the
statement from his glossary, which I believe is incorrect.

Roedy, I don't follow the logic here at all. (a) The 'hence' does not
follow from what precedes; (b) if you needed to wait for an
acknowledgement you would probably not be taking advantage of windowing,
so why would you 'almost certainly be using TCP'?

The logic is as follows:

Applications with round-trips (i.e., they wait on a response from the
remote host) get lower bandwidth than applications that do not do so.
Since waiting for a response assumes that it matters whether the
response comes, such applications also need a reliable transport
protocol and almost certainly would be implemented with TCP.

So applications that stream large amounts of data (and get better
bandwidth) may be written in TCP or UDP. Applications that do frequent
round trips, on the other hand, are written almost exclusively in TCP.
On average, TCP will see lower bandwidth because it includes
applications that do round-tripping.
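To make the effect concrete (numbers invented, and assuming the simplest
possible case of a sender that stops and waits for a reply after every
block, which real protocols may or may not do):

// Crude model: a protocol that waits for a reply after every block pays
// one round trip per block; a streaming protocol pays it only once.
// All values are invented for illustration.
public class RoundTripCost {
    public static void main(String[] args) {
        double bandwidth = 1.25e6;    // bytes/s on the path (10 Mbit/s)
        double rtt       = 0.1;       // 100 ms round trip
        double block     = 8 * 1024;  // bytes sent before waiting for a reply

        double streaming   = bandwidth;                          // pipe stays full
        double stopAndWait = block / (rtt + block / bandwidth);  // one RTT per block

        System.out.printf("streaming    : %.0f KB/s%n", streaming / 1024);
        System.out.printf("stop-and-wait: %.0f KB/s%n", stopAndWait / 1024);
    }
}

Nothing in that arithmetic cares whether the packets are TCP or UDP; the
round trips are what hurt.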
I don't know who said TCP is 'inferior'

It was pre-empting a conclusion that seemed likely from the kind of
logic in the original article.

; I still don't know who is comparing apples and oranges;

It's my belief that Roedy is.

I don't know what this 'median' could possibly be measured with;

It's a guess. Jeez, I'm not defending a dissertation here. Do you have
any reason to believe it's a bad guess?

and I still don't know why you refuse to add
up latencies over a path. Your oft-repeated statement that the number of
hops in UDP is critical assumes that all hops have the same latency,
which is untrue.

I don't think Roedy really does refuse to add latencies over a path.
I've been reading this thread, and I've never seen him refuse.

Roedy is also not defending a dissertation. So yeah, of course latency
at each hop differs. Nevertheless, as a general trend, an IP packet
sent over more hops is going to experience higher latency than an IP
packet sent over fewer hops. That's a _perfectly_ reasonable statement,
in absence of specific reason to believe that the longer path has
significantly lower latency per hop.

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
E.J. Pitt

Chris said:
Do you have
any reason to believe it's a bad guess?

Yes. What on earth is it a guess *at*? Measured in what? What exactly is
the median of X apples and Y oranges?

So yeah, of course latency
at each hop differs. Nevertheless, as a general trend, an IP packet
sent over more hops is going to experience higher latency than an IP
packet sent over fewer hops. That's a _perfectly_ reasonable statement,
in absence of specific reason to believe that the longer path has
significantly lower latency per hop.

It's a matter of logic. It's a universal statement which therefore can
be disproved by a single counter-example. The latency around my home LAN
is around 1ms per hop. The latency via xDSL to my ISP is around 227ms. QED.
 
Chris Smith

E.J. Pitt said:
Yes. What on earth is it a guess *at*? Measured in what? What exactly is
the median of X apples and Y oranges?

The median bandwidth achieved by TCP applications that are network-
limited in performance is likely to be about the same as the median
bandwidth achieved by UDP applications that are similarly network-
limited in performance. The mean bandwidth for such TCP applications,
on the other hand, is likely to be lower than the mean bandwidth for
such UDP applications.

It's a matter of logic. It's a universal statement which therefore can
be disproved by a single counter-example. The latency around my home LAN
is around 1ms per hop. The latency via xDSL to my ISP is around 227ms. QED.

People speak in generalizations. In fact, that's one of the fundamental
higher-level thought processes. If either Roedy or I claimed to be
giving a rigorous performance model, you might have a point. As it is,
I don't care.

No one's implementing an IP routing algorithm based on Roedy's "bucket
brigade" analogy. It's just a way of explaining concepts. If it
succeeds, it will be intuitively obvious that the speed of a hop
matters. If it doesn't succeed, it failed anyway.

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
E.J. Pitt

Chris
On the other hand if you send a datagram and send a datagram in
return, the response time depends on the number of hops.

This is the original statement I took issue with. Has it been corrected
or retracted?

So even though the Internet is designed at the fundamental level
around delivery of individual packets, it is most efficient when
delivering streams.

and the same is true of a garden hose. Or a piece of straight wire.

enough of this

EJP
 
Chris Smith

E.J. Pitt said:

(I don't know what you did to munge the References header, but it's
lucky I saw your response. You might want to be more careful about
that.)
This is the original statement I took issue with. Has it been corrected
or retracted?

The statement above, as written, is true. I don't know who said it --
probably Roedy? -- so I couldn't retract it. But in any case, I don't
see why anyone would retract it. The response time absolutely does
depend on the number of hops (among other things, of course).

My argument with Roedy is that the word "datagram" above is worse than
irrelevant. The statement is technically true of all IP packets,
assuming that you send one and then one gets sent back in response.
However, this usage pattern is basically broken with UDP except in
tightly controlled known reliable environments, since the packet is
never guaranteed to arrive. Therefore, it's practically guaranteed
you'd be using TCP when you observe this poor performance due to high
latency and frequent round-trips, even though it's possible for any
variety of IP.

and the same is true of a garden hose. Or a piece of straight wire.

So? Several people have said they find Roedy's analogy useful. You
think they are wrong for understanding it better in Roedy's terms?
Frankly, it doesn't matter how wrong you think that is. It's still
true.

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
Roedy Green

This is the original statement I took issue with. Has it been corrected
or retracted?

It is true. It is also true it depends on the combined latencies of
all the hops.
 
frankgerlach

I am also thinking about the ease and "beauty" of a TCP server written
in java. Just create a ServerSocket, go into an accept() loop and start
worker threads for the connections you get. FINITO !
No need for Apache, Tomcat, Servlets or J2EE monsters like WebSphere.
The only thing you might need is the ability to write a scanner and parser
for the grammar of your client/server conversation. People who are not
able to write a recursive descent parser might go with the J2EE
monsters; the rest can implement the TCP acceptor described above.
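Something along these lines, say (a bare-bones sketch; the port number is
arbitrary and handle() is just a trivial echo stand-in for a real
protocol):

import java.io.*;
import java.net.*;

// Bare-bones TCP acceptor: one worker thread per accepted connection.
public class Acceptor {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(4444);   // arbitrary port
        while (true) {
            final Socket client = server.accept();      // blocks until a client connects
            new Thread(new Runnable() {
                public void run() {
                    try {
                        handle(client);
                    } catch (IOException e) {
                        e.printStackTrace();
                    } finally {
                        try { client.close(); } catch (IOException ignored) {}
                    }
                }
            }).start();
        }
    }

    // The scanner/parser for the client/server conversation would go here;
    // this version just echoes lines back.
    static void handle(Socket client) throws IOException {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()));
        PrintWriter out = new PrintWriter(client.getOutputStream(), true);
        String line;
        while ((line = in.readLine()) != null) {
            out.println("echo: " + line);
        }
    }
}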
 
frankgerlach

There might be a third possibility for those who don't know what LL(1)
means: Serialization. No need for RMI of course - the simple Acceptor
does the job.
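Something like this, perhaps (sketch only; both ends must have the same
Serializable classes on the classpath, and the payload here is just a
String to keep it self-contained):

import java.io.*;
import java.net.*;

// Variant of the acceptor idea that ships serialized objects instead of a
// hand-written text protocol.
public class ObjectAcceptor {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(4445);   // arbitrary port
        while (true) {
            Socket client = server.accept();
            try {
                // Create the output stream first and flush its header, so the
                // peer's ObjectInputStream constructor does not block forever.
                ObjectOutputStream out =
                        new ObjectOutputStream(client.getOutputStream());
                out.flush();
                ObjectInputStream in =
                        new ObjectInputStream(client.getInputStream());

                Object request = in.readObject();        // whatever the peer sent
                out.writeObject("got: " + request);      // any Serializable reply
                out.flush();
            } finally {
                client.close();
            }
        }
    }
}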
 
Chris Smith

frankgerlach said:
I am also thinking about the ease and "beauty" of a TCP server written
in java. Just create a ServerSocket, go into an accept() loop and start
worker threads for the connections you get. FINITO !
No need for Apache, Tomcat, Servlets or J2EE monsters like WebSphere.
The only thing you might need is the ability to write a scanner and parser
for the grammar of your client/server conversation. People who are not
able to write a recursive descent parser might go with the J2EE
monsters; the rest can implement the TCP acceptor described above.

WebSphere, of course, does a lot of things... and implementing network
protocols is the least of them. Don't get me wrong; certainly most of
what WebSphere does is completely useless to most applications... but
nevertheless there are additional features. One of those things, among
the more useful, is to implement a considerably more scalable threading
model than what you describe above. At least use
java.util.concurrent.ThreadPoolExecutor.
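Roughly like this (sketch; the pool sizes and queue length are arbitrary):

import java.io.IOException;
import java.net.*;
import java.util.concurrent.*;

// Same accept loop as a thread-per-connection server, but the work is
// handed to a bounded pool instead of an unbounded pile of threads.
public class PooledAcceptor {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = new ThreadPoolExecutor(
                4, 32,                                   // core and maximum threads
                60, TimeUnit.SECONDS,                    // idle non-core threads retire
                new ArrayBlockingQueue<Runnable>(100));  // bounded backlog
        ServerSocket server = new ServerSocket(4444);
        while (true) {
            final Socket client = server.accept();
            pool.execute(new Runnable() {
                public void run() {
                    handle(client);      // per-connection protocol work goes here
                }
            });
        }
    }

    static void handle(Socket client) {
        try {
            client.close();              // placeholder: a real server would talk first
        } catch (IOException ignored) {
        }
    }
}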

Even with plain network protocols, you can only design your own if you
plan to implement both sides of the conversation, which is quite rare.
If I want to talk to a web browser, I'm going to write servlets rather
than re-implement HTTP 1.1. After all, servlets and Tomcat are pretty
simple and lightweight wrappers around HTTP anyway, along with some
decent higher-level capabilities.
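For comparison, the whole server side of "talk to a web browser" can be as
little as this (a sketch; the container supplies the HTTP parsing, the
sockets and the threading):

import java.io.IOException;
import javax.servlet.http.*;

// Minimal servlet: the container (Tomcat or whatever) handles HTTP 1.1,
// connections and threads; we only fill in the response.
public class HelloServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("hello from a servlet");
    }
}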

--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.

Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation
 
frankgerlach

That was exactly my point - who needs all the bells and whistles of
WebSphere? The management guys who dictate its use? What they need
is an alternative to PowerPoint :))
 
