*BSD News Article 72982


Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.ecn.uoknor.edu!qns3.qns.net!imci4!newsfeed.internetmci.com!news.mathworks.com!uunet!in1.uu.net!nntp.inet.fi!news.funet.fi!news.helsinki.fi!news
From: torvalds@linux.cs.Helsinki.FI (Linus Torvalds)
Newsgroups: comp.os.linux.networking,comp.unix.bsd.netbsd.misc,comp.unix.bsd.freebsd.misc
Subject: Re: TCP latency
Date: 6 Jul 1996 13:29:38 +0300
Organization: A Red Hat Commercial Linux Site
Lines: 175
Message-ID: <4rlf6i$c5f@linux.cs.Helsinki.FI>
References: <4paedl$4bm@engnews2.Eng.Sun.COM> <31D2F0C6.167EB0E7@inuxs.att.com> <4rfkje$am5@linux.cs.Helsinki.FI> <31DC8EBA.41C67EA6@dyson.iquest.net>
NNTP-Posting-Host: linux.cs.helsinki.fi
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit
Xref: euryale.cc.adfa.oz.au comp.os.linux.networking:44196 comp.unix.bsd.netbsd.misc:3944 comp.unix.bsd.freebsd.misc:22942

In article <31DC8EBA.41C67EA6@dyson.iquest.net>,
John S. Dyson <toor@dyson.iquest.net> wrote:
>Linus Torvalds wrote:
>> 
>> No. TCP is a _stream_ protocol, but that doesn't mean that it is
>> necessarily a _streamING_ protocol.
>> 
>Okay, you CAN kind-of misuse it by using TCP for a single transaction,
>like simple HTTP transactions.

That's NOT misusing TCP. You're showing a very biased view here. Just
because YOU like streaming TCP does NOT mean that TCP should necessarily
be streaming. There is a lot more to TCP than just TCP windows.

TCP has lots of huge advantages over just about _anything_ else, which
makes it the protocol of choice for most things.

 - It's everywhere. Just about _everything_ supports TCP, and unless you
   want to paint yourself into a small corner of the market you'd better
   use TCP these days (UDP matches this point too, but you can forget
   about IPX, appletalk and TTCP).
 - it works reasonably well for lots of different things. UDP is useless
   for lots of things (nobody sane would ever have done http with UDP,
   it simply would have been a bitch)
 - it's there, and it's there NOW. It's not some great new technology
   that will revolutionize the world in ten years. It WORKS.

>		  That is the reason for the implementation
>of the so far little used protocol extension TTCP.  (FreeBSD has it
>for example.)  Also, there are advanced features in www browsers/servers
>like Netscape where the connection is kept up for more than one transaction.
>(Why be silly to re-establish a connection, when you could have kept the
>previous one up?) 

We're not talking about _connection_ latency, we're talking about
_packet_ latency.  The tests quoted here have not been about how fast
you can connect to a host, but how fast you can pass packets back and
forth over TCP.  That's exactly the kind of thing you see with http and
keeping the connection open, or with NFSv3 over TCP, or with a
distributed lock manager (..databases) that has TCP connections to the
clients or with a _lot_ of things. 
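
To be concrete about what that means: the tests time roughly the kind of
loop below - a sketch over loopback, not any particular benchmark's
source, and the names and constants in it are just for the example - where
one byte is bounced back and forth over an already-open connection, so
connection setup never enters into the number at all.

/* Sketch of a TCP round-trip ("packet latency") test over loopback.
 * One byte bounced back and forth ROUNDS times over one connection,
 * so connection setup is never part of what gets measured.  Error
 * checking omitted for brevity. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define ROUNDS 10000

int main(void)
{
        struct sockaddr_in addr;
        struct timeval t0, t1;
        socklen_t len = sizeof(addr);
        int lfd, fd, i;
        char c = 'x';
        double us;

        lfd = socket(AF_INET, SOCK_STREAM, 0);
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = 0;                      /* any free port */
        bind(lfd, (struct sockaddr *) &addr, sizeof(addr));
        listen(lfd, 1);
        getsockname(lfd, (struct sockaddr *) &addr, &len);

        if (fork() == 0) {                      /* child: echo bytes back */
                fd = accept(lfd, NULL, NULL);
                while (read(fd, &c, 1) == 1)
                        write(fd, &c, 1);
                _exit(0);
        }

        fd = socket(AF_INET, SOCK_STREAM, 0);
        connect(fd, (struct sockaddr *) &addr, sizeof(addr));

        gettimeofday(&t0, NULL);
        for (i = 0; i < ROUNDS; i++) {          /* the timed part */
                write(fd, &c, 1);
                read(fd, &c, 1);
        }
        gettimeofday(&t1, NULL);

        us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%.1f usec per round trip\n", us / ROUNDS);

        close(fd);
        wait(NULL);
        return 0;
}

Note that there is no connect() inside the timed loop - that's the whole
distinction between connection latency and packet latency.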

>With many/most web pages being 1-2K, the transfer rate starts to
>overcome the latency, doesn't it?  For very small transactions, maybe
>100 bytes the latency is very very important.  How many web pages are that
>small???

1-2kB is nowhere _near_ streaming.  Over normal ethernet 1460 bytes is
still just one packet, to start seeing TCP in a streaming environment
you have to actually fill up your window (which for any reasonable TCP
implementation is usually at least 8kB).  And most web pages probably
are a lot smaller than 8kB..  (and despite all the hype, let's not get
stuck on www: on a smaller scale something like a lock manager can be a
lot more performance critical for some application, for example)
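
If you want to see what your own stack gives you, the socket send buffer
is a rough first approximation of the usable window - a sketch only, and
the default you get back is of course implementation dependent:

/* Print the default send buffer size (an upper bound on how much
 * unacknowledged data TCP will keep in flight).  Sketch only. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int sndbuf = 0;
        socklen_t len = sizeof(sndbuf);

        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
        printf("default SO_SNDBUF: %d bytes\n", sndbuf);
        close(fd);
        return 0;
}

Which just restates the point: a 1-2kB transfer never gets anywhere near
filling that.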

Again, I want to point out that I'm not arguing against throughput here:
throughput is at _least_ as important as latency for TCP, and most
traditional uses of TCP are definitely throughput-bound.  So don't get
the idea that I find throughput unimportant - I just want to point out
that latency _does_ matter, and can matter a lot more than throughput
for some applications.  And those applications aren't just "make
believe": they are real-world everyday stuff. 

>Now I can understand that there might be specific applications where there
>are only a few hundred bytes transferred, but those appear to be in the
>minority. (Especially where it is bad that a latency of 100usecs worse
>is bad in a SINGLE THREADED environment.)  Note -- in most single threaded
>environments, 100usecs is in the noise.

Again, latency is probably more important than throughput up to around
10kB or so (TCP window), and it can actually get MORE important for a
multithreading system.  Because low latency can also mean that the
system spends less time sending out the packets, so it can go on serving
the _next_ client faster. 

>There are a few applications that need very low latency, but remember,
>latency != CPU usage also.

Sure, latency != CPU use, and some systems definitely try to maximize
throughput at the cost of latency.  For example, the whole idea with the
Nagle algorithm for TCP is to get better throughput at the cost of some
latency - but only when we notice that the latency of the network itself
is higher than that of the application (ie Nagle only holds back the
_next_ packet if we haven't yet had an acknowledgement for the
previous one). 
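
This is also why latency-sensitive programs routinely turn Nagle off on
their sockets: the knob is the standard TCP_NODELAY socket option.  A
minimal sketch (the helper name here is just for illustration):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Illustrative helper: disable Nagle on a connected TCP socket so small
 * writes go out immediately instead of waiting for the previous packet's
 * ACK.  Returns whatever setsockopt() returns; caller checks for errors. */
int set_nodelay(int fd)
{
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}

Whether turning it off is a win depends entirely on whether you are
latency-bound or throughput-bound - which is exactly the tradeoff being
discussed here.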

However, in many cases latency _is_ CPU use, and we can't just gloss
over latency by taking advantage of concurrency. It works sometimes,
but it can be very bad for other things (maybe you've followed the
threads on comp.arch about the "Tera" architecture, where essentially
the same thing has been discussed wrt memory latency).

You tend to always bring up "heavy load" as an argument against low
latency, and that is not really the answer either.  You _can_ hide
latency with concurrency, but that definitely does not work for
everything.  Dismissing numbers because they were done under conditions
where the machine wasn't doing anything else is as stupid as dismissing
numbers that were done under load.  Can't you see that?

>Retorical question: are all of the pipelined CPU's "low quality" because
>their latency is long, but their execution rate is fast???  They do
>things in parallel, don't they?  (Scheduling 101 :-)).

Actually, they do things in parallel, but they don't make latency
worse.  They just take advantage of the fact that they have multiple
("independent") hardware units that do separate work, and thus they can
improve throughput by trying to avoid letting those hardware units be
idle.  And the end result is that the latency for a bunch of operations
can be lower, even though the latency for just one operation has stayed
the same. 

The same goes for SMP: the latency of a single CPU is not made worse by
having another CPU do other things - they can work in parallel (yes,
this is oversimplified, and in some cases latency _does_ get worse, but
that is generally considered a real problem - it's definitely not a
feature). 

Oh, and just about _anybody_ would prefer one CPU that is twice as fast
over two separate CPU's.  It is almost _always_ faster (and for some
things it will be twice as fast), and the only problem with that is that
it is almost always also a LOT more expensive.  This was one of my
points in the previous message: throughput is "easy", while latency is
"hard". 

That's why hardware designers (and software people too) often improve
throughput instead of improving latency. Simply because it's _easier_ to
do. Not because throughput is any more important than latency.

(On some level throughput _equals_ latency - throughput can be seen just
as the "batch latency".  So you could say that this whole argument is
really about looking at latency from two different sides: latency for
one operation, and latency for a "batch" of operations.  BOTH are
supremely important,
and anybody who thinks otherwise is not really seeing the full picture). 

>> Wrong. TCP latency is very important indeed. If you think otherwise,
>> you're probably using TCP just for ftp.
>>
>I guess FreeBSD-current makes it up by being faster with the fork/execs
>done by simple www servers. (About 1.1msecs on a properly configured
>P5-166.)

For www servers, the most important part is probably low context switch
overhead and good per-packet and connection latency (and note that the
lmbench numbers that have been floating around are _not_ connection
latency, they are "packet" latency - the two are likely to have a strong
correlation, but are not necessarily the same thing).  The fork/exec
stuff isn't as large a problem because most good servers will try to
pre-fork, exactly because they want low latency. 
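
The pre-fork idea in a nutshell - a sketch, not how any real server is
necessarily structured, and the child count and port below are arbitrary:
pay fork() once at startup, then every worker just sits in accept() on
the shared listening socket, so the fork cost never shows up in the
per-request latency.

/* Pre-forking echo "server" sketch: N workers blocked in accept() on
 * one listening socket.  Error checking omitted. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define NCHILDREN 8                             /* arbitrary */

static void worker(int lfd)
{
        char buf[1024];
        int fd, n;

        for (;;) {
                fd = accept(lfd, NULL, NULL);
                if (fd < 0)
                        continue;
                n = read(fd, buf, sizeof(buf)); /* the "request" */
                if (n > 0)
                        write(fd, buf, n);      /* the "response" (echo) */
                close(fd);
        }
}

int main(void)
{
        struct sockaddr_in addr;
        int lfd, i;

        lfd = socket(AF_INET, SOCK_STREAM, 0);
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);            /* arbitrary example port */
        bind(lfd, (struct sockaddr *) &addr, sizeof(addr));
        listen(lfd, 64);

        for (i = 0; i < NCHILDREN; i++)
                if (fork() == 0)
                        worker(lfd);            /* workers never return */

        for (;;)
                wait(NULL);                     /* parent just reaps */
}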

That's not to say I don't want to beat BSD: do you have actual
comparisons against Linux? I suspect that the problem with Linux might
be user-level overhead of the shared libraries, not the actual fork/exec
itself - if you have numbers with shared/static binaries I'd be very
interested indeed.. 

>> Think Quality (latency) vs Quantity (throughput).  Both are important,
>> and depending on what you need, you may want to prioritize one or the
>> other (and you obviously want both, but that can sometimes be
>> prohibitively expensive).
>> 
>The quality vs. quantity is interesting, since I consider for certain
>applications, slower transfer rates *significantly* impact quality.

I'm not arguing _against_ throughput.  Try to get that straight. 
Throughput is important too (so is Quantity - would you rather have a
_really_ really good and flat road system in the US that only connects
New York and San Francisco, or do you accept a few potholes and know
that you can drive anywhere? You'd like both, but are you going to pay
for it?). 

Quantity vs Quality is NOT an either/or: it's a balancing issue.  I'm
just telling you that latency is very important too, and if you just say
"throughput" all the time, you're losing out on 50% of the equation.. 

		Linus