*BSD News Article 72986



Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.ecn.uoknor.edu!solace!nntp.uio.no!Norway.EU.net!EU.net!newsfeed.internetmci.com!zdc!zdc-e!szdc-e!news
From: "John S. Dyson" <toor@dyson.iquest.net>
Newsgroups: comp.os.linux.networking,comp.unix.bsd.netbsd.misc,comp.unix.bsd.freebsd.misc
Subject: Re: TCP latency
Date: Sat, 06 Jul 1996 12:34:50 -0500
Organization: John S. Dyson's home machine
Lines: 202
Message-ID: <31DEA3A3.41C67EA6@dyson.iquest.net>
References: <4paedl$4bm@engnews2.Eng.Sun.COM> <31D2F0C6.167EB0E7@inuxs.att.com> <4rfkje$am5@linux.cs.Helsinki.FI> <31DC8EBA.41C67EA6@dyson.iquest.net> <4rlf6i$c5f@linux.cs.Helsinki.FI>
NNTP-Posting-Host: dyson.iquest.net
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.0b5Gold (X11; I; FreeBSD 2.2-CURRENT i386)
Xref: euryale.cc.adfa.oz.au comp.os.linux.networking:44211 comp.unix.bsd.netbsd.misc:3948 comp.unix.bsd.freebsd.misc:22946

Linus Torvalds wrote:
> 
> In article <31DC8EBA.41C67EA6@dyson.iquest.net>,
> John S. Dyson <toor@dyson.iquest.net> wrote:
> >Linus Torvalds wrote:
> >>
> >> No. TCP is a _stream_ protocol, but that doesn't mean that it is
> >> necessarily a _streamING_ protocol.
> >>
> >Okay, you CAN kind-of misuse it by using TCP for a single transaction,
> >like simple HTTP transactions.
> 
> That's NOT misusing TCP. You're showing a very biased view here. Just
> because YOU like streaming TCP does NOT mean that TCP should necessarily
> be streaming. There is a lot more to TCP than just TCP windows.
>
Linus, your arrogance is showing here... making personal disparaging
remarks.  You DO NOT need to do that.  Note that I used the term
"kind-of." :-(  That qualification was added for a reason -- so that you
would understand that TCP doesn't do that task as well as it does other
things.

> 
> TCP has lots of huge advantages over just about _anything_ else, which
> makes it the protocol of choice for most things.
> 
Believe it or not, I agree that it is the best (except for TTCP) for the
application, given what is available.  It just isn't as well suited to
the application as TTCP is.  That is one reason WHY TTCP was invented
and added to FreeBSD.

>
>  - It's everywhere. Just about _everything_ supports TCP, and unless you
>    want to paint yourself into a small corner of the market you'd better
>    use TCP these days (UDP matches this point too, but you can forget
>    about IPX, appletalk and TTCP).
>
That is why FreeBSD did not get rid of TCP in favor of TTCP :-).  You are
turning a simple criticism of the benchmark into a much bigger argument
than it needs to be.

>  - it works reasonably well for lots of different things. UDP is useless
>    for lots of things (nobody sane would ever have done http with UDP,
>    it simply would have been a bitch)
I agree that it works reasonably well, so what is your point?  It is
not the best possible protocol for the task.  That was my point. 

>  - it's there, and it's there NOW. It's not some great new technology
>    that will revolutionalize the world in ten years. It WORKS.
>
Right.
 
>
> 
> We're not talking about _connection_ latency, we're talking about
> _packet_ latency.  The tests quoted here have not been about how fast
> you can connect to a host, but how fast you can pass packets back and
> forth over TCP.  That's exactly the kind of thing you see with http and
> keeping the connection open, or with NFSv3 over TCP, or with a
> distributed lock manager (..databases) that has TCP connections to the
> clients or with a _lot_ of things.
>
But in the most common use of HTTP, a single message per connection is
the norm.  Now, how many packet exchanges are needed to set up a TCP
connection?  Hmmm?  Please refer to Stevens; it will explain it to you.
(TTCP helps fix that problem.)
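
To make that concrete, here is a rough sketch (mine, not from this
discussion or from the FreeBSD sources) of what a TTCP-style client
request can look like, assuming the implied-connect sendto()/MSG_EOF
interface described for RFC 1644 in Stevens.  The function name and the
error handling are illustrative only; the point is that the SYN, the
request data, and the FIN can go out together instead of only after a
full three-segment handshake.

    /*
     * Hypothetical TTCP-style client request (sketch, assumptions as
     * above): one sendto() with MSG_EOF does the implied connect,
     * sends the request, and marks the end of the data.
     */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    int
    ttcp_request(const char *addr, unsigned short port,
        const char *req, size_t len)
    {
            struct sockaddr_in sin;
            int s = socket(AF_INET, SOCK_STREAM, 0);

            if (s < 0)
                    return (-1);
            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_port = htons(port);
            sin.sin_addr.s_addr = inet_addr(addr);

            /* Implied connect: no separate connect() round trip. */
            if (sendto(s, req, len, MSG_EOF,
                (struct sockaddr *)&sin, sizeof(sin)) < 0) {
                    close(s);
                    return (-1);
            }
            return (s);     /* caller reads the reply, then closes */
    }

With ordinary TCP the same request pays for a connect() (one full round
trip) before the first byte of data is sent, which is exactly the
per-connection overhead being argued about.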
 
> >With many/most web pages being 1-2K, the transfer rate starts to
> >overcome the latency, doesn't it?  For very small transactions, maybe
> >100 bytes the latency is very very important.  How many web pages are that
> >small???
> 
> 1-2kB is nowhere _near_ streaming.
True, but did I say anything to disagree with that?  However, that 1-2K
starts to make latency much less important relative to bandwidth.  You
appear to speak in absolutes.

> 
> Again, latency is probably more important than throughput up to around
> 10kB or so (TCP window), and it can actually get MORE important for a
> multithreading system.  Because low latency can also mean that the
> system spends less time sending out the packets, so it can go on serving
> the _next_ client faster.
>
My criticism of the benchmark is that it does not model what you are
claiming above.  When you speak of latency, you must qualify how much
latency and how much bandwidth.  If there is only a minor difference in
latency, then bandwidth becomes more important.  You are speaking in
absolutes, with little qualification as to relative values.  You also
have to describe the environment for the test.  If the test is the only
load on the (sub)system, then it only shows the performance under the
benchmark load.  We should also standardize tests for systems that might
have hundreds of connections active, and also concurrent connection
requests.

The latency results are static, and under very limited conditions.
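
To show why the relative values matter, here is a back-of-the-envelope
sketch (the numbers are my own assumptions, not measurements from either
system): total time for a small reply is roughly per-packet latency plus
size divided by bandwidth, and by 1-2K the second term is already
comparable to a few hundred microseconds of latency.

    /* Rough latency-vs-bandwidth arithmetic; all numbers are assumed. */
    #include <stdio.h>

    int
    main(void)
    {
            double rtt = 500e-6;    /* assumed per-packet latency: 500 usec */
            double bw  = 1.0e6;     /* assumed effective bandwidth: 1 MB/s  */
            double sizes[] = { 100.0, 1024.0, 2048.0, 10240.0 };
            int i;

            for (i = 0; i < 4; i++)
                    printf("%6.0f bytes: latency %4.0f usec, transfer %5.0f usec\n",
                        sizes[i], rtt * 1e6, (sizes[i] / bw) * 1e6);
            return (0);
    }

At 100 bytes the latency term dominates; at 1-2K the two terms are of the
same order; by 10K the transfer term dominates.  That is the kind of
qualification I am asking for.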

 
>
> 
> You tend to always bring up "heavy load" as an argument against low
> latency, and that is not really the answer either.  You _can_ hide
> latency with concurrency, but that definitely does not work for
> everything.  Dismissing numbers because they were done under conditions
> where the machine wasn't doing anything else is as stupid as dismissing
> numbers that were done under load.  Can't you see that?
>
Please look into how much CPU is needed when you have many active TCP
connections.  It goes up.  Yes, I do use load in my arguments, because it
IS important to consider for overall system performance measurement.
Don't you consider the scalability of your algorithms, or do you just
make things fast for one or two processes (which makes lmbench look
good???)

>
> 
> That's not to say I don't want to beat BSD: do you have actual
> comparisons against Linux? I suspect that the problem with Linux might
> be user-level overhead of the shared libraries, not the actual fork/exec
> itself - if you have numbers with shared/static binaries I'd be very
> interested indeed..
> 
When I do my benchmarks between FreeBSD and Linux, I *usually* do them
without shared libs, because I usually take the kernel view of things...
If you remember the old lmbench runs comparing FreeBSD vs. Linux, people
would compare our dynamic shared libs results against the old Linux
static shared libs, and FreeBSD would come out far behind.  We did a lot
to speed up FreeBSD with dynamic shared libs so that it is almost as fast
as Linux with static shared libs.

Next we noticed that we could gain a lot more in performance with a few
minor changes (enhancements).  Now, our fork/exec perf is significantly
faster than our old fork (alone) perf.  The way that process fork is done
has now been totally changed.  Processes are now represented differently
in memory, and almost all of the performance disadvantages of the
excellent (I can say that without ego, because I did not invent them)
abstractions of our pseudo-Mach VM system are gone.  You and we started
from different directions: we got stuck with legacy code, and you have
had to implement things from scratch.  Both are difficult to make perform
well.

My recent benchmark runs show that FreeBSD-current (kernel) is 5-10%
faster than Linux in fork/exec operations.  There are more improvements
being made to -current (not ready yet) that add a bit more performance to
it.  The best that I have seen on my machine with pre-current stuff is
about 460-480 usec fork times (the best I had gotten with Linux 1.3.9x
was about 540).  -current shows about 500-520 (built only with -O,
without the latest enhancements).  We generally don't build our kernels
with (-O2 -fomit-frame-pointer), so sometimes we can gain a few percent
in certain benchmarks there also.  (We have chosen to make stack
tracebacks available by default.)
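
For reference, the kind of number being quoted comes from a loop like the
sketch below (my own minimal version, not the actual lmbench code): time
N fork()/wait() pairs and divide.  This measures fork alone; to see the
shared vs. static library startup cost discussed above, you would have
the child execve() a trivial program built each way.

    /* Minimal fork-latency sketch; not lmbench, numbers will differ. */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct timeval t0, t1;
            int i, n = 1000;
            pid_t pid;

            gettimeofday(&t0, NULL);
            for (i = 0; i < n; i++) {
                    pid = fork();
                    if (pid == 0)
                            _exit(0);               /* child exits at once */
                    else if (pid > 0)
                            waitpid(pid, NULL, 0);
                    else {
                            perror("fork");
                            exit(1);
                    }
            }
            gettimeofday(&t1, NULL);

            printf("%.1f usec per fork\n",
                ((t1.tv_sec - t0.tv_sec) * 1e6 +
                 (t1.tv_usec - t0.tv_usec)) / n);
            return (0);
    }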

As always though, when people need the highest fork performance and
memory is not an issue, we suggest building programs static (you should
probably suggest that for Linux also).

At least neither FreeBSD nor Linux are as slow as the old SVR4. :-)

> 
> Quantity vs Quality is NOT a either or: it's a balancing issue.  I'm
> just telling you that latency is very important too, and if you just say
> "throughput" all the time, you're losing out on 50% of the equation..
> 
You are the one who brought up the Quality argument and equated Latency
with it...  Sorry...  I was just trying to open up your eyes to show you
that there are other factors associated with quality.  Hopefully (and I
do think) you are now seeing that.  (For example, ONE cgi script
fork/exec can make a bigger difference than the TCP latency.)  Even if it
isn't a www application, there are a lot of applications (in fact most)
where the code actually does substantial work with the data sent :-).

I am sure that the DRIVER for the DEC chipset will be looked at, because
it appears to be more a driver issue than a network code issue.  But the
vast majority of FreeBSD users would gain less from investing in that
(because it is in the region of diminishing returns for most applications
of either FreeBSD or Linux) than from work on other areas.

It is a quality issue -- but there are many aspects of the quality
of the product.
 

John