*BSD News Article 73639


Return to BSD News archive

Newsgroups: comp.os.linux.networking,comp.unix.bsd.freebsd.misc
Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!nntp.coast.net!howland.reston.ans.net!newsfeed.internetmci.com!in2.uu.net!cygnus.com!kithrup.com!sef
From: sef@kithrup.com (Sean Eric Fagan)
Subject: Re: TCP latency
Organization: Kithrup Enterprises, Ltd.
Message-ID: <DuI083.FH3@kithrup.com>
References: <4paedl$4bm@engnews2.Eng.Sun.COM> <4s8cuq$ljd@bertrand.ccs.carleton.ca> <31E7C0DD.41C67EA6@dyson.iquest.net> <4s8tcn$jsh@fido.asd.sgi.com>
Date: Sat, 13 Jul 1996 20:14:27 GMT
Lines: 66
Xref: euryale.cc.adfa.oz.au comp.os.linux.networking:45115 comp.unix.bsd.freebsd.misc:23494

In article <4s8tcn$jsh@fido.asd.sgi.com>,
Larry McVoy <lm@slovax.engr.sgi.com> wrote:
>Nobody said that Linux' TCP latency under load is faster,

The original post that caused all this hubbub was a .signature that had a
one-line statement extolling the virtues of Linux' TCP code, along with a
challenge.

That single line is highly misleading.  Having low latency does not
necessarily lead to higher bandwidth, which is the point John has been
trying to make.  (Well, one of them.)
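To make that distinction concrete with a toy model (my numbers, not John's or Larry's): for bulk transfer, TCP throughput is capped by the smaller of the wire rate and window/RTT, so once the window is large enough to cover the round-trip delay, shaving more latency out of the stack buys no extra bandwidth. A minimal sketch, assuming an era-ish 10 Mbit/s link and a 64 KB socket buffer:

```python
def throughput_ceiling(window_bytes, rtt_s, link_bps):
    # TCP bulk transfer can exceed neither the wire rate
    # nor the window divided by the round-trip time.
    return min(link_bps, (window_bytes * 8) / rtt_s)

link = 10_000_000      # 10 Mbit/s link (illustrative)
window = 64 * 1024     # 64 KB socket buffer (illustrative)

# Halving the RTT changes nothing once the link is saturated:
print(throughput_ceiling(window, 0.002, link))  # prints 10000000
print(throughput_ceiling(window, 0.001, link))  # prints 10000000

# But a tiny window (or short transfers) *is* latency-bound:
print(throughput_ceiling(1024, 0.002, link))
```

The last case is where low latency genuinely wins: small windows, short connections, request/response traffic.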

On the other hand, a low latency for TCP is highly desirable.  It will make
interactive login sessions much nicer (at least over the LAN), and some
applications apparently benefit greatly from it (both Larry and Linus have
mentioned locking).

Part of the reason John is so upset, I think, is because Linux users
(including Larry) used to point out how wonderful Linux' context switching
numbers were -- wonderfully low, making things so fast, right?  But it
bombed under load, meaning that anyone who had a heavily-used machine (such
as, oh, one of the five most-used ftp/http servers on the Internet ;)) would
see performance degradation that was out of line with what the simplistic
benchmarks would show.  (I don't know if Larry has fixed this in lmbench; I
would hope so, because this should be something he is interested in.)
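One way a benchmark *can* capture the "under load" effect: have the two processes drag a working set through the cache between switches, so each switch also pays the cache-refill cost that a real loaded machine pays. A rough, Unix-only sketch of that technique (my own illustrative parameters, not lmbench's implementation):

```python
import os
import time

def ctx_switch_cost_us(n=2000, working_set=0):
    """Ping-pong one byte between parent and child over two pipes,
    optionally touching `working_set` bytes each turn to simulate
    cache pressure.  Returns rough microseconds per switch."""
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    buf = bytearray(working_set)
    pid = os.fork()
    if pid == 0:                              # child: echo back
        for _ in range(n):
            os.read(r1, 1)
            for i in range(0, working_set, 4096):
                buf[i] = (buf[i] + 1) & 0xFF  # touch one byte per page
            os.write(w2, b"x")
        os._exit(0)
    t0 = time.perf_counter()
    for _ in range(n):
        os.write(w1, b"x")
        for i in range(0, working_set, 4096):
            buf[i] = (buf[i] + 1) & 0xFF
        os.read(r2, 1)
    elapsed = time.perf_counter() - t0
    os.waitpid(pid, 0)
    return elapsed / (2 * n) * 1e6            # ~2 switches per round trip

print("cold cache:", ctx_switch_cost_us())
print("1MB working set:", ctx_switch_cost_us(working_set=1 << 20))
```

With a zero working set you get the flattering "simplistic benchmark" number; with a megabyte per side you start to see what a busy server actually pays.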

I don't fully agree with John in his postings.  I have, however, enjoyed
some of them, because most of the implementors don't take the time to
explain their reasons for code changes in public.  I would dearly like to
see more of that ;).

>It's useful to know what your protocol stack is costing you.  If you load
>up the system, you don't know if the degradation is due to cache misses,
>TCP lookup problems, locking (either bottom/upper half and/or SMP), etc.

Indeed.  And what the overhead for a context switch is, and the overhead for
entry into a system call, things like that.
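The idea behind that kind of measurement is simple enough to sketch: time a tight loop of back-to-back cheap system calls and divide.  (In Python the interpreter's call overhead swamps the kernel's, so treat the number as illustrative only; a serious tool does this in C.)

```python
import os
import time

def ns_per_call(fn, n=200_000):
    # Time n back-to-back calls and report the average cost of one.
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) * 1e9 / n

# getpid is about the cheapest system call available to time.
print(f"getpid: {ns_per_call(os.getpid):.0f} ns/call")
```

Subtracting the cost of an equally tight loop around a do-nothing Python function would isolate the kernel-entry portion.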

lmbench is a wonderful tool for that.  It's also a great way to test for
regression while doing certain kinds of kernel hacking.  (lmbench's
*spread*, which continues to grow, is a testament to free software.  Well,
that's what *I* like to think. ;))

That doesn't, however, tell the whole story.  Sure, you might get a context
switch number from lmbench of three nanoseconds, but how well does that
scale with having twelve million processes, half of which are ftp processes
over your 4GB/s networking interface, and the other half are doing a build
of the entire OS from scratch?  (Okay, so I'm exaggerating a bit, and
assuming that hardware will continue to improve ;).)

>BSD guys:  "your benchmark sucks!  your numbers are wrong!  you mislead the
>	    world!  you suck!  whine!"

I could point out that you, Larry, have made it obvious to some people
(including myself) that if we didn't want to discuss Linux with you, we
weren't welcome to discuss *anything* with you.  However, I don't know if
you have taken that approach with lmbench; a couple of comments I've heard
have indicated that that might be the case, but I haven't heard or seen
anything from you about it.

I *will* point out that that characterization is unfair, inflammatory, and
(in my experience) largely wrong.  I will not, however, completely disagree
that John has been a bit out of line himself -- the best way to refute
someone's arguments should be with *facts*, not allegations.  (And that goes
for *everyone*.)