*BSD News Article 73097


Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!spool.mu.edu!sgigate.sgi.com!fido.asd.sgi.com!neteng!lm
From: lm@neteng.engr.sgi.com (Larry McVoy)
Newsgroups: comp.os.linux.networking,comp.unix.bsd.netbsd.misc,comp.unix.bsd.freebsd.misc
Subject: Re: TCP latency
Followup-To: comp.os.linux.networking,comp.unix.bsd.netbsd.misc,comp.unix.bsd.freebsd.misc
Date: 8 Jul 1996 07:20:52 GMT
Organization: Silicon Graphics Inc., Mountain View, CA
Lines: 68
Message-ID: <4rqcsk$ff8@fido.asd.sgi.com>
References: <4paedl$4bm@engnews2.Eng.Sun.COM> <4qaui4$o5k@fido.asd.sgi.com> <4qc60n$d8m@verdi.nethelp.no> <31D2F0C6.167EB0E7@inuxs.att.com> <4rfkje$am5@linux.cs.Helsinki.FI> <31DC8EBA.41C67EA6@dyson.iquest.net>
Reply-To: lm@slovax.engr.sgi.com
NNTP-Posting-Host: neteng.engr.sgi.com
X-Newsreader: TIN [version 1.2 PL2]
Xref: euryale.cc.adfa.oz.au comp.os.linux.networking:44332 comp.unix.bsd.netbsd.misc:3961 comp.unix.bsd.freebsd.misc:23034

[lotso latency vs bandwidth discussions deleted]

Since I wrote the benchmarks, I can at least try and explain why they are
the way they are and acknowledge their limitations.  I'll stay away from
saying anything about which OS is "better".

In general, all of the lmbench tests are designed to show you "guaranteed
to be obtainable without tricks" performance numbers.  I don't like the
SPEC approach of allowing the -go-fast-just-for-this-spec-benchmark flag
to cc, so I insist that the benchmarks are compiled with -O and that's it.

I'll cop to the complaint John made that the tests don't show how the system
scales.  There are several ways that I could improve things, such as

	. plot bandwidths & latencies as a function of the number of
	  tests running (scaling) and amount of data transferred (cache vs
	  memory).

	. Design benchmarks that are closer to what happens in real life
	  (I'm thinking mostly web stuff here - I need a benchmark that
	  connects, transfers a variable amount of data, and disconnects;
	  a rough sketch of that shape of test follows this list).
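
For the second item, the shape of the benchmark would be something like
the sketch below.  This is illustrative only, not lmbench source; it
assumes a hypothetical server at host:port that writes some amount of
data to each connection and then closes it, and the iteration count is
arbitrary.

/*
 * Illustrative sketch, not lmbench code: time a connect / transfer /
 * disconnect cycle.  Assumes a made-up server at host:port that, on
 * accept, writes back some bytes and closes the connection.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int
main(int argc, char **argv)
{
	struct sockaddr_in sin;
	struct timeval start, stop;
	char buf[8192];
	int i, s, n, iters = 100;
	long total = 0, usecs;

	if (argc != 3) {
		fprintf(stderr, "usage: %s host port\n", argv[0]);
		exit(1);
	}
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = inet_addr(argv[1]);
	sin.sin_port = htons(atoi(argv[2]));

	gettimeofday(&start, 0);
	for (i = 0; i < iters; i++) {
		if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
			perror("socket");
			exit(1);
		}
		if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
			perror("connect");
			exit(1);
		}
		/* drain whatever the server sends, then disconnect */
		while ((n = read(s, buf, sizeof(buf))) > 0)
			total += n;
		close(s);
	}
	gettimeofday(&stop, 0);
	usecs = (stop.tv_sec - start.tv_sec) * 1000000 +
	    (stop.tv_usec - start.tv_usec);
	printf("%d connections, %ld bytes, %.1f usec/transaction\n",
	    iters, total, (double)usecs / iters);
	return (0);
}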

At any rate, that's lmbench 2.0, and we are talking 1.0.  I'm aware of the
needs and I'll try and get on it this month; it's overdue.  If you want,
I can post my lmbench TODO list and we can beat it up for a while.  I'd
like to see the next version be something we can all agree on.

Moving on: the comment John made about static Linux vs dynamic FreeBSD
libraries doesn't ring a bell with me.  It's certainly not true for any
numbers I've published (like in the Usenix paper - that was all dynamic
on all systems that supported it, including Linux).  

At Usenix, it was suggested that I stacked the deck in favor of Linux
because I presented 120MHz FreeBSD numbers vs 133MHz Linux numbers
(if I remember correctly, it might have been the other way around, but
I doubt it, since it was a BSD type complaining and they rarely complain
that I'm not being fair to Linux).  I sort of take offense at the suggestion;
at the time, those were all the numbers I had.  And I don't "stack the
deck" on numbers.  Ever.  You might take a look at how SGI hardware does
on lmbench and consider that I work for them, and that the numbers are
obviously unflattering.  

I do stack the deck in the following way:  I talk to Linus all the time
about what I happen to think is important to OS performance.  Linux looks
good in the places where he agreed that I had a point; the context switch
numbers are a great example - Linus did some really nice work there.
I make no apologies for talking to Linus, I enjoy it.

Finally getting to latencies et al.  I think that everyone should print
out the two long messages from Linus in this thread.  In over ten years
of working in the OS world, I have never seen a better treatment of
the issues.  John needs to push that chip off his shoulder and listen
to what Linus is saying - it has nothing to do with Linux vs FreeBSD; it
has everything to do with what makes sense for an OS, any OS.

The lat_tcp stuff was written before the Web existed (I think, certainly
before it was widespread).  Motivations for the benchmark include that it
was a critical performance path in the Oracle lock manager.  I'd like to
see Unix get clustering capabilities at some point.  Part of the goal
there is to be able to do remote execs in such a short amount of time
that you can't tell if they are local or remote.  TCP latency was in that
critical path as well.  Finally, I think it is a reasonable way to see
how much overhead your stack is giving you.  Since the payload is essentially
zero, you're looking at almost pure OS/stack/interrupt overhead, and that's
useful information.
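
To make the zero-payload point concrete, here is a rough sketch of the
idea (illustrative only, not the actual lat_tcp source): bounce a single
byte back and forth over an established connection and divide the
elapsed time by the number of round trips.  It assumes an echo server at
host:port that writes each byte straight back; the host, port, and
iteration count are made up for the example.

/*
 * Illustrative sketch, not lat_tcp itself: a one-byte TCP ping-pong.
 * Assumes an echo server at host:port that writes each byte back.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int
main(int argc, char **argv)
{
	struct sockaddr_in sin;
	struct timeval start, stop;
	char c = 'x';
	int i, s, iters = 10000;
	long usecs;

	if (argc != 3) {
		fprintf(stderr, "usage: %s host port\n", argv[0]);
		exit(1);
	}
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = inet_addr(argv[1]);
	sin.sin_port = htons(atoi(argv[2]));

	if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0 ||
	    connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
		perror("connect");
		exit(1);
	}
	gettimeofday(&start, 0);
	for (i = 0; i < iters; i++) {
		/*
		 * One byte out, one byte back: the round trip is almost
		 * all OS, stack, and interrupt time, not data movement.
		 */
		if (write(s, &c, 1) != 1 || read(s, &c, 1) != 1) {
			perror("ping-pong");
			exit(1);
		}
	}
	gettimeofday(&stop, 0);
	usecs = (stop.tv_sec - start.tv_sec) * 1000000 +
	    (stop.tv_usec - start.tv_usec);
	printf("%d round trips, %.1f usec each\n", iters,
	    (double)usecs / iters);
	close(s);
	return (0);
}
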
--
---
Larry McVoy     lm@sgi.com     http://reality.sgi.com/lm     (415) 933-1804