*BSD News Article 72988


Return to BSD News archive

Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.ecn.uoknor.edu!qns3.qns.net!imci4!newsfeed.internetmci.com!swrinde!cs.utexas.edu!uwm.edu!vixen.cso.uiuc.edu!sdd.hp.com!hp-pcd!hpbs2500.boi.hp.com!hpax!cupnews2.cup.hp.com!raj
From: raj@cup.hp.com (Rick Jones)
Newsgroups: comp.os.linux.networking,comp.unix.bsd.netbsd.misc,comp.unix.freebsd.misc
Subject: Re: TCP latency
Followup-To: comp.os.linux.networking,comp.unix.bsd.netbsd.misc,comp.unix.freebsd.misc
Date: 4 Jul 1996 00:53:34 GMT
Organization: Hewlett-Packard's Network Computing Division
Lines: 39
Message-ID: <4rf4me$nve@hpindda.cup.hp.com>
References: <4paedl$4bm@engnews2.Eng.Sun.COM> <4pf7f9$bsf@white.twinsun.com> <31D2F0C6.167EB0E7@inuxs.att.com>
Reply-To: raj@cup.hp.com
NNTP-Posting-Host: hpindio.cup.hp.com
X-Newsreader: TIN [version 1.2 PL2.2]
Xref: euryale.cc.adfa.oz.au comp.os.linux.networking:44208 comp.unix.bsd.netbsd.misc:3947

John S. Dyson (dyson@inuxs.att.com) wrote:
: Steinar Haug wrote:
: > Pentium local           250 usec
: > ...
: > AMD FreeBSD -> Pentium  520 usec
: All this TCP latency discussion is interesting, but how does this
: significantly impact performance when streaming data through the
: connection?  Isn't TCP a streaming protocol?  Was TTCP used in these

TCP is indeed a streaming protocol, the performance of which is bound
by several things. One is how many CPU cycles it takes to send/recv a
packet. Another is the window size divided by the end-to-end latency
(W/RTT).
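The W/RTT bound above can be sketched with a little arithmetic. The window size and RTT figures below are illustrative assumptions, not numbers from this thread:

```python
# Upper bound on TCP throughput imposed by the window size divided by
# the round-trip time (W/RTT): at most one full window can be in
# flight per round trip. Example figures are hypothetical.

def max_throughput_bps(window_bytes, rtt_seconds):
    """W/RTT bound: one window's worth of bits per round trip."""
    return window_bytes * 8 / rtt_seconds

# A 64 KB window over a 10 ms RTT caps throughput at ~52 Mbit/s,
# regardless of how fast the underlying link is.
bound = max_throughput_bps(64 * 1024, 0.010)
print("%.1f Mbit/s" % (bound / 1e6))
```

Note that this is only the window-imposed ceiling; per-packet CPU cost can impose a lower one.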

Something like a minimum-sized TCP ping-pong test can start to give
you a feel for those things without having to saturate the (possibly
production) link with traffic.
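A minimal-sized ping-pong test of the kind described here (netperf's TCP_RR does this properly) can be sketched over loopback with plain sockets. This is a rough illustration, not netperf itself; the iteration counts are arbitrary:

```python
# Single-byte TCP ping-pong over loopback: time N round trips of one
# byte each to estimate request/response latency. Illustrative sketch
# only -- a real tool (e.g. netperf TCP_RR) does this with more care.
import socket
import threading
import time

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            b = conn.recv(1)
            if not b:
                break
            conn.sendall(b)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))       # ephemeral loopback port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle delay

for _ in range(10):               # warm-up round trips
    cli.sendall(b"x")
    cli.recv(1)

n = 1000
start = time.perf_counter()
for _ in range(n):
    cli.sendall(b"x")
    cli.recv(1)
elapsed = time.perf_counter() - start
print("avg round trip: %.1f usec" % (elapsed / n * 1e6))
cli.close()
```

Disabling Nagle (TCP_NODELAY) matters here: otherwise single-byte sends can be delayed waiting to coalesce with later data.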

As often as not, the biggest component of latency in a single-byte
netperf TCP_RR test is the Transport/OS/Driver CPU path length. If you
take two cards in the same system and compare their latencies, you
get some feel (not a complete one, mind you) for their respective
driver path lengths, and for the overall path length. If you

can run an accurate CPU util measure during the test, you can also see
how much path is outside the latency path and is "overhead" instead.

The longer the path, the more CPU will be consumed when running a
data-throughput application.

The greater the latency, the larger the window you need to overcome
it.
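The flip side of the W/RTT bound is the bandwidth-delay product: the window you need to keep a given link full. Again, the figures below are hypothetical:

```python
# Window needed to fill a pipe: bandwidth-delay product. To keep data
# flowing continuously, one round-trip time's worth of bits must be in
# flight. Example numbers are assumptions, not from the thread.

def window_needed_bytes(bandwidth_bps, rtt_seconds):
    """Bytes that must be in flight to keep the link busy for one RTT."""
    return bandwidth_bps * rtt_seconds / 8

# Filling a 100 Mbit/s link at 50 ms RTT takes a 625 KB window --
# far beyond the classic 64 KB TCP window limit.
print(window_needed_bytes(100e6, 0.050))
```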

It isn't all just Mbit/s. Think of an NFS server - lots and lots of
little requests like getattrs and lookups and such. That performance
will be bound more by the latency of the system(s) than by the
bandwidth of the link.

rick jones
http://www.cup.hp.com/netperf/NetperfPage.html