*BSD News Article 73113


Return to BSD News archive

Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!nntp.coast.net!news.kei.com!news.mathworks.com!uunet!in1.uu.net!news.artisoft.com!usenet
From: Terry Lambert <terry@lambert.org>
Newsgroups: comp.os.linux.networking,comp.unix.bsd.netbsd.misc,comp.unix.bsd.freebsd.misc
Subject: Re: TCP latency
Date: Mon, 08 Jul 1996 13:03:26 -0700
Organization: Me
Lines: 91
Message-ID: <31E1698E.2EAA7F26@lambert.org>
References: <4paedl$4bm@engnews2.Eng.Sun.COM> <4qaui4$o5k@fido.asd.sgi.com>
		<4qc60n$d8m@verdi.nethelp.no> <31D2F0C6.167EB0E7@inuxs.att.com>
		<4rfkje$am5@linux.cs.Helsinki.FI> <31DC8EBA.41C67EA6@dyson.iquest.net> <x7ybkxgcx2.fsf@oberon.di.fc.ul.pt>
NNTP-Posting-Host: hecate.artisoft.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 2.01 (X11; I; Linux 1.1.76 i486)
Xref: euryale.cc.adfa.oz.au comp.os.linux.networking:44377 comp.unix.bsd.netbsd.misc:3965 comp.unix.bsd.freebsd.misc:23058

Pedro Roque Marques wrote:
] I don't think HTTP was the issue here.
] Using TCP you have two big different classes of applications: bulk
] data transfers and what is traditionally called interactive traffic
] (small packets; delay sensitive; sometimes you can/should aggregate
] several writes in a datagram that goes out on the stream)
] 
] If you want to measure bulk data performance you use something like
] bw_tcp (big buffer writes down the pipe), if you want to measure *one*
] of the factors that influences TCP performance for this so called
] "interactive" traffic, one choice that seems reasonable is to test
] latency (usually defined as how long it takes to push a byte back
] and forth on a socket n times).

I'd tighten up this definition to:

1)	Unidirectional bulk transfers

2)	Request/response sessions


Traditionally, request/response has grown out of DOS protocols,
like SMB over NetBEUI or TCP/IP, and NCP over IPX, etc.

That is, they are a concession to the memory limitations of
DOS machines and are otherwise poorly suited to the tasks they
purport to perform.

Specifically, if it is my intent to load a copy of WP.EXE over
the net, it behooves me to make one request instead of many to
do this, yet, the DOS loader interface to the INT 2A/2C NetBIOS
from INT 21 makes this nearly impossible.

One could make the same arguments relative to predictive caching
for data access, and distributed cache coherency (where CIFS
substitutes opportunistic locks for intelligence in the clients).


It is important for HTTP connections, which in general resolve per
URL, multiple times for a data-laden page (exception: there is a
mechanism for keeping the HTTP link open.  There is also WebNFS,
which does the same thing in terms of increasing locality).
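The cost of resolving a connection per URL can be made concrete with a
toy model (my own simplification, not measured data): charge one RTT
for TCP connection setup and one for each request/response exchange,
and compare per-object connections against one persistent link:

```python
def page_load_cost(n_objects, rtt, keep_alive):
    """Toy latency model for fetching n_objects over HTTP.

    Without keep-alive, every object pays a connection-setup RTT plus
    a request/response RTT; with a persistent connection, the setup
    RTT is paid once.  (Slow start, pipelining, etc. are ignored.)"""
    if keep_alive:
        return rtt + n_objects * rtt        # one setup, n exchanges
    return n_objects * (rtt + rtt)          # setup + exchange per URL

# e.g. 10 inline images at 100 ms RTT:
#   per-URL connections: 10 * 200 ms = 2000 ms
#   persistent link:     100 ms + 10 * 100 ms = 1100 ms
```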

It is also important for badly designed protocols which are
unlikely to change because of backward compatibility concerns
in terms of ensuring market protections at the expense of
technological advancement (back to SMB and NCP, here).

In general, the interactive sessions native to UNIX systems (Telnet,
rlogin, etc.) which use TCP instead of UDP for transport are not
time sensitive because they are user-bound.  The user typing speed
is the slowest link.

One could argue HTTP, but then one would have to admit that it
has been repaired, and that it is Netscape and Microsoft browser
and server technology that is preventing deployment.


I fought the transport latency issue on UnixWare 2.x when we
were implementing the NWU (NetWare for UNIX) product; it is a
real issue -- for legacy code.  It is not the *burning* issue
which you are attempting to portray it as; it is certainly not
a limiting issue for the future, which must inevitably move
away from these DOS-call-centric implementations of timing-critical
request/response protocols.


] I'll better just point the differences and let you make an opinion
] about it. The big big difference between the 2 is the way they handle
] timers: BSD with fast and slow timeout and linux with per socket
] timers with precise values. You can argue that those 200ms/500ms are
] cheaper when you have a loaded machine... however those functions have
] to look through all the sockets and have an O(n) complexity. On Linux,
] on the other hand you have an O(n) complexity in the add_timer
] function which is called for every send and receive. True, the cost of
] Linux timers is greater but they are always more precise than BSD's
] timers. Since I religiously dislike the BSD way of doing TCP timers
] ;-) let me add that those timer values will probably be a bit more
] broken under high load :-)
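The two timer schemes the quoted text contrasts can be sketched roughly
as follows.  This is a pedagogical caricature, not kernel code: the
class names and interfaces are mine, and only the complexity trade-off
is meant to be faithful:

```python
import bisect

class BsdStyleTimers:
    """BSD-style: arming a timer is O(1), but a coarse periodic tick
    (the 200ms/500ms fast/slow timeouts) walks every socket, O(n)."""
    def __init__(self):
        self.sockets = {}                    # sock_id -> ticks remaining

    def arm(self, sock_id, ticks):
        self.sockets[sock_id] = ticks        # O(1) to arm

    def slow_timeout(self):
        """Called once per tick; expiry resolution is the tick period."""
        expired = []
        for sid in list(self.sockets):       # O(n) scan on every tick
            self.sockets[sid] -= 1
            if self.sockets[sid] <= 0:
                expired.append(sid)
                del self.sockets[sid]
        return expired

class LinuxStyleTimers:
    """Linux-style (as the post describes it): a precise per-socket
    timer inserted into an ordered queue, so add_timer is O(n) but
    expiry happens at the exact requested time."""
    def __init__(self):
        self.queue = []                      # sorted (expiry, sock_id)

    def arm(self, sock_id, expiry):
        bisect.insort(self.queue, (expiry, sock_id))   # O(n) insert

    def expire(self, now):
        expired = []
        while self.queue and self.queue[0][0] <= now:  # precise expiry
            expired.append(self.queue.pop(0)[1])
        return expired
```

The trade is where you pay the O(n): BSD pays it on every tick whether
or not anything expires; the per-socket scheme pays it on every send
and receive that re-arms a timer, in exchange for precision.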


Actually, the BSD code will retry on a transient "no route to host"
failure, and the Linux system will give up the connection (cf. the
bug report by Matt Day).


                                        Terry Lambert
                                        terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.