*BSD News Article 33906

Xref: sserve comp.os.386bsd.misc:3044 comp.os.linux.misc:21166
Newsgroups: comp.os.386bsd.misc,comp.os.linux.misc
Path: sserve!newshost.anu.edu.au!harbinger.cc.monash.edu.au!msuinfo!agate!howland.reston.ans.net!europa.eng.gtefsd.com!library.ucla.edu!whirlwind!newsserver!michel
From: michel@blizzard.seas.ucla.edu (Scott Michel)
Subject: Re: STREAMS  (was I hope this wont ignite ...)
Sender: news@seas.ucla.edu (News Daemon)
Message-ID: <MICHEL.94Aug5120311@blizzard.seas.ucla.edu>
In-Reply-To: vjs@calcite.rhyolite.com's message of Fri, 5 Aug 1994 14:01:21 GMT
Date: Fri, 5 Aug 1994 19:03:11 GMT
Distribution: comp
Reply-To: scottm@intime.com
References: <CtMnq1.C8@rex.uokhsc.edu> <31d5ls$8e9@quagga.ru.ac.za>
	<Cu0w8x.923@seas.ucla.edu> <Cu2Ey9.2oM@calcite.rhyolite.com>
Organization: School of Engineering & Applied Science, UCLA.
Lines: 72

>>>>> "V" == Vernon Schryver <vjs@calcite.rhyolite.com> writes:
In article <Cu2Ey9.2oM@calcite.rhyolite.com> vjs@calcite.rhyolite.com (Vernon Schryver) writes:

V> In article <Cu0w8x.923@seas.ucla.edu>
V> michel@lightning.seas.ucla.edu (Scott Michel) writes:
>> ...  Most x86 System V's use Lachman's TCP/IP package (I know
>> that SCO and Interactive did) which is based on top of
>> Streams. But there are some optimizations that Lachman did to
>> make it faster. And there are numerous stream buffer parameters
>> that can be tuned.

V> System V STREAMS are a nice porting environment.  It's far
V> easier to port STREAMS code from one system to another than BSD
V> protocol switch code, which is justification for DKI/DLPI
V> using STREAMS (but not the reason for using STREAMS; that has
V> to do with politics).  STREAMS were emphatically not "designed
V> to implement the ISO 8 layer model" (was that an intentional
V> slip?  It's great!)  STREAMS were designed for not just network
V> stuff--read the old AT&T "STREAMS Primer."  My first experience
V> with STREAMS was writing what I suspect was the first
V> commercial implementation of UNIX STREAMS tty code.  It shipped
V> years before either AT&T or Sun shipped theirs, and was
V> completed about the time the Lachman TCP/IP was started.

I own one of the original STREAMS manuals from AT&T, and had to do
battle with the DataKit (a fiber interface to a network processor
whose name escapes me). Sure, STREAMS is a generic pathway for
separating the components of a device driver, but then again, the
conversation was about networking, wasn't it? Pedantic displays of
knowledge ("I've been doing programming for 25 years") don't help the
discussion much.
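
To make the "generic pathway" point concrete: a STREAMS module is
little more than a pair of queues with put routines hung off them,
something like the pass-through skeleton below. This is an untested
SVR4-style DDI sketch; the open/close signatures moved around a bit
between releases, and the names here are made up.

    #include <sys/types.h>
    #include <sys/stream.h>
    #include <sys/cred.h>
    #include <sys/ddi.h>

    static struct module_info minfo = {
        0x4242,         /* mi_idnum: arbitrary module id */
        "pasthru",      /* mi_idname */
        0,              /* mi_minpsz */
        INFPSZ,         /* mi_maxpsz */
        2048,           /* mi_hiwat */
        128             /* mi_lowat */
    };

    /* Read-side put routine: hand the message to whatever is
     * stacked above us. */
    static int pt_rput(queue_t *q, mblk_t *mp)
    {
        putnext(q, mp);
        return 0;
    }

    /* Write-side put routine: hand the message downstream. */
    static int pt_wput(queue_t *q, mblk_t *mp)
    {
        putnext(q, mp);
        return 0;
    }

    static int pt_open(queue_t *q, dev_t *devp, int oflag, int sflag,
                       cred_t *crp)
    {
        return 0;
    }

    static int pt_close(queue_t *q, int flag, cred_t *crp)
    {
        return 0;
    }

    static struct qinit pt_rinit =
        { pt_rput, NULL, pt_open, pt_close, NULL, &minfo, NULL };
    static struct qinit pt_winit =
        { pt_wput, NULL, NULL, NULL, NULL, &minfo, NULL };

    struct streamtab pasthruinfo = { &pt_rinit, &pt_winit, NULL, NULL };

Push that on a stream with I_PUSH and it sits between the stream head
and the driver without either side knowing it's there, which is the
modularity argument in a nutshell.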

V> Unfortunately, all of those put and service functions and the
V> generic nature of the stream head and scheduler ensure that
V> STREAMS are never as fast as sockets.  I think you can make
V> "page flipping" and "hardware checksumming" work with STREAMS
V> (two primary techniques for fast networking), but I doubt it is
V> possible to make a "squashed STREAMS stack" without doing fatal
V> violence to the fundamental ideas of STREAMS.  The fastest
V> TCP/IP implementations are based on sockets, not STREAMS, and
V> they run 2 to 20 times faster (yes, twenty, as in Gbit/sec).

Ever notice that everything has to be designed and implemented twice?
I think the same is true of STREAMS: it needs to be reimplemented now
that we know the mistakes that were made and the things we'd like to
do better.
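
The "put and service functions" Vernon mentions look roughly like the
sketch below (same SVR4-style headers and boilerplate as the skeleton
above, and equally untested). The detour through putq()/getq() and
the STREAMS scheduler is the generic machinery a hand-tuned
socket-based stack gets to skip:

    /* Write-side put routine: queue data messages for later, pass
     * everything else straight through. */
    static int xx_wput(queue_t *q, mblk_t *mp)
    {
        if (mp->b_datap->db_type == M_DATA)
            putq(q, mp);        /* defer to xx_wsrv via the scheduler */
        else
            putnext(q, mp);
        return 0;
    }

    /* Write-side service routine: run later by the STREAMS scheduler,
     * honoring the generic flow control. */
    static int xx_wsrv(queue_t *q)
    {
        mblk_t *mp;

        while ((mp = getq(q)) != NULL) {
            if (!canput(q->q_next)) {
                putbq(q, mp);   /* downstream is full; back off and
                                 * wait to be rescheduled */
                break;
            }
            putnext(q, mp);
        }
        return 0;
    }

Every data message potentially pays for that trip through the queues,
which is where the "never as fast as sockets" argument comes from.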

V> It is extremely difficult to implement sockets on top of
V> STREAMS.  The years of bad results were not just because they
V> didn't care, but because it is very hard.  The models differ in
V> critical respects.  It is simply false that "conceptually
V> sockets and TLI implement the same thing" unless you stand so
V> far back that you think COBOL and C are the same.

I dunno about that statement. If you're talking about actual syntax
and semantics, then the COBOL vs. C comparison is fair. But if you're
talking about the process being modelled, there are some striking
similarities. For example, suppose we want to set up a simple
connection-oriented IPC channel:

sockets:
server calls: socket -> bind -> listen -> accept -> read -> write ...
client calls: socket ->         connect ->          write -> read ...

TLI:
server calls: t_open -> t_bind -> t_alloc -> t_listen -> t_accept ->
	      t_rcv -> t_snd ...
client calls: t_open -> t_bind -> t_alloc -> t_connect -> t_snd -> t_rcv ...

Structurally, they model the same thing; only the names of the calls
differ.
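
To make that concrete, here's roughly what the two server sides look
like in C. This is an untested sketch with the error checking
stripped; /dev/tcp, the port number, and the buffer handling are just
illustrative:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <tiuser.h>

    #define PORT 7777

    int serve_sockets(char *buf, int len)
    {
        struct sockaddr_in sin;
        int s, fd, n;

        s = socket(AF_INET, SOCK_STREAM, 0);            /* socket  */
        sin.sin_family = AF_INET;
        sin.sin_port = htons(PORT);
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&sin, sizeof sin);   /* bind    */
        listen(s, 5);                                   /* listen  */
        fd = accept(s, (struct sockaddr *)0, (int *)0); /* accept  */
        n = read(fd, buf, len);                         /* read    */
        write(fd, buf, n);                              /* write   */
        close(fd);
        return 0;
    }

    int serve_tli(char *buf, unsigned len)
    {
        struct t_bind *breq;
        struct t_call *call;
        int fd, newfd, n, flags;

        fd = t_open("/dev/tcp", O_RDWR, NULL);          /* t_open   */
        breq = (struct t_bind *)t_alloc(fd, T_BIND, T_ALL);
        breq->qlen = 1;      /* we intend to listen on this endpoint;
                              * a real server also fills in breq->addr */
        t_bind(fd, breq, NULL);                         /* t_bind   */
        call = (struct t_call *)t_alloc(fd, T_CALL, T_ALL); /* t_alloc */
        t_listen(fd, call);                             /* t_listen */
        newfd = t_open("/dev/tcp", O_RDWR, NULL);  /* endpoint for the
                                                    * accepted connection */
        t_bind(newfd, NULL, NULL);
        t_accept(fd, newfd, call);                      /* t_accept */
        n = t_rcv(newfd, buf, len, &flags);             /* t_rcv    */
        t_snd(newfd, buf, n, 0);                        /* t_snd    */
        t_close(newfd);
        return 0;
    }

The bookkeeping differs (TLI makes you t_alloc a t_call and open a
second endpoint for the accepted connection), but the shape of the
conversation is the same.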

-scottm