*BSD News Article 95396


Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.ecn.uoknor.edu!feed1.news.erols.com!howland.erols.net!ix.netcom.com!news
From: Jerry Hicks <jerry_hicks@bigfoot.com>
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: Socket drivers for SCSI -- FDDI/Ethernet is better
Date: Thu, 15 May 1997 11:58:00 -0400
Organization: GoWorld Communications, Inc.
Lines: 68
Message-ID: <337B3288.7A3F@bigfoot.com>
References: <337329C5.A5678A08@isr.co.jp> <5kvoik$5hj@uriah.heep.sax.de>
		  <3374492B.C490C5DE@isr.co.jp> <5l6iql$8k5@verdi.nethelp.no> <5l7om0$5nk@uriah.heep.sax.de> <337A69B0.14F7@OntheNet.com.au>
Reply-To: jerry_hicks@bigfoot.com
NNTP-Posting-Host: atl-ga9-02.ix.netcom.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-NETCOM-Date: Thu May 15  9:25:17 AM PDT 1997
X-Mailer: Mozilla 3.01Gold (WinNT; I)
Xref: euryale.cc.adfa.oz.au comp.unix.bsd.freebsd.misc:40957

Tony Griffiths wrote:
> 
> J Wunsch wrote:
> >
> > sthaug@nethelp.no (Steinar Haug) wrote:
> >
> > > Frankly, I don't understand why anybody would want to use SCSI for high
> > > speed networking - it was never meant for that! If you really,
> This is indeed true...  The command/response flavour of SCSI does not
> lend itself to inter-processor communications.  A disk drive, even a
> _smart_ one, does not simply get the "idea" of transferring a Meg or more
> of data to a host in the hope that this will be useful.  Basically with
> SCSI, all operations of 'slave' devices are directed and managed from a
> master device.  While it is possible to have 2 master devices (at
> different SCSI ids) on the bus at once, I don't think that both can be
> operational simultaneously (ie. one is in stand-by mode ready to go if
> the other master "fails").
> 
> > > *really* need QOS in your networks today, buy ATM. Otherwise, 100 Mbps Ethernet
> > > is certainly the way to go (simpler and less expensive - current best
> > > price is around $340 for full duplex NIC + switch port).
> >
> > There's also FDDI.  While it doesn't guarantee QOS, I think it will
> > guarantee some maximum propagation delay, since it's a dual
> > token-ring.  But I'm fairly hazy on the details.  I don't think
> > SCSI is the way to go for this, however.
> 
> With FDDI (or at least with the glass variety) it is possible to have
> 2 NICs connected with a simple cross-over (Tx <-> Rx) without a hub of
> any variety, allowing cheap (?) high-speed inter-CPU communications!
> The performance should be better than 100 Mbps Ethernet (although
> possibly not as good as the full-duplex version) as the maximum MTU for
> FDDI is 4500 bytes vs. 1500 bytes for Ethernet.
> 
> Tony
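
Tony's MTU point is easy to put in rough numbers.  A toy sketch (the
4500/1500 figures come from his post; the 1 MB payload and the assumption
of fixed per-frame overhead are mine, not measurements):

```python
# Frames needed to move 1 MB at FDDI vs. Ethernet MTU.  With roughly
# fixed per-frame cost (interrupt, header processing), about 3x fewer
# frames means about 3x less per-frame overhead for the same payload.
MEGABYTE = 1_000_000

def frames(mtu, payload=MEGABYTE):
    """Number of link-layer frames needed to carry `payload` bytes."""
    return -(-payload // mtu)  # ceiling division

fddi = frames(4500)    # 223 frames
ether = frames(1500)   # 667 frames
print(fddi, ether, round(ether / fddi, 2))
```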

I was interested in using SCSI in this manner for a redundant cellular
switch we developed, having heard of an ATM switch project at Georgia
State University which used the same approach.

That project was developed under QNX.

We saw a couple of advantages, mostly with respect to processor
utilization and SCSI-3 hot-plug capability.

Host adapters are definitely not all equal: some seem well suited to
this type of application, others not.

I'm a little dated on my SCSI, but I've always viewed a host adapter as
just another SCSI device on the bus.

Interestingly enough, Microsoft has been using this sort of SCSI
approach in developing their "Wolfpack" NT clusters.  Compaq didn't go
that way, though, opting instead for Fibre Channel in their OEM
configuration.

Does anyone have direct experience using ATM on PCs running FreeBSD?
How do they perform? Processor utilization?

It still seems it would be fun to take a couple of SCSI boards and
achieve a (very) high-speed link between colocated machines.
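
One wrinkle in that scheme is the point Tony raises: since everything is
directed by the master, the receiving side can't just be notified that
data arrived; it has to keep polling a shared mailbox region.  A purely
illustrative toy (a local file stands in for a block on the shared
device; the file name and framing are invented for the example):

```python
# Toy mailbox over a shared "block device" (here, an ordinary file).
# The sender overwrites the block; the receiver must poll it, since a
# SCSI slave has no way to interrupt the other host with new data.
import struct

MAILBOX = "mailbox.bin"          # stand-in for a block on the shared bus
HEADER = struct.Struct("<I")     # 4-byte little-endian length prefix

def send(payload):
    # "Master writes a block": length header followed by the data.
    with open(MAILBOX, "wb") as f:
        f.write(HEADER.pack(len(payload)) + payload)

def poll():
    # Receiver re-reads the block; returns None until data shows up.
    try:
        with open(MAILBOX, "rb") as f:
            raw = f.read()
    except FileNotFoundError:
        return None
    if len(raw) < HEADER.size:
        return None
    (n,) = HEADER.unpack(raw[:HEADER.size])
    return raw[HEADER.size:HEADER.size + n] or None

send(b"hello over the bus")
print(poll())  # b'hello over the bus'
```

A real implementation would of course sit on the host adapter's
target-mode interface rather than a file, but the polling structure is
the part that doesn't go away.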


Aloha!

Jerry Hicks
jerry_hicks@bigfoot.com