*BSD News Article 90952



Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!news.mel.connect.com.au!munnari.OZ.AU!news.Hawaii.Edu!news.caldera.com!enews.sgi.com!news.sgi.com!news1.best.com!nntp1.ba.best.com!not-for-mail
From: dillon@flea.best.net (Matt Dillon)
Newsgroups: comp.unix.sco.misc,comp.unix.bsd.freebsd.misc,comp.unix.bsd.bsdi.misc,comp.sys.sgi.misc
Subject: Re: no such thing as a "general user community"
Date: 13 Mar 1997 11:07:57 -0800
Organization: BEST Internet Communications, Inc.
Lines: 71
Message-ID: <5g9jad$bo3@flea.best.net>
References: <331BB7DD.28EC@net5.net> <5g6rr5$jgo@REX.RE.uokhsc.edu> <5g76gb$6c6@flea.best.net> <3327BBF9.784A@earthlink.net>
NNTP-Posting-Host: flea.best.net
Xref: euryale.cc.adfa.oz.au comp.unix.sco.misc:36521 comp.unix.bsd.freebsd.misc:36969 comp.unix.bsd.bsdi.misc:6309 comp.sys.sgi.misc:29078

:In article <3327BBF9.784A@earthlink.net>,  <fastbit@earthlink.net> wrote:
:>Matt Dillon wrote:
:>
:>> 
:>>      * Modern single-cpu boxes running modern operating systems (p.s. NT is
:>>        not considered a modern operating system) are more then sufficient
:>>        to handle modern day I/O loads.  Our newsreader box, with 256MB
:>>        of ram and 250 reader processes and the disks going like hell,
:>>        have cpu's (pentium pro 200's) that are 80% idle.  80 fucking percent!
:>> 
:>>        What this means is that a single-cpu platform can generally saturate
:>>        whatever I/O you throw at it and still have plenty of suds left over.
:>> 
:>
:>Matt,
:>
:>I don't understand your analysis here.  If the CPU is idle and the disks
:>are going like crazy, it's not that the CPU is too fast -- it's the I/O
:>that's too slow.  What you want to do is to match the I/O to the CPU --
:>make the I/O fast enough to keep the CPU busy.  That translates into
:>more web-pages or email messages processed per unit of time. That's the
:>whole point of the O200 cc-NUMA architecture, hello ...

    Hello!  Short of video, there are very few people out there who
    need that much bandwidth on a single box.  A ppro 200 with a standard
    PCI bus would still have to be loaded down with 30+ 4G modern-day
    hard drives and half a gig of ram to fully utilize the cpu.  
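
    To put rough numbers behind that claim (my ballpark figures, not
    measurements: 32-bit/33MHz PCI peaks at about 132 MByte/sec, and I'm
    assuming roughly 4 MByte/sec sustained per 4G drive under a news-spool
    style load):

        # back-of-envelope sketch; both figures are assumptions, not specs
        PCI_PEAK_MB_S   = 132   # 32-bit, 33 MHz PCI theoretical peak
        DRIVE_SUST_MB_S = 4     # assumed sustained rate for a 1997-era 4G drive

        drives_to_saturate = PCI_PEAK_MB_S / DRIVE_SUST_MB_S
        print("roughly %d drives to saturate the bus" % drives_to_saturate)
        # prints "roughly 33 drives to saturate the bus"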

    I can only think of a few companies that would actually need
    more I/O bandwidth than a standard PCI bus gives you.  Furthermore,
    I have serious doubts that IRIX could even drive a fully decked
    out O200 at maximum efficiency, if the kernel performance we see
    on our Challenge L is any indication.

    Why the hell would I want to throw 30 hard drives on a single platform
    when I can get ten times the reliability by distributing those drives
    to several platforms?
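
    (The factor-of-ten figure is just the blast radius: one box holding all
    30 drives loses everything when it dies, while ten boxes with three
    drives each lose a tenth.  A quick sketch, with the ten-way split being
    my illustrative assumption:)

        # illustrative only: outage size, one big box vs. an assumed 10-way split
        TOTAL_DRIVES = 30
        BOXES        = 10                        # assumed split: 3 drives per box

        single_box_loss = TOTAL_DRIVES           # one failure takes every drive
        split_loss      = TOTAL_DRIVES // BOXES  # one failure takes 3 of the 30

        print("single box outage loses %d of %d drives" % (single_box_loss, TOTAL_DRIVES))
        print("one-of-%d box outage loses %d of %d drives" % (BOXES, split_loss, TOTAL_DRIVES))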

    I/O bandwidth on a general purpose platform is something that is 
    needed for video, graphics, and massive shared-memory multi-cpu 
    constructs (many of which work just fine in a less expensive distributed 
    environment).   At the moment, that's just about it.  The basic problem
    with this picture is that video chipsets are progressing at a phenomenal
    rate.  Especially now that Microsoft is seriously supporting 2D and
    3D graphics, what used to be cpu-intensive graphics work is now starting
    to be done on-chip, with graphics coprocessors that are not subject
    to a platform's backplane I/O bandwidth.  Whereas before such
    coprocessors had to be programmed directly by the game software, now device
    drivers are pushing the functionality through to the coprocessors
    without any major loss in efficiency, allowing the high level
    programs to remain high level.

    Consider what you can throw onto a ppro PC with 5 PCI slots right now:
    Say, two dual-100BaseT ethernet cards and three 40MByte/sec SCSI
    cards, supporting four 100BaseT links and 30 4G or 9G disks.  That
    covers 95% of your potential userbase.  Perhaps even 98%.
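
    (Tallying the rated peaks of those cards, just to show the headroom on
    paper; these are card ratings, not what a shared PCI bus or real disks
    will actually sustain:)

        # rated peaks of the 5-slot config sketched above
        ENET_LINKS = 4
        ENET_MB_S  = 100 / 8.0      # 100BaseT: 100 Mbit/sec = 12.5 MByte/sec
        SCSI_CARDS = 3
        SCSI_MB_S  = 40             # 40 MByte/sec per SCSI channel

        net  = ENET_LINKS * ENET_MB_S
        disk = SCSI_CARDS * SCSI_MB_S
        print("network %.0f MB/s + disk %d MB/s = %.0f MB/s rated I/O"
              % (net, disk, net + disk))
        # prints "network 50 MB/s + disk 120 MB/s = 170 MB/s rated I/O"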

    And, no matter what, a direct cpu<->memory interface will always be
    faster than a bus interface, even a crossbar bus interface.  So the
    multi-card/multi-cpu gigs have major competition from tightly
    coupled single and dual cpu systems which have almost glueless
    memory interfaces.

    SGI's current problem is that the apps and I/O devices just aren't there
    yet to make a high speed bus useful enough over cheaper technologies.

    About the only thing I know of that needs a high speed bus, apart from
    video, is a packet switch or router.

						-Matt