*BSD News Article 79242



Newsgroups: comp.os.linux.misc,comp.unix.bsd.freebsd.misc,comp.infosystems.www.misc
Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.mel.connect.com.au!news.mira.net.au!news.vbc.net!alpha.sky.net!winternet.com!clio.trends.ca!news-feed.inet.tele.dk!news.inet.tele.dk!arclight.uoregon.edu!usenet.eel.ufl.edu!news.mathworks.com!uunet!in3.uu.net!zorch!zorch.sf-bay.org!scott
From: scott@zorch.sf-bay.org (Scott Hazen Mueller)
Subject: Re: Unix too slow for a Web server?
Reply-To: scott@zorch.sf-bay.org
Sender: usenet@zorch.SF-Bay.ORG (Charlie Root)
Organization: At Home; Salida, CA
Message-ID: <DyAr0D.sL@zorch.SF-Bay.ORG>
References: <323ED0BD.222CA97F@pobox.com> <3246D415.41C67EA6@FreeBSD.org> <Pine.BSF.3.91.960923200853.12260C-100000@dyslexic.phoenix.net> <Dy91KF.IMA@interactive.net> <52am8g$fvs@nntp1.u.washington.edu> <52atoq$d9a@halon.vggas.com>
X-Nntp-Posting-Host: localhost.sf-bay.org
Date: Wed, 25 Sep 1996 16:26:37 GMT
Lines: 28
Xref: euryale.cc.adfa.oz.au comp.os.linux.misc:131717 comp.unix.bsd.freebsd.misc:28018 comp.infosystems.www.misc:44111

>>[...] 20 gigs [...]

>You just use rdist or that faster compressing replacement for it announced
>not long ago.

Not if you've got 20 gigs, you don't.  I forget offhand how many hours rdist
ran for less than 2 GB spread among 200,000 files; 11, I think.  I also tried
AFS: a 'vos release' (volume synchronization operation) on that filesystem
took 13 hours.
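
To illustrate why rdist bogs down here (this is a sketch, with invented
host and path names -- the poster's actual setup isn't shown): rdist walks
and stats every file on every run, so wall-clock time tracks the file
count, not just the bytes that changed.  A minimal Distfile looks
something like:

```shell
# Hypothetical rdist setup mirroring a web tree to two servers.
# With 200,000 files, the per-file stat/compare overhead alone
# dominates, regardless of how little data actually changed.
cat > Distfile <<'EOF'
HOSTS = ( web1 web2 )
FILES = ( /export/www )
${FILES} -> ${HOSTS}
	install -R /export/www;
EOF
rdist -f Distfile
```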

You can invest in a fast NFS server (NetApp or Auspex), but you'd better watch
your I/O rates, and you'd probably want a fast back-end network to separate
the NFS activity from the HTTP bits.  I also personally wouldn't want NFS
going on in my DMZ, but your security profile may be less paranoid.
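
As a sketch of that back-end separation (addresses and paths below are
made up): most NFS servers let you restrict an export to the private
subnet, so no NFS traffic is even offered on the public, HTTP-facing
wire.  On a BSD box the exports(5) entry might look like:

```shell
# /etc/exports on the NFS server: export the document tree
# read-only, and only to the private back-end subnet.
/export/www  -ro -network 10.1.1.0 -mask 255.255.255.0
```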

Shared arrays are cool, but not many companies make them.  We use an EMC
Symmetrix that will connect up to 8 systems (Fast Wide Diff SCSI), but that's
*waaay* past the $70,000 originally mentioned, like 3 or 4 times.  It's a hot
box, and releasing a new volume is as simple as mounting it (read-only) on the
front-end machine.
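
Picking up a newly released volume on a front end is then just a mount.
A hypothetical sequence (device and mount-point names invented; the
read-only flag is what makes it safe for several hosts to attach the
same disk, since nobody writes to it):

```shell
# On the front-end machine: drop the old copy, attach the
# freshly released volume read-only.
umount /web/current
mount -o ro /dev/sd2s1e /web/current
```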

Otherwise, I suggest partitioning your site so that the really common
operations (like serving static home pages) are split among building-block
systems that can replicate a small amount (a few 10s of MBs) of data.  Then
use higher-horsepower systems for those bits that require the full data set.
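
One common way to spread the static-page load across those replicated
building-block boxes is round-robin DNS; a hypothetical BIND zone
fragment (addresses invented):

```
; Multiple A records for www -- successive lookups rotate
; across the replicated front ends.
www   IN  A  192.0.2.10
www   IN  A  192.0.2.11
www   IN  A  192.0.2.12
```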

              \scott