*BSD News Article 61765


Return to BSD News archive

#! rnews 3462 bsd
Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!nntp.coast.net!chi-news.cic.net!newsfeed.internetmci.com!in1.uu.net!news.tacom.army.mil!reason.cdrom.com!usenet
From: "Jordan K. Hubbard" <jkh@FreeBSD.org>
Newsgroups: comp.unix.bsd.freebsd.misc,comp.os.linux.development.system
Subject: Re: The better (more suitable)Unix?? FreeBSD or Linux
Date: Sun, 11 Feb 1996 00:23:16 -0800
Organization: Walnut Creek CDROM
Lines: 51
Message-ID: <311DA774.167EB0E7@FreeBSD.org>
References: <4er9hp$5ng@orb.direct.ca> <4f9skh$2og@dyson.iquest.net> <4fg8fe$j9i@pell.pell.chi.il.us> <311C5EB4.2F1CF0FB@freebsd.org> <4fjodg$o8k@venger.snds.com>
NNTP-Posting-Host: time.cdrom.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 2.0 (X11; I; FreeBSD 2.1-STABLE i386)
To: Michael Griffith <grif@hill.ucr.edu>
Xref: euryale.cc.adfa.oz.au comp.unix.bsd.freebsd.misc:14053 comp.os.linux.development.system:17654

Michael Griffith wrote:
> Cool.  It should be the new default.

It is, since the installation isn't one of those times when you're
worried about data loss.  If it croaks somewhere in the middle and hoses
your new filesystem, you just start over.

> I fail to see how it is subjective.

That's because you're thinking in terms of mathematical proofs and not how
well it actually WORKS IN PRACTICE.  Most people are only interested in
the latter and don't really care too much about the latest paper
published by YourFavoriteU on hypothetical FS performance or
reliability.

Your proof is also bogus in that it doesn't take into account other
important factors, such as whether the system is on a UPS or whether
kernel stability is measured in crashes-per-year or crashes-per-week.
Power
loss and system crashes account for more data loss in the field than any
other factors I can think of, and hence top my list of things to concern
myself with.

An interesting test, the results of which would actually be quite
enlightening, would be to build two identical configurations and load
one with FreeBSD 2.1 and the other with, say, RedHat 2.1 or
Slackware 3.0.  Run a checksum scan across *every* file on each system
and store the results someplace where they can't be nuked.  Now start an
application chosen for its disk-intensive nature, possibly with a few
recursive chowns/chmods of large file trees (not an uncommon thing to
find running on a typical UNIX system) sprinkled in.  After a measured
amount of wall clock time, literally yank the plug out of the wall on
the test machine and bring it back up.  Run your scan and see if any
files were lost, checking also to see how the application's own data
files were damaged, if at all (you obviously want to pick an application
that writes easily verifiable data).

Now do the same thing on the other box.  How did it do?
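For concreteness, the baseline scan step could be as simple as this
sketch (`sumscan` is just an illustrative name; POSIX cksum stands in
for whatever checksum tool you prefer -- FreeBSD's base system also
ships md5):

```shell
# Record a checksum line for every regular file under a tree, sorted so
# two runs can be compared directly with diff(1).
#   $1 = tree to scan, $2 = output file (keep it OFF the scanned tree)
sumscan() {
    find "$1" -type f -exec cksum {} + | sort > "$2"
}
```

Run it once before the test and once after the reboot, then diff the two
output files: any line that changed or vanished is a file that got
damaged or lost.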

For extra points, try it with both sync and async mounts under FreeBSD,
just to be fair.  I'm not sure if ext2fs can be mounted synchronously
for extra safety, but if so, you'd definitely want to test that too.
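On the FreeBSD side, at least, switching between the two modes is just a
mount option (the mount point here is illustrative; `-u` updates an
already-mounted filesystem):

```shell
# Fully synchronous writes -- slow, but conservative across a crash:
mount -u -o sync /usr
# Fully asynchronous writes -- fast, but risky if the plug gets yanked:
mount -u -o async /usr
```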

Again, you can yell all you like about predicting the *theoretical*
likelihood of data loss and how you don't need to factor in external
criteria like this to prove your point, but proving that point is
*meaningless* to people who are actually using their machines to do real
work!  Enough with the empty proofs, bring on the empirical data,
please!
-- 
- Jordan Hubbard
  President, FreeBSD Project