*BSD News Article 61810


Return to BSD News archive

Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!news.mel.connect.com.au!news.syd.connect.com.au!gidora.kralizec.net.au!not-for-mail
From: bde@zeta.org.au (Bruce Evans)
Newsgroups: comp.unix.bsd.freebsd.misc,comp.os.linux.development.system
Subject: Re: The better (more suitable)Unix?? FreeBSD or Linux
Date: 21 Feb 1996 16:54:30 +1100
Organization: Kralizec Dialup Unix
Lines: 55
Message-ID: <4gec2m$v0q@godzilla.zeta.org.au>
References: <4er9hp$5ng@orb.direct.ca> <4fjodg$o8k@venger.snds.com> <311DA774.167EB0E7@FreeBSD.org> <4gdhoc$pcf@leonard.anu.edu.au>
NNTP-Posting-Host: godzilla.zeta.org.au
Xref: euryale.cc.adfa.oz.au comp.unix.bsd.freebsd.misc:14086 comp.os.linux.development.system:17702

In article <4gdhoc$pcf@leonard.anu.edu.au>,
Paul Gortmaker <gpg109@leonard.anu.edu.au> wrote:
>"Jordan K. Hubbard" <jkh@FreeBSD.org> writes:

>>An interesting test, the results of which would actually be quite
>>enlightening, would be to build two identical configurations and load
>>one with FreeBSD 2.1 and the other with, say, with RedHat 2.1 or
>
>Or just use one machine, and do the FreeBSD test, and then the Linux
>test, hitting the reset button after the same amount of elapsed time.
>No need to have two machines.

>Even for this test to have any meaning, it would have to be repeated
>at least 10 times (for each Linux and FreeBSD - total of 20 runs!), 
>hitting the reset button at various times, otherwise the statistics 
>would still be meaningless. You would want to cat the raw disk device to 

I once tried a "tar x" vs reset button test for the Minix fs under Minix
and Linux and UFS under FreeBSD and gave up when nothing interesting
happened after about 10 resets.  Thousands of resets would probably be
required for enough stress :-(.  This is due to a number of factors:

- the "tar xf" test is poor because it only adds files.  Something that
  mixes creations with deletions would be better.

- Linux and Minix probably spent less than 1% of the time with
  inconsistent metadata, so a random reset has less than a 1 in 100
  chance of leaving inconsistent metadata (for many files)!  This is
  because the cache actually works to minimize writes under Minix and
  used to under Linux (bdflush under current Linuxes increases the
  number of writes and opens windows of inconsistency by not writing
  everything at once.  However, this is probably not very important
  for the "tar xf" test since creations affect mainly new blocks).

  OTOH, UFS spent about half its time waiting for metadata to be
  written for an average "tar xf", so a random reset has a chance of
  about 1 in 2 of leaving inconsistent metadata, but only for one file
  per process (perhaps more if the reset leaves a block partially
  written?).

  The "tar xf" benchmark is poor for more reasons:
  - for large files, relatively little time is spent waiting for
    metadata to be written, so on both systems it's hard to hit reset
    while a lot of metadata is inconsistent.  You might need to hit it
    10000 times instead of only 1000 to see a problem :-).

  - for small files, UFS spends relatively more time waiting for
    metadata to be written, so it will appear to be less robust.
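The create-mixed-with-delete workload suggested above might be sketched
like this (a hypothetical sketch, not the test actually used; the
directory name, file count and 50-file window are invented):

```shell
#!/bin/sh
# Churn a directory with interleaved creations and deletions, so that
# directory and inode metadata are dirtied by both kinds of operation
# instead of only by creations as in a plain "tar xf".
FSDIR=./churn
mkdir -p "$FSDIR"
i=0
while [ $i -lt 200 ]; do
    # create file i ...
    echo "data $i" > "$FSDIR/f$i"
    # ... and delete file i-50, keeping a 50-file sliding window
    old=$((i - 50))
    [ $old -ge 0 ] && rm -f "$FSDIR/f$old"
    i=$((i + 1))
done
ls "$FSDIR" | wc -l    # only the newest 50 files remain
```

The reset itself still has to come from the front panel at a random
point during the run, followed by fsck on reboot to see what the
filesystem made of the interrupted metadata updates.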

>tape to allow easy restoration for subsequent runs. This would guarantee 
>the same initial file allocation for each run. You are looking at a
>full day of work to perform this test in a semi-meaningful manner.
      ^^^ year :-)
-- 
Bruce Evans  bde@zeta.org.au