*BSD News Article 61973

Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!news.uwa.edu.au!disco.iinet.net.au!news.uoregon.edu!news.dacom.co.kr!usenet.seri.re.kr!news.cais.net!news.jsums.edu!gatech!newsfeed.internetmci.com!inet-nntp-gw-1.us.oracle.com!news.caldera.com!news.cc.utah.edu!park.uvsc.edu!not-for-mail
From: mday@park.uvsc.edu (Matt Day)
Newsgroups: comp.unix.bsd.freebsd.misc,comp.os.linux.development.system
Subject: Re: The better (more suitable) Unix?? FreeBSD or Linux
Date: 21 Feb 1996 19:06:07 -0700
Organization: Utah Valley State College, Orem, Utah
Lines: 93
Message-ID: <4ggj2f$mej@park.uvsc.edu>
References: <4g5ivp$28m@park.uvsc.edu> <4ge2qa$2gm@park.uvsc.edu> <4ggc9k$kbv@park.uvsc.edu>
NNTP-Posting-Host: park.uvsc.edu
Xref: euryale.cc.adfa.oz.au comp.unix.bsd.freebsd.misc:14220 comp.os.linux.development.system:17837

In article <4ggc9k$kbv@park.uvsc.edu> Terry Lambert <terry@lambert.org> writes:
>mday@park.uvsc.edu (Matt Day) wrote:
>]
>] In article <4g5ivp$28m@park.uvsc.edu> Terry Lambert <terry@lambert.org> writes:
>] >4)	Sync makes no difference in user perception of speed
>] >	unless you're the type of user who lives to run bogus
>] >	benchmarks, and then claim they represent a single
>] >	figure-of-merit to use when picking your machine.
>] 
>] I disagree.  ``rm -r'' runs much more slowly on a file system that does
>] synchronous metadata updates, and that's just for starters.  In many
>] cases worth caring about, synchronous metadata updates have a
>] significant negative impact on "user perception of speed".  Do you
>] honestly think Ganger and Patt did all that soft updates research just
>] to optimize for bogus benchmarks?
>
>The vast majority of file usage in an installed end user site
>is manipulation of existing files.
>
>Mass file deletion is infrequent.
>
>I think the soft updates research was done to address file system
>performance problems (yes, metadata updates are specifically
>mentioned in the abstract, and yes, mass deletes are one example
>of this -- but not the only example).
>
>Soft updates increase overall concurrency, more than could be
>achieved with delayed writes.  They address the order dependency
>as a graph problem, though the solution is not nearly as general
>as I'd like.
>
>Soft updates are a significant, general win, and they happen to
>address many issues, of which the lmbench create/delete performance
>is one.
>
>This does not validate the lmbench create/delete test as being a
>correct benchmark.
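
An aside for anyone following along without the papers handy: the
graph idea is that each dirty buffer records which other metadata
blocks must reach disk before it does, and the kernel consults those
edges at write time instead of forcing an order with synchronous
writes.  Here is a toy sketch of the bookkeeping, with hypothetical
types -- nothing like the real FFS code, just the shape of the idea:

#include <stddef.h>
#include <stdio.h>

struct buf;

/*
 * One edge in the dependency graph: the named buffer must be
 * written before the buffer holding this record.
 */
struct dep {
    struct buf *must_write_first;
    struct dep *next;
};

struct buf {
    struct dep *deps;       /* outstanding ordering constraints */
    int dirty;              /* contents not yet on disk */
};

/*
 * A buffer may be written as-is only when every buffer it depends
 * on has already reached disk.
 */
int
can_write(struct buf *bp)
{
    struct dep *d;

    for (d = bp->deps; d != NULL; d = d->next)
        if (d->must_write_first->dirty)
            return 0;
    return 1;
}

int
main(void)
{
    /* Creating a file: the initialized inode block must reach
     * disk before the directory block that points at it. */
    struct buf inode_blk = { NULL, 1 };
    struct dep edge = { &inode_blk, NULL };
    struct buf dir_blk = { &edge, 1 };

    (void)printf("dir block writable? %d\n", can_write(&dir_blk));
    inode_blk.dirty = 0;    /* pretend the inode block got written */
    (void)printf("dir block writable? %d\n", can_write(&dir_blk));
    return 0;
}

The real implementation also has to break dependency cycles by rolling
dependent records back before a write and reapplying them afterward;
that is where most of the complexity lives.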

I believe you have misunderstood me.  You said synchronous metadata
updates make no difference in user perception of speed, except for bogus
benchmarks.  I disagree with that statement.  If synchronous metadata
updates made no difference in user perception of speed except for bogus
benchmarks, then the only people who would care about the problem would
be the people who run bogus benchmarks, and that just isn't the case.
Synchronous metadata updates have been identified as a file system
performance problem by many researchers, including Ganger and Patt. [1]
As their research indicates, the problem is real, not trivial.

As I hope you can now see, my disagreement had nothing to do with
whether or not the lmbench create/delete test is a correct benchmark.

>The "rm" overhead is a result of POSIX semantic requirements; as
>you yourself have pointed out, these requirements can be, in some
>cases, more favorably interpreted than UFS chooses to interpret
>them.

No, you are completely wrong.  The ``rm -r'' performance problem is
caused by the use of synchronous writes to sequence metadata updates,
thus protecting metadata integrity. [1]  It has nothing to do with POSIX.
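
To make the mechanism concrete, here is a minimal sketch (the file
count and names are made up) that creates a batch of files and times
their deletion.  On a file system that sequences metadata updates
with synchronous writes, each unlink() blocks until the updated
directory block is on the platter, so the delete loop runs at disk
latency, not CPU speed:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

#define NFILES 1000     /* made-up file count */

int
main(void)
{
    char name[32];
    struct timeval t0, t1;
    double secs;
    int i, fd;

    /* Create NFILES empty files in the current directory. */
    for (i = 0; i < NFILES; i++) {
        (void)sprintf(name, "f%04d", i);
        if ((fd = open(name, O_CREAT | O_WRONLY, 0644)) < 0) {
            perror(name);
            exit(1);
        }
        (void)close(fd);
    }

    /*
     * Time the deletions.  With synchronous metadata updates,
     * each unlink() waits for at least one disk write (the
     * updated directory block), so the loop is paced by disk
     * latency rather than CPU speed.
     */
    (void)gettimeofday(&t0, NULL);
    for (i = 0; i < NFILES; i++) {
        (void)sprintf(name, "f%04d", i);
        (void)unlink(name);
    }
    (void)gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) +
        (t1.tv_usec - t0.tv_usec) / 1e6;
    (void)printf("%d unlinks in %.2f seconds (%.2f ms each)\n",
        NFILES, secs, secs * 1000.0 / NFILES);
    return 0;
}

Try it on a stock UFS partition, then on a file system that handles
metadata asynchronously (or with soft updates); the difference in
per-unlink cost is hard to miss.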

>I believe your perception test to be atypical of common usage;
>can you claim the FS operations you are performing to be typical,
>or do they fall more into the category of "stress testing"?

You want a more typical operation than ``rm -r''?  How about compiling?
Ganger and Patt measured a 5-7 percent performance improvement in the
compile phase of the Andrew file system benchmark when running on a file
system using soft updates to sequence metadata updates rather than
synchronous writes.  They go on to explain that the compilation
techniques used by the benchmark were aggressive and time-consuming,
while the CPU they ran the benchmark on was slow even by the standards
of the day, which means you could expect the performance improvement to
increase significantly on systems with faster CPUs.  They also saw a
50-70 percent improvement when running the Sdet benchmark from the SPEC
SDM suite.  (As Ganger and Patt explain it, Sdet concurrently executes
one or more scripts of user commands designed to emulate a typical
software-development environment (e.g., editing, compiling, file
creation, and various UNIX utilities).) [2]

Based on this evidence, I claim that using synchronous writes to
sequence metadata updates significantly slows down typical file
system operations.

Matt Day <mday@park.uvsc.edu>

[1] G. Ganger, Y. Patt, "Soft Updates: A Solution to the Metadata Update
    Problem in File Systems", University of Michigan Technical Report
    CSE-TR-254-95, 1995
    http://www.pdos.lcs.mit.edu/~ganger/papers/CSE-TR-254-95/

[2] G. Ganger, Y. Patt, "Metadata Update Performance in File Systems",
    USENIX Symposium on Operating Systems Design and Implementation
    (OSDI), November 1994, pp. 49-60
    http://www.pdos.lcs.mit.edu/~ganger/papers/osdi94.ps.Z