*BSD News Article 9069


Newsgroups: comp.unix.bsd
Path: sserve!manuel.anu.edu.au!munnari.oz.au!sgiblab!spool.mu.edu!uunet!pmafire!mica.inel.gov!ux1!fcom.cc.utah.edu!cs.weber.edu!terry
From: terry@cs.weber.edu (A Wizard of Earth C)
Subject: Re: Dumb Question: Why 512 byte block?
Message-ID: <1992Dec18.030833.7395@fcom.cc.utah.edu>
Sender: news@fcom.cc.utah.edu
Organization: University of Utah Computer Center
References:  <1992Dec18.005050.20594@decuac.dec.com>
Date: Fri, 18 Dec 92 03:08:33 GMT
Lines: 56

In article <1992Dec18.005050.20594@decuac.dec.com>, darryl@vfofu1.dco.dec.com (Darryl Wagoner) writes:
|> Why is everything in 1/2K blocks instead of the BSD standard of 1024 byte blocks?
|> Yes, I know there is a '-k' switch, but it seems to me it should be
|> the other way around.

Think of disk blocks as the curve-fitting algorithm they taught you when you
first learned integral calculus:  the smaller your slices, the closer you
come to approximating the area under the curve.

If I have a set of 6 512-byte files, I will use up 3K of disk for them;
similarly, if I had a blocking factor of 1K, I would use up 6K (since the
smallest fragment usable by a file is now 1K).

If I have 6 1.5K files, this translates to 9K of disk (512B blocks) or 12K of
disk (1K blocks).  Obviously, if I have 6 1.6K files, both blocking factors
take up 12K.
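
Just to make the rounding concrete, here is a tiny C sketch of the
arithmetic above (the file and block sizes are only the examples from this
post):

    #include <stdio.h>

    /* Round a file size up to a whole number of blocks. */
    static unsigned long allocated(unsigned long size, unsigned long bsize)
    {
        return ((size + bsize - 1) / bsize) * bsize;
    }

    int main(void)
    {
        unsigned long sizes[] = { 512, 1536, 1638 };  /* 512B, 1.5K, ~1.6K */
        int i;

        for (i = 0; i < 3; i++)
            printf("%lu bytes -> %lu on disk (512B), %lu on disk (1K)\n",
                   sizes[i], allocated(sizes[i], 512),
                   allocated(sizes[i], 1024));
        return 0;
    }

Multiply each result by 6 files and you get the 3K/6K, 9K/12K and 12K/12K
figures above.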

The offset into the disk is a _block_offset_; what this means is that you
will start looking for data at offset*blocking_size when given an address,
and that reads/writes into the kernel cache are done (usually) in block_size
increments.  A device accessed this way is a blocked device.
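
A sketch of that address arithmetic, with illustrative names rather than
any real driver interface:

    #include <stdio.h>

    #define BSIZE 1024UL    /* the blocking factor of the device */

    int main(void)
    {
        unsigned long byte_addr = 5000;             /* an arbitrary address */
        unsigned long blkno  = byte_addr / BSIZE;   /* which block to fetch */
        unsigned long offset = byte_addr % BSIZE;   /* where inside it      */

        /* The kernel moves whole BSIZE-byte blocks in and out of its
           cache; the byte you asked for sits at 'offset' within block
           'blkno', which starts at blkno * BSIZE on the device. */
        printf("byte %lu -> block %lu, offset %lu\n",
               byte_addr, blkno, offset);
        return 0;
    }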

When you store a lot of little files on the disk, large blocks are wasteful
of disk space (for instance, 1024 one-byte files take 512K of disk at the
smaller blocking factor vs 1M at 1K).

When you store a few large files on the disk, it is better to eat the 0 to
1K-1B penalty for each file (as opposed to the 0 to 511B penalty for the
small blocking factor) in trade for increased speed of access to the drive.
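
The per-file penalty is just the slack in the last block; a quick sketch,
using an arbitrary ~10MB file as the example:

    #include <stdio.h>

    /* Bytes allocated beyond what the file actually uses: 0 .. bsize-1. */
    static unsigned long slack(unsigned long size, unsigned long bsize)
    {
        return (bsize - size % bsize) % bsize;
    }

    int main(void)
    {
        unsigned long big = 10UL * 1024UL * 1024UL + 100UL;  /* ~10MB file */

        printf("512B blocks waste %lu bytes, 1K blocks waste %lu bytes\n",
               slack(big, 512), slack(big, 1024));
        return 0;
    }

Either way the slack is under a single block, which is noise next to a file
that size.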

Since the typical 386BSD installation consists of a *large* number of
(mostly small) files, it makes sense to trade speed of access for storage
space.

If you are planning on attaching a large drive, you will want to up the
blocking factor when you mkfs it (unless you make multiple partitions),
since there is a limiting calculation which will prevent >1G drives from
working well at a 512B block size.  Other than that, most modern drives,
and most modern controllers supporting scatter/gather operations in
hardware, tend to lessen the impact of blocking on the drive (although
significant speedups can be had by mucking with cylinder group sizes).
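
As a rough picture of why block count matters on big drives (this is only
the raw arithmetic, not the actual limiting calculation in the file system):

    #include <stdio.h>

    int main(void)
    {
        unsigned long drive = 1024UL * 1024UL * 1024UL;   /* a 1G drive */
        unsigned long bsizes[] = { 512, 1024, 4096, 8192 };
        int i;

        for (i = 0; i < 4; i++)
            printf("%4lu byte blocks -> %7lu blocks to keep track of\n",
                   bsizes[i], drive / bsizes[i]);
        return 0;
    }

Halving the block size doubles the bookkeeping the file system has to do
for the same drive.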

Some modern file systems allow you to dynamically change the blocking
factor, and even the interleave, as you are writing the disk; for these
file systems there is usually a utility called "tunefs" to adjust things
for all subsequent writes.


					Terry Lambert
					terry@icarus.weber.edu
					terry_lambert@novell.com
---
Any opinions in this posting are my own and not those of my present
or previous employers.
-------------------------------------------------------------------------------
                                        "I have an 8 user poetic license" - me
 Get the 386bsd FAQ from agate.berkeley.edu:/pub/386BSD/386bsd-0.1/unofficial
-------------------------------------------------------------------------------