*BSD News Article 19277



Path: sserve!newshost.anu.edu.au!munnari.oz.au!constellation!convex!convex!cs.utexas.edu!math.ohio-state.edu!magnus.acs.ohio-state.edu!usenet.ins.cwru.edu!agate!doc.ic.ac.uk!uknet!mcsun!news.forth.gr!news.forth.gr!vp
From: vp@nemesis.csi.forth.gr (Vassilis Prevelakis)
Newsgroups: comp.os.386bsd.development
Subject: Re: Compressing file system ?
Date: 8 Aug 1993 19:19:37 +0300
Organization: Institute of Computer Science, FORTH Hellas
Lines: 42
Message-ID: <vp.744825872@news.forth.gr>
References: <23rh55$ct@encap.Hanse.DE> <23tr2j$3tt@europa.eng.gtefsd.com> <23tsn3$7e@Germany.EU.net> <CBCJv4.8AC@sugar.neosoft.com> <240as9$bdu@klaava.Helsinki.FI>
NNTP-Posting-Host: nemesis.csi.forth.gr

In article <CBCJv4.8AC@sugar.neosoft.com> peter@NeoSoft.com (Peter da Silva) writes:
>In article <23tsn3$7e@Germany.EU.net> bs@Germany.EU.net (Bernard Steiner) writes:
>> Note: I think the idea of an optional compressing filesystem is OK, I just see
>> more potential problems than possible benefits.

You could use the NFS mechanisms provided by the kernel to create a new
file system with all kinds of new functionality.  The advantage is that
you don't need to mess with the kernel, and you can try all sorts of
tricks without having to reboot.


The basic requirement for such an fs is that you'd need to be able to
access block X of the uncompressed file when you only have a compressed
image available (the rest of the NFS protocol does not appear to pose
problems).  (BTW, by access I mean both read and write.)  Of course, you
could uncompress the file and compress it again when you finish with it,
but while this may work with small files it may fail with large ones
(e.g. the 50+Mb tar images from the X distribution).  Also, NFS is
supposed to be stateless, so you won't be able to determine when you
have finished with that file.  Finally, there may be files that shouldn't
be compressed, but this can be dealt with via the
>	chmod +c filename	-- allow compression on this file.
suggested by peter@NeoSoft.com (Peter da Silva).

So how do we deal with block-level access?
	The idea that comes to my mind is to break the file into blocks
and compress each block individually.  I would suppose that you'd lose
some compression efficiency (esp. if you have to copy your compression
table at the beginning of each block).  Maybe using larger blocks would
reduce the overhead.
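	To make the idea concrete, here is a rough sketch (in Python with
zlib, purely illustrative -- the block size and helper names are my own
assumptions, not part of any proposal above) of per-block compression:
each fixed-size block of the uncompressed file is compressed on its own,
so block X can be read or rewritten without touching the rest of the file.

```python
import zlib

BLOCK_SIZE = 4096  # assumed uncompressed block size; larger blocks compress better


def compress_blocks(data):
    """Split data into fixed-size blocks and compress each one independently."""
    return [zlib.compress(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]


def read_block(blocks, x):
    """Random access read: decompress only block x of the original file."""
    return zlib.decompress(blocks[x])


def write_block(blocks, x, new_data):
    """Random access write: recompress only the block being changed."""
    blocks[x] = zlib.compress(new_data)


data = b"hello world " * 2000
blocks = compress_blocks(data)

# Read block 1 of the uncompressed file without decompressing anything else.
assert read_block(blocks, 1) == data[BLOCK_SIZE:2 * BLOCK_SIZE]

# Overwrite block 0 in place; other blocks are untouched.
write_block(blocks, 0, b"X" * BLOCK_SIZE)
assert read_block(blocks, 0) == b"X" * BLOCK_SIZE
assert read_block(blocks, 2) == data[2 * BLOCK_SIZE:3 * BLOCK_SIZE]
```

Note that because each compressed block has a different length, a real fs
would also need an index mapping block numbers to offsets in the
compressed image; that index is the main piece of bookkeeping this sketch
leaves out.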

	What does the net think?


**vp

-----------------------------------
Vassilis Prevelakis   |       vp@csi.forth.gr
Thoukididou 10A       |   old style address:
Plaka, Athens 105 58  |       ...!mcvax!ariadne!vp
GREECE                |
Tel. +30 1 32 32 867  |   FAX +30 1 72 24 603