*BSD News Article 27784



Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!elroy.jpl.nasa.gov!usc!sol.ctr.columbia.edu!news.kei.com!bloom-beacon.mit.edu!think.com!news!everettm
From: everettm@mickey.think.com (Mark J. Everett)
Newsgroups: comp.os.386bsd.development
Subject: Re: Notes on the *new* FreeBSD V1.1 VM system
Date: 23 Feb 94 10:12:02
Organization: Thinking Machines Corporation
Lines: 42
Message-ID: <EVERETTM.94Feb23101202@mickey.think.com>
References: <BcxpGux.dysonj@delphi.com> <MYCROFT.94Feb20102534@duality.gnu.ai.mit.edu>
	<CLL9J6.FCF@endicor.com>
NNTP-Posting-Host: mickey.think.com
In-reply-to: tsarna@endicor.com's message of 21 Feb 94 19:16:17 GMT

In article <CLL9J6.FCF@endicor.com> tsarna@endicor.com (Ty Sarna) writes:

   In article <MYCROFT.94Feb20102534@duality.gnu.ai.mit.edu> mycroft@duality.gnu.ai.mit.edu (Charles Hannum) writes:
   > 
   > mentions have been addressed in NetBSD already.  Except for the
   > behavior when paging space runs out (which is a subject of debate;
   > several Mach-based systems simply panic, while others algorithmically
   > choose processes to kill), there are no reported instabilities in the
   > NetBSD VM system.

   IMHO, each of these behaviors is almost equally unpalatable.  If I'm
   running an important process (say a long-running, memory-hungry
   application such as a ray-tracer, database server, or whatever), and the
   system decides to kill it arbitrarily, it might as well have panicked as
   far as I'm concerned.  In either case, my processes get killed with no
   warning and no chance to avoid it.  AIX implemented the killing behavior,
   and there were so many complaints from customers (database servers
   getting killed and corrupting their databases, etc.) that I believe they
   finally changed it.

It may seem flip, but if you have plenty of disk space allocated for
paging, this isn't a problem.  If there isn't any left, there isn't any
left.

The "proper" solution is to reserve backing store at the moment virtual
memory is allocated, so an allocation fails up front rather than when the
pager runs out of space later.  The problem with this solution is that a
lot of the VM system's performance gains come from lazy evaluation, and
eager reservation effectively rules that out.

   Is there any reason why memory allocation can't simply fail when there
   isn't any more to allocate? Sure, lots of unix software is going to
   kill itself off anyway dereferencing NULL pointers, but it at least
   gives those who care to check return values a chance to avoid
   catastrophe. 

   -- 
   Ty Sarna                 "As you know, Joel, children have always looked
   tsarna@endicor.com        up to cowboys as role models. And vice versa."
--

DISCLAIMER:  These opinions are mine, all mine.