*BSD News Article 17523


Return to BSD News archive

Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!elroy.jpl.nasa.gov!swrinde!gatech!howland.reston.ans.net!xlink.net!fauern!news.tu-chemnitz.de!irz401!uriah!not-for-mail
From: j@bonnie.tcd-dresden.de (J Wunsch)
Newsgroups: comp.os.386bsd.questions
Subject: Re: Virtual memory problem
Date: 25 Jun 1993 18:27:44 +0200
Organization: Textil Computer Design GmbH, Dresden, Germany
Lines: 68
Message-ID: <20f920INNc79@bonnie.tcd-dresden.de>
References: <1993Jun24.015842.21623@news.arc.nasa.gov>
NNTP-Posting-Host: bonnie.tcd-dresden.de
Bcc: j

In article <1993Jun24.015842.21623@news.arc.nasa.gov> root@wanderer.nsi.nasa.gov (Michael C. Newell) writes:
>I've been trying to compile some of the Xview utility
>programs (notably textedit), and they include LOTS of
>header files.  They bomb out with an error, "insufficient
>virtual memory"...
>
>In rooting around the malloc related code I came across
>the constant "DFLDSIZ" in "/sys/i386/include/vmparam.h".
>It is supposed to be the "initial data size limit", and
>was set to "6*1024*1024", or 6Mb.  I changed this to
>"16*1024*1024" and rebuilt the kernel.  The little C
>program now was able to allocate 8192 1K blocks.  This
>was enough to get past the "insufficient virtual memory"
>errors [unfortunately others followed... :{(]
>

Yep. First: gcc's memory allocator has some problems. You'll
experience this in a very ugly way if you attempt to compile
X applications that include large bitmap files. (From the C
point of view, they are large static arrays.)  My attempt
to compile the xphoon program required as much as 35 MB of
virtual memory. (David Dawes of the XFree86 team told me that
linking gcc with GNU malloc reduces the problem.)
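
(To illustrate what such an include file looks like: an X11 bitmap
file is plain C source along the lines of the sketch below.  The name
and the numbers here are made up; the real xphoon bitmaps are the same
thing at a scale of hundreds of kilobytes of initializer data, all of
which the compiler has to keep in memory at once.)

	/* hypothetical tiny .xbm file -- real ones are much larger */
	#define moon_width 8
	#define moon_height 4
	static char moon_bits[] = {
	   0x3c, 0x7e, 0x7e, 0x3c
	};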

On the other hand, I've also changed the definitions you
mentioned. But I didn't want to modify the header files, and in
fact there is no need to; overriding the values is as easy as:

options		"DFLDSIZ='(16 * 1024 * 1024)'"
options		"MAXDSIZ='(64 * 1024 * 1024)'"

Add the above lines to your kernel's config file, then run config
and rebuild the kernel.

Someone else proposed removing the data segment size limit entirely
(via `limit datasize unlimited'), but I find this a really bad idea,
for a couple of reasons.  First, setting the default limit higher
than the amount of physical memory available can send your system
into thrashing if you run something that allocates as much virtual
memory as it can get.  If you keep a limit, you'll just get an error
message, and can then decide whether raising the limit is worth the
extensive paging that will result.

Second, even an `unlimited' doesn't mean you're actually unlimited.
There are still hard limits (which even `limit -h' won't lift), e.g.
the MAXDSIZ mentioned above.  This value appears to be used in the
kernel for the allocation of memory maps, so you can never extend
your data segment beyond 32 MB (in the original kernel).  Increasing
it to 64 MB may add some overhead, but it gives you the option of
setting higher limits; of course, you still need the swap space to
back them. :-)
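
(If you want to see both the soft and the hard limit from within a
program, getrlimit(2) reports them.  The following is just an
illustrative sketch, not part of any of the sources mentioned above:)

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/time.h>
	#include <sys/resource.h>

	int
	main(void)
	{
		struct rlimit rl;

		/* RLIMIT_DATA is the data segment limit discussed here */
		if (getrlimit(RLIMIT_DATA, &rl) == -1) {
			perror("getrlimit");
			return 1;
		}
		/* RLIM_INFINITY means `unlimited' */
		printf("data seg soft limit: %ld bytes\n", (long)rl.rlim_cur);
		printf("data seg hard limit: %ld bytes\n", (long)rl.rlim_max);
		return 0;
	}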

Last but not least, 386bsd isn't stable enough to forgive such
stunts. Unlimiting the data segment size on, say, a box with only
8 MB of physical memory, and then running a test program that
allocates as much memory as it can get (and writes back at least one
byte per 4 KB page), will certainly put the machine into an
unusable state. If you're impatient, only the reset button will help...
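
(Should you want to try that on a machine you don't mind rebooting,
the test program I mean is roughly this sketch; the chunk size is an
arbitrary choice:)

	#include <stdio.h>
	#include <stdlib.h>

	#define CHUNK	(64 * 1024)	/* allocation unit */
	#define PAGE	4096		/* i386 page size */

	int
	main(void)
	{
		char *p;
		long total = 0;
		int i;

		/* grab memory until the system refuses to give more */
		while ((p = malloc(CHUNK)) != NULL) {
			/* touch one byte per page so it really gets
			   instantiated (and eventually paged out) */
			for (i = 0; i < CHUNK; i += PAGE)
				p[i] = 1;
			total += CHUNK;
		}
		printf("allocated %ld KB before malloc failed\n",
		    total / 1024);
		return 0;
	}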

Btw., the default physical memory usage limit is already nicely
adapted to the amount of physical memory available for user
processes. Eventually we should do something similar for DFLDSIZ,
avoiding this hard-coded constant entirely.
-- 
in real life: J"org Wunsch |   )  o o  | primary: joerg_wunsch@tcd-dresden.de
above 1.8 MHz:   DL 8 DTL  |    )  |   | private: joerg_wunsch@uriah.sax.de
                           | . * ) ==  |
          ``An elephant is a mouse with an operating system.''