*BSD News Article 12663


Newsgroups: comp.os.386bsd.bugs
Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!amdcad!BitBlocks.com!bvs
From: bvs@BitBlocks.com (Bakul Shah)
Subject: Re: unlimit / Re: VM problems w/unlimited memory?
Message-ID: <C3sy8B.M0r@BitBlocks.com>
Organization: Bit Blocks, Inc.
References: <C3qpIH.Is9@unx.sas.com> <1nprd8$8nj@Germany.EU.net>
Date: Sat, 13 Mar 1993 00:48:10 GMT
Lines: 33

bs@Germany.EU.net (Bernard Steiner) writes:

>The other day, also playing around with unlimit openfiles, at first glance I
>thought there was an infinite loop in one of my own pieces of code.
                   ^^^^^^^^^^^^^^^^
>All it was doing, in fact, was executing a for(i=0; i<getdtablesize(); i++)
>loop closing all file descriptors :-(

Well, close to an infinite loop!

We went over this a few months ago.

If you unlimit openfiles, the limit is set to 2^31 - 1, and that
is what getdtablesize() returns.  Since 386bsd can dynamically
grow the per-process file table, you can set the openfiles limit
to a very large value (but not an infinite one).
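
You can see this for yourself with a few lines of C.  This is
just a sketch, but getrlimit/setrlimit with RLIMIT_NOFILE and
getdtablesize() are all stock BSD calls:

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/time.h>
	#include <sys/resource.h>

	int
	main()
	{
		struct rlimit rl;

		printf("before: getdtablesize() = %d\n", getdtablesize());

		/* roughly what csh's `unlimit openfiles' does:
		   raise the soft limit to the hard limit */
		if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
			perror("getrlimit");
			return 1;
		}
		rl.rlim_cur = rl.rlim_max;
		if (setrlimit(RLIMIT_NOFILE, &rl) < 0) {
			perror("setrlimit");
			return 1;
		}

		printf("after:  getdtablesize() = %d\n", getdtablesize());
		return 0;
	}

If the hard limit is infinity, the second printf shows the
2147483647 mentioned above, and that is what the close() loop
then iterates over.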

A short-term fix is to *NOT* unlimit openfiles; keep it at 64 or
128 or so.  One proper fix would be to add another syscall that
returns the *highest open file descriptor number* and use that
call when closing or duping all open file descriptors.  Almost all
uses of getdtablesize() are for this purpose, so perhaps
getdtablesize() could be redefined to mean this (but then again,
that would probably break some silly program somewhere).
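
Until something like that exists, a small wrapper that clamps the
value keeps old code honest.  The name closeall() and the cap of
256 are just my own choices for this sketch; pick whatever cap
matches the limit you actually run with:

	#include <unistd.h>

	/*
	 * Close every descriptor from lowfd on up.  Clamp the table
	 * size so that `unlimit openfiles' cannot turn this into a
	 * 2^31 - 1 iteration loop.  The cap of 256 is arbitrary.
	 */
	void
	closeall(int lowfd)
	{
		int i, max;

		max = getdtablesize();
		if (max > 256)
			max = 256;
		for (i = lowfd; i < max; i++)
			(void) close(i);
	}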

BTW, it is better to code such loops as
	max = getdtablesize();
	for (i = 0; i < max; i++)
		(void) close(i);
The compiler does not know that getdtablesize() *usually* returns
the same value, so it has to call it on every iteration.  Also
note that a syscall still costs on the order of hundreds of
microseconds.
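
If you want to see the number on your own machine, a crude
measurement along these lines will do.  NCALLS and the use of
getdtablesize() as the sample syscall are just for illustration;
gettimeofday() is coarse, so make many calls and divide:

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/time.h>

	#define NCALLS	10000

	int
	main()
	{
		struct timeval t0, t1;
		long us;
		int i;

		gettimeofday(&t0, (struct timezone *)0);
		for (i = 0; i < NCALLS; i++)
			(void) getdtablesize();
		gettimeofday(&t1, (struct timezone *)0);

		us = (t1.tv_sec - t0.tv_sec) * 1000000 +
		    (t1.tv_usec - t0.tv_usec);
		printf("%g microseconds per call\n", (double)us / NCALLS);
		return 0;
	}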

Bakul Shah <bvs@BitBlocks.com>