*BSD News Article 12654



Path: sserve!newshost.anu.edu.au!munnari.oz.au!constellation!osuunx.ucc.okstate.edu!moe.ksu.ksu.edu!crcnis1.unl.edu!wupost!gumby!yale!yale.edu!ira.uka.de!Germany.EU.net!qwerty!bs
From: bs@Germany.EU.net (Bernard Steiner)
Newsgroups: comp.os.386bsd.bugs
Subject: unlimit / Re: VM problems w/unlimited memory?
Date: 12 Mar 1993 11:15:52 GMT
Organization: EUnet Backbone, Dortmund, Germany
Lines: 34
Distribution: world
Message-ID: <1nprd8$8nj@Germany.EU.net>
References: <C3qpIH.Is9@unx.sas.com>
NNTP-Posting-Host: qwerty.germany.eu.net

In article <C3qpIH.Is9@unx.sas.com>, sastdr@torpid.unx.sas.com (Thomas David Rivers) writes:
|> cputime         unlimited
|> filesize        unlimited
|> coredumpsize    unlimited
|> memorylocked    unlimited
|> maxproc         unlimited
|> openfiles       unlimited
|> % sh 
|> 
|>    < Reboot... no panic or nothin'>

Similarly: Yesterday I read in the srcdist again (finally going for the patch
kit). Well, I was feeling lazy and didn't want to work around the cat-doesn't-
close-its-files-after-reading-them bug, so I told my csh to
"unlimit openfiles". Having re-extracted the srcdist I remembered that, of
course, /usr/include/sys is a soft link to /sys/sys, i.e. I would have to
re-extract part of the bindist as well.

Well, the mtools kept dumping core on me.
Telling the csh to "limit openfiles 64" fixed this again.

No, I have not yet tracked this down, but there seems to be some assumption
in more than one piece of code that you don't get *huge* values from any
inquiry to your rlimit.

The other day, also playing around with unlimit openfiles, at first glance I
thought there was an infinite loop in one of my own pieces of code.
All it was doing, in fact, was executing a for(i=0; i<getdtablesize(); i++)
loop closing all file descriptors :-(

Just my DM 0.03 on this topic, with a hint at what might cause these strange
problems.

-Bernard