*BSD News Article 6257


Newsgroups: comp.unix.bsd
Path: sserve!manuel.anu.edu.au!munnari.oz.au!sgiblab!zaphod.mps.ohio-state.edu!caen!hellgate.utah.edu!fcom.cc.utah.edu!cs.weber.edu!terry
From: terry@cs.weber.edu (A Wizard of Earth C)
Subject: Re: BSD / 386BSD - two kernel/driver questions
Message-ID: <1992Oct8.202520.1901@fcom.cc.utah.edu>
Keywords: BSD 386BSD kernel-malloc / shutdown awareness of a driver
Sender: news@fcom.cc.utah.edu
Organization: Weber State University  (Ogden, UT)
References: <1583@hcshh.hcs.de>
Date: Thu, 8 Oct 92 20:25:20 GMT
Lines: 59

In article <1583@hcshh.hcs.de> hm@hcshh.hcs.de (Hellmuth Michaelis) writes:
>
>1. is it possible to malloc/free some memory inside a driver, or
>	the other way around, is there a kernel_malloc(), kernel_free() ?

Yes.

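For completeness, here is a minimal sketch of how a driver might use the
Net/2-style kernel allocator in <sys/malloc.h>; the M_DEVBUF type and the
M_WAITOK flag are my assumptions about what the 0.1 source tree provides,
so check the header on your system:

/*
 * Sketch only: allocate and free a scratch buffer from inside a
 * driver using the kernel malloc()/free() interface.
 */
#include <sys/param.h>
#include <sys/malloc.h>

static caddr_t savebuf;

void
grab_savebuf()
{
	/* M_WAITOK may sleep; don't call this from interrupt level */
	savebuf = (caddr_t) malloc(4000, M_DEVBUF, M_WAITOK);
}

void
release_savebuf()
{
	if (savebuf != NULL) {
		free(savebuf, M_DEVBUF);
		savebuf = NULL;
	}
}
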
>	(copy video memory -> temp memory/process contents -> video memory)

But this would not be a good application for it; I assume you want it for
the X server?  The correct mechanism is the "kill -1; sleep 10; kill -9"
sequence in init (this is in place at the current highest patch level; the
init patches also fix the "signals are blocked for programs started from
/etc/rc" problem), combined with coding the X server to shut down when it
receives the "kill -1".  In my experience it doesn't do that yet, but the
new 03 Oct 92 code may be different.
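
As a sketch of the server side (not the actual X server code; the cleanup
is just a placeholder comment), catching the SIGHUP that "kill -1" delivers
and exiting before init's SIGKILL arrives ten seconds later would look
something like this:

#include <signal.h>
#include <unistd.h>

static void
hangup(signo)
	int signo;
{
	/* restore the console state, close client connections, etc. */
	_exit(0);
}

int
main()
{
	(void) signal(SIGHUP, hangup);
	for (;;)
		pause();	/* stand-in for the real server loop */
	/* NOTREACHED */
}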


>2. can a driver be made aware of a system shutdown/reboot/halt to
>	perform some special action ?

Only with a modification of the driver interface code.  This should be done
anyway, to do things like shut down the interrupt handling on WD80x3 boards
so that they come up reset after a warm boot.  Currently the probe fails on
some system/Ethernet-driver combinations because of this, and you have to
cold boot to use your Ethernet cards: boards that were not disabled fail to
be identified.
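
Nothing like this exists in the 0.1 tree, but a driver-level shutdown hook
could be as simple as the sketch below (all names are invented for
illustration); the boot()/reboot code would call run_sdhooks() just before
the reset, and the WD80x3 driver would register a routine that disables the
board's interrupts:

/* Hypothetical shutdown-hook list; none of this is in 386bsd 0.1. */
#include <sys/param.h>

struct sdhook {
	void		(*sd_func)();	/* driver routine to call */
	struct sdhook	*sd_next;
};

static struct sdhook *sdhooks;

void
register_sdhook(hp)
	struct sdhook *hp;
{
	hp->sd_next = sdhooks;
	sdhooks = hp;
}

void
run_sdhooks()		/* call from boot() just before the warm boot */
{
	register struct sdhook *hp;

	for (hp = sdhooks; hp != NULL; hp = hp->sd_next)
		(*hp->sd_func)();
}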

>	(switch a virtual screen system to /dev/console to see all those
>	 messages while the system is going down)

Again, I'd have to say this is a misapplication of the capability.  What
really needs to be done here is to divorce the console's screen memory
(4000 bytes including attributes, plus console "terminal state information"
like the current blink/underline/cursor/color state) from the video board.
Right now the memory is saved off, but while it is saved off it is not acted
on by the console terminal state machine (the piece that eats escape
sequences and updates the screen).  This is why, when you exit the X server,
the screen contents are restored and you are dropped to a prompt, but clock
updates aren't shown on the current screen, nor are "too little memory" and
other X server exit messages displayed to the user.  Of course, this is
exactly one step away from supporting virtual consoles; the only differences
are a set of controls to force the switching and making the save area an
array.  Current SCO implementations use a virtual screen for the X server
which is separate from the virtual screen from which the server was started.
This isn't a bad approach.
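
A save area for that kind of divorced console state might look like the
sketch below (field names are my own; the size follows the 80x25 text mode
mentioned above): the state machine always writes into the structure, and
only the active screen gets copied to video memory.

/* Hypothetical per-screen save area; not actual 386bsd structures. */
#include <sys/types.h>

#define NSCREENS	4

struct vscreen {
	u_short	vs_buf[80 * 25];	/* characters + attributes (4000 bytes) */
	int	vs_row, vs_col;		/* cursor position */
	int	vs_attr;		/* blink/underline/color attribute */
	int	vs_escstate;		/* escape-sequence parser state */
};

static struct vscreen vscreens[NSCREENS];
static int curscreen;			/* index of the screen on the video board */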


					Terry Lambert
					terry@icarus.weber.edu
					terry_lambert@novell.com
---
Any opinions in this posting are my own and not those of my present
or previous employers.
-- 
-------------------------------------------------------------------------------
                                        "I have an 8 user poetic license" - me
 Get the 386bsd FAQ from agate.berkeley.edu:/pub/386BSD/386bsd-0.1/unofficial
-------------------------------------------------------------------------------