*BSD News Article 19451


Newsgroups: comp.os.386bsd.development
Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!elroy.jpl.nasa.gov!swrinde!cs.utexas.edu!utah-morgan!hellgate.utah.edu!fcom.cc.utah.edu!cs.weber.edu!terry
From: terry@cs.weber.edu (A Wizard of Earth C)
Subject: Re: V86 mode & the BIOS (was Need advice: Which OS to port to?)
Message-ID: <1993Aug13.042831.15754@fcom.cc.utah.edu>
Sender: news@fcom.cc.utah.edu
Organization: Weber State University, Ogden, UT
References: <hastyCBLnIF.Cyq@netcom.com> <1993Aug11.164429.6015@fcom.cc.utah.edu> <kjb.745145142@manda.cgl.citri.edu.au>
Date: Fri, 13 Aug 93 04:28:31 GMT
Lines: 203

In article <kjb.745145142@manda.cgl.citri.edu.au> kjb@cgl.citri.edu.au (Kendall Bennett) writes:
>>I am concerned about the use of a VM86 to do S3 card initialization.  What
>>happens when I take the same card and put it in a machine where a VM86 is
>>either not possible or requires writing an entire hardware emulation?
[ ... ]
>This is a valid point, but you must realise that ignoring the BIOS is not
>really a viable solution. Both Microsoft and IBM tried to do this with
>OS/2 and Windows NT, but have both gone back to using the BIOS at 
>initialisation time to identify the underlying video hardware. This is 
>really the only plausible solution since then you can write an X11 server
>that will work for generic VESA VBE video cards, without any type of
>acceleration.

Microsoft failing at something is not a sterling proof of impossibility;
however, I get your point (and agree with it to an extent).  But I still
think DOS should not be the yardstick by which good software is judged...
it's like judging the quality of a banana on the basis of its resemblance
to a turnip.

>The biggest problem with video card initialisation, is that unless you
>know _everything_ about the underlying hardware, it is very hard to
>set up the correct video mode timings without a lot of hassles (ask 
>anyone involved with the XFree86 and XS3 about this). By using the
>BIOS, you can let the hardware decide how it should be done.
>
>I also don't see that this is that much of a problem. Given the fact that
>you may be on a different architecture, you can simply have two compilable
>versions - one that uses the dosemu stuff to access the BIOS, and another
>that goes directly to the hardware for new architectures like the MIPS and
>DEC Alpha.


WHAT I DID OVER MY SUMMER VACATION
Copyright (c) 1993, Terry Lambert
All rights reserved


THE ARGUMENT AGAINST BIOS IF YOU ALREADY NEED A HARDWARE LEVEL DRIVER

If you have the driver for the direct-to-hardware implementation, you don't
need the BIOS implementation.  Following the reasoning that portability is
nice, we end up wanting a bunch of drivers, only one of which uses the
BIOS, and only as one of several "generic" drivers.


THE MANDATE

There should be a video driver in the kernel... X is not the only consumer
of graphics display services.  A DOS emulator must also consume these
resources, as must console and virtual console implementations.

Why should there be a video driver in the kernel?

1)	The current "console switch" model requires the application being
	switched away from to save its state and restore a usable video
	mode before the switch is allowed.  THIS WILL NOT WORK FOR THE
	FOLLOWING REASONS:

	a)	The console (as opposed to a virtual console instance)
		*must* be able to preempt the current virtual console
		instance from within the kernel in order to activate the
		kernel debugger.  The kernel debugger is a significant
		means of debugging the video drivers and the DOS emulator
		itself, as well as the X floating point problems in an
		emulated FPU environment.  This need not be the result
		of a panic.  The kernel debugger might run as the result
		of a breakpoint being hit.  It is not possible to ask the
		application to restore a known state if the application
		is not running.

	b)	The DOS emulation cannot use this model for anything more
		complicated than an MDA/CDA (text only) emulation.  If an
		application such as AutoCAD uses incestuous knowledge of
		the real adapter because it is running its own driver,
		there is no way to have the application restore the adapter
		to the correct state, nor is there a way to restore the
		adapter to the current state instantiated by AutoCAD when
		the DOS emulation resumes control of the adapter, as would
		happen if you were to switch from the VC running the DOS
		emulator to another, and then back.  THE DOS APPLICATION
		DOES NOT EXPECT NOTIFICATION (AND DOES NOT RECEIVE IT), NOR
		IS IT ACCUSTOMED TO SHARING THE VIDEO HARDWARE.  THE DOS
		EMULATOR (THE APPLICATION BEING ASKED TO RESTORE THE STATE)
		CANNOT DO SO -- IT IS NOT THE APPLICATION THAT CHANGED THE
		STATE; IT SIMPLY RAN THE APPLICATION.

2)	To support internationalization.  Internationalization requires
	the ability to render character sets other than the default PC
	character set.  Good internationalization requires the ability
	to render large-glyph-set character sets (like Kanji) and to
	render ligatured characters (like Hebrew/Tamil/Devanagari/Arabic,
	etc., etc.).

3)	To prevent having to wire mode switching code into every application
	that requires the ability to render directly to console hardware.
	Despite the wonderful efforts of the X gods, X is still much slower
	than direct video I/O for some applications.  In-kernel mode handling
	code would be available for use by all applications.  This will become
	more important as commercial applications become available.


THE DEFAULT IN KERNEL DRIVER

The default video driver should support detection of MDA/CDA/MGA/CGA/EGA/VGA
adapters and default mode handling for the device it detects.  You might add
HGA if you get ambitious.
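
As a rough sketch of the kind of probe involved (inb()/outb() here are
assumed I/O primitives, and the EGA/VGA discrimination is omitted entirely;
this illustrates the idea, it is not a complete detection algorithm), the
classic MDA-versus-CGA test is simply to look for a 6845-compatible CRTC at
the mono and color index ports:

/*
 * Sketch only: detect a 6845-compatible CRTC at a given index port.
 * Assumes inb()/outb() I/O primitives with outb(port, value) ordering;
 * distinguishing CGA from EGA/VGA (and MDA from Hercules) is omitted.
 */
#define CRTC_MONO	0x3B4	/* MDA/Hercules CRTC index port */
#define CRTC_COLOR	0x3D4	/* CGA/EGA/VGA color CRTC index port */

static int
crtc_present(int indexport)
{
	unsigned char old, test;

	outb(indexport, 0x0F);		/* select cursor location low */
	old = inb(indexport + 1);
	outb(indexport + 1, 0x55);	/* write a test pattern */
	test = inb(indexport + 1);
	outb(indexport + 1, old);	/* put the old value back */
	return (test == 0x55);
}

A real probe would try CRTC_COLOR first, then CRTC_MONO, and then go on to
narrow down which member of the family it found before settling on a
default mode table.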


APPLICATIONS AS DISPLAY SERVICES CONSUMERS (USING THE X MODEL FOR MORE THAN X)

The X server and any programs that want to consume display services on a
particular virtual console are permitted to make calls (implemented as device
ioctls on fd 1) to the driver; there is a standard set of calls supported,
one of which returns a list of the extended modes, with canonical names,
supported by the driver.  Applications use the extended mode values to select
modes other than those supported by the default driver.
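
To make the shape of such an interface concrete, here is a minimal sketch;
the ioctl names, numbers and structure layouts are invented for the sake of
illustration and are not an existing (or even proposed) binary interface:

/* Illustrative only -- names and numbers are made up. */
#include <sys/ioctl.h>

#define VID_MODE_NAMELEN	32

struct vid_mode {
	char	vm_name[VID_MODE_NAMELEN];	/* canonical name, e.g. "800x600x8" */
	int	vm_index;			/* opaque handle used to select it */
	int	vm_width, vm_height;
	int	vm_depth;			/* bits per pixel; 0 => text mode */
};

struct vid_modelist {
	int			vml_count;	/* in: array size; out: modes filled in */
	struct vid_mode	       *vml_modes;
};

#define VIDIOC_GETMODES	_IOWR('v', 1, struct vid_modelist)
#define VIDIOC_SETMODE	_IOW('v', 2, int)	/* takes a vm_index */

/*
 * A consumer (X server, DOS emulator, whatever) enumerates the modes on
 * fd 1 and picks one by its opaque index; it never needs to know what
 * card is actually behind the driver:
 *
 *	struct vid_mode modes[64];
 *	struct vid_modelist ml = { 64, modes };
 *
 *	if (ioctl(1, VIDIOC_GETMODES, &ml) == 0)
 *		(void) ioctl(1, VIDIOC_SETMODE, &modes[i].vm_index);
 */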

With such an interface, I can write an X server, GSS/CGI, MGR, PostScript,
Display PostScript, HPGL, a DOS emulator, or any other consumer of the
adapter driver without a single line of device dependent code, and without
duplicating the effort required to produce device specific code for each
application.  This is nearly a postage-stamp description of Intel's iBCSII
standard, but is more flexible in that it does not limit the modes which
can be selected to a subset supported by all cards -- this also renders it
more complex.


LEVERAGING LOADABLE MODULES FOR A SMALLER KERNEL AND FUTURE [UNKNOWN] CARDS

Since the operating system supports loadable kernel modules, I can place a
minimal text-only driver in the kernel as the default and for installation
(personally, I would allow the user to pick a localization language during
the install phase, so my default driver would have to support sufficient
graphics for minimal internationalization).  During the install process,
the user could designate one of the device specific supported drivers to
be loaded as part of the boot process.

Because these modules are loaded, we need not be deterministic in any way
except as concerns an application's ability to determine the modes supported
and how they are to be selected.  THIS MEANS WE CAN ADD SUPPORT FOR NEW
VIDEO CARDS AS THEY COME ON THE MARKET WITHOUT RECOMPILING ANY OF OUR
APPLICATIONS.
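
As a sketch of what the contract between the generic layer and a loaded
adapter module might look like (again, the structure and the registration
call are invented for illustration and build on the vid_modelist sketch
above; this is not an existing loadable module interface):

/*
 * Illustrative only: a loadable adapter driver fills in an ops vector
 * and registers it at module load time.  The generic layer and the
 * ioctl interface never change, so new cards need no application
 * recompiles.
 */
struct vid_driver {
	const char *vd_name;				/* e.g. "s3_928" */
	int	(*vd_probe)(void);			/* is the card there? */
	int	(*vd_getmodes)(struct vid_modelist *);	/* fill in extended modes */
	int	(*vd_setmode)(int index);
	int	(*vd_savestate)(void *buf, int len);	/* for VC switches */
	int	(*vd_restorestate)(const void *buf, int len);
};

int	vid_register_driver(struct vid_driver *vd);	/* called at load time */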


USING THE BIOS AS AN OPTION

*One* potential alternate implementation which might find its way into a
loadable adapter driver would be a VESA VBE driver that uses BIOS calls to
implement the modes that it makes available to its consumers.  I WOULD
CERTAINLY NOT LIMIT THE ABILITY OF THE OPERATING SYSTEM TO SUPPORT MORE
COMPLEX DRIVERS BY RESTRICTING MODES TO THOSE SUPPORTED IN BIOS.  One
example of something you would discard in the process of using "BIOS modes"
is the capability to support odd-size X displays which offer more real estate
to the user by utilizing the overscan regions through creative monitor timing.
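
The guts of such a VBE driver are small: VBE function 4F01h returns a mode
information block and 4F02h sets a mode, so the loadable driver is mostly a
wrapper that runs INT 10h in a VM86 (or 16-bit) context.  A sketch, assuming
a hypothetical vm86_int10() helper that marshals the register set and a
transfer buffer in low memory:

/*
 * Sketch only: vm86_int10() is a hypothetical helper that executes an
 * INT 10h in VM86 mode with the given registers, pointing ES:DI at a
 * buffer below 1MB and copying 'len' bytes back into 'buf'.
 */
struct int10_regs {
	unsigned short	ax, bx, cx, dx, es, di;
};

extern int	vm86_int10(struct int10_regs *r, void *buf, int len);

static int
vbe_get_mode_info(unsigned short mode, unsigned char info[256])
{
	struct int10_regs r = { 0 };

	r.ax = 0x4F01;			/* VBE: return mode information */
	r.cx = mode;
	if (vm86_int10(&r, info, 256) != 0 || r.ax != 0x004F)
		return (-1);		/* no VBE BIOS, or mode unsupported */
	return (0);
}

static int
vbe_set_mode(unsigned short mode)
{
	struct int10_regs r = { 0 };

	r.ax = 0x4F02;			/* VBE: set mode */
	r.bx = mode;
	return ((vm86_int10(&r, (void *)0, 0) == 0 && r.ax == 0x004F) ? 0 : -1);
}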


DEALING WITH DOS EMULATION:

Part of the DOS emulation would be a layer to emulate a generic CGA/EGA/VGA
card on top of the driver.  Since this interface has well-defined upper
and lower bounds, it could easily be replaced with a more complex bleed-
through of the actual card's capabilities, up to and including "emulating"
the actual card in the machine by allowing all of the commands supported by
the card itself.  Generally, this emulation would take the form of write-only
register caching so that full state information could be tagged to the virtual
console session by the DOS emulator; more complex cards could require the use
of a finite state automaton if multiple mode transitions were required to
reach a particular mode.
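
The caching itself is straightforward for the simple cases.  Here is a
sketch of shadowing the VGA's indexed write-only registers (the sequencer
at 3C4h/3C5h and the graphics controller at 3CEh/3CFh); the attribute
controller's flip-flop, the CRTC, the DAC and any card-specific extended
registers would need the same treatment:

/*
 * Sketch only: shadow the indexed VGA registers the DOS application
 * writes, so the emulator can replay them when its virtual console
 * regains the hardware.  Trapped port writes land in vga_shadow_out();
 * outb() is an assumed I/O primitive with outb(port, value) ordering.
 */
struct vga_shadow {
	unsigned char	seq_index;	/* last index written to 3C4h */
	unsigned char	seq[32];	/* sequencer registers */
	unsigned char	gc_index;	/* last index written to 3CEh */
	unsigned char	gc[32];		/* graphics controller registers */
};

static void
vga_shadow_out(struct vga_shadow *s, unsigned short port, unsigned char val)
{
	switch (port) {
	case 0x3C4: s->seq_index = val & 0x1F;	break;
	case 0x3C5: s->seq[s->seq_index] = val;	break;
	case 0x3CE: s->gc_index = val & 0x1F;	break;
	case 0x3CF: s->gc[s->gc_index] = val;	break;
	/* CRTC, attribute controller, DAC, etc. handled similarly */
	}
}

static void
vga_shadow_replay(const struct vga_shadow *s)
{
	int i;

	for (i = 0; i < 32; i++) {
		outb(0x3C4, i); outb(0x3C5, s->seq[i]);
		outb(0x3CE, i); outb(0x3CF, s->gc[i]);
	}
}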

There is another significant advantage, which is the ability to provide a
user space daemon that translates the CGA or VGA calls into X calls to a
particular server connection.  With such a "shim", I can easily run many
DOS applications on remote X servers.  Generally, I will not want to do
this for motion or pixel-intensive graphics; still, this represents a
significant capability -- to run Lotus or DBase III on an X terminal or
beside an X application on the local console and cut and paste between
it and other X applications.
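
The text-mode case of such a shim is nearly trivial: the emulated B8000h
buffer is just 80x25 character/attribute pairs, and each cell can be drawn
with ordinary Xlib calls.  A naive sketch (one draw per cell, no colour
mapping from the attribute byte, font metrics supplied by the caller):

/*
 * Sketch only: render an emulated CGA text buffer (character byte,
 * attribute byte per cell) into an X window.  'dpy', 'win' and 'gc'
 * are an ordinary Xlib display/window/GC; picking foreground and
 * background pixels from the attribute nibbles is omitted.
 */
#include <X11/Xlib.h>

#define TEXT_COLS	80
#define TEXT_ROWS	25

void
draw_text_buffer(Display *dpy, Window win, GC gc,
    const unsigned char buf[TEXT_ROWS * TEXT_COLS * 2],
    int cell_w, int cell_h, int ascent)
{
	int row, col;

	for (row = 0; row < TEXT_ROWS; row++) {
		for (col = 0; col < TEXT_COLS; col++) {
			char ch = buf[(row * TEXT_COLS + col) * 2];
			/* the attribute byte at offset +1 selects colours */
			XDrawImageString(dpy, win, gc,
			    col * cell_w, row * cell_h + ascent, &ch, 1);
		}
	}
	XFlush(dpy);
}

A real shim would of course batch runs of cells sharing an attribute into a
single draw and only repaint cells that changed.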


OTHER ISSUES

This text has not dealt with the issues of generalizing the keyboard driver,
supporting keyboard maps under internationalization, separating the concept
of an internationalized keyboard from that of a user-initiated keyboard
remapping, or the virtualization of the keyboard or the mouse drivers.
These are all issues that bear consideration at the same time as one deals
with the display services issues.  These issues have all been considered at
great length by the members of the console mailing list (membership was
solicited some time ago on these newsgroups), and much discussion has
resulted, including, but not limited to, the scenario I have outlined here.



All in all, there is no reason why an architecture cannot be arrived at
that provides the best of both worlds without precluding the capabilities
of either.


					Terry Lambert
					terry@icarus.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.