*BSD News Article 19608



Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!agate!howland.reston.ans.net!gatech!prism!gt8134b
From: gt8134b@prism.gatech.EDU (Howlin' Bob)
Newsgroups: comp.os.386bsd.development
Subject: Re: V86 mode & the BIOS (was Need advice: Which OS to port to?)
Message-ID: <109159@hydra.gatech.EDU>
Date: 17 Aug 93 19:50:19 GMT
References: <kjb.745145142@manda.cgl.citri.edu.au> <1993Aug13.042831.15754@fcom.cc.utah.edu> <108738@hydra.gatech.EDU> <1993Aug15.062620.6503@fcom.cc.utah.edu>
Organization: Georgia Institute of Technology
Lines: 327

In <1993Aug15.062620.6503@fcom.cc.utah.edu> terry@cs.weber.edu (A Wizard of Earth C) writes:

>In article <108738@hydra.gatech.EDU> gt8134b@prism.gatech.EDU (Howlin' Bob) writes:
>>In <1993Aug13.042831.15754@fcom.cc.utah.edu> terry@cs.weber.edu (A Wizard of Earth C) writes:
>>
>>>WHAT I DID OVER MY SUMMER VACATION
>>>Copyright (c) 1993, Terry Lambert
>>>All rights reserved
>This was a stab at humor.  Apparently it left it alive and well, and I will
>have to stab again.  8-).

Terry, I was trying to be your straight man.  Every good comedian needs
a straight man, right?

>>>There should be a video driver in the kernel... X is not the only consumer
>>>of graphics display services.  A DOS emulator must also consume these
>>>resources, as must console and virtual console implementations.
>>
>>I agree, there *should*.  I also happen to feel that most applications
>>should keep their grubby hands off the hardware.  X and dosemu are
>>exceptions.  

>I don't think they are exceptions unless you pound a device-as-a-resource
>wedge into the kernel.  The fact that you supported UART emulation at all
>is an indicator that you don't buy dosemu as an exception.  X is an exception
>because of tradition, not need.

No, not really.  The fact that I support UART emulation at all shows how much
harder it is to reliably steal the UARTs from the kernel :-)  Seriously,
not only would I have to tell the kernel that the UARTs are being used by
an application, but I would have to find some way to revector serial
interrupts into dosemu.  This is doable, and probably will be done at some
point for exotic devices, but not something I really relish doing.  

I think we're arguing at cross purposes here: you're aiming for "done right,"
but I'm aiming for "done before I die."  Maybe that's a little pessimistic;
I have 61 years until I'm ripely 80, but I do think that the device support
you're talking about has no place in a PC Unix.  With the wide proliferation
of slightly incompatible hardware, non-disclosed programming secrets, etc.,
the PC is a bad platform.  

The "device as a resource" approach of the Amiga Exec was beautiful, letting
you access a device at one of three levels: totally abstracted by the OS,
low-level but abstracted by the device drivers, or "here, muck with it all
you want."  The first level is UNIX-like.  The second level is somewhat like
providing OS-arbitrated access to the device, letting you set timers/interrupts,
etc. with device driver calls; the closest UNIX can come is its plethora of
ioctl's, which still aren't up to snuff.  And the third level was basically
telling the OS to keep its hands off the device while you run the show.
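The three Amiga-style levels can be caricatured as a tiny C interface; a sketch only, with every name here invented for illustration rather than taken from any real Exec or UNIX header:

```c
/* Hypothetical sketch of the three access levels described above, cast
 * as something a UNIX graphics driver might export.  All identifiers
 * here are invented for illustration. */
#include <assert.h>
#include <sys/types.h>

enum gfx_access {
    GFX_ABSTRACT,   /* level 1: totally abstracted by the OS (UNIX-like) */
    GFX_ARBITRATED, /* level 2: driver-mediated low-level access --
                       timers, interrupts via driver calls */
    GFX_EXCLUSIVE,  /* level 3: "hands off" -- raw registers and memory */
};

/* What a client would hand the driver when claiming the display. */
struct gfx_claim {
    enum gfx_access level;
    pid_t owner;    /* who to notify on a console switch */
};
```

The appeal of the Exec scheme is that a client picks the lowest level it actually needs; X or dosemu would claim `GFX_EXCLUSIVE`, ordinary programs never see anything below level 1.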

What I want implemented is the third, lowest level; it's mostly there for
console control in Linux.  You want something between the first and second 
levels.  It's a nice idea, but it's a lot of effort.  Frankly, all I'm
really concerned about is getting dosemu and X to coexist.  They do under
Linux, albeit somewhat precariously.  

>>A windowing system is the proper place for UNIX graphics.

>I'll quote this again a little later.  8-).

>save mechanisms, such as write-only register shadowing, are insufficient
>if the mode is set by "operation A followed by operation B" and a different
>mode is set by "operation B followed by operation A".  I can think of at
>least one ATI card where this is true.  

Ok, you're right.  Magic registers don't fit well into register shadowing
schemes.  I just have to shrug; Mach's DOS emulator only runs on a VGA
console, period.  There's only so much I can expect to do.  Here's my
consolation prize: CGA/EGA/MDA are all pretty standard.  Run them
only as vanilla CGA/EGA/MDA and dosemu can shadow your registers
with some intelligence, allowing you to switch consoles.  It can't find
out the mode from which you entered dosemu, but it can assume 80x25
text.  So, vc-switching and exiting are handled tolerably well.
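For the vanilla cases, the shadowing idea reduces to something like the following; a minimal sketch assuming a trapped-OUT hook, with the standard VGA sequencer index/data ports but an invented shadow table and a `fake_hw` array standing in for the card:

```c
/* Minimal sketch of write-only register shadowing for a vanilla VGA,
 * the scheme a dosemu-style vc-switch relies on.  The port numbers are
 * the standard sequencer pair; trap_out(), the shadow table, and the
 * fake_hw stand-in are illustrative, not dosemu's actual code. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SEQ_INDEX 0x3C4   /* VGA sequencer index port */
#define SEQ_DATA  0x3C5   /* VGA sequencer data port  */
#define NSEQ      5       /* standard sequencer registers SR0..SR4 */

static uint8_t seq_shadow[NSEQ]; /* last value written to each register */
static uint8_t seq_index;        /* currently selected index */
static uint8_t fake_hw[NSEQ];    /* stands in for the real card */

/* Called whenever the emulator traps an OUT to a VGA port. */
static void trap_out(uint16_t port, uint8_t val)
{
    if (port == SEQ_INDEX) {
        seq_index = val;
    } else if (port == SEQ_DATA && seq_index < NSEQ) {
        seq_shadow[seq_index] = val;  /* remember the write... */
        fake_hw[seq_index] = val;     /* ...and pass it through */
    }
}

/* On switching back to the DOS console, replay the shadowed state. */
static void restore_seq(void)
{
    for (uint8_t i = 0; i < NSEQ; i++) {
        trap_out(SEQ_INDEX, i);
        trap_out(SEQ_DATA, seq_shadow[i]);
    }
}
```

This is exactly the scheme that falls down on the order-dependent "magic" registers Terry mentions: the shadow table records final values, not the sequence of operations that produced them.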

VGA cards are non-standard: dosemu cannot ever hope to know all the
different card layouts.  So, there are two options:  1) use the card as
a standard VGA, and dosemu can save/restore the state reliably.  Never
tell any application, including AutoCAD, that you have a FooVGA 1200
card.  Always act vanilla.  2) If the BIOS designers are smart, the
save/restore state BIOS call will work as it should, and you can count
on the BIOS to save/restore the state for you.  I still haven't decided
how to handle memory save/restore; if I must, I could do a million
int10h calls to Read Pixel :-(

This is not a completely robust and application-friendly model. You're
right about almost every point: the OS should take care of the hardware.
However, for my goals, implementing that is just not practical.   If
NetBSD 0.10/FreeBSD/386BSD 0.2 implement some such mechanism, believe me,
I will quickly change dosemu to support it.  Furthermore, I will steal
it from your kernel and port it to Linux, rip out all the "non-ideal"
code from dosemu, and require the new video drivers.  

It just seems to me that distributed projects like Linux and *BSD must
hold resource management high in their ideals; we can't expect a
handful of volunteer hobbyists and students to work their fingers to
nubs for intangible benefits.

Not that all your points lead to intangible benefits, but my statement
still holds true :-)

>Any VGA card can emulate a "vanilla" VGA card, by definition, with nearly
>zero overhead in anything but the mode switching operations, which have
>to be trapped (as do all BIOS calls affecting hardware state) anyway.

True.  

>>The best solution would be the int10h,1c function described above.
>>If this really does a complete video state save and restore, then
>>many of the problems are solved.  Of course, you still have to
>>know how to save all of the video memory...

>This is perhaps the most speed-effective solution (I would -- and have done
>so -- argue the point); however, the standard does not require that states
>outside of the standard be saved.  When we talk about applications like

Hmm.  Do you mean the VGA standard, or VESA?  The call I refer to is
part of the standard VGA BIOS.  I believe that it might support
modes outside of the standard because you must first request the
size needed to hold the state.  Now, if I could depend on the
int10h,ah=4fh VESA functions, then things get even better.  I
know that most modern cards have VESA in the BIOS, and many
other cards (like the ET4000) have VESA TSR's (like tlivesa).
The VESA standard, pitiful BIOS-based thing that it is, gives a 
few extra functions like "save/restore SuperVGA state" "Get SVGA
Mode Information" (i.e. supported/not supported, mode stats), 
and "Set SVGA Memory Control" (i.e. set the visible memory window).
I could do a lot with these.
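For reference, the two state-save services just discussed take the register setups below, per Ralf Brown's interrupt list; the `vm86_regs` struct and the idea of marshalling one for a vm86 BIOS call are hypothetical stand-ins for whatever dosemu actually does:

```c
/* Sketch of the register setup for the two save/restore services
 * discussed above.  The AX/CX/DX values follow Ralf Brown's interrupt
 * list; vm86_regs is an invented stand-in for a vm86 call frame. */
#include <assert.h>
#include <stdint.h>

struct vm86_regs { uint16_t ax, bx, cx, dx, es; };

/* Standard VGA BIOS: INT 10h AH=1Ch.  AL = 0 query buffer size,
 * 1 save, 2 restore.  CX is a bitmask of states: bit 0 video
 * hardware, bit 1 BIOS data area, bit 2 DAC state and colors. */
static struct vm86_regs vga_state_call(uint8_t func, uint16_t what)
{
    struct vm86_regs r = {0};
    r.ax = 0x1C00 | func;
    r.cx = what;
    return r;
}

/* VESA: INT 10h AX=4F04h.  DL selects query/save/restore the same
 * way, and CX bit 3 adds the SuperVGA extended state. */
static struct vm86_regs vesa_state_call(uint8_t func, uint16_t what)
{
    struct vm86_regs r = {0};
    r.ax = 0x4F04;
    r.dx = func;
    r.cx = what;
    return r;
}
```

The save/restore buffer goes in ES:BX in both cases; the query form is what lets the call cover non-standard state, since the card's own BIOS decides how big the buffer must be.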

Mind you, non-BIOS save/restore works pretty well for me; right now
I'm using the "INTERVUE" browser that comes with Ralf Brown's
interrupt list; one virtual console in 100x40 text running kermit,
another in 80x25 text running dosemu/INTERVUE.  The model *can*
work.

>None taken.  I am more interested in applications written to take advantage
>of a Unicode (or similar) rendering engine, and providing the applications
>with such an engine, and providing the engine with an interface to use in
>the form of a console.  NT embodies a rendering engine; applications written
>to the NT interface should require minimal modification for other Unicode
>(or other rendering standard) interfaces.  This includes the OS and all

You would know much more about this than I;  like the good ugly American
that I am, I care very little about internationalization technology.
I think the best way to internationalize is to make sure all OSs and
applications only support Esperanto.

Anyway, while I admit that applications written for NT's wide character set
(is it necessarily Unicode?  I thought the interface was more general)
could be served by a similar interface on the console, I believe that
similar effort put into X would be better spent.  Furthermore, I doubt
many NT applications will be content with international text; you
may have a portable *text* rendering engine, but how about the rest
of the Windows environment?  As I understand, Windows NT doesn't
provide stdin/stdout analogs: every application must create its
own window.

>>This is a good point.  Believe me, I don't have any urge to write
>>said switching code.  I would have been happy if I could have relied on
>>the kernel to do it for me.  But someone's gotta write the switching
>>code, and it's gotta go somewhere.

>Better to put it in the driver and abstract the interface.  My argument in
>a nutshell.

I agree.  Better to put it in a driver.  Much harder.

>>I get better performance with fractint under dosemu than I do with
>>xfractint.  That's not surprising.  X is bound to be slower for some
>>tasks.  (of course, some of the slowdown is because xfractint doesn't
>>use its fast assembly integer math code under Linux).

>I know some of the X servers take advantage of acceleration not embodied in
>even the DOS fractint.  Again:

Well, seeing as how fractint is necessarily point-oriented, not geometric in
the least, I don't see how the X server would really benefit from knowing a little
more about the card.  

>>A windowing system is the proper place for UNIX graphics.

Er, so?  I was just bragging.  And I don't count dosemu+fractint as
"UNIX graphics."  I count that as slumming. 

>>>With such an interface, I can write an X server, GSS/CGI, MGR, PostScript,
>>>Display PostScript, HPGL, a DOS emulator, or any other consumer of the
>>>adapter driver without a single line of device dependent code, and without
>>
>>I wouldn't be so sure.  Rather, I wouldn't be so sure you'd *want* to.
>>Remember, there's more to graphics than setting the mode.  How will you
>>implement actual drawing?  What if the X server wants to use the
>>2048x1536 mode offered by your new ZGA card?  Well, you can 
>>(hopefully) get the programming information from the vendor, put together
>>an LKM with the mode setting magic, and simply add a new mode to your X
>>server's mode list.  Now, let's say it's a command-oriented card,
>>like the S3.  In fact, it's worse than that: you have no access
>>to the video memory except through commands.  Insert any other
>>VGA-incompatible card architecture here, but the point is that
>>by the time you've added all the primitives needed to efficiently use 
>>it (get/put pixel, get block of video ram, put block of video ram,
>>circle, square, palette, etc) you've written a fairly large kernel
>>service.  Don't forget that these operations must be supported
>>on *all* the cards.  So, you have to write the circle, square, etc.
>>code for all the cards.  
>>
>>That's a little too complex for my kernel, thank you.

>OK; here's the re-quote I warned you about:

>>A windowing system is the proper place for UNIX graphics.
>The DOS emulator runs in a UNIX environment.

Yes, yes, and we both know that's a misinterpretation of my statement.
Unless you're IBM or Microsoft, you don't design your OS around 
the other OSs you might want to emulate.  I mean that native UNIX
programs should stick to X.  I have said, and will say it again:
dosemu is an exception.  You can say it's not, and we'll have gotten
nowhere.  To me, dosemu is a kernel service that just happens to
be implemented in user space.

>Direct video I/O can be supported, and some intelligence in terms of the
>knowledge of display memory geometry will have to be supported.  Write and
>read faulting can translate to the "generic" model if we want to run in a
>window in the X (or other distributed display environment).

>Note that Phoenix was able to match the speed of a 4MHz PC on a 7MHz 68000
>with little trouble.  I have a hard time believing I can't do a 33MHz PC
>on a 50MHz 486 -- especially since there is much less that needs emulation.

I'm not sure what you're saying here.  Are we still talking about
graphics?

>>Linux has the stubs for SYSVR4 mode setting, as well as mouse control, in
>>the console code, by the way.  It looks alright, I guess, but I think
>>that the X server should not sacrifice the speed its knowledge of specific
>>video cards and features gives it.  

>I'm not asking it to; the state information need not take into account
>operations in progress on accelerated cards... only the display memory
>contents and the video card state at the time of the switch.

This doesn't seem logical.  If you go mucking about with display memory
while an S3 operation is still in progress, you're bound to disconcert
something.

>>>Part of the DOS emulation would be a layer to emulate a generic CGA/EGA/VGA
>>>card on top of the driver.  Since this interface has a well-defined upper
>>
>>Shudder.  People, this is not easy, and it is not fast.  I implemented
>>virtual UARTs for dosemu, and they're not pretty.

>But this very implementation implies it is possible (as do all versions of
>SoftPC 8-)).  It also implies that there exist sound reasons to do such things.

Yes, and I have explained them (partially).  The UART is a piece of hardware
that the kernel 
     1) already provides sufficient services for; I lose no functionality
        by using virtual UARTs
     2) already provides sufficient services for; I do not have to write
        kernel UART device drivers.
     3) UARTs are very standard.
     4) For non-standard UARTs (like AST 4-port cards, etc.), a 
        virtualization that resembles a standard UART will be a win:
        qmodem can run on whatever intelligent hardware you have.
        For non-standard VGA's, a standard-constrained virtualization
        is a loss: your games/viewers/applications cannot use the
        increased color palette and resolution.
     5) UARTs are interrupt driven, and the display is not (for all
        practical purposes).  Getting interrupts vectored into a
        user task and serviced by the user task would be a nightmare.
        Direct access to the UARTs would require it; direct VGA
        access wouldn't.

I think virtual UARTs are a good idea, and I will work on the code
I already have so that it can function reasonably effectively
at speeds above 7200 bps (where it currently chokes).  I will also
work to complete the functionality; modem control signals and
state change interrupts are currently broken.
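The flavor of that virtual-UART work, reduced to a few registers: a toy sketch, not dosemu's actual code, emulating only the RBR/THR and LSR of a 16450 with a one-byte receive "FIFO" (the register offsets and LSR bits are the standard 16450 layout; everything else is illustrative):

```c
/* Rough shape of a virtual 16450: trap the DOS program's port I/O at
 * base+offset and fake the registers.  Only the receive buffer,
 * transmit holding register, and line status are sketched here; the
 * helper names and the tx_log stand-in are invented for illustration. */
#include <assert.h>
#include <stdint.h>

#define RBR 0          /* receive buffer register (read)    */
#define THR 0          /* transmit holding register (write) */
#define LSR 5          /* line status register              */
#define LSR_DR   0x01  /* data ready                        */
#define LSR_THRE 0x20  /* transmit holding register empty   */

static uint8_t rx_byte;
static int     rx_full;       /* one-byte "FIFO" fed by the real tty */
static uint8_t tx_log[16];
static int     tx_len;

/* The real serial line delivers a byte to the virtual UART. */
static void uart_deliver(uint8_t c) { rx_byte = c; rx_full = 1; }

/* DOS program does IN from base+off. */
static uint8_t uart_in(int off)
{
    if (off == RBR) { rx_full = 0; return rx_byte; }  /* read clears DR */
    if (off == LSR) return (rx_full ? LSR_DR : 0) | LSR_THRE;
    return 0;
}

/* DOS program does OUT to base+off; the byte goes out the real tty. */
static void uart_out(int off, uint8_t val)
{
    if (off == THR && tx_len < (int)sizeof tx_log)
        tx_log[tx_len++] = val;
}
```

The hard parts the article alludes to are exactly what this sketch leaves out: generating the IER-gated interrupts into the DOS program, the modem control/status registers, and keeping up with the real line above 7200 bps.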

>screen memory.  The only reason for emulating the actual card are to "go
>remote"... something that is accomplished with adequate speed even with serial
>port/modem technology in products like "PC Anywhere" and "Carbon Copy".

True; I have similar plans to "go remote" into an X window.  Not this
year, mind you.

>>>There is another significant advantage, which is the ability to provide a
>>>user space daemon that translates the CGA or VGA calls into X calls to a
>particular server connection.  With such a "shim", I can easily run
>many DOS applications on remote X servers.  Generally, I will not want to do

>>I really want to scream.  Let me yell for a second: FOR ANY PROFESSIONAL
>>DOS APPLICATION, THERE ARE **NO** VGA CALLS EXCEPT ONE: SET VIDEO
>>MODE.  Real applications bang on the hardware in the worst way, and
>>there is no such thing as "simply" translating this into X calls.
>>Do you realize that the VGA memory address space (0xa0000-0xbffff)
>>can actually address two different 128k banks, one for writes, one
>>for reads?  Do you realize that the VGA memory access hardware lets
>>you specify that the data be rotated and masked before it gets
>>from the card to the CPU or before it gets from the CPU to the card?
>>If you can suggest a "simple" way to handle this, I'll be indebted
>>to you.  There are ways, but they're neither efficient nor simple.

>Yep.  Copy memory deltas at reasonable time intervals, or fault accesses to
>a virtual display's "video memory".  This is what "Carbon Copy" does (and
>makes so much money from).

dosemu does this, too.  This is how it provides DOS directly-modified
text screens over a tty.  It gets much more complicated for VGA, and
much slower.  This would, of course, require the faulting technology
as well as the kernel drivers you want.  So be it.  I certainly
don't have the time.
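For the text-mode case the delta scheme fits in a few lines; an illustrative model with an 80x25 cell buffer standing in for the 0xB8000 page, not the real dosemu code:

```c
/* Sketch of the memory-delta approach: scan the (virtual) text screen
 * against the last snapshot and repaint only the cells that changed,
 * which is roughly how directly-modified DOS screens get pushed over a
 * tty.  The buffers and function name are illustrative. */
#include <assert.h>
#include <stdint.h>

#define COLS 80
#define ROWS 25
#define CELLS (COLS * ROWS)

static uint16_t screen[CELLS];   /* char+attribute pairs, as at 0xB8000 */
static uint16_t shadow[CELLS];   /* what we last sent to the terminal   */

/* Run periodically; returns how many cells needed repainting. */
static int flush_deltas(void)
{
    int painted = 0;
    for (int i = 0; i < CELLS; i++) {
        if (screen[i] != shadow[i]) {
            /* here: move the cursor to (i % COLS, i / COLS) and
             * emit the character with its attribute */
            shadow[i] = screen[i];
            painted++;
        }
    }
    return painted;
}
```

For VGA graphics the same idea balloons: the "screen" is up to a megabyte of bank-switched memory whose pixel format depends on the mode, which is where the faulting and kernel-driver support come in.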

>>I can see why the *BSD groups get more arguing than development done.

>But even with 90% argument and only 10% development, as long as you have
>100 times as much arguing as other groups have development, you end up with
>10 times the development.  A fair trade, wouldn't you say?  8-).

Well, it was an unfair shot caused by a frustrating day.  I'm glad
you turned it around.  I'll even be so generous as to say that your
percentages are too pessimistic.  However, I think the factor of 100
is a bit off: the *BSD groups are (thankfully) fairly low-volume.

-- 
Robert Sanders
Georgia Institute of Technology, Atlanta Georgia, 30332
uucp:	  ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gt8134b
Internet: gt8134b@prism.gatech.edu