*BSD News Article 65184


Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!news.bhp.com.au!mel.dit.csiro.au!munnari.OZ.AU!news.ecn.uoknor.edu!qns3.qns.com!imci4!newsfeed.internetmci.com!inet-nntp-gw-1.us.oracle.com!news.caldera.com!nntp.et.byu.edu!cwis.isu.edu!news.cc.utah.edu!park.uvsc.edu!usenet
From: Terry Lambert <terry@lambert.org>
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: FreeBSD vs Linux
Date: 6 Apr 1996 04:27:11 GMT
Organization: Utah Valley State College, Orem, Utah
Lines: 153
Message-ID: <4k4rqv$2og@park.uvsc.edu>
References: <4issad$h1o@nadine.teleport.com> <4jejjt$cdb@park.uvsc.edu> <4jvdiq$nh4@park.uvsc.edu> <1996Apr4.105626.77611@cc.usu.edu>
NNTP-Posting-Host: hecate.artisoft.com

brandon@cc.usu.edu (Brandon Gillespie) wrote:
]
] In article <4jvdiq$nh4@park.uvsc.edu>, Terry Lambert <terry@lambert.org> writes:
] > It's probably worth most of these to open the doors to future
] > use of additional elf features, however, even if ELF isn't
] > really as wonderful as Nick & company would have you believe.
] 
] Why use 'ELF' at all then?  Sure, there is a call for getting a 'better'
] binary format, holding up some of the bonuses of ELF as a reason to change.
] I saw some from the Linux camp talking about making their own extensions
] to ELF.  In my own experience it's generally better not to even try half
] compliance via extensions, and instead simply have your own standard, even
] if it really is just ELF+extensions.  Basically, what is stopping people
] from developing an improved ELF and calling it something else?  Forgive my
] naive approach here; I have absolutely no knowledge of binary formats other
] than that of a passing programmer, but I don't see the reason to hold on to
] ANY system which has acknowledged problems.  I also think it would be a
] laudable effort if the FreeBSD and Linux camps could do as was suggested
] once and work together to create an extended ELF format (whatever it would
] be named).

Number one reason?  ELF is a standard.

The biggest deficiency is that the associated standards, notably
the SVR4 EABI, don't specify things like the call-gate mechanism,
and ELF doesn't have the ability to distinguish binaries
otherwise.

Right now you can tell Linux binaries from SVR4 EABI binaries
pretty easily.  As you add more mechanisms, you have to do
something silly, like intentionally not complying with the EABI,
unintentionally not complying (as Linux does), or adding
non-standard segments containing architecture/vendor information
to the binaries (opening yourself up to "your binaries are XXX
bytes larger than ours -- what inefficient oafs you guys are!"
type comparisons).
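
Something like this sketch (illustration only; it uses the EI_OSABI
identification byte and OSABI constants that were added to the ELF
specs later, so don't take it as what the EABI gives you today)
shows the kind of in-band branding that would let a loader tell
vendors' binaries apart without non-standard segments:

#include <elf.h>
#include <string.h>

static const char *
elf_vendor(const unsigned char ident[EI_NIDENT])
{
	/* Not ELF at all if the magic doesn't match. */
	if (memcmp(ident, ELFMAG, SELFMAG) != 0)
		return "not an ELF object";

	/* One byte in e_ident names the OS/vendor ABI. */
	switch (ident[EI_OSABI]) {
	case ELFOSABI_FREEBSD:	return "FreeBSD";
	case ELFOSABI_LINUX:	return "Linux";
	case ELFOSABI_SYSV:	return "SVR4/none (ambiguous)";
	default:		return "some other vendor";
	}
}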


The biggest win for an OS that doesn't have a controlling
market share is EABI compliance, actually, and the EABI mandates
ELF.

The other, competing ABI, Win32, makes some fundamental
assumptions about the OS architecture -- assumptions based,
in many cases, on just plain incompetent design or on
putting the "backwards" into "backwards compatibility"
so as not to orphan the existing user base (sidebar: the
existing user base, if they use their existing applications,
will not be paying you money for new applications, so they
should not be considered a target market for any revolutionary
change... that is, maintain binary compatibility, but keeping
crufty, limiting APIs the way Novell and Microsoft are doing
is just plain stupid).

For example, many of the "aligned" structures in Win32
contain elements that are not on alignment boundaries.  For
instance, they call something like:

#pragma pack(1)		/* byte-aligned packing */
struct foo {
	char	x;
	long	y;	/* starts at offset 1: not 4-byte aligned */
	/* ... more (also misaligned) members ... */
	char	fill[ 3];	/* pads the total size to a multiple of 4 */
};

an aligned structure because its footprint is not on an odd
byte boundary.

It still takes multiple bus cycles to load 'y' or the elements
signified by '...'.  Clearly, some people at Microsoft simply
don't know how to program computers.
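
If you want to see it for yourself, a quick test (mine, not
Microsoft's) along these lines prints where 'y' lands; under
1-byte packing it sits at offset 1, while the compiler's natural
layout puts it at offset 4 (or 8 where 'long' is 64 bits):

#include <stddef.h>
#include <stdio.h>

#pragma pack(1)
struct packed_foo { char x; long y; };	/* 'y' at offset 1 */
#pragma pack()

struct natural_foo { char x; long y; };	/* 'y' padded to a natural boundary */

int
main(void)
{
	printf("packed:  y at offset %lu\n",
	    (unsigned long)offsetof(struct packed_foo, y));
	printf("natural: y at offset %lu\n",
	    (unsigned long)offsetof(struct natural_foo, y));
	return 0;
}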


There are also issues in DLLs and VxDs, where the stack is
not maintained in an aligned state, causing cache misses when
copying local variables ('rep movsw' instructions with aligned
source and target areas move a hell of a lot faster).
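
For the curious, the kind of copy in question looks something
like this (i386, GCC inline assembly; a sketch, not code lifted
from anybody's DLL): 'rep movsw' moves %ecx 16-bit words from
(%esi) to (%edi), and runs noticeably faster when both pointers
are word aligned:

static void
copy_words(void *dst, const void *src, unsigned long nwords)
{
	/* dst in %edi, src in %esi, word count in %ecx */
	__asm__ __volatile__ ("rep movsw"
	    : "+D" (dst), "+S" (src), "+c" (nwords)
	    :
	    : "memory");
}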

So Win32 is simply not a viable standard ABI.  Many real
processors can't even perform unaligned accesses, and you can
set a bit (the alignment-check flag) in modern Intel processors
to prevent them from working there, too.
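
(A sketch of flipping that bit from user space -- i386-specific
GCC inline assembly, and it only has an effect if the OS has also
set the CR0.AM bit: setting the EFLAGS alignment-check flag, bit
18, makes unaligned accesses fault instead of silently costing
extra bus cycles.)

static void
enable_alignment_check(void)
{
	__asm__ __volatile__ (
	    "pushfl\n\t"
	    "orl	$0x00040000, (%%esp)\n\t"	/* AC is EFLAGS bit 18 */
	    "popfl"
	    : : : "cc", "memory");
}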


Since "common ABI is desirable" is (or should be) a "given",
then the problem is arriving at one instead of putting forth
a bunch of "standards with extensions" (which, like Novell's
implementation of POSIX printing, translates into "won't
interoperate with other peoples code").


The flip side of the coin is that there is no ABI conformance
test certification -- ABI compatibility means that a vendor
will port to the most convenient platform, and no testing
or product certification will take place on "ABI compliant"
platforms.  This is why the IBCS2 standard specifies install
tools and other "apparently unrelated" run environment pieces:
A vendor who distributes "an IBCS2 binary" instead of "an SCO
binary" will support their binary on any IBCS2 system.


Of course, arriving at a common ABI for Linux, BSD, and,
hopefully, other OS's, to better compete with Win32, is easier
to agree to do than to actually implement.

Linux passes some system call parameters in registers in the
name of efficiency.  FreeBSD almost went down this road to
hell... it still may.  The problem with doing this is binary
emulation environments.  It's no coincidence that the fact
that it runs Windows NT hasn't sent non-Intel hardware flying
off the shelves.
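
To make the convention concrete: on i386 Linux a system call
puts the call number in %eax and the arguments in %ebx/%ecx/%edx
before trapping with int $0x80 (a sketch in GCC inline assembly,
not anything from either kernel's sources).  A stack-based
convention is far easier for a foreign-binary emulation layer
to marshal:

static long
linux_i386_write(int fd, const void *buf, unsigned long len)
{
	long ret;

	__asm__ __volatile__ ("int $0x80"
	    : "=a" (ret)	/* return value comes back in %eax	*/
	    : "0" (4),		/* __NR_write is 4 on i386 Linux	*/
	      "b" (fd),		/* first argument in %ebx		*/
	      "c" (buf),	/* second argument in %ecx		*/
	      "d" (len)		/* third argument in %edx		*/
	    : "memory");
	return ret;
}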


Even if these can be resolved, there's the "standards body"
problem.  The X/Open common UNIX standard (formerly Spec 1170)
is a joke: it specifies proprietary technologies, like Motif
and STREAMS and several other things that the vendors who own
them got jammed in (like the VUE pieces in CDE).  This turns
the standard into a private club: buy-in for membership, and
the whole nine yards.  Back to business as usual, it seems.
I guess no one is learning from the fact that, despite the
best efforts of Novell, Microsoft, and the US Government
(remember GOSIP?), the Internet is *still* based on TCP/IP.

OK, I'm done with "the desirability of standards", back to "ELF".


ELF is sufficiently rich that, with small compromises, it can
open enough technological doors that are currently locked closed
against a.out (but through which no one is stepping right now
anyway) that by the time we hit the next closed door, it will
be time for a new standard anyway.  Maybe something like q-code
machines that convert ANDF "binaries" on the fly into
architecture-specific binaries in alternate "forks", from
pre-generated quad trees that haven't yet been through a code
generator.  Probably using link-object-caching technologies,
like OMOS, to do the job quickly enough that the user won't
notice the load delay.

One file runs on all machines, regardless of OS, and testing and
compilation are only needed on one machine.


But we are a hell of a long way from that, and ELF is the next
logical step, if only because it *does* open many doors, and
it is *already* becoming pervasive.


                                        Terry Lambert
                                        terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.