From: Dan Hildebrand (danh@quantum.on.ca)
Subject: Re: Microkernel means "low level services"?????
Newsgroups: comp.unix.bsd
Date: 1992-05-13 05:38:22 PST

In article <1992May8.222753.2074@walter.bellcore.com> mo@bellcore.com (Michael O'Dell) writes:
>On the other hand.....
>
>One believed-powerful idea is to replace a large set of functions by a
>smaller "spanning" set.  This is clearly the intent of the
>Microkernel efforts - a smaller, yet more general set of functions.
>So maybe the problem is not with "microkernels" per se, but rather
>with the set of spanning functions chosen?

Certainly - our goal with QNX was to implement a microkernel that provides only the most essential services and then to build everything else on top of those services. This means that message passing, first-level interrupt handling, process scheduling, and message redirection to an optional network process are all that reside in the microkernel (6.8 Kbytes, 14 kernel calls). Everything else is implemented as a process running in a separate, MMU-protected address space.

By making message passing the lowest-level primitive, and then using the network hook in the microkernel to connect the microkernels across the network into a single merged microkernel, all resources on all nodes become accessible to any process (a rough sketch of this style of message passing appears at the end of this post). Context switch times are fast enough (12 microseconds on a 33 MHz 486) that the distinction between a kernel call and a context switch matters little, and the flexibility gained opens up other optimizations.

>The Plan 9 system (which doesn't call itself a Microkernel)
>is certainly micro in size, micro in the number
>of system calls (14 I think) and concepts both exported and contained,
>and is extremely powerful.  So we do have at least one
>persuasive-argument-by-construction that the basic notion
>may be fine, assuming one picks the right spanning set.
>
>Performance, though, is still a knotty issue.  Bershad's talk
>at the Microkernel workshop about how "IPC Performance Doesn't Matter"
>was particularly unconvincing to me and the people I talked to.

We both agree and disagree with him. On the agreement side: for a request going into, say, the filesystem process, a 12 microsecond context switch to get there is probably irrelevant compared to the amount of processing that will be done once the request arrives, and the flexibility gained is worth it. On the disagreement side: our customer base builds realtime applications, and they use those same IPC services to implement their realtime systems, servicing event rates far higher than a typical disk subsystem would generate. As a result, the faster and leaner the IPC, the better. Also, being able to write a team of cooperating processes which can then be distributed over a LAN (without changes to the executable files) to take advantage of more processors is good for our target market; the second sketch at the end shows why the code doesn't change.

>Presotto's Plan 9 talk made the point several times that deciding
>when and how far to push some idea is the most subtle yet far-reaching
>decision one makes in this kind of design.

Exactly. Our choice of message passing as the lowest-level primitive "feels" inefficient at first glance, but at higher levels in the system - distributed processing, for example - it makes the architecture so much simpler (i.e., faster) that the overall result is a performance gain. Additionally, with some tricks in how multipart messaging works (the last sketch at the end illustrates the scatter/gather idea), we're able to outperform SVR4 disk, network and device I/O by a large margin.
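As a rough illustration of what "message passing as the lowest-level primitive" looks like to a programmer, here is a minimal client/server sketch assuming a QNX 4-style Send()/Receive()/Reply() interface. The header name and prototypes are approximations, not a definitive listing:

    /* Minimal client/server sketch over blocking message passing.
     * Send() blocks the client until the server Reply()s, so one
     * primitive carries both the request and the response.
     * Assumed QNX 4-style prototypes; treat them as approximations.
     */
    #include <sys/kernel.h>    /* assumed home of Send(), Receive(), Reply() */
    #include <sys/types.h>
    #include <string.h>

    struct request { int op;  char text[64]; };
    struct reply   { int status; };

    /* Server: block in Receive(), do the work, unblock the client with Reply(). */
    void serve_forever( void )
    {
        struct request req;
        struct reply   rep;
        pid_t          client;

        for( ;; ) {
            client = Receive( 0, &req, sizeof( req ) );   /* 0 = receive from anyone */
            rep.status = ( req.op == 1 ) ? 0 : -1;        /* trivial stand-in for real work */
            Reply( client, &rep, sizeof( rep ) );
        }
    }

    /* Client: one Send() call delivers the request and waits for the answer. */
    int ask_server( pid_t server, const char *text )
    {
        struct request req;
        struct reply   rep;

        req.op = 1;
        strncpy( req.text, text, sizeof( req.text ) - 1 );
        req.text[ sizeof( req.text ) - 1 ] = '\0';

        if( Send( server, &req, &rep, sizeof( req ), sizeof( rep ) ) == -1 )
            return -1;
        return rep.status;
    }

At this level, servers such as the filesystem or a device manager are simply processes sitting in a Receive() loop like the one above.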
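The distribution-over-a-LAN point comes down to this: the client's Send() is the same whether the server's pid names a local process or one on another node, because the network hook in the microkernel forwards the message. The sketch below reuses the types and ask_server() from the previous sketch; lookup_server() is a hypothetical stand-in for whatever name-lookup service hands back the server's pid, not a real QNX call:

    /* The client code is byte-for-byte the same whether "fsys" runs on this
     * node or across the LAN; only the pid returned by the (hypothetical)
     * lookup differs, so the executable needs no changes to run distributed.
     */
    extern pid_t lookup_server( const char *name );      /* hypothetical helper */

    int query_any_node( const char *text )
    {
        pid_t server = lookup_server( "fsys" );          /* may resolve to a remote node */

        if( server == -1 )
            return -1;
        return ask_server( server, text );               /* ask_server() from the sketch above */
    }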
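Finally, the multipart-messaging remark: the actual calls are not spelled out above, so the sketch below only illustrates the scatter/gather idea with a hypothetical send_parts() and mx_entry type, not QNX's real interface. The gain is that a request header and a data buffer living in different places go out as one message, with no copy into a contiguous staging buffer:

    /* Hypothetical multipart ("scatter/gather") send: two separate buffers,
     * one message, zero staging copies.  send_parts() and struct mx_entry
     * are illustrative names only.
     */
    #include <sys/types.h>
    #include <stddef.h>

    struct mx_entry {                 /* one fragment of a multipart message */
        void   *ptr;
        size_t  len;
    };

    extern int send_parts( pid_t pid,
                           const struct mx_entry *parts, int nparts,
                           void *reply, size_t reply_len );    /* hypothetical */

    struct io_write { int op; size_t nbytes; };   /* made-up request header */
    struct io_reply { int status; };

    int write_through_fsys( pid_t fsys, const void *data, size_t nbytes )
    {
        struct io_write hdr = { 2, nbytes };      /* 2 = made-up "write" opcode */
        struct mx_entry parts[2];
        struct io_reply rep;

        parts[0].ptr = &hdr;           parts[0].len = sizeof( hdr );
        parts[1].ptr = (void *) data;  parts[1].len = nbytes;

        if( send_parts( fsys, parts, 2, &rep, sizeof( rep ) ) == -1 )
            return -1;
        return rep.status;
    }

Collecting a header plus user data in a single kernel operation like this is presumably where much of the I/O advantage mentioned above comes from.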
Dan Hildebrand                     email: danh@quantum.on.ca   QUICS: danh
Quantum Software Systems, Ltd.     (613) 591-0934 (data)
mail: 175 Terrence Matthews        (613) 591-0931 (voice)
Kanata, Ontario, Canada K2M 1W8    (613) 591-3579 (fax)