*BSD News Article 90098


Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.ecn.uoknor.edu!feed1.news.erols.com!cpk-news-hub1.bbnplanet.com!cam-news-hub1.bbnplanet.com!news.bbnplanet.com!uunet!in3.uu.net!204.191.213.61!ott.istar!istar.net!gateway.qnx.com!not-for-mail
From: doug@qnx.com (Doug Santry)
Newsgroups: comp.programming.threads,comp.unix.bsd.freebsd.misc
Subject: Re: [??] pure kernel vs. dual concurrency implementations
Date: 25 Feb 1997 09:28:43 -0500
Organization: QNX Software Systems
Lines: 91
Message-ID: <5eusur$3bd@qnx.com>
References: <330CE6A4.63B0@cet.co.jp> <5etasa$blt@news.cc.utah.edu>
NNTP-Posting-Host: qnx.com
Xref: euryale.cc.adfa.oz.au comp.programming.threads:3306 comp.unix.bsd.freebsd.misc:36089

In article <5etasa$blt@news.cc.utah.edu>,
Terry Lambert <terry@cs.weber.edu> wrote:
>Hi, Mike!  8-).
>
>
>In article <330CE6A4.63B0@cet.co.jp> Michael Hancock <michaelh@cet.co.jp> writes:
>] I've been talking to some people who favor pure kernel threading over
>] a dual kernel-and-userland model when it comes to implementation on a
>] traditional Unix kernel design like FreeBSD.
>] 
>] Assuming a well-designed strict kernel implementation and a well-designed
>] dual concurrency model, say like Digital UNIX's, both using
>] FreeBSD as a starting point, which is the way to go?
>] 
>] Pro strict kernel people say:
>] 
>] * simpler model, less complicated scheduler
>] 
>] * high concurrency
>
>Add:
>
>* Better SMP scalability than pure user space threads
>
>* Higher context switch overhead
>
>* More frequent context switches as blocking calls cause the user's
>quantum to be given back to the system
>
>* Potential for CPU starvation during N:M scaling of user contexts to
>  kernel threads for N > M

Huh?  Each user context has an associated kernel context, N=M.
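
To make the 1:1 case concrete, here is a bare-bones sketch using plain POSIX
threads, assuming they are implemented 1:1 on kernel threads as in the pure
kernel model under discussion.  Each pthread is an entity the kernel can
schedule on its own, so the thread parked in read() does not hold up the
other one:

/* Sketch only: assumes a 1:1 (pure kernel thread) pthreads implementation. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocker(void *arg)
{
    char buf[64];
    ssize_t n;

    /* Blocks in the kernel; only this thread sleeps. */
    n = read(STDIN_FILENO, buf, sizeof buf);
    printf("blocker: read returned %ld\n", (long)n);
    return NULL;
}

static void *worker(void *arg)
{
    int i;

    for (i = 0; i < 3; i++) {
        printf("worker: still running (%d)\n", i);
        sleep(1);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, blocker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(b, NULL);
    /* Process exit tears down the thread still blocked in read(). */
    return 0;
}

The tradeoff both camps are arguing about is visible here: the blocked
thread's quantum goes back to the kernel, and waking it again is a full
kernel context switch.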

>] Dual concurrency people say:
>] 
>] * better concurrency
>] 
>] * less kernel resource usage problems
>
>Add:
>
>* Equal SMP scalability to "pure kernel"

Nope.  If the kernel doesn't know about it, it can't schedule it onto multiple
CPUs.  So the user and/or the lib have to start worrying about LWPs or
whatever.  Madness.

>* Lower context switch overhead

Got any numbers to back this up?  A user-level lib has lots of work to do...
At QNX, we found pure kernel-level threads were faster.
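
For a feel of what that work looks like, here is a minimal user-level switch
done with the POSIX ucontext calls; it is only a sketch, and a real M:N
library has to layer a run queue, timers and blocking-call wrappers on top
of it.

/* Sketch of a user-level "context switch": no kernel involvement at all. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, fib_ctx;

static void fiber(void)
{
    printf("fiber: running on its own stack\n");
    /* Hand the CPU back to main without entering the kernel. */
    swapcontext(&fib_ctx, &main_ctx);
    printf("fiber: resumed, returning\n");
}

int main(void)
{
    char *stack = malloc(64 * 1024);

    getcontext(&fib_ctx);
    fib_ctx.uc_stack.ss_sp = stack;
    fib_ctx.uc_stack.ss_size = 64 * 1024;
    fib_ctx.uc_link = &main_ctx;        /* where to go when fiber() returns */
    makecontext(&fib_ctx, fiber, 0);

    printf("main: switching to fiber\n");
    swapcontext(&main_ctx, &fib_ctx);   /* save main, run fiber */
    printf("main: back, switching again\n");
    swapcontext(&main_ctx, &fib_ctx);   /* resume fiber where it left off */
    printf("main: done\n");
    free(stack);
    return 0;
}

The switch itself is the cheap part; the expensive part for a user-level lib
is everything around it, like tracking which threads are runnable and keeping
blocking syscalls from taking the whole process down with them.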

>* Once the scheduler gives me a quantum, it's *MY* quantum, dammit!

That is silly.  The kernel has a scheduler, and now every threaded process
needs one too?  Let the kernel do all the scheduling; it has more information
anyway.

>* better N:M scalability for N user threads and M kernel threads, where
>  N > M

And this buys you what?

>] In DEC's model it doesn't look like you need to worry about converting
>] blocking calls to non-blocking calls as in other userland
>] implementations.  Instead they have some kind of upcall mechanism that
>] supplies a new kernel execution context to the userland process so that
>] another thread can be scheduled if the current one is blocked.
>
>This is a dual concurrency model.  It is about as efficient as you can
>get.  An alternative implementation that would be much simpler for

Got any numbers to back this up?
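
As an aside, for anyone who has not seen the upcall scheme being described:
its shape, simulated entirely in user space, looks roughly like the sketch
below.  The types and names are made up for illustration; this is not DEC's
interface.

/* Simulated only: kctx_t and blocked_upcall() are hypothetical names. */
#include <stdio.h>

/* Stand-in for a fresh kernel execution context handed to the process. */
typedef struct { int id; } kctx_t;

/* Userland scheduler's upcall handler: it is called with a new context
 * when one of the process's threads blocks in the kernel, and uses that
 * context to run some other runnable user thread. */
static void blocked_upcall(kctx_t *fresh)
{
    printf("upcall: a thread blocked; running another user thread "
           "on kernel context %d\n", fresh->id);
}

/* Stand-in for the kernel side: instead of just sleeping the process,
 * it delivers an upcall along with a fresh execution context. */
static void simulate_blocking_call(void)
{
    kctx_t fresh;

    fresh.id = 42;
    blocked_upcall(&fresh);
}

int main(void)
{
    simulate_blocking_call();
    return 0;
}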

>conversion of a kernel threading environment, and would save the VM
>mapping overhead of the DEC model, would be to supply an async call gate,
>and generate a new kernel thread *only* for potentially blocking calls,
>using a preallocation pool to have a per-process kernel thread waiting
>to avoid startup latency in case a conversion is needed.

Let the kernel do what it was designed to do!  Manage system resources!
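
For what it's worth, the preallocated-thread part of the async call gate idea
can at least be sketched in user space with plain pthreads: park a thread
ahead of time and hand it the potentially blocking call, so the caller keeps
running and pays no thread-startup cost at call time.  Everything here is a
stand-in, not an implementation of the actual proposal.

/* Sketch: a pre-created thread takes over a potentially blocking call. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static void (*pending)(void) = NULL;   /* blocking work handed off */
static int done = 0;

/* Preallocated thread, parked until a blocking call shows up. */
static void *parked_thread(void *arg)
{
    pthread_mutex_lock(&lock);
    while (pending == NULL)
        pthread_cond_wait(&cv, &lock);
    pthread_mutex_unlock(&lock);

    pending();                         /* do the blocking work */

    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void slow_call(void)
{
    sleep(1);                          /* stands in for a blocking syscall */
    printf("parked thread: blocking call finished\n");
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, parked_thread, NULL);  /* preallocation */

    /* "Call time": hand off the work; no thread is created here. */
    pthread_mutex_lock(&lock);
    pending = slow_call;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&lock);

    printf("caller: still running while the call blocks elsewhere\n");

    pthread_mutex_lock(&lock);
    while (!done)
        pthread_cond_wait(&cv, &lock);
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}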

>] Pure kernel proponents say that in the time all that was done a new
>] kernel thread could have been switched in.
>
>These people are assuming that the VM mapping overhead has to be paid at
>call time.  They are incorrect.  See above.

Where?

DJS