*BSD News Article 18408

Xref: sserve comp.os.linux:48005 comp.os.386bsd.questions:3821 comp.sys.ibm.pc.hardware:60362 comp.windows.x.i386unix:2518
Newsgroups: comp.os.linux,comp.os.386bsd.questions,comp.sys.ibm.pc.hardware,comp.windows.x.i386unix
Path: sserve!newshost.anu.edu.au!munnari.oz.au!news.Hawaii.Edu!ames!haven.umd.edu!darwin.sura.net!ra!tantalus.nrl.navy.mil!eric
From: eric@tantalus.nrl.navy.mil (Eric Youngdale)
Subject: Re: SUMMARY: 486DX2/66 for Unix conclusions (fairly long)
Message-ID: <CA62J8.7Fs@ra.nrl.navy.mil>
Sender: usenet@ra.nrl.navy.mil
Organization: Naval Research Laboratory
References: <CA3pv5.56D@implode.rain.com> <PCG.93Jul13210635@decb.aber.ac.uk> <michaelv.742625634@ponderous.cc.iastate.edu>
Date: Wed, 14 Jul 1993 18:11:31 GMT
Lines: 51

In article <michaelv.742625634@ponderous.cc.iastate.edu> michaelv@iastate.edu (Michael L. VanLoon) writes:
>4.3BSD *pages* when system load is light.  This means it takes from a
>[...]
>If system load is very heavy, however, paging would take more time
>than actually running processes, so the system *swaps*.  Swapping
>[...]

	Thanks for the explanation.  As has been pointed out before, linux does
not swap in the traditional sense.  Since the linux memory manager was not
written to be a clone of some other memory manager, it differs from a
classical design in a number of ways.  One major difference is that there is
no minimum number of pages that the memory manager tries to keep in memory
for each process.  This means that a sleeping daemon can in fact have all of
its pages removed from memory (the kernel stack page and the upage stay
resident, although apparently some other schemes allow even those to be
removed).
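
	To make that concrete, here is a rough C sketch of the idea; the task_t
type and the page_is_idle()/evict() helpers are made up for illustration and
are not the actual kernel code:

    typedef struct {
        unsigned long user_pages[64];   /* pages mapped for this process */
        unsigned long nr_pages;
        unsigned long kernel_stack;     /* never reclaimed */
        unsigned long upage;            /* never reclaimed */
    } task_t;

    /* Strip every idle user page from a sleeping task.  Note that there
     * is no minimum-resident-set check anywhere: a daemon that has been
     * idle long enough can end up with nr_pages == 0 in core.          */
    static void shrink_task(task_t *t,
                            int (*page_is_idle)(unsigned long),
                            void (*evict)(unsigned long))
    {
        unsigned long i = 0;

        while (i < t->nr_pages) {
            if (page_is_idle(t->user_pages[i])) {
                evict(t->user_pages[i]);                /* drop the PTE */
                t->user_pages[i] = t->user_pages[--t->nr_pages];
            } else {
                i++;
            }
        }
        /* t->kernel_stack and t->upage are deliberately left alone. */
    }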

	When the linux kernel needs memory, it goes through and looks for pages
that have not been accessed recently.  If such a page is clean, it can simply
be reused immediately rather than being written out to the swap file.  Linux
demand loads binaries and shared libraries, and the idea is that any clean
page can simply be reloaded by demand loading instead of being pulled back
from a swap file.  Thus it tends to be only dirty pages that make their way
into the swap files, and it also means that the kernel can free up some
memory by reusing code pages without ever having to write them out to disk.
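
	A minimal sketch of that reclaim decision (the names are invented, and
the real code is organised quite differently):

    enum page_action { KEEP, REUSE_NOW, WRITE_TO_SWAP };

    struct page_info {
        int recently_used;      /* referenced since the last scan */
        int dirty;              /* modified since it was read in  */
    };

    /* A clean page is dropped on the spot: if it is needed again it is
     * demand loaded from the binary or library on disk.  Only a dirty
     * page has to go out to the swap file before it can be freed.     */
    static enum page_action reclaim_decision(const struct page_info *p)
    {
        if (p->recently_used)
            return KEEP;
        if (!p->dirty)
            return REUSE_NOW;           /* no disk write needed */
        return WRITE_TO_SWAP;
    }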

	Linux tends to share pages whenever possible.  For example, all
processes running emacs will share clean pages for both the emacs binary and
the shared libraries (these pages are also shared with the buffer cache).
This means that swapping out a process that is running the same binary as
some other process gains very little, since much of the actual memory cannot
be freed.  Paging still works well in this scheme, because it is still easy
to find out which pages have not been used recently by a particular process,
and we can easily remove unused pages from the page tables of processes on
the system.  Once the usage count for a particular page drops to 0 (i.e. it
is not in anyone's page tables and not in the buffer cache), we can reclaim
the page entirely to be used for something else.
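
	The usage count idea looks roughly like this in C; again, put_page()
and struct phys_page here are invented names rather than the actual kernel
data structures:

    struct phys_page {
        int use_count;          /* page tables + buffer cache holding it */
    };

    /* Called when one process drops its page table entry for the page,
     * or when the buffer cache lets go of it.  Unmapping the page from
     * a single emacs process frees nothing while other users remain.   */
    static void put_page(struct phys_page *pg,
                         void (*add_to_free_list)(struct phys_page *))
    {
        if (--pg->use_count == 0)
            add_to_free_list(pg);       /* now reusable for anything */
    }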

	I guess the way I see it, the only real advantage of swapping is that
you are effectively keeping particular processes out of memory longer than
would otherwise be the case, which tends to reduce thrashing.  The only time
the linux approach breaks down is when you have too many computable processes
fighting for memory, and in principle the linux scheduler could be modified
to temporarily lower the priority of some of these processes and ultimately
achieve the same result through paging alone.  With the current kernel, idle
processes already end up completely "swapped out" via paging anyway, so it is
not clear that this change is really needed.
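
	Purely as a sketch of that scheduler idea - nothing like this exists
in the kernel, and the fields and thresholds are invented:

    struct proc {
        int runnable;           /* wants the CPU                       */
        int priority;           /* larger value means it runs sooner   */
        int fault_rate;         /* page faults per scheduling interval */
    };

    /* When too many runnable processes are fighting for memory, penalise
     * the heaviest page-faulters for a while; ordinary paging then strips
     * their pages, approximating what a traditional swapper would do.    */
    static void throttle_thrashers(struct proc *p, int nproc, int nr_runnable)
    {
        int i;

        if (nr_runnable <= 4)           /* arbitrary threshold */
            return;
        for (i = 0; i < nproc; i++)
            if (p[i].runnable && p[i].fault_rate > 100)
                p[i].priority -= 5;
    }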

-Eric
-- 
"When Gregor Samsa woke up one morning from unsettling dreams, he
found himself changed in his bed into a lawyer."