*BSD News Article 50947



Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!simtel!swidir.switch.ch!newsfeed.ACO.net!Austria.EU.net!EU.net!howland.reston.ans.net!nntp.crl.com!news.fibr.net!usenet
From: Rob Snow <rsnow@txdirect.net>
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: "An HTTP software server can pummel a CPU..."
Date: 15 Sep 1995 02:31:08 GMT
Organization: G3 Research, Inc.
Lines: 63
Message-ID: <43aohc$89j@nimitz.fibr.net>
References: <gary-1309951409030001@bhb17.acadia.net> <438u8f$cok@kadath.zeitgeist.net> <439qed$rdm@lace.Colorado.EDU>
NNTP-Posting-Host: oasis.txdirect.net
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 1.1N (X11; I; BSD/386 uname failed)
X-URL: news:439qed$rdm@lace.Colorado.EDU

apuzzo@snake.colorado.edu (Tony Apuzzo) wrote:
>In article <438u8f$cok@kadath.zeitgeist.net>,
>Amancio Hasty, Jr. <hasty@rah.star-gate.com> wrote:
>>gary@first.acadia.net (Gary Robinson) wrote:
>>>Hello,
>>>
>>>From an InfoWorld 6/19/95 article:
>>>
>>>"An HTTP software server can pummel a CPU, because there's no mechanism in
>>>any existing server to control the amount of processor time allotted.  Ten
>>>users doing SQL queries, for instance, might bring the system to a
>>>standstill while users trying to receive static pages wait."
>>
>>Yep, this is very true for OSes which don't have preemptive priority
>>scheduling. For instance, right now I am running a simple program which
>>has a tight loop just chewing up the CPU while I am typing this.
>>Wait let me start five more copies...
>>So much for InfoWorld.
>>
>>Curious, which OS was the article referring to?
>>
>>	Tnks,
>>	Amancio
>
>You're missing the point. Try this instead:
>
>   sh$ for i in 1 2 3 4 5 6 7 8 9 0
>   > do
>   > find / -type f -exec grep slow '{}' \; >/dev/null 2>&1 &
>   > sleep 5
>   > done
>
>This is a more realistic (though still pretty poor) simulation of 10 users
>doing complex SQL queries.
>
>This type of thing can only happen if your HTTP server is serving pages
>that support search engines.  If you have other users trying to get regular
>pages (HTML code, .gifs, etc.), they will be *significantly* delayed while
>this is going on.  It is possible to avoid problems like this through load
>balancing, or running the SQL on another machine, etc.  I think Infoworld's
>point was that it is easy to slow an otherwise capable HTTP server to a
>crawl if you aren't careful.
>
>-Tony
>-- 
>*                                                         
>* Be a non-conformist like me and don't use a .sig at all.
>*                                                         

Maybe I don't quite understand this, but how about starting your SQL
queries at a high nice level?

i.e., in the above example:

	nice +15 find...etc...

My machine is almost always running a couple of processes doing time-series
analysis, and I just run them at nice +15 to +20.
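As a sketch, Tony's simulation could be reniced the same way. (Caveats: the
"+15" spelling above is the csh builtin's syntax; the standalone nice(1)
traditionally takes "-15", and modern POSIX systems take "-n 15". The value
15 and the /usr/share path below are purely illustrative.)

```shell
#!/bin/sh
# Same simulated "SQL query" load as before, but each query is started
# through nice(1) so interactive page fetches keep scheduling priority.
# NICE_LEVEL and the search path are illustrative, not from the thread.
NICE_LEVEL=15
for i in 1 2 3 4 5
do
    # Discard both stdout and stderr; note >/dev/null must come
    # before 2>&1 for stderr to be discarded too.
    nice -n "$NICE_LEVEL" find /usr/share -type f \
        -exec grep slow '{}' \; >/dev/null 2>&1 &
    sleep 1
done
wait    # let the background jobs finish before exiting
```

The niced jobs still soak up idle CPU, but the scheduler will favor the
un-niced httpd and shell processes whenever they become runnable.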

______________________________________________________________________
Rob Snow                                            Powered by FreeBSD
rsnow@txdirect.net                              http://www.freebsd.org