*BSD News Article 79133



Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.ecn.uoknor.edu!solace!news.stealth.net!www.nntp.primenet.com!nntp.primenet.com!cs.utexas.edu!news.tamu.edu!news.utdallas.edu!nrchh45.rich.nt.com!ferret.ocunix.on.ca!resurrect
From: merlin@magic.metawire.com (Marc MERLIN)
Newsgroups: comp.os.linux.misc,comp.unix.bsd.freebsd.misc,comp.infosystems.www.misc
Subject: Re: How many hits a second on a unix server?
Date: 20 Sep 1996 11:33:14 -0700
Organization: Private Linux Box based on Red Hat Linux
Lines: 41
Message-ID: <R.51uo1a$bta@magic.metawire.com>
References: <323ED0BD.222CA97F@pobox.com> <51nn4m$gn3@usenet.srv.cis.pitt.edu> <51si5a$79o@magic.metawire.com> <B.A.MCCAULEY.96Sep20114009@wcl-l.bham.ac.uk>
Xref: euryale.cc.adfa.oz.au comp.os.linux.misc:131571 comp.unix.bsd.freebsd.misc:27951 comp.infosystems.www.misc:44080

Reposting article removed by rogue canceller.

In article <B.A.MCCAULEY.96Sep20114009@wcl-l.bham.ac.uk>,
 <B.A.McCauley@bham.ac.uk> wrote:
>In article  <51si5a$79o@magic.metawire.com> merlin@magic.metawire.com (Marc
>MERLIN) writes:

>>>this is asinine. show me NT on an 8M p5/100 serving 300 hits/second. that's
>>>not even hard with Linux.
>>
>>300 hits/s = 25,920,000 hits/day,  more than Netscape, and they distribute
>>the load on many servers with DNS rotation.

Several people told me that Netscape is now reaching about 100 million hits
a day; it was indeed posted on their main web page not long ago, so my
figures were old. I have yet to see 300 hps on a Linux machine, though.

>>Keep in mind that  each hit takes several seconds to  serve, and that with
>>an average of  5sec/hit (understatement when you look at  most web pages),
>>you would need about  1500 Web servers in Memory (and  of course about 400
>>Megs of memory to fit all these in memory if your unix flavor could handle
>>that many processes).

>I  would like,  however,  to  contest the  estimate  that  1500 web  server
>processes would  take 400Mb  of memory -  remember fork()  doesn't actually
>copy any memory, it just marks it for copy-on-write.

You're right, my sentence was misleading: you don't really have 1500 full Web
servers in memory, since the code segment is not duplicated by fork(). Each of
those servers does still need memory for its own data segment, though.
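
To make the copy-on-write point concrete, here is a minimal sketch of what a
forking server relies on (my own illustration, not anything from this thread;
the 1 MB buffer and the four children are arbitrary numbers): after fork() the
child still shares the parent's pages, and the kernel copies only the pages
the child actually writes to.

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define DATA_SIZE (1024 * 1024)          /* stand-in for a server's data segment */

int main(void)
{
    char *data = malloc(DATA_SIZE);
    int i;

    if (data == NULL)
        return 1;
    memset(data, 'x', DATA_SIZE);        /* parent touches every page */

    for (i = 0; i < 4; i++) {            /* four "server" children */
        pid_t pid = fork();
        if (pid == 0) {
            /* The child still shares the parent's pages here.  Only the
             * page written below is actually copied; the rest of the
             * megabyte (and all of the code) stays shared. */
            data[0] = 'y';
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;                                /* reap the children */
    free(data);
    return 0;
}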

I still believe, however, that 400 Megs is not necessarily an overestimate
for a potential 300 hps (which, from what I understand, has yet to be seen on
any machine).
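
Spelling out the back-of-the-envelope arithmetic behind those figures (my own
restatement of the numbers already given above):

    300 hits/s * 5 s per hit   ~  1500 requests being served at any instant
    400 MB / 1500 processes    ~  270 KB of unshared data per server process

In other words, the 400 Megs amounts to allowing roughly a quarter of a
megabyte of private data per server process.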

>Incidently there are non-forking HTTP  servers for Unix although I've never
>used one.

Yes, but they will still need a lot of memory to handle all those requests.
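
For the record, here is a rough sketch of how such a non-forking server is
structured (my own illustration of the single-process select() model, not any
particular server's code; the port number and the canned reply are
placeholders, and error handling and request parsing are omitted): one process
multiplexes all the client sockets instead of forking per request.

#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int listener, maxfd, fd;
    struct sockaddr_in addr;
    fd_set fds, ready;

    listener = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);              /* arbitrary example port */
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 128);

    FD_ZERO(&fds);
    FD_SET(listener, &fds);
    maxfd = listener;

    for (;;) {
        ready = fds;                          /* select() modifies its set */
        if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0)
            break;
        for (fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &ready))
                continue;
            if (fd == listener) {             /* new connection */
                int client = accept(listener, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &fds);
                    if (client > maxfd)
                        maxfd = client;
                }
            } else {                          /* request on an existing socket */
                char buf[4096];
                const char *reply = "HTTP/1.0 200 OK\r\n"
                                    "Content-Type: text/plain\r\n\r\n"
                                    "hello\r\n";
                if (read(fd, buf, sizeof(buf)) > 0)
                    write(fd, reply, strlen(reply));
                close(fd);
                FD_CLR(fd, &fds);
            }
        }
    }
    return 0;
}

A real server would of course add request parsing, timeouts and error
handling; the sketch only shows the multiplexing loop.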

Marc
-- 
Home page: http://www.efrei.fr/~merlin/ (browser friendly)