*BSD News Article 82021



Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!news.cs.su.oz.au!metro!metro!munnari.OZ.AU!news.ecn.uoknor.edu!news.wildstar.net!news.ececs.uc.edu!www.facs.federated-fds.com!news-out.internetmci.com!pull-feed.internetmci.com!newsfeed.internetmci.com!www.nntp.primenet.com!nntp.primenet.com!mr.net!news.mr.net!news
From: fritchie@MR.Net (Scott Lystig Fritchie)
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Performance measurement via vmstat puzzling
Date: 29 Oct 1996 18:04:30 -0600
Organization: Minnesota Regional Network
Lines: 317
Sender: fritchie@data
Message-ID: <ytl20ehs4fl.fsf@data.mr.net>
NNTP-Posting-Host: data.mr.net
X-Newsreader: Gnus v5.1

Greetings --

I've been puzzled for a while now about using "vmstat" to figure out
how much oomph my P5-120 PCI, dual Adaptec 2940, SMC EtherPower10/100,
FreeBSD 2.1.5-RELEASE box has left.  I've been looking at "vmstat"'s
output, and the more I look, the more puzzled I get ... so I stopped
looking.  :-)  Perhaps a gentle reader could assist.

[Perhaps I should be using a different measuring tool?]

This box is busy acting as an NNTP feeder machine.  During its busiest
24-hour period so far, it successfully sent just over 7 million
articles (not counting IHAVE/TAKETHIS offers and sent-but-rejected
articles).
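(Averaged over the whole day, that's roughly 80 articles sent per
second.)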

At one minute before the hour, all of the "innxmit" processes are
killed, just so that we don't accumulate a pile of hung "innxmit"
processes.  The machine's performance goes utterly to Hell for 90
seconds or so, since all the innxmit processes are busy trying to
rewrite their batch files.  Exactly on the hour (and every 10 minutes
thereafter), "nntpsend" is run via cron; it shrinks the batch files
(if necessary) and then runs "innxmit" for each feed.
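
The crontab entries look more or less like this ("kill-innxmits" is a
stand-in name for our kill script, and the paths are made up):

    # one minute before the hour: SIGTERM any straggling innxmit
    59 * * * *               /usr/local/news/bin/kill-innxmits
    # on the hour and every 10 minutes after: shrink batches, restart feeds
    0,10,20,30,40,50 * * * * /usr/local/news/bin/nntpsend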

Scattered throughout the vmstat output below is a line, printed once a
minute, showing how many "innxmit" processes were running at the time,
followed by "uptime" output.

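That counter line comes from a trivial loop run alongside vmstat --
something like this, from memory:

    #!/bin/sh
    # count innxmit processes once a minute, tack on uptime output
    while :; do
        echo "  `ps ax | grep innxmit | grep -v grep | wc -l` ... `uptime`"
        sleep 60
    done
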
% vmstat -w 10 sd0 sd1 sd2
 procs   memory     page                    disks         faults      cpu
 r b w   avm   fre  flt  re  pi  po  fr  sr s0 s1 s2 f0   in   sy  cs us sy id
[first line deleted, since it shows averages since boot rather than
 current activity]
 6 0 0318504 45860  365 152   0   1 386   0 50 16 16  0 3102 1807 1128 22 52 26
 2 0 0301372 45820  398 272   0   1 476   0 54 16 15  0 3517 1922 1317 19 59 22
 5 0 0310536 45552  291 118   0   0 224   0 13 20 14  0 3522 1628 1195 14 51 35
 8 0 0321820 44936  146 111   0   0  79   0 14 18 14  0 3411 1778 1019 14 68 18
     46 ... 12:56AM  up 4 days, 14:32, 2 users, load averages: 4.43, 3.96, 3.04

For those who don't want to wade through the rest of my message, I'll
get right to the point.  For those 40 seconds' worth of samples, the
load average was about 4.4, but the CPU was allegedly idle at least
1/4 of the time.  So, is "vmstat" lying about the amount of idle CPU
time?  Or are there too many context switches and/or interrupts going
on to service the processes in the runnable queue?  Is the number of
processes in the runnable queue inflated?

The reason I ask all of this is that I want to be able to guess how
many more feeds I can put onto this box.  If I go by idle CPU time
(and the bandwidth available on the Fast Ethernet), I can add perhaps
25% more feeds.  If I go by load average, I should stop adding feeds
now (and perhaps move some off).  I'm quite puzzled....
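
One thing that might account for part of it, if I remember the 4.4BSD
sources right: loadav() counts not only runnable processes but also
processes in short-term sleeps (disk wait included), and the three
numbers are exponentially decaying averages resampled every 5 seconds.
A pile of innxmits briefly blocked on sd0 could therefore prop up the
load average even while the CPU sits idle.  A toy run of the decay
arithmetic for the 1-minute average (constants from memory, not
gospel):

    awk 'BEGIN {
        e = exp(-5/60)                  # 5-second sample, 60-second window
        load = 0
        for (t = 5; t <= 120; t += 5) {
            nrun = (t <= 60) ? 6 : 0    # 6 countable procs, then none
            load = load * e + nrun * (1 - e)
            printf "t=%3ds  load=%.2f\n", t, load
        }
    }'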

The rest of the "vmstat" output below contains better examples of high
CPU "idle" time alongside relatively large runnable queues.  If you're
curious, continue reading; otherwise you've probably read enough.

We continue...

 5 2 0315368 45024  350 186   0   2 344   0 54 16 16  0 3548 1951 1223 19 67 14
 3 0 0337528 43740  195 219   0   0 182   0 24 14 16  0 3524 1868 1114 21 79  1
 1 1 0329212 44276  315 134   0   0 255   0 23 21 21  0 4022 1854 1362 24 74  2
 5 0 0320900 43620  267 128   1   3 200   0 30 19 18  0 3619 1692 1335 24 55 21
 7 0 0307228 45708  366 284   0   0 445   0 56 17 12  0 3580 2069 1422 25 62 13
 7 0 0298552 43784  330 163   0   0 259   0 13 17 19  0 3933 1907 1513 24 60 16
12 2 0315892 43048   81 146   1   2  40   0 30 17 16  0 3855 1821 1242 22 68 11
 7 0 0311112 43276  128 172   0   0  93   0 23 17 17  0 4290 1465 1436 24 76  1
15 0 0320016 42212  162 169   0   0 102   0 32 21 17  0 4172 1596 1510 28 72  0
     46 ... 12:57AM  up 4 days, 14:33, 2 users, load averages: 6.01, 4.58, 3.38
11 1 0327772 43084   47 213   0   2 118   0 31 19 15  0 4039 1358 1363 23 77  0
 8 0 0293420 44020  200 222   0   0 323   0 25 15 19  0 3927 1597 1329 25 74  1
 7 0 0301440 44668  191 175   0   0 163   0 16 17 15  0 3887 1482 1386 26 61 13
 3 0 0301876 44052  358 131   0   0 267   0 19 19 20  0 3934 1857 1348 22 61 17
 2 7 0289916 44048  306 125   0   1 239   0 24 18 22  0 3751 1815 1269 20 55 25
 6 2 0298328 43800  303 139   0   0 234   0 15 22 21  0 3392 2220 1164 21 54 25
11 0 0320688 42644  270 121   0   0 186   0  8 23 29  0 3162 2155 1102 13 59 28
 6 0 0337076 41444  233  96   0   1 117   0  9 15 18  0 2658 1945 798 13 86  1

... right here we're about to kill off all 46 "innxmit" processes.
sd0, where the batch files are stored, thrashes something fierce (it's
a 2GB Hawk drive, which is capable of more transactions/second than it
shows below).
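
If I wanted to double-check what the Hawk can really do, I suppose I
could aim a stream of small raw reads at it and watch the transaction
rate, something like this (device name from memory):

    % dd if=/dev/rsd0 of=/dev/null bs=512 &
    % iostat -w 5 sd0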

The size of the blocked queue is pretty funny, too.

     46 ... 12:59AM  up 4 days, 14:35, 2 users, load averages: 6.15, 4.90, 3.60
 procs   memory     page                    disks         faults      cpu
 r b w   avm   fre  flt  re  pi  po  fr  sr s0 s1 s2 f0   in   sy  cs us sy id
 323 0322628 42880  321 107   0   0 244   0 15 14 13  0 2674 1825 857 13 84  3
 037 0309792 42776   71 100   0   0  65   0 69  4  5  0 1034  858 349 32 33 35
 142 0279824 42892   49  45   0   0  90   0 52  2  1  0  503  459 149 30 23 47
 244 0331156 43000   40  28   0   0  72   0 58  2  1  0  414  345 121 29 18 53
 145 0321028 44976   60  21   0   0 192   0 59  1  2  0  358  349 115 26 20 54
 143 0348280 46520  108  31   0   0 330   0 58  2  2  0  423  358 180 24 18 58
 045 0348292 50072  203  15   0   0 709   0 53  1  1  0  342  353 166 19 21 61
 225 0282164 52928  198  25   0   0 439   0 69  2  1  0  489  468 175 24 22 54
15 7 0230668 55164  157  20   0   0 442   0 64  1  1  0  380  407 112 23 22 55
 116 0149232 57544  165   7   0   0 431   0 55  0  0  0  324  423  88 30 24 46
     18 ...  1:00AM  up 4 days, 14:36, 2 users, load averages: 1.98, 3.84, 3.33
 0 4 0149272 63244  397   7   0   0 667   0 52  2  2  0  313  524  99 25 31 45
 3 1 0162324 63196  397   8   0   0 487   0 46  1  1  0  338  436  89 21 20 59
 1 2 0149536 66444  347  10   0   0 750   0 49  4  3  0  430  481  96 14 17 70
 1 2 0133720 65248  477  31   0   0 496   0 65  4  3  0  535  715 173 12 20 68
 1 1 0120512 65432  464  44   0   0 550   0 64  3  3  0  667  953 271 21 23 56
 1 1 0138128 62048  421  27   1   0 321   0 66  5  3  0  853 1221 460 44 22 34
 5 0 0172656 61044  385  21   1   2 257   0 56  3  6  0  845 1275 389 40 47 14
      8 ...  1:02AM  up 4 days, 14:37, 2 users, load averages: 1.78, 3.39, 3.20

... sd0's activity remains high while "nntpsend" shrinks individual
batch files, as required, before starting "innxmit".
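
(The shrinking is done with INN's "shrinkfile" utility, which chops a
batch file down to a given size from the front, keeping the newest
entries -- invocation from memory, size in bytes, path made up:

    % shrinkfile -s 1000000 -v /usr/spool/news/out.going/some.peer
)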

 1 1 0168960 60748  378  98   1   0 304   0 60  6  7  0  875 1352 379 51 33 16
 4 0 0160352 62372  234 153   0   0 266   0 58  6  6  0 1173 1298 502 48 31 21
 6 0 0160668 59876  374 170   0   0 369   0 47  6  7  0 1380 1534 697 54 34 12
 1 0 0152824 59920  292  79   0   4 229   0 59  5  8  0 1418 1500 648 50 31 19
 2 1 0161312 59500  382  86   0   0 298   0 48  4  6  0 1427 1397 681 54 29 17
 procs   memory     page                    disks         faults      cpu
 r b w   avm   fre  flt  re  pi  po  fr  sr s0 s1 s2 f0   in   sy  cs us sy id
 0 5 0179268 58256  389 136   0   0 363   0 49  6  8  0 1182 1351 511 48 32 20
 6 2 0183644 58748  242  82   0   4 249   0 46  6  6  0 1184 1034 405 27 66  7
     15 ...  1:03AM  up 4 days, 14:38, 2 users, load averages: 2.64, 3.27, 3.16
 4 0 0200788 58544  397  94   0   0 456   0 50  5  6  0 1428  900 516 39 32 29
 3 0 0183896 58300  458  39   0   0 377   0 46  6  7  0 1582  876 657 52 29 19
 0 3 0183840 58136  384 189   0   0 467   0 56  7  6  0 1510  913 583 37 35 27
 1 4 0193288 58964  395 112   0   0 473   0 60  5  6  0 1545  836 533 26 34 40
 1 1 0202900 56124  356  69   2   0 280   0 53  8  6  0 1629 1221 651 46 32 22
 5 0 0224448 55692  465  74   0   3 364   0 58  6  8  0 1647 1366 601 39 35 26
     22 ...  1:04AM  up 4 days, 14:40, 2 users, load averages: 2.71, 3.10, 3.10
 0 2 0224260 56084  292  57   0   0 202   0 20  4  6  0 1338 1405 454 28 70  2

... so here's a case where the idle time is pretty high, but so is the
size of the runnable queue.  The blocked queue is also relatively
large, and sd0's activity is still very high.  Are the processes in
the runnable queue ones which have just moved out of the blocked
queue?
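
(One way I could check is to snapshot the process states while this is
happening; 4.4BSD ps can print the state and wait channel directly,
something like:

    % ps -axo pid,state,wchan,command | grep innxmit

Anything in state "D" is in disk wait; "R" is on the run queue.)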

 2 4 0224392 55476  278  67   0   0 217   0 61  8  7  0 1798 1162 649 27 33 40
 2 2 0194820 55728  453 293   0   0 676   0 67  8  9  0 1951 1386 698 22 41 37
 6 3 0216020 55440  350 132   0   4 352   0 64  5  7  0 1766 1074 592 23 36 40
 3 2 0202100 56616  521 224   0   0 664   0 67  5  7  0 1738 1183 635 26 38 36
 3 0 0212272 55836  336 180   0   0 421   0 57 12  7  0 2421 1281 1046 38 46 16
 6 0 0285124 53764  490 162   0   3 358   0 53 11  7  0 1959 1411 748 39 50 11
 8 0 0259120 53616  100 189   1   0 196   0 50  9 10  0 2061 1724 680 25 75  0
     26 ...  1:05AM  up 4 days, 14:41, 2 users, load averages: 3.34, 3.10, 3.09
 3 3 0241744 57260  425 228   0   0 529   0 54 12 11  0 2144 1555 777 37 51 12
 2 2 0246284 54972  249 165   0   4 205   0 44 11  8  0 2186 1466 702 11 36 53
 2 1 0229864 54288  421 176   0   0 500   0 53 10  9  0 2640 1383 860 13 46 41
 2 3 0247308 53744  367 295   0   0 515   0 52  9  9  0 2555 1411 896 16 48 36
 2 0 0247932 54920  465 164   0   0 611   0 71  7 10  0 2512 1271 826 14 43 42
 5 1 0243604 52420  399 174   1   2 375   0 60  9  9  0 2594 1599 953 24 45 31
 procs   memory     page                    disks         faults      cpu
 r b w   avm   fre  flt  re  pi  po  fr  sr s0 s1 s2 f0   in   sy  cs us sy id
 7 0 0288040 50296  250 155   0   0 210   0 37 12 10  0 3377 1389 1021 18 77  6
     33 ...  1:06AM  up 4 days, 14:42, 2 users, load averages: 3.47, 3.08, 3.07
 3 0 0253692 51800  224 222   1   0 329   0 32  8  9  0 2579 1611 821 20 78  2
 1 1 0281712 50296  920 113   1   6 469   0 64 11  9  0 2426 1977 915 33 47 20
 2 0 0272796 48824  585  80   0   0 256   0 12  9  8  0 2973 1169 1241 57 43  0
 3 0 0276856 48248  314  71   0   0 236   0  3  7  8  0 2590 1126 1104 67 33  0
 7 5 0306968 47636  583 117   0   0 435   0 25  8  9  0 2389 1339 985 58 39  3
 322 0304700 47388  426 206   0   0 372   0 76 15 10  0 2873 1803 1041 24 52 24

... Whoa!  22 processes in the blocked queue.  We haven't seen that
many processes reported blocked in quite a while.  Is it a quirk of
fate (or design) that "vmstat" hasn't reported more processes blocked?

 519 0347620 56352  349 240   0   0 709   0 76 15 10  0 2852 1586 1012 19 52 28

... Whoa!  There's another big jump in the blocked queue size.

16 7 0338896 56244  261 400   0   0 506   0 76 11 10  0 2832 1639 991 20 77  3
10 1 0306212 56392  173 291   0   4 357   0 63 12 13  0 3197 1582 1055 21 79  0
     36 ...  1:08AM  up 4 days, 14:44, 2 users, load averages: 7.15, 3.99, 3.40
 3 7 0330700 55532  387 143   0   0 355   0 33 17 18  0 3136 2029 1100 25 71  4
 5 2 0317968 54840  287 254   0   0 305   0 63 13 22  0 3552 1791 1293 26 57 17
 5 0 0293896 54008  429 192   0   3 423   0 42 15 18  0 3418 1859 1277 27 57 16
 6 0 0298084 54288   12 146   0   0  18   0  7 15 21  0 3905 1692 1343 25 54 21
 6 0 0269752 52384   48 200   0   0   0   0  8 17 16  0 3383 1792 1225 33 51 16
 5 0 0261352 52408    2 132   0   1   6   0 17 18 16  0 3506 1942 1228 23 48 29
 6 0 0317228 50716  147 119   0   0  76   0 11 10 13  0 2870 1543 905 17 51 31
 7 0 0285404 50436   60 126   0   0   2   0  3 11 11  0 2975 1677 857 20 80  0

... OK, now we're more-or-less in the state we'll run at for the rest
of the hour.  CPU idle time fluctuates from single digits (rarely) up
to a more typical 30-45%.  Yet the load average is pretty high for a
single-CPU system.

Any insight would be appreciated.  More "vmstat" output to follow my
.sig.

-Scott
---
Scott Lystig Fritchie, Network Engineer          MRNet Internet Services, Inc.
fritchie@mr.net, PGP key #152B8725               Minnesota Regional Network
v: 612/362.5820, p: 612/637.9547                 2829 University Ave SE
http://www.mr.net/~fritchie/                     Minneapolis, MN  55414

--- snip --- snip --- snip --- snip --- snip --- snip --- snip --- snip --- 

     44 ...  1:09AM  up 4 days, 14:45, 2 users, load averages: 3.92, 3.63, 3.31
 6 0 0242400 51556   77 120   0   0  71   0  7 14  8  0 3227 1421 1048 17 62 21
 5 0 0277464 53136   12 113   0   0  54   0  2 13 12  0 3437 1395 1105 15 44 42
 8 0 0260464 53376   15 100   0   0  18   0  2 14  9  0 3107 1270 999 13 39 48
 2 0 0295012 52748  541  94   0   0 424   0 15 10  9  0 2586 1399 849 12 41 47
 procs   memory     page                    disks         faults      cpu
 r b w   avm   fre  flt  re  pi  po  fr  sr s0 s1 s2 f0   in   sy  cs us sy id
 5 3 0313312 51624  339 194   0   0 330   0 27 13 10  0 2745 1737 1012 19 49 31
 5 0 0283404 52008  398 102   0   0 396   0  7 14 11  0 2636 1553 859 13 40 47
 7 0 0316888 50500  161  82   0   0  68   0 10 12 11  0 2679 1513 802 12 66 23
     44 ...  1:10AM  up 4 days, 14:46, 2 users, load averages: 3.64, 3.46, 3.26
 2 0 0274652 52096  240  68   0   0 184   0  3  9  9  0 2530 1592 781 11 81  8
 5 0 0296024 52076  355  66   0   0 274   0  5 11 14  0 2579 1362 872 11 38 51
 8 0 0274728 52040  261  76   0   0 197   0  9 12 14  0 2735 1408 912 12 39 49
 7 0 0252376 52044  197  66   0   0 159   0  3 11 12  0 2455 1416 769  9 37 54
 1 3 0278884 51308  397  87   0   0 256   0 33 15 15  0 2945 1786 1101 21 50 29
 6 0 0275980 50824  186 229   0   1 291   0 43 13 13  0 2800 1992 1115 29 51 20
 6 2 0345320 48304  508 122   0   0 336   0 11 12  7  0 2304 2214 955 25 52 22
 8 0 0344756 47960  121  97   0   0  48   0 13 17 12  0 2905 1859 923 17 81  2
     46 ...  1:12AM  up 4 days, 14:47, 2 users, load averages: 5.56, 3.86, 3.41
 2 4 0297832 49588  227 116   0   1 191   0 11 14 14  0 2585 1994 903 25 64 11
 5 1 0310700 49032  344 119   1   0 263   0 27 19 13  0 2854 2032 1051 24 46 30
 5 1 0277732 48048  293 265   1   0 367   0 50 16 18  0 3207 1837 1256 30 56 13
 4 0 0272976 47416  295 111   0   0 217   0 17 13 18  0 2786 2100 1048 26 46 28
 2 0 0295552 47324  367 130   0   3 282   0 32 14 19  0 3245 2131 1241 26 54 20
 5 0 0296184 47248  324 116   0   0 257   0 15 19 19  0 3240 2228 1298 27 52 21
 8 0 0339576 45428  300 120   0   0 190   0 17 18 20  0 3210 1770 1169 28 60 12
 9 0 0339940 45540   85 114   0   3  30   0 31 14 17  0 3187 1747 982 16 83  0
 4 1 0306792 45624  148 102   0   0  91   0 16 16 21  0 3259 1790 1080 23 75  2
     47 ...  1:13AM  up 4 days, 14:49, 2 users, load averages: 4.82, 3.89, 3.46
 5 0 0319784 45968  355 146   0   0 296   0 20 20 18  0 3328 1992 1267 20 56 25
 6 0 0302332 46576  389 112   0   4 318   0 28 17 14  0 3046 1790 1090 15 50 35
 procs   memory     page                    disks         faults      cpu
 r b w   avm   fre  flt  re  pi  po  fr  sr s0 s1 s2 f0   in   sy  cs us sy id
 3 1 0293536 46548  353 113   0   0 284   0 17 15 15  0 2978 1744 1093 16 49 35
 4 0 0294408 45976  257 245   0   0 353   0 42 17 15  0 3361 1973 1237 17 60 23
 1 4 0306428 46156  353 248   0   0 363   0 64 16 15  0 2891 1920 1079 20 56 25
 510 0319156 45592  436 199   0   4 485   0 48 16 12  0 2822 1892 998 19 63 19
 8 0 0327156 44996  189 127   0   0 169   0 15 14 14  0 2508 2014 852 17 83  0
     45 ...  1:14AM  up 4 days, 14:50, 2 users, load averages: 4.45, 3.95, 3.51
 4 0 0301756 46788  276  97   0   0 241   0 15 16 18  0 2720 1936 975 21 65 14
 0 0 0296200 47856  351 104   0   4 277   0 25 16 11  0 2639 1644 928 16 42 42
 2 1 0283608 47556  369 102   0   0 288   0 19 15 10  0 2895 1611 1068 15 46 39
 5 0 0288132 47356  248 243   0   0 354   0 35 12 12  0 3477 1791 1304 18 58 23
 0 6 0279332 47396  304  86   0   2 246   0 12 15 16  0 3194 1361 1199 18 48 34
 3 0 0244100 47392  300  80   0   0 238   0  5 14  8  0 2959 1528 1033 13 44 44
 4 2 0299516 45804  281  74   0   0 174   0  3 13 15  0 2720 1694 835 12 58 30
 6 0 0299888 45756   90 114   0   0  35   0  6 16 16  0 3189 1608 1027 19 80  1
     45 ...  1:16AM  up 4 days, 14:51, 2 users, load averages: 3.96, 3.78, 3.47
 3 0 0305352 47388  239 108   0   0 211   0  8 13 14  0 3276 1719 1056 16 64 20
 2 4 0310012 46948  310  90   0   0 235   0  6 15 16  0 3174 1733 1124 16 46 38
 6 1 0288600 47228  362 133   2   2 339   0 59 16 19  0 3565 1938 1315 20 58 22
 5 0 0280856 46288  340 244   1   0 351   0 33 15 18  0 3606 2080 1376 25 58 17
12 1 0294600 45712  239 146   0   0 180   0 45 15 19  0 3182 1983 1222 28 54 18
 5 3 0298136 47108  319 284   0   4 508   0 67 17 15  0 3515 1711 1366 30 63  7
 9 0 0328140 45732  171 133   0   0  90   0 28 14 14  0 3006 1890 1011 23 71  7
 6 0 0332056 45556  137 124   0   0  62   0 11 13 11  0 2585 1933 857 23 75  2
     42 ...  1:17AM  up 4 days, 14:53, 2 users, load averages: 4.50, 3.88, 3.53
 5 0 0307072 46392  328 122   0   0 275   0 17 14 21  0 3160 2133 1155 24 55 20
 procs   memory     page                    disks         faults      cpu
 r b w   avm   fre  flt  re  pi  po  fr  sr s0 s1 s2 f0   in   sy  cs us sy id
 317 0311648 46408  291 161   0   3 224   0 31 18 15  0 3240 1966 1241 30 50 20
 7 0 0281324 46516  213 135   0   0 174   0  8 18 18  0 3242 1976 1223 24 50 26
 3 0 0286340 45672  315 209   0   0 237   0 28 26 16  0 3538 1900 1378 32 56 12
 5 0 0276916 46416  295 181   0   4 246   0 48 19 16  0 3307 1932 1219 25 52 22
 0 0 0289980 46032  316 124   0   0 245   0 26 15 10  0 2866 1749 1042 18 46 36
10 0 0337140 44740  149 109   0   0  70   0  6 19 21  0 3199 2218 993 15 74 11
 6 0 0291556 44864   49 126   0   2   7   0 10 33 28  0 3258 2677 977 15 84  1
     44 ...  1:18AM  up 4 days, 14:54, 2 users, load averages: 4.75, 3.97, 3.59
 6 0 0271084 46448   53 146   0   0  69   0  3 30 34  0 3125 3476 1099 17 60 23
 2 0 0276104 46912   18 120   0   0  27   0  5 23 33  0 3000 2826 1087 22 47 31
 1 0 0254972 46880    1  73   0   0   1   0  7 10 16  0 2827 1170 969 14 38 48
 5 0 0249624 46828    1  88   0   0   5   0  3 15 13  0 3041 1408 999 13 39 48
 2 0 0241180 46812    1  81   0   0   0   0  1 14  9  0 2684 1383 874 11 36 53
 6 0 0241172 46828    0  84   0   0   0   0  3 12 11  0 2769 1281 888 12 37 51
 0 9 0296780 45224  144  92   0   0  53   0  3 15  8  0 3039 1318 915 20 71  9
     43 ...  1:19AM  up 4 days, 14:55, 2 users, load averages: 3.38, 3.63, 3.48
 3 0 0262396 46816   93  99   0   0  71   0  2 11 11  0 2549 1360 830 19 78  3