*BSD News Article 59559



Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!news.mel.connect.com.au!munnari.OZ.AU!news.ecn.uoknor.edu!news.ysu.edu!usenet.ins.cwru.edu!pravda.aa.msen.com!nntp.coast.net!news.kei.com!newsfeed.internetmci.com!swrinde!sdd.hp.com!hamblin.math.byu.edu!park.uvsc.edu!usenet
From: Terry Lambert <terry@lambert.org>
Newsgroups: comp.unix.bsd.netbsd.misc,comp.unix.bsd.bsdi.misc,comp.unix.solaris,comp.unix.aix
Subject: Re: ISP hardware/software choices (performance comparison)
Date: 20 Jan 1996 00:37:36 GMT
Organization: Utah Valley State College, Orem, Utah
Lines: 106
Distribution: inet
Message-ID: <4dpdgg$csu@park.uvsc.edu>
References: <4cmopu$d35@vixen.cso.uiuc.edu> <4d9has$qo9@park.uvsc.edu> <4de3db$n6a@engnews2.Eng.Sun.COM> <4depms$bi5@park.uvsc.edu> <4dnjeh$b48@engnews2.Eng.Sun.COM>
NNTP-Posting-Host: hecate.artisoft.com
Xref: euryale.cc.adfa.oz.au comp.unix.bsd.netbsd.misc:1940 comp.unix.bsd.bsdi.misc:2085 comp.unix.solaris:57176 comp.unix.aix:68668

thurlow@peyto.eng.sun.com (Robert Thurlow) wrote:
] >Client caching prior to NFSv3 violates the protocol specification.
] 
] You're of course not being specific here once again, but I'd
] guess you wouldn't mean that client caching of data read from
] the server must not be done; that would be quite ridiculous.
] Caching data and checking its consistency with GETATTRs has
] been done in all useful implementations.

[ ... ]

] Did I fail to read your mind completely?  If so, please feel
] free to *be specific* about your comment.

I thought it was intuitively obvious from the spec, though it
doesn't really say a great deal about the client one way or
another, spending most of its effort on the server.

Please see my followup to Casper's request for clarification;
I quote the RFC, a respected Internals book, and (by reference)
two Usenix papers.
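
For anyone following along, the usual pre-V3 client behavior is
to cache file data and revalidate it by comparing the modify time
from a fresh GETATTR once an attribute timeout expires.  A rough
sketch of the idea follows; the structure and names are
illustrative, not the actual SunOS or BSD client code:

/*
 * Attribute-timeout cache revalidation, roughly as NFS v2
 * clients do it.  Hypothetical structure and names.
 */
#include <sys/time.h>

struct nfsnode {
	struct timeval n_attrtime;	/* when attributes were fetched */
	struct timeval n_mtime;		/* server mtime at last check */
	int            n_timeo;		/* attribute cache timeout (sec) */
};

/* Returns nonzero if cached pages for the file may still be used. */
int
nfs_cache_valid(struct nfsnode *np, struct timeval *now,
		struct timeval *server_mtime)
{
	/* Within the attribute timeout: trust the cache outright. */
	if (now->tv_sec - np->n_attrtime.tv_sec < np->n_timeo)
		return 1;

	/*
	 * Past the timeout, the caller did an over-the-wire GETATTR;
	 * if the server mtime moved, the cached data is stale and
	 * must be invalidated before it is handed to the user.
	 */
	if (server_mtime->tv_sec != np->n_mtime.tv_sec ||
	    server_mtime->tv_usec != np->n_mtime.tv_usec)
		return 0;

	np->n_attrtime = *now;		/* cache re-armed */
	return 1;
}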


] >Server caching of writes violates the protocol specification.
] 
] Yes, the server acknowledging write requests before having the
] data safe in stable storage is a protocol violation.  No version
] of SunOS officially supports a way to permit a server to cheat
] like this, and it's drawn lots of criticism over the years.  I
] remember most of the criticism from when I worked at Convex, which
] had implemented a per-mount option to enable async writes; I
] always thought the option was good to have.


It is unofficially supported.  I can post the Solaris patch if
you want.
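
To be concrete about what "violates" means here: a V2 server is
required to have the data on stable storage before it acknowledges
the WRITE.  In rough outline (a hypothetical helper, not the
actual nfsd code):

/*
 * The V2 server-side rule under discussion: don't reply to a
 * WRITE until the data is synced.  An "async write" server skips
 * the sync and replies immediately, which is the protocol
 * violation (the data is lost if the server crashes).
 */
#include <sys/types.h>
#include <unistd.h>

int
nfsd_write(int fd, const void *buf, size_t len, off_t off)
{
	if (pwrite(fd, buf, len, off) != (ssize_t)len)
		return -1;	/* reply with an error, not success */

	/* The expensive part: force the data out before the reply. */
	if (fsync(fd) == -1)
		return -1;

	return 0;		/* now it is safe to acknowledge */
}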

] >The one possible win (and you have yet to claim it, or anything
] >other than a blanket statement that 5.x is faster than 4.x,
] >without providing numbers or rationale) is kernel threading of
] >the biod's.
] 
] Oh, horse droppings.  There was lots of room for streamlining
] of the code; for reducing the number of times the data buffer
] was copied or checksummed; for optimizing VM interactions.

There still is room: for instance, using a non-Streams TCP/IP
implementation to reduce latency.

[ ... ]

] I don't keep a machine running SunOS 4.1.x near my desk so that I
] can provide instant gratification to people who weren't really
] paying attention the last time they did touch Solaris, but I did
] get you some performance numbers.  I found two low-end machines
] (SparcStation 1's) running 4.1.3 and 5.5, and ran Connectathon
] against that same equidistant server to see how they compared.
] With 5.5, I gave you both NFS V2 and NFS V3 results, and I also
] tried it from my more modern desktop machine.  Given your comment
] above, I thought writes were the most interesting thing to compare.
] I'll let others talk about SPEC SFS / LADDIS; it's not my area.

[ ... good numbers ... ]

Thank you.  This does show that NFS V2 on SunOS 5.5 outperforms
NFS V2 on SunOS 4.1.3_U1 on (identically configured?) SS 1 machines.

You don't make clear whether the server had the Solaris "fast
NFS write" option (not async) enabled or not.  Since the default
is "enabled", I suspect this might skew the numbers, if the 5.5
client knew about the option and was written to take advantage
of it and the 4.1.3_U1 client did not.

This would probably not account for the overall throughput
difference, however.

Were both connections over the same transport (ie: both UDP)?

I would also be interested in the "ttcp" (the test program, not
T/TCP the protocol) numbers, which gauge raw throughput on the
interface, so we can separate the speed difference due to the
protocol and driver implementation from the NFS implementation.
Or the raw UDP throughput number, if the transport for both NFS
implementations was UDP and not TCP.
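
For reference, the transmit side of such a raw throughput test
is tiny; something along these lines (the address, port, and
counts here are made up, and ttcp proper does considerably more):

/*
 * Minimal raw UDP transmit-rate test in the spirit of ttcp:
 * blast fixed-size datagrams at a sink and time it.  The
 * receive side and error handling are omitted.
 */
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int
main(void)
{
	char buf[8192];
	struct sockaddr_in sin;
	struct timeval t0, t1;
	double secs;
	int i, s = socket(AF_INET, SOCK_DGRAM, 0);

	memset(buf, 0, sizeof(buf));
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(5001);			/* arbitrary test port */
	sin.sin_addr.s_addr = inet_addr("10.0.0.2");	/* test sink */

	gettimeofday(&t0, NULL);
	for (i = 0; i < 2048; i++)			/* 16 MB total */
		sendto(s, buf, sizeof(buf), 0,
		    (struct sockaddr *)&sin, sizeof(sin));
	gettimeofday(&t1, NULL);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%.2f Mbit/s\n", 2048.0 * sizeof(buf) * 8 / secs / 1e6);
	return 0;
}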

I suspect the 4.1.3_U1 box would suffer because of its Lance
driver, though I can't think of a way, off the top of my head,
of divorcing the TCP/IP implementation from the driver to
further break down the overhead (short of writing a packet
turnaround driver).


Again, thanks for the numbers -- they are quite interesting,
even if they aren't compelling without the other information as
well.  They make a good case for Solaris for an ISP who isn't up
to the level of coding/tuning necessary to make the other
factors matter, and who has more than a 2.3 Mbit/s connection to
his NSP which he wants to fully utilize.


					Regards,
                                        Terry Lambert
                                        terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.