*BSD News Article 97775


Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!news.cs.su.oz.au!inferno.mpx.com.au!news.ci.com.au!brian.telstra.net!act.news.telstra.net!news.telstra.net!psgrain!news-stk-11.sprintlink.net!news-stk-3.sprintlink.net!news-west.sprintlink.net!news-peer.sprintlink.net!news.sprintlink.net!Sprint!news.maxwell.syr.edu!stdio!uninett.no!not-for-mail
From: sthaug@nethelp.no (Steinar Haug)
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: New Netperf throughput numbers for FreeBSD
Date: 12 Jun 1997 20:19:30 GMT
Organization: Nethelp Consulting, Trondheim, Norway
Lines: 37
Message-ID: <5nplkj$16f@verdi.nethelp.no>
NNTP-Posting-Host: dole.uninett.no
Cache-Post-Path: dole.uninett.no!unknown@verdi.nethelp.no
Xref: euryale.cc.adfa.oz.au comp.unix.bsd.freebsd.misc:42936

I just submitted a new set of throughput numbers for Fast Ethernet and
FreeBSD to Rick Jones, the Netperf maintainer. In short, we can fill
the wire :-)

I measured 93.57 Mbit/s with Netperf. This was the best of several
measurements with different socket buffer sizes and read/write sizes.
However, *all* of the measurements were above 93 Mbit/s.
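
(Throughput tests of this kind are normally Netperf TCP_STREAM runs;
a run with explicit socket buffer and send sizes looks roughly like

  netperf -H <receiver> -t TCP_STREAM -- -s 57344 -S 57344 -m 8192

where the host name and sizes above are only illustrative, not the
exact values used in these runs.)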

If you look at the bits on the wire, 93.57 Mbit/s (application to
application) corresponds to 93.57 * (1538/1440) = 99.94 Mbit/s on the
wire. Note that 1440 is the correct payload size, not 1460, because I
did not change the default use of the RFC 1323 and RFC 1644 extensions
(so each packet carries 20 bytes of TCP options).
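
For anyone who wants to redo the arithmetic, here is a minimal C
sketch, assuming the standard 100BaseTX framing overhead (8 byte
preamble, 14 byte Ethernet header, 4 byte FCS and 12 byte inter-frame
gap around a 1500 byte MTU) and 20 bytes each of IP header, TCP header
and TCP options per segment:

/*
 * Back-of-the-envelope conversion of the application-level Netperf
 * figure into a wire-level rate. The framing constants below are the
 * usual Ethernet numbers, not something measured in this test.
 */
#include <stdio.h>

int main(void)
{
    const double app_mbps   = 93.57;                   /* Netperf result     */
    const double wire_bytes = 8 + 14 + 1500 + 4 + 12;  /* 1538 bytes/frame   */
    const double data_bytes = 1500 - 20 - 20 - 20;     /* 1440 bytes payload */

    /* 93.57 * 1538 / 1440 = 99.94 Mbit/s on the wire */
    printf("wire rate: %.2f Mbit/s\n", app_mbps * wire_bytes / data_bytes);
    return 0;
}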

The setup was: 

Sender: noname machine, PPro-200 with 256 KB cache, 440FX chipset,
BCM Advanced Research SQ600 mainboard, 64 MB memory.
Kingston (DEC 21140 based) 100BaseTX network card.
FreeBSD 2.2-BETA operating system.

Receiver: noname machine, P-133 with 256 KB cache, 430VX chipset,
QDI P5I430VX motherboard, 32 MB memory.
Intel Pro 100/B 100BaseTX network card.
FreeBSD 3.0-970124-SNAP operating system.

Both machines were run with no other load during the test. They were
connected via a Cisco Catalyst 5000 switch. Connections to both hosts
were full duplex. The network was isolated.

The P-133 was clearly a limiting factor. Running 'top' showed that it
was spending something like 99% of the time in kernel or interrupt
mode during this test - so even with a network faster than 100 Mbps
Ethernet, *this* particular setup wouldn't run much faster. However,
the PPro-200 still had plenty of CPU left.

Steinar Haug, Nethelp consulting, sthaug@nethelp.no