*BSD News Article 82486

Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.mel.connect.com.au!news.mel.aone.net.au!news-out.internetmci.com!newsfeed.internetmci.com!swrinde!www.nntp.primenet.com!nntp.primenet.com!news1.best.com!nntp1.best.com!usenet
From: dillon@flea.best.net (Matt Dillon)
Newsgroups: comp.unix.bsd.freebsd.misc,comp.unix.bsd.misc
Subject: Re: 100Base-TX: where's the bottleneck?
Date: 7 Nov 1996 23:39:04 GMT
Organization: BEST Internet Communications, Inc.
Lines: 44
Message-ID: <55truo$d87@nntp1.best.com>
References: <327D4EE8.2D@cplabs.com> <55jkrt$7jc@cynic.portal.ca> <327E37E4.5AD4@cplabs.com> <55t6lf$9km@hpindda.cup.hp.com>
NNTP-Posting-Host: flea.best.net
Xref: euryale.cc.adfa.oz.au comp.unix.bsd.freebsd.misc:30776 comp.unix.bsd.misc:1432

:In article <55t6lf$9km@hpindda.cup.hp.com>, Rick Jones <raj@cup.hp.com> wrote:
:>Henry Wong (henry@cplabs.com) wrote:
:>: 	Yes, the driver just uses PIO mode, not Bus Master mode.  But I
:>: think even if it works in PIO mode, it shouldn't get such low
:>: performance.  I changed the hub to a 10M hub, and I got about
:>: 7 Mbit/s of performance.  Maybe that is a little faster than a normal
:>: 10M ethernet adapter's performance.  Also, I used netperf to test,
:>: and I got the same result on either 100M ethernet or 10M ethernet.
:>
:>If the driver is using PIO instead of DMA, then it will be consuming
:>great quantities of CPU time - the CPU is being used to move the data
:>between the host and card.
:>
:>There is nothing "magical" about 100 Mbit Ethernet that makes your
:>system go faster for bulk throughput. The MTUs are no larger, so it
:>takes just as many packets to transmit a MB of data on 10BaseT as it
:>does on 100BT which means it takes just as many CPU cycles to transmit
:>a MB. If the 10BT solution takes say 40% of the CPU to do its thing,
:>then I would not be surprised to see that same card in 100 Mbit mode
:>only being driven at say 20 Mbit/s.
:>
:>Just imagine what fun it will be to try and drive Gigabit Ethernet at
:>link rate. Maybe people will finally recognize the price being paid
:>for keeping that tiny 1500 byte MTU...
:>
:>rick jones
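
    To put rough numbers on Rick's point (an illustrative sketch only,
    assuming the standard 1500-byte Ethernet MTU and a roughly constant
    per-frame CPU cost):

#include <stdio.h>

/* Sketch only: per-frame arithmetic behind the MTU argument above.
 * Assumes a 1500-byte MTU and a roughly constant CPU cost per frame. */
int main(void)
{
    double mtu = 1500.0;                       /* payload bytes per frame */
    double frames_per_mb = 1048576.0 / mtu;    /* frames to move 1 MB */
    double fps_10  = 10e6 / 8.0 / mtu;         /* frames/sec at 10 Mbit/s */
    double fps_100 = 100e6 / 8.0 / mtu;        /* frames/sec at 100 Mbit/s */

    printf("frames per MB:           %4.0f (same for 10BaseT and 100BaseTX)\n",
           frames_per_mb);
    printf("frames/sec at  10 Mbit:  %4.0f\n", fps_10);
    printf("frames/sec at 100 Mbit:  %4.0f\n", fps_100);
    /* Ten times the per-frame work per second, so a host that needs 40%
     * of its CPU for 10BaseT cannot simply run ten times faster at
     * 100BaseTX. */
    return 0;
}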

    This sort of thing is where PCI DMA really starts to show its stuff
    as compared to ISA.  You actually *could* drive gigabit ethernet
    (1000 MBits/sec, i.e. about 125 MBytes/sec) via PCI, though it
    wouldn't leave much room for anything else.  Still, with a DMA-based
    driver and a Pentium Pro > 150 MHz, there would possibly even be
    enough CPU to run a TCP stack at that speed :-)

    A Pentium Pro 200 with parity, non-EDO RAM has about 200 MBytes/sec of
    dynamic RAM throughput.  The PCI bus has 133 MBytes/sec of throughput
    (and I've actually run PCI DMA at 115 MBytes/sec, so I know it can
    come close!).  No problem!  :-) :-)

    The real question is: what is the interrupt/TCP/IP stack overhead for
    FreeBSD on a Pentium Pro 200?
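
    One crude way to get a first-order feel for it (a sketch, not a real
    benchmark -- netperf, which can report CPU utilization, is the better
    tool): time a bulk TCP send and compare wall-clock time against the
    user+system CPU charged to the sender.  Interrupt time isn't
    necessarily charged to the process, so this understates the true cost.

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Sketch only: push a fixed amount of data at a discard-style TCP sink
 * (host and port are hypothetical command-line arguments) and report
 * throughput plus user+system CPU consumed by the sender. */
int main(int argc, char **argv)
{
    char buf[8192];
    long total = 64L * 1024 * 1024;     /* send 64 MB */
    long sent = 0;
    ssize_t n;
    struct sockaddr_in sin;
    struct rusage ru;
    struct timeval t0, t1;
    double secs, cpu;
    int s;

    if (argc != 3) {
        fprintf(stderr, "usage: %s host-ip port\n", argv[0]);
        return 1;
    }
    memset(buf, 0, sizeof(buf));
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = inet_addr(argv[1]);   /* dotted-quad address */
    sin.sin_port = htons((unsigned short)atoi(argv[2]));

    if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0 ||
        connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("socket/connect");
        return 1;
    }

    gettimeofday(&t0, NULL);
    while (sent < total) {
        n = write(s, buf, sizeof(buf));
        if (n <= 0) {
            perror("write");
            return 1;
        }
        sent += n;
    }
    gettimeofday(&t1, NULL);
    getrusage(RUSAGE_SELF, &ru);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    cpu  = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6 +
           ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
    printf("%.1f MB in %.2f s (%.1f MB/s), %.2f s CPU (%.0f%% of wall)\n",
           sent / 1048576.0, secs, sent / 1048576.0 / secs,
           cpu, 100.0 * cpu / secs);
    close(s);
    return 0;
}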

					-Matt