*BSD News Article 67830



Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!news.mira.net.au!inquo!hookup!uwm.edu!vixen.cso.uiuc.edu!newsfeed.internetmci.com!in1.uu.net!news.artisoft.com!usenet
From: Terry Lambert <terry@lambert.org>
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: FreeBSD as a router
Date: Mon, 06 May 1996 14:14:25 -0700
Organization: Me
Lines: 80
Message-ID: <318E6BB1.6A71C39B@lambert.org>
References: <4lfm8j$kn3@nuscc.nus.sg> <317CAABE.7DE14518@FreeBSD.org> <4lt098$erq@itchy.serv.net> <Pine.SUN.3.90.960427140735.3161C-100000@tulip.cs.odu.edu> <4mj7f2$mno@news.clinet.fi>
NNTP-Posting-Host: hecate.artisoft.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 2.01 (X11; I; Linux 1.1.76 i486)

Mika.J.H.Tuupola wrote:
] 
] Jamie Bowden (bowden@cs.odu.edu) wrote:
] 
] : The classic failing of Unix boxes as a router is that the
] : max throughput is about 2Mbit... it's a limit of the OS... this
] : is not just a FreeBSD problem.
] 
]         But since Unix boxes are probably not used as routers by
]         universities or big companies, but by smaller user groups
]         such as small businesses or hobbyists, the throughput of
]         2Mbit should be enough.
] 
]         BTW, the big companies _do_ have the money to buy their
]         systems from Cisco :)

The University of Utah (the 4th site on the ARPANET) used RS/6000
hardware to route the T3 to T1 fanout for the WestNet/Denver NSF
link.

They found that the RS/6000 could handle traffic that the dedicated
routing hardware of the day was incapable of handling... that gear
is all built for the median traffic range, where people actually
buy hardware for commercial use, and T3 wasn't very widespread at
the time.


NetRail uses a P166 FreeBSD box for routing to MAE-East, one of
the two main US NAPs.


The biggest issues in routing are:

1)	Acceptable latency
2)	A buffer pool large enough to cover the pool retention time
	of any given packet (which is related to *expected* latency);
	a rough sizing sketch follows the list.
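
One way to read item 2 is as a bandwidth-delay product calculation:
the pool has to hold everything that can be in flight for the
expected latency.  A minimal C sketch of that arithmetic, with
purely illustrative numbers (a T3 link, 30ms expected latency,
2KB buffers), looks like this:

	#include <stdio.h>

	int
	main(void)
	{
		double	link_Mbps = 45.0;	/* T3, roughly (illustrative) */
		double	expect_latency_s = 0.030; /* assumed expected latency */
		int	buf_bytes = 2048;	/* assumed per-buffer size */

		/* Bytes the pool must hold while packets are retained. */
		double	pool_bytes = link_Mbps * 1e6 / 8 * expect_latency_s;

		printf("pool: %.0f bytes (~%.0f buffers of %d bytes)\n",
		    pool_bytes, pool_bytes / buf_bytes, buf_bytes);
		return 0;
	}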

Real computer networks are built with sliding windows, not on
request/response, like MS or Novell networks.

For any given packet run of N packets, the latency over the run
is one packet in (the initial packet) and one packet out (the
final response).  For large packet runs, this means that the
latency is pretty much irrelevant... it's the ability to actually
push that many packets to keep the pipes full, not the amount of
pool retention, that matters.  In other words, it's an average
latency of 1_packet_latency/N, *not* 1_packet_latency*N.
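
To make the amortization concrete, here is a toy calculation in C
with made-up numbers (1ms one-way latency, a run of 1000 packets);
it just restates the per-run amortization point, it is not a model
of any particular stack:

	#include <stdio.h>

	int
	main(void)
	{
		double	one_way_ms = 1.0;	/* assumed one-way latency */
		int	n = 1000;		/* packets in the run */

		/* Sliding window: latency is paid once for the first
		 * packet in and once for the final response out; the
		 * rest of the run streams.
		 */
		double	windowed_ms = (2.0 * one_way_ms) / n;

		/* Request/response: a full round trip per packet. */
		double	reqresp_ms = 2.0 * one_way_ms;

		printf("windowed: %.4f ms/packet\n", windowed_ms);
		printf("request/response: %.4f ms/packet\n", reqresp_ms);
		return 0;
	}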

A 33MHz, 32 bit PCI bus can handle a burst transfer rate of 132MB/s
(Mega*bytes*).  The sustained transfer rate, while lower, is enough
for six 100Mb/s (Mega*bit*) interfaces... the burst rate works out
to about 1Gb/s for 32 bit PCI.  (Read the PCI spec; don't use
non-PCI cards in the machine, and don't buy shared interrupt boards
from the Intel OEM products division; buy the discrete interrupt
ones from their server products division.)
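
The bus arithmetic, as a quick C sanity check (the sustained figure
is just the claim above restated, not a measurement):

	#include <stdio.h>

	int
	main(void)
	{
		/* 32 bit PCI at 33MHz moves 4 bytes per clock in burst. */
		double	burst_MBps = 33.0e6 * 4 / 1.0e6;	/* 132 MB/s */
		double	burst_Gbps = burst_MBps * 8 / 1000.0;	/* ~1.06 Gb/s */

		/* Six 100Mb/s interfaces need 600Mb/s = 75 MB/s, well
		 * under the 132 MB/s burst figure.
		 */
		double	six_100Mb_MBps = 6 * 100.0 / 8;

		printf("32 bit burst: %.0f MB/s (%.2f Gb/s)\n",
		    burst_MBps, burst_Gbps);
		printf("six 100Mb/s ports need: %.0f MB/s\n", six_100Mb_MBps);
		return 0;
	}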

64 Bit PCI is still caught in the standardization process, but
will be double that -- sufficient for a 1Gb -> 100Mb fanout
interconnect.
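
For scale, an assumed fanout of one 1Gb/s uplink into ten 100Mb/s
ports (the port count is my assumption, not stated above) works out
as follows:

	#include <stdio.h>

	int
	main(void)
	{
		/* Assumed fanout: one 1Gb/s uplink, ten 100Mb/s ports. */
		double	aggregate_Mbps = 1000.0 + 10 * 100.0;	/* 2000 Mb/s */
		double	aggregate_MBps = aggregate_Mbps / 8;	/* 250 MB/s  */
		double	pci64_MBps = 2 * 132.0;			/* 264 MB/s  */

		printf("fanout needs %.0f MB/s; 64 bit PCI bursts %.0f MB/s\n",
		    aggregate_MBps, pci64_MBps);
		return 0;
	}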

Latency is irrelevant as long as average overhead remains low.
Only if people are *stupid* and use MS or Novell (or other
request/response) networking protocols over these links will
packet latency ever become apparent.


NB: This does not mean that I prefer the current server-based
connection model, only that it can be implemented as easily on
*good* PC hardware as on dedicated routing hardware, which is only
now reaching into the high end as the high end becomes more
economically viable -- a space that was handled by high end general
purpose hardware before the router companies ever got into the act.


					Regards,
                                        Terry Lambert
                                        terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.