*BSD News Article 86464


Return to BSD News archive

Path: euryale.cc.adfa.oz.au!newshost.carno.net.au!harbinger.cc.monash.edu.au!munnari.OZ.AU!news.ecn.uoknor.edu!feed1.news.erols.com!insync!uunet!in3.uu.net!192.220.251.22!netnews.nwnet.net!Symiserver2.symantec.com!news
From: tedm@agora.rdrop.com
Newsgroups: comp.unix.bsd.misc,comp.unix.bsd.bsdi.misc,comp.unix.bsd.freebsd.misc,comp.unix.bsd.netbsd.misc
Subject: Re: Running several networking cards in one system?
Date: 13 Jan 1997 10:22:40 GMT
Organization: Symantec Corp.
Lines: 52
Message-ID: <5bd2dg$rkf@Symiserver2.symantec.com>
References: <6OBfLaMbNgB@me-tech.pfm-mainz.de> <6OhJND_6NgB@me-tech.PFM-Mainz.de>
Reply-To: tedm@agora.rdrop.com
NNTP-Posting-Host: shiva2.central.com
X-Newsreader: IBM NewsReader/2 v1.2.5
Xref: euryale.cc.adfa.oz.au comp.unix.bsd.misc:1887 comp.unix.bsd.bsdi.misc:5544 comp.unix.bsd.freebsd.misc:33805 comp.unix.bsd.netbsd.misc:5112

In <6OhJND_6NgB@me-tech.PFM-Mainz.de>, mschmidt@me-tech.PFM-Mainz.de (Michael Schmidt) writes:
>  In article  <5b6eop$o0h@uriah.heep.sax.de>
>              j@uriah.heep.sax.de   (J Wunsch)
>  wrote:
>
>
>It may look pointless to you as it seems that you missed the point. In the  
>mentioned setup each NIC has to have a separate wire/cable (goes without  

If each NIC in the server has a separate wire/cable, what you're talking about is
good old-fashioned routing.  If these separate wires are all plugged into
each other, there is no point under 10BaseT to having multiple NICs, since they
are all one Ethernet, and a single 10BaseT card can easily be driven by a Pentium
to saturate the 10BaseT segment.  What is so complicated about all of this?

All this load-balancing nonsense arose from the marketing departments of
companies producing 100BaseT hubs, as a way of selling more hubs.  There are
a very few hubs out there that can take multiple connections from a server,
and make it appear as though it is a single faster connection from the server
to the hub.  The drawbacks are that the hub has to be a switching hub, and
you also have to run software on the server to do this.

However, these "load balancing with a switch" schemes can in essence be
reduced to a network of separate Ethernet segments, routed within the
switch by MAC address.  This is no different from implementing the same
thing with standard hubs and multiple servers configured as routers, and
just adding multiple NICs on different segments to the server you want
to be well-connected.

The thing that drives these "load balanced" schemes is the sucky IPX protocol,
because by default IPX broadcasts are forwarded.  This makes for an
administrative nightmare when you are attempting to build a network "fabric"
within a building: because of the nature of broadcast traffic, as you continue
to add network segments you can get into situations where you lose broadcasts
from remote servers on a segment, and then you run into clients connecting to
servers through non-optimal routes.  Thus, the IPX crowd hates to deal with
multiple routers, they often don't understand IPX routing to begin with, and
until recently NetWare servers offered insufficient control over IPX route
advertisement.  This, plus the use of the NetBEUI protocol, has really driven
the sale of expensive switching hubs intended to replace routed networks that
were built improperly to begin with.

If you use a decent protocol like TCP/IP, and interconnect your segments
through a network of servers routing for each other, even a very simple
routing protocol like RIP is often sufficient.  In this way you can approach
the advantages of a switching-hub network without the cost of a large switch:
you simply add multiple NICs to whatever server you want to be well-connected,
and spread your clients out over the network.  You don't need any fancy
load-balancing nonsense here, because your routing is taking the place of that.
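As a rough sketch, on a FreeBSD box with two NICs this is a handful of
commands (the interface names and addresses below are just examples, not
anything from the original setup -- substitute your own):

```shell
# Give the server one address on each segment
# (ed0/ed1 and the subnets here are illustrative)
ifconfig ed0 inet 192.168.1.1 netmask 255.255.255.0
ifconfig ed1 inet 192.168.2.1 netmask 255.255.255.0

# Turn on IP forwarding so the kernel routes packets
# between the two segments
sysctl -w net.inet.ip.forwarding=1

# Run routed in supplier mode so it announces both
# attached segments via RIP to any other routers
routed -s
```

Clients on either segment just point their default route at the server's
address on their own wire, and RIP takes care of the rest.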