*BSD News Article 69098


Path: euryale.cc.adfa.oz.au!newshost.anu.edu.au!harbinger.cc.monash.edu.au!news.rmit.EDU.AU!news.unimelb.EDU.AU!cs.mu.OZ.AU!munnari.OZ.AU!news.ecn.uoknor.edu!solace!nntp.uio.no!news.cais.net!bofh.dot!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!news.artisoft.com!usenet
From: Terry Lambert <terry@lambert.org>
Newsgroups: comp.unix.bsd.freebsd.misc
Subject: Re: FreeBSD as a router
Date: Tue, 21 May 1996 22:58:50 -0700
Organization: Me
Lines: 95
Message-ID: <31A2AD1A.13FED5B6@lambert.org>
References: <4lfm8j$kn3@nuscc.nus.sg> <317CAABE.7DE14518@FreeBSD.org> <4lt098$erq@itchy.serv.net> <Pine.SUN.3.90.960427140735.3161C-100000@tulip.cs.odu.edu> <4mj7f2$mno@news.clinet.fi> <318E6BB1.6A71C39B@lambert.org> <4mtfsg$14l8@serra.unipi.it> <319407B4.32F4B8B6@lambert.org> <4nc6v9$jib@news.siemens.at> <319BD085.3EE5FAF9@lambert.org> <4ns90e$c67@news.siemens.at>
NNTP-Posting-Host: hecate.artisoft.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 2.01 (X11; I; Linux 1.1.76 i486)

Ingo Molnar wrote:
] trying to put together some factoids:
] 
] i assume that both boards can be acked separately. (by acking
] the IRQ on the board itself, so it releases the line).

Ah.  This is a significant factoid.  8-).

My problem was the potential for board 1 to assert a new IRQ
before the processing for board 1 was complete, and for board 2
to think it had been processed and dump its data before you
got to it.  Both are bad things.

Don't you need an APIC on the card to run "virtual wire" mode
like this?

I don't know (since I'm coming at it from the system side of
things) whether the PCI interface chip used on most cards
supports this or not... I guess from your statement it does.
8-).


This may make it more difficult to share drivers between PCI
and non-PCI cards.  I guess you could argue "insufficient
abstraction" in that case; I'd probably agree with you on
general principles.

[ ... ]

] shared irq handler does a second pass, to detect interrupts
] received during the first pass.

This makes me queasy.  8-(.

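For the sake of concreteness, here is roughly how I picture the
two-pass scheme.  This is only a sketch -- board_pending(),
board_service() and board_ack() are names I made up for
illustration, not code from any real driver, and the "boards"
here are just simulated in memory:

    #include <stdio.h>

    #define NBOARDS 3

    struct board {
        int pending;        /* simulated "interrupt pending" status bit */
        int serviced;       /* how many times we have serviced it */
    };

    static int  board_pending(struct board *b) { return b->pending; }
    static void board_service(struct board *b) { b->serviced++; }
    static void board_ack(struct board *b)     { b->pending = 0; }  /* board releases the line */

    /* One shared-IRQ invocation: a first pass over all boards, then a
     * second pass to catch interrupts that arrived during the first. */
    static void shared_irq_handler(struct board boards[], int n)
    {
        int pass, found;

        for (pass = 0; pass < 2; pass++) {
            found = 0;
            for (int i = 0; i < n; i++) {       /* poll order == priority */
                if (board_pending(&boards[i])) {
                    board_service(&boards[i]);
                    board_ack(&boards[i]);      /* ack this board only */
                    found = 1;
                }
            }
            if (!found)
                break;                          /* nothing new showed up */
        }
    }

    int main(void)
    {
        struct board boards[NBOARDS] = { {1, 0}, {0, 0}, {1, 0} };

        shared_irq_handler(boards, NBOARDS);
        for (int i = 0; i < NBOARDS; i++)
            printf("board %d serviced %d time(s)\n", i, boards[i].serviced);
        return 0;
    }

The poll order is also where the "first card detected gets
serviced first" prioritization comes from, and the window between
the last poll and returning from the handler is where I'd expect
the remaining races to hide.
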

[ ... ]

] this isn't exactly what should be done; there might be some races left.

Yes, I think so.

] : ] Internal serialization is a clear loss. Interrupt latencies can
] : ] be up to several tens of usecs. Checking for a board is a few PCI
] : ] cycles, much faster.
] :
] : You are serializing latency by delaying the first board
] : generating its next interrupt until you have processed the
] : interrupt for the second board.  This is more harmful than
] : having to handle two interrupts.
] 
] no, i ack the first board after i've processed it.

Yes, I see how this would work on boards that could be acked
separately from the bus IACK.

] (btw, i think network cards are not halted until the IRQ is
] acked; the 3C509 has a status stack for example, if i
] remember right. other boards might differ [and surely they
] do, judging from your comment :) ])

I was thinking of two BusLogic PCI SCSI cards -- many Intel
motherboards *really* hate it when you put in more than one.

] You are right that shared IRQs don't scale well with the number of
] cards. But i would say that up to 3-5 cards on one interrupt, in a
] busy system, there is lower overhead than in a system with 3-5
] separate interrupts.
] 
] And separate IRQs are right if you want to prioritize your cards,
] since in a shared IRQ system the first card detected will be
] serviced first.
] 
] True, the name of the game is concurrency. The shared IRQ system is
] event driven too, with an overhead of (1-2 * N) bus cycles. (this is
] a definite overhead). But this event driven system turns out to
] "clusterise" interrupts when the system gets busy.  Which is the
] right thing IMHO ...


Yep.  Assuming separate acks, it all holds together, just like
you've described it.

] [ i'm talking about putting 3 $50 cards into the box, instead of
]   using a one-irq, internal-sharing, $500 thingie ]

I wasn't thinking particularly of internal PCI-PCI bridging;
I know that Zynx and Cogent PCI quad 100bt boards do this,
but it wasn't an issue for me -- I don't think it's the common
case (yet, anyway).

                                        Terry Lambert
                                        terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.