> it's interesting
> to do a line count on some of the linux drivers. they tend toward
> many tens of kloc.
This is just history repeating itself. When I first encountered Unix
(6th edition) after working in the IBM mainframe world, my reaction
was "Amazing! the whole Unix kernel has fewer line
> > Has anyone done up a driver for an infiniband interface?
>
> let's not go there.
>
> we looked at it in 2005 and the IB stack is so awful that it's not
> something anyone wants to touch. It works on Linux, works on few other
> systems, and it's pretty much a headache.
>
> There's a reason that s
i tried a pair of (i think) ultrastor 34f (VESA Local Bus!) connected
back to back with Fast/Wide/Indifferent SCSI (one was host, the other
was target) between cpu server and file server, and it was fine while
it lasted.  when something went wrong, the target's bus would hang.
in fact, it also wasn'
On Wed, Jul 6, 2011 at 12:08 PM, Lyndon Nerenberg (VE6BBM/VE7TFX)
wrote:
>> This seems to come up with every new generation of a new bus, such as
>> pci or lately pcie. It has a lot of limits and, eventually, people
>> just go with a fast network. For one thing, it doesn't grow that big
>> and, for another, error handling can be interesting.
> This seems to come up with every new generation of a new bus, such as
> pci or lately pcie. It has a lot of limits and, eventually, people
> just go with a fast network. For one thing, it doesn't grow that big
> and, for another, error handling can be interesting.
Has anyone done up a driver for an infiniband interface?
On 7/6/2011 4:06 AM, Steve Simon wrote:
> Any of the HPC guys who read this list know of anyone using
> pcie with a non-transparent bridge to send data between hosts
> as a very fast, very local network?
> I seem to remember IBM did something like this with back-to-back DMAs in
> RS6000 in the early 1990s, but does anyone do it now? or do we feel that
> UDP over 40gE is fast enough for anything anyone needs (at present)?
> http://www.epn-online.com/page/new59551/pcie-switch-devices.html and
> http://www.plxtech.com/products/expresslane/switches
This seems to come up with every new generation of a new bus, such as
pci or lately pcie. It has a lot of limits and, eventually, people
just go with a fast network. For one thing, it doesn't grow that big
and, for another, error handling can be interesting.
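Not production code, but as a rough illustration of what "pcie with a
non-transparent bridge as a very local network" boils down to in practice,
here is a minimal user-space sketch for Linux. It assumes the bridge has
already been configured (by firmware or an NTB driver) so that one of its
BARs is translated onto a buffer in the peer host's memory; the
bus/device/function, BAR number, and window size are made-up placeholders.

/*
 * sketch: push data through a non-transparent bridge window from user space.
 * assumes the NTB is already set up so that BAR2 of this PCI function is
 * translated onto a buffer in the peer host's memory.  the device path,
 * BAR number and window size are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int
main(void)
{
	const char *bar = "/sys/bus/pci/devices/0000:03:00.1/resource2";	/* placeholder BDF */
	size_t winsz = 1<<20;	/* assume a 1 MB translated window */
	char msg[] = "hello across the bridge";
	int fd;
	void *win;

	fd = open(bar, O_RDWR);
	if(fd < 0){
		perror("open");
		return 1;
	}
	/* map the BAR; stores into this mapping cross the bridge into peer memory */
	win = mmap(NULL, winsz, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
	if(win == MAP_FAILED){
		perror("mmap");
		close(fd);
		return 1;
	}
	memcpy(win, msg, sizeof msg);	/* PIO copy; real setups use the bridge's DMA engines */
	munmap(win, winsz);
	close(fd);
	return 0;
}

A real deployment would use the bridge's DMA engines plus its doorbell and
scratchpad registers for notification and flow control instead of a PIO
memcpy, and would have to worry about caching attributes and write ordering
on the mapped window; that is roughly where the "error handling can be
interesting" part starts.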
> I seem to remember IBM did something like this with back-to-back DMAs in
> RS6000 in the early 1990s, but does anyone do it now? or do we feel that
> UDP over 40gE is fast enough for anything anyone needs (at present)?
40gbe has more bandwidth than an 8-lane pcie 2.0 slot, but obviously bandwidth
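For reference, the back-of-envelope numbers behind that comparison, assuming
PCIe 2.0 at 5 GT/s per lane with 8b/10b encoding (raw link rates, ignoring
protocol overhead on both sides):

	pcie 2.0 x8:  8 lanes * 5 GT/s * 8/10 = 32 Gbit/s  (~4 GB/s per direction)
	40gbe:                                  40 Gbit/s  (~5 GB/s per direction)

Both lose a little more to packet and TLP headers in practice.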
> I seem to remember IBM did something like this with back-to-back DMAs in
> RS6000 in the early 1990s
... and before that (1970s) you could join 360 and 370 mainframes
into what we would nowadays call a "cluster" by splicing I/O channels
together with a CTC (channel-to-channel adapter).
That doe