Micha <[EMAIL PROTECTED]> writes:

> I'm looking to set up a cluster for our uni lab and am debating
> at the moment, for the internal network between the cluster
> machines (4 machines which are supposed to be all connected to
> each other), whether to use one dual port PCI-e ethernet card +
> one single port PCI-e card or the onboard card, vs. using three
> single port PCI-e cards and the onboard card as an external
> internet connection (they will serve a dual purpose as cluster
> machines and workstations).
>
> Besides the obvious issue of leaving one (first, third options)
> or two (second option) PCI-e slots free, can anyone tell me if
> there are any other considerations that I'm not aware of?
> (Mainly, are there differences in bus/CPU/speed overheads?)

Difficult to say without knowing the details of your workload. I
assume the NICs are "dumb" commodity ones, i.e., no offloading
engines, virtualization support, etc. (I didn't check). I also assume
there are no power budget problems with either option. It looks like
you intend to make a full 4-node crossbar without switches, plus each
machine will have an extra port (or NIC) to connect to the rest of the
Universe, and you don't expect to grow. Right?
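
For concreteness: a switchless full mesh like that usually means a
separate point-to-point subnet per link, so each node ends up with
three internal addresses plus the external one. A minimal sketch for
node 1 (the interface names and 10.0.x.y addressing are made up,
adjust to your layout):

    # node 1: one small subnet per direct link to each peer
    ifconfig eth1 10.0.12.1 netmask 255.255.255.252   # link to node 2
    ifconfig eth2 10.0.13.1 netmask 255.255.255.252   # link to node 3
    ifconfig eth3 10.0.14.1 netmask 255.255.255.252   # link to node 4
    # eth0 (onboard) keeps the lab/internet address

With that layout no routing is needed inside the cluster; every pair
of nodes talks over its own dedicated link.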

If your network load in the cluster is not going to stress the HW and
you don't have serious HA requirements, I am guessing you won't feel a
difference between a dual port NIC and two single port ones. If you
are going to put a serious stress on the setup, there are various
questions you can ask.

* Are your machines pressed for slots or likely to be slot-deficient
  in the future? Judging from your post, no, but are you likely to
  consider adding FC HBAs later or anything of the kind?

* You are not going to trunk ports, are you? If you do, a single
  dual-port NIC will probably be an advantage (see the bonding
  sketch after this list). ;-)

* Are there any redundancy considerations (a single NIC being a single
  point of failure)? Again, my guess from your post is there are none,
  but...

* What packet rates / interrupt rates are you expecting? Will it be
  better or worse for your CPU to deal with one NIC or two NICs? Where
  do you expect the bottleneck, if any, to be?

* Are the machines multi-core? Will there be any advantage for your
  application to map the cores to different NICs (see the IRQ
  affinity sketch after the list)?

* Will the network load be symmetric across the cluster? If some links
  are between two single port NICs, others between a single port NIC
  and a port of a dual port one, and yet others between two dual port
  NICs, the difference might affect your application, of which I know
  nothing, of course.

* Are you going to run multiple virtual machines? If yes, do you
  expect it to be advantageous to map each VM to a different NIC,
  maybe even bypass the hypervisor? Are you going to move the VMs
  around? If so, 3 NICs might not be enough for bliss.
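
(Re the trunking question above: on Linux that would be the bonding
driver. A rough back-to-back sketch, assuming the two ports show up
as eth1 and eth2 and the peer is configured the same way; the address
and parameters are illustrative only:)

    # aggregate two ports into one logical link; round-robin works
    # without a managed switch on a direct back-to-back connection
    modprobe bonding mode=balance-rr miimon=100
    ifconfig bond0 10.0.0.1 netmask 255.255.255.252 up
    ifenslave bond0 eth1 eth2        # enslave both physical ports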

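(Similarly, re interrupt rates and mapping cores to NICs: the crude
way to see and steer where the interrupt load goes is /proc. The IRQ
numbers below are made up, check /proc/interrupts on your boxes:)

    grep eth /proc/interrupts            # which IRQ belongs to which NIC
    echo 1 > /proc/irq/24/smp_affinity   # pin eth1's IRQ to CPU0 (bitmask)
    echo 2 > /proc/irq/25/smp_affinity   # pin eth2's IRQ to CPU1
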
These are just some things I could think of off the top of my head. My
first guess is that you don't care about any of these things enough to
feel the difference. Had you cared, you would have mentioned them in your
post, and you wouldn't consider multi-tasking the machines as
workstations. I may be totally wrong.

Bottom line, these things depend on the workload.

-- 
Oleg Goldshmidt | [EMAIL PROTECTED]
