On Thu, 11 Sep 2008 01:38:30 +0300 Oleg Goldshmidt <[EMAIL PROTECTED]> wrote:
> Micha <[EMAIL PROTECTED]> writes:
> > I'm looking to set up a cluster for our uni lab and am debating at
> > the moment, for the internal network between the cluster machines (4
> > machines which are supposed to be all connected to each other),
> > whether to use one dual-port PCI-E ethernet card + one single-port
> > PCI-E or the onboard card, vs. using three single-port PCI-E cards
> > and the onboard card as an external internet connection (they will
> > serve a dual purpose as cluster machines and workstations).
> >
> > Besides the obvious issue of leaving one (first, third options) or
> > two (second option) PCI-E slots free, can anyone tell if there are
> > any other considerations that I'm not aware of? (Mainly, are there
> > differences in bus/CPU/speed overheads?)
>
> Difficult to say without knowing the details of your workload. I
> assume the NICs are "dumb" commodity ones, i.e., no offloading
> engines, virtualization support, etc. (I didn't check). I also assume
> there are no power budget problems with either option. It looks like
> you intend to make a full 4-node crossbar without switches, plus each

That is right.

> machine will have an extra port (or NIC) to connect to the rest of the
> Universe, and you don't expect to grow. Right?

Considering that the next purchase will most probably come after I finish
my PhD (hopefully ;-), at least two years away, that is my educated guess.
Since these are consumer machines and not industrial boards, acquiring
another machine with the same spec will probably be nearly impossible by
then, and building a non-homogeneous cluster usually spells trouble. Also,
whichever option I take, there is still room to add at least one extra
machine to the setup.

> If your network load in the cluster is not going to stress the HW and
> you don't have serious HA requirements, I am guessing you won't feel a
> difference between a dual port NIC and two single port ones. If you
> are going to put a serious stress on the setup, there are various
> questions you can ask.

The idea is to run MPI and hopefully find some solution or other to
parallelize Matlab (I know it's problematic; I'd be happy for pointers
there too). I'm thus expecting mostly bursty communication, although with
large amounts of data and most probably to one machine at a time.
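To be concrete, the pattern I have in mind is roughly the following: each
worker grinds through its chunk for a long time and then ships the whole
result to one master node in a single burst. This is only a sketch; the
chunk size, the tag, and the "work" loop are made up for illustration
(build with mpicc, run with mpirun).

/* Sketch of the expected traffic pattern: long compute phases, then one
 * large transfer per worker to rank 0.  Sizes/tags are illustrative only. */
#include <mpi.h>
#include <stdlib.h>

#define CHUNK_DOUBLES (1 << 20)   /* ~8 MB per burst, purely illustrative */
#define RESULT_TAG    42

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *buf = malloc(CHUNK_DOUBLES * sizeof(double));

    if (rank == 0) {
        /* Master: collect one finished chunk at a time, from whoever is done. */
        MPI_Status st;
        for (int remaining = size - 1; remaining > 0; remaining--) {
            MPI_Recv(buf, CHUNK_DOUBLES, MPI_DOUBLE, MPI_ANY_SOURCE,
                     RESULT_TAG, MPI_COMM_WORLD, &st);
            /* ... merge the chunk from st.MPI_SOURCE into the global result ... */
        }
    } else {
        /* Worker: long compute phase, then one big send. */
        for (long i = 0; i < CHUNK_DOUBLES; i++)
            buf[i] = rank + 0.001 * i;        /* stand-in for real work */
        MPI_Send(buf, CHUNK_DOUBLES, MPI_DOUBLE, 0, RESULT_TAG,
                 MPI_COMM_WORLD);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

So each link should see long idle periods punctuated by a single
multi-megabyte transfer rather than sustained load.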
> * Are your machines pressed for slots or likely to be slot-deficient
> in the future? Judging from your post, no, but are you likely to
> consider adding FC HBAs later or anything of the kind?

The currently suggested board has three PCI-E slots (plus another three
plain PCI slots, but sticking to the PCI-E slots for the cluster NICs is
probably better). This is in addition to the video slot.

> * You are not going to trunk ports, are you? If you do, a single
> dual-port NIC will probably be an advantage. ;-)

Not expecting to.

> * Are there any redundancy considerations (a single NIC being a single
> point of failure)? Again, my guess from your post is there are none,
> but...

Since the machines will be dual-purpose, also serving as workstations,
that may be an issue, but if so I'm expecting full node failure rather
than NIC failure. As long as it doesn't happen often, redundancy is not an
issue, since a job can be rerun in case of trouble (daily failures would
be a problem, but I can live with losing a job a couple of times a week).

> * What packet rates / interrupt rates are you expecting? Will it be
> better or worse for your CPU to deal with one NIC or two NICs? Where
> do you expect the bottleneck, if any, to be?

I don't have enough experience with clusters to tell. Part of the idea is
to use this pet project to learn.

> * Are the machines multi-core? Will there be any advantage for your
> application to map the cores to different NICs?

Quad-core machines (2.85 GHz, if I recall correctly).

> * Will the network load be symmetric across the cluster? If some links
> are between single port NICs, others are between a single port NIC
> and a port of a dual port one, and yet others will connect dual port
> NICs, the difference might affect your application, of which I know
> nothing, of course.

I'm guessing that most of the time it will be asymmetric; forcing it to be
symmetric will probably be difficult. The traffic will come from the MPI
side of the application, so transfers will mostly happen when a chunk
finishes its work.

> * Are you going to run multiple virtual machines? If yes, do you
> expect it to be advantageous to map each VM to a different NIC,
> maybe even bypass the hypervisor? Are you going to move the VMs
> around? If so, 3 NICs might not be enough for bliss.

Not expecting to run virtual machines at all. Maybe once in a while for
development of some small stuff that needs Windows (we are playing around
with programming simple computer vision algorithms on a Symbian Nokia N95
for student workshops, and I'm not aware of a Linux-based C++ compiler for
Symbian, so ...).

> These are just some things I could think of off the top of my head. My
> first guess is that you don't care about any of these things enough to
> feel the difference. Had you cared, you would have mentioned them in your
> post, and you wouldn't consider multi-tasking the machines as
> workstations. I may be totally wrong.
>
> Bottom line, these things depend on the workload.

I thought about most of these. The main thing is that it's been a few
years since I worked with networks at the hardware level. Regrettably I've
been wasting some time on camera driver bugs lately, but these days I try
to stick to things much farther up the stack (image processing
algorithms). The problem is that I'm not really knowledgeable about the
hardware differences.

A single card does mean fewer interrupt sources, but since these machines
are not listening on a hub and I don't expect high traffic most of the
time, just (hopefully) high-speed bursts, I don't think that should be too
much of an issue. I don't plan on bonding, so I probably won't gain
anything there. On the other hand, these are multi-core machines, so
separate NICs could be mapped to different cores; I don't expect that to
be much of a gain either (see the P.S. below).

So it looks like it boils down to the number of available slots and
whether we expect to acquire PCI-E cameras or other PCI-E equipment before
these machines retire. There are FireWire and eSATA on board, and we don't
work with high-speed stereo cameras, so I don't expect that to be an
issue. The original board I was thinking of (Asus P5K) had only one PCI-E
slot, so there were no options; the one the store is recommending (Intel
Dragontail Peak) has three, so there is more room for play.

Thanks. Unless there is some other input, I think I will go with three
single-port NICs. That way I won't have to worry for now about how stable
the onboard chip is, I keep things symmetric, and I save a couple of
hundred NIS per machine.
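P.S. Regarding mapping NICs to cores, in case I ever revisit it: as far as
I understand, it would mean steering each NIC's interrupt to a given core
by writing a CPU mask to /proc/irq/<N>/smp_affinity, and pinning the MPI
process that talks over that link to the same core. A minimal sketch of
the pinning side (the core number here is just an illustrative choice, not
a recommendation):

/* Pin the calling process to one core, so it stays on the core whose
 * IRQ affinity was set for "its" NIC.  The IRQ steering itself is done
 * separately via /proc/irq/<N>/smp_affinity. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

int main(void)
{
    return pin_to_core(2) == 0 ? 0 : 1;   /* e.g., bind to core 2 */
}

Whether this buys anything with traffic that is only bursty is exactly
what I'm not sure about, so I probably won't bother with it initially.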