[O-MPI users] MacResearch.org announces iPod giveaway contest

2006-02-10 Thread Joel Dudley
Help MacResearch.org expand its Script Repository and you could win a black 2GB iPod Nano. Eligible contestants must submit a research-oriented script that can run natively (no emulators) on Mac OS X 10.3 or higher without modification before the contest end date. Scripts for all scientific

[O-MPI users] Anyone has build (used) OpenMPI with BLCR??

2006-02-10 Thread Alexandre Carissimi
Hi; I'm trying to use BLCR to checkpoint OpenMPI applications but I'm having lots of problems. To begin, I'm not sure that openmpi recognizes blcr. I compiled open mpi with the --with options like I used to do with lam versions. The ompi_info doesn't seem to show blcr support. Any hints? Som

[O-MPI users] Strange errors when using open-mpi

2006-02-10 Thread Berend van Wachem
Hi, I have always used MPICH for my MPI projects, but changed to open-mpi for its better integration with eclipse. First of all, I got an error when using gcc 4.x when compiling the code, but I think this was discussed earlier on the mailing list. I downgraded gcc and have successfully compil

Re: [O-MPI users] MacResearch.org announces iPod giveaway contest

2006-02-10 Thread Jeff Squyres
This list is for the discussion of Open MPI. Please do not use it as a mechanism for 3rd party announcements. On Feb 10, 2006, at 2:47 AM, Joel Dudley wrote: Help MacResearch.org expand its Script Repository and you could win a black 2GB iPod Nano. Eligible contestants must submit a researc

Re: [O-MPI users] Anyone has build (used) OpenMPI with BLCR??

2006-02-10 Thread Josh Hursey
Alex, Checkpoint/Restart is not supported in Open MPI at the moment. The integration of the LAM/MPI style of process fault tolerance using a single-process checkpointer (e.g. BLCR) is currently under active development. Unfortunately, I cannot say exactly when you will see it released, but k

Re: [O-MPI users] Anyone has build (used) OpenMPI with BLCR??

2006-02-10 Thread Alexandre Carissimi
Josh; Thanks a lot!! I was afraid of that :) I looked at your documentation but was not sure if it was up to date or not... so I tried to install/compile openmpi in the same way that I used to do with lam... Configure didn't complain about my --with-xyz. I suspected when I saw the out

Re: [O-MPI users] direct openib btl and latency

2006-02-10 Thread Galen M. Shipman
I've been working on the MVAPICH project for around three years. Since this thread is discussing MVAPICH, I thought I should post to this thread. Galen's description of MVAPICH is not accurate. MVAPICH uses RDMA for short messages to deliver performance benefits to the applications. However, i

Re: [OMPI users] [O-MPI users] problem running Migrate with open-MPI

2006-02-10 Thread Andy Vierstraete
Hi Brian and Peter, I tried the nightly build like Brian said, and I was able to compile Migrate without error messages (that was not the case before; like Peter suggested, I had to set openmpi in my path). But it is still not running: now it can't find "libmpi.so.0", and the directory wher

Re: [OMPI users] [O-MPI users] problem running Migrate with open-MPI

2006-02-10 Thread Andy Vierstraete
Hi Brian and Peter, It works with lam-mpi, so probably still something wrong with open-mpi? Greets, Andy avierstr@muscorum:~> lamboot hostfile LAM 7.1.1/MPI 2 C++/ROMIO - Indiana University avierstr@muscorum:~> mpiexec migrate-n migrate-n migrate-n ===

Re: [OMPI users] [O-MPI users] problem running Migrate with open-MPI

2006-02-10 Thread George Bosilca
There are 2 things that have to be done in order to be able to run an Open MPI application. First, the runtime environment needs access to some of the files in the bin directory, so you have to add the Open MPI bin directory to your path. And second, as we use shared libraries, the OS needs to kn
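George's two requirements could look like the following in a login shell. This is a minimal sketch: the install prefix /opt/openmpi is a hypothetical example, not from the thread; substitute wherever Open MPI was actually installed.

```shell
# Hypothetical Open MPI install prefix -- adjust to your actual location.
OMPI_PREFIX=/opt/openmpi

# 1. The runtime tools (mpirun, etc.) must be found on PATH.
export PATH="$OMPI_PREFIX/bin:$PATH"

# 2. The dynamic loader must find the shared libraries (e.g. libmpi.so.0).
export LD_LIBRARY_PATH="$OMPI_PREFIX/lib:$LD_LIBRARY_PATH"

# Sanity check: both directories now lead their respective search paths.
echo "first PATH entry:            ${PATH%%:*}"
echo "first LD_LIBRARY_PATH entry: ${LD_LIBRARY_PATH%%:*}"
```

For jobs launched on remote nodes over rsh/ssh, the same exports generally need to be in a file sourced by non-interactive shells (e.g. ~/.bashrc), since the remote processes never read the interactive login profile; a missing LD_LIBRARY_PATH on a remote node is one common cause of a "cannot find libmpi.so.0" failure like the one reported above.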

[OMPI users] Cannonical ring program and Mac OSX 10.4.4

2006-02-10 Thread James Conway
Brian et al, Original thread was "[O-MPI users] Firewall ports and Mac OS X 10.4.4" On Feb 9, 2006, at 11:26 PM, Brian Barrett wrote: Open MPI uses random port numbers for all its communication. (etc) Thanks for the explanation. I will live with the open Firewall, and look at the ipfw doc

[OMPI users] Bug in OMPI 1.0.1 using MPI_Recv with indexed datatypes

2006-02-10 Thread Yvan Fournier
Hello, I seem to have encountered a bug in Open MPI 1.0 using indexed datatypes with MPI_Recv (which seems to be of the "off by one" sort). I have attached a test case, which is briefly explained below (as well as in the source file). This case should run on two processes. I observed the bug on 2 di

Re: [OMPI users] [O-MPI users] A few benchmarks

2006-02-10 Thread Jeff Squyres
On Feb 6, 2006, at 8:32 PM, Glen Kaukola wrote: Anyway, here are the times on a few runs I did with Open MPI 1.1a1r887. Basically what I'm seeing, my jobs run ok when they're local to one machine, but as soon as they're split up between multiple machines performance can vary: 4 cpu jobs: 2:

Re: [OMPI users] Bug in OMPI 1.0.1 using MPI_Recv with indexed datatypes

2006-02-10 Thread George Bosilca
Yvan, I'm looking into this one. So far I cannot reproduce it with the current version from the trunk. I will look into the stable versions. Until I figure out what's wrong, can you please use the nightly builds to run your test? Once the problem gets fixed it will be included in the 1.0.2

Re: [OMPI users] [O-MPI users] mpirun with make

2006-02-10 Thread Jeff Squyres
On Feb 8, 2006, at 3:29 AM, Andreas Fladischer wrote: I tested this example with hostname before and it worked well: the hostfile contains only 2 lines pc86 pc92 and the user wolf doesn't need a password when linking to the other pc. The user wolf has the same uid and gid on both pcs. I have a

Re: [OMPI users] [O-MPI users] "alltoall" vs "alltoallv"

2006-02-10 Thread George Bosilca
Konstantin, The all2all scheduling works only because we know they will all send the same amount of data, so the communications will take "nearly" the same time. Therefore, we can predict how to schedule the communications to get the best out of the network. But this approach can lead to

Re: [OMPI users] [O-MPI users] A few benchmarks

2006-02-10 Thread Glen Kaukola
Jeff Squyres wrote: On Feb 6, 2006, at 8:32 PM, Glen Kaukola wrote: Anyway, here are the times on a few runs I did with Open MPI 1.1a1r887. Basically what I'm seeing, my jobs run ok when they're local to one machine, but as soon as they're split up between multiple machines performance can