[OMPI users] Accessing OpenMPI processes over Internet using ssh

2011-11-23 Thread Jaison Paul
Hi all, I am trying to access Open MPI processes over the Internet using ssh, and I have not quite succeeded yet. I believe it should be possible. I need to run one process on my PC and the rest on a remote cluster over the Internet. I have set up the public keys (in .ssh/authorized_keys) to access r…
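For reference, the usual setup looks roughly like this (hostnames, slot counts, and the application name below are hypothetical): a hostfile listing the local PC and the remote cluster nodes, plus passwordless ssh to each remote host.

    # hostfile -- one entry per machine (example hostnames)
    localhost slots=1
    cluster.example.org slots=8

    # mpirun starts the remote daemons over ssh
    mpirun --hostfile hostfile -np 9 ./my_mpi_app

Note that ssh only launches the remote daemons; Open MPI then opens direct TCP connections among all processes, so a NAT or firewall between the PC and the cluster can still block the wireup even when the ssh login itself works.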

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-23 Thread Ralph Castain
Yes, that would indeed break things. The 1.5 series isn't correctly checking connections across multiple interfaces until it finds one that works - it just uses the first one it sees. :-( The solution is to specify -mca oob_tcp_if_include ib0. This will direct the run-time wireup across the IP…
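On the mpirun command line that looks like the following (the process count and application name are placeholders):

    mpirun -mca oob_tcp_if_include ib0 -np 16 ./my_app

The oob_tcp_if_include parameter only restricts the run-time wireup; the TCP BTL has an analogous btl_tcp_if_include parameter if the MPI traffic itself should be pinned to the same interface.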

Re: [OMPI users] orte_debugger_select and orte_ess_set_name failed

2011-11-23 Thread MM
Hi Shiqing, Is the info I provided useful for understanding what's going on? Alternatively, is there a way to get the prebuilt Windows binaries off trunk rather than off 1.5.4 as on the website? I don't have this problem when I link against those libs. Thanks, MM -Original Message-…

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-23 Thread TERRY DONTJE
On 11/23/2011 2:02 PM, Paul Kapinos wrote: Hello Ralph, hello all, Two pieces of news, as usual one good and one bad. The good: we believe we have found out *why* it hangs. The bad: it seems to me this is a bug, or at least an undocumented feature, of Open MPI 1.5.x. In detail: As said, we see mysterious hang-ups…

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-23 Thread Paul Kapinos
Hello Ralph, hello all, Two pieces of news, as usual one good and one bad. The good: we believe we have found out *why* it hangs. The bad: it seems to me this is a bug, or at least an undocumented feature, of Open MPI 1.5.x. In detail: As said, we see mysterious hang-ups when starting on some nodes using some permu…
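A quick way to check which interface-selection knobs a given 1.5.x build actually exposes is ompi_info (a diagnostic sketch; the exact parameter list depends on the build):

    ompi_info --param oob tcp | grep if_include
    ompi_info --param btl tcp | grep if_include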

[OMPI users] problem with fortran, MPI_REDUCE and MPI_IN_PLACE

2011-11-23 Thread Arjen van Elteren
Dear All, I'm running a complex program with a number of MPI_REDUCE calls; every call uses MPI_IN_PLACE as the first parameter (the send buffer). I'm currently testing this program on Mac 10.6 with MacPorts installed. Unfortunately, all MPI_REDUCE calls with MPI_IN_PLACE seem to fail. I've pi…
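For context, MPI_IN_PLACE is only valid at the root of the reduction, where it tells MPI to take the root's contribution from the receive buffer. A minimal C equivalent of the pattern (the original report is Fortran; this sketch just illustrates the intended semantics):

    /* In-place reduction at the root: a C sketch of the
       MPI_IN_PLACE usage described in the report. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        value = rank + 1;
        if (rank == 0) {
            /* Root passes MPI_IN_PLACE as sendbuf; its own input
               is read from, and the result stored in, recvbuf. */
            MPI_Reduce(MPI_IN_PLACE, &value, 1, MPI_INT,
                       MPI_SUM, 0, MPI_COMM_WORLD);
            printf("sum = %d\n", value);
        } else {
            /* Non-root ranks: recvbuf is ignored. */
            MPI_Reduce(&value, NULL, 1, MPI_INT,
                       MPI_SUM, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

In Fortran, MPI_IN_PLACE is recognized by comparing buffer addresses against a symbol in the MPI library, so mixing an mpif.h or mpi module from one Open MPI build with the libraries of another is a known way for in-place reductions to silently misbehave.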