Oops - sorry, just saw your subject was for Windows. If you look at the notes
for Windows on the download page, you will see that we no longer support
building natively on Windows. A Cygwin package is available as an option.
On May 30, 2013, at 10:32 AM, Ralph Castain wrote:
> Please see the FAQ:
> http://www.open-mpi.org/faq/?category=building
Please see the FAQ:
http://www.open-mpi.org/faq/?category=building
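For completeness, the standard tarball build that FAQ describes boils down to the usual configure/make/install sequence. A minimal sketch (the version number and install prefix below are illustrative; adjust them to your download):

```shell
# Unpack the release tarball (filename assumed from the 1.6.4 release)
tar xzf openmpi-1.6.4.tar.gz
cd openmpi-1.6.4

# Configure with a user-writable install prefix, then build and install
./configure --prefix=$HOME/opt/openmpi-1.6.4
make -j 4
make install

# Put the new install's binaries on your PATH
export PATH=$HOME/opt/openmpi-1.6.4/bin:$PATH
```

See the FAQ itself for platform-specific configure options.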
On May 30, 2013, at 10:22 AM, "Hodgess, Erin" wrote:
> Hello!
>
> I'm sure that this question has been asked a million times before, but I
> can't find the solution.
>
> I have downloaded Open MPI 1.6.4 and done the tar command.
Victor you might want to take a look at the Open MPI version available from
http://fault-tolerance.org/. It provides additional features to gracefully
handle node failures.
George.
On May 30, 2013, at 17:55 , Victor Vysotskiy
wrote:
> Hi Ralph,
>
>> -mca orte_abort_non_zero_exit 0
>
>
Hello!
I'm sure that this question has been asked a million times before, but I can't
find the solution.
I have downloaded Open MPI 1.6.4 and done the tar command.
Now what do I do, please?
Thanks,
Erin
On May 30, 2013, at 8:55 AM, Victor Vysotskiy
wrote:
> Hi Ralph,
>
>> -mca orte_abort_non_zero_exit 0
>
> Thank you for the hint. That is exactly what I need! BTW, does it help if
> one of the working nodes occasionally dies during the MPMD run?
I'm afraid not - failure of a node is a te
Hi Ralph,
> -mca orte_abort_non_zero_exit 0
Thank you for the hint. That is exactly what I need! BTW, does it help if
one of the working nodes occasionally dies during the MPMD run?
With best regards,
Victor.
There is such an option in the 1.7 series and on the trunk, but I don't see it
in v1.6.
-mca orte_abort_non_zero_exit 0
On May 30, 2013, at 3:40 AM, Victor Vysotskiy
wrote:
> Dear OpenMPI Developers and Users,
>
> I have a general question on signal trapping/handling within mpiexec/mpirun.
Dear OpenMPI Developers and Users,
I have a general question on signal trapping/handling within mpiexec/mpirun.
Let me assume that I have 2 cores and I start two different (independent)
programs, prog1 and prog2, in parallel via the mpirun/mpiexec startup command:
mpiexec -n 1 prog1 : -n 1 prog2
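For reference, this MPMD launch can be combined with the orte_abort_non_zero_exit option discussed in this thread (prog1 and prog2 are placeholder program names; per the thread, the option exists in the 1.7 series but not in v1.6):

```shell
# Launch two independent programs as one MPMD job, and keep the job
# running even if one of them exits with a non-zero status.
mpiexec -mca orte_abort_non_zero_exit 0 -n 1 prog1 : -n 1 prog2
```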
Hi
> I'm a bit confused by your final table:
>
> > local machine               | -host
> >                             | sunpc1 | linpc1 | rs1
> > ----------------------------+--------+--------+------
> > sunpc1 (Solaris 10, x86_64) | ok     | hangs  | hangs
> > linpc1 (Solaris 10, x86_64)