By the way, this is Fortran code, which uses the F77 bindings.
--
Anton Starikov.
On May 12, 2009, at 3:06 AM, Anton Starikov wrote:
Due to rankfile fixes I switched to SVN r21208; now my code dies with the error:
[node037:20519] *** An error occurred in MPI_Comm_dup
[node037:20519] *** on communicator MPI COMMUNICATOR 32 SPLIT FROM 4
[node037:20519] *** MPI_ERR_INTERN: internal error
[node037:20519] *** MPI_ERRORS_ARE_FATAL (you…
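For context, here is a minimal C sketch (not the poster's code, which is Fortran using the F77 bindings) of the call pattern the error message points at: a communicator split from MPI_COMM_WORLD and then duplicated with MPI_Comm_dup. It is illustrative only.

/* Minimal sketch of the pattern behind "MPI COMMUNICATOR ... SPLIT FROM ...":
 * split MPI_COMM_WORLD, then duplicate the resulting communicator.
 * Not the code from the original report. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Comm split_comm, dup_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split the world communicator into two groups (even/odd ranks). */
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &split_comm);

    /* Duplicate the split communicator -- the call that failed with
     * MPI_ERR_INTERN in the reported run. */
    MPI_Comm_dup(split_comm, &dup_comm);

    MPI_Comm_free(&dup_comm);
    MPI_Comm_free(&split_comm);
    MPI_Finalize();
    return 0;
}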
This is fixed as of r21208.
Thanks for reporting it!
Ralph
On May 11, 2009, at 12:51 PM, Anton Starikov wrote:
Although removing this check solves the problem of having more slots in the rankfile than necessary, there is another problem.
If I set rmaps_base_no_oversubscribe=1 then, for example, with the following (see the invocation sketched after the rankfile):
hostfile:
node01
node01
node02
node02
rankfile:
rank 0=node01 slot=1
rank 1=node01 slot=0
rank 2=node02 slot=1
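A sketch of how these files would be combined on the command line (option names assumed from the Open MPI 1.3-series mpirun; the process count, the application name ./app, and the MCA setting are illustrative, not taken from the report):

mpirun -np 3 --hostfile hostfile --rankfile rankfile \
       --mca rmaps_base_no_oversubscribe 1 ./app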
What versions of BLCR and Open MPI are you using?
Have you tried to checkpoint/restart a single (non-MPI) application with BLCR? BLCR ships with some examples, and I would suggest trying to make sure those work before moving on to Open MPI.
Typically this type of failure is the result of BLCR…
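A hedged sketch of that standalone sanity check using BLCR's own tools (the program name my_app and the context file name are placeholders; see the examples shipped with your BLCR install):

cr_run ./my_app &              # start a checkpointable non-MPI process
cr_checkpoint -f ctx.img $!    # checkpoint it by PID into ctx.img
cr_restart ctx.img             # restart the process from the checkpoint file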
It looks like you have a heterogeneous setup -- the error is complaining that the executable you compiled on one machine will not run on the other because the executable format is different.
You'll probably need to have different executables compiled for each node (there are probably other ways…
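One way to run per-node executables is mpirun's colon-separated multi-application (MIMD) syntax; the hostnames and binary names below are placeholders, not from the original report:

mpirun -np 2 --host nodeA ./app_nodeA : -np 2 --host nodeB ./app_nodeB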
This message was cross-posted to devel and answered there.
On May 11, 2009, at 2:12 AM, wrote:
Hello All,
I am trying to build openmpi-1.3.2 with "--without-rte-support". I am getting a bunch of errors. Is this support fully functional or not?
I was trying to reduce the time OMPI takes to…
On May 10, 2009, at 3:23 AM, Katz, Jacob wrote:
I see that MPI 2.1 says about mpiexec that “If the program named in command does not call MPI_INIT, but instead forks a process that calls MPI_INIT, the results are undefined. Implementations may allow this case to work but are not required to.”
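A minimal C sketch (not from the original mail) of the case the standard describes: the program started by mpiexec never calls MPI_INIT itself, but forks a child that does, which the standard leaves undefined.

/* The process launched by mpiexec does not call MPI_Init; instead it
 * forks a child that does.  MPI-2.1 says the results are undefined;
 * implementations may allow it but are not required to. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: the only process that touches MPI. */
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("child rank %d initialized MPI\n", rank);
        MPI_Finalize();
        return 0;
    }

    /* Parent: launched by mpiexec but never calls MPI_Init. */
    waitpid(pid, NULL, 0);
    return 0;
}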
That configure option does work, but you appear to be on a system that has SLURM installed -- yes? Are you planning on running with SLURM?
Building --without-rte-support will remove a lot more than just the allocator and mapper. You have to be on a system like a Cray that has its own launch…
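For reference, the kind of build being discussed would look roughly like this (the prefix path is a placeholder; --without-rte-support is the flag quoted above):

./configure --prefix=/opt/openmpi-1.3.2 --without-rte-support
make all install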
Hello All,
I am trying to build openmpi-1.3.2 with "--without-rte-support". I am getting a bunch of errors. Is this support fully functional or not?
I was trying to reduce the time OMPI takes to load on a homogeneous system by removing the Resource Discovery/Allocation/mapping stuff by giving…