Dear Jeff, thanks for the information.
> Open MPI currently has very limited cartesian support -- it actually
> doesn't remap anything.
I see, Open MPI doesn't remap anything; that probably explains why the
runtime of my simulation sometimes varies by up to 30% for the same setup.
> Would you have any
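
For context, the remapping under discussion is the reorder argument of
MPI_CART_CREATE. A minimal sketch of its use (grid shape and names are
illustrative, not from the thread; assumes a 16-process run):

  program cart_demo
    use mpi
    implicit none
    integer :: ierr, comm_cart, oldrank, newrank
    integer :: dims(2)
    logical :: periods(2)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, oldrank, ierr)

    dims    = (/ 4, 4 /)
    periods = (/ .true., .true. /)

    ! reorder = .true. permits the library to renumber ranks to match
    ! the hardware topology; per the thread, Open MPI currently keeps
    ! the original numbering.
    call MPI_CART_CREATE(MPI_COMM_WORLD, 2, dims, periods, .true., &
                         comm_cart, ierr)
    call MPI_COMM_RANK(comm_cart, newrank, ierr)

    ! with real remapping, newrank could differ from oldrank
    print *, 'world rank', oldrank, '-> cart rank', newrank

    call MPI_FINALIZE(ierr)
  end program cart_demo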
On Jun 29, 2010, at 9:35 PM, 王睿 wrote:
> Thanks for the feedback. More below:
>
> Are there any MPI implementations which meet the following requirements:
>
> 1. It doesn't terminate the whole job when a node dies?
>
> 2. It allows a spare node to replace the dead node and take over its work
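
Standard MPI of this vintage cannot do either of these by itself. As a
hedged aside (not from the thread), the closest standard knob is to
replace the default MPI_ERRORS_ARE_FATAL handler with MPI_ERRORS_RETURN,
which stops failed calls from aborting the whole job, though it does not
provide the node replacement asked about:

  program errhandler_demo
    use mpi
    implicit none
    integer :: ierr, rc

    call MPI_INIT(ierr)
    call MPI_COMM_SET_ERRHANDLER(MPI_COMM_WORLD, MPI_ERRORS_RETURN, ierr)

    ! subsequent MPI calls now report failures via their return code
    call MPI_BARRIER(MPI_COMM_WORLD, rc)
    if (rc /= MPI_SUCCESS) print *, 'barrier failed with code', rc

    call MPI_FINALIZE(ierr)
  end program errhandler_demo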
and it's conceivable that you might have better performance with
>
>   CALL MPI_ISEND()
>   DO I = 1, N
>     call do_a_little_of_my_work() ! no MPI progress is being made here
>     CALL MPI_TEST() ! enough MPI progress is being made here
>                     ! that the receiver has something t
amjad ali wrote:
and it's conceivable that you might have better performance with

  CALL MPI_ISEND()
  DO I = 1, N
    call do_a_little_of_my_work() ! no MPI progress is being made here
    CALL MPI_TEST() ! enough MPI progress is being made here that the
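
Filling out the fragment above, a self-contained sketch of the
ISEND/TEST overlap pattern (buffer size, tag, and the two-rank layout
are assumptions for illustration; run with at least 2 processes):

  program overlap_demo
    use mpi
    implicit none
    integer, parameter :: N = 1000
    integer :: ierr, rank, req, i
    logical :: done
    double precision :: buf(4096)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    if (rank == 0) then
       buf = 1.0d0
       call MPI_ISEND(buf, size(buf), MPI_DOUBLE_PRECISION, 1, 0, &
                      MPI_COMM_WORLD, req, ierr)
       do i = 1, N
          ! do_a_little_of_my_work() would go here
          call MPI_TEST(req, done, MPI_STATUS_IGNORE, ierr)  ! nudges progress
       end do
       call MPI_WAIT(req, MPI_STATUS_IGNORE, ierr)
    else if (rank == 1) then
       call MPI_RECV(buf, size(buf), MPI_DOUBLE_PRECISION, 0, 0, &
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    end if

    call MPI_FINALIZE(ierr)
  end program overlap_demo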
Dear All,
I am using Open MPI and I got this error:

[n337:37664] *** Process received signal ***
[n337:37664] Signal: Segmentation fault (11)
[n337:37664] Signal code: Address not mapped (1)
[n337:37664] Failing at address: 0x7fffcfe9
[n337:37664] [ 0] /lib64/libpthread.so.0 [0x3c50e0e4c0]
[n337:376
When I have gotten segmentation faults, it has always been my own coding
mistake. Perhaps your code is not robust when the number of processes is
not divisible by 2?
On Wed, Jun 30, 2010 at 8:47 AM, Jack Bryan wrote:
> Dear All,
>
> I am using Open MPI and I got this error:
>
> [n337:37664] *** Process received si
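
On David Zhang's divisibility point, a sketch of a block decomposition
that stays correct when the process count does not divide the problem
size (all names illustrative):

  program decomp_demo
    use mpi
    implicit none
    integer, parameter :: N = 101
    integer :: ierr, rank, nprocs, base, rem, lo, hi, cnt

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

    base = N / nprocs       ! minimum items per rank
    rem  = mod(N, nprocs)   ! leftovers, one each to the first rem ranks
    cnt  = base + merge(1, 0, rank < rem)
    lo   = rank * base + min(rank, rem) + 1
    hi   = lo + cnt - 1

    print *, 'rank', rank, 'owns items', lo, 'to', hi
    call MPI_FINALIZE(ierr)
  end program decomp_demo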
Dear E. Loh,
Thank you very much for your help.
Actually I was already doing the same as your earlier suggestions in the
program, but the error was still there.
At last I found the blunder I had made myself.
It was in fact a typo in a variable name.
I will let you know about the performan
Based on my experience, I would FULLY endorse (100% agree with) David
Zhang. It is usually a coding or typo mistake.
First, ensure that array sizes and dimensions are correct.
In my experience, if Open MPI is compiled with the GNU compilers (not
Intel), then it also points out the subroutine ex
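
A hedged illustration of that tip: with the GNU compilers, runtime
bounds checking usually reports the offending line and routine. The
program and flags below are illustrative:

  ! compile with: gfortran -g -fbounds-check -fbacktrace bug.f90
  ! (or via Open MPI's mpif90 wrapper with the same flags)
  program bounds_demo
    implicit none
    integer :: a(10), i
    i = 11
    a(i) = 0   ! out-of-bounds write; the runtime check aborts here
               ! and names the array, line, and enclosing routine
    print *, a(1)
  end program bounds_demo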
Hello,
I just re-compiled OMPI, and noticed this in the
"ompi_info --all" output:
Open MPI: 1.4.3a1r23323
...
Thread support: posix (mpi: yes, progress: no)
...
What is this "progress thread support"? Is it the "asynchronous
progress ...
A stale, unsupported option. We have removed it in future releases to
avoid confusion.
On Jun 30, 2010, at 2:16 PM, Riccardo Murri wrote:
> Hello,
>
> I just re-compiled OMPI, and noticed this in the
> "ompi_info --all" output:
>
> Open MPI: 1.4.3a1r23323
> ...
Hello,
The FAQ states: "Support for MPI_THREAD_MULTIPLE [...] has been
designed into Open MPI from its first planning meetings. Support for
MPI_THREAD_MULTIPLE is included in the first version of Open MPI, but
it is only lightly tested and likely still has some bugs."
The man page of "mpirun" fr
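
For reference, the usual way to request MPI_THREAD_MULTIPLE and verify
what the library actually granted (a minimal sketch):

  program thread_demo
    use mpi
    implicit none
    integer :: ierr, provided

    call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE, provided, ierr)
    if (provided < MPI_THREAD_MULTIPLE) then
       print *, 'MPI_THREAD_MULTIPLE unavailable; got level', provided
    end if
    call MPI_FINALIZE(ierr)
  end program thread_demo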
Hi all,
I am working on a parallel tempering MCMC code using Open MPI. I am a
bit confused about proposing swaps between chains running on different
cores. I know how to propose swaps, but I am not sure where to do it
(i.e., how to specify an independent node or core for it). If som
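
One common way to structure the swaps being asked about (an
illustrative sketch, not from the thread): keep one chain per rank,
pair neighbouring ranks each swap phase, and exchange energies with
MPI_SENDRECV, so no separate coordinator core is needed:

  program pt_swap_sketch
    use mpi
    implicit none
    integer :: ierr, rank, nprocs, partner
    double precision :: my_e, their_e

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

    my_e = dble(rank)   ! stand-in for the chain's current energy

    ! pair ranks (0,1), (2,3), ...; alternate phases would pair (1,2), (3,4), ...
    if (mod(rank, 2) == 0) then
       partner = rank + 1
    else
       partner = rank - 1
    end if

    if (partner >= 0 .and. partner < nprocs) then
       call MPI_SENDRECV(my_e, 1, MPI_DOUBLE_PRECISION, partner, 0,    &
                         their_e, 1, MPI_DOUBLE_PRECISION, partner, 0, &
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
       ! exchange temperatures the same way; then let the lower rank
       ! draw the Metropolis random number and send its accept/reject
       ! decision to the partner so both apply the same outcome
    end if

    call MPI_FINALIZE(ierr)
  end program pt_swap_sketch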