We just switched the default compile mode to MPI_THREAD_MULTIPLE (with
prior testing). Thus, we are relatively confident it should work on all
BTLs. If not, we would be happy to hear about it.
George.
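For anyone who wants to double-check the thread level a given build
actually grants at run time, here is a minimal sketch (the calls are
standard MPI; the program itself is not from this thread):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Request the highest thread level and report what was granted. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided >= MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE granted\n");
        else
            printf("only thread level %d granted\n", provided);
        MPI_Finalize();
        return 0;
    }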
On Sun, Apr 24, 2016 at 11:02 AM, dpchoudh . wrote:
> Hello Gilles
>
> That idea crossed my mind as well, but I was under the impression that ...
As far as I understand, the tcp BTL is OK.
Cheers,
Gilles
On Monday, April 25, 2016, dpchoudh . wrote:
> Hello Gilles
>
> That idea crossed my mind as well, but I was under the impression that
> MPI_THREAD_MULTIPLE is not very well supported in Open MPI. I believe it is
> not supported on OpenIB, but the original poster seems to be using TCP ...
Hello Gilles
That idea crossed my mind as well, but I was under the impression that
MPI_THREAD_MULTIPLE is not very well supported in Open MPI. I believe it is
not supported on OpenIB, but the original poster seems to be using TCP.
Does it work for TCP?
Thanks
Durga
Another option is to use MPI_THREAD_MULTIPLE and MPI_Recv() on the master
task in a dedicated thread, and use a unique tag (or an MPI_Comm_dup() of
MPI_COMM_WORLD) to separate the traffic.
If this is not the desired design, then the master task has to post
MPI_Irecv() and "poll" with MPI_Probe() / MPI_Test().
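To make the dedicated-thread option concrete, here is a minimal sketch
(the tag value, the int progress payload, and the "100 means done"
convention are assumptions for the example, not anything specified in
this thread):

    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    #define PROGRESS_TAG 42        /* arbitrary tag reserved for progress */

    static MPI_Comm progress_comm; /* dup of MPI_COMM_WORLD for this traffic */

    /* Master-side receiver thread: drains progress messages until every
     * worker has reported completion (progress == 100). */
    static void *progress_thread(void *arg)
    {
        int remaining = *(int *)arg;
        while (remaining > 0) {
            int progress;
            MPI_Status st;
            MPI_Recv(&progress, 1, MPI_INT, MPI_ANY_SOURCE, PROGRESS_TAG,
                     progress_comm, &st);
            printf("rank %d: %d%%\n", st.MPI_SOURCE, progress);
            if (progress == 100)
                remaining--;
        }
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            MPI_Abort(MPI_COMM_WORLD, 1);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* Separate communicator so progress messages can never be matched
         * by ordinary receives on MPI_COMM_WORLD. */
        MPI_Comm_dup(MPI_COMM_WORLD, &progress_comm);

        if (rank == 0) {
            int nworkers = size - 1;
            pthread_t tid;
            pthread_create(&tid, NULL, progress_thread, &nworkers);
            /* ... the master's real work would go here ... */
            pthread_join(tid, NULL);
        } else {
            for (int p = 0; p <= 100; p += 25)   /* workers report progress */
                MPI_Send(&p, 1, MPI_INT, 0, PROGRESS_TAG, progress_comm);
        }

        MPI_Comm_free(&progress_comm);
        MPI_Finalize();
        return 0;
    }

The MPI_Comm_dup() keeps the progress traffic from ever colliding with
application messages on MPI_COMM_WORLD, even if the application happens
to reuse the same tag. Build with mpicc -pthread.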
Hello
I am not sure I am understanding your requirements correctly, but based on
what I think they are, how about this: do an MPI_Send() from all the
non-root nodes to the root node and pack all the progress-related data into
this send. Use a special tag for this message to make it stand out from the
rest of the traffic.
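A rough sketch of that idea using MPI_Pack()/MPI_Unpack() (the fields,
buffer size, and tag are invented for illustration; these fragments would
be called between MPI_Init() and MPI_Finalize() in a normal MPI program):

    #include <mpi.h>

    #define STATUS_TAG 77  /* invented tag reserved for status messages */

    /* Worker side: pack percent done and elapsed seconds into one buffer
     * and send it to the root with the reserved tag. */
    static void send_status(int percent, double elapsed)
    {
        char buf[64];
        int pos = 0;
        MPI_Pack(&percent, 1, MPI_INT, buf, sizeof buf, &pos, MPI_COMM_WORLD);
        MPI_Pack(&elapsed, 1, MPI_DOUBLE, buf, sizeof buf, &pos, MPI_COMM_WORLD);
        MPI_Send(buf, pos, MPI_PACKED, 0, STATUS_TAG, MPI_COMM_WORLD);
    }

    /* Root side: receive one status message and unpack the fields; the
     * sender's rank comes from the MPI_Status. */
    static void recv_status(int *source, int *percent, double *elapsed)
    {
        char buf[64];
        int pos = 0;
        MPI_Status st;
        MPI_Recv(buf, sizeof buf, MPI_PACKED, MPI_ANY_SOURCE, STATUS_TAG,
                 MPI_COMM_WORLD, &st);
        *source = st.MPI_SOURCE;
        MPI_Unpack(buf, sizeof buf, &pos, percent, 1, MPI_INT, MPI_COMM_WORLD);
        MPI_Unpack(buf, sizeof buf, &pos, elapsed, 1, MPI_DOUBLE, MPI_COMM_WORLD);
    }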
Hello,
With a miniature case of 3 Linux quad-core boxes linked via 1 Gbit Ethernet,
I have a UI that runs on 1 of the 3 boxes, and that box is the root of the
communicator.
I have a function that runs for about 1 second and takes up to 10
parameters; my parameter space fits in the memory of the root, and the
size of it is N
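From what is quoted above, this looks like a classic self-scheduling
master/worker sweep: the root hands one parameter point to each idle
worker and deals out the next point as results come back. A skeleton under
that assumption (the point encoding, point count, and evaluate() body are
invented placeholders; it also assumes at least as many points as workers):

    #include <mpi.h>

    #define NPARAMS 10
    #define WORK_TAG 1
    #define STOP_TAG 2

    /* Invented stand-in for the 1-second function of 10 parameters. */
    static double evaluate(const double p[NPARAMS])
    {
        double s = 0.0;
        for (int i = 0; i < NPARAMS; i++)
            s += p[i] * p[i];
        return s;
    }

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            const int npoints = 100;      /* stand-in for N */
            double point[NPARAMS] = {0};
            int sent = 0, received = 0;

            /* Prime every worker with one point. */
            for (int w = 1; w < size && sent < npoints; w++, sent++) {
                point[0] = sent;          /* fabricate the next point */
                MPI_Send(point, NPARAMS, MPI_DOUBLE, w, WORK_TAG,
                         MPI_COMM_WORLD);
            }
            /* Collect results; refill the idle worker or tell it to stop. */
            while (received < npoints) {
                double result;
                MPI_Status st;
                MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                received++;               /* update the UI here */
                if (sent < npoints) {
                    point[0] = sent++;
                    MPI_Send(point, NPARAMS, MPI_DOUBLE, st.MPI_SOURCE,
                             WORK_TAG, MPI_COMM_WORLD);
                } else {
                    MPI_Send(point, NPARAMS, MPI_DOUBLE, st.MPI_SOURCE,
                             STOP_TAG, MPI_COMM_WORLD);
                }
            }
        } else {
            /* Worker: evaluate points until told to stop. */
            for (;;) {
                double point[NPARAMS], result;
                MPI_Status st;
                MPI_Recv(point, NPARAMS, MPI_DOUBLE, 0, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == STOP_TAG)
                    break;
                result = evaluate(point);
                MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }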