Dear all,
In the ongoing investigation into why a particular in-house program does
not run in parallel across multiple nodes using Open MPI, running with
"--mca btl self,sm,tcp" I have been running into the following error:
[compute-6-15.local][[8185,1],0][btl_tcp_endpoint.c:653:mca_btl_tcp_
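When chasing this kind of TCP BTL connection failure, it can help to turn up the BTL verbosity so Open MPI reports which interfaces and endpoints it is trying to use. A minimal sketch (the hostfile and program names below are placeholders for your own setup):

```shell
# Ask the BTL framework for verbose output while keeping the same
# transport selection as in the failing run:
mpirun --mca btl self,sm,tcp --mca btl_base_verbose 30 \
       -hostfile myhosts ./my_program
```

The verbose output usually shows the IP addresses each peer advertises, which makes routing or wrong-subnet problems much easier to spot.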
On Mon, Apr 21, 2014 at 08:53:02AM +0200, Tobias Burnus wrote:
> Dear all,
>
> I would like to use one-sided communication as the implementation of a
> Fortran coarray library. "MPI provides three synchronization
> mechanisms:
>
> "1. The MPI_WIN_FENCE collective synchronization call supports a
> simple synchronization pattern [...]
On Mar 13, 2014, at 3:15 PM, Ross Boylan wrote:
> The motivation was
> http://www.stats.uwo.ca/faculty/yu/Rmpi/changelogs.htm notes
> --
> 2007-10-24, version 0.5-5:
>
> dlopen has been used to load libmpi.so explicitly. This is mainly useful
> for Rmpi under OpenMPI [...]
On Apr 23, 2014, at 4:45 PM, Ross Boylan wrote:
>> is OK. So, if any nonblocking calls are used, one must use mpi.test or
>> mpi.wait to check if they are complete before trying any blocking calls.
That is also correct -- it's MPI semantics (communications initiated by
MPI_Isend / MPI_Irecv must be completed with a matching test or wait call
before their buffers are reused).
Hi
Is there a way of getting hold of hcoll-v3 required for OMPI 1.8.1?
Thanks
Jamil
Sounds like either a routing problem or a firewall. Are there multiple NICs on
these nodes? Looking at the quoted NIC in your error message, is that the
correct subnet we should be using?
Have you checked to ensure no firewalls exist on that subnet between the nodes?
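If multiple NICs turn out to be the culprit, one common remedy is to tell the TCP BTL explicitly which interface or subnet to use. A sketch (interface name and subnet below are placeholders for your network):

```shell
# Restrict the TCP BTL to one interface by name...
mpirun --mca btl self,sm,tcp --mca btl_tcp_if_include eth0 ./my_program
# ...or by subnet (CIDR notation), useful when interface names
# differ from node to node:
mpirun --mca btl self,sm,tcp --mca btl_tcp_if_include 192.168.1.0/24 ./my_program
```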
On Apr 24, 2014, at 8:41 AM