Well, if you’re trying to get Open MPI running on a platform for which we don’t
have atomics support, built-in atomics solves a problem for you…
Brian
> On Aug 1, 2017, at 9:42 AM, Nathan Hjelm wrote:
>
> So far only cons. The gcc and sync builtin atomics provide slower performance
> on x86-64
Hi,
> On 01.08.2017 at 18:36, Dave Love wrote:
>
> Gilles Gouaillardet writes:
>
>> Dave,
>>
>>
>> unless you are doing direct launch (for example, use 'srun' instead of
>> 'mpirun' under SLURM),
>>
>> this is the way Open MPI works: mpirun will use whatever the
>> resource manager provides
So far only cons. The gcc and sync builtin atomics provide slower performance on
x86-64 (and possibly other platforms). I plan to investigate this as part of the
work on requiring C11 atomics from the C compiler.
-Nathan
> On Aug 1, 2017, at 10:34 AM, Dave Love wrote:
>
> What are
Gilles Gouaillardet writes:
> Dave,
>
>
> unless you are doing direct launch (for example, use 'srun' instead of
> 'mpirun' under SLURM),
>
> this is the way Open MPI works: mpirun will use whatever the
> resource manager provides
>
> in order to spawn the remote orted (tm with PBS, qrsh wi
What are the pros and cons of configuring with --enable-builtin-atomics?
I haven't spotted any discussion of the option.
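For reference, the flag is simply passed to Open MPI's configure; a minimal
sketch (the install prefix here is illustrative, not from this thread):

  ./configure --prefix=/opt/openmpi --enable-builtin-atomics
  make all install

Without the flag, Open MPI uses its own hand-written atomics on platforms that
provide them, which is the default the replies above compare against.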
ompi_info et al print absolute compiler paths for some reason. What
would they ever be used for, and are they intended to refer to the OMPI
build or application building? They're an issue for packaging in Guix,
at least. Similarly, what's io_romio_complete_configure_params intended
to be used for?
On Aug 1, 2017, at 5:56 AM, Diego Avesani wrote:
>
> If I do this:
>
> CALL MPI_SCATTER(PP, npart, MPI_DOUBLE, PPL, 10,MPI_DOUBLE, 0, MASTER_COMM,
> iErr)
>
> I get an error. This is because some CPUs do not belong to MASTER_COMM. The
> alternative should be:
>
> IF(rank.LT.0)THEN
> CALL MP
Dear all, Dear George,
now it seems to be getting better:

   CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, ierr)
   !! Create a new communicator based on the group
   CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, iErr)
   IF (MPI_COMM_NULL .NE. MASTER_COMM) THEN
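For reference, a minimal self-contained sketch of the pattern being discussed;
the identifiers nPSObranch, MRANKS, GROUP_WORLD, MASTER_GROUP, MASTER_COMM, PP
and PPL come from the snippets above, while the concrete counts, the
MPI_DOUBLE_PRECISION datatype and the surrounding program scaffolding are only
illustrative assumptions:

   PROGRAM scatter_on_subcomm
     USE mpi
     IMPLICIT NONE
     INTEGER, PARAMETER :: nPSObranch = 2      ! assumed size of the master group
     INTEGER :: MRANKS(nPSObranch)             ! world ranks that form MASTER_COMM
     INTEGER :: GROUP_WORLD, MASTER_GROUP, MASTER_COMM
     INTEGER :: rank, iErr, i
     DOUBLE PRECISION :: PP(10*nPSObranch), PPL(10)

     CALL MPI_INIT(iErr)
     CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, iErr)
     MRANKS = (/ (i, i = 0, nPSObranch-1) /)

     ! Build a subgroup of MPI_COMM_WORLD and a communicator from it.
     CALL MPI_COMM_GROUP(MPI_COMM_WORLD, GROUP_WORLD, iErr)
     CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, iErr)
     CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, iErr)

     IF (MASTER_COMM .NE. MPI_COMM_NULL) THEN
        ! Only members of MASTER_COMM enter the collective; the other ranks
        ! skip it entirely, avoiding the error from the earlier message.
        CALL MPI_SCATTER(PP, 10, MPI_DOUBLE_PRECISION, PPL, 10, &
                         MPI_DOUBLE_PRECISION, 0, MASTER_COMM, iErr)
        CALL MPI_COMM_FREE(MASTER_COMM, iErr)
     END IF

     CALL MPI_GROUP_FREE(MASTER_GROUP, iErr)
     CALL MPI_GROUP_FREE(GROUP_WORLD, iErr)
     CALL MPI_FINALIZE(iErr)
   END PROGRAM scatter_on_subcomm

MPI_COMM_CREATE_GROUP returns MPI_COMM_NULL on ranks that are not members of
MASTER_GROUP, so the IF guard is what keeps those ranks out of the collective.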
Hi,
I'm trying to run MPI applications (using openmpi 1.10 or 2.10) on SUSE
SLE12-SP3. I have two servers connected through IB and run a simple:
mpirun --prefix /usr/lib64/mpi/gcc/openmpi2/ --host 192.168.0.1,192.168.0.2
-npernode 1 --allow-run-as-root /usr/lib64/mpi/gcc/openmpi2/tests/IMB/IM