[OMPI users] configure fails to detect missing libcrypto

2014-07-24 Thread Jeff Hammond
I am trying to build Open MPI SVN trunk on NERSC Babbage with PBS
support.  configure completes without error but the build fails
because libcrypto.so is missing.

I consider it a desirable property that configure detect all the
necessary dependencies for a build to complete, rather than defer
errors to the compilation phase.

I will file a Trac ticket as soon as my account is reset (in-progress).

Making all in mca/plm/tm
make[2]: Entering directory
`/chos/global/u1/j/jhammond/MPI/ompi-trunk/build-intel/orte/mca/plm/tm'
  CC   plm_tm_component.lo
  CC   plm_tm_module.lo
  CCLD mca_plm_tm.la
ld: cannot find -lcrypto
make[2]: *** [mca_plm_tm.la] Error 1
make[2]: Leaving directory
`/chos/global/u1/j/jhammond/MPI/ompi-trunk/build-intel/orte/mca/plm/tm'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory
`/chos/global/u1/j/jhammond/MPI/ompi-trunk/build-intel/orte'
make: *** [all-recursive] Error 1

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


config.log.tbz
Description: Binary data


Re: [OMPI users] configure fails to detect missing libcrypto

2014-07-24 Thread Jeff Hammond
That could be the case.  I've reported the missing libcrypto issue to
NERSC already.  But neither Intel MPI nor MVAPICH cares about libcrypto,
and both of them support PBS, so I'm not entirely convinced that PBS
depends upon it.

Thanks,

Jeff

Subject: Re: [OMPI users] configure fails to detect missing libcrypto
From: Ralph Castain (rhc_at_[hidden])
Date: 2014-07-24 17:12:16

I'm not aware of our tm module requiring libcrypto - is this something
specific to your PBS install, so it wants to pull libcrypto in when we
link against the Torque lib?

On Jul 24, 2014, at 1:49 PM, Jeff Hammond  wrote:
> I am trying to build Open MPI SVN trunk on NERSC Babbage with PBS
> support. configure completes without error but the build fails
> because libcrypto.so is missing.
>
> I consider it a desirable property that configure detect all the
> necessary dependencies for a build to complete, rather than defer
> errors to the compilation phase.
>
> I will file a Trac ticket as soon as my account is reset (in-progress).
>
> Making all in mca/plm/tm
> make[2]: Entering directory
> `/chos/global/u1/j/jhammond/MPI/ompi-trunk/build-intel/orte/mca/plm/tm'
> CC plm_tm_component.lo
> CC plm_tm_module.lo
> CCLD mca_plm_tm.la
> ld: cannot find -lcrypto
> make[2]: *** [mca_plm_tm.la] Error 1
> make[2]: Leaving directory
> `/chos/global/u1/j/jhammond/MPI/ompi-trunk/build-intel/orte/mca/plm/tm'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory
> `/chos/global/u1/j/jhammond/MPI/ompi-trunk/build-intel/orte'
> make: *** [all-recursive] Error 1
>
> Thanks,
>
> Jeff
>
> --
> Jeff Hammond
> jeff.science_at_[hidden]
> http://jeffhammond.github.io/
> ___
> users mailing list
> users_at_[hidden]
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2014/07/24864.php

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-21 Thread Jeff Hammond
Best performance per dollar for CPU systems is usually a mid-core-count,
single-socket system one generation old, such as an Intel Haswell or
Broadwell Core i7. You might get lucky and find, e.g., 12-core Xeon
processors cheap now.

If you want lots of MPI ranks per dollar, look at Intel Knights Corner Xeon
Phi cards in a cheap host.

You can also go small with an array of Raspberry Pi, Arduino,
Adapteva Parallella, Intel NUC, etc.

However, if you are doing non-commercial research, you should just apply
for supercomputer time at a government-sponsored center like NERSC or
XSEDE.

Jeff
(Who works for Intel, and thus may be accused of excessive familiarity with
Intel products)

On Friday, May 20, 2016, MM  wrote:

> Hello,
>
> Say I don't have access to a actual cluster, yet I'm considering cloud
> compute solutions for my MPI program ultimately, but such a cost may be
> highly prohibitive at the moment.
> In terms of middle ground, if I am interesting in compute only, no
> storage, what are possible hardware solutions out there to deploy my MPI
> program?
> By no storage, I mean that my control linux box running the frontend of
> the program, but is also part of the mpi communicator always gathers all
> results and stores them locally.
> At the moment, I have a second box over ethernet.
>
> I am looking at something like Intel Compute Stick (is it possible at all
> to buy a few, is linux running on them, the arch seems to be the same
> x86-64, is there a possible setup with tcp for those and have openmpi over
> tcp)?
>
> Is it more cost-effective to look at extra regular linux commodity boxes?
> If a no hard drive box is possible, can the executables of my MPI program
> sendable over the wire before running them?
>
> If we exclude GPU or other nonMPI solutions, and cost being a primary
> factor, what is progression path from 2boxes to a cloud based solution
> (amazon and the like...)
>
> Regards,
> MM
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Broadcast faster than barrier

2016-05-30 Thread Jeff Hammond
> So, you mean that it guarantees the value received after the bcast call is
> consistent with value sent from root, but it doesn't have to wait till all
> the ranks have received it?
>
> this is what i believe, double checking the standard might not hurt though
> ...
>

No function has barrier semantics except MPI_Barrier itself, although some
functions acquire barrier semantics from their data dependencies for
non-zero counts (allgather, alltoall, allreduce).

Reduce, Bcast, gather, and scatter should never have barrier semantics and
should not synchronize more than the explicit data dependencies require. The
send-only ranks may return long before the recv-only ranks do, particularly
when the messages go via an eager protocol.
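
A minimal timing sketch (not from the original thread) that makes this
visible; on many implementations the root's elapsed time is far smaller than
the late rank's for eager-sized messages, i.e., Bcast did not behave like a
barrier:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[1024] = {0};           /* small enough to go eagerly on most transports */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Barrier(MPI_COMM_WORLD);    /* common start time */
    if (rank != 0) sleep(1);        /* make the non-root ranks arrive late */
    double t0 = MPI_Wtime();
    MPI_Bcast(buf, sizeof(buf), MPI_BYTE, 0, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();
    printf("rank %d: MPI_Bcast took %f s\n", rank, t1 - t0);
    MPI_Finalize();
    return 0;
}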

One can imagine barrier as a 1-byte allreduce, but there are more efficient
implementations. Allreduce should never be faster than Bcast, as Gilles
explained.

There's a nice paper on self-consistent performance of MPI implementations
that has lots of details.

Jeff


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] max buffer size

2016-06-05 Thread Jeff Hammond
Check out the BigMPI project for details on this topic.

Some (many?) MPI implementations still have internal limitations that
prevent one from sending more than 2 gigabytes using MPI datatypes. You can
use the BigMPI tests to identify these.

https://github.com/jeffhammond/BigMPI
https://github.com/jeffhammond/BigMPI-paper
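
As a hedged illustration of the kind of workaround BigMPI automates (and Gus
describes below), a contiguous derived datatype lets you move more than
INT_MAX elements while keeping the count argument small. The buffer,
destination, tag, and communicator names here are placeholders, and the
sketch assumes the element count divides evenly into chunks:

/* send "nelems" doubles as chunks of 1,000,000 elements each */
MPI_Datatype chunk;
MPI_Count nelems = 4000000000LL;          /* more than INT_MAX elements */
int chunksize = 1000000;
MPI_Type_contiguous(chunksize, MPI_DOUBLE, &chunk);
MPI_Type_commit(&chunk);
MPI_Send(buf, (int)(nelems / chunksize), chunk, dest, tag, comm);
MPI_Type_free(&chunk);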

Jeff

On Sunday, June 5, 2016, Alexander Droste <
alexander.ra.dro...@googlemail.com> wrote:

> Hi Gus,
>
> thanks a lot for the intro, that helps.
>
> Best regards,
> Alex
>
> On 05.06.16 18:30, Gustavo Correa wrote:
>
>>
>> On Jun 5, 2016, at 12:03 PM, Alexander Droste wrote:
>>
>> Hi everyone,
>>>
>>> I'd like to know what the maximum buffer size
>>> for sends/receives is. Besides the count being limited
>>> to INT_MAX, how is the max buffer size limited?
>>>
>>> Best regards,
>>> Alex
>>>
>>>
>>
>> Hi Alexander
>>
>> As far as I know, the usual solution to circumvent
>> this type of large count problem is to declare an MPI user type to hold
>> a large number of MPI native types (say,
>> an MPI_Type_Contiguous or MPI_Type_Vector to hold a bunch of floating
>> point numbers).
>>
>> https://www.open-mpi.org/doc/v1.8/man3/MPI_Type_contiguous.3.php
>>
>> Also, an OMPI pro may correct me for saying foolish things on the list,
>> but AFAIK, not all sends/receives are buffered, and the buffer size is
>> set by the default eager/rendevous message threshold (or the value that you
>> set it to be at runtime with OMPI mca parameters). That buffer size may
>> also vary according to the btl (sm,vader, tcp, openib, etc).
>>
>> Search for "eager" and "rendevous" on the FAQ:
>>
>> https://www.open-mpi.org/faq/?category=all
>>
>>
>> I hope this helps,
>> Gus Correa
>> ___
>> users mailing list
>> us...@open-mpi.org
>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/06/29371.php
>>
>> ___
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/06/29372.php
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] mkl threaded works in serail but not in parallel

2016-06-22 Thread Jeff Hammond
Do you know for sure that MKL is only using one thread or do you merely see
that the performance is consistent with it using one thread?

If MPI does process pinning, it is possible for all OpenMP threads to run
on one core, which means one will observe no speedup from threads (and
potentially a slowdown due to oversubscription).  I do not know the option
to disable this with Open-MPI, but I assume either you can find it in the
docs or one of the Open-MPI experts will provide it.

I have observed this issue with MVAPICH2 and Pthread applications in the
past (it has been fixed both in MVAPICH2 and the relevant applications),
but not in Open-MPI with OpenMP, although I am not a heavy user of Open-MPI.

Best,

Jeff

On Wed, Jun 22, 2016 at 9:17 AM, remi marchal 
wrote:

> Dear openmpi users,
>
> Today, I faced a strange problem.
>
> I am compiling a quantum chemistry software (CASTEP-16) using intel16, mkl
> threaded libraries and openmpi-18.1.
>
> The compilation works fine.
>
> When I ask for MKL_NUM_THREAD=4 and call the program in serial mode
> (without mpirun), it works perfectly and use 4 threads.
>
> However, when I start the program with mpirun, even with 1 mpi process,
> the program ran but only with 1 thread.
>
> I never add such kind of trouble.
>
> Does anyone have an explanation.
>
> Regards,
>
> Rémi
>
>
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/06/29495.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] mkl threaded works in serail but not in parallel

2016-06-22 Thread Jeff Hammond
Linux should not put more than one thread on a core if there are free
cores.  Depending on cache/bandwidth needs, it may or may not be better to
colocate on the same socket.

KMP_AFFINITY will pin the OpenMP threads.  This is often important for MKL
performance.  See https://software.intel.com/en-us/node/522691 for details.
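
As a hedged illustration (not from the original thread), here is one way to
combine Open MPI binding options with an explicit KMP_AFFINITY setting; the
rank/thread counts and the application name are placeholders, and the right
choice depends on your node layout and Open MPI version:

# 2 ranks per node, 4 MKL/OpenMP threads per rank, one rank per socket
export MKL_NUM_THREADS=4
export OMP_NUM_THREADS=4
export KMP_AFFINITY=granularity=fine,compact,1,0
mpirun -np 2 --map-by socket --bind-to socket ./castep input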

Jeff

On Wed, Jun 22, 2016 at 9:47 AM, Gilles Gouaillardet 
wrote:

> Remi,
>
>
> Keep in mind this is still suboptimal.
>
> if you run 2 tasks per node, there is a risks threads from different ranks
> end up bound to the same core, which means time sharing and a drop in
> performance.
>
>
> Cheers,
>
>
> Gilles
>
> On 6/22/2016 4:45 PM, remi marchal wrote:
>
> Dear Gilles,
>
> Thanks a lot.
>
> The mpirun --bind-to-none solve the problem.
>
> Thanks a lot,
>
> Regards,
>
> Rémi
>
>
>
>
>
> Le 22 juin 2016 à 09:34, Gilles Gouaillardet  a écrit :
>
> Remi,
>
>
> in the same environment, can you
>
> mpirun -np 1 grep Cpus_allowed_list /proc/self/status
>
> it is likely Open MPI allows only one core, and in this case, i suspect
> MKL refuses to do some time sharing and hence transparently reduce the
> number of threads to 1.
> /* unless it *does* time sharing, and you observed 4 threads with the
> performance of one */
>
>
> mpirun --bind-to none ...
>
> will tell Open MPI *not* to bind on one core, and that should help a bit.
>
> note this is suboptimal, you should really ask mpirun to allocate 4 cores
> per task, but i cannot remember the correct command line for that
>
> Cheers,
>
> Gilles
>
>
>
>
> On 6/22/2016 4:17 PM, remi marchal wrote:
>
> Dear openmpi users,
>
> Today, I faced a strange problem.
>
> I am compiling a quantum chemistry software (CASTEP-16) using intel16, mkl
> threaded libraries and openmpi-18.1.
>
> The compilation works fine.
>
> When I ask for MKL_NUM_THREAD=4 and call the program in serial mode
> (without mpirun), it works perfectly and use 4 threads.
>
> However, when I start the program with mpirun, even with 1 mpi process,
> the program ran but only with 1 thread.
>
> I never add such kind of trouble.
>
> Does anyone have an explanation.
>
> Regards,
>
> Rémi
>
>
>
>
>
>
> ___
> users mailing listus...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2016/06/29495.php
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/06/29497.php
>
>
>
>
> ___
> users mailing listus...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2016/06/29498.php
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/06/29499.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] mkl threaded works in serail but not in parallel

2016-06-22 Thread Jeff Hammond
KMP_AFFINITY is essential for performance. One just needs to set it to
something that distributes the threads properly.

Not setting KMP_AFFINITY means no affinity, so the threads simply inherit
the process affinity mask.

Jeff

On Wednesday, June 22, 2016, Gilles Gouaillardet  wrote:

> my bad, I was assuming KMP_AFFINITY was used
>
>
> so let me put it this way :
>
> do *not* use KMP_AFFINITY with mpirun -bind-to none, otherwise, you will
> very likely end up doing time sharing ...
>
>
> Cheers,
>
>
> Gilles
>
> On 6/22/2016 5:07 PM, Jeff Hammond wrote:
>
> Linux should not put more than one thread on a core if there are free
> cores.  Depending on cache/bandwidth needs, it may or may not be better to
> colocate on the same socket.
>
> KMP_AFFINITY will pin the OpenMP threads.  This is often important for MKL
> performance.  See  <https://software.intel.com/en-us/node/522691>
> https://software.intel.com/en-us/node/522691 for details.
>
> Jeff
>
> On Wed, Jun 22, 2016 at 9:47 AM, Gilles Gouaillardet <
> gil...@rist.or.jp
> > wrote:
>
>> Remi,
>>
>>
>> Keep in mind this is still suboptimal.
>>
>> if you run 2 tasks per node, there is a risks threads from different
>> ranks end up bound to the same core, which means time sharing and a drop in
>> performance.
>>
>>
>> Cheers,
>>
>>
>> Gilles
>>
>> On 6/22/2016 4:45 PM, remi marchal wrote:
>>
>> Dear Gilles,
>>
>> Thanks a lot.
>>
>> The mpirun --bind-to-none solve the problem.
>>
>> Thanks a lot,
>>
>> Regards,
>>
>> Rémi
>>
>>
>>
>>
>>
>> Le 22 juin 2016 à 09:34, Gilles Gouaillardet > > a écrit :
>>
>> Remi,
>>
>>
>> in the same environment, can you
>>
>> mpirun -np 1 grep Cpus_allowed_list /proc/self/status
>>
>> it is likely Open MPI allows only one core, and in this case, i suspect
>> MKL refuses to do some time sharing and hence transparently reduce the
>> number of threads to 1.
>> /* unless it *does* time sharing, and you observed 4 threads with the
>> performance of one */
>>
>>
>> mpirun --bind-to none ...
>>
>> will tell Open MPI *not* to bind on one core, and that should help a bit.
>>
>> note this is suboptimal, you should really ask mpirun to allocate 4 cores
>> per task, but i cannot remember the correct command line for that
>>
>> Cheers,
>>
>> Gilles
>>
>>
>>
>>
>> On 6/22/2016 4:17 PM, remi marchal wrote:
>>
>> Dear openmpi users,
>>
>> Today, I faced a strange problem.
>>
>> I am compiling a quantum chemistry software (CASTEP-16) using intel16,
>> mkl threaded libraries and openmpi-18.1.
>>
>> The compilation works fine.
>>
>> When I ask for MKL_NUM_THREAD=4 and call the program in serial mode
>> (without mpirun), it works perfectly and use 4 threads.
>>
>> However, when I start the program with mpirun, even with 1 mpi process,
>> the program ran but only with 1 thread.
>>
>> I never add such kind of trouble.
>>
>> Does anyone have an explanation.
>>
>> Regards,
>>
>> Rémi
>>
>>
>>
>>
>>
>>
>> ___
>> users mailing listus...@open-mpi.org 
>> 
>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post: 
>> http://www.open-mpi.org/community/lists/users/2016/06/29495.php
>>
>>
>> ___
>> users mailing list
>> us...@open-mpi.org 
>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> <http://www.open-mpi.org/community/lists/users/2016/06/29497.php>
>> http://www.open-mpi.org/community/lists/users/2016/06/29497.php
>>
>>
>>
>>
>> ___
>> users mailing listus...@open-mpi.org 
>> 
>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>> Link to this post: 
>> http://www.open-mpi.org/community/lists/users/2016/06/29498.php
>>
>>
>>
>> ___
>> users mailing list
>> us...@open-mpi.org 
>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/06/29499.php
>>
>
>
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> 
> http://jeffhammond.github.io/
>
>
> ___
> users mailing listus...@open-mpi.org 
> 
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2016/06/29500.php
>
>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Continuous integration question...

2016-06-22 Thread Jeff Hammond
The following may be a viable alternative.  Just a suggestion.

git clone --depth 10 -b v2.x https://github.com/open-mpi/ompi-release.git
open-mpi-v2.x

Jeff

On Wed, Jun 22, 2016 at 8:30 PM, Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:

> Excellent!
>
> I will put all in place, then try both URLs and see which one is
> "manageable" for me!
>
> Thanks,
>
> Eric
>
> On 22/06/16 02:10 PM, Jeff Squyres (jsquyres) wrote:
>
>> On Jun 22, 2016, at 2:06 PM, Eric Chamberland <
>> eric.chamberl...@giref.ulaval.ca> wrote:
>>
>>>
>>> We have a similar mechanism already (that is used by the Open MPI
>>>> community for nightly regression testing), but with the advantage that it
>>>> will give you a unique download filename (vs. "openmpi-v2.x-latest.bz2"
>>>> every night).  Do this:
>>>>
>>>> wget https://www.open-mpi.org/nightly/v2.x/latest_snapshot.txt
>>>> wget https://www.open-mpi.org/nightly/v2.x/openmpi-`cat
>>>> <https://www.open-mpi.org/nightly/v2.x/openmpi-cat>
>>>> latest_snapshot.txt`.tar.bz2
>>>>
>>>> The nightly snapshots are created each night starting around 9pm US
>>>> Eastern.  New snapshots are created if there were commits to the tree that
>>>> day.
>>>>
>>>
>>> Nice!  But I have a concern about taking the nightly: it it "just" a
>>> snapshot, or is it "somewhat validated" before beeing a snapshot?
>>>
>>
>> It's just a snapshot.
>>
>> Or I could ask: is this snapshot stable enough to be tested by
>>> "outsiders"?  Is there any more "stable" branch to wget?
>>>
>>
>> This is a different branch than our head of development (master).  It
>> tends to be pretty stable, but it does break sometimes.
>>
>> If not, I would ask if there is a similar wget trick to get the latest
>>> "release candidate" or something more "stable" than a snapshot of the
>>> repository...
>>>
>>
>> Release candidates move much more slowly than the nightly snapshots --
>> they're released at controlled points (e.g., we just did v2.0.0rc3, and
>> we're likely to do a v2.0.0rc4 shortly with just a few more cleanups beyond
>> rc3).  Those are found here:
>>
>>  https://www.open-mpi.org/software/ompi/v2.x/downloads/
>>
>> I.e., you can do the same latest_snapshot.txt thing there:
>>
>> wget
>> https://www.open-mpi.org/software/ompi/v2.x/downloads/latest_snapshot.txt
>> wget https://www.open-mpi.org/software/ompi/v2.x/downloads/openmpi-`cat
>> <https://www.open-mpi.org/software/ompi/v2.x/downloads/openmpi-cat>
>> latest_snapshot.txt`.tar.bz2
>>
>>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/06/29519.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Using Open MPI as a communication library

2016-07-08 Thread Jeff Hammond
Why wouldn't https://www.open-mpi.org/doc/v1.8/man3/MPI_Comm_connect.3.php
and friends work after MPI_Init is called, regardless of how the process is
spawned?
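
For reference, a hedged sketch of what "MPI_Comm_connect and friends" looks
like; the port string still has to be exchanged out of band (a file, a name
server via MPI_Publish_name, etc.), which is the rendezvous problem Ralph
mentions below:

/* server side */
char port[MPI_MAX_PORT_NAME];
MPI_Comm client;
MPI_Open_port(MPI_INFO_NULL, port);
/* ... make "port" visible to the other job somehow ... */
MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

/* client side, after obtaining the same port string */
MPI_Comm server;
MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);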

Jeff

On Fri, Jul 8, 2016 at 9:55 AM, Ralph Castain  wrote:

> You’d need to have some rendezvous mechanism. I suppose one option would
> be to launch a set of PMIx servers on the nodes (and ensure they know about
> each other) to support these things, but that’s all mpirun really does
> anyway.
>
>  What did you have in mind?
>
> On Jul 8, 2016, at 9:49 AM, Supun Kamburugamuve 
> wrote:
>
> Thanks for the quick response. Is there a way for extending OpenMPI so
> that it can discover the processes using other means?
>
> Supun.
>
> On Fri, Jul 8, 2016 at 12:45 PM, Ralph Castain  wrote:
>
>> If not spawned by mpirun, and not spawned by a resource manager, then it
>> won’t work. There is no way for the procs to wireup.
>>
>>
>> On Jul 8, 2016, at 9:42 AM, Supun Kamburugamuve 
>> wrote:
>>
>> Yes, the processes are not spawned by MPI and they are not spawned by
>> something like Slurm/PBS.
>>
>> How does MPI get to know what processes running in what nodes in a
>> general sense? Do we need to write some plugin so that it can figure out
>> this information? I guess this must be the way it is supporting Slurm/PBS
>> etc.
>>
>> Thanks,
>> Supun..
>>
>> On Fri, Jul 8, 2016 at 12:06 PM, Ralph Castain  wrote:
>>
>>> You mean you didn’t launch those procs via mpirun, yes? If you started
>>> them via some resource manager, then you might just be able to call
>>> MPI_Init and have them wireup.
>>>
>>>
>>> > On Jul 8, 2016, at 8:55 AM, Supun Kamburugamuve <
>>> skamburugam...@gmail.com> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I have a set of processes running and these are not managed/spawned by
>>> Open MPI. Is it possible to use Open MPI as a pure communication library
>>> among these processes?
>>> >
>>> > Thanks,
>>> > Supun..
>>> > ___
>>> > users mailing list
>>> > us...@open-mpi.org
>>> > Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>>> > Link to this post:
>>> http://www.open-mpi.org/community/lists/users/2016/07/29612.php
>>>
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>>> Link to this post:
>>> http://www.open-mpi.org/community/lists/users/2016/07/29613.php
>>
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/07/29614.php
>>
>>
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/07/29615.php
>>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/07/29616.php
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/07/29617.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


[OMPI users] ompi_info -c does not print configure arguments

2016-07-23 Thread Jeff Hammond
           Fort real8 align: 1
          Fort real16 align: 1
        Fort dbl prec align: 1
            Fort cplx align: 1
        Fort dbl cplx align: 1
           Fort cplx8 align: 1
          Fort cplx16 align: 1
          Fort cplx32 align: 1
                C profiling: yes
              C++ profiling: yes
      Fort mpif.h profiling: yes
     Fort use mpi profiling: yes
      Fort use mpi_f08 prof: yes
             C++ exceptions: no
             Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes,
                             OMPI progress: no, ORTE progress: yes, Event lib: yes)
              Sparse Groups: no
               Build CFLAGS: -O3 -DNDEBUG -finline-functions -fno-strict-aliasing
                             -restrict -Qoption,cpp,--extended_float_types -pthread
             Build CXXFLAGS: -O3 -DNDEBUG -finline-functions -pthread
              Build FCFLAGS:
              Build LDFLAGS: -L/home/projects/x86-64-knl/hwloc/1.11.3/lib
                 Build LIBS: -lrt -lutil -lhwloc
       Wrapper extra CFLAGS: -pthread
     Wrapper extra CXXFLAGS: -pthread
      Wrapper extra FCFLAGS:
      Wrapper extra LDFLAGS: -Wl,-rpath -Wl,@{libdir} -Wl,--enable-new-dtags
         Wrapper extra LIBS: -ldl -lutil
     Internal debug support: no
     MPI interface warnings: yes
        MPI parameter check: runtime
   Memory profiling support: no
   Memory debugging support: no
                 dl support: yes
      Heterogeneous support: no
    mpirun default --prefix: no
            MPI I/O support: yes
          MPI_WTIME support: gettimeofday
        Symbol vis. support: yes
      Host topology support: yes
             MPI extensions:
      FT Checkpoint support: no (checkpoint thread: no)
      C/R Enabled Debugging: no
        VampirTrace support: yes
     MPI_MAX_PROCESSOR_NAME: 256
       MPI_MAX_ERROR_STRING: 256
        MPI_MAX_OBJECT_NAME: 64
           MPI_MAX_INFO_KEY: 36
           MPI_MAX_INFO_VAL: 256
          MPI_MAX_PORT_NAME: 1024
     MPI_MAX_DATAREP_STRING: 128

How do I extract the configure arguments from an Open MPI installation?  I am
trying to reproduce a build exactly, and I do not have access to the
config.log from the original build.
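
One hedged thing to try (not a confirmed answer): grep the complete
ompi_info output, which in some builds records the configure command line;
whether that field is present depends on how the installation was built.

ompi_info --all | grep -i configure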

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] mpi_f08 Question: set comm on declaration error, and other questions

2016-08-21 Thread Jeff Hammond
 to MPI_Comm_Rank, and yet, it worked and the error code was 0
> (which I'd take as success). Even if you couldn't detect this at compile
> time, I'm surprised it doesn't trigger a run-time error.  Is this the
> correct behavior according to the Standard?
>
> I think you're passing an undefined value, so the results will be
> undefined.
>
> It's quite possible that the comm%mpi_val inside the comm is (randomly?)
> assigned to 0, which is the same value as mpif.f's MPI_COMM_WORLD, and
> therefore your comm is effectively the same as mpi_f08's MPI_COMM_WORLD --
> which is why MPI_COMM_RANK and MPI_COMM_SIZE worked for you.
>
> Indeed, when I run your program, I get:
>
> -
> $ ./foo
> [savbu-usnic-a:31774] *** An error occurred in MPI_Comm_rank
> [savbu-usnic-a:31774] *** reported by process [756088833,0]
> [savbu-usnic-a:31774] *** on communicator MPI_COMM_WORLD
> [savbu-usnic-a:31774] *** MPI_ERR_COMM: invalid communicator
> [savbu-usnic-a:31774] *** MPI_ERRORS_ARE_FATAL (processes in this
> communicator will now abort,
> [savbu-usnic-a:31774] ***and potentially your MPI job)
> -
>
> I.e., MPI_COMM_RANK is aborting because the communicator being passed in
> is invalid.
>
>
>
>
>
> Huh. I guess I'd assumed that the MPI Standard would have made sure a
> declared communicator that hasn't been filled would have been an error to
> use.
>
>
>
> When I get back on Monday, I'll try out some other compilers as well as
> try different compiler options (e.g., -g -O0, say). Maybe this is just an
> undefined behavior, but it's not one I'm too pleased about. I'd have
> expected the result you got. Now I'm scared that somewhere in my code, in
> the future, there could be a rogue comm declared and never nulled out, so I
> think it's executing on some subcomm, but it runs on MCW.
>
>
>
> Welp, maybe for safety it's time to make a vim macro that does:
>
>
>
> type(MPI_Comm) :: comm
>
> comm = MPI_COMM_NULL
>
>
>
> I'm pretty sure that will *never* execute anything on that comm until I
> fill it with what I want later on. Just wish I could do that in one
> statement.
>
>
>
>
>
> --
>
> Matt Thompson
>
> Man Among Men
>
> Fulcrum of History
>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread Jeff Hammond
On Monday, August 22, 2016, Jingchao Zhang  wrote:

> Hi all,
>
>
> We compiled openmpi/2.0.0 with gcc/6.1.0 and intel/13.1.3. Both of them
> have odd behaviors when trying to read from standard input.
>
>
> For example, if we start the application lammps across 4 nodes, each node
> 16 cores, connected by Intel QDR Infiniband, mpirun works fine for the
> 1st time, but always stuck in a few seconds thereafter.
>
> Command:
>
> mpirun ./lmp_ompi_g++ < in.snr
>
> in.snr is the Lammps input file. compiler is gcc/6.1.
>
>
Using stdin with MPI codes is at best brittle. It is generally
discouraged. Furthermore, it is straight up impossible on some
supercomputer architectures.

> Instead, if we use
>
> mpirun ./lmp_ompi_g++ -in in.snr
>
> it works 100%.
>
Just do this all the time. AFAIK Quantum Espresso has the same option. I
never need stdin to run MiniDFT (i.e. QE-lite).

Since both codes you name already have the correct workaround for stdin, I
would not waste any time debugging this. Just do the right thing from now
on and enjoy having your applications work.

Jeff



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] job distribution issue

2016-09-21 Thread Jeff Hammond
Please include full details, such as:
- complete input file, unless it is included in the output (this happens if
you set "echo" in your input file, which you should always do as the first
line)
- complete output file, with both stdout and stderr
- how you compiled NWChem
-- all environment variables, especially ARMCI_NETWORK, MPI_LIB and LIBMPI
-- which compilers you used, and if you set CC=mpicc, then
--- which mpicc
--- mpicc -show (this works for MPICH; there is an equivalent for Open-MPI
that I do not remember; see the sketch after this list)
- how you run NWChem, including
-- ldd nwchem
-- which mpirun
- platform details
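
A hedged sketch of how one might collect several of the items above on a
typical Linux cluster; the commands are standard, and --showme is the Open
MPI wrapper option that plays the role of MPICH's -show:

ldd $(which nwchem)        # which MPI library nwchem is actually linked against
which mpirun
mpicc --showme             # underlying compiler and flags used by the Open MPI wrapper
echo $ARMCI_NETWORK        # the ARMCI network NWChem was configured for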

Gilles may be right, but you should always include such things when
reporting application problems, particularly for NWChem, because it is very
easy to run NWChem such that it will crash, independent of MPI issues.

You may find that http://www.nwchem-sw.org/index.php/Special:AWCforum is a
better place to report NWChem issues.

Thanks,

Jeff, who has some experience with things that can go wrong with NWChem :-)

On Wed, Sep 14, 2016 at 4:27 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> That typically occurs if nwchem is linked with MPICH and you are using
> OpenMPI mpirun.
> A first, i recommend you double check your environment, and run
> ldd nwchem
> the very same Open MPI is used by everyone
>
> Cheers,
>
> Gilles
>
>
> On Wednesday, September 14, 2016, abhisek Mondal 
> wrote:
>
>> Hi,
>> I'm on a single socket, 20 threaded machine.
>> I'm trying to run a job of "nwchem" with parallel processing mode (with
>> load balancing).
>>
>> I was trying with: "mpirun -np 4 nwchem my_file.nw"
>> But this is launching the same job 4 times in a row and resulting in a
>> crash. Am I going wrong in this scenario ?
>>
>> A little advice would have been really great.
>>
>> Thank you.
>>
>>
>> --
>> Abhisek Mondal
>>
>> *Research Fellow*
>>
>> *Structural Biology and Bioinformatics Division*
>> *CSIR-Indian Institute of Chemical Biology*
>>
>> *Kolkata 700032*
>>
>> *INDIA*
>>
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-10-16 Thread Jeff Hammond
If you want to keep long-waiting MPI processes from clogging your CPU
pipeline and heating up your machines, you can turn blocking MPI
collectives into nicer ones by implementing them in terms of MPI-3
nonblocking collectives using something like the following.

I typed this code straight into this email, so you should validate it
carefully.

Jeff

#ifdef HAVE_UNISTD_H
#include <unistd.h>  /* usleep() and sleep() */
const int myshortdelay = 1; /* microseconds */
const int mylongdelay = 1; /* seconds */
#else
#define USE_USLEEP 0
#define USE_SLEEP 0
#endif

#ifdef HAVE_SCHED_H
#include <sched.h>   /* sched_yield() */
#else
#define USE_YIELD 0
#endif

int MPI_Bcast( void *buffer, int count, MPI_Datatype datatype, int root,
MPI_Comm comm )
{
  MPI_Request request;
  {
int rc = PMPI_Ibcast(buffer, count, datatype, root, comm, &request);
if (rc!=MPI_SUCCESS) return rc;
  }
  int flag = 0;
  while (!flag)
  {
int rc = PMPI_Test(&request, &flag, MPI_STATUS_IGNORE);
if (rc!=MPI_SUCCESS) return rc;

/* pick one of these... */
#if USE_YIELD
sched_yield();
#elif USE_USLEEP
usleep(myshortdelay);
#elif USE_SLEEP
sleep(mylongdelay);
#elif USE_CPU_RELAX
cpu_relax(); /*
http://linux-kernel.2935.n7.nabble.com/x86-cpu-relax-why-nop-vs-pause-td398656.html
*/
#else
#warning Hard polling may not be the best idea...
#endif
  }
  return MPI_SUCCESS;
}

On Sun, Oct 16, 2016 at 2:24 AM, MM  wrote:
>
> I would like to see if there are any updates re this thread back from
2010:
>
> https://mail-archive.com/users@lists.open-mpi.org/msg15154.html
>
> I've got 3 boxes at home, a laptop and 2 other quadcore nodes . When the
CPU is at 100% for a long time, the fans make quite some noise:-)
>
> The laptop runs the UI, and the 2 other boxes are the compute nodes.
> The user triggers compute tasks at random times... In between those times
when no parallelized compute is done, the user does analysis, looks at data
and so on.
> This does not involve any MPI compute.
> At that point, the nodes are blocked in a mpi_broadcast with each of the
4 processes on each of the nodes polling at 100%, triggering the cpu fan:-)
>
> homogeneous openmpi 1.10.3  linux 4.7.5
>
> Nowadays, are there any more options than the yield_when_idle mentioned
in that initial thread?
>
> The model I have used for so far is really a master/slave model where the
master sends the jobs (which take substantially longer than the MPI
communication itself), so in this model I would want the mpi nodes to be
really idle and i can sacrifice the latency while there's nothing to do.
> if there are no other options, is it possible to somehow start all the
processes outside of the mpi world, then only start the mpi framework once
it's needed?
>
> Regards,
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users




--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Performing partial calculation on a single node in an MPI job

2016-10-17 Thread Jeff Hammond
George:

http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm
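
For context, a hedged mpi_f08 sketch of the problem that section describes;
the names are illustrative, not from the original code. A vector-subscripted
section is generally passed via a compiler-generated temporary, which is
released when the call returns, so a nonblocking operation ends up
referencing dead memory.

integer :: idx(3) = [1, 4, 7]
real :: buf(10)
type(MPI_Request) :: req
! buf(idx) is not contiguous; the compiler may pass a temporary copy that
! disappears when MPI_Isend returns, long before the transfer completes.
! Copy the data into a contiguous array (or use a derived datatype) instead.
call MPI_Isend(buf(idx), 3, MPI_REAL, 1, 0, MPI_COMM_WORLD, req)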

Jeff

On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca  wrote:

> Vahid,
>
> You cannot use Fortran's vector subscript with MPI.
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Fortran and MPI-3 shared memory

2016-10-25 Thread Jeff Hammond
I can reproduce this with Intel 17 and MPICH on Mac OSX so it's not an
Open-MPI issue.  I added VOLATILE to the shared memory objects to prevent
Fortran compiler optimizations as well as a bunch of MPI_Win_sync calls
(after replacing fence with lock_all/unlock_all), but neither changed the
outcome.

While I claim a modest understanding of MPI-3 RMA and Fortran 2008,
unfortunately, I have never figured out how to use MPI-3 shared memory from
Fortran, which is especially unfortunate since it seems to be a fantastic
source of frustration to both real users such as yourself and MPI+Fortran
standard experts (Rolf).

Sorry for the unsatisfying response, but my suspicion is that this is a
program correctness issue.  I can't point to any error, but I've ruled out
the obvious alternatives.
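
For reference, a minimal sketch of the pattern under discussion
(MPI_Win_allocate_shared + MPI_Win_shared_query + c_f_pointer) using mpi_f08;
this is the commonly cited recipe with fence synchronization, not a verified
fix for the problem reported in this thread:

program shm_sketch
  use mpi_f08
  use iso_c_binding
  implicit none
  integer :: noderank, disp_unit
  integer(kind=MPI_ADDRESS_KIND) :: winsize
  type(MPI_Comm) :: nodecomm
  type(MPI_Win)  :: win
  type(c_ptr)    :: baseptr
  real(8), pointer :: shared(:)

  call MPI_Init()
  ! communicator of the ranks that share a node
  call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                           MPI_INFO_NULL, nodecomm)
  call MPI_Comm_rank(nodecomm, noderank)

  ! rank 0 allocates 100 doubles; everyone else contributes nothing
  winsize = 0
  if (noderank == 0) winsize = 100 * 8
  call MPI_Win_allocate_shared(winsize, 8, MPI_INFO_NULL, nodecomm, baseptr, win)

  ! all ranks map rank 0's segment into a Fortran pointer
  call MPI_Win_shared_query(win, 0, winsize, disp_unit, baseptr)
  call c_f_pointer(baseptr, shared, [winsize / disp_unit])

  call MPI_Win_fence(0, win)
  if (noderank == 0) shared(1) = 42.0d0
  call MPI_Win_fence(0, win)

  call MPI_Win_free(win)
  call MPI_Finalize()
end program shm_sketch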

Jeff

On Tue, Oct 25, 2016 at 11:29 AM, Tom Rosmond  wrote:

> All:
>
> I am trying to understand the use of the shared memory features of MPI-3
> that allow direct sharing of the memory space of on-node processes.
> Attached are 2 small test programs, one written in C (testmpi3.c), the
> other F95 (testmpi3.f90) .  They are solving the identical 'halo' exchange
> problem.  'testmpi3.c' is a simplified version of an example program from a
> presentation by Mark Lubin of Intel.  I wrote 'testmpi3.f90' to mimic the C
> version.
>
>  Also attached are 2 text files of the compile, execution, and output of
> the respective programs:
>
> CC_testmpi3.txt
> F95_testmpi3.txt
>
> Note: All 4 files are contained in the attached 'testmpi3.tar.gz'.
>
> Comparing the outputs of each version, it is clear that the shared memory
> copies in 'testmpi3.c' are working correctly, but not in 'testmpi3.f90'.
> As far as I can tell, the 2 programs are equivalent up to line 134 of
> 'testmpi3.c' and lines 97-101 of 'testmpi3.f90'. I thought the calls to
> 'c_f_pointer' would produce Fortran pointers that would access the correct
> shared memory addresses as the C-pointers do in 'testmpi3.c', but clearly
> that isn't happening. Can anyone explain why not, and what is needed to
> make this happen. Any suggestions are welcome.
>
> My environment:
>  Scientific Linux 6.8
>  INTEL FORTRAN and ICC version 15.0.2.164
>  OPEN-MPI 2.0.1
>
>
> T. Rosmond
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] OMPI users] Fortran and MPI-3 shared memory

2016-10-27 Thread Jeff Hammond
Yes, I tried -O0 and -O3.  But VOLATILE is going to thwart a wide range of
optimizations that would break this code.

Jeff

On Thu, Oct 27, 2016 at 2:19 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> Jeff,
>
> Out of curiosity, did you compile the Fortran test program with -O0 ?
>
> Cheers,
>
> Gilles
>
> Tom Rosmond  wrote:
> Jeff,
>
> Thanks for looking at this.  I know it isn't specific to Open-MPI, but it
> is a frustrating issue vis-a-vis MPI and Fortran.  There are many very
> large MPI applications around the world written in Fortran that could
> benefit greatly from this MPI-3 capability.  My own background is in
> numerical weather prediction, and I know it would be welcome in that
> community.  Someone knowledgeable in both C and Fortran should be able to
> get to bottom of it.
>
> T. Rosmond
>
>
>
> On 10/25/2016 03:05 PM, Jeff Hammond wrote:
>
> I can reproduce this with Intel 17 and MPICH on Mac OSX so it's not an
> Open-MPI issue.  I added VOLATILE to the shared memory objects to prevent
> Fortran compiler optimizations as well as a bunch of MPI_Win_sync calls
> (after replacing fence with lock_all/unlock_all), but neither changed the
> outcome.
>
> While I claim a modest understanding of MPI-3 RMA and Fortran 2008,
> unfortunately, I have never figured out how to use MPI-3 shared memory from
> Fortran, which is especially unfortunate since it seems to be a fantastic
> source of frustration to both real users such as yourself and MPI+Fortran
> standard experts (Rolf).
>
> Sorry for the unsatisfying response, but my suspicion is that this is a
> program correctness issue.  I can't point to any error, but I've ruled out
> the obvious alternatives.
>
> Jeff
>
> On Tue, Oct 25, 2016 at 11:29 AM, Tom Rosmond 
> wrote:
>
>> All:
>>
>> I am trying to understand the use of the shared memory features of MPI-3
>> that allow direct sharing of the memory space of on-node processes.
>> Attached are 2 small test programs, one written in C (testmpi3.c), the
>> other F95 (testmpi3.f90) .  They are solving the identical 'halo' exchange
>> problem.  'testmpi3.c' is a simplified version of an example program from a
>> presentation by Mark Lubin of Intel.  I wrote 'testmpi3.f90' to mimic the C
>> version.
>>
>>  Also attached are 2 text files of the compile, execution, and output of
>> the respective programs:
>>
>> CC_testmpi3.txt
>> F95_testmpi3.txt
>>
>> Note: All 4 files are contained in the attached 'testmpi3.tar.gz'.
>>
>> Comparing the outputs of each version, it is clear that the shared memory
>> copies in 'testmpi3.c' are working correctly, but not in 'testmpi3.f90'.
>> As far as I can tell, the 2 programs are equivalent up to line 134 of
>> 'testmpi3.c' and lines 97-101 of 'testmpi3.f90'. I thought the calls to
>> 'c_f_pointer' would produce Fortran pointers that would access the correct
>> shared memory addresses as the C-pointers do in 'testmpi3.c', but clearly
>> that isn't happening. Can anyone explain why not, and what is needed to
>> make this happen. Any suggestions are welcome.
>>
>> My environment:
>>  Scientific Linux 6.8
>>  INTEL FORTRAN and ICC version 15.0.2.164
>>  OPEN-MPI 2.0.1
>>
>>
>> T. Rosmond
>>
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>>
>
>
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>
>
> ___
> users mailing 
> listus...@lists.open-mpi.orghttps://rfd.newmexicoconsortium.org/mailman/listinfo/users
>
>
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-11-07 Thread Jeff Hammond
On Mon, Nov 7, 2016 at 8:54 AM, Dave Love  wrote:
>
> [Some time ago]
> Jeff Hammond  writes:
>
> > If you want to keep long-waiting MPI processes from clogging your CPU
> > pipeline and heating up your machines, you can turn blocking MPI
> > collectives into nicer ones by implementing them in terms of MPI-3
> > nonblocking collectives using something like the following.
>
> I see sleeping for ‘0s’ typically taking ≳50μs on Linux (measured on
> RHEL 6 or 7, without specific tuning, on recent Intel).  It doesn't look
> like something you want in paths that should be low latency, but maybe
> there's something you can do to improve that?  (sched_yield takes <1μs.)

I demonstrated a bunch of different implementations with the instruction to
"pick one of these...", where establishing the relationship between
implementation and performance was left as an exercise for the reader :-)
 If latency is of the utmost importance to you, you should use the pause
instruction, but this will of course keep the hardware thread running.

Note that MPI implementations may be interested in taking advantage of
https://software.intel.com/en-us/blogs/2016/10/06/intel-xeon-phi-product-family-x200-knl-user-mode-ring-3-monitor-and-mwait.
It's not possible to use this from outside of MPI because the memory
changed when the ibcast completes locally may not be visible to the user,
but it would allow blocking MPI calls to park hardware threads.

> > I typed this code straight into this email, so you should validate it
> > carefully.
>
> ...
>
> > #elif USE_CPU_RELAX
> > cpu_relax(); /*
> >
http://linux-kernel.2935.n7.nabble.com/x86-cpu-relax-why-nop-vs-pause-td398656.html
> > */
>
> Is cpu_relax available to userland?  (GCC has an x86-specific intrinsic
> __builtin_ia32_pause in fairly recent versions, but it's not in RHEL6's
> gcc-4.4.)

The pause instruction is available in ring3.  Just use that if cpu_relax
wrapper is not implemented.
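
For completeness, a hedged userland sketch of that test loop; _mm_pause() is
the intrinsic spelling provided by the Intel, GCC, and Clang x86 headers, and
the variables are the same ones used in the earlier MPI_Bcast example:

#include <immintrin.h>   /* _mm_pause() */

while (!flag) {
    PMPI_Test(&request, &flag, MPI_STATUS_IGNORE);
    _mm_pause();         /* emits the pause instruction: the hardware thread stays
                            busy, but the pipeline relaxes and power drops a bit */
}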

Jeff

--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Follow-up to Open MPI SC'16 BOF

2016-11-22 Thread Jeff Hammond
>
>
>
>1. MPI_ALLOC_MEM integration with memkind
>
It would make sense to prototype this as a standalone project that is
integrated with any MPI library via PMPI.  It's probably a day or two of
work to get that going.
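
A hedged sketch of what such a prototype could look like using the profiling
(PMPI) interception mechanism; hbw_malloc/hbw_free from <hbwmalloc.h> are the
public memkind API for high-bandwidth memory, and this shim simply replaces
the MPI allocator rather than inspecting the info argument:

#include <mpi.h>
#include <hbwmalloc.h>

int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr)
{
    void *p = hbw_malloc((size_t)size);   /* allocate from MCDRAM/HBM */
    if (p == NULL) return MPI_ERR_NO_MEM;
    *(void **)baseptr = p;
    return MPI_SUCCESS;
}

int MPI_Free_mem(void *base)
{
    hbw_free(base);
    return MPI_SUCCESS;
}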

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Cast MPI inside another MPI?

2016-11-27 Thread Jeff Hammond
Have you tried subcommunicators? MPI has been well-suited to hierarchical
parallelism since the MPI-1 days.
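
A hedged sketch of the subcommunicator approach; the grouping of four ranks
per subcommunicator is arbitrary, and data/n are placeholders:

MPI_Comm subcomm;
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
int color = world_rank / 4;                /* outer level: groups of 4 ranks */
MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);
/* inner level: collectives on subcomm involve only the ranks in this group */
MPI_Allreduce(MPI_IN_PLACE, data, n, MPI_DOUBLE, MPI_SUM, subcomm);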

Additionally, MPI-3 enables MPI+MPI as George noted.

Your question is probably better suited for Stack Overflow, since it's not
implementation-specific...

Jeff

On Fri, Nov 25, 2016 at 3:34 AM Diego Avesani 
wrote:

> Dear all,
>
> I have the following question. Is it possible to cast an MPI inside
> another MPI?
> I would like to have to level of parallelization, but I would like to
> avoid the MPI-openMP paradigm.
>
> Another question. I normally use openMPI but I would like to read
> something to understand and learn all its potentialities. Can anyone
> suggest me any book or documentation?
>
> Thanks
>
> Diego
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Issues building Open MPI 2.0.1 with PGI 16.10 on macOS

2016-11-28 Thread Jeff Hammond
> PGC/x86-64 OSX 16.10-0: compilation aborted
> make[3]: *** [proc.lo] Error 1
> make[2]: *** [all-recursive] Error 1
> make[1]: *** [all-recursive] Error 1
> make: *** [all-recursive] Error 1
>
> I guess my question is whether this is an issue with PGI or Open MPI or
> both? I'm not too sure. I've also asked about this on the PGI forums as
> well (http://www.pgroup.com/userforum/viewtopic.php?t=5413&start=0) since
> I'm not sure. But, no matter what, does anyone have thoughts on how to
> solve this?
>
> Thanks,
> Matt
>
> --
> Matt Thompson
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-11-28 Thread Jeff Hammond
>
>
> > Note that MPI implementations may be interested in taking advantage of
> > https://software.intel.com/en-us/blogs/2016/10/06/intel-
> xeon-phi-product-family-x200-knl-user-mode-ring-3-monitor-and-mwait.
>
> Is that really useful if it's KNL-specific and MSR-based, with a setup
> that implementations couldn't assume?
>
>
Why wouldn't it be useful in the context of a parallel runtime system like
MPI?  MPI implementations take advantage of all sorts of stuff that needs
to be queried with configuration, during compilation or at runtime.

TSX requires that one check the CPUID bits for it, and plenty of folks are
happily using MSRs (e.g.
http://www.brendangregg.com/blog/2014-09-15/the-msrs-of-ec2.html).


> >> Is cpu_relax available to userland?  (GCC has an x86-specific intrinsic
> >> __builtin_ia32_pause in fairly recent versions, but it's not in RHEL6's
> >> gcc-4.4.)
> >
> > The pause instruction is available in ring3.  Just use that if cpu_relax
> > wrapper is not implemented.
>
> [OK; I meant in a userland library.]
>
> Are there published measurements of the typical effects of spinning and
> ameliorations on some sort of "representative" system?
>
>
None that are published, unfortunately.

Best,

Jeff


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Issues building Open MPI 2.0.1 with PGI 16.10 on macOS

2016-11-28 Thread Jeff Hammond
Attaching the config.log that contains the details of the following failures
is the best way to make forward progress here.  That none of the system
headers are detected suggests a rather serious compiler problem that may
not have anything to do with headers.
checking for sys/types.h... no
checking for sys/stat.h... no
checking for stdlib.h... no
checking for string.h... no
checking for memory.h... no
checking for strings.h... no
checking for inttypes.h... no
checking for stdint.h... no
checking for unistd.h... no


On Mon, Nov 28, 2016 at 9:49 AM, Matt Thompson  wrote:

> Hmm. Well, I definitely have /usr/include/stdint.h as I previously was
> trying work with clang as compiler stack. And as near as I can tell, Open
> MPI's configure is seeing /usr/include as oldincludedir, but maybe that's
> not how it finds it?
>
> If I check my configure output:
>
> 
> 
> == Configuring Open MPI
> 
> 
>
> *** Startup tests
> checking build system type... x86_64-apple-darwin15.6.0
> 
> checking for sys/types.h... yes
> checking for sys/stat.h... yes
> checking for stdlib.h... yes
> checking for string.h... yes
> checking for memory.h... yes
> checking for strings.h... yes
> checking for inttypes.h... yes
> checking for stdint.h... yes
> checking for unistd.h... yes
>
> So, the startup saw it. But:
>
> --- MCA component event:libevent2022 (m4 configuration macro, priority 80)
> checking for MCA component event:libevent2022 compile mode... static
> checking libevent configuration args... --disable-dns --disable-http
> --disable-rpc --disable-openssl --enable-thread-support --d
> isable-evport
> configure: OPAL configuring in opal/mca/event/libevent2022/libevent
> configure: running /bin/sh './configure' --disable-dns --disable-http
> --disable-rpc --disable-openssl --enable-thread-support --
> disable-evport  '--disable-wrapper-rpath' 'CC=pgcc' 'CXX=pgc++'
> 'FC=pgfortran' 'CFLAGS=-m64' 'CXXFLAGS=-m64' 'FCFLAGS=-m64' '--w
> ithout-verbs' 
> '--prefix=/Users/mathomp4/installed/Compiler/pgi-16.10/openmpi/2.0.1'
> 'CPPFLAGS=-I/Users/mathomp4/src/MPI/openmpi-
> 2.0.1 -I/Users/mathomp4/src/MPI/openmpi-2.0.1 
> -I/Users/mathomp4/src/MPI/openmpi-2.0.1/opal/include
>   -I/Users/mathomp4/src/MPI/o
> penmpi-2.0.1/opal/mca/hwloc/hwloc1112/hwloc/include -Drandom=opal_random'
> --cache-file=/dev/null --srcdir=. --disable-option-che
> cking
> checking for a BSD-compatible install... /usr/bin/install -c
> 
> checking for sys/types.h... no
> checking for sys/stat.h... no
> checking for stdlib.h... no
> checking for string.h... no
> checking for memory.h... no
> checking for strings.h... no
> checking for inttypes.h... no
> checking for stdint.h... no
> checking for unistd.h... no
>
> So, it's like whatever magic found stdint.h for the startup isn't passed
> down to libevent when it builds? As I scan the configure output, PMIx sees
> stdint.h in its section and ROMIO sees it as well, but not libevent2022.
> The Makefiles inside of libevent2022 do have 'oldincludedir =
> /usr/include'. Hmm.
>
>
>
> On Mon, Nov 28, 2016 at 11:39 AM, Bennet Fauber  wrote:
>
>> I think PGI uses installed GCC components for some parts of standard C
>> (at least for some things on Linux, it does; and I imagine it is
>> similar for Mac).  If you look at the post at
>>
>> http://www.pgroup.com/userforum/viewtopic.php?t=5147&sid=17f
>> 3afa2cd0eec05b0f4e54a60f50479
>>
>> The problem seems to have been one with the Xcode configuration:
>>
>> "It turns out my Xcode was messed up as I was missing /usr/include/.
>>  After rerunning xcode-select --install it works now."
>>
>> On my OS X 10.11.6, I have /usr/include/stdint.h without having the
>> PGI compilers.  This may be related to the GNU command line tools
>> installation...?  I think that is now optional and may be needed.
>>
>> Sorry for the noise if this is irrelevant.
>>
>>
>>
>> On Mon, Nov 28, 2016 at 11:18 AM, Jeff Hammond 
>> wrote:
>> > The following is the code that fails.  The comments indicate the likely
>> > source of the error.
>> >
>> > Please see
>> > http://www.pgroup.com/userforum/viewtopic.php?t=5147&sid=17f
>> 3afa2cd0eec05b0f4e54a60f50479
>> > and other entries on https://www.google.com/search?q=pgi+stdint.h.
>> >
>> > You may want to debug libevent by itself
>> >

Re: [OMPI users] Rounding errors and MPI

2017-01-18 Thread Jeff Hammond
If compiling with -O0 solves the problem, then you should use -assume
protect-parens and/or one of the options discussed in the PDF you will find
at
https://software.intel.com/en-us/articles/consistency-of-floating-point-results-using-the-intel-compiler.
Disabling optimization is a heavy hammer that you don't want to use if you
care about performance at all.  If you are using Fortran and MPI, it seems
likely you care about performance.
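
A hedged example of what that looks like in practice; the option names are
documented Intel compiler flags, the file name is a placeholder, and the best
combination depends on the code:

# keep optimization, but use a value-safe FP model and honor parentheses
ifort -O2 -fp-model precise -assume protect-parens -c mycode.f90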

Jeff

On Mon, Jan 16, 2017 at 8:31 AM, Oscar Mojica  wrote:

> Thanks guys for your answers.
>
>
> Actually, the optimization was not disabled, and that was the problem,
> compiling it with -o0 solves it. Sorry.
>
>
> Oscar Mojica
> Geologist Ph.D. in Geophysics
> SENAI CIMATEC Supercomputing Center
> Lattes: http://lattes.cnpq.br/0796232840554652
>
>
>
> --
> *From:* users  on behalf of Yann Jobic <
> yann.jo...@univ-amu.fr>
> *Sent:* Monday, January 16, 2017 12:01 PM
> *To:* Open MPI Users
> *Subject:* Re: [OMPI users] Rounding errors and MPI
>
> Hi,
>
> Is there an overlapping section in the MPI part ?
>
> Otherwise, please check :
> - declaration type of all the variables (consistency)
> - correct initialization of the array "wave" (to zero)
> - maybe use temporary variables like
> real size1,size2,factor
> size1 = dx+dy
> size2 = dhx+dhy
> factor = dt*size2/(size1**2)
> and then in the big loop:
> wave(it,j,k)= wave(it,j,k)*factor
> The code will also run faster.
>
> Yann
>
> Le 16/01/2017 à 14:28, Oscar Mojica a écrit :
>
> Hello everybody
>
> I'm having a problem with a parallel program written in fortran. I have a
> 3D array which is divided in two in the third dimension so thats two
> processes
>
> perform some operations with a part of the cube, using a subroutine. Each
> process also has the complete cube. Before each process call the
> subroutine,
>
> I compare its sub array with its corresponding part of the whole cube. These
> are the same. The subroutine simply performs point-to-point operations in
> a loop, i.e.
>
>
>  do k=k1,k2
>   do j=1,nhx
>do it=1,nt
> wave(it,j,k)= wave(it,j,k)*dt/(dx+dy)*(dhx+dhy)/(dx+dy)
>  end do
>end do
>   enddo
>
>
> where, wave is the 3D array and the other values are constants.
>
>
> After leaving the subroutine I notice that there is a difference in the
> values calculated by process 1 compared to the values that I get if the
> whole cube is passed to the subroutine but that this only works on its
> part, i.e.
>
>
> ---complete2017-01-12 10:30:23.0 -0400
> +++ half  2017-01-12 10:34:57.0 -0400
> @@ -4132545,7 +4132545,7 @@
>-2.5386049E-04
>-2.9899486E-04
>-3.4697619E-04
> -  -3.7867704E-04
> + -3.7867710E-04
> 0.000E+00
> 0.000E+00
> 0.000E+00
>
>
> When I do this with more processes the same thing happens with all
> processes other than zero. I find it very strange. I am disabling the
> optimization when compiling.
>
> In the end the results are visually the same, but not numerically. I am
> working with simple precision.
>
>
> Any idea what may be going on? I do not know if this is related to MPI
>
>
> Oscar Mojica
> Geologist Ph.D. in Geophysics
> SENAI CIMATEC Supercomputing Center
> Lattes: http://lattes.cnpq.br/0796232840554652
>
>
>
> ___
> users mailing 
> listus...@lists.open-mpi.orghttps://rfd.newmexicoconsortium.org/mailman/listinfo/users
>
>
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-02-21 Thread Jeff Hammond
This is fine if each thread interacts with a different window, no?
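
A minimal sketch of the pattern I mean (names illustrative): wins[t] is a
window that only thread t ever communicates on, all of them locked up front
with MPI_Win_lock_all.

  MPI_Put(buf, n, MPI_DOUBLE, target, disp, n, MPI_DOUBLE, wins[t]);
  MPI_Win_flush(target, wins[t]);  /* no other thread touches wins[t] */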

Jeff
On Sun, Feb 19, 2017 at 5:32 PM Nathan Hjelm  wrote:

> You can not perform synchronization at the same time as communication on
> the same target. This means if one thread is in
> MPI_Put/MPI_Get/MPI_Accumulate (target) you can’t have another thread in
> MPI_Win_flush (target) or MPI_Win_flush_all(). If your program is doing
> that it is not a valid MPI program. If you want to ensure a particular put
> operation is complete try MPI_Rput instead.
>
> -Nathan
>
> > On Feb 19, 2017, at 2:34 PM, Joseph Schuchart  wrote:
> >
> > All,
> >
> > We are trying to combine MPI_Put and MPI_Win_flush on locked (using
> MPI_Win_lock_all) dynamic windows to mimic a blocking put. The application
> is (potentially) multi-threaded and we are thus relying on
> MPI_THREAD_MULTIPLE support to be available.
> >
> > When I try to use this combination (MPI_Put + MPI_Win_flush) in our
> application, I am seeing threads occasionally hang in MPI_Win_flush,
> probably waiting for some progress to happen. However, when I try to create
> a small reproducer (attached, the original application has multiple layers
> of abstraction), I am seeing fatal errors in MPI_Win_flush if using more
> than one thread:
> >
> > ```
> > [beryl:18037] *** An error occurred in MPI_Win_flush
> > [beryl:18037] *** reported by process [4020043777,2]
> > [beryl:18037] *** on win pt2pt window 3
> > [beryl:18037] *** MPI_ERR_RMA_SYNC: error executing rma sync
> > [beryl:18037] *** MPI_ERRORS_ARE_FATAL (processes in this win will now
> abort,
> > [beryl:18037] ***and potentially your MPI job)
> > ```
> >
> > I could only trigger this on dynamic windows with multiple concurrent
> threads running.
> >
> > So: Is this a valid MPI program (except for the missing clean-up at the
> end ;))? It seems to run fine with MPICH but maybe they are more tolerant
> to some programming errors...
> >
> > If it is a valid MPI program, I assume there is some race condition in
> MPI_Win_flush that leads to the fatal error (or the hang that I observe
> otherwise)?
> >
> > I tested this with OpenMPI 1.10.5 on single node Linux Mint 18.1 system
> with stock kernel 4.8.0-36 (aka my laptop). OpenMPI and the test were both
> compiled using GCC 5.3.0. I could not run it using OpenMPI 2.0.2 due to the
> fatal error in MPI_Win_create (which also applies to
> MPI_Win_create_dynamic, see my other thread, not sure if they are related).
> >
> > Please let me know if this is a valid use case and whether I can provide
> you with additional information if required.
> >
> > Many thanks in advance!
> >
> > Cheers
> > Joseph
> >
> > --
> > Dipl.-Inf. Joseph Schuchart
> > High Performance Computing Center Stuttgart (HLRS)
> > Nobelstr. 19
> > D-70569 Stuttgart
> >
> > Tel.: +49(0)711-68565890
> > Fax: +49(0)711-6856832
> > E-Mail: schuch...@hlrs.de
> >
> > ___
> > users mailing list
> > users@lists.open-mpi.org
> > https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-03-07 Thread Jeff Hammond
;> otherwise)?
>>>>
>>>> I tested this with OpenMPI 1.10.5 on single node Linux Mint 18.1 system
>>>> with stock kernel 4.8.0-36 (aka my laptop). OpenMPI and the test were both
>>>> compiled using GCC 5.3.0. I could not run it using OpenMPI 2.0.2 due to the
>>>> fatal error in MPI_Win_create (which also applies to
>>>> MPI_Win_create_dynamic, see my other thread, not sure if they are related).
>>>>
>>>> Please let me know if this is a valid use case and whether I can
>>>> provide you with additional information if required.
>>>>
>>>> Many thanks in advance!
>>>>
>>>> Cheers
>>>> Joseph
>>>>
>>>> --
>>>> Dipl.-Inf. Joseph Schuchart
>>>> High Performance Computing Center Stuttgart (HLRS)
>>>> Nobelstr. 19
>>>> D-70569 Stuttgart
>>>>
>>>> Tel.: +49(0)711-68565890
>>>> Fax: +49(0)711-6856832
>>>> E-Mail: schuch...@hlrs.de
>>>>
>>>> ___
>>>> users mailing list
>>>> users@lists.open-mpi.org
>>>> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>>>>
>>> ___
>>> users mailing list
>>> users@lists.open-mpi.org
>>> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>>>
>>
>>
> --
> Dipl.-Inf. Joseph Schuchart
> High Performance Computing Center Stuttgart (HLRS)
> Nobelstr. 19
> D-70569 Stuttgart
>
> Tel.: +49(0)711-68565890
> Fax: +49(0)711-6856832
> E-Mail: schuch...@hlrs.de
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-15 Thread Jeff Hammond
anyone reported this issue to the cp2k people?  I know it's not
> > their problem, but I assume they'd like to know for users' sake,
> > particularly if it's not going to be addressed.  I wonder what else
> > might be affected.
> > ___
> > users mailing list
> > users@lists.open-mpi.org
> > https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users




--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-15 Thread Jeff Hammond
On Wed, Mar 15, 2017 at 5:44 PM Jeff Squyres (jsquyres) 
wrote:

> On Mar 15, 2017, at 8:25 PM, Jeff Hammond  wrote:
> >
> > I couldn't find the docs on mpool_hints, but shouldn't there be a way to
> disable registration via MPI_Info rather than patching the source?
>
> Yes; that's what I was thinking, but wanted to get the data point first.
> Specifically: if this test works (i.e., commenting out the de/registration
> avoids the slowdown), there's at least two things we devs should consider:
>
> 1. Disable the entire de/registration code path for ALLOC/FREE_MEM (e.g.,
> perhaps the lazy method is just better, anyway).
>
> 2. Provide an MCA param to disable the de/registration code path for
> ALLOC/FREE_MEM.
>
> Let's see how the test goes.
>

I agree that data is good.

Just don't forget that RMA codes are supposed to use alloc_mem and the
benefits of registering here are quite clear, unlike the 2-sided case where
eager may not need it and rendezvous is able to do on-the-fly easily
enough.

Ideally, for RMA, alloc_mem can also have an option to use shm, in cases
where win_allocate(_shared) isn't a good fit.
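
Concretely, something like this is what I have in mind on the user side
("mpool_hints" is the key mentioned above; the value below is hypothetical,
purely to show where such a knob would plug in):

  MPI_Info info;
  void *buf;
  MPI_Info_create(&info);
  MPI_Info_set(info, "mpool_hints", "no_register");  /* hypothetical value */
  MPI_Alloc_mem(nbytes, info, &buf);
  /* ... 2-sided traffic on buf ... */
  MPI_Free_mem(buf);
  MPI_Info_free(&info);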

Jeff



> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>
-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Performance degradation of OpenMPI 1.10.2 when oversubscribed?

2017-03-27 Thread Jeff Hammond
On Sat, Mar 25, 2017 at 7:15 AM Jeff Squyres (jsquyres) 
wrote:

> On Mar 25, 2017, at 3:04 AM, Ben Menadue  wrote:
> >
> > I’m not sure about this. It was my understanding that HyperThreading is
> implemented as a second set of e.g. registers that share execution units.
> There’s no division of the resources between the hardware threads, but
> rather the execution units switch between the two threads as they stall
> (e.g. cache miss, hazard/dependency, misprediction, …) — kind of like a
> context switch, but much cheaper. As long as there’s nothing being
> scheduled on the other hardware thread, there’s no impact on the
> performance. Moreover, turning HT off in the BIOS doesn’t make more
> resources available to now-single hardware thread.
>
> Here's an old post on this list where I cited a paper from the Intel
> Technology Journal.  The paper is pretty old at this point (2002, I
> believe?), but I believe it was published near the beginning of the HT
> technology at Intel:
>
>
> https://www.mail-archive.com/hwloc-users@lists.open-mpi.org/msg01135.html
>
> The paper is attached on that post; see, in particular, the section
> "Single-task and multi-task modes".
>
> All this being said, I'm a software wonk with a decent understanding of
> hardware.  But I don't closely follow all the specific details of all
> hardware.  So if Haswell / Broadwell / Skylake processors, for example, are
> substantially different than the HT architecture described in that paper,
> please feel free to correct me!
>

I don't know the details, but HPC centers like NERSC noticed a shift around
Ivy Bridge (Edison) that caused them to enable it.

https://www.nersc.gov/users/computational-systems/edison/performance-and-optimization/hyper-threading/

I know two of the authors of that 2002 paper on HT. Will ask them for
insight next time we cross paths.

Jeff

>
> > This matches our observations on our cluster — there was no
> statistically-significant change in performance between having HT turned
> off in the BIOS and turning the second hardware thread of each core off in
> Linux. We run a mix of architectures — Sandy, Ivy, Haswell, and Broadwell
> (all dual-socket Xeon E5s), and KNL, and this appears to hold true across
> of these.
>
> These are very complex architectures; the impacts of enabling/disabling HT
> are going to be highly specific to both the platform and application.
>
> > Moreover, having the second hardware thread turned on in Linux but not
> used by batch jobs (by cgroup-ing them to just one hardware thread of each
> core) substantially reduced the performance impact and jitter from the OS —
> by ~10% in at least one synchronisation-heavy application. This is likely
> because the kernel began scheduling OS tasks (Lustre, IB, IPoIB, IRQs,
> Ganglia, PBS, …) on the second, unused hardware thread of each core, which
> were then run when the batch job’s processes stalled the CPU’s execution
> units. This is with both a CentOS 6.x kernel and a custom (tickless) 7.2
> kernel.
>
> Yes, that's a pretty clever use of HT in an HPC environment.  But be aware
> that you are cutting on-core pipeline depths that can be used by
> applications to do this.  In your setup, it sounds like this is still a net
> performance win (which is pretty sweet).  But that may not be a universal
> effect.
>
> This is probably a +3 on the existing trend from the prior emails in this
> thread: "As always, experiment to find the best for your hardware and
> jobs."  ;-)
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] MPI I/O gives undefined behavior if the amount of bytes described by a filetype reaches 2^32

2017-05-02 Thread Jeff Hammond
, with filenames
> "test_too_large_blocks_single" and "test_too_large_blocks_single_even_larger",
> respectively).
>
> There are, of course, many other things one could test.
> It seems that the implementations use 32bit integer variables to compute
> the byte composition inside the filetype. Since the filetype is defined
> using two 32bit integer variables, this can easily lead to integer
> overflows if the user supplies large values. It seems that no
> implementation expects this problem and therefore they do not act
> gracefully on its occurrence.
>
> I looked at ILP64 <https://software.intel.com/en-us/node/528914> Support,
> but it only adapts the function parameters and not the internally used
> variables and it is also not available for C.
>
>
As far as I know, this won't fix anything, because it will run into all the
internal implementation issues with overflow.  The ILP64 feature for
Fortran is just to workaround the horrors of default integer width
promotion by Fortran compilers.


> I looked at integer overflow
> <https://www.gnu.org/software/libc/manual/html_node/Program-Error-Signals.html#Program%20Error%20Signals>
> (FPE_INTOVF_TRAP) trapping, which could help to verify the source of the
> problem, but it doesn't seem to be possible for C. Intel does not
> <https://software.intel.com/en-us/forums/intel-c-compiler/topic/306156>
> offer any built-in integer overflow trapping.
>
>
You might be interested in http://blog.regehr.org/archives/1154 and linked
material therein.  I think it's possible to implement the effective
equivalent of a hardware trap using the compiler, although I don't know any
(production) compiler that supports this.


> There are ways to circumvent this problem for most cases. It is only
> unavoidable if the logic of a program contains complex, non-repeating data
> structures with sizes of over (or equal) 4GiB. Even then, one could split
> up the filetype and use a different displacement in two distinct write
> calls.
>
> Still, this problem violates the standard as it produces undefined
> behavior even when using the API in a consistent way. The implementation
> should at least provide a warning for the user, but should ideally use
> larger datatypes in the filetype computations. When a user stumbles on this
> problem, he will have a hard time to debug it.
>
>
Indeed, this is a problem.  There is an effort to fix the API in MPI-4 (see
https://github.com/jeffhammond/bigmpi-paper) but as you know, there are
implementation defects that break correct MPI-3 programs that use datatypes
to workaround the limits of C int.  We were able to find a bunch of
problems in MPICH using BigMPI but clearly not all of them.
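
For what it's worth, the BigMPI-style workaround on the user side is to fold
the large count into a derived datatype so every count argument stays below
2^31 -- a sketch, where fh/offset/buf are placeholders and the chunk size is
arbitrary:

  MPI_Datatype chunk;
  MPI_Type_contiguous(1 << 30, MPI_DOUBLE, &chunk);  /* 2^30 doubles per chunk */
  MPI_Type_commit(&chunk);
  MPI_File_write_at(fh, offset, buf, 3, chunk, MPI_STATUS_IGNORE);  /* 3*2^30 doubles */
  MPI_Type_free(&chunk);

As your tests show, implementations can still overflow internally on the
resulting byte extents, which is exactly the class of defect BigMPI keeps
tripping over.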

Jeff


> Thank you very much for reading everything ;)
>
> Kind Regards,
>
> Nils
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] mpi_scatterv problem in fortran

2017-05-15 Thread Jeff Hammond
ims(1),coords(1),ista,iend)
> ! Decomposition in second (ie. Y) direction
> CALL MPE_DECOMP1D(n,dims(2),coords(2),jsta,jend)
> end subroutine fnd2ddecomp
>
> SUBROUTINE MPE_DECOMP1D(n,numprocs,myid,s,e)
> integer n,numprocs,myid,s,e,nlocal,deficit
> nlocal  = n / numprocs
> s   = myid * nlocal + 1
> deficit = mod(n,numprocs)
> s   = s + min(myid,deficit)
> ! Give one more slice to processors
> if (myid .lt. deficit) then
> nlocal = nlocal + 1
> endif
> e = s + nlocal - 1
> if (e .gt. n .or. myid .eq. numprocs-1) e = n
> end subroutine MPE_DECOMP1D
>
> END program SendRecv
>
> I am generating a 4x4 matrix, and using scatterv I am sending the blocks
> of matrices to other processors. The code works fine for 4, 2, and 16
> processors, but throws an error for three processors. What modifications do I
> have to make so that it works for any number of processors?
>
> Global matrix in Root:
>
> [ 0  4  8 12
>   1  5  9 13
>   2  6 10 14
>   3  7 11 15 ]
>
> For 4 processors each processors gets.
>
> Rank =0 : [0 4
>   1 5]
> Rank =1 : [8 12
>   9 13]
> Rank =2 : [2 6
>   3 7]
> Rank =3 : [10 14
>   11 15]
>
> Code works for 4, 2 and 16 processors; in fact it works when sub-arrays
> are of similar size. It fails for 3 processors. For 3 processors I am
> expecting:
>
> Rank =0 : [0 4 8 12
>1 5 9 13]
> Rank =1 : [2 6 10 14]
> Rank =2 : [3 7 11 15]
>
> But I am getting the following error. What am I missing? What
> modifications do I have to make to get it to work?
>
> Fatal error in PMPI_Scatterv: Message truncated, error stack:
> PMPI_Scatterv(671): MPI_Scatterv(sbuf=0x6b58c0, 
> scnts=0xf95d90, displs=0xfafbe0, dtype=USER, rbuf=0xfafc00, 
> rcount=4, MPI_INTEGER, root=0, MPI_COMM_WORLD) failed
> MPIR_Scatterv_impl(211)...:
> I_MPIR_Scatterv_intra(278): Failure during collective
> I_MPIR_Scatterv_intra(272):
> MPIR_Scatterv(147):
> MPIDI_CH3U_Receive_data_found(131): Message from rank 0 and tag 6 truncated; 
> 32 bytes received but buffer size is 16
> Fatal error in PMPI_Scatterv: Message truncated, error stack:
> PMPI_Scatterv(671): MPI_Scatterv(sbuf=0x6b58c0, 
> scnts=0x240bda0, displs=0x240be60, dtype=USER, rbuf=0x240be80, 
> rcount=4, MPI_INTEGER, root=0, MPI_COMM_WORLD) failed
> MPIR_Scatterv_impl(211)...:
> I_MPIR_Scatterv_intra(278): Failure during collective
> I_MPIR_Scatterv_intra(272):
> MPIR_Scatterv(147):
> MPIDI_CH3U_Receive_data_found(131): Message from rank 0 and tag 6 truncated; 
> 32 bytes received but buffer size is 16
> forrtl: error (69): process interrupted (SIGINT)
> Image  PCRoutineLineSource
> a.out  00479165  Unknown   Unknown  Unknown
> a.out  00476D87  Unknown   Unknown  Unknown
> a.out  0044B7C4  Unknown   Unknown  Unknown
> a.out  0044B5D6  Unknown   Unknown  Unknown
> a.out  0042DB76  Unknown   Unknown  Unknown
> a.out  004053DE  Unknown   Unknown  Unknown
> libpthread.so.07F2327456790  Unknown   Unknown  Unknown
> libc.so.6  7F2326EFE2F7  Unknown   Unknown  Unknown
> libmpi.so.12   7F2327B899E8  Unknown   Unknown  Unknown
> libmpi.so.12   7F2327C94E39  Unknown   Unknown  Unknown
> libmpi.so.12   7F2327C94B32  Unknown   Unknown  Unknown
> libmpi.so.12   7F2327B6E44A  Unknown   Unknown  Unknown
> libmpi.so.12   7F2327B6DD5D  Unknown   Unknown  Unknown
> libmpi.so.12   7F2327B6DBDC  Unknown   Unknown  Unknown
> libmpi.so.12   7F2327B6DB0C  Unknown   Unknown  Unknown
> libmpi.so.12   7F2327B6F932  Unknown   Unknown  Unknown
> libmpifort.so.12   7F2328294B1C  Unknown   Unknown  Unknown
> a.out  0040488B  Unknown   Unknown  Unknown
> a.out  0040385E  Unknown   Unknown  Unknown
> libc.so.6  7F2326E4DD5D  Unknown   Unknown  Unknown
> a.out      000000403769  Unknown   Unknown  Unknown
>
> _
> *SAVE WATER **  ~  **SAVE ENERGY**~ **~ **SAVE EARTH *[image:
> Earth-22-june.gif (7996 bytes)]
>
> http://sites.google.com/site/kolukulasivasrinivas/
>
> Siva Srinivas Kolukula, PhD
> *Scientist - B*
> Indian Tsunami Early Warning Centre (ITEWC)
> Advisory Services and Satellite Oceanography Group (ASG)
> Indian National Centre for Ocean Information Services (INCOIS)
> "Ocean Valley"
> Pragathi Nagar (B.O)
> Nizampet (S.O)
> Hyderabad - 500 090
> Telangana, INDIA
>
> Office: 040 23886124
>
>
> *Cell: +91 9381403232; +91 8977801947*
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] "undefined reference to `MPI_Comm_create_group'" error message when using Open MPI 1.6.2

2017-06-08 Thread Jeff Hammond
"I can't upgrade Open MPI on the computing nodes of this system" is false.
Open-MPI can be installed entirely in userspace in your home directory.
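
A minimal recipe (the prefix is illustrative, and this assumes your home
directory is visible on the compute nodes):

  ./configure --prefix=$HOME/sw/openmpi
  make -j8 install
  export PATH=$HOME/sw/openmpi/bin:$PATH
  export LD_LIBRARY_PATH=$HOME/sw/openmpi/lib:$LD_LIBRARY_PATH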

If you read the MPI_Comm_create_group paper, there should be instructions
on how to implement this using MPI-2 features. Jim Dinan wrote a working
version but I don't know where it is now.

Jeff

On Thu, Jun 8, 2017 at 2:59 AM Arham Amouie via users <
users@lists.open-mpi.org> wrote:

> Hello. Open MPI 1.6.2 is installed on the cluster I'm using. At the moment
> I can't upgrade Open MPI on the computing nodes of this system. My C code
> contains many calls to MPI functions. When I try to 'make' this code on the
> cluster, the only error that I get is "undefined reference to
> `MPI_Comm_create_group'".
>
> I'm able to install a newer version (like 2.1.1) of Open MPI only on the
> frontend of this cluster. Using newer version, the code is compiled and
> linked successfully. But in this case I face problem in running the
> program, since the newer version of Open MPI is not installed on the
> computing nodes.
>
> Is there any way that I can compile and link the code using Open MPI 1.6.2?
>
> Thanks,
> Arham Amouei
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Double free or corruption with OpenMPI 2.0

2017-06-13 Thread Jeff Hammond
If you are not using external32 in datatypes code, this issue doesn't
matter. I don't think most implementations support external32...

Double free indicates application error. Such errors are possible but
extremely rare inside of MPI libraries. The incidence of applications
corrupting memory is about a million times higher than MPI libraries in my
experience.
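
As for capturing output: mpirun inherits the shell's file descriptors, so
ordinary redirection works, and a heap checker will tell you where the
corruption happens. For example (program name taken from your message; the
valgrind options are just one reasonable choice):

  mpirun -np 4 cfd_software > run.log 2>&1
  mpirun -np 4 valgrind --leak-check=full --log-file=vg.%p cfd_software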

Jeff

On Tue, Jun 13, 2017 at 5:23 AM ashwin .D  wrote:

> Also when I try to build and run a make check I get these errors - Am I
> clear to proceed or is my installation broken ? This is on Ubuntu 16.04
> LTS.
>
> ==
>Open MPI 2.1.1: test/datatype/test-suite.log
> ==
>
> # TOTAL: 9
> # PASS:  8
> # SKIP:  0
> # XFAIL: 0
> # FAIL:  1
> # XPASS: 0
> # ERROR: 0
>
> .. contents:: :depth: 2
>
> FAIL: external32
> 
>
> /home/t/openmpi-2.1.1/test/datatype/.libs/lt-external32: symbol lookup
> error: /home/openmpi-2.1.1/test/datatype/.libs/lt-external32: undefined
> symbol: ompi_datatype_pack_external_size
> FAIL external32 (exit status:
>
> On Tue, Jun 13, 2017 at 5:24 PM, ashwin .D  wrote:
>
>> Hello,
>>   I am using OpenMPI 2.0.0 with a computational fluid dynamics
>> software and I am encountering a series of errors when running this with
>> mpirun. This is my lscpu output
>>
>> CPU(s):                4
>> On-line CPU(s) list:   0-3
>> Thread(s) per core:    2
>> Core(s) per socket:    2
>> Socket(s):             1
>>
>> and I am running OpenMPI's mpirun in the following way:
>>
>> mpirun -np 4 cfd_software
>>
>> and I get double free or corruption every single time.
>>
>> I have two questions -
>>
>> 1) I am unable to capture the standard error that mpirun throws in a file
>>
>> How can I go about capturing the standard error of mpirun ?
>>
>> 2) Has this error, i.e. double free or corruption, been reported by others?
>> Is there a bug fix available?
>>
>> Regards,
>>
>> Ashwin.
>>
>>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] Double free or corruption with OpenMPI 2.0

2017-06-14 Thread Jeff Hammond
The "error *** glibc detected *** $(PROGRAM): double free or corruption" is
ubiquitous and rarely has anything to do with MPI.


As Gilles said, use a debugger to figure out why your application is
corrupting the heap.


Jeff



On Wed, Jun 14, 2017 at 3:31 AM, ashwin .D  wrote:

> Hello,
>   I found a thread with Intel MPI(although I am using gfortran
> 4.8.5 and OpenMPI 2.1.1) - https://software.intel.com/en-
> us/forums/intel-fortran-compiler-for-linux-and-mac-os-x/topic/564266 but
> the error the OP gets is the same as mine
>
> *** glibc detected *** ./a.out: double free or corruption (!prev):
> 0x7fc6dc80 ***
> 04 === Backtrace: =
> 05 /lib64/libc.so.6[0x3411e75e66]
> 06/lib64/libc.so.6[0x3411e789b3]
>
> So the explanation given in that post is this -
> "From their examination our Development team concluded the underlying
> problem with openmpi 1.8.6 resulted from mixing out-of-date/incompatible
> Fortran RTLs. In short, there were older static Fortran RTL bodies
> incorporated in the openmpi library that when mixed with newer Fortran RTL
> led to the failure. They found the issue is resolved in the newer
> openmpi-1.10.1rc2 and recommend resolving requires using a newer openmpi
> release with our 15.0 (or newer) release." Could this be possible with my
> version as well ?
>
>
> I am willing to debug this provided I am given some clue on how to
> approach my problem. At the moment I am unable to proceed further and the
> only thing I can add is I ran tests with the sequential form of my
> application and it is much slower although I am using shared memory and all
> the cores are in the same machine.
>
> Best regards,
> Ashwin.
>
>
>
>
>
> On Tue, Jun 13, 2017 at 5:52 PM, ashwin .D  wrote:
>
>> Also when I try to build and run a make check I get these errors - Am I
>> clear to proceed or is my installation broken ? This is on Ubuntu 16.04
>> LTS.
>>
>> ==
>>Open MPI 2.1.1: test/datatype/test-suite.log
>> ==
>>
>> # TOTAL: 9
>> # PASS:  8
>> # SKIP:  0
>> # XFAIL: 0
>> # FAIL:  1
>> # XPASS: 0
>> # ERROR: 0
>>
>> .. contents:: :depth: 2
>>
>> FAIL: external32
>> 
>>
>> /home/t/openmpi-2.1.1/test/datatype/.libs/lt-external32: symbol lookup
>> error: /home/openmpi-2.1.1/test/datatype/.libs/lt-external32: undefined
>> symbol: ompi_datatype_pack_external_size
>> FAIL external32 (exit status:
>>
>> On Tue, Jun 13, 2017 at 5:24 PM, ashwin .D  wrote:
>>
>>> Hello,
>>>   I am using OpenMPI 2.0.0 with a computational fluid dynamics
>>> software and I am encountering a series of errors when running this with
>>> mpirun. This is my lscpu output
>>>
>>> CPU(s):                4
>>> On-line CPU(s) list:   0-3
>>> Thread(s) per core:    2
>>> Core(s) per socket:    2
>>> Socket(s):             1
>>>
>>> and I am running OpenMPI's mpirun in the following way:
>>>
>>> mpirun -np 4 cfd_software
>>>
>>> and I get double free or corruption every single time.
>>>
>>> I have two questions -
>>>
>>> 1) I am unable to capture the standard error that mpirun throws in a file
>>>
>>> How can I go about capturing the standard error of mpirun ?
>>>
>>> 2) Has this error, i.e. double free or corruption, been reported by others?
>>> Is there a bug fix available?
>>> bug fix available ?
>>>
>>> Regards,
>>>
>>> Ashwin.
>>>
>>>
>>
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Jeff Hammond
111 Hudson Hall
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>> ___
> > > > >>>> users mailing list
> > > > >>>> users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > > > >>>>
> >
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwIGaQ&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=e9pjil1vV3SDa40dQJww0p-d0LhgyQzX_kPNhmz-oUE&s=Y4hrMiRzNuObkpm0vPojCqr6Cx6uS_wLxNyAfUaBz70&e=
> > <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwIGaQ&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=e9pjil1vV3SDa40dQJww0p-d0LhgyQzX_kPNhmz-oUE&s=Y4hrMiRzNuObkpm0vPojCqr6Cx6uS_wLxNyAfUaBz70&e=
> >
> > > > >>> ___
> > > > >>> users mailing list
> > > > >>> users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > > > >>>
> >
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwIGaQ&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=e9pjil1vV3SDa40dQJww0p-d0LhgyQzX_kPNhmz-oUE&s=Y4hrMiRzNuObkpm0vPojCqr6Cx6uS_wLxNyAfUaBz70&e=
> > <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwIGaQ&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=e9pjil1vV3SDa40dQJww0p-d0LhgyQzX_kPNhmz-oUE&s=Y4hrMiRzNuObkpm0vPojCqr6Cx6uS_wLxNyAfUaBz70&e=
> >
> > > > >>
> > > > >> Volker Blum
> > > > >> Associate Professor
> > > > >> Ab Initio Materials Simulations
> > > > >> Duke University, MEMS Department
> > > > >> 144 Hudson Hall, Box 90300, Duke University, Durham, NC
> > 27708, USA
> > > > >>
> > > > >> volker.b...@duke.edu <mailto:volker.b...@duke.edu>
> > > > >> https://aims.pratt.duke.edu
> > > > >> +1 (919) 660 5279 
> > > > >> Twitter: Aimsduke
> > > > >>
> > > > >> Office:  Hudson Hall
> > > > >>
> > > > >>
> > > > >>
> > > > >>
> > > > >> ___
> > > > >> users mailing list
> > > > >> users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > > > >>
> >
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwIGaQ&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=HDubxzHgm3hz7-NQfgobz7rkGf0LWBTlGGqdgSoPCC4&s=2D1Arirt92pKR6i2-4KQKZ8YhSnZ2TPkouQQePHvNf0&e=
> > <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwIGaQ&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=HDubxzHgm3hz7-NQfgobz7rkGf0LWBTlGGqdgSoPCC4&s=2D1Arirt92pKR6i2-4KQKZ8YhSnZ2TPkouQQePHvNf0&e=
> >
> > > > > ___
> > > > > users mailing list
> > > > > users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > > > >
> >
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwIGaQ&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=HDubxzHgm3hz7-NQfgobz7rkGf0LWBTlGGqdgSoPCC4&s=2D1Arirt92pKR6i2-4KQKZ8YhSnZ2TPkouQQePHvNf0&e=
> > <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwIGaQ&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=HDubxzHgm3hz7-NQfgobz7rkGf0LWBTlGGqdgSoPCC4&s=2D1Arirt92pKR6i2-4KQKZ8YhSnZ2TPkouQQePHvNf0&e=
> >
> > > >
> > > > Volker Blum
> > > > Associate Professor
> > > > Ab Initio Materials Simulations
> > > > Duke University, MEMS Department
> > > > 144 Hudson Hall, Box 90300, Duke University, Durham, NC 27708,
> USA
> > > >
> > > > volker.b...@duke.edu <mailto:volker.b...@duke.edu>
> > > > https://aims.pratt.duke.edu
> > > > +1 (919) 660 5279 
> > > > Twitter: Aimsduke
> > > >
> > > > Office:  Hudson Hall
> > > >
> > > >
> > > >
> > > >
> > > > ___
> > > > users mailing list
> > > > users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > > > https://rfd.newmexicoconsortium.org/mailman/listinfo/users
> > <https://rfd.newmexicoconsortium.org/mailman/listinfo/users>
> > > >
> > > > ___
> > > > users mailing list
> > > > users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > > >
> >
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwICAg&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=4W_1_DI3QU9PRT2fhTXyUeXiu7HvTSNzX48E-9ifoTc&s=i3f7Olcbyor4Pu0hv6YlgO10cJ_XvOR13zZn7PPIGto&e=
> > <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwICAg&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=4W_1_DI3QU9PRT2fhTXyUeXiu7HvTSNzX48E-9ifoTc&s=i3f7Olcbyor4Pu0hv6YlgO10cJ_XvOR13zZn7PPIGto&e=
> >
> > >
> > > Volker Blum
> > > Associate Professor
> > > Ab Initio Materials Simulations
> > > Duke University, MEMS Department
> > > 144 Hudson Hall, Box 90300, Duke University, Durham, NC 27708, USA
> > >
> > > volker.b...@duke.edu <mailto:volker.b...@duke.edu>
> > > https://aims.pratt.duke.edu
> > > +1 (919) 660 5279 
> > > Twitter: Aimsduke
> > >
> > > Office:  Hudson Hall
> > >
> > >
> > >
> > >
> > > ___
> > > users mailing list
> > > users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > > https://rfd.newmexicoconsortium.org/mailman/listinfo/users
> > <https://rfd.newmexicoconsortium.org/mailman/listinfo/users>
> > >
> > > ___
> > > users mailing list
> > > users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > >
> >
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwICAg&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=h6zBsOI45o8fovfy43A2FyCt-fL_yVpNbVSf1OA8CrQ&s=YNADyKbvRnxPmnDVmVlYmsYgZEr8m-etPBXLHPRkflw&e=
> > <
> https://urldefense.proofpoint.com/v2/url?u=https-3A__rfd.newmexicoconsortium.org_mailman_listinfo_users&d=DwICAg&c=imBPVzF25OnBgGmVOlcsiEgHoG1i6YHLR0Sj_gZ4adc&r=I9QLPu689VeINkpRod6EQfprr_v-FLoLsAuSXIHhDsk&m=h6zBsOI45o8fovfy43A2FyCt-fL_yVpNbVSf1OA8CrQ&s=YNADyKbvRnxPmnDVmVlYmsYgZEr8m-etPBXLHPRkflw&e=
> >
> >
> > Volker Blum
> > Associate Professor
> > Ab Initio Materials Simulations
> > Duke University, MEMS Department
> > 144 Hudson Hall, Box 90300, Duke University, Durham, NC 27708, USA
> >
> > volker.b...@duke.edu <mailto:volker.b...@duke.edu>
> > https://aims.pratt.duke.edu
> > +1 (919) 660 5279 
> > Twitter: Aimsduke
> >
> > Office:  Hudson Hall
> >
> >
> >
> >
> > ___
> > users mailing list
> > users@lists.open-mpi.org <mailto:users@lists.open-mpi.org>
> > https://rfd.newmexicoconsortium.org/mailman/listinfo/users
> > <https://rfd.newmexicoconsortium.org/mailman/listinfo/users>
> >
> >
> >
> >
> > ___
> > users mailing list
> > users@lists.open-mpi.org
> > https://rfd.newmexicoconsortium.org/mailman/listinfo/users
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://rfd.newmexicoconsortium.org/mailman/listinfo/users

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

Re: [OMPI users] How to get a verbose compilation?

2017-08-05 Thread Jeff Hammond
make V=1

Jeff

On Sat, Aug 5, 2017 at 2:01 PM Neil Carlson 
wrote:

> How do I get the build system to echo the commands it is issuing?   My
> Fortran compiler is throwing an error on one file, and I need to see the
> full compiler command line with all options in order to debug.  Thanks
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Issues with Large Window Allocations

2017-08-25 Thread Jeff Hammond
and the time increasing linearly with the allocation
>>> size.
>>>
>>> Are these issues known? Maybe there is documentation describing
>>> work-arounds? (esp. for 3) and 4))
>>>
>>> I am attaching a small benchmark. Please make sure to adjust the
>>> MEM_PER_NODE macro to suit your system before you run it :) I'm happy to
>>> provide additional details if needed.
>>>
>>> Best
>>> Joseph
>>> --
>>> Dipl.-Inf. Joseph Schuchart
>>> High Performance Computing Center Stuttgart (HLRS)
>>> Nobelstr. 19
>>> D-70569 Stuttgart
>>>
>>> Tel.: +49(0)711-68565890
>>> Fax: +49(0)711-6856832
>>> E-Mail: schuch...@hlrs.de
>>>
>>> ___
>>> users mailing list
>>> users@lists.open-mpi.org
>>> https://lists.open-mpi.org/mailman/listinfo/users
>>>
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
>>
>>
>
> --
> Dipl.-Inf. Joseph Schuchart
> High Performance Computing Center Stuttgart (HLRS)
> Nobelstr. 19
> D-70569 Stuttgart
>
> Tel.: +49(0)711-68565890
> Fax: +49(0)711-6856832
> E-Mail: schuch...@hlrs.de
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Issues with Large Window Allocations

2017-08-29 Thread Jeff Hammond
I don't know any reason why you shouldn't be able to use IB for intra-node
transfers.  There are, of course, arguments against doing it in general
(e.g. IB/PCI bandwidth less than DDR4 bandwidth), but it likely behaves
less synchronously than shared-memory, since I'm not aware of any MPI RMA
library that dispatches the intranode RMA operations to an asynchronous
agent (e.g. communication helper thread).

Regarding 4, faulting 100GB in 24s corresponds to 1us per 4K page, which
doesn't sound unreasonable to me.  You might investigate if/how you can use
2M or 1G pages instead.  It's possible Open-MPI already supports this, if
the underlying system does.  You may need to twiddle your OS settings to
get hugetlbfs working.
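
On a stock Linux box the starting point is usually something like this (the
sizes are illustrative, and the sysctl typically has to be done by your
admins):

  grep Huge /proc/meminfo                # see what is already configured
  sudo sysctl vm.nr_hugepages=51200      # reserve ~100 GB of 2 MB pages
  mount -t hugetlbfs none /mnt/huge      # only if a hugetlbfs mount is needed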

Jeff

On Tue, Aug 29, 2017 at 6:15 AM, Joseph Schuchart  wrote:

> Jeff, all,
>
> Thanks for the clarification. My measurements show that global memory
> allocations do not require the backing file if there is only one process
> per node, for arbitrary number of processes. So I was wondering if it was
> possible to use the same allocation process even with multiple processes
> per node if there is not enough space available in /tmp. However, I am not
> sure whether the IB devices can be used to perform intra-node RMA. At least
> that would retain the functionality on this kind of system (which arguably
> might be a rare case).
>
> On a different note, I found during the weekend that Valgrind only
> supports allocations up to 60GB, so my second point reported below may be
> invalid. Number 4 seems still seems curious to me, though.
>
> Best
> Joseph
>
> On 08/25/2017 09:17 PM, Jeff Hammond wrote:
>
>> There's no reason to do anything special for shared memory with a
>> single-process job because MPI_Win_allocate_shared(MPI_COMM_SELF) ~=
>> MPI_Alloc_mem().  However, it would help debugging if MPI implementers at
>> least had an option to take the code path that allocates shared memory even
>> when np=1.
>>
>> Jeff
>>
>> On Thu, Aug 24, 2017 at 7:41 AM, Joseph Schuchart > <mailto:schuch...@hlrs.de>> wrote:
>>
>> Gilles,
>>
>> Thanks for your swift response. On this system, /dev/shm only has
>> 256M available so that is no option unfortunately. I tried disabling
>> both vader and sm btl via `--mca btl ^vader,sm` but Open MPI still
>> seems to allocate the shmem backing file under /tmp. From my point
>> of view, missing the performance benefits of file backed shared
>> memory as long as large allocations work but I don't know the
>> implementation details and whether that is possible. It seems that
>> the mmap does not happen if there is only one process per node.
>>
>> Cheers,
>> Joseph
>>
>>
>> On 08/24/2017 03:49 PM, Gilles Gouaillardet wrote:
>>
>> Joseph,
>>
>> the error message suggests that allocating memory with
>> MPI_Win_allocate[_shared] is done by creating a file and then
>> mmap'ing
>> it.
>> how much space do you have in /dev/shm ? (this is a tmpfs e.g. a
>> RAM
>> file system)
>> there is likely quite some space here, so as a workaround, i
>> suggest
>> you use this as the shared-memory backing directory
>>
>> /* i am afk and do not remember the syntax, ompi_info --all | grep
>> backing is likely to help */
>>
>> Cheers,
>>
>> Gilles
>>
>> On Thu, Aug 24, 2017 at 10:31 PM, Joseph Schuchart
>> mailto:schuch...@hlrs.de>> wrote:
>>
>> All,
>>
>> I have been experimenting with large window allocations
>> recently and have
>> made some interesting observations that I would like to share.
>>
>> The system under test:
>> - Linux cluster equipped with IB,
>> - Open MPI 2.1.1,
>> - 128GB main memory per node
>> - 6GB /tmp filesystem per node
>>
>> My observations:
>> 1) Running with 1 process on a single node, I can allocate
>> and write to
>> memory up to ~110 GB through MPI_Allocate, MPI_Win_allocate,
>> and
>> MPI_Win_allocate_shared.
>>
>> 2) If running with 1 process per node on 2 nodes single
>> large allocations
>> succeed but with the repeating allocate/free cycle in the
>> attached code I
>> see the application being reprod

Re: [OMPI users] Issues with Large Window Allocations

2017-09-04 Thread Jeff Hammond
On Mon, Sep 4, 2017 at 6:13 AM, Joseph Schuchart  wrote:

> Jeff, all,
>
> Unfortunately, I (as a user) have no control over the page size on our
> cluster. My interest in this is more of a general nature because I am
> concerned that our users who use Open MPI underneath our code run into this
> issue on their machine.
>
>
Sure, but I assume you are able to suggest such changes to the HLRS
operations team.  Cray XC machines like Hazel Hen already support large
pages by default and Cray recommends their use to improve MPI performance,
so I don't think it is a surprising or unreasonable request to support them
on your non-Cray systems.


> I took a look at the code for the various window creation methods and now
> have a better picture of the allocation process in Open MPI. I realized
> that memory in windows allocated through MPI_Win_alloc or created through
> MPI_Win_create is registered with the IB device using ibv_reg_mr, which
> takes significant time for large allocations (I assume this is where
> hugepages would help?). In contrast to this, it seems that memory attached
> through MPI_Win_attach is not registered, which explains the lower latency
> for these allocation I am observing (I seem to remember having observed
> higher communication latencies as well).
>
>
There's a reason for this.  The way MPI dynamic windows are defined,
caching registration keys is not practical without implementation-defined
info keys to assert no reattach.  That is why allocation latency is lower
and communication latency is higher.
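
In code, the two paths you measured look roughly like this (a sketch; nbytes,
buf and comm are placeholders):

  MPI_Win win, dwin;
  void *base;
  /* registered up front: allocation pays for ibv_reg_mr, communication is cheap */
  MPI_Win_allocate(nbytes, 1, MPI_INFO_NULL, comm, &base, &win);
  /* no up-front registration: attach is cheap, communication pays for it later */
  MPI_Win_create_dynamic(MPI_INFO_NULL, comm, &dwin);
  MPI_Win_attach(dwin, buf, nbytes);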

Jeff


> Regarding the size limitation of /tmp: I found an opal/mca/shmem/posix
> component that uses shmem_open to create a POSIX shared memory object
> instead of a file on disk, which is then mmap'ed. Unfortunately, if I raise
> the priority of this component above that of the default mmap component I
> end up with a SIGBUS during MPI_Init. No other errors are reported by MPI.
> Should I open a ticket on Github for this?
>
> As an alternative, would it be possible to use anonymous shared memory
> mappings to avoid the backing file for large allocations (maybe above a
> certain threshold) on systems that support MAP_ANONYMOUS and distribute the
> result of the mmap call among the processes on the node?
>
> Thanks,
> Joseph
>
> On 08/29/2017 06:12 PM, Jeff Hammond wrote:
>
>> I don't know any reason why you shouldn't be able to use IB for
>> intra-node transfers.  There are, of course, arguments against doing it in
>> general (e.g. IB/PCI bandwidth less than DDR4 bandwidth), but it likely
>> behaves less synchronously than shared-memory, since I'm not aware of any
>> MPI RMA library that dispatches the intranode RMA operations to an
>> asynchronous agent (e.g. communication helper thread).
>>
>> Regarding 4, faulting 100GB in 24s corresponds to 1us per 4K page, which
>> doesn't sound unreasonable to me.  You might investigate if/how you can use
>> 2M or 1G pages instead.  It's possible Open-MPI already supports this, if
>> the underlying system does.  You may need to twiddle your OS settings to
>> get hugetlbfs working.
>>
>> Jeff
>>
>> On Tue, Aug 29, 2017 at 6:15 AM, Joseph Schuchart > <mailto:schuch...@hlrs.de>> wrote:
>>
>> Jeff, all,
>>
>> Thanks for the clarification. My measurements show that global
>> memory allocations do not require the backing file if there is only
>> one process per node, for arbitrary number of processes. So I was
>> wondering if it was possible to use the same allocation process even
>> with multiple processes per node if there is not enough space
>> available in /tmp. However, I am not sure whether the IB devices can
>> be used to perform intra-node RMA. At least that would retain the
>> functionality on this kind of system (which arguably might be a rare
>> case).
>>
>> On a different note, I found during the weekend that Valgrind only
>> supports allocations up to 60GB, so my second point reported below
>> may be invalid. Number 4 seems still seems curious to me, though.
>>
>> Best
>> Joseph
>>
>> On 08/25/2017 09:17 PM, Jeff Hammond wrote:
>>
>> There's no reason to do anything special for shared memory with
>> a single-process job because
>> MPI_Win_allocate_shared(MPI_COMM_SELF) ~= MPI_Alloc_mem().
>>However, it would help debugging if MPI implementers at least
>> had an option to take the code path that allocates shared memory
>> even when np=1.
>>
>> Jeff
>>
>> On Thu, Aug 24, 201

Re: [OMPI users] Issues with Large Window Allocations

2017-09-08 Thread Jeff Hammond
fortunately, I (as a user) have no control over the page size on our
> >> cluster. My interest in this is more of a general nature because I am
> >> concerned that our users who use Open MPI underneath our code run into
> >> this issue on their machine.
> >>
> >> I took a look at the code for the various window creation methods and
> >> now have a better picture of the allocation process in Open MPI. I
> >> realized that memory in windows allocated through MPI_Win_alloc or
> >> created through MPI_Win_create is registered with the IB device using
> >> ibv_reg_mr, which takes significant time for large allocations (I assume
> >> this is where hugepages would help?). In contrast to this, it seems that
> >> memory attached through MPI_Win_attach is not registered, which explains
> >> the lower latency for these allocation I am observing (I seem to
> >> remember having observed higher communication latencies as well).
> >>
> >> Regarding the size limitation of /tmp: I found an opal/mca/shmem/posix
> >> component that uses shmem_open to create a POSIX shared memory object
> >> instead of a file on disk, which is then mmap'ed. Unfortunately, if I
> >> raise the priority of this component above that of the default mmap
> >> component I end up with a SIGBUS during MPI_Init. No other errors are
> >> reported by MPI. Should I open a ticket on Github for this?
> >>
> >> As an alternative, would it be possible to use anonymous shared memory
> >> mappings to avoid the backing file for large allocations (maybe above a
> >> certain threshold) on systems that support MAP_ANONYMOUS and distribute
> >> the result of the mmap call among the processes on the node?
> >>
> >> Thanks,
> >> Joseph
> >>
> >> On 08/29/2017 06:12 PM, Jeff Hammond wrote:
> >>> I don't know any reason why you shouldn't be able to use IB for
> >>> intra-node transfers.  There are, of course, arguments against doing
> >>> it in general (e.g. IB/PCI bandwidth less than DDR4 bandwidth), but it
> >>> likely behaves less synchronously than shared-memory, since I'm not
> >>> aware of any MPI RMA library that dispatches the intranode RMA
> >>> operations to an asynchronous agent (e.g. communication helper thread).
> >>>
> >>> Regarding 4, faulting 100GB in 24s corresponds to 1us per 4K page,
> >>> which doesn't sound unreasonable to me.  You might investigate if/how
> >>> you can use 2M or 1G pages instead.  It's possible Open-MPI already
> >>> supports this, if the underlying system does.  You may need to twiddle
> >>> your OS settings to get hugetlbfs working.
> >>>
> >>> Jeff
> >>>
> >>> On Tue, Aug 29, 2017 at 6:15 AM, Joseph Schuchart  >>> <mailto:schuch...@hlrs.de>> wrote:
> >>>
> >>> Jeff, all,
> >>>
> >>> Thanks for the clarification. My measurements show that global
> >>> memory allocations do not require the backing file if there is only
> >>> one process per node, for arbitrary number of processes. So I was
> >>> wondering if it was possible to use the same allocation process
> even
> >>> with multiple processes per node if there is not enough space
> >>> available in /tmp. However, I am not sure whether the IB devices
> can
> >>> be used to perform intra-node RMA. At least that would retain the
> >>> functionality on this kind of system (which arguably might be a
> rare
> >>> case).
> >>>
> >>> On a different note, I found during the weekend that Valgrind only
> >>> supports allocations up to 60GB, so my second point reported below
> >>> may be invalid. Number 4 seems still seems curious to me, though.
> >>>
> >>> Best
> >>> Joseph
> >>>
> >>> On 08/25/2017 09:17 PM, Jeff Hammond wrote:
> >>>
> >>> There's no reason to do anything special for shared memory with
> >>> a single-process job because
> >>> MPI_Win_allocate_shared(MPI_COMM_SELF) ~= MPI_Alloc_mem().
> >>> However, it would help debugging if MPI implementers at least
> >>> had an option to take the code path that allocates shared
> memory
> >>> even when np=1.
> >>>
> >>> Jeff
> >>>
> &

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Jeff Hammond
Please separate C and C++ here. C has a standard ABI.  C++ doesn't.

Jeff

On Mon, Sep 18, 2017 at 5:39 PM Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> Even if I do not fully understand the question, keep in mind Open MPI
> does not use OpenMP, so from that point of view, Open MPI is
> independent of the OpenMP runtime.
>
> Let me emphasize what Jeff already wrote: use different installs
> of Open MPI (and you can use modules or lmod in order to choose
> between them easily) and always use the compilers that were used to
> build Open MPI. This is mandatory if you use Fortran bindings (use mpi
> and use mpi_f08), and you'd better keep yourself out of trouble with
> C/C++ and mpif.h
>
> Cheers,
>
> Gilles
>
> On Tue, Sep 19, 2017 at 5:57 AM, Michael Thomadakis
>  wrote:
> > Thanks for the note. How about OMP runtimes though?
> >
> > Michael
> >
> > On Mon, Sep 18, 2017 at 3:21 PM, n8tm via users <
> users@lists.open-mpi.org>
> > wrote:
> >>
> >> On Linux and Mac, Intel c and c++ are sufficiently compatible with gcc
> and
> >> g++ that this should be possible.  This is not so for Fortran libraries
> or
> >> Windows.
> >>
> >>
> >>
> >>
> >>
> >>
> >> Sent via the Samsung Galaxy S8 active, an AT&T 4G LTE smartphone
> >>
> >>  Original message 
> >> From: Michael Thomadakis 
> >> Date: 9/18/17 3:51 PM (GMT-05:00)
> >> To: users@lists.open-mpi.org
> >> Subject: [OMPI users] Question concerning compatibility of languages
> used
> >> with building OpenMPI and languages OpenMPI uses to build MPI binaries.
> >>
> >> Dear OpenMPI list,
> >>
> >> As far as I know, when we build OpenMPI itself with GNU or Intel
> compilers
> >> we expect that the subsequent MPI application binary will use the same
> >> compiler set and run-times.
> >>
> >> Would it be possible to build OpenMPI with the GNU tool chain but then
> >> subsequently instruct the OpenMPI compiler wrappers to use the Intel
> >> compiler set? Would there be any issues with compiling C++ / Fortran or
> >> corresponding OMP codes ?
> >>
> >> In general, what is clean way to build OpenMPI with a GNU compiler set
> but
> >> then instruct the wrappers to use Intel compiler set?
> >>
> >> Thanks!
> >> Michael
> >>
> >> _______
> >> users mailing list
> >> users@lists.open-mpi.org
> >> https://lists.open-mpi.org/mailman/listinfo/users
> >
> >
> >
> > ___
> > users mailing list
> > users@lists.open-mpi.org
> > https://lists.open-mpi.org/mailman/listinfo/users
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
>
-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Jeff Hammond
Intel compilers support GOMP runtime interoperability, although I don't
believe it is the default. You can use the Intel/LLVM OpenMP runtime with
GCC such that all three OpenMP compilers work together.
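
The usual trick is to make sure a single OpenMP runtime satisfies everything
at link time, e.g. (a sketch, file names illustrative):

  gcc -fopenmp -c gnu_kernel.c
  icc -qopenmp main.c gnu_kernel.o   # libiomp5 resolves the GOMP_* calls

so that libgomp never gets pulled in next to libiomp5.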

Fortran is a legit problem, although if somebody builds a standalone
Fortran 2015 implementation of the MPI interface, it would be decoupled
from the MPI library compilation.

Jeff

On Mon, Sep 18, 2017 at 6:21 PM Michael Thomadakis 
wrote:

> OMP is yet another source of incompatibility between GNU and Intel
> environments. So compiling say Fortran OMP code into a library and trying
> to link it with Intel Fortran codes just aggravates the problem.
> Michael
> On Mon, Sep 18, 2017 at 7:35 PM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> Even if I do not fully understand the question, keep in mind Open MPI
>> does not use OpenMP, so from that point of view, Open MPI is
>> independent of the OpenMP runtime.
>>
>> Let me emphasize what Jeff already wrote: use different installs
>> of Open MPI (and you can use modules or lmod in order to choose
>> between them easily) and always use the compilers that were used to
>> build Open MPI. This is mandatory if you use Fortran bindings (use mpi
>> and use mpi_f08), and you'd better keep yourself out of trouble with
>> C/C++ and mpif.h
>>
>> Cheers,
>>
>> Gilles
>>
>> On Tue, Sep 19, 2017 at 5:57 AM, Michael Thomadakis
>>  wrote:
>> > Thanks for the note. How about OMP runtimes though?
>> >
>> > Michael
>> >
>> > On Mon, Sep 18, 2017 at 3:21 PM, n8tm via users <
>> users@lists.open-mpi.org>
>> > wrote:
>> >>
>> >> On Linux and Mac, Intel c and c++ are sufficiently compatible with gcc
>> and
>> >> g++ that this should be possible.  This is not so for Fortran
>> libraries or
>> >> Windows.
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> Sent via the Samsung Galaxy S8 active, an AT&T 4G LTE smartphone
>> >>
>> >>  Original message 
>> >> From: Michael Thomadakis 
>> >> Date: 9/18/17 3:51 PM (GMT-05:00)
>> >> To: users@lists.open-mpi.org
>> >> Subject: [OMPI users] Question concerning compatibility of languages
>> used
>> >> with building OpenMPI and languages OpenMPI uses to build MPI binaries.
>> >>
>> >> Dear OpenMPI list,
>> >>
>> >> As far as I know, when we build OpenMPI itself with GNU or Intel
>> compilers
>> >> we expect that the subsequent MPI application binary will use the same
>> >> compiler set and run-times.
>> >>
>> >> Would it be possible to build OpenMPI with the GNU tool chain but then
>> >> subsequently instruct the OpenMPI compiler wrappers to use the Intel
>> >> compiler set? Would there be any issues with compiling C++ / Fortran or
>> >> corresponding OMP codes ?
>> >>
>> >> In general, what is clean way to build OpenMPI with a GNU compiler set
>> but
>> >> then instruct the wrappers to use Intel compiler set?
>> >>
>> >> Thanks!
>> >> Michael
>> >>
>> >> ___
>> >> users mailing list
>> >> users@lists.open-mpi.org
>> >> https://lists.open-mpi.org/mailman/listinfo/users
>> >
>> >
>> >
>> > ___
>> > users mailing list
>> > users@lists.open-mpi.org
>> > https://lists.open-mpi.org/mailman/listinfo/users
>> ___
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
>>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

[OMPI users] error: unknown type name 'ompi_jobid_t'

2013-06-25 Thread Jeff Hammond
I observe this error with the OpenMPI 1.7.1 "feature":

Making all in mca/common/ofacm
make[2]: Entering directory
`/gpfs/mira-home/jhammond/MPI/openmpi-1.7.1/build-gcc/ompi/mca/common/ofacm'
  CC   common_ofacm_xoob.lo
../../../../../ompi/mca/common/ofacm/common_ofacm_xoob.c:158:91:
error: unknown type name 'ompi_jobid_t'
 static int xoob_ib_address_init(ofacm_ib_address_t *ib_addr, uint16_t
lid, uint64_t s_id, ompi_jobid_t ep_jobid)

^
../../../../../ompi/mca/common/ofacm/common_ofacm_xoob.c: In function
'xoob_ib_address_add_new':
../../../../../ompi/mca/common/ofacm/common_ofacm_xoob.c:189:5:
warning: implicit declaration of function 'xoob_ib_address_init'
[-Wimplicit-function-declaration]
 ret = xoob_ib_address_init(ib_addr, lid, s_id, ep_jobid);
 ^
make[2]: *** [common_ofacm_xoob.lo] Error 1
make[2]: Leaving directory
`/gpfs/mira-home/jhammond/MPI/openmpi-1.7.1/build-gcc/ompi/mca/common/ofacm'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory
`/gpfs/mira-home/jhammond/MPI/openmpi-1.7.1/build-gcc/ompi'
make: *** [all-recursive] Error 1

I invoked configure like this:

../configure CC=gcc CXX=g++ FC=gfortran F77=gfortran
--prefix=/home/jhammond/MPI/openmpi-1.7.1/install-gcc --with-verbs
--enable-mpi-thread-multiple --enable-static --enable-shared

My config.log is attached with bzip2 compression or if you do not
trust binary attachments, please go to Dropbox and blindly download
the uncompressed text file.

https://www.dropbox.com/l/ZxZoE6FNROZuBY7I7wdsgc

Any suggestions?  I asked the Google and it had not heard of this
particular error message before.

Thanks,

Jeff

PS Please do not tell Pavan I was here :-)
PPS I recognize the Streisand effect is now in play and that someone
will deliberately disobey the previous request because I made it.

-- 
Jeff Hammond
jeff.scie...@gmail.com


config.log.tbz
Description: Binary data


Re: [OMPI users] undefined reference to `MPI::Comm::Comm()

2013-07-09 Thread Jeff Hammond
You must be using an older version of Gromacs, because the version I'm
looking at (git master) has nary a reference to the C++ bindings.

Since you say that Gromacs alone compiles fine, I suspect the problem
is that Plumed uses the C++ bindings.  The Plumed download site hosted
by Google Docs (yuck!) is down/broken/in-redirect-hell so I can't
verify this hypothesis right now.

Jeff

On Tue, Jul 9, 2013 at 8:44 AM, Jeff Squyres (jsquyres)
 wrote:
> If you care, the issue is that it looks like Gromacs is using the MPI C++ 
> bindings.  You therefore need to use the MPI C++ wrapper compiler, mpic++ 
> (vs. mpicc, which is the MPI C wrapper compiler).
>
>
> On Jul 9, 2013, at 9:41 AM, Tomek Wlodarski  wrote:
>
>> I used mpicc but when I switched in Makefile to mpic++ it compiled
>> without errors.
>> Thanks a lot!
>> Best,
>>
>> tomek
>>
>> On Tue, Jul 9, 2013 at 2:31 PM, Jeff Squyres (jsquyres)
>>  wrote:
>>> I don't see all the info requested from that web page, but it looks like 
>>> OMPI built the C++ bindings ok.
>>>
>>> Did you use mpic++ to build Gromacs?
>>>
>>>
>>> On Jul 9, 2013, at 9:20 AM, Tomek Wlodarski  
>>> wrote:
>>>
>>>> So I am running OpenMPi1.6.3 (config.log attached)
>>>> And I would like to install gromacs patched with plumed (scientific
>>>> computing). Both uses openmpi.
>>>> Gromacs alone compiles without errors (openMPI works). But when
>>>> patched I got one mentioned before.
>>>> I am sending config file for patched gromacs.
>>>> If you need any other file I would be happy to provide.
>>>> Thanks a lot!
>>>> Best,
>>>>
>>>> tomek
>>>> ___
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>
>>>
>>> --
>>> Jeff Squyres
>>> jsquy...@cisco.com
>>> For corporate legal information go to: 
>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>>
>>>
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] configure/library question

2013-07-19 Thread Jeff Hammond
The fact that the application you're trying to build works only with
LIBS="-lmpi" indicates poor software engineering and a low-quality
application.

You can install or uninstall whatever you like but it is incorrect to
think that MPICH is broken because it does not provide libmpi.{a,so}.

In the absence of a sufficient understanding of how to link against
MPI, your best bet is to use CC=mpicc (and friends for LD, CXX,
FC,...).

Jeff

On Fri, Jul 19, 2013 at 2:12 PM, Hodgess, Erin  wrote:
> I figured out how to uninstall and am going to install open mpi
> Thanks,
> Erin
>
> 
> From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of
> Ralph Castain [r...@open-mpi.org]
> Sent: Friday, July 19, 2013 2:06 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] configure/library question
>
> Probably a lot more familiar to the folks on the MPICH mailing list - this
> is the mailing list for Open MPI :-)
>
> On Jul 19, 2013, at 12:03 PM, "Hodgess, Erin"  wrote:
>
> Hello all!
>
> I just downloaded the MPICH 3.0.4 tar.gz
>
> Then I used
> tar xfvz tar-3.0.4.tar.gz
> ./configure
> make
> make install
>
> Now I'm trying to compile someone else's program and it can't find libmpi or
> libmpich.a
>
> I did find libmpich.a, but no libmpi.
>
> Does this sound familiar, please?
>
> Thanks for any help!
>
> Sincerely,
> Erin
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] Large send problem: Error when send buf size =2^28 in a simple code

2013-08-05 Thread Jeff Hammond
As your code prints OK without verifying the correctness of the
result, you are only verifying the lack of segfault in OpenMPI, which
is necessary but not sufficient for correct execution.

It is not uncommon for MPI implementations to have issues near
count=2^31.  I can't speak to the extent to which OpenMPI is
rigorously correct in this respect.  I've yet to find an
implementation which is end-to-end count-safe, which includes support
for zettabyte buffers via MPI datatypes for collectives,
point-to-point, RMA and IO.

The easy solution for your case is to chop MPI_Allgatherv into
multiple calls.  In the case where the array of send counts is near
uniform, you can do N MPI_Allgather calls and 1 MPI_Allgatherv, which
might help performance in some cases.
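
Something along these lines might work (untested sketch; uniform counts
assumed, reusing the variable names from your test program plus
<stdlib.h>/<string.h>):

-----
/* Untested sketch of the chunking idea: gather in rounds of `chunk`
 * elements per rank.  Each round is a plain MPI_Allgather into a
 * scratch buffer, then unpacked into the layout the original
 * MPI_Allgatherv would have produced (displ[r] = r*bufsize). */
long total = bufsize;                  /* per-rank element count       */
int  chunk = 1 << 24;                  /* per-round count, well < 2^31 */
int *tmp = (int *) malloc((size_t) chunk * nproc * sizeof(int));

for (long off = 0; off < total; off += chunk) {
    int n = (int) ((total - off < chunk) ? (total - off) : chunk);
    MPI_Allgather(sbuf + off, n, MPI_INT, tmp, n, MPI_INT, MPI_COMM_WORLD);
    for (int r = 0; r < nproc; r++)    /* copy this round into place */
        memcpy(rbuf + (size_t) r * total + off,
               tmp + (size_t) r * n,
               (size_t) n * sizeof(int));
}
free(tmp);
-----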

Since most MPI implementations use Send/Recv under the hood for
collectives, you can aid in the debugging of this issue by testing
MPI_Send/Recv for count->2^31.

Best,

Jeff

On Mon, Aug 5, 2013 at 6:48 PM, ryan He  wrote:
> Dear All,
>
> I write a simple test code to use MPI_Allgatherv function. The problem comes
> when
> the send buf size becomes relatively big.
>
> When Bufsize = 2^28 – 1, run on 4 processors. OK
> When Bufsize = 2^28, run on 4 processors. Error
> [btl_tcp_frag.c:209:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv: readv
> error (0x85f526f8, 2147483592) Bad address(1)
>
> When Bufsize =2^29-1, run on 2 processors. OK
> When Bufsize = 2^29, run on 2 processors. Error
> [btl_tcp_frag.c:209:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv: readv
> error (0x964605d0, 2147483632) Bad address(1)
>
> Bufsize is not that close to int limit, but readv in mca_btl_tcp_frag_recv
> has size close to 2147483647. Does anyone have idea why the error comes? Any
> suggestion to solve or avoid this problem?
>
> The simple test code is attached below:
>
> #include 
>
> #include 
>
> #include 
>
> #include 
>
> #include 
>
> #include "mpi.h"
>
> int main(int argc, char ** argv)
>
> {
>
> int myid,nproc;
>
> long i,j;
>
> long size;
>
> long bufsize;
>
> int *rbuf;
>
> int *sbuf;
>
> char hostname[MPI_MAX_PROCESSOR_NAME];
>
> int len;
>
> size = (long) 2*1024*1024*1024-1;
>
> MPI_Init(&argc, &argv);
>
> MPI_Comm_rank(MPI_COMM_WORLD, &myid);
>
> MPI_Comm_size(MPI_COMM_WORLD, &nproc);
>
> MPI_Get_processor_name(hostname, &len);
>
> printf("I am process %d with pid: %d at %s\n",myid,getpid(),hostname);
>
> sleep(2);
>
>
> if (myid == 0)
>
> printf("size : %ld\n",size);
>
> sbuf = (int *) calloc(size,sizeof(MPI_INT));
>
> if (sbuf == NULL) {
>
> printf("fail to allocate memory of sbuf\n");
>
> exit(1);
>
> }
>
> rbuf = (int *) calloc(size,sizeof(MPI_INT));
>
> if (rbuf == NULL) {
>
> printf("fail to allocate memory of rbuf\n");
>
> exit(1);
>
> }
>
> int *recvCount = calloc(nproc,sizeof(int));
>
> int *displ = calloc(nproc,sizeof(int));
>
> bufsize = 268435456; //which is 2^28
>
> for(i=0;i<nproc;i++) {
> recvCount[i] = bufsize;
>
> displ[i] = bufsize*i;
>
> }
>
> for (i=0;i
> sbuf[i] = myid+i;
>
> printf("buffer size: %ld recvCount[0]:%d last displ
> index:%d\n",bufsize,recvCount[0],displ[nproc-1]);
>
> fflush(stdout);
>
>
> MPI_Allgatherv(sbuf,recvCount[0], MPI_INT,rbuf,recvCount,displ,MPI_INT,
>
> MPI_COMM_WORLD);
>
>
> printf("OK\n");
>
> fflush(stdout);
>
> MPI_Finalize();
>
> return 0;
>
> }
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com



Re: [OMPI users] openmpi, stdin and qlogic infiniband

2013-09-19 Thread Jeff Hammond
See this related post
http://lists.mpich.org/pipermail/discuss/2013-September/001452.html.

The only text in the MPI standard I could find related to stdin is
"assuming the MPI implementation supports stdin such that this works",
which is not what I'd call a ringing endorsement of the practice of using
it.

Tell the AbInit people that they're wrong for using stdin.  There are lots
of cases where it won't work.

Jeff


On Thu, Sep 19, 2013 at 6:42 AM, Fabrice Boyrie 
wrote:
>
> Hello
>
> I have to compile a program (abinit) reading data from stdin and it
> doesn't works.
>
>
>   I made a simplified version of the program
>
>
>
> PROGRAM TESTSTDIN
>
>   use mpi
>   integer ( kind = 4 ) error
>   integer ( kind = 4 ) id
>   integer ( kind = 4 ) p
>   real ( kind = 8 ) wtime
>   CHARACTER(LEN=255) a
>   call MPI_Init ( error )
>   call MPI_Comm_size ( MPI_COMM_WORLD, p, error )
>   call MPI_Comm_rank ( MPI_COMM_WORLD, id, error )
>
>   if ( id == 0 ) then
> PRINT*, "id0"
> READ(5,'(A)') a
>   end if
>
>   write ( *, '(a)' ) ' '
>   write ( *, '(a,i8,a)' ) '  Process ', id, ' says "Hello, world!"'
>
>   if ( id == 0 ) then
> write ( *, '(a)' ) 'READ from stdin'
> write ( *, '(a)' ) a
>   end if
>   call MPI_Finalize ( error )
>
>   stop
> end
>
>
> I've tried openmpi 1.6.5 and 1.7.2
> The fortran compiler is ifort (tried Version 14.0.0.080 Build 20130728
> and Version 11.1Build 20100806)
> (c compiler is gcc, centos 6.x, infiniband stack from qlogic
> infinipath-libs-3.1-3420.1122_rhel6_qlc.x86_64)
>
> Trying with and without infiniband (qlogic card)
>
> mpirun -np 8 ./teststdin < /tmp/a
> forrtl: Bad file descriptor
> forrtl: severe (108): cannot stat file, unit 5, file /proc/43811/fd/0
> Image              PC                Routine            Line        Source
> teststdin          0040BF48          Unknown            Unknown     Unknown
>
>
>
>  mpirun -mca mtl ^psm -mca btl self,sm -np 8 ./teststdin < /tmp/a
>
>  id0
>   Process0 says "Hello, world!"
> READ from stdin
> zer
>
>   Process1 says "Hello, world!"
> ...
>
>
>
> Is it a known problem ?
>
>  Fabrice BOYRIE
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




--
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] users Digest, Vol 2696, Issue 1

2013-10-03 Thread Jeff Hammond
You try Google Scholar yet?

Always exhaust all nonhuman resources before requesting human
assistance. The human brain is a terrible resource to waste when a
computer can do the job.

Jeff

Sent from my iPhone

On Oct 3, 2013, at 10:18 AM, Yin Zhao  wrote:

> Hi all,
>
> Has anybody done experiments comparing the speed of mpich and
> openmpi?
>
> Best regards,
> Yin Zhao
>
>> On Oct 3, 2013, at 0:00, users-requ...@open-mpi.org wrote:
>>
>> Send users mailing list submissions to
>>   us...@open-mpi.org
>>
>> To subscribe or unsubscribe via the World Wide Web, visit
>>   http://www.open-mpi.org/mailman/listinfo.cgi/users
>> or, via email, send a message with subject or body 'help' to
>>   users-requ...@open-mpi.org
>>
>> You can reach the person managing the list at
>>   users-ow...@open-mpi.org
>>
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of users digest..."
>>
>>
>> Today's Topics:
>>
>>  1. Re: Error compiling openmpi-1.9a1r29292 (Jeff Squyres (jsquyres))
>>  2. Re: non-functional mpif90 compiler (Gus Correa)
>>  3. Re: non-functional mpif90 compiler (Jeff Squyres (jsquyres))
>>  4. CUDA-aware usage (Rolf vandeVaart)
>>
>>
>> --
>>
>> Message: 1
>> Date: Tue, 1 Oct 2013 18:38:15 +
>> From: "Jeff Squyres (jsquyres)" 
>> To: Siegmar Gross , Open MPI
>>   Users
>> Subject: Re: [OMPI users] Error compiling openmpi-1.9a1r29292
>> Message-ID:
>>   
>> Content-Type: text/plain; charset="us-ascii"
>>
>> These should now be fixed.
>>
>>> On Sep 30, 2013, at 3:41 AM, Siegmar Gross 
>>>  wrote:
>>>
>>> Hi,
>>>
>>> today I tried to install openmpi-1.9a1r29292 on my platforms
>>> (openSuSE 12.1 Linux x86_64, Solaris 10 x86_64, and Solaris 10 Sparc)
>>> with Sun C 5.12 and gcc-4.8.0. I have the following error on all
>>> platforms, when I compile a 32- or 64-bit version with Sun C.
>>>
>>> ...
>>> PPFC mpi-f08-interfaces.lo
>>>
>>> module mpi_f08_interfaces
>>> ^
>>> "../../../../../openmpi-1.9a1r29292/ompi/mpi/fortran/base/mpi-f08-interfaces.F90",
>>> Line = 19, Column = 8: ERROR: The compiler has detected
>>> errors in module "MPI_F08_INTERFACES".  No module information
>>> file will be created for this module.
>>>
>>> use :: mpi_f08_types, only : MPI_Datatype, MPI_Comm, MPI_Aint,
>>>   MPI_ADDRESS_KIND
>>>  ^
>>> "../../../../../openmpi-1.9a1r29292/ompi/mpi/fortran/base/mpi-f08-interfaces.F90",
>>> Line = 4419, Column = 57: ERROR: "MPI_AINT" is not in module 
>>> "MPI_F08_TYPES".
>>>
>>> f90comp: 4622 SOURCE LINES
>>> f90comp: 2 ERRORS, 0 WARNINGS, 0 OTHER MESSAGES, 0 ANSI
>>> make[2]: *** [mpi-f08-interfaces.lo] Error 1
>>> make[2]: Leaving directory
>>> `.../openmpi-1.9a1r29292-Linux.x86_64.64_cc/ompi/mpi/fortran/base'
>>> make[1]: *** [all-recursive] Error 1
>>> make[1]: Leaving directory
>>> `.../openmpi-1.9a1r29292-Linux.x86_64.64_cc/ompi'
>>> make: *** [all-recursive] Error 1
>>> linpc1 openmpi-1.9a1r29292-Linux.x86_64.64_cc 122
>>>
>>>
>>>
>>>
>>> I have the following error on all platforms, when I compile a 32-bit
>>> version with gcc-4.8.0.
>>>
>>> linpc1 openmpi-1.9a1r29292-Linux.x86_64.32_gcc 120 tail -150 
>>> log.make.Linux.x86_64.32_gcc
>>> Making all in mca/spml
>>> make[2]: Entering directory 
>>> `/export2/src/openmpi-1.9/openmpi-1.9a1r29292-Linux.x86_64.32_gcc/oshmem/mca/spml'
>>> CC   base/spml_base_frame.lo
>>> CC   base/spml_base_select.lo
>>> CC   base/spml_base_request.lo
>>> CC   base/spml_base_atomicreq.lo
>>> CC   base/spml_base_getreq.lo
>>> CC   base/spml_base_putreq.lo
>>> CC   base/spml_base.lo
>>> CCLD libmca_spml.la
>>> make[2]: Leaving directory 
>>> `/export2/src/openmpi-1.9/openmpi-1.9a1r29292-Linux.x86_64.32_gcc/oshmem/mca/spml'
>>> Making all in .
>>> make[2]: Entering directory 
>>> `/export2/src/openmpi-1.9/openmpi-1.9a1r29292-Linux.x86_64.32_gcc/oshmem'
>>> CC   op/op.lo
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c: In function 
>>> 'oshmem_op_max_freal16_func':
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:15: error: 'a' undeclared 
>>> (first use in this function)
>>>   type *a = (type *) in;  \
>>> ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:194:1: note: in expansion of macro 
>>> 'FUNC_OP_CREATE'
>>> FUNC_OP_CREATE(max, freal16, ompi_fortran_real16_t, __max_op);
>>> ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:15: note: each undeclared 
>>> identifier is reported only once for each function it appears in
>>>   type *a = (type *) in;  \
>>> ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:194:1: note: in expansion of macro 
>>> 'FUNC_OP_CREATE'
>>> FUNC_OP_CREATE(max, freal16, ompi_fortran_real16_t, __max_op);
>>> ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:26: error: expected expression 
>>> before ')' token
>>>  

Re: [OMPI users] MPI_IN_PLACE with GATHERV, AGATHERV, and SCATERV

2013-10-08 Thread Jeff Hammond
"I have made a test case..." means there is little reason not to
attach said test case to the email for verification :-)

The following is in mpi.h.in in the OpenMPI trunk.

=
/*
 * Just in case you need it.  :-)
 */
#define OPEN_MPI 1

/*
 * MPI version
 */
#define MPI_VERSION 2
#define MPI_SUBVERSION 2
=

Two things can be said from this:

(1) You can workaround this non-portable awfulness with the C
preprocess by testing for the OPEN_MPI symbol.

(2) OpenMPI claims to be compliant with the MPI 2.2 standard, hence
any failures to adhere to the behavior specified in that document for
MPI_IN_PLACE is erroneous.
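
For reference, a minimal C sketch of the conforming in-place pattern
(rank, root, comm, and the buffers here are placeholders): only the
root substitutes MPI_IN_PLACE, and its own contribution must already
sit at rbuf + displs[root].

-----
/* Hedged sketch of standard-conforming in-place MPI_Gatherv: the root
 * passes MPI_IN_PLACE (its send count/type are then ignored); the
 * recv arguments are ignored at non-root ranks. */
if (rank == root) {
    MPI_Gatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                rbuf, recvcounts, displs, MPI_DOUBLE, root, comm);
} else {
    MPI_Gatherv(sbuf, mycount, MPI_DOUBLE,
                NULL, NULL, NULL, MPI_DOUBLE, root, comm);
}
-----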

Best,

Jeff

On Tue, Oct 8, 2013 at 2:40 PM, Gerlach, Charles A.
 wrote:
> I have an MPI code that was developed using MPICH1 and OpenMPI before the
> MPI2 standards became commonplace (before MPI_IN_PLACE was an option).
>
>
>
> So, my code has many examples of GATHERV, AGATHERV and SCATTERV, where I
> pass the same array in as the SEND_BUF and the RECV_BUF, and this has worked
> fine for many years.
>
>
>
> Intel MPI and MPICH2 explicitly disallow this behavior according to the MPI2
> standard. So, I have gone through and used MPI_IN_PLACE for all the
> GATHERV/SCATTERVs that used to pass the same array twice. This code now
> works with MPICH2 and Intel_MPI, but fails with OpenMPI-1.6.5 on multiple
> platforms and compilers.
>
>
>
> PLATFORM              COMPILER          SUCCESS? (for at least one simple example)
> ---------------------------------------------------------------------------------
> SLED 12.3 (x86-64)    Portland group    fails
> SLED 12.3 (x86-64)    g95               fails
> SLED 12.3 (x86-64)    gfortran          works
>
> OS X 10.8             intel             fails
>
>
>
>
>
> In every case where OpenMPI fails with the MPI_IN_PLACE code, I can go back
> to the original code that passes the same array twice instead of using
> MPI_IN_PLACE, and it is fine.
>
>
>
> I have made a test case doing an individual GATHERV with MPI_IN_PLACE, and
> it works with OpenMPI.  So it looks like there is some interaction with my
> code that is causing the problem. I have no idea how to go about trying to
> debug it.
>
>
>
>
>
> In summary:
>
>
>
> OpenMPI-1.6.5 crashes my code when I use GATHERV, AGATHERV, and SCATTERV
> with MPI_IN_PLACE.
>
> Intel MPI and MPICH2 work with my code when I use GATHERV, AGATHERV, and
> SCATTERV with MPI_IN_PLACE.
>
>
>
> OpenMPI-1.6.5 works with my code when I pass the same array to SEND_BUF and
> RECV_BUF instead of using MPI_IN_PLACE for those same GATHERV, AGATHERV, and
> SCATTERVs.
>
>
>
>
>
> -Charles
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Jeff Hammond
On Wed, Oct 23, 2013 at 12:02 PM, Barrett, Brian W  wrote:
> On 10/22/13 10:23 AM, "Jai Dayal"  wrote:

> I'm asking because I'm using an open_mpi build ontop of infiniband, and the
> maximum thread mode is MPI_THREAD_SINGLE.
>
>
> That doesn't seem right; which version of Open MPI are you using?

The last time I looked at this, OpenMPI only supported
MPI_THREAD_SINGLE by default and if you ask for any higher thread
level, you get MPI_THREAD_MULTIPLE, which requires a configure-time
switch.

Maybe something has changed dramatically more recently than I tested.
Squyres told me some thread-oriented refactoring was going on.  All of
this was over a year ago so it is entirely reasonable for me to be
wrong about all of this.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Jeff Hammond
And in practice the difference between FUNNELED and SERIALIZED will be
very small.  The differences might emerge from thread-local state and
thread-specific network registration, but I don't see this being
required.  Hence, for most purposes SINGLE=FUNNELED=SERIALIZED is
equivalent to NOMUTEX and MULTIPLE is MUTEX, where MUTEX refers to the
internal mutex required to make MPI reentrant.

Jeff

On Wed, Oct 23, 2013 at 1:18 PM, Tim Prince  wrote:
> On 10/23/2013 01:02 PM, Barrett, Brian W wrote:
>
> On 10/22/13 10:23 AM, "Jai Dayal"  wrote:
>
> I, for the life of me, can't understand the difference between these two
> init_thread modes.
>
> MPI_THREAD_SINGLE states that "only one thread will execute", but
> MPI_THREAD_FUNNELED states "The process may be multi-threaded, but only the
> main thread will make MPI calls (all MPI calls are funneled to the main
> thread)."
>
> If I use MPI_THREAD_SINGLE, and just create a bunch of pthreads that dumbly
> loop in the background, the MPI library will have no way of detecting this,
> nor should this have any affects on the machine.
>
> This is exactly the same as MPI_THREAD_FUNNELED. What exactly does it mean
> with "only one thread will execute?" The openmpi library has absolutely zero
> way of knowng I've spawned other pthreads, and since these pthreads aren't
> actually doing MPI communication, I fail to see how this would interfere.
>
>
> Technically, if you call MPI_INIT_THREAD with MPI_THREAD_SINGLE, you have
> made a promise that you will not create any other threads in your
> application.  There was a time where OSes shipped threaded and non-threaded
> malloc, for example, so knowing that might be important for that last bit of
> performance.  There are also some obscure corner cases of the memory model
> of some architectures where you might get unexpected results if you made an
> MPI Receive call in an thread and accessed that buffer later from another
> thread, which may require memory barriers inside the implementation, so
> there could be some differences between SINGLE and FUNNELED due to those
> barriers.
>
> In Open MPI, we'll handle those corner cases whether you init for SINGLE or
> FUNNELED, so there's really no practical difference for Open MPI, but you're
> then slightly less portable.
>
> I'm asking because I'm using an open_mpi build ontop of infiniband, and the
> maximum thread mode is MPI_THREAD_SINGLE.
>
>
> That doesn't seem right; which version of Open MPI are you using?
>
> Brian
>
>
>
> As Brian said, you aren't likely to be running on a system like Windows 98
> where non-thread-safe libraries were preferred.  My colleagues at NASA
> insist that any properly built MPI will support MPI_THREAD_FUNNELED by
> default, even when the documentation says explicit setting in
> MPI_Init_thread() is mandatory.  The statement which I see in OpenMPI doc
> says all MPI calls must be made by the thread which calls MPI_Init_thread.
> Apparently it will work if plain MPI_Init is used instead.  This theory
> appears to hold up for all the MPI implementations of interest.  The
> additional threads referred to are "inside the MPI rank," although I suppose
> additional application threads not involved with MPI are possible.
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] Prototypes for Fortran MPI_ commands using 64-bit indexing

2013-10-31 Thread Jeff Hammond
Stupid question:

Why not just make your first level internal API equivalent to the MPI
public API except for s/int/size_t/g and have the Fortran bindings
drop directly into that?  Going through the C int-erface seems like a
recipe for endless pain...

Jeff

On Thu, Oct 31, 2013 at 4:05 PM, Jeff Squyres (jsquyres)
 wrote:
> On Oct 30, 2013, at 11:55 PM, Jim Parker  wrote:
>
>> Perhaps I should start with the most pressing issue for me.  I need 64-bit 
>> indexing
>>
>> @Martin,
>>you indicated that even if I get this up and running, the MPI library 
>> still uses signed 32-bit ints to count (your term), or index (my term) the 
>> recvbuffer lengths.  More concretely,
>> in a call to MPI_Allgatherv( buffer, count, MPI_Integer, recvbuf, 
>> recv-count, displ, MPI_integer, MPI_COMM_WORLD, status, mpierr): count, 
>> recvcounts, and displs must be  32-bit integers, not 64-bit.  Actually, all 
>> I need is displs to hold 64-bit values...
>> If this is true, then compiling OpenMPI this way is not a solution.  I'll 
>> have to restructure my code to collect 31-bit chunks...
>> Not that it matters, but I'm not using DIRAC, but a custom code to compute 
>> circuit analyses.
>
> Yes, that is correct -- the MPI specification makes us use C "int" for outer 
> level count specifications.  We do use larger than that internally, though.
>
> The common workaround for this is to make your own MPI datatype -- perhaps an 
> MPI_TYPE_VECTOR -- that strings together N contiguous datatypes, and then 
> send M of those.
>
> For example, say you need to send 8B (billion) contiguous INTEGERs.  You 
> obviously can't represent 8B with a C int (or a 4 byte Fortran INTEGER).  So 
> what you would do is something like this (forgive me -- I'm a C guy):
>
> -
> int my_buffer[8 billion];
> MPI_Datatype my_type;
> // This makes a datatype of 8 contiguous int's
> MPI_Type_vector(1, 8, 0, MPI_INT, &my_type);
> MPI_Type_commit(&my_type);
> MPI_Send(my_buffer, 1000000000, my_type, ...);
> -
>
> This basically sends 1B types that are 8 int's long, and is therefore an 8B 
> int message.
>
> Make sense?
>
>> @Jeff,
>>   Interesting, your runtime behavior has a different error than mine.  You 
>> have problems with the passed variable tempInt, which would make sense for 
>> the reasons you gave.  However, my problem involves the fact that the local 
>> variable "rank" gets overwritten by a memory corruption after MPI_RECV is 
>> called.
>
> Odd.  :-\
>
>> Re: config.log. I will try to have the admin guy recompile tomorrow and see 
>> if I can get the log for you.
>>
>> BTW, I'm using the gcc 4.7.2 compiler suite on a Rocks 5.4 HPC cluster.  I 
>> use the options -m64 and -fdefault-integer-8
>
> Ok.  I was using icc/ifort with -m64 and -i8.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] MPI_File_write hangs on NFS-mounted filesystem

2013-11-07 Thread Jeff Hammond
That's a relatively old version of OMPI.  Maybe try the latest
release? That's always the safe bet since the issue might have been
fixed already.

I recall that OMPI uses ROMIO so you might try to reproduce with MPICH
so you can report it to the people that wrote the MPI-IO code. Of
course, this might not be an issue with ROMIO itself.  Trying with
MPICH is a good way to verify that.

Best,

Jeff

Sent from my iPhone

On Nov 7, 2013, at 10:55 AM, Steven G Johnson  wrote:

> The simple C program attached below hangs on MPI_File_write when I am using 
> an NFS-mounted filesystem.   Is MPI-IO supported in OpenMPI for NFS 
> filesystems?
>
> I'm using OpenMPI 1.4.5 on Debian stable (wheezy), 64-bit Opteron CPU, Linux 
> 3.2.51.   I was surprised by this because the problems only started occurring 
> recently when I upgraded my Debian system to wheezy; with OpenMPI in the 
> previous Debian release, output to NFS-mounted filesystems worked fine.
>
> Is there any easy way to get this working?  Any tips are appreciated.
>
> Regards,
> Steven G. Johnson
>
> ---
> #include <mpi.h>
> #include <stdio.h>
> #include <string.h>
>
> void perr(const char *label, int err)
> {
>char s[MPI_MAX_ERROR_STRING];
>int len;
>MPI_Error_string(err, s, &len);
>printf("%s: %d = %s\n", label, err, s);
> }
>
> int main(int argc, char **argv)
> {
>MPI_Init(&argc, &argv);
>
>MPI_File fh;
>int err;
>err = MPI_File_open(MPI_COMM_WORLD, "tstmpiio.dat", MPI_MODE_CREATE | 
> MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
>perr("open", err);
>
>const char s[] = "Hello world!\n";
>MPI_Status status;
>err = MPI_File_write(fh, (void*) s, strlen(s), MPI_CHAR, &status);
>perr("write", err);
>
>err = MPI_File_close(&fh);
>perr("close", err);
>
>MPI_Finalize();
>return 0;
> }
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Reducing Varying Length Arrays

2013-11-09 Thread Jeff Hammond
MPI_{Gather,Allgather}v are appropriate for this. See docs for details.
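
A rough C sketch of the pattern (illustrative; `mystr` is the local
string, and the usual <stdlib.h>/<string.h> includes plus an
MPI_Comm_size call for `nproc` are assumed):

-----
/* Rough sketch: concatenate one string per rank onto every rank.
 * Step 1: share the lengths.  Step 2: turn them into displacements.
 * Step 3: gather the characters with MPI_Allgatherv. */
int len = (int) strlen(mystr);
int *lens  = (int *) malloc(nproc * sizeof(int));
int *displ = (int *) malloc(nproc * sizeof(int));
MPI_Allgather(&len, 1, MPI_INT, lens, 1, MPI_INT, MPI_COMM_WORLD);

int total = 0;
for (int i = 0; i < nproc; i++) { displ[i] = total; total += lens[i]; }

char *all = (char *) malloc(total + 1);
MPI_Allgatherv(mystr, len, MPI_CHAR, all, lens, displ, MPI_CHAR,
               MPI_COMM_WORLD);
all[total] = '\0';   /* e.g. "hiworld" on every rank, in rank order */
-----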

Jeff

Sent from my iPhone

On Nov 9, 2013, at 6:15 PM, Saliya Ekanayake  wrote:

Hi,

I want to concatenate a bunch of strings from MPI processes. For example, say
with 2 processes,

P1 has text "hi"
P2 has text "world"

I have these stored as char arrays in processes. Is there a simple way to
do a reduce operation to concat these?

Thank you,
Saliya

-- 
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Bug MPI_Iscatter

2013-11-21 Thread Jeff Hammond
This program makes no sense and is wrong in multiple ways.

Jeff

On Thu, Nov 21, 2013 at 4:19 PM, Pierre Jolivet  wrote:
> Hello,
> The following code doesn’t execute properly :
> #include <mpi.h>
>
> int main(int argc, char** argv) {
> int  taskid, ntasks;
> MPI_Init(&argc, &argv);
> MPI_Request rq;
>
> MPI_Comm_rank(MPI_COMM_WORLD,&taskid);
> MPI_Comm_size(MPI_COMM_WORLD,&ntasks);
> double* r;
> int l = 0;
> if(taskid > 0)
> MPI_Iscatter(NULL, 0, MPI_DATATYPE_NULL, r, l, MPI_DOUBLE, 0, 
> MPI_COMM_WORLD, &rq);
> MPI_Finalize();
> }
>
> Outcome:
> *** An error occurred in MPI_Type_extent
> *** MPI_ERR_TYPE: invalid datatype
>
> Hotfix: change MPI_DATATYPE_NULL to something else.
>
> Thanks for a quick fix.
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] Multi-threading support for openib

2013-11-27 Thread Jeff Hammond
See slide 22 of http://www.open-mpi.org/papers/sc-2013/Open-MPI-SC13-BOF.pdf

Jeff

On Wed, Nov 27, 2013 at 8:59 AM, Ralph Castain  wrote:
> Openib does not currently support thread multiple - hopefully in 1.9 series
>
> Sent from my iPhone
>
> On Nov 27, 2013, at 7:14 AM, Daniel Cámpora  wrote:
>
> Dear list,
>
> I've gone through several hours of configuring and testing to get a grasp of
> the current status for multi-threading support.
>
> I want to use a program with MPI_THREAD_MULTIPLE, over the openib BTL. I'm
> using openmpi-1.6.5 and SLC6 (rhel6), for what's worth.
>
> Upon configuring my own openmpi library, if I just
> --enable-mpi-thread-multiple, and execute my program with -mca btl openib,
> it simply crashes upon openib not supporting MPI_THREAD_MULTIPLE.
>
> I've only started testing with --enable-opal-multi-threads, just in case it
> would help me. Configuring with the aforementioned options,
> ./configure --enable-mpi-thread-multiple --enable-opal-multi-threads
>
> results in a crash whenever executing my program,
>
> $ mpirun -np 2 -mca mca_component_path
> /usr/mpi/gcc/openmpi-1.6.5/lib64/openmpi -mca btl openib -mca
> btl_openib_warn_default_gid_prefix 0 -mca btl_base_verbose 100 -mca
> btl_openib_verbose 100 -machinefile machinefile.labs `pwd`/em_bu_app 2>&1 |
> less
> --
> It looks like opal_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
>
>   opal_shmem_base_select failed
>   --> Returned value -1 instead of OPAL_SUCCESS
> --
> [lab14:13672] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file
> runtime/orte_init.c at line 79
> [lab14:13672] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file orterun.c at
> line 694
>
>
> Several questions related to these. Does --enable-opal-multi-threads have
> any impact on the BTL multi-threading support? (If there's more
> documentation on what this does I'd be glad to read it).
>
> Is there any additional configuration tag necessary for enabling
> opal-multi-threads to work?
>
> Cheers, thanks a lot!
>
> Daniel
>
> --
> Daniel Hugo Cámpora Pérez
> European Organization for Nuclear Research (CERN)
> PH LBC, LHCb Online Fellow
> e-mail. dcamp...@cern.ch
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] Segmentation fault on OMPI 1.6.5 built with gcc 4.4.7 and PGI pgfortran 11.10

2013-12-24 Thread Jeff Hammond
Try pure PGI and pure GCC builds. If only the mixed one fails, then I saw a 
problem like this in MPICH a few months ago. It appears PGI does not play nice 
with GCC regarding the C standard library functions. Or at least that's what I 
concluded. The issue remains unresolved. 

Jeff

Sent from my iPhone

> On Dec 24, 2013, at 5:10 AM, "Jeff Squyres (jsquyres)"  
> wrote:
> 
> I'm *very loosely* checking email.  :-)
> 
> Agree with what Ralph said: it looks like your program called memalign, and 
> that ended up segv'ing.  That could be an OMPI problem, or it could be an 
> application problem.  Try also configuring OMPI --with-valgrind and running 
> your app through a memory-checking debugger (although OMPI is not very 
> valgrind-clean in the 1.6 series :-\ -- you'll get a bunch of false positives 
> about reads from unallocated and memory being left still-allocated after 
> MPI_FINALIZE).
> 
> 
> 
>> On Dec 23, 2013, at 7:17 PM, Ralph Castain  wrote:
>> 
>> I fear that Jeff and Brian are both out for the holiday, Gus, so we are 
>> unlikely to have much info on this until they return
>> 
>> I'm unaware of any such problems in 1.6.5. It looks like something isn't 
>> properly aligned in memory - could be an error on our part, but might be in 
>> the program. You might want to build a debug version and see if that 
>> segfaults, and then look at the core with gdb to see where it happened.
>> 
>> 
>>> On Dec 23, 2013, at 3:27 PM, Gus Correa  wrote:
>>> 
>>> Dear OMPI experts
>>> 
>>> I have been using OMPI 1.6.5 built with gcc 4.4.7 and
>>> PGI pgfortran 11.10 to successfully compile and run
>>> a large climate modeling program (CESM) in several
>>> different configurations.
>>> 
>>> However, today I hit a segmentation fault when running a new model 
>>> configuration.
>>> [In the climate modeling jargon, a program is called a "model".]
>>> 
>>> This is somewhat unpleasant because that OMPI build
>>> is a central piece of the production CESM model setup available
>>> to all users in our two clusters at this point.
>>> I have other OMPI 1.6.5 builds, with other compilers, but that one
>>> was working very well with CESM, until today.
>>> 
>>> Unless I am misinterpreting it, the error message,
>>> reproduced below, seems to indicate the problem
>>> happened inside the OMPI library.
>>> Or not?
>>> 
>>> Other details:
>>> 
>>> Nodes are AMD Opteron 6376 x86_64, interconnect is Infiniband QDR,
>>> OS is stock CentOS 6.4, kernel 2.6.32-358.2.1.el6.x86_64.
>>> The program is compiled with the OMPI wrappers (mpicc and mpif90),
>>> and somewhat conservative optimization flags:
>>> 
>>> FFLAGS   := $(CPPDEFS) -i4 -gopt -Mlist -Mextend -byteswapio 
>>> -Minform=inform -traceback -O2 -Mvect=nosse -Kieee
>>> 
>>> Is this a known issue?
>>> Any clues on how to address it?
>>> 
>>> Thank you for your help,
>>> Gus Correa
>>> 
>>>  error message ***
>>> 
>>> [1,31]:[node30:17008] *** Process received signal ***
>>> [1,31]:[node30:17008] Signal: Segmentation fault (11)
>>> [1,31]:[node30:17008] Signal code: Address not mapped (1)
>>> [1,31]:[node30:17008] Failing at address: 0x17
>>> [1,31]:[node30:17008] [ 0] /lib64/libpthread.so.0(+0xf500) 
>>> [0x2b788ef9f500]
>>> [1,31]:[node30:17008] [ 1] 
>>> /sw/openmpi/1.6.5/gnu-4.4.7-pgi-11.10/lib/libmpi.so.1(+0x100ee3) 
>>> [0x2b788e200ee3]
>>> [1,31]:[node30:17008] [ 2] 
>>> /sw/openmpi/1.6.5/gnu-4.4.7-pgi-11.10/lib/libmpi.so.1(opal_memory_ptmalloc2_int_malloc+0x111)
>>>  [0x2b788e203771]
>>> [1,31]:[node30:17008] [ 3] 
>>> /sw/openmpi/1.6.5/gnu-4.4.7-pgi-11.10/lib/libmpi.so.1(opal_memory_ptmalloc2_int_memalign+0x97)
>>>  [0x2b788e2046d7]
>>> [1,31]:[node30:17008] [ 4] 
>>> /sw/openmpi/1.6.5/gnu-4.4.7-pgi-11.10/lib/libmpi.so.1(opal_memory_ptmalloc2_memalign+0x8b)
>>>  [0x2b788e2052ab]
>>> [1,31]:[node30:17008] [ 5] ./ccsm.exe(pgf90_auto_alloc+0x73) 
>>> [0xe2c4c3]
>>> [1,31]:[node30:17008] *** End of error message ***
>>> --
>>> mpiexec noticed that process rank 31 with PID 17008 on node node30 exited 
>>> on signal 11 (Segmentation fault).
>>> --
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] globally unique 64bit unsigned integer (homogenous)

2014-01-03 Thread Jeff Hammond
Unique to each process?

Try this:

int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
uint64_t unique = rank;

To get additional unique values:

int size;
MPI_Comm_size(MPI_COMM_WORLD, &size);
unique += size;

If this is insufficient, please ask the question differently.

There is no canonical method for this. 
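
Put together, a minimal C sketch of that pattern (illustrative only;
assumes mpi.h and that every process uses it the same way):

-----
/* Illustrative only: each call returns a value no other process can
 * return, because process `rank` hands out rank, rank+size, rank+2*size, ... */
#include <stdint.h>

uint64_t next_unique(void)
{
    static uint64_t next;
    static int initialized = 0;
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (!initialized) { next = (uint64_t) rank; initialized = 1; }
    uint64_t id = next;
    next += (uint64_t) size;
    return id;
}
-----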

Jeff

Sent from my iPhone

> On Jan 3, 2014, at 3:50 AM, MM  wrote:
> 
> Hello,
> Is there a canonical way to obtain a globally unique 64bit unsigned integer 
> across all mpi processes, multiple times?
> 
> Thanks
> 
> MM
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Calling a variable from another processor

2014-01-09 Thread Jeff Hammond
One sided is quite simple to understand. It is like file I/O. You read/write
(get/put) to a memory object. If you want to make it hard to screw up, use
passive target and wrap your calls in lock/unlock so every operation is
globally visible where it's called.
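
A rough C sketch of what I mean (not compiled; `win` is a window
exposing the remote array, Y is the target rank, Z the element offset):

-----
/* Rough sketch (not compiled): passive-target read of one element of a
 * remote array exposed through `win`.  The lock/unlock pair makes the
 * get complete and globally visible before we use `val`. */
double val;
MPI_Win_lock(MPI_LOCK_SHARED, Y, 0, win);
MPI_Get(&val, 1, MPI_DOUBLE, Y, /* target_disp = */ Z, 1, MPI_DOUBLE, win);
MPI_Win_unlock(Y, win);
/* val now holds A(Z) from rank Y (use Z-1 if the array is 1-based) */
-----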

I've never deadlocked RMA while p2p  is easy to hang for nontrivial patterns 
unless you only do nonblocking plus waitall. 

If one finds MPI too hard to learn, there are both GA/ARMCI and OpenSHMEM 
implementations over MPI-3 already (I wrote both...). 

The bigger issue is that OpenMPI doesn't support MPI-3 RMA, just the MPI-2 RMA 
stuff, and even then, datatypes are broken with RMA. Both ARMCI-MPI3 and OSHMPI 
(OpenSHMEM over MPI-3) require a late-model MPICH-derivative to work, but these 
are readily available on every platform normal people use (BGQ is the only 
system missing, and that will be resolved soon). I've run MPI-3 on my Mac 
(MPICH), clusters (MVAPICH), Cray (CrayMPI), and SGI (MPICH).

Best,

Jeff

Sent from my iPhone

> On Jan 9, 2014, at 5:39 AM, "Jeff Squyres (jsquyres)"  
> wrote:
> 
> MPI one-sided stuff is actually pretty complicated; I wouldn't suggest it for 
> a beginner (I don't even recommend it for many MPI experts ;-) ).
> 
> Why not look at the MPI_SOURCE in the status that you got back from the 
> MPI_RECV?  In fortran, it would look something like (typed off the top of my 
> head; forgive typos):
> 
> -
> integer, dimension(MPI_STATUS_SIZE) :: status
> ...
> call MPI_Recv(buffer, ..., status, ierr)
> -
> 
> The rank of the sender will be in status(MPI_SOURCE).
> 
> 
>> On Jan 9, 2014, at 6:29 AM, Christoph Niethammer  wrote:
>> 
>> Hello,
>> 
>> I suggest you have a look onto the MPI one-sided functionality (Section 11 
>> of the MPI Spec 3.0).
>> Create a window to allow the other processes to access the arrays A directly 
>> via MPI_Get/MPI_Put.
>> Be aware of synchronization which you have to implement via MPI_Win_fence or 
>> manual locking.
>> 
>> Regards
>> Christoph
>> 
>> --
>> 
>> Christoph Niethammer
>> High Performance Computing Center Stuttgart (HLRS)
>> Nobelstrasse 19
>> 70569 Stuttgart
>> 
>> Tel: ++49(0)711-685-87203
>> email: nietham...@hlrs.de
>> http://www.hlrs.de/people/niethammer
>> 
>> 
>> 
>> - Ursprüngliche Mail -
>> Von: "Pradeep Jha" 
>> An: "Open MPI Users" 
>> Gesendet: Donnerstag, 9. Januar 2014 12:10:51
>> Betreff: [OMPI users] Calling a variable from another processor
>> 
>> 
>> 
>> 
>> 
>> I am writing a parallel program in Fortran77. I have the following problem: 
>> 1) I have N number of processors.
>> 2) Each processor contains an array A of size S.
>> 3) Using some function, on every processor (say rank X), I calculate the 
>> value of two integers Y and Z, where Z<S and Y<N (Y and Z are different on every processor)
>> 4) I want to get the value of A(Z) on processor Y to processor X. 
>> 
>> I thought of first sending the numerical value X to processor Y from 
>> processor X and then sending A(Z) from processor Y to processor X. But it is 
>> not possible as processor Y does not know the numerical value X and so it 
>> won't know from which processor to receive the numerical value X from. 
>> 
>> I tried but I haven't been able to come up with any code which can implement 
>> this action. So I am not posting any codes. 
>> 
>> Any suggestions? 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] Calling a variable from another processor

2014-01-17 Thread Jeff Hammond
The attached version is modified to use passive target, which does not
require collective synchronization for remote access.

Note that I didn't compile and run this and don't write MPI in Fortran
so there may be syntax errors.

Jeff

On Thu, Jan 16, 2014 at 11:03 AM, Christoph Niethammer
 wrote:
> Hello,
>
> Find attached a minimal example - hopefully doing what you intended.
>
> Regards
> Christoph
>
> --
>
> Christoph Niethammer
> High Performance Computing Center Stuttgart (HLRS)
> Nobelstrasse 19
> 70569 Stuttgart
>
> Tel: ++49(0)711-685-87203
> email: nietham...@hlrs.de
> http://www.hlrs.de/people/niethammer
>
>
>
> - Ursprüngliche Mail -
> Von: "Pradeep Jha" 
> An: "Open MPI Users" 
> Gesendet: Freitag, 10. Januar 2014 10:23:40
> Betreff: Re: [OMPI users] Calling a variable from another processor
>
>
>
> Thanks for your responses. I am still not able to figure it out. I will 
> further simply my problem statement. Can someone please help me with a 
> fortran90 code for that.
>
>
> 1) I have N processors each with an array A of size S
> 2) On any random processor (say rank X), I calculate the two integer values,
> Y and Z. (0<=Y<N, 0<=Z<S)
> 3) On processor X, I want to get the value of A(Z) on processor Y.
>
>
> This operation will happen parallely on each processor. Can anyone please 
> help me with this?
>
>
>
>
>
>
>
> 2014/1/9 Jeff Hammond < jeff.scie...@gmail.com >
>
>
> One sided is quite simple to understand. It is like file io. You read/write 
> (get/put) to a memory object. If you want to make it hard to screw up, use 
> passive target bss wrap you calls in lock/unlock so every operation is 
> globally visible where it's called.
>
> I've never deadlocked RMA while p2p is easy to hang for nontrivial patterns 
> unless you only do nonblocking plus waitall.
>
> If one finds MPI too hard to learn, there are both GA/ARMCI and OpenSHMEM 
> implementations over MPI-3 already (I wrote both...).
>
> The bigger issue is that OpenMPI doesn't support MPI-3 RMA, just the MPI-2 
> RMA stuff, and even then, datatypes are broken with RMA. Both ARMCI-MPI3 and 
> OSHMPI (OpenSHMEM over MPI-3) require a late-model MPICH-derivative to work, 
> but these are readily available on every platform normal people use (BGQ is 
> the only system missing, and that will be resolved soon). I've run MPI-3 on 
> my Mac (MPICH), clusters (MVAPICH), Cray (CrayMPI), and SGI (MPICH).
>
> Best,
>
> Jeff
>
> Sent from my iPhone
>
>
>
>> On Jan 9, 2014, at 5:39 AM, "Jeff Squyres (jsquyres)" < jsquy...@cisco.com > 
>> wrote:
>>
>> MPI one-sided stuff is actually pretty complicated; I wouldn't suggest it 
>> for a beginner (I don't even recommend it for many MPI experts ;-) ).
>>
>> Why not look at the MPI_SOURCE in the status that you got back from the 
>> MPI_RECV? In fortran, it would look something like (typed off the top of my 
>> head; forgive typos):
>>
>> -
>> integer, dimension(MPI_STATUS_SIZE) :: status
>> ...
>> call MPI_Recv(buffer, ..., status, ierr)
>> -
>>
>> The rank of the sender will be in status(MPI_SOURCE).
>>
>>
>>> On Jan 9, 2014, at 6:29 AM, Christoph Niethammer < nietham...@hlrs.de > 
>>> wrote:
>>>
>>> Hello,
>>>
>>> I suggest you have a look onto the MPI one-sided functionality (Section 11 
>>> of the MPI Spec 3.0).
>>> Create a window to allow the other processes to access the arrays A 
>>> directly via MPI_Get/MPI_Put.
>>> Be aware of synchronization which you have to implement via MPI_Win_fence 
>>> or manual locking.
>>>
>>> Regards
>>> Christoph
>>>
>>> --
>>>
>>> Christoph Niethammer
>>> High Performance Computing Center Stuttgart (HLRS)
>>> Nobelstrasse 19
>>> 70569 Stuttgart
>>>
>>> Tel: ++49(0)711-685-87203
>>> email: nietham...@hlrs.de
>>> http://www.hlrs.de/people/niethammer
>>>
>>>
>>>
>>> - Ursprüngliche Mail -
>>> Von: "Pradeep Jha" < prad...@ccs.engg.nagoya-u.ac.jp >
>>> An: "Open MPI Users" < us...@open-mpi.org >
>>> Gesendet: Donnerstag, 9. Januar 2014 12:10:51
>>> Betreff: [OMPI users] Calling a variable from another processor
>>>
>>>
>>>
>>>
>>>
>>> I am writing a parallel program in Fortran77. I have the following problem: 
>>> 1) I have N number of processors.

Re: [OMPI users] Use of __float128 with openmpi

2014-02-01 Thread Jeff Hammond
See Section 5.9.5 of MPI-3 or the section named "User-Defined
Reduction Operations" but presumably numbered differently in older
copies of the MPI standard.

An older but still relevant online reference is
http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report/node107.htm
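
As a rough illustration (untested; assumes gcc's __float128 and that
`local` and `global` are __float128 variables on every rank), the
workaround looks like this in C:

-----
/* Untested sketch: reduce __float128 with a user-defined op.  MPI only
 * moves the bytes; the user function does the quad-precision math. */
void qsum(void *in, void *inout, int *len, MPI_Datatype *dt)
{
    __float128 *a = (__float128 *) in;
    __float128 *b = (__float128 *) inout;
    for (int i = 0; i < *len; i++) b[i] += a[i];
}

/* ... at the call site ... */
MPI_Datatype qtype;
MPI_Op qsum_op;
MPI_Type_contiguous((int) sizeof(__float128), MPI_BYTE, &qtype);
MPI_Type_commit(&qtype);
MPI_Op_create(qsum, 1 /* commutative */, &qsum_op);
MPI_Allreduce(&local, &global, 1, qtype, qsum_op, MPI_COMM_WORLD);
-----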

There is a proposal to support __float128 in the MPI standard in the
future but it has not been formally considered by the MPI Forum yet
[https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/318].

Best,

Jeff

On Sat, Feb 1, 2014 at 2:28 PM, Tim Prince  wrote:
>
> On 02/01/2014 12:42 PM, Patrick Boehl wrote:
>>
>> Hi all,
>>
>> I have a question on datatypes in openmpi:
>>
>> Is there an (easy?) way to use __float128 variables with openmpi?
>>
>> Specifically, functions like
>>
>> MPI_Allreduce
>>
>> seem to give weird results with __float128.
>>
>> Essentially all I found was
>>
>> http://beige.ucs.indiana.edu/I590/node100.html
>>
>> where they state
>> 
>> MPI_LONG_DOUBLE
>>This is a quadruple precision, 128-bit long floating point number.
>> 
>>
>> But as far as I have seen, MPI_LONG_DOUBLE is only used for long doubles.
>>
>> The Open MPI Version is 1.6.3 and gcc is 4.7.3 on a x86_64 machine.
>>
> It seems unlikely that 10 year old course notes on an unspecified MPI
> implementation (hinted to be IBM power3) would deal with specific details of
> openmpi on a different architecture.
> Where openmpi refers to "portable C types" I would take long double to be
> the 80-bit hardware format you would have in a standard build of gcc for
> x86_64.  You should be able to gain some insight by examining your openmpi
> build logs to see if it builds for both __float80 and __float128 (or
> neither).  gfortran has a 128-bit data type (software floating point
> real(16), corresponding to __float128); you should be able to see in the
> build logs whether that data type was used.
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Jeff Hammond
jeff.scie...@gmail.com


Re: [OMPI users] [mpich-discuss] Buffered sends are evil?

2015-04-06 Thread Jeff Hammond
While we are tilting at windmills, can we also discuss the evils of
MPI_Cancel for MPI_Send, everything about MPI_Alltoallw, how
MPI_Reduce_scatter is named wrong, and any number of other pet peeves
that people have about MPI-3? :-D

The MPI standard contains many useful functions and at least a handful
of stupid ones.  This is remarkably similar to other outputs of the
design-by-committee process and can be observed in OpenMP 4.0, C++14,
Fortran 2008, and probably every other standardized parallel
programming interface in use today.

Fortunately, judicious parallel programmers know that less is more and
generally focus on using the useful functions effectively, while
ignoring the less useful ones, and it's usually not hard to tell the
difference.

Jeff

PS I used MPI_Bsend once and found it superior to the alternative of
MPI_Isend+MPI_Request_free for sending fire-and-forget acks, because
it forced the implementation to do what I wanted and the performance
improved as a result.
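
A minimal sketch of that kind of fire-and-forget ack (illustrative;
`dest`, `tag`, and the message count are made up):

-----
/* Illustrative: attach a buffer sized for the acks expected to be in
 * flight, then MPI_Bsend returns as soon as the data is copied out. */
int ack = 1;
int max_in_flight = 1000;
int bufsize = max_in_flight * (int) (sizeof(int) + MPI_BSEND_OVERHEAD);
void *buf = malloc(bufsize);
MPI_Buffer_attach(buf, bufsize);

MPI_Bsend(&ack, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);  /* no request to track */

/* ... at shutdown ... */
MPI_Buffer_detach(&buf, &bufsize);  /* blocks until buffered sends complete */
free(buf);
-----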

On Fri, Apr 3, 2015 at 8:34 AM, Jeff Squyres (jsquyres)
 wrote:
> Fair enough.
>
> My main point should probably be summarized as: MPI_BSEND isn't worth it; 
> there are other, less-confusing, generally-more-optimized alternatives.
>
>
>
>
>> On Apr 3, 2015, at 11:20 AM, Balaji, Pavan  wrote:
>>
>> Jeff,
>>
>> Your blog post seems to confuse what implementations currently do with what 
>> Bsend is capable of.  If users really wanted it (e.g., a big customer asked 
>> for it), every implementation will optimize the crap out of it.  The problem 
>> is that every few users really care for it, so there's not been a good 
>> incentive for implementations to optimize it.
>>
>> Coming to the technical aspects, bsend doesn't require copying into the user 
>> buffer, if you have internal buffer resources.  It only guarantees that 
>> Bsend will not block if enough user buffer space is available.  If you are 
>> blocking for progress anyway, I'm not sure the extra copy would matter too 
>> much -- it matters some, of course, but likely not to the extent of a full 
>> copy cost.  Also, the semantics it provides are different -- guaranteed 
>> nonblocking nature when there's buffer space available.  It's like saying 
>> Ssend is not as efficient as send.  That's true, but those are different 
>> semantics.
>>
>> Having said that, I do agree with some of the shortcomings you pointed out 
>> -- specifically, you can only attach one buffer.  I'd add to the list with 
>> one more shortcoming: It's not communicator safe.  That is, if I attach a 
>> buffer, some other library I linked with might chew up my buffer space.  So 
>> the nonblocking guarantee is kind of bogus at that point.
>>
>>  -- Pavan
>>
>>> On Apr 3, 2015, at 5:30 AM, Jeff Squyres (jsquyres)  
>>> wrote:
>>>
>>> Yes.  I think the blog post gives 10 excellent reasons why.  :-)
>>>
>>>
>>>> On Apr 3, 2015, at 2:40 AM, Lei Shi  wrote:
>>>>
>>>> Hello,
>>>>
>>>> I want to use buffered sends. Read a blog said it is evil, 
>>>> http://blogs.cisco.com/performance/top-10-reasons-why-buffered-sends-are-evil
>>>>
>>>> Is it true or not? Thanks!
>>>>
>>>> Sincerely Yours,
>>>>
>>>> Lei Shi
>>>> -
>>>>
>>>> ___
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>> Link to this post: 
>>>> http://www.open-mpi.org/community/lists/users/2015/04/26597.php
>>>
>>>
>>> --
>>> Jeff Squyres
>>> jsquy...@cisco.com
>>> For corporate legal information go to: 
>>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>>
>>> ___
>>> discuss mailing list disc...@mpich.org
>>> To manage subscription options or unsubscribe:
>>> https://lists.mpich.org/mailman/listinfo/discuss
>>
>> --
>> Pavan Balaji  ✉️
>> http://www.mcs.anl.gov/~balaji
>>
>> ___
>> discuss mailing list disc...@mpich.org
>> To manage subscription options or unsubscribe:
>> https://lists.mpich.org/mailman/listinfo/discuss
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
> ___
> discuss mailing list disc...@mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


[OMPI users] Ubuntu/Debian packages for recent version (for Travis CI support)

2015-11-02 Thread Jeff Hammond
I set up Travis CI support for ARMCI-MPI, but the available version in
whatever Ubuntu they use is buggy.  For example:
https://travis-ci.org/jeffhammond/armci-mpi/jobs/0279.

I have not checked lately, but I believe that Nathan (and perhaps others)
have fixed most if not all of the bugs that were blocking ARMCI-MPI from
working.  Thus, I'd like to use a recent version of OpenMPI, but I do not
want to have to have Travis build it from source for every instance.

Can anyone suggest easy alternatives to building from source?  Are there
deb files online somewhere, perhaps provided by a third-party as for
MPICH?  Perhaps there is some obvious trick to get the latest OpenMPI via
apt-get.  However, since none of my machines run Ubuntu/Debian anymore, I
cannot easily test this, and I do not want to play guess-and-check via
repeated pushes to Github to fire up Travis builds.

If someone knows an easy way to get a late-model OpenMPI in Travis using a
method other than what I've indicated above, by all means suggest that.  I
am still new to Travis CI and would be happy to learn new things.

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


[OMPI users] Ubuntu/Debian packages for recent version (for Travis CI support)

2015-11-06 Thread Jeff Hammond
Gilles:

Regarding http://www.open-mpi.org/community/lists/users/2015/11/27999.php,
I am looking for a package with RMA bug fixes from last summer.  I will
start with the git HEAD and work backwards.

Dave:

Regarding http://www.open-mpi.org/community/lists/users/2015/11/27981.php...

The ARMCI-MPI unit test tests/mpi/test_mpi_subarray_accs fails with
Open-MPI.  This is not too surprising for an older version, since there
were bugs in RMA+Datatypes as of April/May 2014.

Anyways, I will figure out what version of Open-MPI works with ARMCI-MPI
and then build that one from source in Travis.

Sorry for the lack of explicit context in this reply, but I am signed up to
this list (and many others) in no-email mode.

Best,

Jeff

--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


[OMPI users] help understand unhelpful ORTE error message

2015-11-19 Thread Jeff Hammond
I have no idea what this is trying to tell me.  Help?

jhammond@nid00024:~/MPI/qoit/collectives> mpirun -n 2 ./driver.x 64
[nid00024:00482] [[46168,0],0] ORTE_ERROR_LOG: Not found in file
../../../../../orte/mca/plm/alps/plm_alps_module.c at line 418

I can run the same job with srun without incident:

jhammond@nid00024:~/MPI/qoit/collectives> srun -n 2 ./driver.x 64
MPI was initialized.

This is on the NERSC Cori Cray XC40 system.  I build Open-MPI git head from
source for OFI libfabric.

I have many other issues, which I will report later.  As a spoiler, if I
cannot use your mpirun, I cannot set any of the MCA options there.  Is
there a method to set MCA options with environment variables?  I could not
find this documented anywhere.

In particular, is there a way to cause shm to not use the global
filesystem?  I see this issue comes up a lot and I read the list archives,
but the warning message (
https://github.com/hpc/cce-mpi-openmpi-1.6.4/blob/master/ompi/mca/common/sm/help-mpi-common-sm.txt)
suggested that I could override it by setting TMP, TEMP or TEMPDIR, which I
did to no avail.

Thanks,

Jeff

--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] help understand unhelpful ORTE error message

2015-11-19 Thread Jeff Hammond
>
>
> How did you configure for Cori?  You need to be using the slurm plm
> component for that system.  I know this sounds like gibberish.
>
>
../configure --with-libfabric=$HOME/OFI/install-ofi-gcc-gni-cori \
 --enable-mca-static=mtl-ofi \
 --enable-mca-no-build=btl-openib,btl-vader,btl-ugni,btl-tcp \
 --enable-static --disable-shared --disable-dlopen \
 --prefix=$HOME/MPI/install-ompi-ofi-gcc-gni-xpmem-cori \
 --with-cray-pmi --with-alps --with-cray-xpmem --with-slurm \
 --without-verbs --without-fca --without-mxm --without-ucx \
 --without-portals4 --without-psm --without-psm2 \
 --without-udreg --without-ugni --without-munge \
 --without-sge --without-loadleveler --without-tm --without-lsf \
 --without-pvfs2 --without-plfs \
 --without-cuda --disable-oshmem \
 --disable-mpi-fortran --disable-oshmem-fortran \
 LDFLAGS="-L/opt/cray/ugni/default/lib64 -lugni \
-L/opt/cray/alps/default/lib64 -lalps -lalpslli -lalpsutil
\  -ldl -lrt"


This is copied from
https://github.com/jeffhammond/HPCInfo/blob/master/ofi/README.md#open-mpi,
which I note in case you want to see what changes I've made at any point in
the future.


> There should be a with-slurm configure option to pick up this component.
>
> Indeed there is.


> Doesn't mpich have the option to use sysv memory?  You may want to try that
>
>
MPICH?  Look, I may have earned my way onto Santa's naughty list more than
a few times, but at least I have the decency not to post MPICH questions to
the Open-MPI list ;-)

If there is a way to tell Open-MPI to use shm_open without filesystem
backing (if that is even possible) at configure time, I'd love to do that.


> Oh for tuning params you can use env variables.  For example lets say
> rather than using the gni provider in ofi mtl you want to try sockets. Then
> do
>
> Export OMPI_MCA_mtl_ofi_provider_include=sockets
>
>
Thanks.  I'm glad that there is an option to set them this way.


> In the spirit OMPI - may the force be with you.
>
>
All I will say here is that Open-MPI has a Vader BTL :-)

>
> > On Thu 19.11.2015 09:44:20 Jeff Hammond wrote:
> > > I have no idea what this is trying to tell me. Help?
> > >
> > > jhammond@nid00024:~/MPI/qoit/collectives> mpirun -n 2 ./driver.x 64
> > > [nid00024:00482] [[46168,0],0] ORTE_ERROR_LOG: Not found in file
> > > ../../../../../orte/mca/plm/alps/plm_alps_module.c at line 418
> > >
> > > I can run the same job with srun without incident:
> > >
> > > jhammond@nid00024:~/MPI/qoit/collectives> srun -n 2 ./driver.x 64
> > > MPI was initialized.
> > >
> > > This is on the NERSC Cori Cray XC40 system. I build Open-MPI git head
> from
> > > source for OFI libfabric.
> > >
> > > I have many other issues, which I will report later. As a spoiler, if I
> > > cannot use your mpirun, I cannot set any of the MCA options there. Is
> > > there a method to set MCA options with environment variables? I could
> not
> > > find this documented anywhere.
> > >
> > > In particular, is there a way to cause shm to not use the global
> > > filesystem? I see this issue comes up a lot and I read the list
> archives,
> > > but the warning message (
> > >
> https://github.com/hpc/cce-mpi-openmpi-1.6.4/blob/master/ompi/mca/common/sm/
> > > help-mpi-common-sm.txt) suggested that I could override it by setting
> TMP,
> > > TEMP or TEMPDIR, which I did to no avail.
> >
> > From my experience on edison: the one environment variable that does
> works is TMPDIR - the one that is not listed in the error message :-)
>

That's great.  I will try that now.  Is there a Github issue open already
to fix that documentation?  If not...


> > Can't help you with your mpirun problem though ...
>
> No worries.  I appreciate all the help I can get.

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] help understand unhelpful ORTE error message

2015-11-19 Thread Jeff Hammond
On Thu, Nov 19, 2015 at 4:11 PM, Howard Pritchard 
wrote:

> Hi Jeff H.
>
> Why don't you just try configuring with
>
> ./configure --prefix=my_favorite_install_dir
> --with-libfabric=install_dir_for_libfabric
> make -j 8 install
>
> and see what happens?
>
>
That was the first thing I tried.  However, it seemed to give me a
Verbs-oriented build, and Verbs is the Sith lord to us JedOFIs :-)

From the aforementioned Wiki:

../configure \
 --with-libfabric=$HOME/OFI/install-ofi-gcc-gni-cori \
 --disable-shared \
 --prefix=$HOME/MPI/install-ompi-ofi-gcc-gni-cori

Unfortunately, this (above) leads to an mpicc that indicates support for IB
Verbs, not OFI.
I will try again though just in case.


> Make sure before you configure that you have PrgEnv-gnu or PrgEnv-intel
> module loaded.
>
>
Yeah, I know better than to use the Cray compilers for such things (e.g.
https://github.com/jeffhammond/OpenPA/commit/965ca014ea3148ee5349e16d2cec1024271a7415
)


> Those were the configure/compiler options I used to do testing of ofi mtl
> on cori.
>
> Jeff S. - this thread has gotten intermingled with mpich setup as well,
> hence
> the suggestion for the mpich shm mechanism.
>
>
The first OSS implementation of MPI that I can use on Cray XC using OFI
gets a prize at the December MPI Forum.

Best,

Jeff



> Howard
>
>
>
> 2015-11-19 16:59 GMT-07:00 Jeff Hammond :
>
>>
>>> How did you configure for Cori?  You need to be using the slurm plm
>>> component for that system.  I know this sounds like gibberish.
>>>
>>>
>> ../configure --with-libfabric=$HOME/OFI/install-ofi-gcc-gni-cori \
>>  --enable-mca-static=mtl-ofi \
>>  --enable-mca-no-build=btl-openib,btl-vader,btl-ugni,btl-tcp \
>>  --enable-static --disable-shared --disable-dlopen \
>>  --prefix=$HOME/MPI/install-ompi-ofi-gcc-gni-xpmem-cori \
>>  --with-cray-pmi --with-alps --with-cray-xpmem --with-slurm \
>>  --without-verbs --without-fca --without-mxm --without-ucx \
>>  --without-portals4 --without-psm --without-psm2 \
>>  --without-udreg --without-ugni --without-munge \
>>  --without-sge --without-loadleveler --without-tm --without-lsf \
>>  --without-pvfs2 --without-plfs \
>>  --without-cuda --disable-oshmem \
>>  --disable-mpi-fortran --disable-oshmem-fortran \
>>  LDFLAGS="-L/opt/cray/ugni/default/lib64 -lugni \
>>   -L/opt/cray/alps/default/lib64 -lalps -lalpslli -lalpsutil \   
>>   -ldl -lrt"
>>
>>
>> This is copied from
>> https://github.com/jeffhammond/HPCInfo/blob/master/ofi/README.md#open-mpi,
>> which I note in case you want to see what changes I've made at any point in
>> the future.
>>
>>
>>> There should be a with-slurm configure option to pick up this component.
>>>
>>> Indeed there is.
>>
>>
>>> Doesn't mpich have the option to use sysv memory?  You may want to try
>>> that
>>>
>>>
>> MPICH?  Look, I may have earned my way onto Santa's naughty list more
>> than a few times, but at least I have the decency not to post MPICH
>> questions to the Open-MPI list ;-)
>>
>> If there is a way to tell Open-MPI to use shm_open without filesystem
>> backing (if that is even possible) at configure time, I'd love to do that.
>>
>>
>>> Oh for tuning params you can use env variables.  For example lets say
>>> rather than using the gni provider in ofi mtl you want to try sockets. Then
>>> do
>>>
>>> Export OMPI_MCA_mtl_ofi_provider_include=sockets
>>>
>>>
>> Thanks.  I'm glad that there is an option to set them this way.
>>
>>
>>> In the spirit OMPI - may the force be with you.
>>>
>>>
>> All I will say here is that Open-MPI has a Vader BTL :-)
>>
>>>
>>> > On Thu 19.11.2015 09:44:20 Jeff Hammond wrote:
>>> > > I have no idea what this is trying to tell me. Help?
>>> > >
>>> > > jhammond@nid00024:~/MPI/qoit/collectives> mpirun -n 2 ./driver.x 64
>>> > > [nid00024:00482] [[46168,0],0] ORTE_ERROR_LOG: Not found in file
>>> > > ../../../../../orte/mca/plm/alps/plm_alps_module.c at line 418
>>> > >
>>> > > I can run the same job with srun without incident:
>>> > >
>>> > > jhammond@nid00024:~/MPI/qoit/collectives> srun -n 2 ./driver.x 64
>>

Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with Intel-Ftn-compiler

2015-11-24 Thread Jeff Hammond
..
>
> Btw, any reason why you do not
> Use mpi_f08 ?
>
> HTH
>
> Gilles
>
> michael.rach...@dlr.de wrote:
> Dear developers of OpenMPI,
>
> I am trying to run our parallelized Ftn-95 code on a Linux cluster with
> OpenMPI-1.10.0 and Intel-16.0.0 Fortran compiler.
> In the code I use the  module MPI  (“use MPI”-stmts).
>
> However I am not able to compile the code, because of compiler error
> messages like this:
>
> /src_SPRAY/mpi_wrapper.f90(2065): error #6285: There is no matching
> specific subroutine for this generic subroutine call.   [MPI_REDUCE]
>
>
> The problem seems for me to be this one:
>
> The interfaces in the module MPI for the MPI-routines do not accept a send
> or receive buffer argument which is actually a scalar variable, an array
> element or a constant (like MPI_IN_PLACE).
>
> Example 1:
>   This does not work (gives the compiler error message:  error #6285:
>   There is no matching specific subroutine for this generic subroutine
>   call):
>
>   ivar = 123     ! <-- ivar is an integer variable, not an array
>   call MPI_BCAST( ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, &
>                   ierr_mpi )   ! <-- this should work, but is not accepted by the compiler
>
>   Only this cumbersome workaround works:
>
>   ivar = 123
>   allocate( iarr(1) )
>   iarr(1) = ivar
>   call MPI_BCAST( iarr, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, &
>                   ierr_mpi )   ! <-- this workaround works
>   ivar = iarr(1)
>   deallocate( iarr )
>
> Example 2:
>   Any call of an MPI-routine with MPI_IN_PLACE does not work either, as in
>   this coding:
>
>   if(lmaster) then
>     call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &   ! <-- this should work, but is not accepted by the compiler
>                    , 0_INT4, MPI_COMM_WORLD, ierr_mpi )
>   else  ! slaves
>     call MPI_REDUCE( rbuffarr, rdummyarr, nelem, MPI_REAL8, MPI_MAX &
>                    , 0_INT4, MPI_COMM_WORLD, ierr_mpi )
>   endif
>
> This results in this compiler error message:
>
>   /src_SPRAY/mpi_wrapper.f90(2122): error #6285: There is no matching
>   specific subroutine for this generic subroutine call.   [MPI_REDUCE]
>   call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &
> -^
>
>
> In our code I observed the bug with MPI_BCAST, MPI_REDUCE, MPI_ALLREDUCE,
> but probably there may be other MPI-routines with the same kind of bug.
>
> This bug occurred for:                   OpenMPI-1.10.0  with  Intel-16.0.0
> In contrast, this bug did NOT occur for: OpenMPI-1.8.8   with  Intel-16.0.0
>                                          OpenMPI-1.8.8   with  Intel-15.0.3
>                                          OpenMPI-1.10.0  with  gfortran-5.2.0
>
> Greetings
> Michael Rachner
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/11/28052.php
>
>
>
> --
> Kind regards Nick
>
>
>
> ___
>
> users mailing list
>
> us...@open-mpi.org
>
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2015/11/28056.php
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/11/28098.php
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/11/28099.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] FW: Win_flush_all

2015-12-14 Thread Jeff Hammond
Bruce:

The ARMCI-MPI (mpi3rma branch) test suite is a good way to determine if an
MPI-3 implementation supports passive target RMA properly. I haven't tested
against OpenMPI recently but will add it to Travis CI before the year is
over.

Best,

Jeff

On Monday, December 14, 2015, Palmer, Bruce J  wrote:

> I’m trying to get some code working using request-based RMA (MPI_Rput,
> MPI_Rget, MPI_Raccumulate). My understanding of the MPI 3.0 standard is
> that after calling MPI_Wait on the request handle, the local buffers should
> be safe to use. On calling MPI_Win_flush_all on the window used for RMA
> operations, all operations should be completed on the remote processor.
> Based on this, I would expect that the following program should work:
>
>
>
> #include <mpi.h>
>
>
>
> int main(int argc, char *argv[])
>
> {
>
>   int bytes = 4096;
>
>   MPI_Win win;
>
>   void *buf;
>
>
>
>   MPI_Init(&argc, &argv);
>
>
>
>   MPI_Alloc_mem(bytes,MPI_INFO_NULL, &buf);
>
>   MPI_Win_create(buf,bytes,1,MPI_INFO_NULL,MPI_COMM_WORLD,&win);
>
>   MPI_Win_flush_all(win);
>
>
>
>   MPI_Win_free(&win);
>
>   MPI_Finalize();
>
>   return(0);
>
> }
>
>
>
> However, with openmpi-1.8.3 I’m seeing a crash
>
>
>
> [node302:3689] *** An error occurred in MPI_Win_flush_all
>
> [node302:3689] *** reported by process [2431516673,0]
>
> [node302:3689] *** on win rdma window 3
>
> [node302:3689] *** MPI_ERR_RMA_SYNC: error executing rma sync
>
> [node302:3689] *** MPI_ERRORS_ARE_FATAL (processes in this win will now
> abort,
>
> [node302:3689] ***and potentially your MPI job)
>
>
>
> I’m seeing similar failures for mvapich2-2.1 and mpich-3.2. Does anyone
> know if this stuff is suppose to work? I’ve had pretty good luck using the
> original RMA calls (MPI_Put, MPI_Get and MPI_Accumulate) with
> MPI_Lock/MPI_Unlock but the request-based calls are mostly a complete
> failure.
>
>
>
> Bruce Palmer
>
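
The error itself is about synchronization rather than the request-based calls:
MPI_Win_flush_all is only defined inside a passive-target access epoch
(MPI_Win_lock or MPI_Win_lock_all), which the program above never opens, and
that is most likely what MPI_ERR_RMA_SYNC is complaining about.  A minimal
sketch of the intended flow (targets and sizes are illustrative, not a drop-in
patch for the program above):

#include <mpi.h>

int main(int argc, char *argv[])
{
  int bytes = 4096, me, np, val;
  void *buf;
  MPI_Win win;
  MPI_Request req;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &me);
  MPI_Comm_size(MPI_COMM_WORLD, &np);

  MPI_Alloc_mem(bytes, MPI_INFO_NULL, &buf);
  MPI_Win_create(buf, bytes, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

  /* flush and flush_all are valid only while this epoch is open */
  MPI_Win_lock_all(MPI_MODE_NOCHECK, win);

  val = me;
  MPI_Rput(&val, 1, MPI_INT, (me + 1) % np, 0, 1, MPI_INT, win, &req);
  MPI_Wait(&req, MPI_STATUS_IGNORE);   /* local buffer reusable here     */
  MPI_Win_flush_all(win);              /* operations complete at targets */

  MPI_Win_unlock_all(win);
  MPI_Win_free(&win);
  MPI_Free_mem(buf);
  MPI_Finalize();
  return 0;
}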


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] statically linked OpenMPI 1.10.1 with PGI compilers

2015-12-31 Thread Jeff Hammond
Try using the same LDFLAGS for PGI. I think you got exactly what you asked
for from PGI when you used -Bstatic_pgi.

I'm not sure what value there is to having mpirun be a static binary, other
than enabling users to be ignorant of how LD_LIBRARY_PATH works and wasting
space in your filesystem. You should instead consider rpath.

Jeff

On Thursday, December 31, 2015, Ilias Miroslav 
wrote:

> Dear experts,
>
>
> while I have succeeded to build fully statically linked OpenMPI with Intel
> compilers:
>
>
> ./configure --prefix=/home/ilias/bin/openmpi-1.10.1_intel_static
> --without-memory-manager CXX=icpc CC=icc F77=ifort FC=ifort
> LDFLAGS=--static --disable-shared --enable-static
>
>
> il...@grid.ui.savba.sk:~/bin/openmpi-1.10.1_intel_static/bin/.ldd mpif90;
> ldd mpicc; ldd mpirun
> not a dynamic executable
> not a dynamic executable
> not a dynamic executable
>
> ​
> I have not succeeded with PGI compilers:
>
>   $ ./configure --prefix=/home/ilias/bin/openmpi-1.10.1_pgi_static
> CXX=pgCC CC=pgcc F77=pgf77 FC=pgf90 CPP=cpp LDFLAGS=-Bstatic_pgi
> --disable-shared --enable-static --without-memory-manager
>
> il...@grid.ui.savba.sk:~/bin/openmpi-1.10.1_pgi_static/bin/.ldd mpif90
> linux-vdso.so.1 =>  (0x7fffc75da000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7f2f0820e000)
> libm.so.6 => /lib64/libm.so.6 (0x7f2f07f89000)
> libnuma.so => /opt/pgi/linux86-64/13.10/lib/libnuma.so
> (0x7f2f07e88000)
> librt.so.1 => /lib64/librt.so.1 (0x7f2f07c8)
> libutil.so.1 => /lib64/libutil.so.1 (0x7f2f07a7c000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7f2f0785f000)
> libc.so.6 => /lib64/libc.so.6 (0x7f2f074cb000)
> /lib64/ld-linux-x86-64.so.2 (0x7f2f0842a000)
> il...@grid.ui.savba.sk:~/bin/openmpi-1.10.1_pgi_static/bin/.ldd mpirun
> linux-vdso.so.1 =>  (0x7fffe75ed000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7f4f264b3000)
> libm.so.6 => /lib64/libm.so.6 (0x7f4f2622e000)
> libnuma.so => /opt/pgi/linux86-64/13.10/lib/libnuma.so
> (0x7f4f2612d000)
> librt.so.1 => /lib64/librt.so.1 (0x7f4f25f25000)
> libutil.so.1 => /lib64/libutil.so.1 (0x7f4f25d21000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7f4f25b04000)
> libc.so.6 => /lib64/libc.so.6 (0x7f4f2577)
> /lib64/ld-linux-x86-64.so.2 (0x7f4f266cf000)
> il...@grid.ui.savba.sk:~/bin/openmpi-1.10.1_pgi_static/bin/.
>
>
> Any help, please ?
>
> ​Miro
>
>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] statically linked OpenMPI 1.10.1 with PGI compilers

2015-12-31 Thread Jeff Hammond
I have no idea what ESMF is and am too lazy to google it, but I would consider 
a failure to build with dynamic OpenMPI libs to be a bug in ESMF, OpenMPI, the 
Intel compiler or the linker and devote the least amount of time possible to 
mitigating it. I guess that static linking is the easiest fix, but you should 
nonetheless be unforgiving of whoever broke your user experience. 

Jeff

Sent from my iPhone

> On Dec 31, 2015, at 3:10 PM, Matt Thompson  wrote:
> 
>> On Thu, Dec 31, 2015 at 4:37 PM, Jeff Hammond  wrote:
>> Try using the same LDFLAGS for PGI. I think you got exactly what you asked 
>> for from PGI when you used -Bstatic_pgi. 
>> 
>> I'm not sure what value there is to having mpirun be a static binary, other 
>> than enabling users to be ignorant of how LD_LIBRARY_PATH works and 
>> wasting space in your filesystem. You should instead consider rpath. 
>  
> Jeff,
> 
> I found one excuse. On a desktop of mine I build Open MPI with gfortran, PGI, 
> and Intel. It turns out if I built Intel Fortran + Open MPI with 
> --enable-shared, it would not compile ESMF correctly. I'm going to try and 
> revisit it next week because I want Intel OpenMPI as shared so I can easily 
> use Allinea MAP. 
> 
> I'll try and make a good report for you/ESMF.
> 
> -- 
> Matt Thompson
> Man Among Men
> Fulcrum of History
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2015/12/28210.php


Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Jeff Hammond
t; -map-by
>>>>>> > ppr:2:socket ./hello-hybrid.x | sort -g -k 18
>>>>>> > Hello from thread 0 out of 7 from process 0 out of 4 on borgo035 on
>>>>>> CPU 0
>>>>>> > Hello from thread 0 out of 7 from process 1 out of 4 on borgo035 on
>>>>>> CPU 0
>>>>>> > Hello from thread 1 out of 7 from process 0 out of 4 on borgo035 on
>>>>>> CPU 1
>>>>>> > Hello from thread 1 out of 7 from process 1 out of 4 on borgo035 on
>>>>>> CPU 1
>>>>>> > Hello from thread 2 out of 7 from process 0 out of 4 on borgo035 on
>>>>>> CPU 2
>>>>>> > Hello from thread 2 out of 7 from process 1 out of 4 on borgo035 on
>>>>>> CPU 2
>>>>>> > Hello from thread 3 out of 7 from process 0 out of 4 on borgo035 on
>>>>>> CPU 3
>>>>>> > Hello from thread 3 out of 7 from process 1 out of 4 on borgo035 on
>>>>>> CPU 3
>>>>>> > Hello from thread 4 out of 7 from process 0 out of 4 on borgo035 on
>>>>>> CPU 4
>>>>>> > Hello from thread 4 out of 7 from process 1 out of 4 on borgo035 on
>>>>>> CPU 4
>>>>>> > Hello from thread 5 out of 7 from process 0 out of 4 on borgo035 on
>>>>>> CPU 5
>>>>>> > Hello from thread 5 out of 7 from process 1 out of 4 on borgo035 on
>>>>>> CPU 5
>>>>>> > Hello from thread 6 out of 7 from process 0 out of 4 on borgo035 on
>>>>>> CPU 6
>>>>>> > Hello from thread 6 out of 7 from process 1 out of 4 on borgo035 on
>>>>>> CPU 6
>>>>>> > Hello from thread 0 out of 7 from process 2 out of 4 on borgo035 on
>>>>>> CPU 14
>>>>>> > Hello from thread 0 out of 7 from process 3 out of 4 on borgo035 on
>>>>>> CPU 14
>>>>>> > Hello from thread 1 out of 7 from process 2 out of 4 on borgo035 on
>>>>>> CPU 15
>>>>>> > Hello from thread 1 out of 7 from process 3 out of 4 on borgo035 on
>>>>>> CPU 15
>>>>>> > Hello from thread 2 out of 7 from process 2 out of 4 on borgo035 on
>>>>>> CPU 16
>>>>>> > Hello from thread 2 out of 7 from process 3 out of 4 on borgo035 on
>>>>>> CPU 16
>>>>>> > Hello from thread 3 out of 7 from process 2 out of 4 on borgo035 on
>>>>>> CPU 17
>>>>>> > Hello from thread 3 out of 7 from process 3 out of 4 on borgo035 on
>>>>>> CPU 17
>>>>>> > Hello from thread 4 out of 7 from process 2 out of 4 on borgo035 on
>>>>>> CPU 18
>>>>>> > Hello from thread 4 out of 7 from process 3 out of 4 on borgo035 on
>>>>>> CPU 18
>>>>>> > Hello from thread 5 out of 7 from process 2 out of 4 on borgo035 on
>>>>>> CPU 19
>>>>>> > Hello from thread 5 out of 7 from process 3 out of 4 on borgo035 on
>>>>>> CPU 19
>>>>>> > Hello from thread 6 out of 7 from process 2 out of 4 on borgo035 on
>>>>>> CPU 20
>>>>>> > Hello from thread 6 out of 7 from process 3 out of 4 on borgo035 on
>>>>>> CPU 20
>>>>>> >
>>>>>> > Obviously not right. Any ideas on how to help me learn? The man
>>>>>> mpirun page
>>>>>> > is a bit formidable in the pinning part, so maybe I've missed an
>>>>>> obvious
>>>>>> > answer.
>>>>>> >
>>>>>> > Matt
>>>>>> > --
>>>>>> > Matt Thompson
>>>>>> >
>>>>>> > Man Among Men
>>>>>> > Fulcrum of History
>>>>>> >
>>>>>> >
>>>>>> > ___
>>>>>> > users mailing list
>>>>>> > us...@open-mpi.org
>>>>>> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>>> > Link to this post:
>>>>>> > http://www.open-mpi.org/community/lists/users/2016/01/28217.php
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Erik Schnetter 
>>>>>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>>>>> ___
>>>>>> users mailing list
>>>>>> us...@open-mpi.org
>>>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>>> Link to this post:
>>>>>> http://www.open-mpi.org/community/lists/users/2016/01/28218.php
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Kind regards Nick
>>>>> ___
>>>>> users mailing list
>>>>> us...@open-mpi.org
>>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>> Link to this post:
>>>>> http://www.open-mpi.org/community/lists/users/2016/01/28219.php
>>>>>
>>>>>
>>>>>
>>>>> ___
>>>>> users mailing list
>>>>> us...@open-mpi.org
>>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>> Link to this post:
>>>>> http://www.open-mpi.org/community/lists/users/2016/01/28221.php
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Matt Thompson
>>>>
>>>> Man Among Men
>>>> Fulcrum of History
>>>>
>>>>
>>>> ___
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>> Link to this post:
>>>> http://www.open-mpi.org/community/lists/users/2016/01/28223.php
>>>>
>>>
>>>
>>>
>>> --
>>> Kind regards Nick
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> Link to this post:
>>> http://www.open-mpi.org/community/lists/users/2016/01/28224.php
>>>
>>>
>>>
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> Link to this post:
>>> http://www.open-mpi.org/community/lists/users/2016/01/28226.php
>>>
>>
>>
>>
>> --
>> Kind regards Nick
>> ___
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/01/28227.php
>>
>>
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/01/28228.php
>>
>
>
>
> --
> Kind regards Nick
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/01/28229.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-06 Thread Jeff Hammond
On Wed, Jan 6, 2016 at 4:36 PM, Matt Thompson  wrote:

> On Wed, Jan 6, 2016 at 7:20 PM, Gilles Gouaillardet 
> wrote:
>
>> FWIW,
>>
>> there has been one attempt to set the OMP_* environment variables within
>> OpenMPI, and that was aborted
>> because that caused crashes with a prominent commercial compiler.
>>
>> also, i'd like to clarify that OpenMPI does bind MPI tasks (e.g.
>> processes), and it is up to the OpenMP runtime to bind the OpenMP threads
>> to the resources made available by OpenMPI to the MPI task.
>>
>> in this case, that means OpenMPI will bind a MPI tasks to 7 cores (for
>> example cores 7 to 13), and it is up to the OpenMP runtime to bind each 7
>> OpenMP threads to one core previously allocated by OpenMPI
>> (for example, OMP thread 0 to core 7, OMP thread 1 to core 8, ...)
>>
>
> Indeed. Hybrid programming is a two-step tango. The harder task (in some
> ways) is the placing MPI processes where I want. With omplace I could just
> force things (though probably not with Open MPI...haven't tried it yet),
> but I'd rather have a more "formulaic" way to place processes since then
> you can script it. Now that I know about the ppr: syntax, I can see it'll
> be quite useful!
>
> The other task is to get the OpenMP threads in the "right way". I was
> pretty sure KMP_AFFINITY=compact was correct (worked once...and, yeah,
> using Intel at present. Figured start there, then expand to figure out GCC
> and PGI). I'll do some experimenting with the OMP_* versions as a
> more-respected standard is always a good thing.
>
> For others with inquiries into this, I highly recommend this page I found
> after my query was answered here:
>
>
> https://www.olcf.ornl.gov/kb_articles/parallel-job-execution-on-commodity-clusters/
>
> At this point, I'm thinking I should start up an MPI+OpenMP wiki to map
> all the combinations of compiler+mpistack.
>
>
Just using Intel compilers, OpenMP and MPI.  Problem solved :-)

(I work for Intel and the previous statement should be interpreted as a
joke, although Intel OpenMP and MPI interoperate as well as any
implementations of which I am aware.)


> Or pray the MPI Forum and OpenMP combine and I can just look in a
> Standard. :D
>
>
echo "" > $OPENMP_STANDARD # critical step
cat $MPI_STANDARD $OPENMP_STANDARD > $HPC_STANDARD

More seriously, hybrid programming sucks.  Just use MPI-3 and exploit your
coherence domain via MPI_Win_allocate_shared.  That way, you won't have to
mix runtimes, suffer mercilessly because of opaque race conditions in
thread-unsafe libraries, or reason about a bolt-on pseudo-language that
replicates features found in ISO languages without a well-defined
interoperability model.  For example, what is the interoperability between
OpenMP 4.5 threads/atomics and C++11 threads/atomics, C11 threads/atomics,
or Fortran 2008 concurrency features (e.g. coarrays)?  Nobody knows outside
of "don't do that".  How about OpenMP parallel regions inside code
that runs in a POSIX, C11 or C++11 thread?  Good luck.  I've been trying to
solve the latter problem for years and have made very little progress as
far as the spec goes.
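
To make that concrete, here is a minimal sketch of the MPI-3 shared-memory
pattern (the names and the one-double-per-rank payload are illustrative): split
off the node-local communicator, allocate a shared window on it, and then read
your neighbors' segments with plain loads instead of MPI_Get.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  MPI_Comm node;
  MPI_Win win;
  double *mine, *next;
  MPI_Aint sz;
  int nrank, nsize, disp;

  MPI_Init(&argc, &argv);

  /* one communicator per coherence domain (shared-memory node) */
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &node);
  MPI_Comm_rank(node, &nrank);
  MPI_Comm_size(node, &nsize);

  /* every rank contributes one double to a node-wide shared window */
  MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                          MPI_INFO_NULL, node, &mine, &win);

  MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
  *mine = (double)nrank;
  MPI_Win_sync(win);            /* publish the local store           */
  MPI_Barrier(node);            /* wait until everyone has published */
  MPI_Win_sync(win);

  /* plain load from the neighbor's segment, no MPI_Get required */
  MPI_Win_shared_query(win, (nrank + 1) % nsize, &sz, &disp, &next);
  printf("rank %d sees neighbor value %.0f\n", nrank, *next);

  MPI_Win_unlock_all(win);
  MPI_Win_free(&win);
  MPI_Comm_free(&node);
  MPI_Finalize();
  return 0;
}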

Related work:
- http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf
-
http://www.orau.gov/hpcor2015/whitepapers/Exascale_Computing_without_Threads-Barry_Smith.pdf

Do not feed the trolls ;-)

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Put/Get semantics

2016-01-08 Thread Jeff Hammond
program
> Bruce Palmer
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
http://www.open-mpi.org/community/lists/users/2016/01/28216.php




--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] MPI_DATATYPE_NULL and MPI_AlltoallW

2016-01-12 Thread Jeff Hammond
Example 4.23 of MPI 3.1 (it is hardly a new example, but may have a
different number in older versions) demonstrates the use of
(buffer=NULL,count=0,type=MPI_DATATYPE_NULL).  While examples in the MPI
standard are not normative text, this is certainly a valid use of MPI.  I
can't find a citation where it says explicitly that this is correct, but it
follows logically from other text.

The MPICH macro MPIR_ERRTEST_USERBUFFER that is used through the code to
test for valid user buffers begins with "if (count > 0..." and thus does
concern itself with the type or buffer pointer when count=0.  Furthermore,
this macro is redundantly protected with a count>0 check when used in
MPI_Alltoallw (and other collectives).
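
In sketch form, that guard amounts to the following (a paraphrase, not MPICH's
actual source; datatype_is_committed is a made-up helper):

/* zero-count arguments skip the buffer and datatype checks entirely */
if (count > 0) {
    if (buf == NULL)
        return MPI_ERR_BUFFER;
    if (!datatype_is_committed(dtype))
        return MPI_ERR_TYPE;
}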

Best,

Jeff

On Tue, Jan 12, 2016 at 4:18 PM, Gilles Gouaillardet 
wrote:
>
> Hi Jim,
>
> can you please confirm my understanding is correct :
>
> - OpenMPI does *not* accept MPI_DATATYPE_NULL as an input of MPI_Alltoallw
> - mpich does accept MPI_DATATYPE_NULL as an input of MPI_Alltoallw *if*
the corresponding count *is* zero
> - mpich does *not* accept MPI_DATATYPE_NULL as an input of MPI_Alltoallw
*if* the corresponding count is *not* zero
>
> So you are considering as a bug the fact OpenMPI does not accept
MPI_DATATYPE_NULL *with* a zero count.
>
> am i correct ?
>
> Cheers,
>
> Gilles
>
>
> On 1/13/2016 8:27 AM, Jim Edwards wrote:
>
> Hi,
>
> I am using OpenMPI-1.8.3 built with gcc 4.8.3
> and I am using an MPI_Alltoallw call to perform
> an all to some (or some to all) communication.
>
> In the case in which my task is not sending (or receiving) any data I set
the
> datatype for that send or recv buffer to MPI_DATATYPE_NULL - this
> works fine with other mpi libraries but fails in openmpi.   If I set
> the datatype to something else say MPI_CHAR - it works fine.   I think
> that this is a bug in open-mpi - would you agree?
>
>
>
>
> --
> Jim Edwards
>
> CESM Software Engineer
> National Center for Atmospheric Research
> Boulder, CO
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
http://www.open-mpi.org/community/lists/users/2016/01/28249.php
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
http://www.open-mpi.org/community/lists/users/2016/01/28250.php




--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] MPI_DATATYPE_NULL and MPI_AlltoallW

2016-01-12 Thread Jeff Hammond
Consider MPI_Get_accumulate with op=MPI_NO_OP, which is used to achieve
atomic Get.  Obviously, one does not want to allocate and describe a source
buffer that will not be touched by this.  This is a case like MPI_Alltoallw
where (NULL,0,MPI_DATATYPE_NULL) needs to work at participating processes.
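
In code, the idiom is roughly this (a sketch only; it assumes an open
passive-target epoch, e.g. MPI_Win_lock_all, on win):

#include <mpi.h>

/* Atomic read of one double from 'target' at displacement 'disp'.  The
 * origin triple (NULL, 0, MPI_DATATYPE_NULL) contributes nothing. */
double atomic_get_double(MPI_Win win, int target, MPI_Aint disp)
{
    double result;
    MPI_Get_accumulate(NULL, 0, MPI_DATATYPE_NULL,   /* origin: unused  */
                       &result, 1, MPI_DOUBLE,       /* result buffer   */
                       target, disp, 1, MPI_DOUBLE,  /* target location */
                       MPI_NO_OP, win);
    MPI_Win_flush(target, win);                      /* complete the read */
    return result;
}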

Best,

Jeff

On Tue, Jan 12, 2016 at 4:46 PM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> Thanks Jeff,
>
> i could not find anything in the standard that says this is an invalid
> usage ... so i can only agree this is a bug.
>
> fwiw, example 4.23 is working fine with OpenMPI
> but that is a different case : with MPI_Gather and friends, recv stuff
> is irrelevant on non root task.
> in the case of MPI_Alltoallw and friends, no parameter is ignored.
>
> fortunately, the fix is pretty trivial, so i will make a PR from now
>
> Cheers,
>
> Gilles
>
>
> On Wed, Jan 13, 2016 at 9:37 AM, Jeff Hammond 
> wrote:
> > Example 4.23 of MPI 3.1 (it is hardly a new example, but may have a
> > different number in older versions) demonstrates the use of
> > (buffer=NULL,count=0,type=MPI_DATATYPE_NULL).  While examples in the MPI
> > standard are not normative text, this is certainly a valid use of MPI.  I
> > can't find a citation where it says explicitly that this is correct, but
> it
> > follows logically from other text.
> >
> > The MPICH macro MPIR_ERRTEST_USERBUFFER that is used through the code to
> > test for valid user buffers begins with "if (count > 0..." and thus does
> > concern itself with the type or buffer pointer when count=0.
> Furthermore,
> > this macro is redundantly protected with a count>0 check when used in
> > MPI_Alltoallw (and other collectives).
> >
> > Best,
> >
> > Jeff
> >
> >
> > On Tue, Jan 12, 2016 at 4:18 PM, Gilles Gouaillardet 
> > wrote:
> >>
> >> Hi Jim,
> >>
> >> can you please confirm my understanding is correct :
> >>
> >> - OpenMPI does *not* accept MPI_DATATYPE_NULL as an input of
> MPI_Alltoallw
> >> - mpich does accept MPI_DATATYPE_NULL as an input of MPI_Alltoallw *if*
> >> the corresponding count *is* zero
> >> - mpich does *not* accept MPI_DATATYPE_NULL as an input of MPI_Alltoallw
> >> *if* the corresponding count is *not* zero
> >>
> >> So you are considering as a bug the fact OpenMPI does not accept
> >> MPI_DATATYPE_NULL *with* a zero count.
> >>
> >> am i correct ?
> >>
> >> Cheers,
> >>
> >> Gilles
> >>
> >>
> >> On 1/13/2016 8:27 AM, Jim Edwards wrote:
> >>
> >> Hi,
> >>
> >> I am using OpenMPI-1.8.3 built with gcc 4.8.3
> >> and I am using an MPI_Alltoallw call to perform
> >> an all to some (or some to all) communication.
> >>
> >> In the case in which my task is not sending (or receiving) any data I
> set
> >> the
> >> datatype for that send or recv buffer to MPI_DATATYPE_NULL - this
> >> works fine with other mpi libraries but fails in openmpi.   If I set
> >> the datatype to something else say MPI_CHAR - it works fine.   I think
> >> that this is a bug in open-mpi - would you agree?
> >>
> >>
> >>
> >>
> >> --
> >> Jim Edwards
> >>
> >> CESM Software Engineer
> >> National Center for Atmospheric Research
> >> Boulder, CO
> >>
> >>
> >> ___
> >> users mailing list
> >> us...@open-mpi.org
> >> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> >> Link to this post:
> >> http://www.open-mpi.org/community/lists/users/2016/01/28249.php
> >>
> >>
> >>
> >> ___
> >> users mailing list
> >> us...@open-mpi.org
> >> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> >> Link to this post:
> >> http://www.open-mpi.org/community/lists/users/2016/01/28250.php
> >
> >
> >
> >
> > --
> > Jeff Hammond
> > jeff.scie...@gmail.com
> > http://jeffhammond.github.io/
> >
> > ___
> > users mailing list
> > us...@open-mpi.org
> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > Link to this post:
> > http://www.open-mpi.org/community/lists/users/2016/01/28251.php
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/01/28252.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] MPI_DATATYPE_NULL and MPI_AlltoallW

2016-01-12 Thread Jeff Hammond
That's ridiculous.  You can have 0 of anything.  There is absolutely no
logical reason why MPI_DATATYPE_NULL should not be valid with count=0.  Are
you suggesting that one cannot pass buffer=NULL when count=0 either?

Jeff

On Tue, Jan 12, 2016 at 6:06 PM, George Bosilca  wrote:

> As JH mentioned the examples are not normative. The type MPI_DATATYPE_NULL
> is not part of the MPI predefined datatypes, and as such is not expected to
> be a commited datatype, thus improper for communications (even when the
> count is 0).
>
>   George.
>
>
> On Tue, Jan 12, 2016 at 8:25 PM, Jim Edwards  wrote:
>
>> ​Hi Gilles,
>>
>> I think that your conversation with Jeff pretty much covered it but
>> your understanding of my original problem is correct.
>> Thanks for the prompt response and the PR.
>>
>> On Tue, Jan 12, 2016 at 5:59 PM, Jeff Hammond 
>> wrote:
>>
>>> Consider MPI_Get_accumulate with op=MPI_NO_OP, which is used to achieve
>>> atomic Get.  Obviously, one does not want to allocate and describe a source
>>> buffer that will not be touched by this.  This is a case like MPI_Alltoallw
>>> where (NULL,0,MPI_DATATYPE_NULL) needs to work at participating processes.
>>>
>>> Best,
>>>
>>> Jeff
>>>
>>> On Tue, Jan 12, 2016 at 4:46 PM, Gilles Gouaillardet <
>>> gilles.gouaillar...@gmail.com> wrote:
>>>
>>>> Thanks Jeff,
>>>>
>>>> i could not find anything in the standard that says this is an invalid
>>>> usage ... so i can only agree this is a bug.
>>>>
>>>> fwiw, example 4.23 is working fine with OpenMPI
>>>> but that is a different case : with MPI_Gather and friends, recv stuff
>>>> is irrelevant on non root task.
>>>> in the case of MPI_Alltoallw and friends, no parameter is ignored.
>>>>
>>>> fortunately, the fix is pretty trivial, so i will make a PR from now
>>>>
>>>> Cheers,
>>>>
>>>> Gilles
>>>>
>>>>
>>>> On Wed, Jan 13, 2016 at 9:37 AM, Jeff Hammond 
>>>> wrote:
>>>> > Example 4.23 of MPI 3.1 (it is hardly a new example, but may have a
>>>> > different number in older versions) demonstrates the use of
>>>> > (buffer=NULL,count=0,type=MPI_DATATYPE_NULL).  While examples in the
>>>> MPI
>>>> > standard are not normative text, this is certainly a valid use of
>>>> MPI.  I
>>>> > can't find a citation where it says explicitly that this is correct,
>>>> but it
>>>> > follows logically from other text.
>>>> >
>>>> > The MPICH macro MPIR_ERRTEST_USERBUFFER that is used through the code
>>>> to
>>>> > test for valid user buffers begins with "if (count > 0..." and thus
>>>> does
>>>> > concern itself with the type or buffer pointer when count=0.
>>>> Furthermore,
>>>> > this macro is redundantly protected with a count>0 check when used in
>>>> > MPI_Alltoallw (and other collectives).
>>>> >
>>>> > Best,
>>>> >
>>>> > Jeff
>>>> >
>>>> >
>>>> > On Tue, Jan 12, 2016 at 4:18 PM, Gilles Gouaillardet <
>>>> gil...@rist.or.jp>
>>>> > wrote:
>>>> >>
>>>> >> Hi Jim,
>>>> >>
>>>> >> can you please confirm my understanding is correct :
>>>> >>
>>>> >> - OpenMPI does *not* accept MPI_DATATYPE_NULL as an input of
>>>> MPI_Alltoallw
>>>> >> - mpich does accept MPI_DATATYPE_NULL as an input of MPI_Alltoallw
>>>> *if*
>>>> >> the corresponding count *is* zero
>>>> >> - mpich does *not* accept MPI_DATATYPE_NULL as an input of
>>>> MPI_Alltoallw
>>>> >> *if* the corresponding count is *not* zero
>>>> >>
>>>> >> So you are considering as a bug the fact OpenMPI does not accept
>>>> >> MPI_DATATYPE_NULL *with* a zero count.
>>>> >>
>>>> >> am i correct ?
>>>> >>
>>>> >> Cheers,
>>>> >>
>>>> >> Gilles
>>>> >>
>>>> >>
>>>> >> On 1/13/2016 8:27 AM, Jim Edwards wrote:
>>>> >>
>>>> >> Hi,
>>>> >>
>>>

Re: [OMPI users] MPI_DATATYPE_NULL and MPI_AlltoallW

2016-01-13 Thread Jeff Hammond
There's a thread about this on the MPI Forum mailing list already ;-)

Jeff

On Tuesday, January 12, 2016, Gilles Gouaillardet  wrote:

> Jim,
>
> if i understand correctly, George point is that OpenMPI is currently
> correct with respect to the MPI standard :
> MPI_DATATYPE_NULL is *not* a predefined datatype, hence it is not
> (expected to be) a committed datatype,
> and hence it cannot be used in MPI_Alltoallw (regardless the corresponding
> count is zero).
>
> an other way to put this is mpich could/should have failed and/or you were
> lucky it worked.
>
> George and Jeff,
>
> do you feel any need to ask MPI Forum to clarify this point ?
>
>
> Cheers,
>
> Gilles
>
> On 1/13/2016 12:14 PM, Jim Edwards wrote:
>
> Sorry there was a mistake in that code,
> stypes and rtypes should be of type MPI_Datatype not integer
> however the result is the same.
>
> *** An error occurred in MPI_Alltoallw
>
> *** reported by process [204406785,1]
>
> *** on communicator MPI_COMM_WORLD
>
> *** MPI_ERR_TYPE: invalid datatype
>
>
>
> On Tue, Jan 12, 2016 at 7:55 PM, Jim Edwards  > wrote:
>
>> Maybe the example is too simple.  Here is another one which
>> when run on two tasks sends two integers from each task to
>> task 0.   Task 1 receives nothing.  This works with mpich and impi
>> but fails with openmpi.
>>
>> #include <mpi.h>
>> #include <stdio.h>
>>
>> void my_mpi_test(int rank, int ntasks)
>> {
>>   MPI_Datatype stype, rtype;
>>   int sbuf[2];
>>   int rbuf[4];
>>
>>   int slen[ntasks], sdisp[ntasks], stypes[ntasks], rlen[ntasks],
>> rdisp[ntasks], rtypes[ntasks];
>>   sbuf[0]=rank;
>>   sbuf[1]=ntasks+rank;
>>   slen[0]=2;
>>   slen[1]=0;
>>   stypes[0]=MPI_INT;
>>   stypes[1]=MPI_DATATYPE_NULL;
>>   sdisp[0]=0;
>>   sdisp[1]=4;
>>   if(rank==0){
>> rlen[0]=2;
>> rlen[1]=2;
>> rtypes[0]=MPI_INT;
>> rtypes[1]=MPI_INT;
>> rdisp[0]=0;
>> rdisp[1]=8;
>>
>>   }else{
>> rlen[0]=0;
>> rlen[1]=0;
>> rtypes[0]=MPI_DATATYPE_NULL;
>> rtypes[1]=MPI_DATATYPE_NULL;
>> rdisp[0]=0;
>> rdisp[1]=0;
>>   }
>>
>>   MPI_Alltoallw(sbuf, slen, sdisp, stypes, rbuf, rlen, rdisp, rtypes,
>> MPI_COMM_WORLD);
>>   if(rank==0){
>> printf("%d %d %d %d\n",rbuf[0],rbuf[1],rbuf[2],rbuf[3]);
>>   }
>> }
>>
>> int main(int argc, char **argv)
>> {
>>   int rank, ntasks;
>>
>>   MPI_Init(&argc, &argv);
>>
>>   MPI_Comm_rank(MPI_COMM_WORLD,&rank);
>>   MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
>>
>>   printf("rank %d ntasks %d\n",rank, ntasks);
>>
>>   my_mpi_test(rank,ntasks);
>>
>>
>>   MPI_Finalize();
>> }
>>
>>
>>
>>
>
>
> --
> Jim Edwards
>
> CESM Software Engineer
> National Center for Atmospheric Research
> Boulder, CO
>
>
> ___
> users mailing listus...@open-mpi.org 
> 
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2016/01/28258.php
>
>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] MPI_DATATYPE_NULL and MPI_AlltoallW

2016-01-13 Thread Jeff Hammond
Bill Gropp's statement on the Forum list is clear: null handles cannot be
used unless explicitly permitted.  Unfortunately, there is no exception for
MPI_DATATYPE_NULL when count=0.  Hopefully, we will add one in MPI-4.

While your usage model is perfectly reasonable to me and something that I
would do in the same position, you need to use e.g. MPI_BYTE instead to
comply with the current MPI standard.
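
Concretely, the conforming way to say "this peer contributes nothing" keeps the
count at zero but uses a predefined datatype, e.g. (illustrative helper, not
part of any API):

#include <mpi.h>

/* mark peer p as "nothing exchanged" in the Alltoallw argument arrays */
static void mark_empty_peer(int p, int *counts, int *displs, MPI_Datatype *types)
{
    counts[p] = 0;
    displs[p] = 0;
    types[p]  = MPI_BYTE;   /* any committed type; not MPI_DATATYPE_NULL */
}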

As to why Open-MPI wastes CPU cycles testing for datatype validity when
count=0, that is a question for someone else to answer.  Implementations
have no obligation to enforce every letter of the MPI standard.

Jeff

On Wed, Jan 13, 2016 at 6:11 AM, Jim Edwards  wrote:

> It seems to me that when there is a question of interpretation of the
> standard one should ask the consequences of each potential interpretation.
>   It just makes sense that MPI_DATATYPE_NULL should be allowed when the
> count is 0, otherwise you need to insert some random datatype just to fill
> the array.
>
> Can you make any argument in support of not allowing it (other than that's
> the way you've interpreted the standard)?
>
> On Tue, Jan 12, 2016 at 10:44 PM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> Thanks Jeff,
>>
>> i found it at http://lists.mpi-forum.org/mpi-forum/2016/01/3152.php
>>
>> i'd like to re-iterate what i wrote earlier about example 4.23
>> MPI_DATATYPE_NULL is used as a recv type on non root tasks,
>> and per the mpi 3.1 standard, recv type is "significant only at root"
>>
>> in the case of MPI_Gatherv, MPI_DATATYPE_NULL is *not* significant,
>> but in the case of MPI_Alltoallw, it *is* significant.
>>
>> as far as i am concerned, and to say the least, these are two distinct
>> shades of grey.
>>
>>
>> IMHO, it would be more intuitive if the use of MPI_DATATYPE_NULL was
>> allowed with a zero count, and in both MPI_Alltoallw *and*
>> MPI_Sendrecv.
>>
>>
>> i still believe George interpretation is the correct one, and Bill
>> Gropp agreed with him.
>>
>>
>> and btw, is example 4.23 correct ?
>> /* fwiw, i did copy/paste it and found several missing local variable
>> myrank, i, and comm
>> and i'd rather have MPI_COMM_WORLD than comm */
>>
>> and what if recvcount is negative on non root task ?
>> should it be an error (negative int) or not (not significant value) ?
>>
>> Cheers,
>>
>> Gilles
>>
>>
>> On Wed, Jan 13, 2016 at 2:15 PM, Jeff Hammond 
>> wrote:
>> > There's a thread about this on the MPI Forum mailing list already ;-)
>> >
>> > Jeff
>> >
>> >
>> > On Tuesday, January 12, 2016, Gilles Gouaillardet 
>> wrote:
>> >>
>> >> Jim,
>> >>
>> >> if i understand correctly, George point is that OpenMPI is currently
>> >> correct with respect to the MPI standard :
>> >> MPI_DATATYPE_NULL is *not* a predefined datatype, hence it is not
>> >> (expected to be) a committed datatype,
>> >> and hence it cannot be used in MPI_Alltoallw (regardless the
>> corresponding
>> >> count is zero).
>> >>
>> >> an other way to put this is mpich could/should have failed and/or you
>> were
>> >> lucky it worked.
>> >>
>> >> George and Jeff,
>> >>
>> >> do you feel any need to ask MPI Forum to clarify this point ?
>> >>
>> >>
>> >> Cheers,
>> >>
>> >> Gilles
>> >>
>> >> On 1/13/2016 12:14 PM, Jim Edwards wrote:
>> >>
>> >> Sorry there was a mistake in that code,
>> >> stypes and rtypes should be of type MPI_Datatype not integer
>> >> however the result is the same.
>> >>
>> >> *** An error occurred in MPI_Alltoallw
>> >>
>> >> *** reported by process [204406785,1]
>> >>
>> >> *** on communicator MPI_COMM_WORLD
>> >>
>> >> *** MPI_ERR_TYPE: invalid datatype
>> >>
>> >>
>> >>
>> >>
>> >> On Tue, Jan 12, 2016 at 7:55 PM, Jim Edwards 
>> wrote:
>> >>>
>> >>> Maybe the example is too simple.  Here is another one which
>> >>> when run on two tasks sends two integers from each task to
>> >>> task 0.   Task 1 receives nothing.  This works with mpich and impi
>> >>> but fails with openmpi.
>> >>>
>> >>> #include 
>> >>> #include 
>> >>

Re: [OMPI users] MPI_DATATYPE_NULL and MPI_AlltoallW

2016-01-14 Thread Jeff Hammond
On Thu, Jan 14, 2016 at 3:05 PM, George Bosilca  wrote:

>
> On Jan 13, 2016, at 19:57 , Jim Edwards  wrote:
>
> George and all.
>
> Back to OpenMPI, now the question is :
>
> “Is OpenMPI going to be updated (and when) in order to support an
> intuitive and user friendly feature, that is currently explicitly
> prohibited by the MPI 3.1 standard, but that might be part of the MPI-4
> standard and that we already know is not backward compatible (*) ?
>
>
> If the MPI Forum agrees to amend the standard to allow this [currently
> forbidden] behavior, we will be bound to adapt. Meanwhile, I would assume
> that with regard to this particular question the MPICH implementation is
> far too user-friendly and only loosely standard compliant.
>


The MPI standard does not require implementations to catch ANY invalid
usage errors, so MPICH compliance is in no way affected by allowing invalid
usage.  The error in question is completely innocuous and there is no value
whatsoever to crashing a user application over it.

If you want to hop on the standard-compliance soapbox, let's start with
MPI-2 features like MPI_THREAD_MULTIPLE and RMA ;-)


> (*) fwiw, mpich already “implements" this, so backward incompatibility
> would only affect tools currently working with OpenMPI but not with mpich."
>
> i am a pragmatic guy, so i'd rather go for it, but here is what i am gonna
> do :
>
> unless George vetoes that, i will add this topic to the weekly call
> agenda, and wait for the community to make a decision
> (e.g. go / no go, and milestone if needed 1.10 series ? 2.0 ? 2.1 ? master
> only ?)
>
>
> A pragmatic user will certainly appreciate in all circumstances to type
> less characters (MPI_BYTE) instead of MPI_DATATYPE_NULL when used in
> combination with a statically known count of 0.
>
>
What is the type of NULL and nullptr?  Both because of static analysis and
inferring MPI datatypes from pointer types (as done in C++ codes), I'm not
sure it's a good idea to say that null buffers have a well-defined MPI
type.

Jeff


> Cheers,
>   George.
>
>
>
> Cheers,
>
> Gilles
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/01/28277.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Using OpenMPI Thread Multiple mode

2016-01-19 Thread Jeff Hammond
You may wish to subscribe to https://github.com/open-mpi/ompi/issues/157.

Jeff

On Tue, Jan 19, 2016 at 4:13 PM, Udayanga Wickramasinghe <
uswic...@umail.iu.edu> wrote:

> Hi devs,
> I am using THREAD_MULTIPLE in openmpi version 1.8.4. However I
> occasionally see following warning and my application gets hanged up
> intermittently. Does this mean thread multiple mode is not supported in
> 1.8.4 ? Or does openmpi has a version that fully supports this ?
>
> opal_libevent2021_event_base_loop: reentrant invocation;  Only one
> event_base_loop can run on each event_base at once.
>
> Thanks and Regards
> Udayanga
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/01/28304.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Open MPI MPI-OpenMP Hybrid Binding Question

2016-01-21 Thread Jeff Hammond
On Thu, Jan 21, 2016 at 4:07 AM, Dave Love  wrote:
>
> Jeff Hammond  writes:
>
> > Just using Intel compilers, OpenMP and MPI.  Problem solved :-)
> >
> > (I work for Intel and the previous statement should be interpreted as a
> > joke,
>
> Good!
>
> > although Intel OpenMP and MPI interoperate as well as any
> > implementations of which I am aware.)
>
> Better than MPC (not that I've used it)?
>

MPC is a great idea, although it poses some challenges w.r.t. globals and
such (however, see below).  Unfortunately, since "MPC conforms to the POSIX
Threads, OpenMP 3.1 and MPI 1.3 standards" (
http://mpc.hpcframework.paratools.com/), it does not do me much good (I'm a
heavy-duty RMA user).

For those that are interested in MPC, the Intel compilers (on Linux)
support an option to change how TLS works so that MPC works.

-f[no-]mpc_privatize
  Enables privatization of all static data for the MPC
  unified parallel runtime.  This will cause calls to
  extended thread local storage resolution run-time routines
  which are not supported on standard linux distributions.
  This option is only usable in conjunction with the MPC
  unified parallel runtime.  The default is -fno-mpc-privatize.

>
> For what it's worth, you have to worry about the batch resource manager
> as well as the MPI, and you may need to ensure you're allocated complete
> nodes.  There are known problems with IMPI and SGE specifically, and
> several times I've made users a lot happier with OMPI/GCC.
>

This is likely because GCC uses one OpenMP thread when the user does not
set OMP_NUM_THREADS, whereas Intel will use one per virtual processor
(divided by MPI processes, but only if it can figure out how many).  Both
behaviors are compliant with the OpenMP standard.  GCC is doing the
conservative thing, whereas Intel is trying to maximize performance in the
case of OpenMP-only applications (more common than you think) and
MPI+OpenMP applications where Intel MPI is used.  As experienced HPC users
always set OMP_NUM_THREADS (and OMP_PROC_BIND, OMP_WAIT_POLICY or
implementation-specific equivalents) explicitly anyways, this should not be
a problem.

As for not getting complete nodes, one is either in the cloud or the shared
debug queue and performance is secondary.  But as always, one should be
able to set OMP_NUM_THREADS, OMP_PROC_BIND, OMP_WAIT_POLICY to get the
right behavior.

My limited experience with SGE has caused me to conclude that any problems
associated with SGE + $X are almost certainly the fault of SGE and not $X.

> >> Or pray the MPI Forum and OpenMP combine and I can just look in a
> >> Standard. :D
> >>
> >>
> > echo "" > $OPENMP_STANDARD # critical step
> > cat $MPI_STANDARD $OPENMP_STANDARD > $HPC_STANDARD
> >
> > More seriously, hybrid programming sucks.  Just use MPI-3 and exploit
your
> > coherence domain via MPI_Win_allocate_shared.  That way, you won't have
to
> > mix runtimes, suffer mercilessly because of opaque race conditions in
> > thread-unsafe libraries, or reason about a bolt-on pseudo-language that
> > replicates features found in ISO languages without a well-defined
> > interoperability model.
>
> Sure, but the trouble is that "everyone knows" you need the hybrid
> stuff.  Are there good examples of using MPI-3 instead/in comparison?
> I'd be particularly interested in convincing chemists, though as they
> don't believe in deadlock and won't measure things, that's probably a
> lost cause.  Not all chemists, of course.

PETSc (
http://www.orau.gov/hpcor2015/whitepapers/Exascale_Computing_without_Threads-Barry_Smith.pdf
)

Quantum chemistry or molecular dynamics?  Parts of quantum chemistry are so
flop heavy that stupid fork-join MPI+OpenMP is just fine.  I'm doing this
in NWChem coupled cluster codes.  I fork-join in every kernel even though
this is shameful, because my kernels do somewhere between 4 and 40 billion
FMAs and touch between 0.5 and 5 GB of memory.  For methods that aren't
coupled-cluster, OpenMP is not always a good solution, and certainly not
for legacy codes that aren't thread-safe.  OpenMP may be useful within a
core to exploit >1 thread per core (if necessary) and certainly "#pragma
omp simd" should be exploited when appropriate, but scaling OpenMP beyond
~4 threads in most quantum chemistry codes requires an intensive rewrite.
Because of load-balancing issues in atomic integral computations, TBB or
OpenMP tasking may be more appropriate.

If you want to have a more detailed discussion of programming models for
computational chemistry, I'd be happy to take that discussion offline.

Best,

Jeff



--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Strange behaviour OpenMPI in Fortran

2016-01-22 Thread Jeff Hammond
You will find the MPI Fortran 2008 bindings to be significantly better
w.r.t. MPI types.  See e.g. MPI 3.1 section 17.2.5 where it describes
TYPE(MPI_Status), which means that the status object is a first-class type
in the Fortran 2008 interface, rather than being an error prone INTEGER
array.

I haven't used Fortran 2008 bindings in a nontrivial way yet, but it is my
understanding that Open-MPI has a good implementation of them and has for a
relatively long time.

For multilingual MPI programmers, the Fortran 2008 bindings will be quite
easy to understand from the perspective of the C bindings, since they are
quite similar in many respects.

Jeff

On Fri, Jan 22, 2016 at 7:12 AM, Paweł Jarzębski  wrote:

> Thx a lot. I will be more careful with declaration of the MPI variables.
>
> Pawel J.
>
> W dniu 2016-01-22 o 16:06, Nick Papior pisze:
>
> The status field should be
>
> integer :: stat(MPI_STATUS_SIZE)
>
> Perhaps n is located stackwise just after the stat variable, which then
> overwrites it.
>
> 2016-01-22 15:37 GMT+01:00 Paweł Jarzębski :
>
>> Hi,
>>
>> I wrote this code:
>>
>>   program hello
>>implicit none
>>
>>include 'mpif.h'
>>integer :: rank, dest, source, tag, ierr, stat
>>integer :: n
>>integer :: taskinfo, ptr
>>
>>call MPI_INIT(ierr)
>>call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
>>
>>if(rank.eq.0) then
>> write(*,*) 'Hello'
>>
>> n = 20
>> dest = 1
>> tag = 1
>> taskinfo = n
>> call MPI_SEND(taskinfo, 1, MPI_INTEGER, dest, tag,
>>  1   MPI_COMM_WORLD, ierr)
>>
>> tag = tag + 1
>> call MPI_SEND(ptr, 1, MPI_INTEGER, dest, tag,
>>  1   MPI_COMM_WORLD, ierr)
>>
>>else
>> source = 0
>> tag = 1
>>
>> !n = 1
>> call MPI_RECV(taskinfo, 1, MPI_INTEGER, source, tag,
>>  1   MPI_COMM_WORLD, stat, ierr)
>>
>> n = taskinfo
>>
>> tag = tag + 1
>>
>> write(*,*) 'n1 ', n
>> write(*,*) 'taskinfo1 ', taskinfo
>> call MPI_RECV(ptr, 1, MPI_INTEGER, source, tag,
>>  1   MPI_COMM_WORLD, stat, ierr)
>> write(*,*) 'n2 ', n
>> write(*,*) 'taskinfo2 ', taskinfo
>>endif
>>
>>call MPI_FINALIZE(ierr)
>>   end
>>
>>
>> I supposed that it should produce this:
>>  Hello
>>  n1   20
>>  taskinfo1   20
>>  n220
>>  taskinfo2   20
>>
>> But in fact it produces this:
>>  Hello
>>  n1   20
>>  taskinfo1   20
>>  n22
>>  taskinfo2   20
>>
>> It's strange to me that variable "n" is changed after call to MPI
>> subroutine, but I dont even put it in calls to MPI.
>> If I comment line 13 with " write(*,*) 'Hello' " everything is ok. If I
>> uncomment line 30 with "n = 1", everything is ok as well.
>>
>> Could anybody explain me what is happening?
>>
>> I tested it on:
>>   1) intel fortran compiler 14.0 and openmpi 1.6.5
>>   1) intel fortran compiler 13.1.3 and openmpi 1.8.4
>>
>> Best regards,
>> Pawel J.
>>
>>
>>
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/01/28334.php
>>
>
>
>
> --
> Kind regards Nick
>
>
> ___
> users mailing listus...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2016/01/28336.php
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/01/28337.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] difference between OpenMPI - intel MPI mpi_waitall

2016-01-29 Thread Jeff Hammond
On Fri, Jan 29, 2016 at 2:45 AM, Diego Avesani 
wrote:

> Dear all,
>
> I have created a program in fortran and OpenMPI, I test it on my laptop
> and it works.
> I would like to use it on a cluster that has, unfortunately, intel MPI.
>
>
You can install any open-source MPI implementation from user space.  This
includes Open-MPI, MPICH, and MVAPICH2.  If you like Open-MPI, try this:


cd $OMPI_DIR && mkdir build && cd build && ../configure
--prefix=$HOME/ompi-install && make -j && make install

...or something like that.  I'm sure the details are properly documented
online.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] MX replacement?

2016-02-02 Thread Jeff Hammond
On Tuesday, February 2, 2016, Brice Goglin  wrote:

> Le 02/02/2016 15:21, Jeff Squyres (jsquyres) a écrit :
> > On Feb 2, 2016, at 9:00 AM, Dave Love  > wrote:
> >> Now that MX support has been dropped, is there an alternative for fast
> Ethernet?
> > There are several options for low latency ethernet, but they're all
> vendor-based solutions (e.g., my company's usNIC solution).
> >
> > Note that MX support was dropped mainly due to lack of someone to
> maintain it.  If someone wants to step up to maintain the MX support (which
> may potentially including maintaining the kernel side of things -- I don't
> know if Brice is interested in maintaining it any longer), it could be
> brought back.
> >
>
> I announced the end of the Open-MX maintenance to my users in December
> because OMPI was dropping MX support. Nobody complained. So I don't plan
> to bring back Open-MX to life neither OMPI MX support.
>
>
How much would it cost to turn Open-MX into a libfabric Ethernet provider?
I'll pass the hat at SC16...

Jeff


> Brice
>
> ___
> users mailing list
> us...@open-mpi.org 
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28438.php
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Fortran vs C reductions

2016-02-08 Thread Jeff Hammond
>
>
> > BTW: is there a reason you don't want to just use the C datatypes?  The
> fundamental output of the index is an integer value -- casting it to a
> float of some flavor doesn't fundamentally change its value.
>
> The code in question is not mine.  I have suggested to the developers that
> they should use the correct C types.  The reason I became aware of this
> issue is that one of my colleagues compiled and ran this code on a system
> where OpenMPI was built without Fortran support and the code started
> failing with errors from MPI which were not seen on other systems.
>
>
If you use an MPI library compiled without Fortran support, you should
expect precisely nothing related to Fortran to work.  You might get more
than this because the universe is being nice to you, but you should treat
it as serendipity when something works, not a bug when something doesn't.

Jeff


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] OMPI users] Fortran vs C reductions

2016-02-08 Thread Jeff Hammond
Waiting until runtime to issue this error is a terrible idea, because then
the PETSc team (among others) will disparage you for allowing a user to
successfully build against an unusable library.  They are on-record
numerous times in the past as to the evils of e.g. no-op symbols in MPI or
other runtime libraries, since this means that compile- and link-based
tests pass, even though nothing good will happen when the user employs them
in an application.

The right thing to do is what Gilles proposed: do not define the types in
mpi.h so that it is impossible to compile C code with Fortran datatypes, if
Fortran datatypes are not supported.  There are more and less effective
ways to do this, in terms of letting the user know what is happening.  For
example, you can just not define them, which might confuse novices who
don't know how to read error messages (HPC users are frequent offenders).

You could use e.g.:

#define MPI_DOUBLE_PRECISION choke me No Fortran support when library was
compiled!

Unfortunately, Clang's colorized output emphasizes the wrong problem here,
and ICC doesn't echo the message at all in its error output.  GCC issues the
same error four times, which makes it relatively hard to miss the message.

If the following GCC extension is supported, along with C99/C++11, you
could do this:

#define MPI_DOUBLE_PRECISION _Pragma("GCC error \"MPI was not compiled with
Fortran support\"")

For the _functions_ that require Fortran support, you can use e.g.
__attribute__((error("no Fortran"))) on the function declaration, although
neither ICC nor Clang support this, and it ends up throwing two error
messages when compiled (only one - the right one - when only preprocessed),
which might confuse the same folks that it is trying to help.
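
To make this concrete, here is a minimal, self-contained sketch of the
poisoning technique, relying on the GCC _Pragma/"GCC error" extension
mentioned above.  The FAKE_* names are invented for illustration and are not
Open MPI symbols; a real mpi.h would substitute its own configure-time flag
and the real MPI_DOUBLE_PRECISION:

/* compile with and without -DFAKE_HAVE_FORTRAN to see both behaviors */
#include <stdio.h>

#ifdef FAKE_HAVE_FORTRAN
#define FAKE_DOUBLE_PRECISION 42   /* stand-in for the real handle value */
#else
#define FAKE_DOUBLE_PRECISION \
    _Pragma("GCC error \"this MPI build has no Fortran support\"")
#endif

int main(void)
{
    /* Without -DFAKE_HAVE_FORTRAN, this line fails to compile with the
       message above, which is the early failure argued for in this thread. */
    printf("%d\n", FAKE_DOUBLE_PRECISION);
    return 0;
}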

Best,

Jeff

On Mon, Feb 8, 2016 at 5:14 AM, Jeff Squyres (jsquyres) 
wrote:

> The issue at hand is trying to help the user figure out that they have an
> open MPI built without fortran support.
>
> Perhaps we should improve the error reporting at run time to display
> something about the fact that you used a fortran data type but have an OMPI
> that was compiled without fortran support.
>
> Sent from my phone. No type good.
>
> On Feb 8, 2016, at 4:00 AM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
> That being said, should we remove these fortran types from include files
> and libs when ompi is configure'd without fortran support ?
>
> Cheers,
>
> Gilles
>
> Jeff Hammond  wrote:
>
>>
>> > BTW: is there a reason you don't want to just use the C datatypes?  The
>> fundamental output of the index is an integer value -- casting it to a
>> float of some flavor doesn't fundamentally change its value.
>>
>> The code in question is not mine.  I have suggested to the developers
>> that they should use the correct C types.  The reason I became aware of
>> this issue is that one of my colleagues compiled and ran this code on a
>> system where OpenMPI was built without Fortran support and the code started
>> failing with errors from MPI which were not seen on other systems.
>>
>>
> If you use an MPI library compiled without Fortran support, you should
> expect precisely nothing related to Fortran to work.  You might get more
> than this because the universe is being nice to you, but you should treat
> it as serendipity when something works, not a bug when something doesn't.
>
> Jeff
>
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28459.php
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28460.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] OMPI users] Fortran vs C reductions

2016-02-09 Thread Jeff Hammond
"MPI-3.0 (and later) compliant Fortran bindings are not only a property of
the MPI library itself, but rather a property of an MPI library together
with the Fortran compiler suite for which it is compiled." (MPI 3.1 Section
17.1.7).

Of course, implementations can provide support in excess of the minimum
required by the standard, provided that support remains compliant with the
standard.

Jeff

On Mon, Feb 8, 2016 at 9:21 PM, George Bosilca  wrote:
>
> Sorry to spoil the fun here, but this proposal is a very bad idea. It is
mandated by the MPI standard, page 25 line 27 (v3.1), not only to provide
all predefined datatypes, but to have support for them. There are optional
datatypes, but MPI_DOUBLE_PRECISION (which is explicitly the base
predefined datatype for MPI_2DOUBLE_PRECISION) is not one of them.
>
> Now we can argue if DOUBLE PRECISION in Fortran is a double in C. As
these languages are interoperable, and there is no explicit conversion
function, it is safe to assume this is the case. Thus, it seems to me
absolutely legal to provide the MPI-required support for DOUBLE PRECISION
despite the fact that Fortran support is not enabled.
>
> Now taking a closer look at the op, I see nothing in the standard that
would require providing the op if the corresponding language is not
supported. While it could be nice (as a convenience for the users, and also
because there is no technical reason not to) to enable the loc op on
non-native datatypes, this is not mandatory. Thus, the current behavior exposed
by Open MPI is acceptable from the standard perspective.
>
>   George.
>
>
> On Mon, Feb 8, 2016 at 4:35 PM, Jeff Squyres (jsquyres) <
jsquy...@cisco.com> wrote:
>>
>> Awesome; thanks Gilles.
>>
>>
>> > On Feb 8, 2016, at 9:29 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
>> >
>> > ok, will do
>> >
>> > Cheers,
>> >
>> > Gilles
>> >
>> > On Monday, February 8, 2016, Jeff Squyres (jsquyres) <
jsquy...@cisco.com> wrote:
>> > I like your suggestion better -- if we can somehow report during the
compile/link that the reason for the error is because Open MPI was not
compiled with Fortran support, that would definitely be preferable.
>> >
>> > FWIW: my suggestion was because I wanted to convey the *reason* for
the error (i.e., that OMPI has no Fortran support), and a pragma-based
solution didn't occur to me.  I didn't want to follow Gilles' suggestion of
just removing the symbols, because that will lead to other confusion (e.g.,
"Hey, Open MPI is not compliant because it doesn't have Fortran datatypes
available in C!").
>> >
>> > Gilles: do you want to poke around and see if you can make any of
Jeff's suggestions work out nicely?  (i.e., give some kind of compile/link
error that states that Open MPI was not built with Fortran support?)
>> >
>> >
>> > On Feb 8, 2016, at 8:55 AM, Jeff Hammond 
wrote:
>> > >
>> > > Waiting until runtime to issue this error is a terrible idea,
because then the PETSc team (among others) will disparage you for allowing
a user to successfully build against an unusable library.  They are
on-record numerous times in the past as to the evils of e.g. no-op symbols
in MPI or other runtime libraries, since this means that compile- and
link-based tests pass, even though nothing good will happen when the user
employs them in an application.
>> > >
>> > > The right thing to do is what Gilles proposed: do not define the
types in mpi.h so that it is impossible to compile C code with Fortran
datatypes, if Fortran datatypes are not supported.  There are more and less
effective ways to do this, in terms of letting the user know what is
happening.  For example, you can just not define them, which might confuse
novices who don't know how to read error messages (HPC users are frequent
offenders).
>> > >
>> > > You could use e.g.:
>> > >
>> > > #define MPI_DOUBLE_PRECISION choke me No Fortran support when
library was compiled!
>> > >
>> > > Unfortunately, Clang colorized output emphasizes the wrong problem
here, and ICC doesn't even echo the message at all in its error message.
GCC issues the same error four times, and makes it relatively hard to miss
the message.
>> > >
>> > > If the following GCC extension is supported, along with C99/C++11,
you could do this:
>> > >
>> > > #define MPI_DOUBLE_PRECISION _Pragma("GCC error \"MPI was not
compiled with Fortran support\"")
>> > >
>> > > For the _functions_ that require Fortran support, you can use e.g.
__attribute__((error("no Fortran"))

Re: [OMPI users] OMPI users] Fortran vs C reductions

2016-02-09 Thread Jeff Hammond
Then we should clarify the spec, because it's unreasonable to require MPI to
support a Fortran type without being able to know its representation.

Jeff

On Tuesday, February 9, 2016, George Bosilca  wrote:

> The text you pinpoint is clear about the target: the MPI bindings. The
> question here is not about bindings, but about a predefined datatype, a
> case where I don't think the text applies.
>
>   George.
>
>
> On Tue, Feb 9, 2016 at 6:17 PM, Jeff Hammond  > wrote:
>
>> "MPI-3.0 (and later) compliant Fortran bindings are not only a property
>> of the MPI library itself, but rather a property of an MPI library together
>> with the Fortran compiler suite for which it is compiled." (MPI 3.1 Section
>> 17.1.7).
>>
>> Of course, implementations can provide support in excess of the minimum
>> required by the standard, provided that support remains compliant with the
>> standard.
>>
>> Jeff
>>
>>
>> On Mon, Feb 8, 2016 at 9:21 PM, George Bosilca > > wrote:
>> >
>> > Sorry to spoil the fun here, but this proposal is a very bad idea. It
>> is mandated by the MPI standard, page 25 line 27 (v3.1), not only to
>> provide all predefined datatypes, but to have support for them. There are
>> optional datatypes, but MPI_DOUBLE_PRECISION (which is explicitly the base
>> predefined datatype for MPI_2DOUBLE_PRECISION) is not one of them.
>> >
>> > Now we can argue if DOUBLE PRECISION in Fortran is a double in C. As
>> these languages are interoperable, and there is no explicit conversion
>> function, it is safe to assume this is the case. Thus, is seems to me
>> absolutely legal to provide the MPI-required support for DOUBLE PRECISION
>> despite the fact that Fortran support is not enabled.
>> >
>> > Now taking a closer look at the op, I see nothing in the standard the
>> would require to provide the op if the corresponding language is not
>> supported. While it could be nice (as a convenience for the users and also
>> because there is no technical reason not to) to enable the loc op, on non
>> native datatypes, this is not mandatory. Thus, the current behavior exposed
>> by Open MPI is acceptable from the standard perspective.
>> >
>> >   George.
>> >
>> >
>> > On Mon, Feb 8, 2016 at 4:35 PM, Jeff Squyres (jsquyres) <
>> jsquy...@cisco.com >
>> wrote:
>> >>
>> >> Awesome; thanks Gilles.
>> >>
>> >>
>> >> > On Feb 8, 2016, at 9:29 AM, Gilles Gouaillardet <
>> gilles.gouaillar...@gmail.com
>> > wrote:
>> >> >
>> >> > ok, will do
>> >> >
>> >> > Cheers,
>> >> >
>> >> > Gilles
>> >> >
>> >> > On Monday, February 8, 2016, Jeff Squyres (jsquyres) <
>> jsquy...@cisco.com >
>> wrote:
>> >> > I like your suggestion better -- if we can somehow report during the
>> compile/link that the reason for the error is because Open MPI was not
>> compiled with Fortran support, that would definitely be preferable.
>> >> >
>> >> > FWIW: my suggestion was because I wanted to convey the *reason* for
>> the error (i.e., that OMPI has no Fortran support), and a pragma-based
>> solution didn't occur to me.  I didn't want to follow Gilles' suggestion of
>> just removing the symbols, because that will lead to other confusion (e.g.,
>> "Hey, Open MPI is not compliant because it doesn't have Fortran datatypes
>> available in C!").
>> >> >
>> >> > Gilles: do you want to poke around and see if you can make any of
>> Jeff's suggestions work out nicely?  (i.e., give some kind of compile/link
>> error that states that Open MPI was not built with Fortran support?)
>> >> >
>> >> >
>> >> > On Feb 8, 2016, at 8:55 AM, Jeff Hammond > > wrote:
>> >> > >
>> >> > > Waiting until runtime to issue this error is a terrible idea,
>> because then the PETSc team (among others) will disparage you for allowing
>> a user to successfully build against an unusable library.  They are
>> on-record numerous times in the past as to the evils of e.g. no-op symbols
>> in MPI or other runtime libraries, since this means that compile- and
>> link-based tests pass, even though nothing good will happen when the user
>> employs them in an application.
>> >> > >
>> >> > > The right thing to do is what Gilles proposed: do not de

Re: [OMPI users] shared memory zero size segment

2016-02-10 Thread Jeff Hammond
I don't know about bulletproof, but Win_shared_query is the *only* valid
way to get the addresses of memory in other processes associated with a
window.

The default for Win_allocate_shared is contiguous memory, but it can and
likely will be mapped differently into each process, in which case only
relative offsets are transferrable.

Jeff

On Wed, Feb 10, 2016 at 4:19 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:

> Peter,
>
> The bulletproof way is to use MPI_Win_shared_query after
> MPI_Win_allocate_shared.
> I do not know if current behavior is a bug or a feature...
>
> Cheers,
>
> Gilles
>
>
> On Wednesday, February 10, 2016, Peter Wind  wrote:
>
>> Hi,
>>
>> Under fortran, MPI_Win_allocate_shared is called with a window size of
>> zero for some processes.
>> The output pointer is then not valid for these processes (null pointer).
>> Did I understood this wrongly? shouldn't the pointers be contiguous, so
>> that for a zero sized window, the pointer should point to the start of the
>> segment of the next rank?
>> The documentation explicitly specifies "size = 0 is valid".
>>
>> Attached a small code, where rank=0 allocate a window of size zero. All
>> the other ranks get valid pointers, except rank 0.
>>
>> Best regards,
>> Peter
>> ___
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/02/28485.php
>>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28493.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] shared memory zero size segment

2016-02-10 Thread Jeff Hammond
On Wed, Feb 10, 2016 at 8:44 AM, Peter Wind  wrote:

> I agree that in practice the best practice would be to use
> Win_shared_query.
>
> Still I am confused by this part in the documentation:
> "The allocated memory is contiguous across process ranks unless the info
> key *alloc_shared_noncontig* is specified. Contiguous across process
> ranks means that the first address in the memory segment of process i is
> consecutive with the last address in the memory segment of process i - 1.
> This may enable the user to calculate remote address offsets with local
> information only."
>
> Isn't this an encouragement to use the pointer of Win_allocate_shared
> directly?
>
>
No, it is not.  Win_allocate_shared only gives you the pointer to the
portion of the allocation that is owned by the calling process.  If you
want to access the whole slab, call Win_shared_query(..,rank=0,..) and use
the resulting baseptr.

I attempted to modify your code to be more correct, but I don't know enough
Fortran to get it right.  If you can parse C examples, I'll provide some of
those.
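
Along those lines, here is a minimal C sketch of the rank-0 query pattern
(not a translation of your Fortran test): each rank contributes one int on a
node communicator, and every rank then addresses the whole contiguous slab
through the baseptr that Win_shared_query returns for rank 0.  It assumes the
default contiguous layout and uses a plain barrier for brevity; a fully
pedantic version would add MPI_Win_sync / passive-target locking around the
loads and stores.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);

    int nrank, nsize;
    MPI_Comm_rank(node, &nrank);
    MPI_Comm_size(node, &nsize);

    int *mine;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            node, &mine, &win);
    *mine = nrank;               /* write only into my own segment */

    /* The portable way to address the other segments: query rank 0. */
    MPI_Aint qsize;
    int qdisp;
    int *slab;
    MPI_Win_shared_query(win, 0, &qsize, &qdisp, &slab);

    MPI_Barrier(node);
    if (nrank == 0)
        for (int r = 0; r < nsize; r++)
            printf("slab[%d] = %d\n", r, slab[r]);

    MPI_Win_free(&win);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}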

Jeff


> Peter
>
> --
>
> I don't know about bulletproof, but Win_shared_query is the *only* valid
> way to get the addresses of memory in other processes associated with a
> window.
>
> The default for Win_allocate_shared is contiguous memory, but it can and
> likely will be mapped differently into each process, in which case only
> relative offsets are transferrable.
>
> Jeff
>
> On Wed, Feb 10, 2016 at 4:19 AM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> Peter,
>>
>> The bulletproof way is to use MPI_Win_shared_query after
>> MPI_Win_allocate_shared.
>> I do not know if current behavior is a bug or a feature...
>>
>> Cheers,
>>
>> Gilles
>>
>>
>> On Wednesday, February 10, 2016, Peter Wind  wrote:
>>
>>> Hi,
>>>
>>> Under fortran, MPI_Win_allocate_shared is called with a window size of
>>> zero for some processes.
>>> The output pointer is then not valid for these processes (null pointer).
>>> Did I understood this wrongly? shouldn't the pointers be contiguous, so
>>> that for a zero sized window, the pointer should point to the start of the
>>> segment of the next rank?
>>> The documentation explicitly specifies "size = 0 is valid".
>>>
>>> Attached a small code, where rank=0 allocate a window of size zero. All
>>> the other ranks get valid pointers, except rank 0.
>>>
>>> Best regards,
>>> Peter
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> Link to this post:
>>> http://www.open-mpi.org/community/lists/users/2016/02/28485.php
>>>
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/02/28493.php
>>
>
>
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>
> _______
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28496.php
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28497.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


sharetest.f90
Description: Binary data


Re: [OMPI users] shared memory zero size segment

2016-02-11 Thread Jeff Hammond
On Thu, Feb 11, 2016 at 8:46 AM, Nathan Hjelm  wrote:
>
>
> On Thu, Feb 11, 2016 at 02:17:40PM +, Peter Wind wrote:
> >    I would add that the present situation is bound to give problems for some
> >    users.
> >    It is natural to divide an array in segments, each process treating its
> >    own segment, but needing to read adjacent segments too.
> >    MPI_Win_allocate_shared seems to be designed for this.
> >    This will work fine as long as no segment has size zero. It can also be
> >    expected that most testing would be done with all segments larger than
> >    zero.
> >    The document adding "size = 0 is valid" would also make people confident
> >    that it will be consistent for that special case too.
>
> Nope, that statement says it's ok for a rank to specify that the local
> shared memory segment is 0 bytes. Nothing more. The standard
> unfortunately does not define what pointer value is returned for a rank
> that specifies size = 0. Not sure if the RMA working group intentionally
> left that undefined... Anyway, Open MPI does not appear to be out of
> compliance with the standard here.
>

MPI_Alloc_mem doesn't say what happens if you pass size=0 either.  The RMA
working group intentionally tries to maintain consistency with the rest of
the MPI standard whenever possible, so we did not create a new semantic
here.

MPI_Win_shared_query text includes this:

"If all processes in the group attached to the window specified size = 0,
then the call returns size = 0 and a baseptr as if MPI_ALLOC_MEM was called
with size = 0."

>
> To be safe you should use MPI_Win_shared_query as suggested. You can
> pass MPI_PROC_NULL as the rank to get the pointer for the first non-zero
> sized segment in the shared memory window.

Indeed!  I forgot about that.  MPI_Win_shared_query solves this problem for
the user brilliantly.

Jeff

--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] shared memory zero size segment

2016-02-11 Thread Jeff Hammond
See attached.  Output below.  Note that the base you get for ranks 0 and 1
is the same, so you need to use the fact that size=0 at rank=0 to know not
to dereference that pointer and expect to be writing into rank 0's memory,
since you will write into rank 1's.

I would probably add "if (size==0) base=NULL;" for good measure.
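
Something like the following sketch, say, wrapped as a helper (the function
name is invented for illustration):

#include <stddef.h>
#include <mpi.h>

/* Query a rank's segment and apply the size==0 guard discussed above. */
static void *segment_or_null(MPI_Win win, int rank, MPI_Aint *size)
{
    int disp;
    void *base = NULL;
    MPI_Win_shared_query(win, rank, size, &disp, &base);
    if (*size == 0)
        base = NULL;   /* this rank owns no bytes; never dereference it */
    return base;
}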

Jeff

$ mpirun -n 4 ./a.out

query: me=0, them=0, size=0, disp=1, base=0x10bd64000

query: me=0, them=1, size=4, disp=1, base=0x10bd64000

query: me=0, them=2, size=4, disp=1, base=0x10bd64004

query: me=0, them=3, size=4, disp=1, base=0x10bd64008

query: me=0, them=PROC_NULL, size=4, disp=1, base=0x10bd64000

query: me=1, them=0, size=0, disp=1, base=0x102d3b000

query: me=1, them=1, size=4, disp=1, base=0x102d3b000

query: me=1, them=2, size=4, disp=1, base=0x102d3b004

query: me=1, them=3, size=4, disp=1, base=0x102d3b008

query: me=1, them=PROC_NULL, size=4, disp=1, base=0x102d3b000

query: me=2, them=0, size=0, disp=1, base=0x10aac1000

query: me=2, them=1, size=4, disp=1, base=0x10aac1000

query: me=2, them=2, size=4, disp=1, base=0x10aac1004

query: me=2, them=3, size=4, disp=1, base=0x10aac1008

query: me=2, them=PROC_NULL, size=4, disp=1, base=0x10aac1000

query: me=3, them=0, size=0, disp=1, base=0x100fa2000

query: me=3, them=1, size=4, disp=1, base=0x100fa2000

query: me=3, them=2, size=4, disp=1, base=0x100fa2004

query: me=3, them=3, size=4, disp=1, base=0x100fa2008

query: me=3, them=PROC_NULL, size=4, disp=1, base=0x100fa2000

On Thu, Feb 11, 2016 at 8:55 AM, Jeff Hammond 
wrote:

>
>
> On Thu, Feb 11, 2016 at 8:46 AM, Nathan Hjelm  wrote:
> >
> >
> > On Thu, Feb 11, 2016 at 02:17:40PM +, Peter Wind wrote:
> > >I would add that the present situation is bound to give problems
> for some
> > >users.
> > >It is natural to divide an array in segments, each process treating
> its
> > >own segment, but needing to read adjacent segments too.
> > >MPI_Win_allocate_shared seems to be designed for this.
> > >This will work fine as long as no segment as size zero. It can also
> be
> > >expected that most testing would be done with all segments larger
> than
> > >zero.
> > >The document adding "size = 0 is valid", would also make people
> confident
> > >that it will be consistent for that special case too.
> >
> > Nope, that statement says its ok for a rank to specify that the local
> > shared memory segment is 0 bytes. Nothing more. The standard
> > unfortunately does not define what pointer value is returned for a rank
> > that specifies size = 0. Not sure if the RMA working group intentionally
> > left that undefine... Anyway, Open MPI does not appear to be out of
> > compliance with the standard here.
> >
>
> MPI_Alloc_mem doesn't say what happens if you pass size=0 either.  The RMA
> working group intentionally tries to maintain consistency with the rest of
> the MPI standard whenever possible, so we did not create a new semantic
> here.
>
> MPI_Win_shared_query text includes this:
>
> "If all processes in the group attached to the window specified size = 0,
> then the call returns size = 0 and a baseptr as if MPI_ALLOC_MEM was called
> with size = 0."
>
> >
> > To be safe you should use MPI_Win_shared_query as suggested. You can
> > pass MPI_PROC_NULL as the rank to get the pointer for the first non-zero
> > sized segment in the shared memory window.
>
> Indeed!  I forgot about that.  MPI_Win_shared_query solves this problem
> for the user brilliantly.
>
> Jeff
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
#include <stdio.h>
#include <mpi.h>

/* test zero size segment.
   run on at least 3 cpus (all on one node)
   mpirun -np 4 a.out */

int main(int argc, char** argv)
{
    MPI_Init(NULL, NULL);

    int wsize, wrank;
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    MPI_Comm ncomm = MPI_COMM_NULL;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &ncomm);

    MPI_Aint size = (wrank==0) ? 0 : sizeof(int);
    MPI_Win win = MPI_WIN_NULL;
    int * ptr = NULL;
    /* NB: allocating on MPI_COMM_WORLD assumes every rank lives on one node */
    MPI_Win_allocate_shared(size, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &ptr, &win);

    int nsize, nrank;
    MPI_Comm_size(MPI_COMM_WORLD, &nsize);
    MPI_Comm_rank(MPI_COMM_WORLD, &nrank);

    /* The tail of the program was truncated in the archive; the loop below is
       reconstructed to match the query output shown in the message above:
       query every rank, then MPI_PROC_NULL. */
    for (int r=0; r<nsize; r++) {
        MPI_Aint qsize = 0;
        int qdisp = 0;
        int * qptr = NULL;
        MPI_Win_shared_query(win, r, &qsize, &qdisp, &qptr);
        printf("query: me=%d, them=%d, size=%zu, disp=%d, base=%p\n",
               nrank, r, (size_t)qsize, qdisp, (void*)qptr);
    }

    MPI_Aint qsize = 0;
    int qdisp = 0;
    int * qptr = NULL;
    MPI_Win_shared_query(win, MPI_PROC_NULL, &qsize, &qdisp, &qptr);
    printf("query: me=%d, them=PROC_NULL, size=%zu, disp=%d, base=%p\n",
           nrank, (size_t)qsize, qdisp, (void*)qptr);

    MPI_Win_free(&win);
    MPI_Comm_free(&ncomm);
    MPI_Finalize();
    return 0;
}

Re: [OMPI users] shared memory zero size segment

2016-02-11 Thread Jeff Hammond
 me=2, them=0, size=0, disp=1, base=0x10aac1000
> >
> >  query: me=2, them=1, size=4, disp=1, base=0x10aac1000
> >
> >  query: me=2, them=2, size=4, disp=1, base=0x10aac1004
> >
> >  query: me=2, them=3, size=4, disp=1, base=0x10aac1008
> >
> >  query: me=2, them=PROC_NULL, size=4, disp=1, base=0x10aac1000
> >
> >  query: me=3, them=0, size=0, disp=1, base=0x100fa2000
> >
> >  query: me=3, them=1, size=4, disp=1, base=0x100fa2000
> >
> >  query: me=3, them=2, size=4, disp=1, base=0x100fa2004
> >
> >  query: me=3, them=3, size=4, disp=1, base=0x100fa2008
> >
> >  query: me=3, them=PROC_NULL, size=4, disp=1, base=0x100fa2000
> >
> >  On Thu, Feb 11, 2016 at 8:55 AM, Jeff Hammond <
> jeff.scie...@gmail.com >
> >  wrote:
> >
> >On Thu, Feb 11, 2016 at 8:46 AM, Nathan Hjelm  > wrote:
> >>
> >>
> >> On Thu, Feb 11, 2016 at 02:17:40PM +, Peter Wind wrote:
> >> >I would add that the present situation is bound to give
> >problems for some
> >> >users.
> >> >It is natural to divide an array in segments, each process
> >treating its
> >> >own segment, but needing to read adjacent segments too.
> >> >MPI_Win_allocate_shared seems to be designed for this.
> >> >This will work fine as long as no segment as size zero. It
> can
> >also be
> >> >expected that most testing would be done with all segments
> >larger than
> >> >zero.
> >> >The document adding "size = 0 is valid", would also make
> people
> >confident
> >> >that it will be consistent for that special case too.
> >>
> >> Nope, that statement says its ok for a rank to specify that the
> >local
> >> shared memory segment is 0 bytes. Nothing more. The standard
> >> unfortunately does not define what pointer value is returned
> for a
> >rank
> >> that specifies size = 0. Not sure if the RMA working group
> >intentionally
> >> left that undefine... Anyway, Open MPI does not appear to be
> out of
> >> compliance with the standard here.
> >>
> >MPI_Alloc_mem doesn't say what happens if you pass size=0
> either.  The
> >RMA working group intentionally tries to maintain consistency
> with the
> >rest of the MPI standard whenever possible, so we did not create
> a new
> >semantic here.
> >MPI_Win_shared_query text includes this:
> >"If all processes in the group attached to the window specified
> size =
> >0, then the call returns size = 0 and a baseptr as if
> MPI_ALLOC_MEM
> >was called with size = 0."
> >
> >>
> >> To be safe you should use MPI_Win_shared_query as suggested.
> You can
> >    > pass MPI_PROC_NULL as the rank to get the pointer for the first
> >non-zero
> >> sized segment in the shared memory window.
> >Indeed!  I forgot about that.  MPI_Win_shared_query solves this
> >problem for the user brilliantly.
> >Jeff
> >--
> >Jeff Hammond
> >jeff.scie...@gmail.com 
> >http://jeffhammond.github.io/
> >
> >  --
> >  Jeff Hammond
> >  jeff.scie...@gmail.com 
> >  http://jeffhammond.github.io/
> >  ___
> >  users mailing list
> >  us...@open-mpi.org 
> >  Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> >  Link to this post:
> >  http://www.open-mpi.org/community/lists/users/2016/02/28508.php
>
> > ___
> > users mailing list
> > us...@open-mpi.org 
> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28511.php
>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] shared memory zero size segment

2016-02-11 Thread Jeff Hammond
e=4, disp=1, base=0x10bd64000
> > >
> > >  query: me=0, them=2, size=4, disp=1, base=0x10bd64004
> > >
> > >  query: me=0, them=3, size=4, disp=1, base=0x10bd64008
> > >
> > >  query: me=0, them=PROC_NULL, size=4, disp=1, base=0x10bd64000
> > >
> > >  query: me=1, them=0, size=0, disp=1, base=0x102d3b000
> > >
> > >  query: me=1, them=1, size=4, disp=1, base=0x102d3b000
> > >
> > >  query: me=1, them=2, size=4, disp=1, base=0x102d3b004
> > >
> > >  query: me=1, them=3, size=4, disp=1, base=0x102d3b008
> > >
> > >  query: me=1, them=PROC_NULL, size=4, disp=1, base=0x102d3b000
> > >
> > >  query: me=2, them=0, size=0, disp=1, base=0x10aac1000
> > >
> > >  query: me=2, them=1, size=4, disp=1, base=0x10aac1000
> > >
> > >  query: me=2, them=2, size=4, disp=1, base=0x10aac1004
> > >
> > >  query: me=2, them=3, size=4, disp=1, base=0x10aac1008
> > >
> > >  query: me=2, them=PROC_NULL, size=4, disp=1, base=0x10aac1000
> > >
> > >  query: me=3, them=0, size=0, disp=1, base=0x100fa2000
> > >
> > >  query: me=3, them=1, size=4, disp=1, base=0x100fa2000
> > >
> > >  query: me=3, them=2, size=4, disp=1, base=0x100fa2004
> > >
> > >  query: me=3, them=3, size=4, disp=1, base=0x100fa2008
> > >
> > >  query: me=3, them=PROC_NULL, size=4, disp=1, base=0x100fa2000
> > >
> > >  On Thu, Feb 11, 2016 at 8:55 AM, Jeff Hammond <
> jeff.scie...@gmail.com >
> > >  wrote:
> > >
> > >On Thu, Feb 11, 2016 at 8:46 AM, Nathan Hjelm  > wrote:
> > >>
> > >>
> > >> On Thu, Feb 11, 2016 at 02:17:40PM +, Peter Wind wrote:
> > >> >I would add that the present situation is bound to give
> > >problems for some
> > >> >users.
> > >> >It is natural to divide an array in segments, each
> process
> > >treating its
> > >> >own segment, but needing to read adjacent segments too.
> > >> >MPI_Win_allocate_shared seems to be designed for this.
> > >> >This will work fine as long as no segment as size zero.
> It can
> > >also be
> > >> >expected that most testing would be done with all
> segments
> > >larger than
> > >> >zero.
> > >> >The document adding "size = 0 is valid", would also make
> people
> > >confident
> > >> >that it will be consistent for that special case too.
> > >>
> > >> Nope, that statement says its ok for a rank to specify that
> the
> > >local
> > >> shared memory segment is 0 bytes. Nothing more. The standard
> > >    > unfortunately does not define what pointer value is returned
> for a
> > >rank
> > >    > that specifies size = 0. Not sure if the RMA working group
> > >intentionally
> > >> left that undefine... Anyway, Open MPI does not appear to be
> out of
> > >> compliance with the standard here.
> > >>
> > >MPI_Alloc_mem doesn't say what happens if you pass size=0
> either.  The
> > >RMA working group intentionally tries to maintain consistency
> with the
> > >rest of the MPI standard whenever possible, so we did not
> create a new
> > >semantic here.
> > >MPI_Win_shared_query text includes this:
> > >"If all processes in the group attached to the window specified
> size =
> > >0, then the call returns size = 0 and a baseptr as if
> MPI_ALLOC_MEM
> > >was called with size = 0."
> > >
> > >>
> > >> To be safe you should use MPI_Win_shared_query as suggested.
> You can
> > >> pass MPI_PROC_NULL as the rank to get the pointer for the
> first
> > >non-zero
> > >> sized segment in the shared memory window.
> > >Indeed!  I forgot about that.  MPI_Win_shared_query solves this
> > >problem for the user brilliantly.
> > >Jeff
> > >--
> > >Jeff Hammond
> > >jeff.scie...@gmail.com 
> > >http://jeffhammond.github.io/
> > >
> > >  --
> > >  Jeff Hammond
> > >  jeff.scie...@gmail.com 
> > >  http://jeffhammond.github.io/
> > >  ___
> > >  users mailing list
> > >  us...@open-mpi.org 
> > >  Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > >  Link to this post:
> > >  http://www.open-mpi.org/community/lists/users/2016/02/28508.php
> >
> > > ___
> > > users mailing list
> > > us...@open-mpi.org 
> > > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > > Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28511.php
> >
>
>
>
> > ___
> > users mailing list
> > us...@open-mpi.org 
> > Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> > Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28513.php
>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Open MPI backwards incompatibility issue in statically linked program

2016-02-13 Thread Jeff Hammond
On Sat, Feb 13, 2016 at 2:27 PM, Nick Papior  wrote:

>
>
> 2016-02-13 23:07 GMT+01:00 Kim Walisch :
>
>> Hi,
>>
>> > You may be interested in reading:
>> https://www.open-mpi.org/software/ompi/versions/
>>
>> Thanks for your answer. I found:
>>
>> > Specifically: v1.10.x is not guaranteed to be backwards
>> compatible with other v1.x releases.
>>
>> And:
>>
>> > However, this definition only applies when the same version of Open
>> MPI is used with all instances ... If the versions are not exactly the
>> same everywhere, Open MPI is not guaranteed to work properly in any
>> scenario.
>>
>> So statically linking against Open MPI seems to be a bad idea.
>>
>> What about linking against a rather old shared Open MPI library
>> from e.g. 3 years ago? Will my program likely run on most systems
>> which have a more recent Open MPI version installed?
>>
> Most probably this will rarely work. If it does, you are lucky... :)
> The link still applies. As it says, if it works it works, if not you have
> to do something else.
>
>>
>> Or is it better to not distribute any binaries which link against Open MPI
>> and instead put compilation instructions on the website?
>>
> Yes, and/or provide serial equivalents of your program.
> Besides, providing binaries for specific MPI implementations may seem like
> it makes things easier for users; however, in my experience many users do not
> know that MPI is implementation-specific (i.e. Open MPI vs. MPICH), and hence
> they will subsequently ask why it doesn't work with, for instance, the Intel
> compiler suite.
>
>>
>>
You can rely upon e.g. https://www.mpich.org/abi/ when redistributing MPI
binaries built with MPICH, but a better option would be to wrap all of your
MPI code in an implementation-agnostic wrapper and then ship a binary that
can dlopen a different version wrapper depending on which MPI
implementation the user has.  That would allow you to ship a single binary
that could use both MPICH and OpenMPI.
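
For illustration only, a rough sketch of that dlopen pattern; the environment
variable, shim file names, and the wrapped_init symbol are all invented, and
each shim would be a small shared library exporting the same wrapper API but
compiled against one particular MPI implementation:

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef int (*wrapped_init_fn)(int *, char ***);

int main(int argc, char **argv)
{
    /* Pick the shim for whichever MPI the user has installed. */
    const char *shim = getenv("MYAPP_MPI_SHIM");      /* e.g. "./shim_openmpi.so" */
    void *h = dlopen(shim ? shim : "./shim_mpich.so", RTLD_NOW);
    if (!h) { fprintf(stderr, "dlopen failed: %s\n", dlerror()); return 1; }

    wrapped_init_fn wrapped_init = (wrapped_init_fn) dlsym(h, "wrapped_init");
    if (!wrapped_init) { fprintf(stderr, "dlsym failed: %s\n", dlerror()); return 1; }

    int rc = wrapped_init(&argc, &argv);
    /* ... the rest of the application calls the wrapper API, never MPI directly ... */
    dlclose(h);
    return rc;
}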

Jeff


> Thanks,
>> Kim
>>
>> On Sat, Feb 13, 2016 at 10:45 PM, Nick Papior 
>> wrote:
>>
>>> You may be interested in reading:
>>> https://www.open-mpi.org/software/ompi/versions/
>>>
>>> 2016-02-13 22:30 GMT+01:00 Kim Walisch :
>>>
>>>> Hi,
>>>>
>>>> In order to make life of my users easier I have built a fully
>>>> statically linked version of my primecount program. So the program
>>>> also statically links against Open MPI. I have built this binary on
>>>> CentOS-7-x86_64 using gcc. The good news is that the binary runs
>>>> without any issues on Ubuntu 15.10 x64 (uses mpiexec (OpenRTE) 1.10.2).
>>>>
>>>> The bad news is that the binary does not work on Ubuntu 14.04 x64
>>>> which uses mpiexec (OpenRTE) 1.6.5. Here is the error message:
>>>>
>>>>
>>>> $ mpirun -n 1 ./primecount 1e10 -t1
>>>> [ip-XXX:02671] [[8243,0],0] mca_oob_tcp_recv_handler: invalid message
>>>> type: 15
>>>>
>>>> --
>>>> mpirun noticed that the job aborted, but has no info as to the process
>>>> that caused that situation.
>>>>
>>>> --
>>>> ubuntu@ip-XXX:~$ mpiexec --version
>>>> mpiexec (OpenRTE) 1.6.5
>>>>
>>>>
>>>> Questions:
>>>>
>>>> 1) Is this backwards incompatibility issue an Open MPI bug?
>>>>
>>>> 2) Can I expect that my binary will work with future mpiexec
>>>> versions >= 1.10 (which it was built with)?
>>>>
>>>> Thanks and best regards,
>>>> Kim
>>>>
>>>> ___
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>> Link to this post:
>>>> http://www.open-mpi.org/community/lists/users/2016/02/28522.php
>>>>
>>>
>>>
>>>
>>> --
>>> Kind regards Nick
>>>
>>
>>
>
>
> --
> Kind regards Nick
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/02/28525.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] OpenMP & Open MPI

2016-03-13 Thread Jeff Hammond
On Sun, Mar 13, 2016 at 2:02 PM, Matthew Larkin  wrote:

> Hello,
>
> My understanding is Open MPI can utilize shared and/or distributed memory
> architecture (parallel programming model). OpenMP is solely for shared
> memory systems.
>
>
The MPI-3 standard provides both explicitly shared and distributed memory
semantics.  See MPI_Win_allocate_shared for the shared memory feature.

In addition to the explicit semantics, all reasonable MPI implementations
exploit shared memory internally, which is why Send-Recv within a node is
usually higher bandwidth than between nodes.


> I believe Open MPI incorporates OpenMP from some of the other archives I
> glanced at.
>
>
Some implementations use OS threads (e.g. POSIX threads) internally but not
for the type of concurrency that OpenMP provides.

OpenMP is usually a bad choice for use inside of an MPI library because it
does not generally compose well with other threading models.


> Is this a true statement? If so, is there any need to create a hybrid
> model that uses both OpenMP and Open MPI?
>
>
Various people, including me, have argued that MPI+OpenMP hybrid
programming is not necessary and even harmful:
http://www.orau.gov/hpcor2015/whitepapers/Exascale_Computing_without_Threads-Barry_Smith.pdf
http://www.cs.utexas.edu/users/flame/BLISRetreat2014/slides/hammond-blis-2014.pdf
http://scisoftdays.org/pdf/2016_slides/hammond.pdf

However, this does not mean that flat MPI is close to optimal.  The
statement here is only that MPI+MPI is more effective than MPI+OpenMP when
the programmer devotes equivalent effort to both (and handles SIMD via some
mechanism).

Best,

Jeff


> Thanks!
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/03/28696.php
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Fault tolerant feature in Open MPI

2016-03-16 Thread Jeff Hammond
Why do you need OpenMPI to do this? Molecular dynamics trajectories are
trivial to checkpoint and restart at the application level. I'm sure
Gromacs already supports this. Please consult the Gromacs docs or user
support for details.
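
As a generic illustration of application-level checkpointing (nothing
Gromacs-specific; the file name and the notion of "state" here are invented):

#include <stdio.h>

typedef struct { long step; double energy; } State;

static void checkpoint(const State *s)
{
    FILE *f = fopen("checkpoint.bin", "wb");
    if (f) { fwrite(s, sizeof(*s), 1, f); fclose(f); }
}

static int restart(State *s)
{
    FILE *f = fopen("checkpoint.bin", "rb");
    if (!f) return 0;
    int ok = (int)(fread(s, sizeof(*s), 1, f) == 1);
    fclose(f);
    return ok;
}

int main(void)
{
    State s = { 0, 0.0 };
    if (restart(&s))
        printf("resuming at step %ld\n", s.step);
    for (; s.step < 1000000; s.step++) {
        /* ... advance the simulation ... */
        if (s.step % 10000 == 0)
            checkpoint(&s);   /* a real code would also write atomically (tmp file + rename) */
    }
    checkpoint(&s);
    return 0;
}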

Jeff

On Tuesday, March 15, 2016, Husen R  wrote:

> Dear Open MPI Users,
>
>
> Does the current stable release of Open MPI (v1.10 series) support fault
> tolerant feature ?
> I got the information from Open MPI FAQ that The checkpoint/restart
> support was last released as part of the v1.6 series.
> I just want to make sure about this.
>
> and by the way, does Open MPI able to checkpoint or restart mpi
> application/GROMACS automatically ?
> Please, I really need help.
>
> Regards,
>
>
> Husen
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


Re: [OMPI users] Fault tolerant feature in Open MPI

2016-03-16 Thread Jeff Hammond
Just checkpoint-restart the app to relocate. The overhead will be lower
than trying to do it with MPI.

Jeff

On Wednesday, March 16, 2016, Husen R  wrote:

> Hi Jeff,
>
> Thanks for the reply.
>
> After consulting the Gromacs docs, as you suggested, Gromacs already
> supports checkpoint/restart. thanks for the suggestion.
>
> Previously, I asked about checkpoint/restart in Open MPI because I want to
> checkpoint an MPI application and restart/migrate it while it is running.
> For example, I run an MPI application on nodes A, B and C in a cluster, and
> I want to migrate the process running on node A to another node, let's say
> node C.
> Is there a way to do this with Open MPI? Thanks.
>
> Regards,
>
> Husen
>
>
>
>
> On Wed, Mar 16, 2016 at 12:37 PM, Jeff Hammond  > wrote:
>
>> Why do you need OpenMPI to do this? Molecular dynamics trajectories are
>> trivial to checkpoint and restart at the application level. I'm sure
>> Gromacs already supports this. Please consult the Gromacs docs or user
>> support for details.
>>
>> Jeff
>>
>>
>> On Tuesday, March 15, 2016, Husen R > > wrote:
>>
>>> Dear Open MPI Users,
>>>
>>>
>>> Does the current stable release of Open MPI (v1.10 series) support fault
>>> tolerant feature ?
>>> I got the information from Open MPI FAQ that The checkpoint/restart
>>> support was last released as part of the v1.6 series.
>>> I just want to make sure about this.
>>>
>>> and by the way, does Open MPI able to checkpoint or restart mpi
>>> application/GROMACS automatically ?
>>> Please, I really need help.
>>>
>>> Regards,
>>>
>>>
>>> Husen
>>>
>>
>>
>> --
>> Jeff Hammond
>> jeff.scie...@gmail.com
>> 
>> http://jeffhammond.github.io/
>>
>> ___
>> users mailing list
>> us...@open-mpi.org 
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/03/28705.php
>>
>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/

