Hi,
I tried openmpi-3.0.0rc1.tar.gz using Intel Fortran 2017 and gcc on a current
MacOS system. For this version, it seems to me that MPI_IN_PLACE returns
incorrect results (while other MPI implementations, including some past OpenMPI
versions, work fine).
This can be seen with a simple Fortra
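For context, a minimal sketch of the kind of Fortran check described above (an illustration only, assuming a comparison of MPI_ALLREDUCE with and without MPI_IN_PLACE; not the actual reproducer posted to the list):

  program check_in_place
    implicit none
    include 'mpif.h'
    integer :: ierr, rank, ntasks
    double precision :: val, ref

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, ntasks, ierr)

    ! Every task contributes 1.0, so the correct sum is ntasks.

    ! In-place reduction: the send buffer is MPI_IN_PLACE, the result lands in val.
    val = 1.0d0
    call MPI_ALLREDUCE(MPI_IN_PLACE, val, 1, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)
    if (rank == 0 .and. abs(val - dble(ntasks)) > 1.d-12) then
       write(*,*) '* MPI_IN_PLACE does not appear to work as intended.'
    end if

    ! Same reduction with separate send and receive buffers, as a cross-check.
    val = 1.0d0
    call MPI_ALLREDUCE(val, ref, 1, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)
    if (rank == 0 .and. abs(ref - dble(ntasks)) <= 1.d-12) then
       write(*,*) '* Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.'
    end if

    call MPI_FINALIZE(ierr)
  end program check_in_place

Something of this shape can be compiled with mpifort and run with, e.g., mpirun -np 4 ./a.out.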
Volker,
I was unable to reproduce this issue on Linux.
Can you please post your full configure command line, your GNU
compiler version, and the full test program?
Also, how many MPI tasks are you running?
Cheers,
Gilles
On Wed, Jul 26, 2017 at 4:25 PM, Volker Blum wrote:
> Hi,
>
> I tried op
Dear Gilles,
Thank you very much for the fast answer.
Darn. I feared it might not occur on all platforms, since my former MacBook
(with an older OpenMPI version) no longer exhibited the problem, a different
Linux/Intel machine did last December, etc.
On this specific machine, the configure lin
Dear George, Dear all,
I use "mpirun -np xx ./a.out"
I do not know if I have any common ground. I mean, I have to
design everything from the beginning. You can find what I would like to do in
the attachment. Basically, an MPI cast in another MPI. Consequently, I am
thinking of MPI groups or MPI
Hi,
> On 26.07.2017 at 00:48, Kulshrestha, Vipul wrote:
>
> I have several questions about integration of openmpi with resource queuing
> systems.
>
> 1.
> I understand that openmpi supports integration with various resource
> distribution systems such as SGE, LSF, torque etc.
>
> I ne
Hi,
> On 26.07.2017 at 02:16, r...@open-mpi.org wrote:
>
>
>> On Jul 25, 2017, at 3:48 PM, Kulshrestha, Vipul
>> wrote:
>>
>> I have several questions about integration of openmpi with resource queuing
>> systems.
>>
>> 1.
>> I understand that openmpi supports integration with various res
Volker,
Thanks, I will have a look at it.
Meanwhile, if you can reproduce this issue on a more mainstream
platform (e.g. Linux + gfortran), please let me know.
Since you are using ifort, Open MPI was built with the Fortran 2008
bindings, so you can replace
include 'mpif.h'
with
use mpi_f08
and who kno
Thanks for a quick response.
I will try building OMPI as suggested.
On the integration with unsupported distribution systems, we cannot use a
script-based approach, because these machines often don’t have ssh permission in
customer environments. I will explore the path of writing an orte component. A
Diego,
As all your processes are started under the umbrella of a single mpirun,
they have a communicator in common, the MPI_COMM_WORLD.
One possible implementation, using MPI_Comm_split, would be the following:
MPI_Comm small_comm, leader_comm;
/* Create small_comm on all processes */
/* Now us
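A rough Fortran rendering of the same idea (the group size of 4 and all names here are illustrative assumptions, not part of the original outline):

  program split_sketch
    use mpi
    implicit none
    integer :: ierr, world_rank, color
    integer :: small_comm, leader_comm

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, world_rank, ierr)

    ! Create small_comm on all processes: here, groups of 4 consecutive ranks
    ! (the group size is only an example).
    color = world_rank / 4
    call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, world_rank, small_comm, ierr)

    ! Keep only the first rank of each small group in leader_comm; the other
    ! ranks pass MPI_UNDEFINED and receive MPI_COMM_NULL.
    if (mod(world_rank, 4) == 0) then
       color = 0
    else
       color = MPI_UNDEFINED
    end if
    call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, world_rank, leader_comm, ierr)

    ! ... collective work inside small_comm; leaders coordinate via leader_comm ...

    call MPI_FINALIZE(ierr)
  end program split_sketch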
mpirun doesn’t get access to that requirement, nor does it need to do so. SGE
will use the requirement when determining the nodes to allocate. mpirun just
uses the nodes that SGE provides.
What your cmd line does is restrict the entire operation on each node (daemon +
8 procs) to 40GB of memory
Hi,
> On 26.07.2017 at 15:03, Kulshrestha, Vipul wrote:
>
> Thanks for a quick response.
>
> I will try building OMPI as suggested.
>
> On the integration with unsupported distribution systems, we cannot use
> script based approach, because often these machines don’t have ssh permission
> On 26.07.2017 at 15:09, r...@open-mpi.org wrote:
>
> mpirun doesn’t get access to that requirement, nor does it need to do so. SGE
> will use the requirement when determining the nodes to allocate.
m_mem_free appears to come from Univa GE and is not part of the open source
versions. So I ca
Thanks Reuti & RHC for your responses.
My application does not rely on the actual value of m_mem_free; I used
it only as an example. In an open-source SGE environment, we use the mem_free resource.
Now, I understand that SGE will allocate the requested resources (based on qsub
options) and then launch m
Hi Gilles,
Thank you very much for the response!
Unfortunately, I don’t have access to a different system with the issue right
now. As I said, it’s not new; it just keeps creeping up unexpectedly again on
different platforms. What puzzles me is that I’ve encountered the same problem
with low b
Volker,
With mpi_f08, you have to declare
Type(MPI_Comm) :: mpi_comm_global
(I am afk and not 100% sure of the syntax)
A simpler option is to
use mpi
Cheers,
Gilles
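For concreteness, a minimal sketch of the two declarations being contrasted (the variable name mpi_comm_global is taken from the message above; the rest is illustrative):

  program f08_sketch
    use mpi_f08
    implicit none
    ! With the Fortran 2008 bindings, a communicator is a derived type:
    type(MPI_Comm) :: mpi_comm_global
    integer :: ierr

    call MPI_Init(ierr)
    mpi_comm_global = MPI_COMM_WORLD
    call MPI_Finalize(ierr)

    ! With 'use mpi' (or include 'mpif.h'), the communicator would instead be
    ! declared as a plain integer:  integer :: mpi_comm_global
  end program f08_sketch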
Volker Blum wrote:
>Hi Gilles,
>
>Thank you very much for the response!
>
>Unfortunately, I don’t have access to a different
Thanks!
I tried ‘use mpi’, which compiles fine.
Same result as with 'include mpif.h', in that the output is:
* MPI_IN_PLACE does not appear to work as intended.
* Checking whether MPI_ALLREDUCE works at all.
* Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
Hm. Any other thoughts?
Thank
I thought that maybe the underlying allreduce algorithm fails to support
MPI_IN_PLACE correctly, but I can't replicate this on any machine (including
OS X) with any number of processes.
George.
On Wed, Jul 26, 2017 at 10:59 AM, Volker Blum wrote:
> Thanks!
>
> I tried ‘use mpi’, which compiles fi
Did you use Intel Fortran 2017 as well?
(I’m asking because I did see the same issue with a combination of an earlier
Intel Fortran 2017 version and OpenMPI on an Intel/Infiniband Linux HPC machine
… but not Intel Fortran 2016 on the same machine. Perhaps I can revive my
access to that combinat
No, I don't have the Intel compilers (or didn't use them where they were
available). I used clang and gfortran. I can try on a Linux box with the Intel
2017 compilers.
George.
On Wed, Jul 26, 2017 at 11:59 AM, Volker Blum wrote:
> Did you use Intel Fortran 2017 as well?
>
> (I’m asking because I did se
Thanks! Yes, trying with Intel 2017 would be very nice.
> On Jul 26, 2017, at 6:12 PM, George Bosilca wrote:
>
> No, I don't have the Intel compilers (or didn't use them where they were
> available). I used clang and gfortran. I can try on a Linux box with the Intel
> 2017 compilers.
>
> George.
>
>
OS: Centos 7
Infiniband Packages from OS repos
Mellanox HCA
Compiled openmpi 1.10.7 on centos7 with the following config
./configure --prefix=/usr/local/software/OpenMPI/openmpi-1.10.7
--with-tm=/opt/pbs --with-verbs
Snippet from config.log seems to indicate that the infiniband header files we
Volker,
Unfortunately, I can't replicate this with icc. I tried on an x86_64 box with
the Intel compiler chain 17.0.4 20170411, to no avail. I also tested the
3.0.0-rc1 tarball and the current master, and your test completes without
errors in all cases.
Once you figure out an environment where you can consis
Oh no, that's not right. Mpirun launches daemons using qrsh and those daemons
spawn the app's procs. SGE has no visibility of the app at all
> On Jul 26, 2017, at 7:46 AM, Kulshrestha, Vipul
> wrote:
>
> Thanks Reuti & RHC for your responses.
>
> My application does not rel
Folks,
I am able to reproduce the issue on OS X (Sierra) with stock gcc (aka
clang) and ifort 17.0.4.
I will investigate this now.
Cheers,
Gilles
On 7/27/2017 9:28 AM, George Bosilca wrote:
Volker,
Unfortunately, I can't replicate with icc. I tried on a x86_64 box
with Intel compil
Does this happen with ifort but not other Fortran compilers? If so, write
me off-list if there's a need to report a compiler issue.
Jeff
On Wed, Jul 26, 2017 at 6:59 PM Gilles Gouaillardet
wrote:
> Folks,
>
>
> I am able to reproduce the issue on OS X (Sierra) with stock gcc (aka
> clang) and i
Thanks! That’s great. Sounds like the exact combination I have here.
Thanks also to George. Sorry that the test did not trigger on a more standard
platform - that would have simplified things.
Best wishes
Volker
> On Jul 27, 2017, at 3:56 AM, Gilles Gouaillardet wrote:
>
> Folks,
>
>
> I am
Are you sure your InfiniBand network is up and running? What kind of
output do you get if you run the command 'ibv_devinfo'?
Sincerely,
Rusty Dekema
On Wed, Jul 26, 2017 at 2:40 PM, Sajesh Singh wrote:
> OS: Centos 7
>
> Infiniband Packages from OS repos
>
> Mellanox HCA
>
>
>
>
>
> Compiled ope
Thanks, Jeff, for your offer; I will contact you off-list later.
I tried gcc+gfortran and gcc+ifort on both Linux and OS X;
so far, only gcc+ifort on OS X is failing.
I will try icc+ifort on OS X now.
Short story: MPI_IN_PLACE is not recognized as such by the ompi
Fortran wrapper, and I do not
Thanks!
If you wish, please also keep me posted.
Best wishes
Volker
> On Jul 27, 2017, at 7:50 AM, Gilles Gouaillardet
> wrote:
>
> Thanks, Jeff, for your offer; I will contact you off-list later.
>
>
> I tried gcc+gfortran and gcc+ifort on both Linux and OS X;
> so far, only gcc+ifort on OS