them. That’s as far as I got … sorry about
the experience.
Best wishes
Volker
Volker Blum
Vinik Associate Professor, Duke MEMS & Chemistry
https://aims.pratt.duke.edu
https://bsky.app/profile/aimsduke.bsky.social
> On Nov 6, 2023, at 4:25 AM, Christophe Peyret via users
> wrote:
Apple Xcode. mpif90 and mpicc work, and the situation for mpic++ did improve
with Xcode 5.1 beta 2, which is why I suspect this is an Xcode-related
problem. However, I now need to find more time …)
Thanks again & best wishes
Volker
Volker Blum
Associate Professor, Duke MEMS & Chemistry
tage) with OpenMPI
4.1.6.
My guess is that this is related to some earlier pmix-related issues that can
be found via Google, but I wanted to report it.
Thank you!
Best wishes
Volker
Volker Blum
Associate Professor, Duke MEMS & Chemistry
https://aims.pratt.duke.edu
https://bsky.app/profile/aimsduke.bsky.social
:53 PM, Volker Blum via users
> wrote:
>
> Thank you! A patch would be great.
>
> I seem to recall that the patch in that ticket did not solve the issue for me
> about a year ago (was part of another discussion on OMPI Users). I did not
> try this time around (time reasons …)
(in the patch_autotools_output subroutine), so I guess that could be enhanced
to support Intel Fortran on OSX.
I am confident a Pull Request that does fix this issue will be considered for
inclusion in future Open MPI releases.
Cheers,
Gilles
On Fri, Sep 16, 2022 at 11:
Hi all,
This issue here:
https://github.com/open-mpi/ompi/issues/7615
is, unfortunately, still current.
I understand that within OpenMPI there is a sense that this is Intel's
problem, but I'm not sure it is. Is it possible to address this in the
configure script in the actual OpenMPI distribution?
Volker
> On Jul 18, 2021, at 4:29 PM, Volker Blum via users
> wrote:
>
> Hi,
>
> This is a quick note regarding FAQ 41/42 on building OpenMPI:
>
> https://www.open-mpi.org/faq/?category=building#libevent-or-hwloc-errors-when-linking-fortran
Hi,
This is a quick note regarding FAQ 41/42 on building OpenMPI:
https://www.open-mpi.org/faq/?category=building#libevent-or-hwloc-errors-when-linking-fortran
I was glad to find the information. However, since this is a somewhat
predictable issue … would it make sense to address this in configure?
Update:
> I have yet to run a full regression test on our actual code to ensure that
> there are no other side effects. I don’t expect any, though.
Indeed. The fix resolves all regressions that I had observed.
Best wishes
Volker
> On Jul 27, 2017, at 11:47 AM, Volker Blum wrote:
>
> …
> -lmpi_usempi_ignore_tkr'
>
>
> Cheers,
>
> Gilles
>
> On 7/27/2017 3:28 PM, Volker Blum wrote:
>> Thanks!
>>
>> If you wish, please also keep me posted.
>>
>> Best wishes
>> Volker
>>
>>> On Jul 27, 2017, at 7:50 AM, Gilles …
>>>>
>>>> Once you figure out an environment where you can consistently replicate
>>>> the issue, I would suggest attaching to the processes and:
>>>> - make sure the MPI_IN_PLACE as seen through the Fortran layer matches
>>>> what the C …
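A minimal sketch of that first check, assuming Open MPI's 'use mpi' module and
the non-standard but widely supported LOC() extension (program name, variable
names, and output format are illustrative): print the sentinel's address as the
Fortran layer sees it, so it can be compared with the address the C layer uses,
for example from a debugger attached to the same process.

program check_in_place_address
  use mpi
  implicit none
  integer :: mpierr, my_task
  integer(kind=8) :: addr

  call MPI_INIT(mpierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, my_task, mpierr)

  ! LOC() returns the address of its argument (a common compiler extension).
  addr = loc(MPI_IN_PLACE)
  write(*,'(a,i0,a,z16.16)') 'task ', my_task, &
       ': Fortran-side MPI_IN_PLACE address = 0x', addr

  call MPI_FINALIZE(mpierr)
end program check_in_place_address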
MPI
>>
>> I have a "Fortran 101" level question. When you pass an array a(:) as
>> argument, what exactly gets passed via the Fortran interface to the
>> corresponding C function ?
>>
>> George.
>>
>> On Wed, Jul 26, 2017
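One way to illustrate what this question is getting at, assuming the common
LOC() extension and purely illustrative names: a contiguous a(:) hands the
callee the base address of a, while a non-contiguous section must be passed
through a compiler-generated contiguous temporary, so the callee sees a
different address.

program what_gets_passed
  implicit none
  double precision :: a(6)

  a = 0.d0
  write(*,'(a,z16.16)') 'address of a(1) in the caller: 0x', loc(a(1))
  call show_address(a(:))      ! contiguous: the callee sees the address of a(1)
  call show_address(a(1:6:2))  ! strided: the callee sees a temporary copy

contains

  subroutine show_address(x)
    double precision :: x(*)
    write(*,'(a,z16.16)') 'address seen by the callee:    0x', loc(x)
  end subroutine show_address

end program what_gets_passed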
> …compilers.
>
> George.
>
>
>
> On Wed, Jul 26, 2017 at 11:59 AM, Volker Blum wrote:
> Did you use Intel Fortran 2017 as well?
>
> (I’m asking because I did see the same issue with a combination of an earlier
> Intel Fortran 2017 version and OpenMPI on an Intel/Infiniband
> …processes.
>
> George.
>
>
>
> On Wed, Jul 26, 2017 at 10:59 AM, Volker Blum wrote:
> Thanks!
>
> I tried ‘use mpi’, which compiles fine.
>
> Same result as with ‘include mpif.h’, in that the output is
>
> * MPI_IN_PLACE does not appear to work …
to
>
> use mpi
>
> Cheers,
>
> Gilles
>
> Volker Blum wrote:
>> Hi Gilles,
>>
>> Thank you very much for the response!
>>
>> Unfortunately, I don’t have access to a different system with the issue
>> right now. As I said, it’s not
> …include 'mpif.h'
> with
> use mpi_f08
> and who knows, that might solve your issue
>
>
> Cheers,
>
> Gilles
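To make the suggestion concrete, a minimal sketch of the same kind of in-place
reduction written against mpi_f08 (subroutine and variable names are
illustrative, not taken from the thread):

subroutine allreduce_in_place_f08(test_data, n_data)
  use mpi_f08
  implicit none
  integer, intent(in) :: n_data
  double precision, intent(inout) :: test_data(n_data)
  integer :: mpierr

  ! With mpi_f08, the communicator, datatype and operation are derived types,
  ! and the buffers are passed through explicit TYPE(*) interfaces.
  call MPI_Allreduce(MPI_IN_PLACE, test_data, n_data, &
                     MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_WORLD, mpierr)
end subroutine allreduce_in_place_f08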
>
> On Wed, Jul 26, 2017 at 5:22 PM, Volker Blum wrote:
>> Dear Gilles,
>>
>> Thank you very much for the fast answer.
>>
>>
unable to reproduce this issue on linux
>
> can you please post your full configure command line, your GNU
> compiler version and the full test program?
>
> also, how many MPI tasks are you running?
>
> Cheers,
>
> Gilles
>
> On Wed, Jul 26, 2017 at 4:25 PM, Volker Blum wrote:
call MPI_ALLREDUCE(MPI_IN_PLACE, &
                   test_data(:), &
                   n_data, &
                   MPI_DOUBLE_PRECISION, &
                   MPI_SUM, &
                   mpi_comm_global, &
                   mpierr)
! The value of all entries of test_data should now be 1.d0 on all MPI tasks.
! If that is not the case, then the MPI_IN_PLACE flag may not be working
! correctly.
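For reference, a self-contained sketch of the kind of test described above. It
keeps the names test_data, n_data and mpierr from the fragment, substitutes
MPI_COMM_WORLD for mpi_comm_global, and picks an arbitrary array size; each
task contributes 1/n_tasks, so the in-place sum should come out to 1.d0
everywhere.

program test_mpi_in_place
  use mpi
  implicit none

  integer, parameter :: n_data = 10
  double precision   :: test_data(n_data)
  integer            :: mpierr, n_tasks, my_task

  call MPI_INIT(mpierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, n_tasks, mpierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, my_task, mpierr)

  ! Each task contributes 1/n_tasks, so the in-place sum should be 1.d0.
  test_data(:) = 1.d0 / dble(n_tasks)

  call MPI_ALLREDUCE(MPI_IN_PLACE, &
                     test_data(:), &
                     n_data, &
                     MPI_DOUBLE_PRECISION, &
                     MPI_SUM, &
                     MPI_COMM_WORLD, &
                     mpierr)

  if (my_task == 0) then
     if (all(abs(test_data - 1.d0) < 1.d-12)) then
        write(*,*) '* MPI_IN_PLACE appears to work.'
     else
        write(*,*) '* MPI_IN_PLACE does not appear to work.'
     end if
  end if

  call MPI_FINALIZE(mpierr)
end program test_mpi_in_place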