Re: [OMPI users] OpenMPI 5.0.0 & Intel OneAPI 2023.2.0 on MacOS 14.0:

2023-11-06 Thread Matt Thompson via users
I have built Open MPI 5 (well, 5.0.0rc12) with Intel oneAPI under Rosetta 2 with: $ lt_cv_ld_force_load=no ../configure --disable-wrapper-rpath --disable-wrapper-runpath \ CC=clang CXX=clang++ FC=ifort \ --with-hwloc=internal --with-libevent=internal --with-pmix=internal I'm fairly sure t…
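The configure invocation quoted above, reflowed as a sketch (the relative ../configure path implies an out-of-tree build directory, as in the message):

    # Build Open MPI 5.0.0rc12 with Apple clang + Intel ifort under Rosetta 2.
    # lt_cv_ld_force_load=no works around a libtool -force_load probe that
    # misfires on macOS (see https://github.com/open-mpi/ompi/issues/7615).
    $ lt_cv_ld_force_load=no ../configure \
        --disable-wrapper-rpath --disable-wrapper-runpath \
        CC=clang CXX=clang++ FC=ifort \
        --with-hwloc=internal --with-libevent=internal --with-pmix=internal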

Re: [OMPI users] OpenMPI 5.0.0 & Intel OneAPI 2023.2.0 on MacOS 14.0:

2023-10-28 Thread Matt Thompson via users
On my Mac I build Open MPI 5 with (among other flags): --with-hwloc=internal --with-libevent=internal --with-pmix=internal In my case, I should have had libevent through brew, but it didn't seem to see it. But then I figured I might as well let Open MPI build its own for convenience. Matt On Fr…
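A sketch of the full command line this implies; only the three --with-*=internal flags come from the message, and the prefix path is hypothetical:

    # Have Open MPI build its bundled hwloc, libevent, and PMIx instead of
    # hunting for (or missing) the Homebrew copies.
    $ ./configure --prefix=$HOME/installs/openmpi \
        --with-hwloc=internal --with-libevent=internal --with-pmix=internal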

[OMPI users] Building Open MPI without zlib: what might go wrong/different?

2022-01-31 Thread Matt Thompson via users
Open MPI List, Recently, in trying to build some libraries with NVHPC + Open MPI, I hit an error building HDF5 where it died at configure time saying that the zlib that Open MPI wanted to link to (my system one) was incompatible with the zlib I built in my libraries leading up to HDF5. So, in the e…
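One plausible way around a clash like that, sketched under the assumption that HDF5's autotools build is in use (the zlib path is hypothetical): point HDF5 at a single zlib explicitly rather than letting two different ones meet in the same link line.

    # HDF5's configure accepts --with-zlib=DIR; using the same zlib that
    # Open MPI was linked against keeps exactly one zlib in the final link.
    $ ./configure CC=mpicc --enable-parallel --with-zlib=/usr/local/zlib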

Re: [OMPI users] NAG Fortran 2018 bindings with Open MPI 4.1.2

2021-12-30 Thread Matt Thompson via users
…any given compiler supported enough F2008 for some / all of the mpi_f08 module. That's why the configure tests are... complicated. -- Jeff Squyres, jsquy...@cisco.com

Re: [OMPI users] Mac OS + openmpi-4.1.2 + intel oneapi

2021-12-30 Thread Matt Thompson via users

Re: [OMPI users] Mac OS + openmpi-4.1.2 + intel oneapi

2021-12-30 Thread Matt Thompson via users
Oh yeah. I know that error. This is due to a long-standing issue with Intel on macOS and Open MPI: https://github.com/open-mpi/ompi/issues/7615 You need to configure Open MPI with "lt_cv_ld_force_load=no" at the beginning. (You can see an example at the top of my modulefile here: https://github.c…
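For reference, a minimal sketch of where that variable goes; the compiler choices here are assumptions carried over from other messages in this thread:

    # The libtool cache variable must be set when configure runs, so the
    # -force_load probe is skipped entirely.
    $ lt_cv_ld_force_load=no ./configure CC=clang CXX=clang++ FC=ifort ...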

Re: [OMPI users] NAG Fortran 2018 bindings with Open MPI 4.1.2

2021-12-23 Thread Matt Thompson via users
…Regards, On Thu, 23 Dec 2021, 13:18 Matt Thompson via users <users@lists.open-mpi.org> wrote: > Oh. Yes, I am on macOS. The Linux cluster I work on doesn't have NAG 7.1 on it... mainly because I haven't asked for it. Until NAG fixes the bug we are…

Re: [OMPI users] NAG Fortran 2018 bindings with Open MPI 4.1.2

2021-12-23 Thread Matt Thompson via users
Oh. Yes, I am on macOS. The Linux cluster I work on doesn't have NAG 7.1 on it... mainly because I haven't asked for it. Until NAG fixes the bug we are seeing, I figured why bother the admins. Still, it does *seem* like it should work. I might ask NAG support about it. On Wed, Dec 22, 2021 at 6:28 P…

Re: [OMPI users] NAG Fortran 2018 bindings with Open MPI 4.1.2

2021-12-22 Thread Matt Thompson via users
All, When I build Open MPI with NAG, I have to pass in: FCFLAGS="-mismatch_all -fpp" The -mismatch_all flag tells nagfor to downgrade some errors with interfaces to warnings: -mismatch_all Further downgrade consistency checking of procedure argument lists so that calls to routines…
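A sketch of the full configure line this implies; everything besides FC and FCFLAGS is illustrative:

    # -mismatch_all downgrades procedure-argument consistency errors to
    # warnings; -fpp preprocesses the Fortran source with fpp first.
    $ ./configure FC=nagfor FCFLAGS="-mismatch_all -fpp" CC=clang CXX=clang++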

Re: [OMPI users] Cannot build working Open MPI 4.1.1 with NAG Fortran/clang on macOS (but I could before!)

2021-10-29 Thread Matt Thompson via users
> If you want to use shared libraries, I would try to run configure, and then edit the generated libtool file: look for a line like CC="nagfor" and then edit the next line…

Re: [OMPI users] Cannot build working Open MPI 4.1.1 with NAG Fortran/clang on macOS (but I could before!)

2021-10-29 Thread Matt Thompson via users
…run configure, and then edit the generated libtool file: look for a line like CC="nagfor" and then edit the next line: # Commands used to build a shared archive. archive_cmds="\$CC -dynamiclib \$all…

Re: [OMPI users] Cannot build working Open MPI 4.1.1 with NAG Fortran/clang on macOS (but I could before!)

2021-10-29 Thread Matt Thompson via users
…build a shared archive. archive_cmds="\$CC -dynamiclib \$allow_undef ..." simply manually remove "-dynamiclib" here and see if it helps. Cheers, Gilles On Fri, Oct 29, 2021 at 12:30 AM Matt Thompson via users <users@lists.open-…
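The edit Gilles is describing, as a before/after sketch plus a blunt one-liner; this assumes -dynamiclib appears only in that archive_cmds line of the generated libtool script:

    # In ./libtool, under the Fortran (nagfor) tag:
    #   before: archive_cmds="\$CC -dynamiclib \$allow_undef ..."
    #   after:  archive_cmds="\$CC \$allow_undef ..."
    $ sed -i.bak 's/ -dynamiclib//' libtool   # keeps a libtool.bak backup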

[OMPI users] Cannot build working Open MPI 4.1.1 with NAG Fortran/clang on macOS (but I could before!)

2021-10-28 Thread Matt Thompson via users
Dear Open MPI Gurus, This is a...confusing one. For some reason, I cannot build a working Open MPI with NAG 7.0.7062 and clang on my MacBook running macOS 11.6.1. The thing is, I could do this back in July with NAG 7.0.7048. So my fear is that something changed with macOS, or clang/xcode, or somet…

Re: [OMPI users] [External] Help with MPI and macOS Firewall

2021-03-19 Thread Matt Thompson via users
…and then simply mpirun ... Cheers, Gilles On Fri, Mar 19, 2021 at 5:44 AM Matt Thompson via users wrote: > Prentice, > Ooh. The first one seems to work. The second one apparently is not liked by zsh and I had to do:…

Re: [OMPI users] [External] Help with MPI and macOS Firewall

2021-03-18 Thread Matt Thompson via users
See https://www.open-mpi.org/faq/?category=sm for more info. Prentice On 3/18/21 12:28 PM, Matt Thompson via users wrote: > All, This isn't specifically an Open MPI issue, but as that is the MPI stack I use on my laptop, I'm ho…

[OMPI users] Help with MPI and macOS Firewall

2021-03-18 Thread Matt Thompson via users
All, This isn't specifically an Open MPI issue, but as that is the MPI stack I use on my laptop, I'm hoping someone here might have a possible solution. (I am pretty sure something like MPICH would trigger this as well.) Namely, my employer recently did something somewhere so that now *any* MPI a…
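One common workaround for firewall pop-ups on a laptop, offered as a hedged sketch rather than the fix adopted later in this thread: keep single-node runs off TCP entirely, so no listening sockets are opened for the macOS application firewall to flag.

    # Restrict the point-to-point layer to shared memory + self; vader is
    # the shared-memory BTL in Open MPI 4.x. ./my_app is a placeholder.
    $ mpirun --mca btl self,vader -np 4 ./my_app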

Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI

2020-02-25 Thread Matt Thompson via users
…having trouble after fixing the above you may need to check yama on the host. You can check with "sysctl kernel.yama.ptrace_scope"; if it returns a value other than 0 you may need to disable it with "sysctl -w kernel.yama.ptrace_scope=0". Adam…
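Adam's yama commands, spelled out (these are Linux-only sysctls; writing the value requires root):

    # Read the current value; 0 means ptrace/CMA between cooperating
    # processes is unrestricted.
    $ sysctl kernel.yama.ptrace_scope
    # If it is non-zero, relax the restriction:
    $ sudo sysctl -w kernel.yama.ptrace_scope=0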

Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI

2020-02-24 Thread Matt Thompson via users
…Feb 24, 2020, at 2:59 PM, Gabriel, Edgar via users <users@lists.open-mpi.org> wrote: > I am not an expert for the one-sided code in Open MPI, I wanted to comment briefly on the potential MPI-IO related item. As far as I can see, the error message…

Re: [OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI

2020-02-24 Thread Matt Thompson via users
On Mon, Feb 24, 2020 at 4:57 PM Gabriel, Edgar wrote: > I am not an expert for the one-sided code in Open MPI, I wanted to comment briefly on the potential MPI-IO related item. As far as I can see, the error message "Read -1, expected 48, errno = 1" does not stem from MPI I/O, at…
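"Read -1, expected 48, errno = 1" with errno 1 (EPERM) is the usual signature of process_vm_readv being blocked, i.e. the vader BTL's CMA single-copy path failing, which is common in unprivileged containers such as CI runners. A hedged workaround, not necessarily this thread's final resolution:

    # Disable the CMA single-copy mechanism in the shared-memory BTL;
    # slightly slower, but avoids the EPERM from process_vm_readv.
    # ./osc_test stands in for the one-sided test binary.
    $ mpirun --mca btl_vader_single_copy_mechanism none -np 2 ./osc_test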

[OMPI users] Help with One-Sided Communication: Works in Intel MPI, Fails in Open MPI

2020-02-24 Thread Matt Thompson via users
All, My guess is this is a "I built Open MPI incorrectly" sort of issue, but I'm not sure how to fix it. Namely, I'm currently trying to get an MPI project's CI working on CircleCI, using Open MPI to run some unit tests (on a single node, so I need some oversubscription). I can build everything just fin…
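For the single-node CI case, oversubscription is spelled --oversubscribe on the mpirun command line (the rank count and binary name here are made up):

    # Allow more ranks than the runner's detected cores, e.g. 4 ranks on a
    # 2-vCPU CircleCI executor.
    $ mpirun --oversubscribe -np 4 ./unit_tests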