Re: [OMPI users] How do you change ports used? [EXT]

2021-03-18 Thread Sendu Bala via users
Thanks, that made it work when I was running “true” as a test, but then my real MPI app failed with:
[node-5-8-2][[48139,1],0][btl_tcp_component.c:966:mca_btl_tcp_component_create_listen] bind() failed: no port available in the range [46107..46139] …

Re: [OMPI users] How do you change ports used? [EXT]

2021-03-18 Thread Ralph Castain via users
Hard to say. Unless there is some reason not to, why not make it large enough that it isn't an issue? You may have to experiment a bit, as there is nothing to guarantee that other processes aren't occupying those regions. On Mar 18, 2021, at 2:13 AM, Sendu Bala <s...@sanger.ac.uk> wrote: Thanks, …
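Ralph's suggestion of a much larger range can be sketched as below, assuming the `btl_tcp_port_min_v4` and `btl_tcp_port_range_v4` MCA parameters that control the TCP BTL's listening ports in OMPI 4.x; the app name and rank count are placeholders, and the command is only echoed here rather than executed:

```shell
# Give the TCP BTL a wide window: 900 ports starting at 46100, instead of
# the 33-port range [46107..46139] that the failing job exhausted.
MIN_PORT=46100
RANGE=900

# Hypothetical launch line (echoed, not run); ./my_mpi_app is a placeholder.
CMD="mpirun --mca btl_tcp_port_min_v4 ${MIN_PORT} --mca btl_tcp_port_range_v4 ${RANGE} -np 33 ./my_mpi_app"
echo "$CMD"
```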

Re: [OMPI users] How do you change ports used? [EXT]

2021-03-18 Thread Sendu Bala via users
Yes, that’s the trick. I’m going to have to check port usage on all hosts and pick suitable ranges just-in-time, and hope I don’t hit a race condition with other users of the cluster. Does mpiexec not have this kind of functionality built in? When I use it with no port options set (pure defaul…
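The just-in-time range picking described above can be sketched as a small scan for a contiguous block of free ports. Here the list of used ports is a hard-coded sample standing in for real output from something like `ss -tan`; in practice you would also have to repeat this per host and accept the race window Sendu mentions:

```shell
# Sketch: find the first run of COUNT consecutive free ports in [LO, HI).
# "used" is a hard-coded sample standing in for live ss/netstat output.
used="46107 46110 46200"
LO=46100 HI=46300 COUNT=32

start=$LO   # candidate start of a free window
best=""
p=$LO
while [ $p -lt $HI ]; do
    case " $used " in
        *" $p "*) start=$((p + 1)) ;;   # port busy: restart the window after it
        *) if [ $((p - start + 1)) -ge $COUNT ]; then best=$start; break; fi ;;
    esac
    p=$((p + 1))
done
# The free window is [best .. best+COUNT-1].
echo "free range starts at: $best"
```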

Re: [OMPI users] How do you change ports used? [EXT]

2021-03-18 Thread Ralph Castain via users
Hmmm... then you have something else going on. By default, OMPI will ask the local OS for an available port and use it. You only need to specify ports when working through a firewall. Do you have firewalls on this cluster? On Mar 18, 2021, at 8:55 AM, Sendu Bala <s...@sanger.ac.uk> wrote: …

[OMPI users] Help with MPI and macOS Firewall

2021-03-18 Thread Matt Thompson via users
All, This isn't specifically an Open MPI issue, but as that is the MPI stack I use on my laptop, I'm hoping someone here might have a possible solution. (I am pretty sure something like MPICH would trigger this as well.) Namely, my employer recently did something somewhere so that now *any* MPI a…

Re: [OMPI users] [External] Re: Error intialising an OpenFabrics device.

2021-03-18 Thread Prentice Bisbal via users
If you disable it with -mtl ^openib the warning goes away. And the performance of openib goes away right along with it. Prentice On 3/13/21 5:43 PM, Heinz, Michael William via users wrote: I’ve begun getting this annoyingly generic warning, too. It appears to be coming from the openib provi…

Re: [OMPI users] [External] Help with MPI and macOS Firewall

2021-03-18 Thread Prentice Bisbal via users
OpenMPI should only be using shared memory on the local host automatically, but maybe you need to force it. I think mpirun -mca btl self,vader ... should do that. Or you can exclude tcp instead: mpirun -mca btl ^tcp. See https://www.open-mpi.org/faq/?category=sm for more info. Prentice On …
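The two selection styles above differ in meaning: an include list names exactly the components to use, while a leading ^ excludes the named components from the default set, and the two styles cannot be mixed in one value. A minimal sketch, with the launch lines only echoed and the app name a placeholder:

```shell
# Include list: use exactly these btl components (shared memory + loopback).
INCLUDE="self,vader"
# Exclude list: use the default component set minus tcp. The ^ must come
# first and applies to the whole value; include and exclude cannot be mixed.
EXCLUDE="^tcp"

# Hypothetical launch lines (echoed, not executed):
echo "mpirun -mca btl $INCLUDE -np 4 ./my_mpi_app"
echo "mpirun -mca btl $EXCLUDE -np 4 ./my_mpi_app"
```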

Re: [OMPI users] [External] Re: Error intialising an OpenFabrics device.

2021-03-18 Thread Cunningham, Brendan via users
I believe this is an expected warning in the OMPI 4.0.x series, as the openib BTL is being deprecated (https://www.open-mpi.org/software/ompi/major-changes.php). Try adding: --mca btl_openib_warn_no_device_params_found 0 --mca btl_openib_allow_ib true to suppress this warning. This issue (htt…

Re: [OMPI users] [External] Re: Error intialising an OpenFabrics device.

2021-03-18 Thread Cunningham, Brendan via users
(sorry, formatting got munged) Try adding: --mca btl_openib_warn_no_device_params_found 0 --mca btl_openib_allow_ib true to your mpirun line to suppress this warning. …
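A sketch of a full launch line with those two settings applied; the application name and rank count are placeholders, and the command is only echoed here rather than executed:

```shell
# btl_openib_warn_no_device_params_found=0 silences the generic warning;
# btl_openib_allow_ib=true keeps openib usable while it is deprecated in 4.0.x.
CMD="mpirun --mca btl_openib_warn_no_device_params_found 0 --mca btl_openib_allow_ib true -np 4 ./my_mpi_app"
echo "$CMD"
```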

Re: [OMPI users] [External] Help with MPI and macOS Firewall

2021-03-18 Thread Matt Thompson via users
Prentice, Ooh. The first one seems to work. The second one apparently is not liked by zsh and I had to do:
❯ mpirun -mca btl '^tcp' -np 6 ./helloWorld.mpi3.exe
Compiler Version: GCC version 10.2.0
MPI Version: 3.1
MPI Library Version: Open MPI v4.1.0, package: Open MPI mathomp4@gs6101-parcel.local …

Re: [OMPI users] [External] Help with MPI and macOS Firewall

2021-03-18 Thread Gilles Gouaillardet via users
Matt, you can either
mpirun --mca btl self,vader ...
or
export OMPI_MCA_btl=self,vader
mpirun ...
You may also add
btl = self,vader
in your /etc/openmpi-mca-params.conf and then simply
mpirun ...
Cheers, Gilles On Fri, Mar 19, 2021 at 5:44 AM Matt Thompson via users wrote: …
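The three equivalent methods Gilles lists can be sketched as below; the app name is a placeholder, the launch lines are only echoed, and a temp file stands in for /etc/openmpi-mca-params.conf (which normally needs root to edit):

```shell
# 1. Per-invocation: pass the setting on the mpirun command line.
echo "mpirun --mca btl self,vader -np 4 ./my_mpi_app"

# 2. Per-shell: export the equivalent environment variable; every mpirun
#    in this shell then picks it up.
export OMPI_MCA_btl=self,vader
echo "mpirun -np 4 ./my_mpi_app"

# 3. System-wide: put the setting in the MCA params file. A temp file
#    stands in for /etc/openmpi-mca-params.conf here.
conf=$(mktemp)
echo "btl = self,vader" > "$conf"
cat "$conf"
rm -f "$conf"
```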