Hi all,
there seems to be a host-order-dependent timing issue. The issue occurs
when a set of processes is placed on the same node. The job's mpirun exits
at MPI_Init() with:
num local peers failed
--> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
Non-MPI applications launch just fine.
Hi again, and thank you to Florent for answering my questions last time. The
answers were very helpful!
We have some strange errors occurring randomly when running MPI jobs. We are
using Open MPI 4.0.3 with UCX and GPUDirect RDMA, and are running multi-node
applications under SLURM on a cluster.
Thanks for the hint.
Regards,
Mahmood
On Thu, Apr 18, 2019 at 2:47 AM Reuti wrote:
> Hi,
>
> On 17.04.2019 at 11:07, Mahmood Naderan wrote:
>
> > Hi,
> > After successful installation of v4 in a custom location, I see some
> > errors that the default installation (v2) doesn't.
>
> Did you also recompile your application with this version of Open MPI?
Hi,
On 17.04.2019 at 11:07, Mahmood Naderan wrote:
> Hi,
> After successful installation of v4 in a custom location, I see some errors
> that the default installation (v2) doesn't.
Did you also recompile your application with this version of Open MPI?
-- Reuti
> $ /share/apps/softwares/open
Hi,
After successful installation of v4 in a custom location, I see some errors
that the default installation (v2) doesn't.
$ /share/apps/softwares/openmpi-4.0.1/bin/mpirun --version
mpirun (Open MPI) 4.0.1
Report bugs to http://www.open-mpi.org/community/help/
$ /share/apps/softwares/openmpi-4.0
Folks,
for the record, this was investigated off-list
- the root cause was bad permissions on the /.../lib/openmpi directory
(no components could be found)
- then it was found that TM support was not built in, so mpirun did not
behave as expected under Torque/PBS
Cheers,
Gilles
On 5/15/2
On 07/03/2016 18:58, Marco Lubosch wrote:
Thanks Marco,
I reinstalled Cygwin and OMPI like 10 times. I had an issue with
gcc (MinGW) because it was preinstalled under Windows. I then had to
remove it and reinstall gcc under Cygwin and got it working, but as I
said, only compiling plain C code with
Thanks Marco,
I reinstalled Cygwin and OMPI like 10 times. I had an issue with
gcc (MinGW) because it was preinstalled under Windows. I then had to
remove it and reinstall gcc under Cygwin and got it working, but as I
said, only compiling plain C code with "mpicc". I also disabled Windows
Firewall
On 06/03/2016 10:06, Marco Lubosch wrote:
Hello guys,
I am trying to take my first steps with Open MPI and I finally got it to
work on Cygwin64 (Windows 7 64-bit).
I am able to compile plain C code without any issues via "mpicc ..." but
when I try to initialize MPI the program gets stuck within
"MPI
Hello guys,
I am trying to take my first steps with Open MPI and I finally got it to
work on Cygwin64 (Windows 7 64-bit).
I am able to compile plain C code without any issues via "mpicc ..." but
when I try to initialize MPI the program gets stuck within
"MPI_INIT" without creating any CPU load. Example:
As Ralph mentioned, the 1.4.x series is very old.
Any chance you can upgrade to 1.8.x?
> On Apr 15, 2015, at 7:12 AM, cristian wrote:
>
> Hello,
>
> I noticed when profiling an application that the MPI_Init()
> function takes a considerable amount of time.
> There is a big d
With an OMPI that old, it’s anyone’s guess - I have no idea.
> On Apr 15, 2015, at 4:12 AM, cristian wrote:
>
> Hello,
>
> I noticed when profiling an application that the MPI_Init()
> function takes a considerable amount of time.
> There is a big difference when running 32 p
Hello,
I noticed when profiling an application that the
MPI_Init() function takes a considerable amount of time.
There is a big difference when running 32 processes over 32 machines and
32 processes over 8 machines (each machine has 8 cores).
These are the results of the profil
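As an aside, MPI_Wtime cannot be used before MPI_Init returns, so timing
MPI_Init itself needs an external clock. A minimal sketch (not the poster's
profiler):

#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);            /* wall clock before init */
    MPI_Init(&argc, &argv);
    gettimeofday(&t1, NULL);            /* wall clock after init  */
    double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_usec - t0.tv_usec) / 1e6;
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: MPI_Init took %.3f s\n", rank, secs);
    MPI_Finalize();
    return 0;
}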
On Oct 28, 2014, at 9:02 AM, maxinator333 wrote:
> It doesn't seem to work. (Switching off the WLAN still works.)
> mpicc mpiinit.c -o mpiinit.exe; time mpirun --mca btl sm,self -n 2
> ./mpiinit.exe
>
> real    0m43.733s
> user    0m0.888s
> sys     0m0.824s
Ah, this must be an ORTE issue, then (i.
It doesn't seem to work. (Switching off the WLAN still works.)
mpicc mpiinit.c -o mpiinit.exe; time mpirun --mca btl sm,self -n 2
./mpiinit.exe
real    0m43.733s
user    0m0.888s
sys     0m0.824s
On 28.10.2014 13:40, Jeff Squyres (jsquyres) wrote:
On Oct 27, 2014, at 1:25 PM, maxinator333 wrote
On Oct 27, 2014, at 1:25 PM, maxinator333 wrote:
> Deactivating my WLAN did indeed do the trick!
> It also seems not to work if a LAN cable is plugged in. It makes no
> difference whether I am correctly connected (to the internet/gateway) or not
> (wrong IP, e.g. a statically assigned IP instead of the required DHCP).
> Ag
Hello,
After compiling and running an MPI program, it seems to hang at
MPI_Init(), but eventually works after a minute or two.
While the problem occurred on my notebook, it did not on my desktop PC.
It can be a timeout on a network interface.
I see a similar issue with wireless ON but no
On 10/27/2014 8:32 AM, maxinator333 wrote:
Hello,
After compiling and running an MPI program, it seems to hang at
MPI_Init(), but eventually works after a minute or two.
While the problem occurred on my notebook, it did not on my desktop PC.
It can be a timeout on a network interface.
I
Hello,
After compiling and running an MPI program, it seems to hang at
MPI_Init(), but eventually works after a minute or two.
While the problem occurred on my notebook, it did not on my desktop PC.
Both run on Windows 7, Cygwin 64-bit, Open MPI version 1.8.3 r32794
(ompi_info), g++ v4.8.3.
On Mar 21, 2013, at 9:52 PM, David A. Boger wrote:
> If I add "-mca oob_tcp_if_exclude cscotun0", then the corresponding address
> for that vpn interface no longer shows up in contact.txt, but the problem
> remains. I also add "-mca btl ^cscotun0 -mca btl_tcp_if_exclude cscotun0"
> with no eff
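Two notes on the parameters tried above: the btl parameter takes component
names, not interface names, so "-mca btl ^cscotun0" has no effect ("^tcp"
would disable the whole TCP BTL); and setting oob_tcp_if_exclude or
btl_tcp_if_exclude replaces Open MPI's built-in default exclusion list, so
the loopback interface normally has to be listed as well. A sketch of the
intended exclusion:

mpirun -n 2 --mca oob_tcp_if_exclude lo,cscotun0 \
            --mca btl_tcp_if_exclude lo,cscotun0 ./mpi_hello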
Ah, yes - that pinpoints the problem. It looks like your vpn is indeed
interfering with connections. I suspect it must be some kind of vpn/Ubuntu
configuration issue, as on my Mac laptop I have the same experience that you
report (i.e., no issue).
I'd suggest googling the Ubuntu site (or just in
Following up on your TCP remark, I found that during the delay, netstat -tnp
shows
tcp    0    1    192.168.1.3:56343    192.168.1.3:47830    SYN_SENT    24191/mpi_hello
and that while the vpn is connected, I am unable to ping 192.168.1.3 (the
machine I am on).
On the other hand, on the
Thanks Ralph. I have a Mac OS X 10.6.8 laptop where I can run
open-mpi 1.2.8 and open-mpi 1.6.4 with the vpn connected without any problem,
even without having to exclude the vpn interface, so you're probably right --
the existence of the vpn interface alone doesn't explain the problem.
Neverthele
The process is hanging trying to open a TCP connection back to mpirun. I would
have thought that excluding the vpn interface would help, but it could be that
there is still some interference from the vpn software itself - as you probably
know, vpn generally tries to restrict connections.
I don'
I am having a problem on my Linux desktop where MPI_Init hangs for
approximately 64 seconds if I have my vpn client connected but runs immediately
if I disconnect the vpn. I've picked through the FAQ and Google but have failed
to come up with a solution.
Some potentially relevant information: I a
> Sent: Tuesday, August 28, 2012 2:40 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_Init
>
> Okay, I fixed this on our trunk - I'll post it for transfer to the 1.7 and
> 1.6 series in their next releases.
>
> Thanks!
>
> On Aug 28, 2012, at 2:27 PM,
...@open-mpi.org] On Behalf Of
Ralph Castain [r...@open-mpi.org]
Sent: Tuesday, August 28, 2012 2:40 PM
To: Open MPI Users
Subject: Re: [OMPI users] MPI_Init
Okay, I fixed this on our trunk - I'll post it for transfer to the 1.7 and 1.6
series in their next releases.
Thanks!
On Aug 28, 201
Okay, I fixed this on our trunk - I'll post it for transfer to the 1.7 and 1.6
series in their next releases.
Thanks!
On Aug 28, 2012, at 2:27 PM, Ralph Castain wrote:
> Oh crud - yes we do. Checking on it...
>
> On Aug 28, 2012, at 2:23 PM, Ralph Castain wrote:
>
>> Glancing at the code, I
Oh crud - yes we do. Checking on it...
On Aug 28, 2012, at 2:23 PM, Ralph Castain wrote:
> Glancing at the code, I don't see anywhere that we trap SIGCHLD outside of
> mpirun and the orte daemons - certainly not inside an MPI app. What version
> of OMPI are you using?
>
> On Aug 28, 2012, at
Glancing at the code, I don't see anywhere that we trap SIGCHLD outside of
mpirun and the orte daemons - certainly not inside an MPI app. What version of
OMPI are you using?
On Aug 28, 2012, at 2:06 PM, Tony Raymond wrote:
> Hi,
>
> I have an application that uses openMPI and creates some chi
Hi,
I have an application that uses Open MPI and creates some child processes using
fork(). I've been trying to catch SIGCHLD in order to check the exit status of
these processes so that the program will exit if a child errors out.
I've found out that if I set the SIGCHLD handler before calling
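The handler being described, installed after MPI_Init, might look like this
(a sketch, not the poster's code; the flag-then-abort policy is illustrative):

#include <mpi.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t child_failed = 0;

/* Reap finished children; only set a flag (async-signal-safe). */
static void sigchld_handler(int sig)
{
    (void)sig;
    int status;
    while (waitpid(-1, &status, WNOHANG) > 0)
        if (WIFEXITED(status) && WEXITSTATUS(status) != 0)
            child_failed = 1;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    struct sigaction sa;
    sa.sa_handler = sigchld_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGCHLD, &sa, NULL);   /* installed after MPI_Init */

    if (fork() == 0)
        _exit(2);                    /* child errors out on purpose */

    sleep(1);                        /* let the child exit and be reaped */
    if (child_failed) {
        fprintf(stderr, "a child errored out, aborting\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Finalize();
    return 0;
}

Note that calling fork() inside an MPI process is only safe with some
transports, which may be related to the behavior discussed in this thread.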
On Jul 7, 2010, at 10:12 AM, Grzegorz Maj wrote:
> The problem was that orted couldn't find ssh or rsh on that machine.
> I've added my installation to PATH and it now works.
> So one question: I will definitely not use MPI_Comm_spawn or any
> related stuff. Do I need this ssh? If not, is there
The problem was that orted couldn't find ssh or rsh on that machine.
I've added my installation to PATH and it now works.
So one question: I will definitely not use MPI_Comm_spawn or any
related stuff. Do I need this ssh? If not, is there any way to tell
orted that it shouldn't be looking for ssh b
Check your PATH and LD_LIBRARY_PATH - it looks like you are picking up a stale
binary for orted and/or stale libraries (perhaps getting the default OMPI
instead of 1.4.2) on the machine where it fails.
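For reference, a quick way to see what is actually being picked up on the
failing machine (a sketch; paths will differ):

$ which orted                     # should point into the 1.4.2 installation
$ ldd `which orted` | grep open   # libopen-rte/libopen-pal from the same tree?
$ echo $PATH
$ echo $LD_LIBRARY_PATH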
On Jul 7, 2010, at 7:44 AM, Grzegorz Maj wrote:
> Hi,
> I was trying to run some MPI processes
Hi,
I was trying to run some MPI processes as singletons. On some of the
machines they crash on MPI_Init. I use exactly the same binaries of my
application and the same installation of openmpi 1.4.2 on two machines
and it works on one of them and fails on the other one. This is the
command and it
On Mar 30, 2010, at 3:15 PM, Shaun Jackman wrote:
> Hi Jeff,
>
> I tested 1.4.2a1r22893, and it does not hang in ompi_free_list_grow.
>
> I hadn't noticed that the 1.4.1 installation I was using was configured
> with --enable-mpi-threads. Could that have been related to this problem?
Yes, very
Hi Jeff,
I tested 1.4.2a1r22893, and it does not hang in ompi_free_list_grow.
I hadn't noticed that the 1.4.1 installation I was using was configured
with --enable-mpi-threads. Could that have been related to this problem?
Cheers,
Shaun
On Mon, 2010-03-29 at 17:00 -0700, Jeff Squyres wrote:
> C
Could you try one of the 1.4.2 nightly tarballs and see if that makes the issue
better?
http://www.open-mpi.org/nightly/v1.4/
On Mar 29, 2010, at 7:47 PM, Shaun Jackman wrote:
> Hi,
>
> On an IA64 platform, MPI_Init never returns. I fired up GDB and it seems
> that ompi_free_list_grow nev
Hi,
On an IA64 platform, MPI_Init never returns. I fired up GDB and it seems
that ompi_free_list_grow never returns. My test program does nothing but
call MPI_Init. Here's the backtrace:
(gdb) bt
#0 0x20075620 in ompi_free_list_grow () from
/home/aubjtl/openmpi/lib/libmpi.so.0
#1 0x200
Using FUNNELED will make your code more portable in the long run,
as it is guaranteed by the MPI standard. Using SINGLE, i.e. MPI_Init,
works now for a typical OpenMP+MPI program whose MPI calls are outside
OpenMP sections. But as MPI implementations add more performance-
optimized feature
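A typical hybrid initialization following that advice might look like this
(a sketch; build with mpicc -fopenmp):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "need at least MPI_THREAD_FUNNELED\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* MPI outside parallel regions */

    #pragma omp parallel
    {
        /* compute-only threads: no MPI calls in here */
    }

    MPI_Finalize();
    return 0;
}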
Hi all,
I can understand the difference between SINGLE and FUNNELED, and why I
should use FUNNELED now. Thank you!
Yuanyuan
On Mar 4, 2010, at 10:52 AM, Anthony Chan wrote:
- "Yuanyuan ZHANG" wrote:
For an OpenMP/MPI hybrid program, if I only want to make MPI calls
using the main thread, i.e., only in between parallel sections, can I
just use SINGLE or MPI_Init?
If your MPI calls are NOT within OpenMP direc
- "Yuanyuan ZHANG" wrote:
> For an OpenMP/MPI hybrid program, if I only want to make MPI calls
> using the main thread, i.e., only in between parallel sections, can I just
> use SINGLE or MPI_Init?
If your MPI calls are NOT within OpenMP directives, MPI does not even
know you are using thre
On Mar 4, 2010, at 7:36 AM, Richard Treumann wrote:
A call to MPI_Init allows the MPI library to return any level of
thread support it chooses.
This is correct, insofar as the MPI implementation can always choose
any level of thread support.
This MPI 1.1 call does not let the application say
Re: [OMPI users] MPI_Init() and MPI_Init_thread()
Sent by: users-boun...@open-mpi.org
Hi guys,
Thanks for
On Thursday 04 March 2010 01:32:39 Yuanyuan ZHANG wrote:
> Hi guys,
>
> Thanks for your help, but unfortunately I am still not clear.
>
> > You are right Dave, FUNNELED allows the application to have multiple
> > threads but only the main thread calls MPI.
>
> My understanding is that even if I u
Hi guys,
Thanks for your help, but unfortunately I am still not clear.
> You are right Dave, FUNNELED allows the application to have multiple
> threads but only the main thread calls MPI.
My understanding is that even if I use SINGLE or MPI_Init, I can still
have multiple threads if I use OpenMP P
Re: [OMPI users] MPI_Init() and MPI_Init_thread()
Sent by: users-boun...@open-mpi.org
On Mar 3, 2010, at 11:3
On Mar 3, 2010, at 11:35 AM, Richard Treumann wrote:
If the application will make MPI calls from multiple threads and
MPI_INIT_THREAD has returned FUNNELED, the application must be
willing to take the steps that ensure there will never be concurrent
calls to MPI from the threads. The threads
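One common way to provide that guarantee inside a parallel region is to
confine the MPI calls to the main thread and fence the other threads around
them; a sketch:

#include <mpi.h>

int main(int argc, char **argv)
{
    int provided, value = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    #pragma omp parallel shared(value)
    {
        /* ... threaded computation ... */
        #pragma omp barrier   /* all threads done before MPI is called  */
        #pragma omp master    /* only the main thread touches MPI       */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
        #pragma omp barrier   /* no thread reads value until it arrives */
    }

    MPI_Finalize();
    return 0;
}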
Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
users-boun...@open-mpi.org wrote on 03/03/2010 11:59:45 AM:
> Re: [OMPI users] MPI_Init() and MP
I believe that it specifies the *minimum* threading model supported. If I
recall, OMPI is already funnel-safe even in single mode. However, if MPI
calls are made from outside the main thread, you should specify funneled for
portability
Brian
On Mar 2, 2010 11:59 PM, "Terry Frankcombe" wrote:
I can't speak for the developers. However, I think it's to do with the
library internals.
From here: http://www.mpi-forum.org/docs/mpi-20-html/node165.htm
"Advice to implementors.
"If provided is not MPI_THREAD_SINGLE then the MPI library should not
invoke C/ C++/Fortran library calls that
Hi all,
I am a beginner with MPI and a little confused by the
MPI_Init_thread() function:
If we use MPI_Init() or MPI_Init_thread(MPI_THREAD_SINGLE, provided)
to initialize the MPI environment, when we use the OpenMP
PARALLEL directive each process is forked into multiple
threads, and when an MPI function is ca
On Sat, Oct 24, 2009 at 07:00:11PM -0600, Damien Hocking wrote:
> Roberto,
>
> Ipopt doesn't use MPI. It can use the MUMPS parallel linear solver in
> sequential mode, but nothing is set up in IPOPT to use the parallel MPI
> version. For sequential mode, MUMPS dummies out the MPI headers. Th
Roberto,
Ipopt doesn't use MPI. It can use the MUMPS parallel linear solver in
sequential mode, but nothing is set up in IPOPT to use the parallel MPI
version. For sequential mode, MUMPS dummies out the MPI headers. The
dummy headers are part of the MUMPS distribution in the libseq
directo
Hi,
I am in the process of packaging coinor-ipopt for Debian. The build
process fails during the 'make test' phase. The error messages reference
orte_init, ompi_mpi_init and MPI_INIT. I have already asked on the
ipopt mailing list [0]. However, that query has not received any
replies. I though
On Mon, 2008-07-28 at 20:01 -0500, Dirk Eddelbuettel wrote:
> On 24 July 2008 at 14:39, Adam C Powell IV wrote:
> | Greetings,
> |
> | I'm seeing a segfault in a code on Ubuntu 8.04 with gcc 4.2. I
> | recompiled the Debian lenny openmpi 1.2.7~rc2 package on Ubuntu, and
> | compiled the Debian le
On 24 July 2008 at 14:39, Adam C Powell IV wrote:
| Greetings,
|
| I'm seeing a segfault in a code on Ubuntu 8.04 with gcc 4.2. I
| recompiled the Debian lenny openmpi 1.2.7~rc2 package on Ubuntu, and
| compiled the Debian lenny petsc and libmesh packages against that.
|
| Everything works just
If you are not using iWARP or InfiniBand networking, try configuring
Open MPI with --without-memory-manager and see if that solves your
problem. Issues like this can come up, especially in C++ codes, when
the application (or supporting libraries) have their own memory
managers that conflict wit
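For reference, the rebuild would look something like this (install prefix
illustrative):

$ ./configure --prefix=$HOME/openmpi --without-memory-manager
$ make all install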
Greetings,
I'm seeing a segfault in a code on Ubuntu 8.04 with gcc 4.2. I
recompiled the Debian lenny openmpi 1.2.7~rc2 package on Ubuntu, and
compiled the Debian lenny petsc and libmesh packages against that.
Everything works just fine in Debian lenny (gcc 4.3), but in Ubuntu
hardy it fails dur