Re: [OMPI users] Open mpi based program runs as root and gives SIGSEGV under unprivileged user

2014-12-11 Thread Luca Fini
Many thanks for the replies. The mismatch in OpenMPI version is my fault: while writing the request for help I looked at the name of the directory where OpenMPI was built (I did not build it myself) and did not notice that the name of the directory did not reflect the version actually compiled. I

[OMPI users] [ICCS/Alchemy] Deadline January 15: Architecture, Languages, Compilation and Hardware support for Emerging ManYcore systems

2014-12-11 Thread CUDENNEC Loic
Please accept our apologies if you receive multiple copies of this CfP. News: [1] Submission deadline extended to January 15. [2] 27 thematic workshops are registered http://www.iccs-meeting.org/iccs2015/registered-workshops. [3] In addition to the Full Paper submission, we offer a Presentation O

Re: [OMPI users] Open mpi based program runs as root and gives SIGSEGV under unprivileged user

2014-12-11 Thread Gilles Gouaillardet
Luca, you might want to double-check the environment: env | grep ^OMPI and the per-user config: ls $HOME/.openmpi Cheers, Gilles On 2014/12/11 17:40, Luca Fini wrote: > Many thanks for the replies. > > The mismatch in OpenMPI version is my fault: while writing the request > for help I looked at

Re: [OMPI users] Oversubscribing in 1.8.3 vs 1.6.5

2014-12-11 Thread Ralph Castain
You are more than welcome - we really appreciate your spotting the problem! As a side note: you commented that this now works even if you don’t set the “yield” MCA param. Just as an FYI: we automatically set the “yield” param for you when we detect that you are oversubscribing the node as w

[OMPI users] ERROR: C_FUNLOC function

2014-12-11 Thread Siegmar Gross
Hi Jeff, will you have the time to fix the Fortran problem for the new Oracle Solaris Studio 12.4 compiler suite in openmpi-1.8.4? tyr openmpi-1.8.4rc2-SunOS.sparc.64_cc 103 tail -15 log.make.SunOS.sparc.64_cc PPFC comm_compare_f08.lo PPFC comm_connect_f08.lo PPFC comm_create_e

[OMPI users] OpenMPI 1.8.4 and hwloc in Fedora 14 using a beta gcc 5.0 compiler.

2014-12-11 Thread Jorge D'Elia
Dear Jeff, Our updates of OpenMPI to 1.8.3 (and 1.8.4) were all OK using Fedora >= 17 and system gcc compilers on ia32 or ia64 machines. However, the "make all" step failed using Fedora 14 with a beta gcc 5.0 compiler on an ia32 machine with a message like: Error: symbol `Lhwloc1' is already d

Re: [OMPI users] OpenMPI 1.8.4 and hwloc in Fedora 14 using a beta gcc 5.0 compiler.

2014-12-11 Thread Brice Goglin
This problem was fixed in hwloc upstream recently. https://github.com/open-mpi/hwloc/commit/790aa2e1e62be6b4f37622959de9ce3766ebc57e Brice On 11/12/2014 23:40, Jorge D'Elia wrote: > Dear Jeff, > > Our updates of OpenMPI to 1.8.3 (and 1.8.4) were > all OK using Fedora >= 17 and system gcc com

[OMPI users] MPI inside MPI (still)

2014-12-11 Thread Alex A. Schmidt
Dear OpenMPI users, Regarding this previous post from 2009, I wonder if the reply from Ralph Castain is still valid. My need is similar but somewhat simpler: to make a system call from an Open MPI Fortran application to run a third pa

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Gilles Gouaillardet
Alex, can you try something like call system("sh -c 'env -i /.../mpirun -np 2 /.../app_name'") ? The -i option starts env with an empty environment. That being said, you might need to set a few environment variables manually: env -i PATH=/bin ... And that also being said, this "trick" could be just a bad idea: you
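A minimal Fortran sketch of this suggestion, using the standard EXECUTE_COMMAND_LINE intrinsic (the thread itself uses the non-standard SYSTEM extension, which behaves the same way); the mpirun prefix and the hello_world test program are the ones Alex quotes in his follow-up, so treat them as placeholders for your own paths:

    program run_inner_mpirun
      implicit none
      integer :: rc
      ! env -i launches the inner mpirun with an empty environment, so the
      ! parent job's OMPI_* variables cannot leak into the child job; PATH
      ! must therefore be set explicitly on the env command line.
      call execute_command_line( &
           "sh -c 'env -i PATH=/usr/lib64/openmpi/bin:/bin mpirun -np 2 hello_world'", &
           exitstat=rc)
      print *, 'inner mpirun exited with status ', rc
    end program run_inner_mpirun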

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Alex A. Schmidt
Hello Gilles, Thanks for your reply. The "env -i PATH=..." stuff seems to work!!! call system("sh -c 'env -i PATH=/usr/lib64/openmpi/bin:/bin mpirun -n 2 hello_world' ") did produce the expected result with a simple openmpi "hello_world" code I wrote. It might be harder though with the real third

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Gilles Gouaillardet
Alex, just ask MPI_Comm_spawn to start (up to) 5 tasks via the maxprocs parameter : int MPI_Comm_spawn(char *command, char *argv[], int maxprocs, MPI_Info info, int root, MPI_Comm comm, MPI_Comm *intercomm, int array_of_errcodes[]) INPUT P
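In Fortran the argument order is the same, with an error argument added at the end. A minimal parent-side sketch, assuming a 'hello_world' child executable on the PATH (the name Alex uses in his follow-up):

    program spawn_parent
      use mpi
      implicit none
      integer :: ierr, rank, intercomm

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      ! MPI_Comm_spawn is collective over the parent communicator: every rank
      ! calls it, but command, argv and maxprocs are only significant at the
      ! root (rank 0 here). maxprocs = 5 asks for 5 children in one call.
      call MPI_Comm_spawn('hello_world', MPI_ARGV_NULL, 5, MPI_INFO_NULL, 0, &
                          MPI_COMM_WORLD, intercomm, MPI_ERRCODES_IGNORE, ierr)

      call MPI_Finalize(ierr)
    end program spawn_parent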

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Alex A. Schmidt
Gilles, Ok, very nice! When I execute do rank=1,3 call MPI_Comm_spawn('hello_world',' ',5,MPI_INFO_NULL,rank,MPI_COMM_WORLD,my_intercomm,MPI_ERRCODES_IGNORE,status) enddo I do get 15 instances of the 'hello_world' app running: 5 for each parent rank 1, 2 and 3. Thanks a lot, Gilles. Best
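Written out as a complete program, the loop looks roughly like this (a sketch; it assumes the parent job is launched with at least 4 ranks so that roots 1 to 3 exist):

    program spawn_loop
      use mpi
      implicit none
      integer :: ierr, rank, my_intercomm

      call MPI_Init(ierr)

      ! Each MPI_Comm_spawn is collective over MPI_COMM_WORLD; only the rank
      ! passed as root supplies the command. With roots 1, 2 and 3 and
      ! maxprocs = 5, three spawns of 5 children each yield the 15 instances
      ! of 'hello_world' reported above.
      do rank = 1, 3
         call MPI_Comm_spawn('hello_world', MPI_ARGV_NULL, 5, MPI_INFO_NULL, &
                             rank, MPI_COMM_WORLD, my_intercomm, &
                             MPI_ERRCODES_IGNORE, ierr)
      end do

      call MPI_Finalize(ierr)
    end program spawn_loop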

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Gilles Gouaillardet
Alex, just to make sure ... this is the behavior you expected, right? Cheers, Gilles On 2014/12/12 13:27, Alex A. Schmidt wrote: > Gilles, > > Ok, very nice! > > When I execute > > do rank=1,3 > call MPI_Comm_spawn('hello_world',' > ',5,MPI_INFO_NULL,rank,MPI_COMM_WORLD,my_intercomm,MPI_ER

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Alex A. Schmidt
Gilles, Well, yes, I guess I'll do tests with the real third-party apps and let you know. These are huge quantum chemistry codes (dftb+, siesta and Gaussian) which greatly benefit from a parallel environment. My code is just a front end to use those, but since we have a lot of data to proces