Re: [OMPI users] Question on suspending/resuming MPI processes with SIGSTOP

2014-04-11 Thread Frank Wein
Frank Wein wrote: [...] Or basically my question could also be rephrased as: Is there a barrier mechanism I could use in OMPI that causes very little to no CPU usage (at the cost of higher latency)? Intel MPI, for example, seems to have the env var "I_MPI_WAIT_MODE=1" which uses
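
One way to approximate such a low-CPU barrier with an MPI-3 library is to post a non-blocking MPI_Ibarrier and sleep between completion checks. This is only a sketch using standard MPI calls, not what Intel MPI does internally, and the 1 ms poll interval is an arbitrary choice:

#include <mpi.h>
#include <time.h>

/* "Lazy" barrier: trades latency for CPU time by sleeping between
   completion checks instead of spinning. */
static void lazy_barrier(MPI_Comm comm)
{
    MPI_Request req;
    int done = 0;
    struct timespec ts = { 0, 1000000 };  /* 1 ms between polls */

    MPI_Ibarrier(comm, &req);
    while (!done) {
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (!done)
            nanosleep(&ts, NULL);
    }
}

Open MPI's mpi_yield_when_idle MCA parameter is, as far as I know, the closest built-in analogue to Intel's wait mode, but it only yields the CPU while polling, so some usage typically remains visible.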

[OMPI users] Question on suspending/resuming MPI processes with SIGSTOP

2014-04-11 Thread Frank Wein
coding the logic for this into my program, I thought I'd ask here first whether this will work at all :). Frank

Re: [OMPI users] User Interface for MPMD

2012-08-10 Thread Frank Kampe
>> (because there is currently no MPI-standardized way to get this information). >> >> I honestly don't remember what happened to this proposal, but I know who >> made it (Adam Moody, from Livermore). I've just pinged him off list to find >> out what happened

Re: [OMPI users] User Interface for MPMD

2012-08-10 Thread Frank Kampe
se to me - so I'm a little lost. On Aug 10, 2012, at 9:49 AM, Frank Kampe wrote: > No. I am looking for a user-callable function that will return information > about the running OpenMPI MPMD program from within the running program---the > information listed below in (1) --

Re: [OMPI users] User Interface for MPMD

2012-08-10 Thread Frank Kampe
ogram? Or an inter-program communication API that wants to tell another program some information? Or an API by which the app can tell MPI "I'm going to spawn N threads"? Or...? On Aug 10, 2012, at 9:00 AM, Gus Correa wrote: > On 08/10/2012 11:31 AM, Frank Kampe wrote: >

Re: [OMPI users] User Interface for MPMD

2012-08-10 Thread Frank Kampe
users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Gus Correa [g...@ldeo.columbia.edu] Sent: Friday, August 10, 2012 11:00 AM To: Open MPI Users Subject: Re: [OMPI users] User Interface for MPMD On 08/10/2012 11:31 AM, Frank Kampe wrote: > Are there any user level APIs to pro

[OMPI users] User Interface for MPMD

2012-08-10 Thread Frank Kampe
Are there any user-level APIs to provide the following information to a running OpenMPI MPMD program: (1) the number of executable instances, (2) the first MPI task rank of each instance, and (3) the number of MPI tasks per instance? Thank you
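
One portable way to recover all three (a sketch built on the standard MPI_APPNUM attribute and a communicator split, not an Open MPI-specific API; error handling omitted):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int wrank, flag, appnum = 0, num_apps, app_size, app_first;
    int *appnum_ptr;
    MPI_Comm appcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Which executable instance (application context) this rank belongs to. */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum_ptr, &flag);
    if (flag) appnum = *appnum_ptr;

    /* (1) number of executable instances = highest appnum + 1 */
    MPI_Allreduce(&appnum, &num_apps, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);
    num_apps += 1;

    /* One communicator per instance. */
    MPI_Comm_split(MPI_COMM_WORLD, appnum, wrank, &appcomm);

    /* (3) number of MPI tasks in this instance */
    MPI_Comm_size(appcomm, &app_size);

    /* (2) lowest MPI_COMM_WORLD rank in this instance */
    MPI_Allreduce(&wrank, &app_first, 1, MPI_INT, MPI_MIN, appcomm);

    printf("rank %d: app %d of %d, %d tasks, first world rank %d\n",
           wrank, appnum, num_apps, app_size, app_first);

    MPI_Comm_free(&appcomm);
    MPI_Finalize();
    return 0;
}

For a launch such as mpirun -np 2 a.out : -np 4 b.out, the appnum attribute should be 0 for ranks of the first executable and 1 for ranks of the second.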

Re: [OMPI users] Mpirun: How to print STDOUT of just one process?

2012-02-01 Thread Frank
Great, that works!! Many thanks! On Wed, Feb 1, 2012 at 4:17 PM, Paul Kapinos wrote: > Try out the attached wrapper: > $ mpiexec -np 2 masterstdout > >> mpirun -n 2 > >> Is there a way to have mpirun just merge STDOUT of one process to its >> STDOUT stream? > -- > Dipl.-Inform. Pau

[OMPI users] Mpirun: How to print STDOUT of just one process?

2012-02-01 Thread Frank
When running mpirun -n 2, the STDOUT streams of both processes are combined and displayed by the shell. In such an interleaved format it's hard to tell which line comes from which node. Is there a way to have mpirun just merge the STDOUT of one process into its own STDOUT stream? Best, Frank Cross
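
Besides a wrapper like the one posted in the reply above, a minimal in-program alternative (POSIX-only, assuming discarding the other ranks' output via /dev/null is acceptable) is to silence stdout on every rank except 0 right after MPI_Init:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank != 0)
        freopen("/dev/null", "w", stdout);   /* discard non-root output */

    printf("hello from rank %d\n", rank);    /* only rank 0's line appears */

    MPI_Finalize();
    return 0;
}

If the goal is merely to tell the lines apart rather than to suppress them, mpirun's --tag-output option (in newer Open MPI releases) prefixes each output line with the rank that produced it.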

[OMPI users] How to determine MPI rank/process number local to a socket/node

2012-01-26 Thread Frank
Say I run a parallel program using MPI. The execution command mpirun -n 8 -npernode 2 launches 8 processes in total, that is, 2 processes per node across 4 nodes (OpenMPI 1.5). Each node comprises one dual-core CPU, and the interconnect between nodes is InfiniBand. Now, the rank number
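
Since Open MPI 1.5 predates MPI_Comm_split_type, one portable sketch for computing a node-local rank is to gather every rank's processor name and count how many lower world ranks share it (memory grows with job size, so this is only reasonable for modest process counts):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size, i, namelen, local_rank = 0;
    char name[MPI_MAX_PROCESSOR_NAME];
    char *all;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    memset(name, 0, sizeof(name));           /* keep trailing bytes defined */
    MPI_Get_processor_name(name, &namelen);

    all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);

    /* Every rank learns every other rank's host name. */
    MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, MPI_COMM_WORLD);

    /* Node-local rank = number of lower world ranks on the same host. */
    for (i = 0; i < rank; i++)
        if (strcmp(name, all + (size_t)i * MPI_MAX_PROCESSOR_NAME) == 0)
            local_rank++;

    printf("world rank %d on %s: node-local rank %d\n", rank, name, local_rank);

    free(all);
    MPI_Finalize();
    return 0;
}

Open MPI also exports OMPI_COMM_WORLD_LOCAL_RANK into each process's environment (in the 1.3 series and later, if I remember correctly), which gives the same number without any communication.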

[OMPI users] OpenMPI-1.3 and XGrid

2009-01-23 Thread Frank Kahle
OpenMPI-1.3 in a different way than OpenMPI-1.2.8? Kind regards, Frank

Re: [OMPI users] Problems Compiling gfortran on mac os-x 10.5.3

2008-03-05 Thread Frank Tabakin
help. Frank

Re: [OMPI users] Problems Compiling gfortran on mac os-x 10.5.3

2008-03-05 Thread Frank Tabakin
Still have a problem with gfortran on Mac OS X 10.5.3: /usr/local/bin/mpif90 main.f90 f951: error: unrecognized command line option "-std=legacy " Almost there. Should I omit the -std=legacy part? I'll try that. Frank

Re: [OMPI users] Problems Compiling gfortran on mac os-x 10.5.3

2008-03-05 Thread Frank Tabakin
I used the info provided on the openmpi users list and used the command ./configure --prefix=/usr/local --enable-mpi-f77 --enable-mpi-f90 F77=gfortran FC=gfortran FFLAGS="-m32 -std=legacy" --with-wrapper-fflags="-m32 -std=legacy" --with-mpi-f90-size=medium --enable-mpirun-prefix-by-default

Re: [OMPI users] OpenMpi and Leopard

2008-02-29 Thread Frank Tabakin
You were right. Thanks Frank On Feb 29, 2008, at 4:41 PM, Jeff Squyres wrote: FWIW, this doesn't look like an Absoft error -- it looks like gcc cannot create executables. So Open MPI (or any C application) cannot be built. Did you install the OS X developer tools? IIRC, that's wh

Re: [OMPI users] OpenMpi and Leopard

2008-02-29 Thread Frank Tabakin
That's probably the problem. I will try to install it. Thanks so much for the rapid reply. Frank On Feb 29, 2008, at 4:41 PM, Jeff Squyres wrote: FWIW, this doesn't look like an Absoft error -- it looks like gcc cannot create executables. So Open MPI (or any C application) canno

[OMPI users] OpenMpi and Leopard

2008-02-29 Thread Frank Tabakin
I just upgraded to OSX 10.5.2 on my iMac and am trying to install openmpi for use with Absoft Fortran 90, based on their instruction page: http://www.absoft.com/Products/Compilers/Fortran/Linux/fortran95/BuildingOpenMPI_MacIntel_v101.pdf I got into trouble as per the info below. Seems to be a gcc problem. an

Re: [OMPI users] SEGV in libopal during MPI_Alltoall

2006-07-20 Thread Frank Gruellich
ee (and even understand). Great, thank you very much, it works now. Kind regards, -- Frank Gruellich HPC-Techniker Tel.: +49 3722 528 42 Fax:+49 3722 528 15 E-Mail: frank.gruell...@megware.com MEGWARE Computer GmbH Vertrieb und Service Nordstrasse 19 09247 Chemnitz/Roehrsdorf

Re: [OMPI users] SEGV in libopal during MPI_Alltoall

2006-07-20 Thread Frank Gruellich
, isn't it? I don't understand why it should depend on the number of MPI nodes, as you said. Thanks for your help. Kind regards, -- Frank Gruellich HPC-Techniker Tel.: +49 3722 528 42 Fax: +49 3722 528 15 E-Mail: frank.gruell...@megware.com MEGWARE Computer GmbH Vertrieb und Service Nordstrasse 19 09247 Chemnitz/Roehrsdorf Germany http://www.megware.com/

Re: [OMPI users] SEGV in libopal during MPI_Alltoall

2006-07-20 Thread Frank Gruellich
Hi, shen T.T. wrote: > Do you have the other compiler? Could you check the error and report it? I don't use other Intel compilers at the moment, but I'm going to give gfortran a try today. Kind regards, -- Frank Gruellich HPC-Techniker Tel.: +49 3722 528 42 Fax: +49

Re: [OMPI users] SEGV in libopal during MPI_Alltoall

2006-07-20 Thread Frank Gruellich
efore we dig > deeper into your issue > > > and then more specifically run > host% ompi_info --param coll all Find attached ~/notes from $ ( ompi_info; echo '='; ompi_info --param coll all ) >~/notes Thanks in advance and kind regards, -- Frank

[OMPI users] SEGV in libopal during MPI_Alltoall

2006-07-19 Thread Frank Gruellich
is my first post (so be gentle) and at this time I'm not very used to the verbosity of this list, so if you need any further information, do not hesitate to request it. Thanks in advance and kind regards, -- Frank Gruellich HPC-Techniker Tel.: +49 3722 528 42 Fax: +49 3722

Re: [OMPI users] OS X, OpenMPI 1.1: An error occurred in MPI_Allreduce on, communicator MPI_COMM_WORLD (Jeff Squyres (jsquyres))

2006-07-05 Thread Frank Kahle
world" on both subnets yields the error [g5dual.3-net:00436] *** An error occurred in MPI_Send [g5dual.3-net:00436] *** on communicator MPI_COMM_WORLD [g5dual.3-net:00436] *** MPI_ERR_INTERN: internal error [g5dual.3-net:00436] *** MPI_ERRORS_ARE_FATAL (goodbye) Hope this helps! Frank Ju

Re: [OMPI users] OpenMPI 1.1: Signal:10 info.si_errno:0(Unknown error: 0), si_code:1(BUS_ADRALN) (Frank)

2006-06-28 Thread Frank
Hi! The very same error occurred with openmpi-1.1rc2r10468, too. Yours, Frank

[OMPI users] OpenMPI 1.1: Signal:10 info.si_errno:0(Unknown error: 0) si_code:1(BUS_ADRALN)

2006-06-28 Thread Frank
0, API v1.0, Component v1.1) Enclosed you'll find the config.log. Yours, Frank

[OMPI users] Open MPI 1.2a1r10185 and XGrid

2006-06-04 Thread Frank
1.1a3, too. vhone runs fine when not submitted to the XGrid, i.e. without XGRID_CONTROLLER_HOSTNAME and XGRID_CONTROLLER_PASSWORD set up. Have you got any idea how to fix this? Yours, Frank

[OMPI users] Open MPI 1.0.2 with XGrid on Netvolumes failed to run (wrong ownership)

2006-06-04 Thread Frank
ut XGrid set up vhone runs just fine on network-volumes, too (see attachment dirlist_noxgrid_netvolume.txt). Yours, Frank total 117520 drwxrwxrwx 80 motte wheel 2720 Jun 4 10:02 . drwxrwxrwx 7 admin wheel 238 Jun 3 13:00 .. -rw-rw-rw- 1 motte wheel 21508 Jun 3

[OMPI users] Mac OS X: sess_dir_finalize leave

2006-06-02 Thread Frank
[powerbook.2-net:06962] sess_dir_finalize: univ session dir not empty - leaving [powerbook:~/.local/MVH-1.0] admin% Have you got any idea what's wrong? The same "leaving" happens when the job isn't submitted via XGrid. Enclosed you will find the ompi_info output and the config.log. Yo

Re: [OMPI users] Mac OS X 10.4.5 and XGrid, Open-MPI V1.0.1

2006-03-20 Thread Frank
nalize: found proc session dir empty - deleting [ibi:00734] sess_dir_finalize: found job session dir empty - deleting [ibi:00734] sess_dir_finalize: found univ session dir empty - deleting [ibi:00734] sess_dir_finalize: found top session dir empty - deleting [powerbook:/usr/local/MVH-1] admin% Than

[OMPI users] Mac OS X 10.4.5 and XGrid, Open-MPI V1.0.1

2006-03-19 Thread Frank
e-7_0/default-universe/1 [ibook-g4:14666] unidir: /tmp/openmpi-sessions-nobody@xgrid-node-7_0/default-universe [ibook-g4:14666] top: openmpi-sessions-nobody@xgrid-node-7_0 [ibook-g4:14666] tmp: /tmp Is this of any help to you? Thanks, Frank On Mar 18, 2006, at 5:40 AM, Frank

[OMPI users] Mac OS X 10.4.5 and XGrid, Open-MPI V1.0.1

2006-03-18 Thread Frank
Has anyone any idea concerning this matter? Frank