Frank Wein wrote:
[...]
Or basically my question could also be rephrased as: is there a barrier
mechanism I could use in OMPI that causes very little to no CPU
usage (at the cost of higher latency)? Intel MPI for example seems to have
the env var "I_MPI_WAIT_MODE=1", which uses a wait mode instead of
busy polling. Before coding the logic for this into my
program, I thought I'd ask here first if this will work at all :).
Frank
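A minimal sketch of one way to get this effect inside the application, assuming an MPI-3 capable Open MPI build (MPI_Ibarrier): poll the nonblocking barrier with MPI_Test and sleep between checks, so a waiting rank burns almost no CPU at the price of up to one sleep interval of extra latency. There is also the mpi_yield_when_idle MCA parameter, which makes Open MPI yield instead of spin, though it typically does not bring CPU usage down to near zero.

/* low_cpu_barrier.c -- sketch only: a barrier that sleeps between
 * completion checks instead of busy-polling (requires MPI-3). */
#include <mpi.h>
#include <time.h>

static void low_cpu_barrier(MPI_Comm comm)
{
    MPI_Request req;
    int done = 0;
    struct timespec ts = { 0, 1000000 };      /* check every 1 ms */

    MPI_Ibarrier(comm, &req);                 /* nonblocking barrier */
    while (!done) {
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (!done)
            nanosleep(&ts, NULL);             /* sleep instead of spinning */
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    low_cpu_barrier(MPI_COMM_WORLD);          /* all ranks wait cheaply here */
    MPI_Finalize();
    return 0;
}

The 1 ms interval is arbitrary; make it larger if ranks wait for minutes and a few extra milliseconds of latency do not matter.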
>> (because there is currently no MPI-standardized way to get this information).
>>
>> I honestly don't remember what happened to this proposal, but I know who
>> made it (Adam Moody, from Livermore). I've just pinged him off list to find
>> out what happened
se to me - so I'm a
little lost.
On Aug 10, 2012, at 9:49 AM, Frank Kampe wrote:
> No. I am looking for a user-callable function that will return information
> about the running OpenMPI MPMD program from within the running program---the
> information listed below in (1) --
ogram? Or an
inter-program communication API that wants to tell another program some
information? Or an API by which the app can tell MPI "I'm going to spawn N
threads"? Or...?
On Aug 10, 2012, at 9:00 AM, Gus Correa wrote:
> On 08/10/2012 11:31 AM, Frank Kampe wrote:
>
users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Gus
Correa [g...@ldeo.columbia.edu]
Sent: Friday, August 10, 2012 11:00 AM
To: Open MPI Users
Subject: Re: [OMPI users] User Interface for MPMD
On 08/10/2012 11:31 AM, Frank Kampe wrote:
> Are there any user level APIs to pro
Are there any user level APIs to provide the following information to a running
OpenMPI MPMD program:
(1) Number of executable instances
(2) 1st MPI Task rank of each instance
(3) Number of MPI Tasks per instance
Thank You
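There is no single standardized call that returns exactly (1)-(3), but a sketch along these lines can reconstruct them from the standard MPI_APPNUM attribute (set by mpiexec for MPMD launches) plus an Allgather; the file and variable names here are made up for illustration:

/* mpmd_info.c -- sketch: derive the MPMD layout from MPI_APPNUM.
 * If the job was not launched MPMD-style the attribute may be absent,
 * in which case everything collapses to a single instance. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, flag = 0, appnum = 0, *attr;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Which app context (executable instance) this rank belongs to. */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &attr, &flag);
    if (flag)
        appnum = *attr;

    /* Everyone learns everyone's appnum, so each rank can compute the layout. */
    int *all = malloc(size * sizeof(int));
    MPI_Allgather(&appnum, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

    int ninst = 0, first = -1, ntasks = 0;
    for (int i = 0; i < size; i++) {
        if (all[i] + 1 > ninst)
            ninst = all[i] + 1;               /* (1) number of executable instances */
        if (all[i] == appnum) {
            if (first < 0)
                first = i;                    /* (2) first MPI rank of my instance */
            ntasks++;                         /* (3) number of tasks in my instance */
        }
    }

    printf("rank %d: %d instances, my instance %d starts at rank %d with %d tasks\n",
           rank, ninst, appnum, first, ntasks);

    free(all);
    MPI_Finalize();
    return 0;
}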
Great, that works!! Many Thanks!
On Wed, Feb 1, 2012 at 4:17 PM, Paul Kapinos wrote:
> Try out the attached wrapper:
> $ mpiexec -np 2 masterstdout
>
>> mpirun -n 2
>
>
>> Is there a way to have mpirun just merge the STDOUT of one process into its
>> STDOUT stream?
> --
> Dipl.-Inform. Paul Kapinos
When running
mpirun -n 2
the STDOUT streams of both processes are combined and displayed by
the shell. In such an interleaved format it's hard to tell which line
comes from which node.
Is there a way to have mpirun just merge the STDOUT of one process into its
STDOUT stream?
Best,
Frank
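One way to get a single stream without a wrapper is to silence stdout on every rank but one inside the program itself; a minimal sketch (freopen only affects the program's own stdio, not stderr or anything writing to fd 1 directly):

/* one_stream.c -- sketch: only rank 0 keeps its STDOUT. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Redirect stdout of every rank except 0 to /dev/null,
     * so the shell sees a single, uninterleaved stream. */
    if (rank != 0)
        freopen("/dev/null", "w", stdout);

    printf("hello from rank %d\n", rank);   /* only rank 0's line appears */

    MPI_Finalize();
    return 0;
}

If your Open MPI is recent enough, mpirun's --tag-output option (prefix every line with the emitting rank) may already be all you need.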
Cross
Say I run a parallel program using MPI. The execution command
mpirun -n 8 -npernode 2
launches 8 processes in total, that is, 2 processes per node on 4
nodes (OpenMPI 1.5). Each node has one dual-core CPU, and the
interconnect between the nodes is InfiniBand.
Now, the rank number
OpenMPI-1.3 in a different way than
OpenMPI-1.2.8?
Kind regards,
Frank
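The fragment above seems to be about how ranks get mapped onto the nodes. Independent of the Open MPI version, a quick sketch like this prints the actual placement so the mapping can be compared directly between installations:

/* rank_map.c -- sketch: show which node each rank ended up on. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(node, &len);

    printf("rank %d of %d runs on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}

Running it with the same -n and -npernode settings under both versions shows whether the rank-to-node mapping actually differs.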
help.
Frank
Still have a problem with gfortran on Mac OS X 10.5.3
/usr/local/bin/mpif90 main.f90
f951: error: unrecognized command line option "-std=legacy "
Almost there. Should I omit the -std=legacy part?
I'll try that.
Frank
I used the info provided on the openmpi users list and used the command
./configure --prefix=/usr/local --enable-mpi-f77 --enable-mpi-f90 \
    F77=gfortran FC=gfortran FFLAGS="-m32 -std=legacy" \
    --with-wrapper-fflags="-m32 -std=legacy" \
    --with-mpi-f90-size=medium --enable-mpirun-prefix-by-default
You were right.
Thanks
Frank
On Feb 29, 2008, at 4:41 PM, Jeff Squyres wrote:
FWIW, this doesn't look like an Absoft error -- it looks like gcc
cannot create executables. So Open MPI (or any C application) cannot
be built.
Did you install the OS X developer tools? IIRC, that's wh
That's probably the problem.
I will try to install it.
Thanks so much for the rapid reply.
Frank
On Feb 29, 2008, at 4:41 PM, Jeff Squyres wrote:
FWIW, this doesn't look like an Absoft error -- it looks like gcc
cannot create executables. So Open MPI (or any C application) cannot
be built.
I just upgraded to OS X 10.5.2 on my iMac and am trying to install Open MPI for use with Absoft Fortran 90, based on their instruction page:
http://www.absoft.com/Products/Compilers/Fortran/Linux/fortran95/BuildingOpenMPI_MacIntel_v101.pdf
I got into trouble as per the info below. Seems to be a gcc problem.
ee (and even understand). Great, thank you very much, it
works now.
Kind regards,
--
Frank Gruellich
HPC-Techniker
Tel.: +49 3722 528 42
Fax:+49 3722 528 15
E-Mail: frank.gruell...@megware.com
MEGWARE Computer GmbH
Vertrieb und Service
Nordstrasse 19
09247 Chemnitz/Roehrsdorf
, isn't it? I don't understand why it should
depend on the number of MPI nodes, as you said.
Thanks for your help. Kind regards,
--
Frank Gruellich
Hi,
shen T.T. wrote:
> Do you have another compiler? Could you check the error and report it?
I don't use other Intel compilers at the moment, but I'm going to give
gfortran a try today.
Kind regards,
--
Frank Gruellich
efore we dig
> deeper into your issue
> and then more specifically run
> host% ompi_info --param coll all
Find attached ~/notes from
$ ( ompi_info; echo '='; ompi_info --param coll all ) > ~/notes
Thanks in advance and kind regards,
--
Frank
This is my first post (so be gentle) and at this time I'm
not very used to the verbosity of this list, so if you need any further
information do not hesitate to request it.
Thanks in advance and kind regards,
--
Frank Gruellich
world" on both subnets yields the error
[g5dual.3-net:00436] *** An error occurred in MPI_Send
[g5dual.3-net:00436] *** on communicator MPI_COMM_WORLD
[g5dual.3-net:00436] *** MPI_ERR_INTERN: internal error
[g5dual.3-net:00436] *** MPI_ERRORS_ARE_FATAL (goodbye)
Hope this helps!
Frank
Ju
Hi!
The very same error occurred with openmpi-1.1rc2r10468, too.
Yours,
Frank
0, API v1.0, Component v1.1)
Enclosed you'll find the config.log.
Yours,
Frank
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by Open MPI configure 1.1, which was
generated by GNU Autoconf 2.59. I
1.1a3, too.
vhone runs fine when not submitted to the XGrid, i.e. without
XGRID_CONTROLLER_HOSTNAME and XGRID_CONTROLLER_PASSWORD set up.
Have you got any ideas to fix this?
Yours,
Frank
This file contains any messages produced by compilers while
running configure, to aid debugging if configure
Without XGrid set up, vhone runs just fine on network volumes, too (see
attachment dirlist_noxgrid_netvolume.txt).
Yours,
Frank
total 117520
drwxrwxrwx  80 motte  wheel   2720 Jun  4 10:02 .
drwxrwxrwx   7 admin  wheel    238 Jun  3 13:00 ..
-rw-rw-rw-   1 motte  wheel  21508 Jun  3
[powerbook.2-net:06962] sess_dir_finalize: univ session dir not empty -
leaving
[powerbook:~/.local/MVH-1.0] admin%
Have you got any idea what's wrong? The same "leaving" happens when the job
isn't submitted via XGrid.
Enclosed you will find the ompi_info output and the config.log.
Yo
[ibi:00734] sess_dir_finalize: found proc session dir empty - deleting
[ibi:00734] sess_dir_finalize: found job session dir empty - deleting
[ibi:00734] sess_dir_finalize: found univ session dir empty - deleting
[ibi:00734] sess_dir_finalize: found top session dir empty - deleting
[powerbook:/usr/local/MVH-1] admin%
Thanks,
e-7_0/default-universe/1
[ibook-g4:14666] unidir:
/tmp/openmpi-sessions-nobody@xgrid-node-7_0/default-universe
[ibook-g4:14666] top: openmpi-sessions-nobody@xgrid-node-7_0
[ibook-g4:14666] tmp: /tmp
Is this of any help to you?
Thanks,
Frank
On Mar 18, 2006, at 5:40 AM, Frank
Has anyone any idea concerning this matter?
Frank