Hi,
I have a problem with derived data types and MPI_Scatter/MPI_Gather in C
(Solaris 10 sparc, openmpi-1.2.4).
I want to distribute the columns of a matrix with MPI_Scatter/MPI_Gather.
Unfortunately my program didn't work with my derived data type, so I used
a 2x2 matrix to figure out what's wrong.
The derived datatype used together with the scatter operation is
wrong. Your datatype looks correct, except when you use it with a count.
An MPI datatype is defined by its size and content, as well as its
extent. When multiple elements of the same size are used in a
contiguous manner (such i
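The extent problem described above is usually fixed by resizing the column type so that consecutive columns start one element apart. Below is a minimal sketch for the 2x2 case mentioned earlier; the matrix values and variable names are illustrative, it assumes C's row-major storage, and it needs an MPI installation and `mpirun -np 2` to actually run:

```c
#include <stdio.h>
#include <mpi.h>

#define N 2  /* 2x2 matrix, as in the example above */

int main(int argc, char **argv)
{
    int rank;
    double A[N][N] = { {1.0, 2.0}, {3.0, 4.0} };  /* significant on root only */
    double col[N];                                /* one column per process */
    MPI_Datatype column, column_resized;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* N elements with stride N: one column of a row-major NxN matrix */
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
    /* Shrink the extent to one double, so the i-th piece of the scatter
       starts at A[0][i] instead of one full matrix further along. */
    MPI_Type_create_resized(column, 0, sizeof(double), &column_resized);
    MPI_Type_commit(&column_resized);

    MPI_Scatter(A, 1, column_resized, col, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    printf("rank %d got column: %f %f\n", rank, col[0], col[1]);

    MPI_Type_free(&column_resized);
    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}
```

Without the resize, the extent of the vector type spans nearly the whole matrix, so scattering with a count strides past the end of the buffer.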
I am having a problem using an "application context" with OpenMPI 1.2.4.
My invocation of "mpirun" is shown below along with the "--app" file.
Invocation:
export LD_LIBRARY_PATH="/usr/local/openmpi-1.2.4/gnu/lib"
/usr/local/openmpi-1.2.4/gnu/bin/mpirun --app /my_id/appschema
Contents
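For reference, an Open MPI application-context file is just one mpirun command-line fragment per line; this is a hypothetical sketch (node names and program names are made up), not the poster's actual file:

```shell
# hypothetical appschema: two app contexts launched by one mpirun
-np 2 -host node1 ./master
-np 4 -host node2 ./worker
```

Each line is parsed as if it had been given to mpirun directly, with a colon-free alternative to the `prog1 : prog2` MPMD syntax.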
I am having trouble building mpif77/mpif90 with gfortran on Mac OS 10.5, or
maybe just running them. The configure, make all, and make install seemed to go just
fine, finding my gfortran and apparently using it, but the scripts mpif77 and
mpif90 give the error that my openmpi was not built with Fortran support.
I believe you still must add "--enable-f77" and "--enable-f90" to the
OMPI configure line in addition to setting the FC and F77 env variables.
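A configure invocation along those lines might look like the sketch below; the prefix path is an assumption for illustration:

```shell
# hedged sketch: rebuild Open MPI with the Fortran wrappers enabled
./configure --prefix=/usr/local/openmpi \
            --enable-f77 --enable-f90 \
            FC=gfortran F77=gfortran
make all install
```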
-david
--
David Gunter
HPC-3: Parallel Tools Team
Los Alamos National Laboratory
On Jun 16, 2008, at 10:25 AM, Weirs, V Gregory wrote:
I am having
Greg,
If you use the absolute path names to run your mpif77 and mpif90, what
output do you get? In spite of the results from which mpif77, the
outputs from mpif77 and mpif90 look suspiciously like the outputs
from the apple supplied versions in /usr/bin.
Doug Reeder
On Jun 16, 2008, at 9:2
Greg,
In your run_output file you don't appear to be using the openmpi
versions that you built. From your make-install.out file it looks
like your versions are in /usr/local/openmpi/1.2.6-gcc4.0/bin. You
need to use that absolute path or prepend that path to your PATH
environment variable.
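Prepending that bin directory (and, on Mac OS X, the matching lib directory for the dynamic linker) would look roughly like this; the lib path is an assumption mirroring the bin path quoted above:

```shell
# put the freshly built Open MPI ahead of Apple's /usr/bin versions
export PATH=/usr/local/openmpi/1.2.6-gcc4.0/bin:$PATH
export DYLD_LIBRARY_PATH=/usr/local/openmpi/1.2.6-gcc4.0/lib:$DYLD_LIBRARY_PATH
which mpif77   # should now report the newly built wrapper
```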
Dear Sir:
I am trying to install Open MPI on a cluster that already has mpich-gm MPI installed.
I have followed the steps on your website.
I can compile and run the Hello_c application correctly.
But, how can I make sure that the application is run by Open MPI and not by mpich-gm?
Dear Mister Smith,
Thank you for installing Open MPI.
On 12:51 Mon 16 Jun , Tony Smith wrote:
> I have changed PATH and LD_LIBRARY_PATH:
Please be aware that you have to make those changes within your job
script. Otherwise they will only affect your local shell.
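A job script that makes those changes inside the job itself might look like the sketch below; the install bin path is taken from the message further down in this thread, while the lib path is an assumption parallel to it:

```shell
#!/bin/sh
# hypothetical job script: set the environment inside the job, not just
# in the interactive shell that submits it
export PATH=/ptmp/myname/openmpi123/ompi123_install/bin:$PATH
export LD_LIBRARY_PATH=/ptmp/myname/openmpi123/ompi123_install/lib:$LD_LIBRARY_PATH

mpirun -np 8 /ptmp/myname/openmpi123/openmpi-1.2.3/examples/hello_c
```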
> But, how can I make sure t
Dear Sir:
Thanks.
I have changed it to its absolute path:
/ptmp/myname/openmpi123/ompi123_install/bin/mpirun -np 8
/ptmp/myname/openmpi123/openmpi-1.2.3/examples/hello_c
But I still got the error :
[hpc-cluster-38 :32635] [0,0,0] ORTE_ERR
You should not need to delete, just add in front of MPICH.
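One quick way to confirm that the Open MPI build is the one being picked up (rather than the mpich-gm installation) is to ask the shell and the installation itself; `ompi_info` ships only with Open MPI:

```shell
# confirm which mpirun the shell resolves, and that it is Open MPI
which mpirun
ompi_info | head -3   # prints Open MPI version info; mpich-gm has no such tool
```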
> Would you please help me with that ?
I utterly hope I just did.
Most sincerely yours ;-)
-Andreas
--
Andreas Schäfer
Cluster and Metacomputing Working Group
Friedrich-Schiller-
Can you check to see what the locked memory limits are *inside of a
job*? This can be different than what they are if you login to the
node independently / outside of an LSF job.
For example, write a quickie script that runs "ulimit -a" and submit
that through LSF and see what results you get.
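Such a quickie script is just this; the script name and the `bsub` output file are made up for illustration:

```shell
#!/bin/sh
# check_limits.sh: report the limits in effect *inside* the LSF job
ulimit -a
```

Submitted with something like `bsub -o limits.out ./check_limits.sh`, the output file then shows the locked-memory limit the job actually runs under, which can differ from what a login shell on the node reports.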
Hi,
Can I use OpenMPI with C++ without any POGs ? Are there some kind of
wrapper of OpenMPI to C++ ?
--
Davi Vercillo Carneiro Garcia
Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br
"Good things come to those who... wait." - Deb
You can write OpenMPI codes with C++ MPI methods if you want.
Most people I know use the C bindings in their C++ anyway.
Nothing special needs to be done to call C from C++ if it is written
for this (OpenMPI is).
Not sure what a POG is so I may be wrong on that point.
Brock Palen
www.umich.edu/~brockp
Cen
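Using the plain C bindings from C++, as suggested above, can be sketched like this; it needs an MPI installation, a wrapper compiler such as `mpicxx`, and `mpirun` to actually execute:

```cpp
// hedged sketch: the C MPI bindings called directly from C++;
// mpi.h handles the extern "C" declarations itself
#include <iostream>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::cout << "rank " << rank << " of " << size << std::endl;

    MPI_Finalize();
    return 0;
}
```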
Greetings Open MPI users; we thought you'd be interested in the
following announcement...
A new supercomputer, powered by Open MPI, has broken the petaflop
barrier to become the world's fastest supercomputer. The
"Roadrunner" system was jointly developed by Los Alamos National
Laboratories and IB
Brad, just curious:
Did you tweak any other values for starting and running a job on such
a large system? You say unmodified, but OpenMPI lets you tweak many
values at runtime.
I would be curious to expand what I know from what you discovered.
Brock Palen
www.umich.edu/~brockp
Center for