Hi Jack
100 kbytes is not a really big message size. My applications
routinely exchange larger amounts of data.
The MPI_ERR_TRUNCATE error means that a buffer you provided to MPI_Recv
is too small to hold the data to be received. Check the size of the data
you send and compare it with the size of the buffer you pass to MPI_Recv.
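If the message size can vary, one common pattern (a minimal sketch, not
from the original post; the buffer size and names are made up) is to probe
the message first and size the receive buffer from MPI_GET_COUNT, so
MPI_RECV can never truncate:

   program trunc_demo
      include 'mpif.h'
      integer :: ierr, rank, nelems, status(MPI_STATUS_SIZE)
      integer, allocatable :: buf(:)
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank == 0) then
         allocate(buf(25000))        ! ~100 kbytes of default integers
         buf = 0
         call MPI_SEND(buf, size(buf), MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
      else if (rank == 1) then
         call MPI_PROBE(0, 0, MPI_COMM_WORLD, status, ierr)     ! wait and peek
         call MPI_GET_COUNT(status, MPI_INTEGER, nelems, ierr)  ! actual size
         allocate(buf(nelems))                                  ! fits exactly
         call MPI_RECV(buf, nelems, MPI_INTEGER, 0, 0, &
                       MPI_COMM_WORLD, status, ierr)
      end if
      call MPI_FINALIZE(ierr)
   end program trunc_demo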
Hello there,
I have a problem setting up MPI/LAM. Here we go:
I start lam with the lamboot command successfully:
$ lamboot -v hostnames
LAM 7.1.2/MPI 2 C++/ROMIO - Indiana University
n-1<11960> ssi:boot:base:linear: booting n0 (frost)
n-1<11960> ssi:boot:base:linear: booting n1 (hurricane)
n-
If you're just starting with MPI, is there any chance you can upgrade to Open
MPI instead of LAM/MPI? All of the LAM/MPI developers moved to Open MPI years
ago.
On Jul 8, 2010, at 6:01 AM, Oliver Stolpe wrote:
> Hello there,
>
> I have a problem setting up MPI/LAM. Here we go:
>
> I start lam with the lamboot command successfully:
I thought it was Open MPI that I was using. I do not have permission to
install anything, except in my home directory. All the tutorials I found
started the environment with the lamboot command. What's the difference
when using only Open MPI?
$ whereis openmpi
openmpi: /etc/openmpi /usr/lib/openmpi /usr/l
Hi Oliver,
Looks like you are mixing LAM and OpenMPI. Remove LAM from your environment
(PATH, LD_LIBRARY_PATH or similar) and try again.
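A quick way to check which implementation your wrappers resolve to (the
--showme option is Open MPI-specific; LAM's wrappers won't answer it):

   $ which mpicc mpirun
   $ mpicc --showme:version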
HTH,
Mac
From my PDA: no type good
- Original Message -
From: users-boun...@open-mpi.org
To: us...@open-mpi.org
Sent: Thu Jul 08 05:01:00 2010
LAM and Open MPI are two different MPI implementations. LAM came before
Open MPI; we stopped developing LAM years ago.
Lamboot is a LAM-specific command. It has no analogue in Open MPI.
Orterun is Open MPI's mpirun.
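For example (output will vary with your installation):

   $ mpirun --version     # prints "mpirun (Open MPI) x.y.z" for Open MPI
   $ orterun --version    # the same launcher under its native name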
From a quick look at your paths and whatnot, it's not immediately obvious ho
You were right, it was linked to the LAM compiler. I didn't find the
Open MPI compiler on the system, though.
Now I downloaded and compiled the current stable version of Open MPI.
That worked. I had to make symbolic links to the executables so that the
system wouldn't get confused with the old MPI installation.
Hi,
On 08.07.2010 at 13:13, Oliver Stolpe wrote:
> I thought this is OpenMPI what I was using. I do not have permission to
> install something, only in my home directory.
even with this setup you could install an Open MPI version or other software
for your own usage if necessary. I put such software in a directory under
my home directory.
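Something along these lines (the prefix is illustrative; pick your own):

   $ ./configure --prefix=$HOME/sw/openmpi
   $ make all install
   $ export PATH=$HOME/sw/openmpi/bin:$PATH
   $ export LD_LIBRARY_PATH=$HOME/sw/openmpi/lib:$LD_LIBRARY_PATH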
Usually, systems like yours (that have executables named "mpirun.openmpi" and
"mpirun.lam" and so on) have some mechanism for selecting what the system
default MPI should be. It should then set a bunch of symlinks to make the
"simple" names point to the "specific" names. For example, if
Douglas Guptill wrote:
On Wed, Jul 07, 2010 at 12:37:54PM -0600, Ralph Castain wrote:
No, afraid not. Things work pretty well, but there are places
where things just don't mesh. Sub-node allocation in particular is
an issue, as it implies binding, and SLURM and OMPI have conflicting
methods.
On Jul 7, 2010, at 9:27 PM, Jed Brown wrote:
> Sorry, that didn't register. The displ argument is MPI_Aint, which is 8
> bytes (at least on LP64, probably also on LLP64), so your use of kind=8
> for that is certainly correct. The count argument is a plain int; I
> don't see how your code could be
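In Fortran source, that distinction looks like this (a sketch with made-up
variable names; MPI_TYPE_CREATE_STRUCT is just one routine that takes
MPI_Aint displacements):

   integer :: counts(2), types(2), newtype, ierr
   integer(kind=MPI_ADDRESS_KIND) :: displs(2)  ! matches MPI_Aint (8 bytes on LP64)
   call MPI_TYPE_CREATE_STRUCT(2, counts, displs, types, newtype, ierr)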
Hi Zhigang
Are you talking about a run time failure?
If you are, I think what is missing is just to set the PATH and the
LD_LIBRARY_PATH environment variables to point to the OpenMPI directories.
This can be done in your .[t]cshrc / .profile / .bashrc
file in your home directory (assuming it
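For example (illustrative paths; point them at your actual OpenMPI install):

   # bash, in ~/.bashrc:
   export PATH=/path/to/openmpi/bin:$PATH
   export LD_LIBRARY_PATH=/path/to/openmpi/lib:$LD_LIBRARY_PATH
   # tcsh, in ~/.tcshrc:
   setenv PATH /path/to/openmpi/bin:${PATH}
   setenv LD_LIBRARY_PATH /path/to/openmpi/lib:${LD_LIBRARY_PATH}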
On Thu, 8 Jul 2010 09:53:11 -0400, Jeff Squyres wrote:
> > Do you "use mpi" or the F77 interface?
>
> It shouldn't matter; both the Fortran module and mpif.h interfaces are the
> same.
Yes, but only the F90 version can do type checking; the function
prototypes are not present in mpif.h. The tr
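A tiny illustration (a hypothetical snippet, not from the thread):

   program check
      use mpi          ! explicit interfaces, unlike include 'mpif.h'
      integer :: ierr
      call MPI_INIT(ierr)
      ! Passing, say, a REAL where MPI_COMM_RANK expects its INTEGER rank
      ! argument is rejected at compile time here, but would compile
      ! silently against mpif.h.
      call MPI_FINALIZE(ierr)
   end program check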
Hi, thanks, the LD_LIBRARY_PATH has been set, and I checked again; I don't
think there is a conflict.
May I ask you a question: how do you normally configure your OpenMPI?
I guess you will not use simply "./configure --prefix=blahblah"; please
correct me if I am wrong.
So, what is your procedure?
Hi Zhigang
So, did setting the LD_LIBRARY_PATH work?
**
I don't add many options to the OpenMPI configure,
besides --prefix.
OpenMPI does a very good job of searching and checking
for everything that is available and that it needs on the system.
It will build with support for nearly everything it finds.
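Once it is installed you can confirm what was picked up, e.g. (the grep
pattern is just an example):

   $ ompi_info | grep gridengine   # SGE support appears as gridengine components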
Thank you Gus, your answer is very helpful.
I use an open-source CFD code called OpenFOAM; in the official build
suggestions I found something like "--with-sge",
but I just don't know whether it makes sense in my school's HPC setting.
The basic question is, if simply "./configure --prefix=blahblah" works (a
I'm trying to use MPI with Fortran on Linux 2.6.18-164.6.1.el5 x86_64.
I compiled this trivial code with mpif90:
program simple
include 'mpif.h'
integer numtasks, rank, ierr, rc
rc = 1
call MPI_INIT(ierr)
if (ierr .ne. 0) then
   print *, 'Error starting MPI program. Terminating.'
   call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
end if
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
print *, 'Number of tasks =', numtasks, ', my rank =', rank
call MPI_FINALIZE(ierr)
end program simple
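For reference, it can be built and launched like this (assuming the Open MPI
wrappers are first in your PATH):

   $ mpif90 simple.f90 -o simple
   $ mpirun -np 4 ./simple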
Hi Zhigang
I never used OpenFOAM
(we're atmosphere/ocean/climate/Earth Science CFD practitioners here!)
but I would guess it should work with any
resource manager, not only SGE.
In any case, it doesn't make much sense to configure OpenMPI with SGE
if your university's HPC cluster uses another resource manager.
PS - BTW, the --with-psm option that you said you are using
refers to specific hardware (see below).
You need to check which interconnect (network)
your HPC computer uses for MPI communication.
Ask your HPC system administrator or help desk.
If it is
"QLogic InfiniPath PSM" (I don't have this one h
Hi Gus,
I am very glad to have your help; it's really helpful, and it clears up many
of my long-standing confusions.
I want to say thank you.
I came here for a postdoc in civil engineering; we need to simulate the flow
around structures, in my case wind. That is, I am working on CFD from
a civil engineering background.
Anton,
On the node where you saw the failure (u02n065), can you verify what the max
locked memory limit is set to? In a bash shell you can do this with ulimit -l.
It should be set to at least 128K. Also please verify that the available
memory on the node (/proc/meminfo shows this) is sufficient.
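For example, on the node itself:

   $ ulimit -l                    # in kbytes; want >= 128 (i.e. 128K) or "unlimited"
   $ grep MemTotal /proc/meminfo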