It turned out that the fftw package had not been installed properly.
I used the fftw package that comes with QE, as suggested, and everything went fine.
Thanks a lot.
CY
From: Jeff Squyres
To: Open MPI Users
List-Post: users@lists.open-mpi.org
Date: Sat, 16 Aug 2008 07:57:58 -0400
Subject: Re: [OMPI users] …
Hi Jitendra
Before you worry too much about the inefficiency of using a contiguous
scratch buffer to pack into and send from, and a second contiguous scratch
buffer to receive into and unpack from, it would be worth knowing how
OpenMPI processes a discontiguous datatype on your platform.
Gathering …
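As an illustration of the trade-off being discussed (not code from this
thread), here is a minimal C sketch, assuming the discontiguous data is one
column of a row-major n x n matrix of doubles: option (a) hand-packs the
column into a contiguous scratch buffer, option (b) describes the stride
with a derived datatype and leaves the traversal to the MPI library.

#include <mpi.h>
#include <stdlib.h>

/* Send one column of an n x n row-major matrix 'a' to rank 'dest'. */
void send_column(double *a, int n, int col, int dest, MPI_Comm comm)
{
    /* (a) gather the column by hand into a contiguous scratch buffer */
    double *scratch = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++)
        scratch[i] = a[i * n + col];
    MPI_Send(scratch, n, MPI_DOUBLE, dest, 0, comm);
    free(scratch);

    /* (b) derived datatype: n doubles spaced n doubles apart */
    MPI_Datatype column;
    MPI_Type_vector(n, 1, n, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);
    MPI_Send(&a[col], 1, column, dest, 1, comm);
    MPI_Type_free(&column);
}

Which variant is faster depends on how well the implementation handles the
strided type on the platform in question, which is exactly the point raised
above.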
George,
Yes, that's what I understood after struggling with it for over a week. I
need to send such messages frequently, so creating and destroying the
datatype each time may be expensive. What would be the best alternative
for sending such malloc'ed data? Though I can always pack the data into a
long array …
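For what it's worth, here is a minimal sketch of the "pack into a long
array" alternative mentioned above, using MPI_Pack; the struct msg type
and its fields are hypothetical stand-ins for the malloc'ed data, not code
from this thread.

#include <mpi.h>
#include <stdlib.h>

struct msg {
    int     count;
    double *values;   /* malloc'ed, so its address changes per allocation */
};

void send_packed(struct msg *m, int dest, MPI_Comm comm)
{
    int pos = 0, sz_int, sz_dbl;
    MPI_Pack_size(1, MPI_INT, comm, &sz_int);
    MPI_Pack_size(m->count, MPI_DOUBLE, comm, &sz_dbl);

    /* one contiguous scratch buffer holding the header and the payload */
    int   bufsize = sz_int + sz_dbl;
    char *buf     = malloc(bufsize);
    MPI_Pack(&m->count, 1, MPI_INT, buf, bufsize, &pos, comm);
    MPI_Pack(m->values, m->count, MPI_DOUBLE, buf, bufsize, &pos, comm);
    MPI_Send(buf, pos, MPI_PACKED, dest, 0, comm);
    free(buf);
}

The receiver gets an MPI_PACKED buffer and calls MPI_Unpack in the same
order: first the count, then the values.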
Jitendra,
You will not be able to reuse the same datatype. All internal arrays
are malloc'ed and their positions in memory will be completely random
from call to call (or from use to use of the datatype). Therefore, even if
your structure looks the same, as you use non-static arrays in your
struct …
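To make the point concrete (a hedged sketch, with a hypothetical struct msg
standing in for the real data): a struct datatype built from absolute
addresses only describes one particular allocation, so it has to be rebuilt
from fresh MPI_Get_address calls each time the arrays are reallocated.

#include <mpi.h>

struct msg {
    int     count;
    double *values;   /* malloc'ed elsewhere; its address is not fixed */
};

void send_with_fresh_type(struct msg *m, int dest, MPI_Comm comm)
{
    /* absolute addresses, valid only for this particular allocation */
    MPI_Aint disp[2];
    MPI_Get_address(&m->count, &disp[0]);
    MPI_Get_address(m->values, &disp[1]);

    int          blocklens[2] = { 1, m->count };
    MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
    MPI_Datatype dtype;
    MPI_Type_create_struct(2, blocklens, disp, types, &dtype);
    MPI_Type_commit(&dtype);

    /* MPI_BOTTOM because the displacements are absolute addresses */
    MPI_Send(MPI_BOTTOM, 1, dtype, dest, 0, comm);
    MPI_Type_free(&dtype);
}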
Ralph,
Thanks!
I checked the output of "ompi_info" and found that OpenMPI on PowerPC is
not built with heterogeneous support. We will rebuild OpenMPI and then
try the command you suggested.
Best Regards,
Mi
First, I trust that you built Open MPI to support heterogeneous
operations? I'm not sure what version you are using, but it may well
have done so by default.
Second, there is an error on your cmd line that is causing the
problem. It should read:
mpirun -np 1 -host b1 foo_x86 : -np 1 -host b2 foo_ppc
I have one MPI job consisting of two parts. One is "foo_x86", the other
is "foo_ppc", and there is MPI communication between "foo_x86" and
"foo_ppc".
"foo_x86" is built on X86 box "b1", "foo_pcc" is built on PPC box
"b2".Anyone can tell me how to start this MPI job?
I tried "mpiru
Hello,
Three things...
1) Josh, the main developer for checkpoint/restart, has been away for a
few weeks and has just returned. I suspect he will get unburied from
e-mail in another day or two.
2) The 1.4 (and 1.3) branch is very much under rapid development, and
there will be times when basic functionality …
There was a bug that caused ompi-checkpoint not to find the correct
place in the session directory for mpirun's contact file. This was
fixed in r19265, so you should no longer have a problem.
On Aug 20, 2008, at 2:11 AM, Matthias Hovestadt wrote:
Hi Gabriele!
In this case, mpirun works well …
Hi Gabriele!
In this case, mpirun works well, but the checkpoint procedure fails:
ompi-checkpoint 20109
[node0316:20134] Error: Unable to get the current working directory
[node0316:20134] [[42404,0],0] ORTE_ERROR_LOG: Not found in file
orte-checkpoint.c at line 395
[node0316:20134] HNP with PID …
Dear OpenMPI developers,
I'm testing checkpoint and restart with OpenMPI 1.4 nightly. The test
machine is an IBM Blade system over InfiniBand, with 4 processors per
node. At the moment, I have some problems. My application is a simple
communication ring between processors, with parametric …
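The poster's test program is not shown here; the following is only a
generic sketch of such a communication ring, assuming a single integer
token passed once around MPI_COMM_WORLD and at least two ranks.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;            /* neighbor to send to     */
    int prev = (rank + size - 1) % size;     /* neighbor to receive from */

    if (rank == 0) {                         /* rank 0 starts the ring  */
        token = 0;
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else {                                 /* everyone else relays it */
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        token++;
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }
    printf("rank %d of %d: token = %d\n", rank, size, token);
    MPI_Finalize();
    return 0;
}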