Check the FAQ section for processor affinity.
On Apr 25, 2008, at 2:27 PM, Roopesh Ojha wrote:
Hello
As a newcomer to the world of Open MPI who has perused the FAQ and searched the archives, I have a few questions about how to schedule processes across a heterogeneous cluster where some process [...]
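To make that FAQ pointer a bit more concrete, here is a minimal sketch of the two knobs usually involved; the hostfile contents, hostnames, and slot counts below are made up for illustration, and the exact options depend on your Open MPI version:

  # hypothetical hostfile for a heterogeneous cluster
  # (fastnode1/fastnode2/oldnode1 and the slot counts are examples only)
  fastnode1 slots=8
  fastnode2 slots=8
  oldnode1  slots=2

  # fill each node's slots before moving on to the next node
  mpirun --hostfile myhosts --byslot -np 12 ./my_app

  # additionally ask Open MPI to pin each process (1.2.x-style MCA parameter)
  mpirun --hostfile myhosts --byslot --mca mpi_paffinity_alone 1 -np 12 ./my_app

The --byslot/--bynode switches control how ranks are laid out across the hosts, and mpi_paffinity_alone asks Open MPI to bind each process to a processor on nodes the job has to itself.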
Albert,
Please include all the information indicated here:
http://www.open-mpi.org/community/help/
Without more information, we would just be guessing at what is causing the issue you are seeing. If I had to guess, I would say you are running an out-of-date version of Open MPI.
On Sat, Apr [...]
Doesn't seem to work. This is the appfile I'm using:
# Application context files specify each sub-application in the
# parallel job, one per line.
# Server
-np 2 server
# Client
-np 1 client 0.1.0:2001
And the output:
mpirun --app ./appfile
Processor 0 (3659, Receiver) initialized
Processor 1 ([...]
Daniel,
I fixed this issue a few days ago in the trunk. More info at
https://svn.open-mpi.org/trac/ompi/changeset/18290
The problem was that the compilers do not preprocess assembly files with the suffix .s, and in this specific case don't expand the two macros leaf and end. Changing the suffix [...]
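For anyone running into this on their own build, the convention the fix relies on is general toolchain behaviour (shown here with gcc as an example; this is an illustration, not a quote from the changeset):

  gcc -c atomic-asm.s   # lowercase .s: fed straight to the assembler,
                        # no C preprocessor, so macros are never expanded
  gcc -c atomic-asm.S   # uppercase .S: run through the C preprocessor first,
                        # so macros pulled in via #include are expanded

which is why renaming or explicitly preprocessing the file makes the unrecognized-opcode errors go away.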
Dear Developers,
I am new to Open MPI and IRIX; I use it for parallel computing of a CFD code.
The attachment is the full version of the error I met. Please help, thanks a lot!
The error message looks like this:
> atomic-asm.s: Assembler messages:
> atomic-asm.s:8: Error: unrecognized opcode `le[...]
Anyone else out there having this problem?
Albert
Begin forwarded message:
From: Albert Everett
Date: April 26, 2008 11:11:25 AM CDT
To: npaci-rocks-discuss...@sdsc.edu
Subject: hanging orteds
I'm getting a lot of orted processes that don't die out after jobs finish, both interactive and v[...]
Dear Developers,
I ran into the same message that Jonathan Day reported before:
http://www.open-mpi.org/community/lists/users/2005/09/0138.php
I use
IRIX 6.5,
Open MPI 1.2.6,
gcc 4.3.0 (gcc, g++, gfortran),
GNU binutils 2.18,
and I saw that the answer from Mr. Brian was:
This scenario is known to be buggy in some versions of Open MPI. It is
now fixed in the SVN version and will be part of the 1.3 release.
As a quick fix for your application, you'll need to spawn both applications
with the same mpirun, using MPMD syntax. However, this will have the
adverse effect of having [...]
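For reference, that MPMD workaround looks like this on the command line (reusing the server/client binaries and port argument from the appfile earlier in this digest; adjust names and arguments to your own programs):

  mpirun -np 2 ./server : -np 1 ./client 0.1.0:2001

The colon-separated form is equivalent to the --app appfile shown above; the point of the workaround is that both sides end up under a single mpirun.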
Yes, there can be all kinds of hidden dependencies and/or bootstrap issues with linking Fortran codes with the C compilers. We've typically used the table in the Automake docs to determine which linker is used to understand this stuff (well, it *used* to be a table, but I think AM now supp[...]
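As a rough sketch of the usual rule of thumb (file names made up, Open MPI compiler wrappers assumed): once Fortran objects are part of the program, let the Fortran compiler or its wrapper drive the final link so the Fortran runtime libraries come in automatically:

  mpicc  -c c_helpers.c              # C pieces with the C wrapper
  mpif90 -c main.f90                 # Fortran pieces with the Fortran wrapper
  mpif90 main.o c_helpers.o -o app   # link with the Fortran wrapper

Linking the other way around, with the C compiler as the link driver, is where the hidden runtime dependencies mentioned above tend to bite.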
I want to connect two MPI programs through the MPI_Comm_connect / MPI_Comm_accept API.
This is my server app:
#include <mpi.h>

int main(int argc, char* argv[])
{
    int rank, count;
    int i;
    float data[100];
    char myport[MPI_MAX_PORT_NAME];
    MPI_Status status;
    MPI_Comm intercomm;
    MPI_Init(&argc, &argv);
    [...]
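Since the listing is cut off by the archive, here is a minimal, self-contained sketch of the accept/connect calling sequence; it is an illustration only, not the poster's original code, and the file names, message sizes, and lack of error handling are all assumptions:

  /* server.c - accept side (sketch) */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      char myport[MPI_MAX_PORT_NAME];
      MPI_Comm intercomm;
      float data[100];
      MPI_Status status;
      int count;

      MPI_Init(&argc, &argv);

      /* Open a port and print its name so it can be passed to the client. */
      MPI_Open_port(MPI_INFO_NULL, myport);
      printf("server port: %s\n", myport);

      /* Block until a client connects; the result is an intercommunicator. */
      MPI_Comm_accept(myport, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);

      /* Receive one message from rank 0 of the client side. */
      MPI_Recv(data, 100, MPI_FLOAT, 0, 0, intercomm, &status);
      MPI_Get_count(&status, MPI_FLOAT, &count);
      printf("server received %d floats\n", count);

      MPI_Comm_disconnect(&intercomm);
      MPI_Close_port(myport);
      MPI_Finalize();
      return 0;
  }

  /* client.c - connect side (sketch); expects the port name in argv[1] */
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      MPI_Comm intercomm;
      float data[100] = {0};

      MPI_Init(&argc, &argv);

      /* argv[1] is the port name printed by the server. */
      MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);

      /* Send 100 floats to rank 0 on the server side. */
      MPI_Send(data, 100, MPI_FLOAT, 0, 0, intercomm);

      MPI_Comm_disconnect(&intercomm);
      MPI_Finalize();
      return 0;
  }

Compile both with mpicc and either start them under separate mpiruns (passing the printed port name to the client) or launch them together with the MPMD syntax quoted earlier in this digest.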