I will be out of the office starting 12/21/2008 and will not return until
01/02/2009.
So your tests show:
1. "Shared library in FORTRAN + MPI executable in FORTRAN" works.
2. "Shared library in C++ + MPI executable in FORTRAN " does not work.
It seems to me that the symbols in C library are not really recognized by
FORTRAN executable as you thought.What compilers did
Where did you put the environment variable related to MCF licence file and
MCF share libraries?
What is your default shell?
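As a sketch of how one might check both things (the library name, routine
name, and MCF variable name below are placeholders, not from the original
thread):

    # does the C++ library actually export the name the FORTRAN code asks for?
    nm -D libmcf.so | grep -i <routine_name>

    # bash example: put the exports in ~/.bashrc so the non-interactive
    # shells started on the remote nodes also pick them up
    export MCF_LICENSE_FILE=/path/to/mcf/license.dat
    export LD_LIBRARY_PATH=/path/to/mcf/lib:$LD_LIBRARY_PATH

A C++ compiler mangles symbol names unless the entry points are declared
extern "C", and FORTRAN compilers typically expect a lowercase name with a
trailing underscore, so a mismatch there shows up exactly as symbols that the
FORTRAN executable cannot resolve.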
Did your test indicate the following?
Suppose you have 4 nodes,
on node 1, " mpirun -np 4 --host node1,node2,node3,node4 hostname" works,
but "mpirun -np4 --host node1,no
which could be the right one ;-)
Thanks in advance for your hints, Best Regards, Gilbert.
On Thu, 23 Oct 2008, Mi Yan wrote:
>
> 1. MCA BTL parameters
> With "-mca btl openib,self", both message between two Cell processors on
> one QS22 and messages between two
Can you update me with the mapping, or the way to get it from the OS, on the
Cell?
Thanks
On Thu, Oct 23, 2008 at 8:08 PM, Mi Yan wrote:
Lenny,
Thanks.
I asked the Cell/BE Linux Kernel developer to get the CPU mapping :) The
mapping is fixed in current ker
On Thu, Oct 23, 2008 at 3:00 PM, Mi Yan wrote:
Hi, Lenny,
So rank file map will be supported in OpenMPI 1.3? I'm using
OpenMPI 1.2.6 and did not find the parameter "rmaps_rank_file_".
Do you have an idea when OpenMPI 1.3 will be available? OpenMPI 1.3
has quite a few features I'm looking for.
Thanks,
Mi
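As a sketch of what that might look like once 1.3 is out (host names and the
file name are made up; check the 1.3 mpirun man page for the exact option
spelling, which I believe is -rf/--rankfile backed by the rank_file mapper):

    # my_rankfile
    rank 0=node1 slot=0
    rank 1=node1 slot=1
    rank 2=node2 slot=0
    rank 3=node2 slot=1

    mpirun -np 4 -rf my_rankfile ./a.out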
1. MCA BTL parameters
With "-mca btl openib,self", both message between two Cell processors on
one QS22 and messages between two QS22s go through IB.
With "-mca btl openib,sm,slef", message on one QS22 go through shared
memory, message between QS22 go through IB,
Depending on the message si
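For reference, the two invocations being compared would look roughly like
this (the executable name is just a placeholder):

    # force everything, even on-node traffic, over IB
    mpirun -np 4 -mca btl openib,self ./a.out
    # shared memory within a QS22, IB between QS22s
    mpirun -np 4 -mca btl openib,sm,self ./a.out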
On Mon, 25 Aug 2008, Mi Yan wrote:
> Does OpenMPI always use SEND/RECV protocol bet
Seems to me SEND/RECV is
always used no matter what btl_openib_flags is set to. Can I force OpenMPI to use
RDMA between x86 and PPC? I only transfer MPI_BYTE, so we do not need
the support for endianness.
thanks,
Mi Yan
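For what it's worth, a sketch of how one might inspect and override that
parameter (the flag value is a bitmask of send/put/get bits; the number below
is from memory, so please check the ompi_info description before relying on
it):

    # list the openib BTL parameters, including btl_openib_flags
    ompi_info --param btl openib | grep flags
    # example override: try to allow only RDMA put/get
    mpirun -np 2 -mca btl openib,self -mca btl_openib_flags 6 ./a.out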
Ralph,
How does OpenMPI pick up the map between physical vs. logical
processors? Does OMPI look into "/sys/devices/system/node/node for
the cpu topology?
Thanks,
Mi Yan
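A sketch of where that information lives on a typical Linux kernel of that
era (the exact files vary with kernel version):

    ls /sys/devices/system/node/                      # node0, node1, ...
    cat /sys/devices/system/node/node0/cpumap         # CPUs belonging to node0
    cat /sys/devices/system/cpu/cpu0/topology/core_id
    cat /sys/devices/system/cpu/cpu0/topology/physical_package_id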
Ralph Ca
ts (which would
default to whatever node you were on when you executed this), while
foo_ppc will run on both hosts b1 and b2 (which means the first rank
will always go on b1).
Hope that helps
Ralph
On Aug 20, 2008, at 10:02 AM, Mi Yan wrote:
> I have one MPI job consisting of two parts. One
PC box
"b2".Anyone can tell me how to start this MPI job?
I tried "mpirun -np 1 foo_x86 : -np 1 foo_ppc -H b1,b2"
I tried the above command on "b1", the X86 box, and I got "foo_ppc:
Exec Format error"
I tried on "b2", the PPC box, and I got "foo_x86: Exec format error"
Does anybody have a clue? Thanks in advance.
Mi Yan
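For what it's worth, one way the per-executable host placement is usually
written is to give each app context its own -H, e.g. (assuming b1 is the x86
box and b2 the PPC box, and that the Open MPI build has heterogeneous support
enabled):

    mpirun -np 1 -H b1 ./foo_x86 : -np 1 -H b2 ./foo_ppc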