Hi,
Could someone please help me with this error:
[node11][0,1,7][/SourceCache/openmpi/openmpi-5/openmpi/ompi/mca/btl/
tcp/btl_tcp_frag.c:202:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv:
readv failed with errno=54
[node28][0,1,22][/SourceCache/openmpi/openmpi-5/openmpi/ompi/mca/btl/
tcp/btl_t
Did you try using the Apple compiler too?
On 2009-09-01, at 19:31, Marcus Herrmann wrote:
Hi,
I am trying to install Open MPI 1.3.3 under OS X 10.6 (Snow Leopard)
using the 11.1.058 Intel compilers. Configure and build seem to work
fine. However, trying to run ompi_info after install causes di
Did you see that, maybe, just maybe, using:
xserve01.local slots=8 max-slots=8
xserve02.local slots=8 max-slots=8
xserve03.local slots=8 max-slots=8
xserve04.local slots=8 max-slots=8
you can set the number of processes specifically for each node? The
"slots" value does this, setting the configuration of
Hi,
So you have 4 nodes, each one with 2 processors, and each processor is
quad-core.
So you have capacity for 32 processes in parallel.
I think that using only the hostfile is enough; that is how I use it. If
you want to specify a specific host or a different sequence, mpirun will
obey the host sequ
OK, after all the considerations, I'll try Boost today, make some
experiments, and see if I can use it or if I'll still avoid it.
But as Raymond said, I think, the problem is becoming dependent on a
rich-incredible-amazing toolset that still implements only
MPI-1, and does not implement all
Hi Raymond, thanks for your answer.
On 2009-07-06, at 21:16, Raymond Wan wrote:
I've used Boost MPI before and it really isn't that bad and
shouldn't be seen as "just another library". Many parts of Boost
are on their way to becoming part of the standard and are discussed
and debated. And
serialization transparently and has some great natural
extensions to the MPI C interface for C++, e.g.
bool global = all_reduce(comm, local, std::logical_and<bool>());
This sets "global" to "local_0 && local_1 && ... && local_N-1"
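As a minimal, self-contained sketch of that call (my own illustration, assuming Boost.MPI is installed and the program is linked with -lboost_mpi -lboost_serialization):

#include <boost/mpi.hpp>
#include <functional>
#include <iostream>

int main(int argc, char* argv[])
{
    boost::mpi::environment env(argc, argv);   // stands in for MPI_Init/MPI_Finalize
    boost::mpi::communicator comm;             // defaults to MPI_COMM_WORLD

    bool local  = (comm.rank() % 2 == 0);      // some per-rank predicate
    bool global = boost::mpi::all_reduce(comm, local,
                                         std::logical_and<bool>());
    if (comm.rank() == 0)
        std::cout << "global AND = " << global << std::endl;
    return 0;
}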
Luis Vitorio Cargnini wrote:
Thank
Just one additional question: if I have:
vector< vector<double> > x;
how do I use MPI_Send? Like this:
MPI_Send(&x[0][0], x[0].size(), MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
?
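A sketch of why that exact call is unsafe, with two common workarounds (my own illustration; destination rank 2 and tag 0 follow the call above, everything else is assumed, and MPI_Init is taken to have been called already). The rows of a vector< vector<double> > are separate heap allocations, so &x[0][0] only covers row 0.

#include <mpi.h>
#include <vector>

// Option 1: one send per row; each inner vector IS contiguous.
void send_rows(const std::vector< std::vector<double> >& x)
{
    for (std::size_t i = 0; i < x.size(); ++i)
        MPI_Send(const_cast<double*>(&x[i][0]), (int)x[i].size(),
                 MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
}

// Option 2: flatten into one contiguous buffer and send once.
void send_flat(const std::vector< std::vector<double> >& x)
{
    std::vector<double> flat;
    for (std::size_t i = 0; i < x.size(); ++i)
        flat.insert(flat.end(), x[i].begin(), x[i].end());
    MPI_Send(&flat[0], (int)flat.size(), MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
}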
On 2009-07-05, at 22:20, John Phillips wrote:
Luis Vitorio Cargnini wrote:
Hi,
So, after some explanation I started to use the bin
Thank you very much, John; the explanation of &v[0] was the kind of
thing I was looking for, thank you very much.
This kind of approach solves my problems.
On 2009-07-05, at 22:20, John Phillips wrote:
Luis Vitorio Cargnini wrote:
Hi,
So, after some explanation I started to use
Hi,
So, after some explanation I started to use the C bindings inside my
C++ code, and then came my new doubt:
how do I send an object through MPI's Send and Recv? Because the types
are CHAR, int, double, long double, and so on.
Does anyone have a suggestion?
Thanks.
Vitorio.
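A minimal sketch of one common answer (my own illustration; the Particle type and the ranks are hypothetical): for a plain-old-data object you can send its raw bytes as MPI_BYTE, or describe it with a derived datatype; Boost.MPI, mentioned above, instead handles serialization transparently.

#include <mpi.h>

struct Particle {            // hypothetical POD object: no pointers,
    double x, y, z;          // no std::string, no virtual functions
    int    id;
};

// Assumes MPI_Init has been called and rank 1 is the receiver.
// Raw-byte sends like this also assume a homogeneous cluster.
void send_particle(const Particle& p)
{
    MPI_Send(const_cast<Particle*>(&p), sizeof(Particle), MPI_BYTE,
             1, 0, MPI_COMM_WORLD);
}

void recv_particle(Particle& p)    // called on rank 1, matching the send
{
    MPI_Recv(&p, sizeof(Particle), MPI_BYTE,
             0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}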
Thanks Jeff.
On 2009-07-04, at 08:24, Jeff Squyres wrote:
There is a proposal that has passed one vote so far to deprecate the
C++ bindings in MPI-2.2 (meaning: still have them, but advise
against using them). This opens the door for potentially removing
the C++ bindings in MPI-3.0.
As h
st is working like a giant wrapper for many non-OO
things to C++, and it seems that to use Boost I have to install a lot of
additional things.
Thanks.
Regards.
Vitorio.
On 2009-07-03, at 19:44, Dorian Krause wrote:
I'm sorry. I meant boost.mpi ...
Luis Vitorio Cargnini wrote:
Hi,
Hi,
I'm writing a C++ application that will use MPI. My problem
is, I want to use the C++ bindings, and then come my doubts. In all the
examples that I found, people use them almost like C, except for the
fact of adding the MPI:: namespace before the procedure calls.
For example I want to
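A minimal sketch of the style being described (my own illustration; these are the MPI-2 C++ bindings that the deprecation proposal above concerns):

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);
    int rank = MPI::COMM_WORLD.Get_rank();   // same calls as C, but as
    int size = MPI::COMM_WORLD.Get_size();   // methods in the MPI:: namespace
    std::cout << "rank " << rank << " of " << size << std::endl;
    MPI::Finalize();
    return 0;
}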
Maybe add slots=1, for example, to your first node.
On 2009-05-09, at 11:42, Venu Gopal wrote:
I am Venu.
I have tried to set up a simple 2-node Open MPI system
on two machines: one is running Debian Lenny (IP 10.0.3.1),
the other is running Ubuntu Hardy (IP 10.0.3.3).
I am getting an error when I try
This problem is occurring because the Fortran wasn't compiled with
debug symbols:
warning: Could not find object file "/Users/admin/build/i386-apple-
darwin9.0.0/libgcc/_udiv_w_sdiv_s.o" - no debug information available
for "../../../gcc-4.3-20071026/libgcc/../gcc/libgcc2.c".
Is the same
My $0.02 of contribution: try MacPorts.
On 2009-05-04, at 11:42, Jeff Squyres wrote:
FWIW, I don't use Xcode, but I use the precompiled gcc/gfortran from
here with good success:
http://hpc.sourceforge.net/
On May 4, 2009, at 11:38 AM, Warner Yuen wrote:
Have you installed a Fortran compil
mpted for a password before Open MPI will work
properly.
If you're still having problems after fixing this, please send all
the information from the "help" URL I sent earlier.
Thanks!
On Apr 22, 2009, at 3:24 PM, Luis Vitorio Cargnini wrote:
ok this is the debug information debug runn
p" URL I sent earlier.
Thanks!
On Apr 22, 2009, at 3:24 PM, Luis Vitorio Cargnini wrote:
OK, this is the debug information, debug running on 5 nodes (trying at
least); the process is locked until now:
each node is composed of two quad-core microprocessors.
(don't finish), one node yet
w.open-mpi.org/community/help/
Thanks.
On Apr 21, 2009, at 10:34 AM, Luis Vitorio Cargnini wrote:
Hi,
Can someone please tell me what this problem could be?
daemon INVALID arch ffc91200
the debug output:
[[41704,1],14] node[4].name cluster-srv4 daemon INVALID arch ffc91200
[cluster-sr
Hi,
I did as mentioned in the FAQ for password-less SSH, but mpirun is
still requesting the password?
-bash-3.2$ mpirun -d -v -hostfile chosts -np 16 ./hello
[cluster-srv0.logti.etsmtl.ca:31929] procdir: /tmp/openmpi-sessions-AH72000@cluster-srv0.logti.etsmtl.ca_0
/41688/0/0
[cl
Hi,
Can someone please tell me what this problem could be?
daemon INVALID arch ffc91200
the debug output:
[[41704,1],14] node[4].name cluster-srv4 daemon INVALID arch ffc91200
[cluster-srv3:09684] [[41704,1],13] node[0].name cluster-srv0 daemon 0
arch ffc91200
[cluster-srv3:09684] [[41704