Hi,
Since you are using std::string in your structure, you should allocate
the memory with "new" instead of "malloc". Otherwise the std::string
constructor is never called, and things like the length() of a string may
not give the desired result, leading Boost to iterate over too many characters.
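A minimal sketch of the difference (the struct name and members are made
up for illustration, not taken from your code):

#include <string>
#include <cstdlib>

struct Record {              // hypothetical struct holding a std::string
    std::string name;
    double value;
};

int main()
{
    // Wrong: malloc returns raw memory, the std::string constructor
    // never runs, so rec1->name is not a valid object and calling
    // rec1->name.length() is undefined behavior.
    Record *rec1 = static_cast<Record*>(std::malloc(sizeof(Record)));

    // Right: new runs the constructors, name is a valid empty string.
    Record *rec2 = new Record;

    delete rec2;
    std::free(rec1);
    return 0;
}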
Hi Dave,
thanks for your answer.
The question to me is: Is an MPI process supposed to eventually exit or
can it be a server process running for eternity?
In the latter case, no progress will be made ...
I think it would be helpful to users to provide a clarification in the
standard (e.g. in an "
Dear list,
from some small tests I ran, it appears to me that progress in
passive-target one-sided communication is only guaranteed if the origin
issues some "deeper" MPI function (i.e., a simple MPI_Comm_rank is not
sufficient).
Can someone confirm this experimental observation?
I have two ques
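For reference, a minimal sketch of the kind of test meant here (run with
two ranks; window size, value, and ranks are placeholders):

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf = 0.0;
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 0) {
        // Origin: passive-target access epoch towards rank 1.
        double val = 42.0;
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
        MPI_Put(&val, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
        MPI_Win_unlock(1, win);
    } else if (rank == 1) {
        // Target: performs no MPI calls during the origin's epoch.
        // The experimental question is whether the origin's
        // MPI_Win_unlock can complete while the target stays here.
    }

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 1) std::printf("buf = %f\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}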
On 1/20/10 5:38 PM, Eloi Gaudry wrote:
Hi,
FYI, this issue is solved with the latest version of the library
(v2-1.11), at least on my side.
Eloi
Hi Eloi,
Thanks a lot for the message. The issues are fixed on my side too.
Dorian
oblem doesn't show up?!
I'm interested in digging further, but I need some advice/hints on where
to go from here.
Thanks,
Dorian
On 1/19/10 1:29 PM, Jeff Squyres wrote:
Can you get a core dump, or otherwise see exactly where the seg fault is
occurring?
On Jan 18, 2010, at 8:34 AM
Hi Eloi,
Do the segmentation faults you're facing also happen in a sequential
environment (i.e., not linked against the Open MPI libraries)?
No, without MPI everything works fine. Also, linking against mvapich
doesn't give any errors. I think there is a problem with GotoBLAS and
the shared lib
Hi,
has anyone successfully combined Open MPI and GotoBLAS2? I'm facing
segfaults in any program that combines the two libraries (as shared
libs). The segmentation fault seems to occur in MPI_Init(). The gdb
backtrace is
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thr
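A minimal reproducer for this kind of setup could look like the sketch
below (the cblas_dgemm call and matrix sizes are placeholders, and the
build line assumes GotoBLAS2's CBLAS interface is installed as libgoto2):

// Possible build line (assumption): mpicxx repro.cpp -lgoto2 -o repro
#include <mpi.h>
#include <cblas.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   // the reported segfault occurs already here

    // Tiny dgemm only to force the BLAS shared library to be loaded.
    double A[4] = {1, 0, 0, 1};
    double B[4] = {1, 2, 3, 4};
    double C[4] = {0, 0, 0, 0};
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

    MPI_Finalize();
    return 0;
}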
dlock...
Just my $0.02 ...
Thanks
Edgar
Dorian Krause wrote:
Dear list,
the attached program deadlocks in MPI_File_write_all when run with 16
processes on two 8-core nodes of an InfiniBand cluster. It runs fine
when I
a) use tcp
or
b) replace MPI_File_write_all by MPI_File_write
I'm using
Dear list,
the attached program deadlocks in MPI_File_write_all when run with 16
processes on two 8-core nodes of an InfiniBand cluster. It runs fine when I
a) use tcp
or
b) replace MPI_File_write_all by MPI_File_write
I'm using Open MPI v1.3.2 (but I checked that the problem also
occurs
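The attached program is not reproduced in this archive; a rough sketch of
the collective-write pattern in question (file name, element counts, and
offsets are placeholders, not the original test case):

#include <mpi.h>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 1024;                 // doubles per process (placeholder)
    std::vector<double> buf(N, rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    // Each rank writes its block at a disjoint offset; the collective
    // MPI_File_write_all is where the hang was observed over InfiniBand.
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_seek(fh, offset, MPI_SEEK_SET);
    MPI_File_write_all(fh, buf.data(), N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}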
Hi Marcus,
Marcus Daniels wrote:
Hi,
I'm trying to do passive one-sided communication, unlocking a receive
buffer when it is safe and then re-locking it when data has arrived.
Locking also occurs for the duration of a send.
I also tried using post/wait and start/put/complete, but with that I
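For reference, the generic post/start/put/complete/wait pattern being
discussed looks roughly like this (run with two ranks; window contents
and the transferred value are placeholders):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf = 0.0;
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    // Build single-member groups describing "the other side".
    MPI_Group world_grp, peer_grp;
    MPI_Comm_group(MPI_COMM_WORLD, &world_grp);
    int peer = (rank == 0) ? 1 : 0;
    MPI_Group_incl(world_grp, 1, &peer, &peer_grp);

    if (rank == 1) {
        // Target: expose the window to rank 0, then wait for completion.
        MPI_Win_post(peer_grp, 0, win);
        MPI_Win_wait(win);
    } else if (rank == 0) {
        // Origin: start an access epoch, put data, complete it.
        double val = 3.14;
        MPI_Win_start(peer_grp, 0, win);
        MPI_Put(&val, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
        MPI_Win_complete(win);
    }

    MPI_Group_free(&peer_grp);
    MPI_Group_free(&world_grp);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}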
Hi,
a similar question was recently discussed on the mailing list:
http://www.open-mpi.org/community/lists/users/2009/08/10458.php
George Markomanolis wrote:
Dear all,
I am trying to figure out the algorithms that are used for some
collective communications (allreduce, bcast, alltoall). Is
Hi,
Dominik Táborský wrote:
Okay, now it's getting more confusing since I just found out that it
somehow stopped working for me!
Anyway, let's find a solution.
I found out that there is a difference between
ssh node1 echo $PATH
In this case the $PATH variable is expanded by the shell *befor
Hi,
--mca mpi_leave_pinned 1
might help. Take a look at the FAQ for various tuning parameters.
Michael Di Domenico wrote:
I'm not sure I understand what has actually happened here. I'm running
IMB on an HP Superdome, just comparing the PingPong benchmark
HP-MPI v2.3
Max ~ 700-800MB/sec
OpenM
Hi,
you do not send the trailing '\0' which is used to determine the
string length. I assume that chdata[i] has at least length 5 (otherwise
you would overrun your memory). Replace the "4" with "5" in MPI_Isend and
MPI_Recv and everything should work (if I understand the problem correctly).
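A small sketch of what I mean (tags and buffer contents are placeholders,
since the original code is not quoted here):

#include <mpi.h>
#include <cstdio>
#include <cstring>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char chdata[5] = "abcd";            // 4 chars + trailing '\0'

    if (rank == 0) {
        MPI_Request req;
        // Send 5 chars, not 4, so the '\0' terminator travels along.
        MPI_Isend(chdata, 5, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        char buf[5];
        MPI_Recv(buf, 5, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        // strlen() only works because the terminator was received too.
        std::printf("received \"%s\" (length %zu)\n", buf, std::strlen(buf));
    }

    MPI_Finalize();
    return 0;
}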
Dorian.
Alexey Soko
Hi,
you can ignore MP... if you set the compiler and linker to mpicc. In my
Makefile for HPL I have
# ----------------------------------------------------------------------
# - MPI directories - library ------------------------------------------
# ----------------------------------------------------------------------
Catalin David wrote:
Hello, all!
Just installed Valgrind (since this seems like a memory issue) and got
this interesting output (when running the test program):
==4616== Syscall param sched_setaffinity(mask) points to unaddressable byte(s)
==4616==at 0x43656BD: syscall (in /lib/tls/libc-2.3
Hi,
//Initialize step
MPI_Init(&argc,&argv);
//Here it breaks!!! Memory allocation issue!
MPI_Comm_size(MPI_COMM_WORLD, &pool);
std::cout<<"I'm here"<
And is your PATH also okay? (I see that you use plain mpicxx in the
build) ...
Moreover, I wanted to see if the installation is actually
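For comparison, a complete minimal version of such a test could look like
this (the variable name 'pool' follows the quoted snippet; the rest is a
plain guess at the intent):

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv)
{
    // Initialize step
    MPI_Init(&argc, &argv);

    int pool = 0, rank = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &pool);  // this is where the crash was reported
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::cout << "I'm here: rank " << rank << " of " << pool << std::endl;

    MPI_Finalize();
    return 0;
}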
I'm sorry. I meant boost.mpi ...
Luis Vitorio Cargnini wrote:
Hi,
I'm writing a C++ application that will use MPI. My problem
is that I want to use the C++ bindings, and that is where my doubts
come from. All the examples that I found use MPI almost like C, except
for the fact of adding the na
Hi,
Luis Vitorio Cargnini wrote:
Hi,
I'm writing a C++ application that will use MPI. My problem
is that I want to use the C++ bindings, and that is where my doubts
come from. All the examples that I found use MPI almost like C, except
for the fact of adding the namespace MPI:: before the proced
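For illustration, a generic hello-world using the MPI-2 C++ bindings that
the question refers to, with the equivalent C-style calls shown in a
comment (neither snippet is from the original post):

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv)
{
    // C-style usage (works from C++ as well):
    //   MPI_Init(&argc, &argv);
    //   int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    //   MPI_Finalize();

    // MPI-2 C++ bindings (namespace MPI::), later deprecated:
    MPI::Init(argc, argv);
    int rank = MPI::COMM_WORLD.Get_rank();
    int size = MPI::COMM_WORLD.Get_size();
    std::cout << "rank " << rank << " of " << size << std::endl;
    MPI::Finalize();

    return 0;
}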
nan}
mem[5] = { nan, nan, nan}
mem[6] = { nan, nan, nan}
mem[7] = { nan, nan, nan}
mem[8] = { nan, nan, nan}
mem[9] = { nan, nan, nan}
Dorian
> -----Original Message-----
> From: "Dorian Krause"
> Sent: 1
Thanks George (and Brian :)).
The MPI_Put error is gone. Did you take a look at the problem
that the PUT doesn't work with the block_indexed type? I'm
still getting the following output (V1 corresponds to the datatype
created with MPI_Type_create_indexed_block while the V2 type
is created with MPI
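To make the two variants concrete, a sketch of how such datatypes are
typically set up and used with MPI_Put (run with two ranks; counts, block
length, and displacements are placeholders, and building V2 with
MPI_Type_indexed is a guess since the message is truncated here):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // V1: fixed block length via MPI_Type_create_indexed_block.
    int displs[3] = {0, 4, 8};               // placeholder displacements
    MPI_Datatype v1;
    MPI_Type_create_indexed_block(3, 2, displs, MPI_DOUBLE, &v1);
    MPI_Type_commit(&v1);

    // V2: the same layout expressed with per-block lengths.
    int blocklens[3] = {2, 2, 2};
    MPI_Datatype v2;
    MPI_Type_indexed(3, blocklens, displs, MPI_DOUBLE, &v2);
    MPI_Type_commit(&v2);

    double mem[10] = {0};
    MPI_Win win;
    MPI_Win_create(mem, sizeof(mem), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 0) {
        double src[6] = {1, 2, 3, 4, 5, 6};
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
        // Using v1 as the target datatype is the variant where the
        // bad values were reported.
        MPI_Put(src, 6, MPI_DOUBLE, 1, 0, 1, v1, win);
        MPI_Win_unlock(1, win);
    }

    MPI_Win_free(&win);
    MPI_Type_free(&v1);
    MPI_Type_free(&v2);
    MPI_Finalize();
    return 0;
}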