On Jan 29, 2010, at 9:13 AM, Laurence Marks wrote:
> OK, but trivial codes don't always reproduce problems.
Yes, but if the problem is a read past the end of a file, that should be fairly
isolated behavior.
> Is strace useful?
Sure. But let's check to see if the apps are actually dying or hanging.
Josh,
I was following this thread as I had similar symptoms and discovered a
peculiar error. When I launch the program, Open MPI follows the
$TMPDIR environment variable and puts the session information in the
$TMPDIR directory. However, ompi-checkpoint seems to require the
session file to b
On 28 Jan 2010, at 21:04, DevL wrote:
> Hi,
> it looks like there is an issue with TotalView and
> Open MPI.
>
> The message queue is just empty and the output shows:
> WARNING: Field mtc_ndims_or_nnodes of type mca_topo_base_comm_1_0_0_t not
> found!
> WARNING: Field mtc_dims_or_index of type mca_topo_base_comm_1_0_0_t not
> found!
I have just created a small cluster consisting of three nodes:
bellhuey AMD 64 with 4 cores
wolf1 AMD 64 with 2 cores
wolf2 AMD 64 with 2 cores
The host file is:
bellhuey slots=4
wolf1 slots=2
wolf2 slots=2
bellhuey is the master, and wolf1 and wolf2 share the /usr and /home file
systems.
On Fri, 29 Jan 2010 11:25:09 -0500, Richard Treumann
wrote:
> Any support for automatic serialization of C++ objects would need to be in
> some sophisticated utility that is not part of MPI. There may be such
> utilities, but I do not think anyone who has been involved in the discussion
> knows of one.
Tim wrote:
By serialization, I mean in the context of data storage and transmission. See
http://en.wikipedia.org/wiki/Serialization
E.g., in a structure or class, if there is a pointer pointing to some memory
outside the structure or class, one has to send the content of that memory
besides the structure or class itself.
Tim
MPI is a library providing support for passing messages among several
distinct processes. It offers datatype constructors that let an
application describe complex layouts of data in the local memory of a
process so a message can be sent from a complex data layout or received
into a complex layout.
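
As an illustration of those datatype constructors (a sketch, not code from this
thread; the Particle struct and its members are invented), MPI_Type_create_struct
can describe a fixed-layout C++ struct so a whole object can be sent or broadcast
in one call:

// Sketch: describing a fixed-layout struct with MPI_Type_create_struct.
#include <mpi.h>
#include <cstddef>   // offsetof

struct Particle {
    double pos[3];
    double mass;
    int    id;
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // One block per member: block lengths, byte displacements, element types.
    int          lengths[3] = {3, 1, 1};
    MPI_Aint     displs[3]  = {offsetof(Particle, pos),
                               offsetof(Particle, mass),
                               offsetof(Particle, id)};
    MPI_Datatype types[3]   = {MPI_DOUBLE, MPI_DOUBLE, MPI_INT};

    MPI_Datatype particle_type;
    MPI_Type_create_struct(3, lengths, displs, types, &particle_type);
    MPI_Type_commit(&particle_type);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    Particle p = {{0.0, 1.0, 2.0}, 1.5, rank};

    // After this, every rank holds rank 0's particle.
    MPI_Bcast(&p, 1, particle_type, 0, MPI_COMM_WORLD);

    MPI_Type_free(&particle_type);
    MPI_Finalize();
    return 0;
}

For arrays of such structs one usually also resizes the type with
MPI_Type_create_resized so its extent matches sizeof(Particle); for members that
are pointers or std::vector, a derived datatype alone is not enough and the
pointed-to data has to be sent separately.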
By serialization, I mean in the context of data storage and transmission. See
http://en.wikipedia.org/wiki/Serialization
E.g., in a structure or class, if there is a pointer pointing to some memory
outside the structure or class, one has to send the content of that memory
besides the structure or class itself.
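
As a hedged sketch of what that manual serialization looks like in MPI terms
(the Payload struct and the helper names are invented, not from the thread): the
pointer itself is useless on the receiving process, so the length and the
pointed-to data are sent explicitly and the member is re-pointed at local memory
on arrival.

// Sketch: sending a struct whose member points at memory outside the struct.
#include <mpi.h>
#include <vector>

struct Payload {
    int     n;      // number of elements behind the pointer
    double* data;   // memory owned elsewhere
};

void send_payload(const Payload& p, int dest, MPI_Comm comm) {
    MPI_Send(&p.n, 1, MPI_INT, dest, 0, comm);          // length first
    MPI_Send(p.data, p.n, MPI_DOUBLE, dest, 1, comm);   // then the contents
}

Payload recv_payload(std::vector<double>& storage, int src, MPI_Comm comm) {
    Payload p{};
    MPI_Recv(&p.n, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
    storage.resize(p.n);                                 // allocate locally
    MPI_Recv(storage.data(), p.n, MPI_DOUBLE, src, 1, comm, MPI_STATUS_IGNORE);
    p.data = storage.data();                             // re-point the member
    return p;
}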
Tim wrote:
Sorry, my typo. I meant to say OpenMPI documentation.
Okay. "Open (space) MPI" is simply an implementation of the MPI
standard -- e.g., http://www.mpi-forum.org/docs/mpi21-report.pdf . I
imagine an on-line search will turn up a variety of tutorials and
explanations of that standard.
Sorry, my typo. I meant to say OpenMPI documentation.
How to send/receive and broadcast objects of a self-defined class and of
std::vector? If using MPI_Type_struct, the setup becomes complicated if the
class has various types of data members, or a data member
of another class.
How to deal with serialization problems?
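
One common pattern for the std::vector part (a sketch under the assumption that
the element type maps to a basic MPI datatype; bcast_vector is an invented
helper, not an Open MPI routine) is to broadcast the size first, resize on the
non-root ranks, and then broadcast the contiguous element buffer:

// Sketch: broadcasting a std::vector<double> from a root rank to everyone.
#include <mpi.h>
#include <vector>

void bcast_vector(std::vector<double>& v, int root, MPI_Comm comm) {
    int rank;
    MPI_Comm_rank(comm, &rank);

    // Step 1: agree on the length.
    int n = (rank == root) ? static_cast<int>(v.size()) : 0;
    MPI_Bcast(&n, 1, MPI_INT, root, comm);

    // Step 2: make room on the non-root ranks, then ship the elements.
    if (rank != root) v.resize(n);
    MPI_Bcast(v.data(), n, MPI_DOUBLE, root, comm);
}

For a self-defined class, you either describe its fixed-layout members with a
derived datatype or copy the members into flat buffers like this and send those.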
Tim wrote:
BTW: I would like to find some official documentation of OpenMP, but there
seems to be none?
OpenMP (a multithreading specification) has "nothing" to do with Open
MPI (an implementation of MPI, a message-passing specification).
Assuming you meant OpenMP, try their web site: http://o
Thanks!
How to send/receive and broadcast objects of a self-defined class and of
std::vector?
How to deal with serialization problems?
BTW: I would like to find some official documentation of OpenMP, but there
seems to be none?
--- On Fri, 1/29/10, Eugene Loh wrote:
> From: Eugene Loh
> Subject:
OK, but trivial codes don't always reproduce problems.
Is strace useful?
On Fri, Jan 29, 2010 at 7:32 AM, Jeff Squyres wrote:
> On Jan 29, 2010, at 8:23 AM, Laurence Marks wrote:
>
>> I'll try, but sometimes these things are hard to reproduce and I have
>> to wait for free nodes to do the test.
On Jan 29, 2010, at 8:23 AM, Laurence Marks wrote:
> I'll try, but sometimes these things are hard to reproduce and I have
> to wait for free nodes to do the test.
Understood.
> If I do manage to reproduce the
> issue (I've added ERR= traps, so would have to regress), anything else
> to look at?
On Fri, Jan 29, 2010 at 6:59 AM, Jeff Squyres wrote:
> On Jan 28, 2010, at 2:23 PM, Laurence Marks wrote:
>
>> > If one process dies prematurely in Open MPI (i.e., before MPI_Finalize),
>> > all the others should be automatically killed.
>>
>> This does not seem to be happening. Part of the problem
On Jan 28, 2010, at 2:23 PM, Laurence Marks wrote:
> > If one process dies prematurely in Open MPI (i.e., before MPI_Finalize),
> > all the others should be automatically killed.
>
> This does not seem to be happening. Part of the problem may be (and I
> am out of my depth here) that the Fortran
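
For reference, the behavior being discussed can be exercised with a minimal test
of this shape (a sketch, not code from the thread; the names are invented): one
rank exits before MPI_Finalize while the others block in a barrier, and with
correct cleanup mpirun should kill the survivors instead of letting them hang.

// Sketch: one rank dies prematurely; do the others get cleaned up?
#include <mpi.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        // Simulate a premature death: exit without calling MPI_Finalize.
        std::fprintf(stderr, "rank %d exiting prematurely\n", rank);
        std::exit(1);
    }

    // Surviving ranks block here; the runtime is expected to kill them
    // once rank 1 has died, rather than leave them hanging.
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}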
Doh! Sorry about that. :-(
We took extra effort in Open MPI to try to obey serial precedents. That
is, when we're discussing some feature X, we frequently ask ourselves, "What
would happen if you tried to do X with no MPI/parallelization involved?"
Meaning: what would happen if you ran
Hi Charles,
You don't need to install anything; just a few security settings have to
be correctly configured. Here are two links that might be helpful (they will be
added to README.WINDOWS too):
http://msdn.microsoft.com/en-us/library/aa393266(VS.85).aspx
http://community.spiceworks.com/topic/578
O
Tim wrote:
Sorry, complicated_computation() and f() are simplified too much. They do take more inputs.
Among the inputs to complicated_computation(), some are passed from main() to f() by address since they are big arrays, some are passed by value, and some are created inside f() before the call to complicated_computation().
In rank 0's main(), broadcast feature to all processes.
In f(), calculate a slice of the array based on rank, then either send/recv
it back to rank 0 or perhaps use a gather.
Only rank 0 does everything else. (The other ranks must call f() after
receiving feature.)
On Thu, 2010-01-28 at 21:23 -0800, Tim wrote:
> Sorry, complicated_computation() and f() are simplified too much.
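
A sketch of the pattern suggested above (all names and sizes are placeholders
standing in for the poster's feature, f(), and array, not real code from the
thread):

// Rank 0 broadcasts "feature", every rank computes its slice in f(),
// and rank 0 gathers the slices back into one array.
#include <mpi.h>
#include <vector>

std::vector<double> f(const std::vector<double>& feature,
                      int rank, int slice_len) {
    std::vector<double> slice(slice_len);
    for (int i = 0; i < slice_len; ++i)
        slice[i] = feature[0] * (rank * slice_len + i);   // stand-in computation
    return slice;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int slice_len = 4;
    std::vector<double> feature(8);
    if (rank == 0)
        for (int i = 0; i < 8; ++i) feature[i] = i + 1.0;

    // Rank 0's feature reaches every rank; then each rank calls f().
    MPI_Bcast(feature.data(), static_cast<int>(feature.size()), MPI_DOUBLE,
              0, MPI_COMM_WORLD);
    std::vector<double> slice = f(feature, rank, slice_len);

    // Rank 0 collects all the slices into one array.
    std::vector<double> all;
    if (rank == 0) all.resize(slice_len * nprocs);
    MPI_Gather(slice.data(), slice_len, MPI_DOUBLE,
               all.data(), slice_len, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}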
Sorry, complicated_computation() and f() are simplified too much. They do take
more inputs.
Among the inputs to complicated_computation(), some are passed from main()
to f() by address since they are big arrays, some are passed by value, and some
are created inside f() before the call to complicated_computation().