Afraid I'll need more info than that, David - how was it configured, what is
the resource manager, and what was the command line to launch?
We aren't seeing any problems in our test systems, so I suspect the most likely
reason is version confusion, where the mpirun being used doesn't match the
backend libraries the application was built against.
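A quick sanity check for that kind of mismatch is to confirm that the
mpirun on the PATH and the libmpi the test program links against come
from the same installation. A rough sketch (the binary name below is
just a stand-in for the actual test program):

  which mpirun
  ompi_info | head              # reports which Open MPI the tools belong to
  ldd ./matmul | grep libmpi    # 'matmul' stands in for the real test binary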
Hi all,
I am having trouble with the newly available 1.6 release (tar.gz).
I built it with my "normal" configure options, with no obvious
configure or make errors. I used both PGI 12.4 and GCC 4.7.0, under
Scientific Linux 5.5.
I then compiled my "normal" matrix-multiply test case. Upon exec
Hi all,
Can anybody tell me how to enable polling and interrupt/blocking execution
in Open MPI?
Thanks
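As far as I know, the closest knob Open MPI exposes is the
mpi_yield_when_idle MCA parameter: 0 (the default) keeps the aggressive
polling progress loop, while 1 makes idle processes yield the CPU instead
of spinning. That is "degraded" progress rather than true interrupt-driven
blocking, but it is usually what people are after. A minimal sketch,
assuming four ranks of a program a.out:

  mpirun --mca mpi_yield_when_idle 1 -np 4 ./a.out

  # or set it persistently in $HOME/.openmpi/mca-params.conf:
  #   mpi_yield_when_idle = 1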
On Sun, May 13, 2012 at 8:31 PM, George Bosilca wrote:
> Get the free out of the #ifndef LEAK and your problem will be solved.
Compiling with -DNDEBUG would also solve the problem.
Bert
>
> george.
>
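For context, that works because the MPI_Type_contiguous call in the
snippet further down sits inside an assert(), and defining NDEBUG compiles
assert bodies away entirely, so the datatypes are never created in the
first place. A minimal sketch, with leak.c standing in for the actual
test source:

  mpicc leak.c -o leak           # as posted: leaks one datatype per iteration
  mpicc -DNDEBUG leak.c -o leak  # assert() removed, so no datatypes are created at all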
Dear Open MPI developers,
I have built my own package of openmpi 1.6 based on the RHEL6 package
on my SL6 test machine. My tests fail like this:
Open RTE was unable to open the hostfile:
/usr/lib64/openmpi-intel/etc/openmpi-default-hostfile
Check to make sure the path and filename are correct
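Two workarounds that usually get past this while the packaging is sorted
out (a sketch, assuming the path from the error message): the default
hostfile may simply be an empty file, or an existing hostfile can be
supplied through the orte_default_hostfile MCA parameter:

  mkdir -p /usr/lib64/openmpi-intel/etc
  touch /usr/lib64/openmpi-intel/etc/openmpi-default-hostfile
  # or point the runtime at a hostfile that does exist:
  mpirun --mca orte_default_hostfile /path/to/hostfile -np 2 ./a.out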
On May 14, 2012, at 2:16 AM, Andreas Schäfer wrote:
> Not much more surprising than an array allocated by malloc() not being
> automatically deallocated once the pointer dies. The datatype variable
> is merely a handle; Open MPI has an internal data store for each
> user-defined datatype. Same for
On 09:06 Mon 14 May, Ilja Honkonen wrote:
> Thanks, so it's a feature. A bit surprising though since usually local
> variables are deallocated automatically.
Not much more surprising than an array allocated by malloc() not being
automatically deallocated once the pointer dies. The datatype variable
is merely a handle; Open MPI has an internal data store for each
user-defined datatype.
Get the free out of the #ifndef LEAK and your problem will be solved.
for (int i = 0; i < 1000; i++) {
    MPI_Datatype type;
    assert(
        MPI_Type_contiguous(
            10 * sizeof(double),
            MPI_BYTE,  /* element type assumed; the posted snippet is truncated here */
            &type) == MPI_SUCCESS);
    MPI_Type_free(&type);  /* free the handle each iteration (the fix discussed above) */
}