Hi,
this is to report that building openmpi-1.5 from the RPM fails on Linux
SLES10sp3 x86_64, because the configure script delivered with 1.5 now
checks the --program-prefix switch.
rpm is version 4.4.2-43.36.1
rpmbuild --rebuild SRPMS/openmpi-1.5.0.src.rpm --define 'configure_options
Hi Ralf!
I saw that the new release 1.5 is out.
I didn't find this fix in the "list of changes"; is it present but not
mentioned, since it is a minor fix?
Thank you,
Federico
2010/4/1 Ralph Castain
> Hi there!
>
> It will be in the 1.5.0 release, but not 1.4.2 (couldn't backport the fix).
>
The fix should be there - just didn't get mentioned.
Let me know if it isn't and I'll ensure it is in the next one...but I'd be very
surprised if it isn't already in there.
On Oct 19, 2010, at 3:03 AM, Federico Golfrè Andreasi wrote:
> Hi Ralf!
>
> I saw that the new release 1.5 is out.
>
On Thu, Sep 30, 2010 at 09:00:31AM -0400, Richard Treumann wrote:
> It is possible for MPI-IO to be implemented in a way that lets a single
> process or the set of processes on a node act as the disk I/O agents for the
> entire job, but someone else will need to tell you if OpenMPI can do this,
> I
As Rob mentions, there are three capabilities to consider:
1) The process (or processes) that will do the I/O are members of the file
handle's hidden communicator and the call is collective
2) The process (or processes) that will do the I/O are members of the
file handle's hidden communicator
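A minimal sketch of case 1), assuming a file opened across MPI_COMM_WORLD
(the filename and layout here are illustrative): with a collective call such
as MPI_File_write_at_all, the library is free to funnel the actual disk I/O
through a subset of processes, e.g. one aggregator per node.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Every rank participates in the collective write; the MPI library
       may aggregate the actual disk access on a few I/O agents. */
    double val = (double)rank;
    MPI_Offset off = (MPI_Offset)rank * (MPI_Offset)sizeof(double);
    MPI_File_write_at_all(fh, off, &val, 1, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Whether Open MPI (via its MPI-IO layer) actually does node-level aggregation
for a given file system is a separate question, as noted above.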
Hi,
I need to design a data structure to transfer data between nodes on Open MPI
system.
Some elements of the structure have a dynamic size.
For example,
typedef struct {
    double  data1;
    double *dataVec;   /* dynamically sized ("vector") part */
    int     vecLen;    /* length field added here for illustration */
} myDataType;
The size of dataVec depends on some intermediate computing results.
If I o
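One common approach (a sketch only; the vecLen field and the three-message
protocol are illustrative, not from the original post) is to send the dynamic
length first, so the receiver can allocate, and then the fixed and variable
parts:

#include <mpi.h>
#include <stdlib.h>

/* Assumes the myDataType definition above. */
void send_myData(const myDataType *d, int dest, MPI_Comm comm)
{
    MPI_Send(&d->vecLen, 1, MPI_INT, dest, 0, comm);    /* 1. length  */
    MPI_Send(&d->data1, 1, MPI_DOUBLE, dest, 1, comm);  /* 2. fixed   */
    MPI_Send(d->dataVec, d->vecLen, MPI_DOUBLE,
             dest, 2, comm);                            /* 3. dynamic */
}

void recv_myData(myDataType *d, int src, MPI_Comm comm)
{
    MPI_Recv(&d->vecLen, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
    MPI_Recv(&d->data1, 1, MPI_DOUBLE, src, 1, comm, MPI_STATUS_IGNORE);
    d->dataVec = malloc(d->vecLen * sizeof(double));
    MPI_Recv(d->dataVec, d->vecLen, MPI_DOUBLE, src, 2, comm,
             MPI_STATUS_IGNORE);
}

Alternatives are MPI_Pack/MPI_Unpack into a single buffer, or an MPI derived
datatype per message, but the size still has to travel ahead of (or with) the
data.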
Thanks for the report. Someone reported pretty much the same issue to me
off-list a few days ago for RHEL5.
It looks like RHEL5 / 6 ship with Autoconf 2.63, and have a /usr/lib/rpm/macros
that defines %configure to include options such as --program-suffix. We
bootstrapped Open MPI v1.5 with
Yes, sorry. I did mean 1.5. In my case, going back to 1.4.3 solved my
OOM problem.
On Sun, Oct 17, 2010 at 4:57 PM, Ralph Castain wrote:
> There is no OMPI 2.5 - do you mean 1.5?
>
> On Oct 17, 2010, at 4:11 PM, Brian Budge wrote:
>
>> Hi Jody -
>>
>> I noticed this exact same thing the other day
Hi all -
I just ran a small test to find out the overhead of an MPI_Recv call
when no communication is occurring. It seems quite high. I noticed
during my google excursions that openmpi does busy waiting. I also
noticed that the option to -mca mpi_yield_when_idle seems not to help
much (in fact
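For what it's worth, one possible workaround (my own sketch, not something
from this thread) is to poll with MPI_Iprobe and sleep between polls instead
of sitting in a blocking MPI_Recv, trading a little latency for an idle CPU:

#include <mpi.h>
#include <unistd.h>   /* usleep */

/* Block for a message with tag `tag` without spinning at 100% CPU. */
static void polite_recv(void *buf, int count, MPI_Datatype type,
                        int tag, MPI_Comm comm, MPI_Status *status)
{
    int flag = 0;
    while (!flag) {
        MPI_Iprobe(MPI_ANY_SOURCE, tag, comm, &flag, status);
        if (!flag)
            usleep(100);   /* yield the core between polls */
    }
    MPI_Recv(buf, count, type, status->MPI_SOURCE, tag, comm, status);
}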