Has anyone gotten Open MPI 1.2.4 to compile with the latest Intel
compilers 10.1.007 and Leopard? I can get Open MPI 1.2.4 to build
with GCC + Fortran IFORT 10.1.007, but I can't get any configuration
to work with Intel's 10.1.007 compilers.
The configuration completes, but the compilati…
Hello Ken,
This is a known bug, which is fixed in the upcoming 1.2.5 release. We
expect 1.2.5 to come out very soon; we should have a new release
candidate for 1.2.5 posted by tomorrow.
See these tickets about the bug if you care to look:
https://svn.open-mpi.org/trac/ompi/ticket/1166
https://sv…
I cannot reproduce the error. Please make sure you have the
lib/openmpi/mca_pml_v.so file in your build. If you don't, maybe you
forgot to run autogen.sh at the root of the trunk when you
removed .ompi_ignore.
If this does not fix the problem, please let me know your command line
options…
I recently ran into a problem with GATHERV while running some randomized
tests on my MPI code. The problem seems to occur when running
MPI_Gatherv with a displacement on a communicator with a single process.
The code listed below exercises this errant behavior. I have tried it
on OpenMPI 1.1.2 an…
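A minimal sketch of the scenario described above (an illustrative
reconstruction, not the poster's original program): MPI_Gatherv on a
single-process communicator with a nonzero displacement at the root.

/* Sketch: gather 4 ints from a lone process, placing them at
 * offset 2 in the receive buffer via the displs array. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int sendbuf[4] = {1, 2, 3, 4};
    int recvbuf[8] = {0};
    int recvcounts[1] = {4};
    int displs[1]     = {2};   /* nonzero displacement, single process */

    MPI_Init(&argc, &argv);
    MPI_Gatherv(sendbuf, 4, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT,
                0, MPI_COMM_SELF);
    printf("recvbuf[2] = %d (expected 1)\n", recvbuf[2]);
    MPI_Finalize();
    return 0;
}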
Mmm, I'll investigate this today.
Aurelien
On Dec 11, 2007, at 08:46, Thomas Ropars wrote:
Hi,
I've tried to test the message logging component vprotocol pessimist.
(svn checkout revision 16926)
When I run an MPI application, I get the following error:
mca: base: component_find: unable to ope…
>This is an implementation detail. You should avoid relying on such
>things in portable MPI applications. The safe assumption here is
>that MPI_Bsend always copies the buffer, as described in the MPI standard.
I'm fully aware of the MPI standard, and the program will be
standard-compliant. Ho…
On Dec 11, 2007, at 10:33 AM, Gleb Natapov wrote:
On Tue, Dec 11, 2007 at 10:27:32AM -0500, Bradley, Peter C. (MIS/CFD) wrote:
In OpenMPI, does MPI_Bsend always copy the message to the user-specified
buffer, or will it avoid the copy in situations where it knows the send can
complete?
If…
On Tue, Dec 11, 2007 at 10:27:32AM -0500, Bradley, Peter C. (MIS/CFD) wrote:
> In OpenMPI, does MPI_Bsend always copy the message to the user-specified
> buffer, or will it avoid the copy in situations where it knows the send can
> complete?
If the message size is smaller than the eager limit, Open MPI…
In OpenMPI, does MPI_Bsend always copy the message to the user-specified
buffer, or will it avoid the copy in situations where it knows the send can
complete?
I recognize bsend is generally to be avoided, but I have a need to emulate
an in-house message-passing library that guarantees that writes…
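For reference, the usual MPI_Bsend pattern under discussion is sketched
below; whether the copy into the attached buffer actually happens for
small (eager-sized) messages is exactly the implementation detail being
debated in this thread.

/* Sketch: buffered send with an explicitly attached buffer.
 * Run with at least two ranks, e.g. mpirun -np 2. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, msg = 42;
    int bufsize = MPI_BSEND_OVERHEAD + sizeof(int);
    void *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        MPI_Finalize();
        return 1;
    }

    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);

    if (rank == 0) {
        /* May buffer the message so the call returns immediately. */
        MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* Detach blocks until all buffered sends have completed. */
    MPI_Buffer_detach(&buf, &bufsize);
    free(buf);
    MPI_Finalize();
    return 0;
}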
On 12/10/07, Jeff Squyres wrote:
> Brian / Lisandro --
> I don't think that I heard back from you on this issue. Would you
> have major heartburn if I remove all linking of our components against
> libmpi (etc.)?
>
> (for a nicely-formatted refresher of the issues, check out
> https://svn.open-m…
Hi,
I've tried to test the message logging component vprotocol pessimist.
(svn checkout revision 16926)
When I run an MPI application, I get the following error:
mca: base: component_find: unable to open vprotocol pessimist:
/local/openmpi/lib/openmpi/mca_vprotocol_pessimist.so: undefined sy…
Neeraj,
The rationale is clearly explained in the MPI standard. Here is the
relevant paragraph from section 7.3.2:
The "in place" operations are provided to reduce unnecessary memory
motion by both the MPI implementation and by the user. Note that while
the simple check of testing wheth…
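As an illustration of those rules (a sketch, not taken from this
thread): at the root of MPI_Reduce, passing MPI_IN_PLACE as the send
buffer tells the library to take the root's contribution from, and
write the result back to, the receive buffer.

/* Sketch: in-place reduction at the root, avoiding the extra
 * send buffer described above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    value = rank + 1;

    if (rank == 0) {
        /* Root: input and output share the buffer 'value'. */
        MPI_Reduce(MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        printf("sum = %d\n", value);
    } else {
        /* Non-roots pass their send buffer as usual. */
        MPI_Reduce(&value, NULL, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}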
Thanks George. But why does the user need to specify it? The API could
check the addresses of the input and output buffers. Is there some extra
advantage of MPI_IN_PLACE over automatically detecting it using pointers?
-Neeraj
On Tue, 11 Dec 2007 06:10:06 -0500, Open MPI Users wrote: Neer…
Neeraj,
MPI_IN_PLACE is defined by the MPI standard in order to allow the
users to specify that the input and output buffers for the collectives
are the same. Moreover, not all collectives support MPI_IN_PLACE, and
for those that do, some strict rules apply. Please read the
collecti…
Hello everyone. While going through the collective algorithms, I came
across the preprocessor define MPI_IN_PLACE, which is (void *)1. It is
always being compared against the source buffer (sbuf). My question is:
when would the condition MPI_IN_PLACE == sbuf be true? As far as I
understand, sbuf is the address…