The issue is described in the ticket that I cited -- we used a newer version of
the GNU Autotools to bootstrap the v1.5 series than the v1.4 series. The RPM
macros that ship with RHEL 5 and 6 (and I think SLES 11?) don't seem to be
compatible with this version -- so I'm not quite sure what the
Hi Jim
Please read what the Open MPI folks say about the 1.5 release:
"PLEASE NOTE: According to Open MPI's release methodology, the v1.5
series is a "feature release" series. This means that it has rich new
features that we think are tested and stable, but they are not as mature
as the stabl
On Wed, Oct 27, 2010 at 2:29 AM, Jeremiah Willcock wrote:
> On Tue, 26 Oct 2010, Jeff Squyres wrote:
>
>> Open MPI users --
>>
>> I took a little heat at the last MPI Forum for not having Open MPI be
>> fully compliant with MPI-2.2 yet (OMPI is compliant with MPI-2.1).
>> Specifically, there's
By "build from tarball," are you saying that I can build RPMs from the
tarball and it will work?
(Keep in mind that with ROCKS, all software must be made into RPMs for
installation.)
--Jim
On Tue, Nov 2, 2010 at 10:22 AM, Jeff Squyres wrote:
> Jim --
>
> I have an open issue about exactly this wi
Jim --
I have an open issue about exactly this with Red Hat. I am awaiting guidance
from them on how to fix it.
https://svn.open-mpi.org/trac/ompi/ticket/2611
The only workaround for the moment is to build from tarball, not RPM.
On Nov 2, 2010, at 12:47 PM, Jim Kusznir wrote:
> Hi
Hi all:
I finally decided to rebuild openmpi on my cluster (last built when
1.3.2 was current). I have a ROCKS cluster, so I need to build RPMs
to install across the cluster on rebuilds. Previously, I did so with
the following command:
rpmbuild -bb --define 'install_in_opt 1' --define 'install_mo
On Nov 2, 2010, at 6:21 AM, Jerome Reybert wrote:
> Each host_comm communicator groups tasks by machine. I ran this version,
> but performance is worse than the current version (each task performing its
> own Lapack function). I have several questions:
> - in my implementation, is MPI_Bc
I'm guessing that our configure script doesn't properly handle directories
with spaces in their names.
Can you re-build in a directory whose absolute path does not contain a
space and see if the problem goes away?
On Nov 1, 2010, at 3:47 PM, Carrasco, Cesar J. wrote:
> I am trying to install O
On Nov 2, 2010, at 4:57 AM, jody wrote:
> So I guess the basic question is:
> is it permitted to rename Open MPI installations, and if so, how is
> this properly done (since a simple mv doesn't work)?
Yes: http://www.open-mpi.org/faq/?category=building#installdirs
--
Jeff Squyres
jsquy...@cisco.
On 2 Nov 2010, at 10:21, Jerome Reybert wrote:
> - in my implementation, is MPI_Bcast aware that it should use shared-memory
> communication? Does the data go through the network? It seems that it does,
> considering the first results.
> - are there any other methods to group tasks by machine,
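[Editor's note: a minimal sketch of the "group tasks by machine" idea discussed in this thread, not code from any of the posts. It assumes the MPI-2 era API available in Open MPI 1.4/1.5 (no MPI-3 MPI_Comm_split_type), and builds a per-host communicator named host_comm, following the quoted question, by splitting MPI_COMM_WORLD on the processor name.]

/* Sketch: one communicator per host, built by splitting on the processor
 * name.  The color is the lowest world rank running on the same host, so
 * there are no hash collisions. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size, len, color, i;
    char name[MPI_MAX_PROCESSOR_NAME];
    char *all;
    MPI_Comm host_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    memset(name, 0, sizeof(name));
    MPI_Get_processor_name(name, &len);

    /* Gather every rank's processor name so each rank can find the lowest
     * world rank that shares its host. */
    all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
    MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, MPI_COMM_WORLD);

    color = rank;
    for (i = 0; i < rank; i++) {
        if (strcmp(&all[i * MPI_MAX_PROCESSOR_NAME], name) == 0) {
            color = i;
            break;
        }
    }

    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &host_comm);

    /* Collectives on host_comm only involve ranks on the same machine, so
     * Open MPI can service them over its shared-memory transport. */

    MPI_Comm_free(&host_comm);
    free(all);
    MPI_Finalize();
    return 0;
}

[Whether a broadcast on such a communicator actually stays in shared memory still depends on Open MPI's collective/BTL selection, which is what the question above is asking about.]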
Hello,
I am using Open MPI 1.4.2 and 1.5. I am working on a very large piece of
scientific software. The source code is huge and I don't have a lot of freedom
in this code. I can't even force the user to define a topology with mpirun.
At the moment, the software uses MPI in a very classical way: in a clu
Hi Jack
> the buffersize is the same in two iterations.
This doesn't help if the message that is sent is larger than the
buffer size in the second iteration.
But as David says, without the details of how the message is sent and of any
potential changes to the receive buffer, one can't make a precise diagnosis.
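[Editor's note: a common way to avoid that failure mode is to probe the incoming message and size the receive buffer from its actual length. The sketch below is not code from the thread; the element type MPI_DOUBLE and the helper name recv_exact are assumptions for illustration.]

#include <mpi.h>
#include <stdlib.h>

/* Receive one message of MPI_DOUBLE from `src` with tag `tag` on `comm`,
 * allocating exactly as many elements as were actually sent.  The caller
 * owns (and must free) the returned buffer. */
double *recv_exact(int src, int tag, MPI_Comm comm, int *count_out)
{
    MPI_Status status;
    int count;
    double *buf;

    MPI_Probe(src, tag, comm, &status);          /* wait for the envelope  */
    MPI_Get_count(&status, MPI_DOUBLE, &count);  /* true message length    */

    buf = malloc((size_t)count * sizeof(double));
    MPI_Recv(buf, count, MPI_DOUBLE, src, tag, comm, MPI_STATUS_IGNORE);

    *count_out = count;
    return buf;
}

[With this pattern, a second iteration that sends a larger message is received into a correctly sized buffer instead of overflowing or truncating against a fixed buffersize.]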
Hi,
@trent
No, I didn't use the other calls, because I think they are all the
same (on my installation they are all soft links to opal_wrapper).
@tim
Gentoo on 64-bit does have lib and lib64 directories for the
respective architectures (at / and at /usr),
but in my 64-bit installation of openMP