Ok, here's an old thread :)
It turns out I'm having the same issues with OMPI 1.4.3 (the current
stable release; SRPM downloaded from the Open MPI website).
My build command (which was the same for the 1.3.x build originally
cited here) is:
(working command):
rpmbuild -bb --define 'install_in_opt 1' --define 'in
Thanks for filing a bug -- I've asked Ralph to take a first crack at it...
On Nov 6, 2010, at 3:32 AM, Jed Brown wrote:
> Previous versions would set mpi_yield_when_idle automatically when
> oversubscribing a node. I assume this behavior was not intentionally
> changed, but the parameter is n
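As a stopgap, the parameter can still be set by hand. Below is a minimal
sketch, assuming the parameter is still named mpi_yield_when_idle and that
Open MPI picks up OMPI_MCA_-prefixed environment variables during MPI_Init
(the usual route is simply passing --mca mpi_yield_when_idle 1 to mpirun):

#include <stdlib.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    /* Force the degraded (yielding) progress mode by hand instead of
     * relying on the runtime to detect oversubscription.  This relies on
     * Open MPI reading OMPI_MCA_-prefixed environment variables during
     * MPI_Init; setenv() is POSIX. */
    setenv("OMPI_MCA_mpi_yield_when_idle", "1", 1);

    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("mpi_yield_when_idle forced on for this run\n");

    MPI_Finalize();
    return 0;
}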
On Wed, Nov 10, 2010 at 22:08, e-mail number.cruncher <number.crunc...@ntlworld.com> wrote:
> In short, someone from Intel submitted a glibc patch that implements a
> faster memcpy on e.g. the Intel i7; it respects the ISO C definition, but
> does things backwards.
>
However, the commit message and mailing l
On 10 November 2010 17:25, Jed Brown wrote:
>
> Is the memcpy-back code ever executed when called as memcpy()? I can't
> imagine why it would be, but it would make plenty of sense to use it inside
> memmove when the destination is at a higher address than the source.
> Jed
Oh yes. And, after fur
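For reference, here is a toy sketch of the direction choice Jed describes (my
own illustration, not glibc's actual code): a forward copy is only safe when
the destination starts below the source, so an overlap-aware copy has to run
backwards when the destination sits at a higher address.

#include <stddef.h>

/* Toy overlap-aware copy, illustrating why a backward loop exists at all:
 * if dst overlaps src from above, a forward copy would clobber bytes of
 * src before they are read, so the copy must run high-to-low instead. */
static void *toy_memmove(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;

    if (d < s) {                      /* safe to copy low-to-high */
        for (size_t i = 0; i < n; i++)
            d[i] = s[i];
    } else if (d > s) {               /* must copy high-to-low */
        for (size_t i = n; i > 0; i--)
            d[i - 1] = s[i - 1];
    }
    return dst;
}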
On Wed, Nov 10, 2010 at 18:25, Jed Brown wrote:
> Is the memcpy-back code ever executed when called as memcpy()? I can't
> imagine why it would be, but it would make plenty of sense to use it inside
> memmove when the destination is at a higher address than the source.
Apparently the backward
Have you double-checked that your firewall settings, TCP/IP settings, and SSH
keys are all set up correctly on all machines, including the host?
On Wed, Nov 10, 2010 at 2:57 AM, Grzegorz Maj wrote:
> Hi all,
> I've got a problem with sending messages from one of my machines. It
> appears during MPI_Se
On Wed, Nov 10, 2010 at 18:11, Number Cruncher wrote:
> Just some observations from a concerned user with a temperamental Open MPI
> program (1.4.3):
>
> Fedora 14 (just released) includes glibc-2.12, which has optimized versions
> of memcpy, including one that copies backward.
>
> http://sourceware.org/gi
Just some observations from a concerned user with a temperamental Open
MPI program (1.4.3):
Fedora 14 (just released) includes glibc-2.12, which has optimized
versions of memcpy, including one that copies backward.
http://sourceware.org/git/?p=glibc.git;a=commitdiff;h=6fb8cbcb58a29fff73eb2101b34caa19a7f
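To make the failure mode concrete: ISO C leaves memcpy() undefined for
overlapping buffers, so code that silently relied on the old forward-copying
implementation can break once glibc starts copying backwards. A minimal
illustration follows (the one-byte shift is a made-up example, not anything
from Open MPI's sources); memmove() is the portable fix:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Shift a string one byte to the left -- source and destination overlap. */
    char a[] = "abcdefgh";
    char b[] = "abcdefgh";

    /* Undefined behaviour: memcpy may copy forwards (which happens to work
     * for this overlap) or backwards (which clobbers not-yet-read source
     * bytes), so the result depends on the glibc version and CPU path. */
    memcpy(a, a + 1, strlen(a + 1) + 1);

    /* Defined behaviour: memmove handles overlap in either direction. */
    memmove(b, b + 1, strlen(b + 1) + 1);

    printf("memcpy : %s\n", a);   /* unspecified -- may be corrupted */
    printf("memmove: %s\n", b);   /* always "bcdefgh" */
    return 0;
}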
Note: I had the same failure with OMPI 1.4.2
libtool: link: f90 -G .libs/mpi.o .libs/mpi_sizeof.o
.libs/mpi_comm_spawn_multiple_f90.o .libs/mpi_testall_f90.o
.libs/mpi_testsome_f90.o .libs/mpi_waitall_f90.o .libs/mpi_waitsome_f90.o
.libs/mpi_wtick_f90.o .libs/mpi_wtime_f90.o -rpath
/afs/c
Nysal --
Does the same issue occur in OMPI 1.5?
Should we put in a local patch for OMPI 1.4.x and/or OMPI 1.5? (we've done
this before while waiting for upstream Libtool patches to be released, etc.)
On Nov 10, 2010, at 2:19 AM, Nysal Jan wrote:
> Hi Brian,
> This problem was first reported
They are being specified as environment variables, but it looks like the
problem's already been documented and fixed. Thanks.
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Prentice Bisbal
Sent: Tuesday, November 09, 2010 1:04 PM
To:
Thanks, Nysal.
The only problem I'm having now is matching a Libtool version (e.g. 2.2.8)
with an Open MPI version. I'm sorry if it's a silly question, but can you tell
me in which version of Open MPI this problem will go away?
Thanks, again.
Brian
From: users-boun...@open-mpi.org [mailto:u
Hi all,
I've got a problem with sending messages from one of my machines. It
appears during MPI_Send/MPI_Recv and MPI_Bcast. The simplest case I've
found is two processes, rank 0 sending a simple message and rank 1
receiving this message. I execute these processes using mpirun with
-np 2.
- when bo
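For anyone who wants to try this locally, a minimal version of the
two-process case described above might look like the sketch below (my own
reconstruction, not the original code); run it with mpirun -np 2:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Rank 0 sends a single integer to rank 1. */
        MPI_Send(&value, 1, MPI_INT, 1, /* tag */ 0, MPI_COMM_WORLD);
        printf("rank 0: sent %d\n", value);
    } else if (rank == 1) {
        int received = 0;
        MPI_Recv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1: received %d\n", received);
    }

    MPI_Finalize();
    return 0;
}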
Hi Brian,
This problem was first reported by Paul H. Hargrove in the developer mailing
list. It is a bug in libtool and has been fixed in the latest release
(2.2.8). More details are available here -
http://www.open-mpi.org/community/lists/devel/2010/10/8606.php
Regards
--Nysal
On Wed, Nov 10, 20
Recently I installed Open MPI 1.4.3 on my cluster. I found that the
SYSTEM CPU time is higher than with older 1.4.x versions when I ran
our FEM program. Furthermore, OMPI 1.4.3 is a little bit slower
than 1.4.2. What is the difference between these two versions, or
what affects the "SYSTEM CPU" and executio
Hello,
I am using Open MPI 1.4.1. I have a small test case that calls
MPI_Test() a very large number of times. I see one or two random time
spikes when this happens. On the other hand, if I throttle the MPI_Test()
calls with a timeout, the problem disappears.
For example, with no timeout, the results I'm gettin
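Here is a sketch of the throttled variant being described, i.e. re-testing a
pending request at most once per interval instead of spinning on MPI_Test()
as fast as possible (the helper name and the 1 ms interval are illustrative,
not taken from the original test case):

#include <mpi.h>
#include <stdio.h>

/* Poll a pending request, but call MPI_Test at most once per 'interval'
 * seconds instead of spinning on it as fast as possible. */
static void wait_with_throttle(MPI_Request *req, double interval)
{
    int done = 0;
    double last = MPI_Wtime() - interval;   /* allow an immediate first test */

    while (!done) {
        double now = MPI_Wtime();
        if (now - last >= interval) {
            MPI_Test(req, &done, MPI_STATUS_IGNORE);
            last = now;
        }
        /* ...the application could do useful work here between tests... */
    }
}

int main(int argc, char **argv)
{
    int rank, buf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        buf = 7;
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        MPI_Irecv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        wait_with_throttle(&req, 0.001);    /* test at most every ~1 ms */
        printf("rank 1 got %d\n", buf);
    }

    MPI_Finalize();
    return 0;
}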