Hi all,
A simple program on my 4-node ROCKS cluster runs fine with the command:
/opt/openmpi/bin/mpirun -np 4 -machinefile machines ./mpi-ring
Another, bigger program runs fine on the head node only with the command:
cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/flo2d
But with the command:
cd /sp
From the look of it, this is not an OMPI problem, but a problem with
your paths. You need to make sure that libGLU.so.1 can be found by the
system at runtime. This is true for _all_ the systems that are in your
machinefile. So make sure that on all systems the path to that library
is in the LD_LIBRARY_PATH.
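For example, you can check the dependency with ldd and, if needed, forward
the variable to the remote nodes with mpirun's -x option (the library
directory below is only a placeholder; use whatever path libGLU.so.1
actually lives in on your nodes):

  ldd ../bin/flo2d | grep libGLU
  export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
  /opt/openmpi/bin/mpirun -x LD_LIBRARY_PATH -np 4 -machinefile machines ../bin/flo2d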
Please try using the full hostname ( drdb0235.en.desres.deshaw.com )
in the hostfile/rankfile.
It should help.
Lenny.
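For example, something like this (the slot numbers are only illustrative):

hostfile:
  drdb0235.en.desres.deshaw.com slots=4

rankfile:
  rank 0=drdb0235.en.desres.deshaw.com slot=0
  rank 1=drdb0235.en.desres.deshaw.com slot=1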
On Mon, Aug 31, 2009 at 7:43 PM, Ralph Castain wrote:
> I'm afraid the rank-file mapper in 1.3.3 has several known problems that
> have been described on the list by users. We hopefu
I changed the error message; I hope it will be clearer now.
r21919.
On Tue, Sep 1, 2009 at 2:13 PM, Lenny Verkhovsky wrote:
> Please try using the full hostname ( drdb0235.en.desres.deshaw.com )
> in the hostfile/rankfile.
> It should help.
> Lenny.
>
> On Mon, Aug 31, 2009 at 7:43 PM, Ralph Castain
I'm receiving the error posted at the bottom of this message with a code
compiled with Intel Fortran/C Version 11.1 against OpenMPI version 1.3.2.
The same code works correctly when compiled against MPICH2. (We have
re-compiled with OpenMPI to take advantage of newly-installed Infiniband
hardware
Hi Marcus,
Marcus Daniels wrote:
Hi,
I'm trying to do passive one-sided communication, unlocking a receive
buffer when it is safe and then re-locking it when data has arrived.
Locking also occurs for the duration of a send.
I also tried using post/wait and start/put/complete, but with that I
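In case it helps, this is a stripped-down sketch of the lock/put/unlock
pattern I am describing (not my actual code; the window contents, counts,
and ranks are just placeholders):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes one int; only rank 1's window is targeted below. */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 0) {
        int value = 42;
        /* Passive-target access epoch: lock, put, unlock.
         * The put is only guaranteed complete at the target after unlock. */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_unlock(1, win);
    }

    MPI_Barrier(MPI_COMM_WORLD);      /* make sure the put has happened */

    if (rank == 1) {
        /* Local access epoch before reading the exposed buffer. */
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        printf("rank 1 sees %d\n", buf);
        MPI_Win_unlock(1, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

(Run with at least two processes, e.g. mpirun -np 2 ./a.out.)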
A. Austen wrote:
On Fri, 28 Aug 2009 10:16 -0700, "Eugene Loh" wrote:
Big topic and actually the subject of much recent discussion. Here are
a few comments:
1) "Optimally" depends on what you're doing. A big issue is making
sure each MPI process gets as much memory bandwidth
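(As one concrete example, process placement can be influenced from the
mpirun command line. The exact options depend on the Open MPI release you
have installed, so treat these as illustrative only:

  mpirun -np 4 --mca mpi_paffinity_alone 1 ./a.out
  mpirun -np 4 --bind-to-core --report-bindings ./a.out

The second form is only available in newer 1.3.x/1.4 releases.)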
Hi,
I am trying to install openmpi 1.3.3 under OSX 10.6 (Snow Leopard)
using the 11.1.058 Intel compilers. Configure and build seem to work
fine. However, trying to run ompi_info after install immediately causes a
segmentation fault, without any additional information being printed.
Did anyone h
Did you try using the Apple compiler too?
On 09-09-01, at 19:31, Marcus Herrmann wrote:
Hi,
I am trying to install openmpi 1.3.3 under OSX 10.6 (Snow Leopard)
using the 11.1.058 intel compilers. Configure and build seem to work
fine. However trying to run ompi_info after install causes di
I haven't installed my copy of Snow Leopard yet - I'm not sure anyone
has tested it. Too soon!
On Sep 1, 2009, at 5:53 PM, Luis Vitorio Cargnini wrote:
Did you try using the Apple compiler too?
On 09-09-01, at 19:31, Marcus Herrmann wrote:
Hi,
I am trying to install openmpi 1.3.3 under
Marcus,
What version of Open MPI ships with 10.6? Are you making sure that you
are getting the includes and libraries for 1.3.3 and not the native
Apple version of Open MPI?
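For example, something along these lines should show which install you are
actually picking up (paths will of course differ on your machine):

  which mpicc ompi_info
  ompi_info | grep "Open MPI:"
  mpicc -showme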
Doug Reeder
On Sep 1, 2009, at 4:31 PM, Marcus Herrmann wrote:
Hi,
I am trying to install openmpi 1.3.3 under OSX 10.
I just tried the GNU compilers that come with 10.6, and they seem to
work (at least ompi_info doesn't crash). So it appears to be an Intel
compiler issue on 10.6. I just checked the Intel compiler pages, and
apparently they don't yet support Snow Leopard.
Thanks all for the input.
Marcus
Are all the commercial PS3 games developed in a "parallel way" (unlike
the sequential style of Xbox development)?
Do the developers have to *think* in a parallel way and use MPI_*-like
commands to communicate with the SPEs?
I am not a game developer, but I don't think any games are actually
using MPI in general, let alone to communicate with the SPEs (if that
isn't what you were implying, my mistake - I apologize). If you're
interested in learning about programming on the Cell BE, take a look at
IBM's "Redbook" on
Sorry for the delay in replying...
On Sep 1, 2009, at 1:11 AM, Shaun Jackman wrote:
> Looking at the source code of MPI_Request_get_status, it...
> calls OPAL_CR_NOOP_PROGRESS()
> returns true in *flag if request->req_complete
> calls opal_progress()
> returns false in *flag
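For reference, the usage pattern in question looks roughly like this (a
minimal sketch; the message size, tag, and ranks are made up):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, flag = 0, data = 0;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Irecv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        while (!flag) {
            /* Unlike MPI_Test, this does not free the request; each call
             * should also drive the progress engine (opal_progress). */
            MPI_Request_get_status(req, &flag, &status);
        }
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* now release the request */
        printf("got %d from rank 1\n", data);
    } else if (rank == 1) {
        int value = 7;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}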
Keep in mind th