Hi Dave
Dave Love wrote:
Gus Correa writes:
Or run a serial version on the same set of machines,
compiled in similar ways (compiler version, opt flags, etc)
to the parallel versions, and compare results.
If the results don't differ, then you can start blaming MPI.
That wouldn't show that th
On Tue, 27 Apr 2010, Frederik Himpe wrote:
OpenMPI is installed in its own prefix
(/shared/apps/openmpi/gcc-4.4/1.4.1), and can be loaded by the
environment module (http://modules.sourceforge.net/) openmpi.
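For readers unfamiliar with Environment Modules, a minimal modulefile for an Open MPI install under that prefix might look like the following (a sketch only; the prefix is taken from the message above, everything else is the standard PATH/LD_LIBRARY_PATH/MANPATH convention):

```tcl
#%Module1.0
## Sketch of a modulefile for Open MPI 1.4.1 built with gcc-4.4
set prefix /shared/apps/openmpi/gcc-4.4/1.4.1

prepend-path PATH            $prefix/bin
prepend-path LD_LIBRARY_PATH $prefix/lib
prepend-path MANPATH         $prefix/share/man
```

Loading it with `module add openmpi` then puts `mpirun` and `mpicc` from that prefix on the PATH.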
Now I can successfully run this pe job:
#!/bin/bash
#$ -N test
#$ -q all.q
#$ -pe openm
I'm not intimately familiar with boost++ -- you might want to try the "hello
world" and "ring" example programs in the OMPI examples/ directory as a
baseline.
Additionally, try executing a non-MPI program such as "hostname" to verify that
your remote connectivity is working. For example:
$ mp
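The suggested sanity checks might look like this on the command line (the hostnames are placeholders; `hello_c` and `ring_c` are the programs built from the Open MPI examples/ directory):

```
$ mpirun --host node1,node2 hostname
$ mpirun --host node1,node2 ./hello_c
$ mpirun -np 4 ./ring_c
```

If `hostname` fails across hosts, the problem is remote connectivity (ssh, PATH, firewall), not MPI itself.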
Frederik Himpe writes:
> bash: module: line 1: syntax error: unexpected end of file
> bash: error importing function definition for `module'
It's nothing to do with open-mpi -- the job hasn't even started
executing at that point. Consult the archives of the SGE users list and
the issue tracker.
(The `module` shell function is exported into the job environment by the submitting shell; when SGE mangles that exported function definition, bash reports exactly this "error importing function definition" message before the job script runs.)
Gus Correa writes:
> Or run a serial version on the same set of machines,
> compiled in similar ways (compiler version, opt flags, etc)
> to the parallel versions, and compare results.
> If the results don't differ, then you can start blaming MPI.
That wouldn't show that there's actually any Ope
Hi all,
I'm writing a small program where the process of rank 0 sends "alo
alo" to the process of rank 1, and process 1 then displays this message on
screen. I am using the Boost.MPI library, but the result stays the same when I
use the plain MPI standard.
The program works locally ( that means: mpirun --host l
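A minimal plain-MPI version of that exchange (a sketch only; error handling omitted, and the file name in the compile line is an assumption) would look like:

```c
/* Sketch: rank 0 sends "alo alo" to rank 1, which prints it.
 * Compile with: mpicc sendrecv.c -o sendrecv
 * Run with:     mpirun -np 2 ./sendrecv
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    char msg[16];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        strcpy(msg, "alo alo");
        /* +1 so the terminating NUL travels with the string */
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```

If this works with `--host localhost` but not across machines, the problem is in the launch environment rather than the send/receive logic.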
On Tue, 2010-04-27 at 07:52 -0600, Ralph Castain wrote:
> Looks to me like you have an error in the openmpi module file...
I cannot trigger this error by running module add openmpi/gcc-4.4, so I
don't think the module file itself is erroneous.
Just in case, this is what it looks lik
Hi Terry,
> How does the stack for the non-SM BTL run look, I assume it probably is the
> same? Also, can you dump the message queues for rank 1? What's interesting
> is you have a bunch of pending receives, do you expect that to be the case
> when the MPI_Gatherv occurred?
It turns out we
Looks to me like you have an error in the openmpi module file...
On Apr 27, 2010, at 6:38 AM, Frederik Himpe wrote:
> I'm using SGE 6.1 and OpenMPI 1.4.1 built with gridengine support.
>
> I've got this parallel environment defined in SGE:
>
> pe_name openmpi
> slots 100
>
Can you provide a small chunk of code that replicates the problem, perchance?
On Apr 27, 2010, at 9:22 AM, Terry Dontje wrote:
> How does the stack for the non-SM BTL run look, I assume it probably is the
> same? Also, can you dump the message queues for rank 1? What's interesting
> is you have a bunch of pending receives, do you expect that to be the case
> when the MPI_Gatherv occurred?
How does the stack for the non-SM BTL run look, I assume it probably is
the same? Also, can you dump the message queues for rank 1? What's
interesting is you have a bunch of pending receives, do you expect that
to be the case when the MPI_Gatherv occurred?
--td
Teng Lin wrote:
Hi,
We rece
I'm using SGE 6.1 and OpenMPI 1.4.1 built with gridengine support.
I've got this parallel environment defined in SGE:
pe_name openmpi
slots 100
user_lists NONE
xuser_lists NONE
start_proc_args /bin/true
stop_proc_args /bin/true
allocation_rule $fill_up
co
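For comparison, a typical complete PE definition for tight Open MPI/SGE integration looks roughly like this (a generic sketch, not the poster's exact configuration; in particular `control_slaves TRUE` and `job_is_first_task FALSE` are the usual recommendations for letting Open MPI launch its daemons under SGE's control):

```
pe_name            openmpi
slots              100
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
```

Such a PE is attached to a queue (e.g. all.q) and requested from the job script with `#$ -pe openmpi <slots>`.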
Hello,
Are you using a heterogeneous environment? There was a similar issue
recently with a segfault in a mixed x86 and x86_64 environment. Here is
the corresponding thread in ompi-devel:
http://www.open-mpi.org/community/lists/devel/2010/04/7787.php
This was fixed in trunk and will likely be fixed in next 1