Jeff,
With the proper MPI_Finalize added at the end of the main function,
your program works fine with the current version of Open MPI up to 32
processors. Here is the output I got for 4 processors:
I am 2 of 4 WORLD processors
I am 3 of 4 WORLD processors
I am 0 of 4 WORLD processors
I am 1 of 4 WORLD processors
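For reference, a minimal sketch of this kind of program -- an MPI "hello
world" that prints rank and size and calls MPI_Finalize before exiting.
The file name and output wording are illustrative, not Jeff's original code:

/* hello_world.c -- minimal sketch, not the original program.
 * Compile with:  mpicc hello_world.c -o hello_world
 * Run with:      mpirun -np 4 ./hello_world
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("I am %d of %d WORLD processors\n", rank, size);

    /* Every MPI program must call MPI_Finalize exactly once before
     * exiting; omitting it can cause aborts or hangs at shutdown. */
    MPI_Finalize();
    return 0;
}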
Hi,
ANL suggested I post this question to you. This is my second
posting, but now with the proper attachments.
--- Begin Message ---
Hello All,
This will probably turn out to be my fault as I haven't used MPI in a
few years.
I am attempting to use an MPI implementation of a "nxtval" (see
Hello Open MPI users,
ANL suggested I post to you. BTW, the missing MPI_Finalize is not the
problem.
--jeff
--- Begin Message ---
On Wed, 20 Jun 2007, Jeffrey Tilson wrote:
> Hello All,
> This will probably turn out to be my fault as I haven't used MPI in a
> few years.
>
> I am attempting
Hello list,
I would appreciate recommendations on what to use for developing MPI
Python codes. I've seen several packages in the public domain -- mympi,
pypar, mpi python, mpi4py -- and it would be helpful to start in the
right direction.
Thanks,
--
Valmor de Almeida
ORNL
PS. I apologize if this messag
The Open MPI Team, representing a consortium of research, academic,
and industry partners, is pleased to announce the release of Open MPI
version 1.2.3. This release is mainly a bug fix release over the v1.2.2
release, but there are a few minor new features. We strongly
recommend that all users upgrade.
Just started working with the Open MPI / SLURM combo this morning. I can
successfully launch this job from the command line and it runs to
completion, but when launched from SLURM it hangs.
The processes appear to just sit with no apparent load on the compute
nodes even though SLURM indicates they are running.
Why not edit libtool to see what it is doing (it's just a script)?
Add a "set -x" as the second line and stand well back :-) -- you will
get a lot of output:
#! /bin/sh
set -x
Mostyn
On Wed, 20 Jun 2007, Andrew Friedley wrote:
I'm not seeing anything particularly relevant in the libtool
documentation.
On Jun 20, 2007, at 11:41 AM, Andrew Friedley wrote:
I'm not seeing anything particularly relevant in the libtool
documentation. I think this might be referring to hardcoding paths in
shared libraries?
Using pathf90 for both FC and F77 does not change anything. Should have
been more clear
I'm not seeing anything particularly relevant in the libtool
documentation. I think this might be referring to hardcoding paths in
shared libraries?
Using pathf90 for both FC and F77 does not change anything. Should have
been more clear in my first email -- gcc 3.4.5 using pathf90 for FC
wo
On Jun 19, 2007, at 11:35 AM, Alf Wachsmann wrote:
In line 568 of openmpi-1.2.2/orte/mca/pls/rsh/pls_rsh_module.c the call
"p = getpwuid(getuid());" returns an invalid shell on our compute nodes.
This leads to "pls:rsh: local csh: 0, local sh: 0", i.e. the local shell
is not defined and only
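For context, a standalone sketch of the lookup being described -- this is
illustrative only, not the actual code in pls_rsh_module.c:

/* shell_check.c -- illustrative sketch of the getpwuid() lookup
 * discussed above; not taken from the Open MPI sources.
 */
#include <stdio.h>
#include <unistd.h>
#include <pwd.h>

int main(void)
{
    /* Look up the password database entry for the current user. */
    struct passwd *p = getpwuid(getuid());

    if (p == NULL || p->pw_shell == NULL || p->pw_shell[0] == '\0') {
        fprintf(stderr, "could not determine the user's login shell\n");
        return 1;
    }

    /* The rsh launcher inspects this shell name to decide how to build
     * the remote command line; an unexpected value here is what leads
     * to the "local csh: 0, local sh: 0" message reported above. */
    printf("login shell: %s\n", p->pw_shell);
    return 0;
}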
It could be; I didn't mention it because this is building ompi_info,
a C++ application that should have no Fortran issues with it.
But then again, who knows? Maybe you're right :-) -- perhaps libtool
is just getting confused because you used g77 and pathf90 -- why not
use pathf90 for both FC and F77?
Isn't this another case of trying to use two different Fortran compilers
at the same time?
On Tue, 2007-06-19 at 20:04 -0400, Jeff Squyres wrote:
> I have not seen this before -- did you look in the libtool
> documentation? ("See the libtool documentation for more information.")
>
> On Jun 19
I had almost the same situation when I upgraded Open MPI from a very old
version to 1.2.2. All processes seemed to be stuck in MPI_Barrier; as a
workaround I just commented out all MPI_Barrier occurrences in my
program and it started to work perfectly.
greets, Marcin
Chris Reeves wrote:
(This tim
On Tue, Jun 19, 2007 at 11:24:24AM -0700, George Bosilca wrote:
> 1. I don't believe the OS releases the binding when we close the
> socket. As an example, on Linux the kernel sockets are released at a
> later moment. That means the socket might still be in use for the
> next run.
>
This is n
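For background on the point quoted above: a TCP port that was bound by a
previous run can linger (e.g. in TIME_WAIT) after the socket is closed.
A generic sketch of the usual way to allow immediate re-binding follows --
this illustrates SO_REUSEADDR in general, not what Open MPI's TCP code
actually does, and the port number is arbitrary:

/* reuse_port.c -- generic sketch of re-binding a recently used port
 * with SO_REUSEADDR; not taken from the Open MPI sources.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Allow bind() to succeed even if the port is still lingering from
     * a previous run whose socket has already been closed. */
    int one = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) < 0) {
        perror("setsockopt(SO_REUSEADDR)");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);   /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    if (listen(fd, 16) < 0) { perror("listen"); return 1; }

    close(fd);
    return 0;
}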