On Mar 3, 2006, at 6:42 PM, Xiaoning (David) Yang wrote:
      call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
     &                MPI_COMM_WORLD,ierr)
Can I use MPI_IN_PLACE in the MPI_REDUCE call? If I can, how?
Thanks for any help!
MPI_IN_PLACE is an MPI-2 construct, and is supported in Open MPI.
Jeff,
Thanks.
Here is a simple program from the book "Using MPI" that I want to modify to
use MPI_IN_PLACE.
      program main
      include "mpif.h"
      double precision PI25DT
      parameter (PI25DT = 3.141592653589793238462643d0)
      double precision mypi, pi, h, sum, x, f, a
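Not from the book or the thread, but a minimal sketch of how the reduction
step could be rewritten to use MPI_IN_PLACE, assuming the example's usual
"myid" rank variable: only the root passes MPI_IN_PLACE as the send buffer,
and it must copy its local contribution into the receive buffer first.

      if (myid .eq. 0) then
c        Root: pi already holds the local contribution; the send
c        buffer is replaced by MPI_IN_PLACE.
         pi = mypi
         call MPI_REDUCE(MPI_IN_PLACE, pi, 1, MPI_DOUBLE_PRECISION,
     &                   MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      else
c        Non-root ranks call MPI_REDUCE exactly as before.
         call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION,
     &                   MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      endif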
On Mar 1, 2006, at 11:08 AM, Benoit Semelin wrote:
call MPI_BCAST(boundary_cond,
8,MPI_CHARACTER,master,MPI_COMM_WORLD,mpi_err)
1
Error: Generic subroutine 'mpi_bcast' at (1) is not an intrinsic
subroutine
It looks like we goofed; we neglected to include F90 routines for the
CHARACTER type.
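Not from Jeff's reply, but a common stopgap while the F90 module lacks the
CHARACTER interface: compile the affected file against the Fortran 77
bindings (include 'mpif.h' instead of 'use mpi'), which perform no argument
checking and therefore accept CHARACTER buffers. A minimal sketch, with the
master rank and the 'periodic' value assumed for illustration:

      program bcast_char
      implicit none
      include 'mpif.h'
      integer, parameter :: master = 0
      character(len=8) :: boundary_cond
      integer :: myrank, mpi_err
      call MPI_INIT(mpi_err)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, mpi_err)
      if (myrank .eq. master) boundary_cond = 'periodic'
c     The F77 bindings accept any buffer type, so this compiles even
c     though the F90 module has no CHARACTER interface for MPI_BCAST.
      call MPI_BCAST(boundary_cond, 8, MPI_CHARACTER, master,
     &               MPI_COMM_WORLD, mpi_err)
      call MPI_FINALIZE(mpi_err)
      end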
On Mar 3, 2006, at 4:40 PM, Xiaoning (David) Yang wrote:
Does Open MPI support MPI_IN_PLACE? Thanks.
Yes.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
Does Open MPI support MPI_IN_PLACE? Thanks.
David
* Correspondence *
The lf95 compiler expects a filename that ends in .o if you give it the -o
option together with the -c option. Is there a reason the Makefile is trying
to make a file called "mpi_kinds.ompi_module" instead of "mpi_kinds.o"? If
the filename ends in .o, it compiles.
Sam Adams
General Dynamics - Network System
Brian,
Thank you so much! It is working now.
David
* Correspondence *
> From: Brian Barrett
> Reply-To: Open MPI Users
> Date: Thu, 2 Mar 2006 20:32:25 -0500
> To: Open MPI Users
> Subject: Re: [OMPI users] Problem running open mpi across nodes.
>
> On Mar 2, 2006, at 8:19 PM, Xiaoning (David) Yang wrote:
I'm trying to write a routine which unpicks user-defined datatypes
using MPI_Type_get_{envelope,contents}. The problem is that a derived
type returned by a call to MPI_Type_get_contents, when handed onwards
to MPI_Type_get_envelope, causes the system to bomb:
[suse10:15004] *** An error occurred
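Not the poster's routine, but a minimal sketch of the recursive pattern,
with the subroutine name and allocation bounds assumed for illustration:

      recursive subroutine unpick_type(dtype)
      implicit none
      include 'mpif.h'
      integer, intent(in) :: dtype
      integer :: nints, naddrs, ntypes, combiner, ierr, i
      integer, allocatable :: ints(:), dtypes(:)
      integer(kind=MPI_ADDRESS_KIND), allocatable :: addrs(:)
      call MPI_TYPE_GET_ENVELOPE(dtype, nints, naddrs, ntypes,
     &                           combiner, ierr)
c     Predefined (named) types are the leaves of the recursion.
      if (combiner .eq. MPI_COMBINER_NAMED) return
      allocate(ints(max(nints,1)), addrs(max(naddrs,1)),
     &         dtypes(max(ntypes,1)))
      call MPI_TYPE_GET_CONTENTS(dtype, nints, naddrs, ntypes,
     &                           ints, addrs, dtypes, ierr)
      do i = 1, ntypes
c        Recurse into each constituent type returned by get_contents.
         call unpick_type(dtypes(i))
      end do
c     A complete version would MPI_TYPE_FREE the derived handles in
c     dtypes() once it is finished with them.
      deallocate(ints, addrs, dtypes)
      end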
On Thu, 02 Mar 2006 03:55:46 -0700, Jeff Squyres
wrote:
That being said, I have been unable to get Open MPI to compile with
PGI 6.1 (but it does finish ./configure; it breaks during 'make').
Can you provide some details on what is going wrong?
We currently only have PGI 5.2 and 6.0 to test
Jeff --
I've tried what you told me and ran some tests:
cluster master machine
eth0, mpihosts_out --> for outside use (gets its own IP via DHCP)
eth1, mpihosts_cluster --> for cluster use (serves IPs to the cluster nodes)
--- TESTS 1,2 - openmpi-1.0.2a9 ---
1.- cd openmpi-1.0.1
2.-
Just to add an example that may help with this "disconnect" discussion:
attached is the code of a test that does the following, and it works
perfectly with Open MPI 1.0.1 (see the spawn sketch after the list):
1) master spawns slave1
2) master spawns slave2
3) exchange messages between master and slaves over the intercommunicator
4) sl
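Not the attached test, but a minimal sketch of the spawn / intercommunicator
pattern in Fortran; the 'slave' executable name and the message contents are
assumptions, not taken from the attachment.

      program master
      implicit none
      include 'mpif.h'
      integer :: intercomm, ierr, msg
      call MPI_INIT(ierr)
c     Spawn one copy of the (hypothetical) "slave" executable.
      call MPI_COMM_SPAWN('slave', MPI_ARGV_NULL, 1, MPI_INFO_NULL,
     &                    0, MPI_COMM_SELF, intercomm,
     &                    MPI_ERRCODES_IGNORE, ierr)
c     Ranks in the remote group are addressed through the
c     intercommunicator returned by MPI_COMM_SPAWN.
      msg = 42
      call MPI_SEND(msg, 1, MPI_INTEGER, 0, 0, intercomm, ierr)
      call MPI_COMM_DISCONNECT(intercomm, ierr)
      call MPI_FINALIZE(ierr)
      end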
Thanks for your answer. Your example addresses one possible situation,
where a parallel application is spawned by a driver with MPI_Comm_spawn,
or multiple parallel applications are spawned at the same time with
MPI_Comm_spawn_multiple, over a set of processors described in the
machinefile. It is
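Not from this thread, but for reference, a minimal sketch of the
MPI_Comm_spawn_multiple case mentioned above; the executable names and
process counts are assumptions made up for illustration.

      program driver
      implicit none
      include 'mpif.h'
      integer, parameter :: ncmds = 2
      character(len=16) :: cmds(ncmds)
      integer :: nprocs(ncmds), infos(ncmds), intercomm, ierr
      call MPI_INIT(ierr)
c     Two different executables launched in one spawn call; all of
c     their processes end up in a single remote group.
      cmds(1)   = 'app_a'
      cmds(2)   = 'app_b'
      nprocs(1) = 2
      nprocs(2) = 2
      infos(1)  = MPI_INFO_NULL
      infos(2)  = MPI_INFO_NULL
      call MPI_COMM_SPAWN_MULTIPLE(ncmds, cmds, MPI_ARGVS_NULL,
     &     nprocs, infos, 0, MPI_COMM_SELF, intercomm,
     &     MPI_ERRCODES_IGNORE, ierr)
      call MPI_COMM_DISCONNECT(intercomm, ierr)
      call MPI_FINALIZE(ierr)
      end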