[OMPI users] calling a customized MPI_Allreduce with MPI_PACKED datatype

2011-02-05 Thread Massimo Cafaro
Dear all,

In one of my C codes, developed using Open MPI v1.4.3, I need to call 
MPI_Allreduce() passing two MPI_PACKED arrays as the sendbuf and recvbuf 
arguments. The reduction requires my own MPI_User_function, which needs to 
MPI_Unpack() its first and second arguments, process them, and finally 
MPI_Pack() the result into the second argument.
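
For concreteness, here is a minimal sketch of the pattern I am attempting 
(BUF_SIZE, the single double field, and the name my_reduce are simplified 
placeholders; in my actual code the data are dynamically allocated):

/* Minimal sketch of the failing pattern, not a working solution. */
#include <mpi.h>

#define BUF_SIZE 1024              /* placeholder fixed packing-buffer size */

static void my_reduce(void *in, void *inout, int *len, MPI_Datatype *dt)
{
    int pin = 0, pout = 0;
    double a, b;
    (void)len; (void)dt;           /* ignored in this sketch */
    /* Unpack one field from each operand; the real op unpacks them all. */
    MPI_Unpack(in,    BUF_SIZE, &pin,  &a, 1, MPI_DOUBLE, MPI_COMM_WORLD);
    MPI_Unpack(inout, BUF_SIZE, &pout, &b, 1, MPI_DOUBLE, MPI_COMM_WORLD);
    b += a;                        /* "process" the two operands */
    pout = 0;                      /* repack the result into inout */
    MPI_Pack(&b, 1, MPI_DOUBLE, inout, BUF_SIZE, &pout, MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    char sendbuf[BUF_SIZE], recvbuf[BUF_SIZE];
    int pos = 0;
    double x = 1.0;                /* stands in for dynamically allocated data */
    MPI_Op op;

    MPI_Init(&argc, &argv);
    MPI_Pack(&x, 1, MPI_DOUBLE, sendbuf, BUF_SIZE, &pos, MPI_COMM_WORLD);

    MPI_Op_create(my_reduce, 1, &op);
    /* The call that fails: count is the number of packed bytes. */
    MPI_Allreduce(sendbuf, recvbuf, pos, MPI_PACKED, op, MPI_COMM_WORLD);
    MPI_Op_free(&op);

    MPI_Finalize();
    return 0;
}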

I need to use MPI_Pack()/MPI_Unpack() because I cannot create a derived 
datatype: much of the data I need to send is dynamically allocated.
However, the code fails at runtime with the following message:

An error occurred in MPI_Unpack
on communicator MPI_COMM_WORLD
MPI_ERR_TRUNCATE: message truncated
MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

I have verified that, after unpacking the data in my own reduction function, 
all of the data are wrong.
Is this possible in MPI? I did not find anything in the "MPI Reference, Volume 
1" or in "Using MPI" that prevents it; it should just require passing 
MPI_PACKED as the datatype to MPI_Allreduce(). However, searching the web I 
did not find any examples.

Thank you in advance for any clues, suggestions, or source code examples.
This is driving me crazy now ;-(

Massimo Cafaro


--

***
 Massimo Cafaro, Ph.D.
 Assistant Professor
 Dept. of Engineering for Innovation
 University of Salento, Lecce, Italy
 Via per Monteroni, 73100 Lecce, Italy

 Additional affiliations:
 Euro-Mediterranean Centre for Climate Change
 SPACI Consortium

 E-mail: massimo.caf...@unisalento.it
         massimo.caf...@cmcc.it
         caf...@ieee.org
         caf...@acm.org
 Voice/Fax: +39 0832 297371
 Web: http://sara.unisalento.it/~cafaro
***









[OMPI users] Default hostfile not being used by mpirun

2011-02-05 Thread Barnet Wagman
There have been many postings about openmpi-default-hostfile on the
list, but I haven't found one that answers my question, so I hope you
won't mind one more.

When I use mpirun, openmpi-default-hostfile does not appear to get used.
I've added three lines to the default host file:

node0 slots=3
node1 slots=4
node2 slots=4

'node0' is the local (master) host.

If I explicitly list the hostfile in the mpirun command, everything
works correctly.  E.g.

mpirun -np 15 -hostfile /full/path/to/openmpi-default-hostfile hello_c

works correctly - hello_c gets run using all three nodes.

However, if I don't specify the hostfile, only the local node, node0, is
used. E.g.

mpirun -np 15 hello_c

creates all 15 processes on node0.  I was under the impression that all
machines listed in openmpi-default-hostfile should get used by default. 
Is that correct?

Unfortunately I can't use the -hostfile command line option. I'm going
to be using an MPI app (npRmpi) that doesn't let me pass parameters to
mpirun, so I need all my nodes used by default.

Configuration details:

openmpi 1.4.3, built from source.

OS: Debian lenny (but the Debian openmpi package is NOT installed).

Installation dir: /home/omu/openmpi

The default host file has pathname
/home/omu/openmpi/etc/openmpi-default-hostfile

I've set two environment variables to support Open MPI:

PATH=/home/omu/openmpi/bin:...
LD_LIBRARY_PATH=/home/omu/openmpi/lib:...


Are there any other environment variables that need to be set?

I'd appreciate any suggestions about this.

thanks,

Barnet Wagman




Re: [OMPI users] Default hostfile not being used by mpirun

2011-02-05 Thread ETHAN DENEAULT
Barnet,

This isn't the most straightforward solution, but as a workaround, could you 
create a bash script and run that script through npRmpi? Something like:

#!/bin/bash

mpirun -np 15 -hostfile /path/to/hostfile "$@"
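
Assuming npRmpi lets you substitute the wrapper for mpirun (I haven't 
verified this), you would then make it executable and call it in mpirun's 
place; e.g., with the hypothetical name mpiwrap.sh:

chmod +x mpiwrap.sh
./mpiwrap.sh hello_c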

Cheers,
Ethan

--
Dr. Ethan Deneault
Assistant Professor of Physics
The University of Tampa
401 W Kennedy Blvd
Tampa, FL 33606
(813) 732-3718






Re: [OMPI users] Default hostfile not being used by mpirun

2011-02-05 Thread Ralph Castain
The easiest solution is to take advantage of the fact that the default hostfile 
is an MCA parameter - so you can specify it in several ways other than on the 
cmd line. It can be in your environment, in the default MCA parameter file, or 
in an MCA param file in your home directory.

See

http://www.open-mpi.org/faq/?category=tuning#setting-mca-params

for a full description on how to do this.
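
For example, if I'm remembering the 1.4-series name correctly, the parameter 
is orte_default_hostfile, so any of the following (reusing the paths from 
your post) should do it:

# in the environment:
export OMPI_MCA_orte_default_hostfile=/home/omu/openmpi/etc/openmpi-default-hostfile

# or in the per-user file $HOME/.openmpi/mca-params.conf:
orte_default_hostfile = /home/omu/openmpi/etc/openmpi-default-hostfile

# or in the system-wide file /home/omu/openmpi/etc/openmpi-mca-params.conf:
orte_default_hostfile = /home/omu/openmpi/etc/openmpi-default-hostfile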


On Feb 5, 2011, at 3:14 PM, ETHAN DENEAULT wrote:

> Barnet,
> 
> This isn't the most straightforward solution, but as a workaround, could you 
> create a bash script and run that script through npRmpi? Something like:
> 
> #!/bin/bash
> 
> mpirun -np 15 -hostfile /path/to/hostfile "$@"
> 
> Cheers,
> Ethan
> 
> --
> Dr. Ethan Deneault
> Assistant Professor of Physics
> The University of Tampa
> 401 W Kennedy Blvd
> Tampa, FL 33606
> (813) 732-3718