[OMPI users] Open-MPI and TCP port range

2006-04-18 Thread Laurent . POREZ
Hi, 

I am a new user of Open-MPI, and I need to run 2 kinds of programs on a
single cluster:
1) MPI-based programs
2) Others, using TCP and UDP

In order to get my non-MPI programs to run, I need to know which ports may be
used by MPI programs.
Is there a way to know or set the range of ports used by MPI programs?

Thanks
Laurent.
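
For reference, one way to observe which ports a running Open MPI job actually
uses is standard Linux tooling; a sketch, not an Open MPI feature (the process
names to match may vary):

# list the TCP endpoints owned by the MPI runtime/processes on a node
netstat -tnp 2>/dev/null | grep -E 'orted|mpirun'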


Re: [OMPI users] Open-MPI and TCP port range

2006-04-21 Thread Laurent . POREZ

> -Original Message-
> Date: Thu, 20 Apr 2006 19:35:27 -0400
> From: "Jeff Squyres \(jsquyres\)" 
> Subject: Re: [OMPI users] Open-MPI and TCP port range
> To: "Open MPI Users" 
> Message-ID:
>   
> Content-Type: text/plain; charset="us-ascii"
> 
> 
> That being said, we are not opposed to putting port number controls in
> Open MPI.  Especially if it really is a problem for someone, not just a
> hypothetical problem ;-).  But such controls should not be added to
> support firewalled operations, because -- at a minimum -- unless you do
> a bunch of other firewall configuration, it will not be enough.

This point is a real problem for me (but I may be the only one in the world...).
I have to build a system that uses MPI software and non-MPI COTS products.
I can't change the TCP ports used by the COTS products.
Restricting the MPI TCP/UDP port range looks like the best solution for me.
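
For what it's worth, later Open MPI releases (after the 1.1 series discussed
here) did add MCA parameters for exactly this. Assuming such a release, a run
pinned to a port window looks roughly like this (the values are placeholders):

# sketch for later Open MPI releases; these parameter names come from
# those releases, not from 1.1.x
mpirun --mca btl_tcp_port_min_v4 10000 \
       --mca btl_tcp_port_range_v4 1000 \
       -np 4 ./mpi_app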





[OMPI users] Checking the cluster status with MPI_Comm_spawn_multiple

2006-04-25 Thread Laurent . POREZ
Hi, 

Before starting programs on my cluster, I want to check that every CPU is up
and able to run MPI applications.

For this, I use a kind of 'ping' program that just sends a message saying
'I'm OK' to a supervisor program.
The 'ping' program is launched on each CPU by the supervisor, via the
MPI_Comm_spawn_multiple command, roughly as in the sketch below.
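
A minimal sketch of the supervisor side (the './ping' name, the CPU count and
the message tag are illustrative placeholders, not the exact code):

#include <stdio.h>
#include <mpi.h>

#define NCPUS 4                      /* illustrative cluster size */

int main(int argc, char *argv[])
{
    MPI_Comm children;
    char *cmds[NCPUS];
    int nprocs[NCPUS];
    MPI_Info infos[NCPUS];
    int errcodes[NCPUS];
    int i, ok;

    MPI_Init(&argc, &argv);

    for (i = 0; i < NCPUS; i++) {
        cmds[i] = "./ping";          /* same ping program everywhere */
        nprocs[i] = 1;
        infos[i] = MPI_INFO_NULL;    /* placement left to the runtime */
    }

    /* blocks here when a target node is down -- the problem described below */
    MPI_Comm_spawn_multiple(NCPUS, cmds, MPI_ARGVS_NULL, nprocs, infos,
                            0, MPI_COMM_WORLD, &children, errcodes);

    /* collect one "I'm OK" flag from each spawned ping */
    for (i = 0; i < NCPUS; i++) {
        MPI_Recv(&ok, 1, MPI_INT, i, 0, children, MPI_STATUS_IGNORE);
        printf("ping %d reported OK\n", i);
    }

    MPI_Finalize();
    return 0;
}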

It works fine when every CPU is up, but when one is down, my supervisor hangs
in the MPI_Comm_spawn_multiple call.

So the questions are:
* What am I doing wrong?
* Is there another way to check my CPUs?

Thanks for your help.

Laurent.


[OMPI users] CPU use in MPI_recv

2006-06-06 Thread Laurent . POREZ
Hi, 

I'm using Open-MPI 1.0.2 on a Debian system.

I'm testing the MPI_Recv function with a small C program (source code at the
end of the message), and I see that while I'm waiting for a message in
MPI_Recv, the CPU usage is 100%.

Is that normal?
Are there other ways to use a receive function (MPI_Irecv, etc.) that do not
spin the CPU? (See the polling sketch after the source code.)

Laurent.

Source code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rc;
    int numtasks, rank;
    int myint = 0;

    rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        printf("MPI_Init error\n");
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("from cpu_test : number of tasks : %d. My rank : %d\n",
           numtasks, rank);

    /* blocking receive: the CPU spins at 100% while waiting here */
    MPI_Recv(&myint, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("message received\n");

    MPI_Finalize();

    exit(0);
}
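
For reference, a sketch of the alternative hinted at above: post the receive
with MPI_Irecv and poll it with MPI_Test, sleeping between polls so the CPU
stays idle. This trades message latency for CPU time; the 1 ms interval is a
placeholder:

#include <unistd.h>
#include <mpi.h>

static void quiet_recv(int *buf)
{
    MPI_Request req;
    int done = 0;

    MPI_Irecv(buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
              MPI_COMM_WORLD, &req);
    while (!done) {
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (!done)
            usleep(1000);   /* yield the CPU for ~1 ms between polls */
    }
}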




[OMPI users] mpi_comm_spawn_multiple and 'host' MPI_Info key

2006-08-10 Thread Laurent . POREZ

> Hi, 
> 
> I saw with great pleasure that in the latest version of Open MPI (1.1.1b4),
> the 'host' MPI_Info key is available for the MPI_Comm_spawn_multiple
> function.
> 
> I tested it and I could spawn my processes on the wanted hosts.
> Now the question is: how can I tell MPI_Comm_spawn_multiple which processor
> to spawn on, in a multi-processor architecture?
> 
> Maybe the 'host' MPI_Info key allows this feature, but I can't find the
> right syntax.
> 
> I would also like to know, if possible, when v1.1.1 will be released.
> 
> Thanks for your help, 
>   Laurent.
> 
> 
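
For reference, the per-host usage that the post says works looks roughly like
this (hostnames "node1"/"node2" are placeholders; the per-processor variant is
exactly the syntax being asked about):

#include <mpi.h>

/* build one MPI_Info per spawned command, pinning each to a host */
static void make_host_infos(MPI_Info infos[2])
{
    MPI_Info_create(&infos[0]);
    MPI_Info_set(infos[0], "host", "node1");
    MPI_Info_create(&infos[1]);
    MPI_Info_set(infos[1], "host", "node2");
}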


[OMPI users] MPI_Comm_spawn_multiple and BProc

2006-09-27 Thread Laurent . POREZ
Hi, 

I'm using MPI_Comm_spawn_multiple with Open MPI 1.1.1.
It used to work well, until I applied the BProc kernel patch.

When I use the BProc patch, my program freezes when calling
MPI_Comm_spawn_multiple.

Can MPI_Comm_spawn_multiple and BProc work together?

Thanks, 
    Laurent Porez


Re: [OMPI users] MPI_Comm_spawn_multiple and BProc

2006-09-27 Thread Laurent . POREZ
Oops, sorry!

I followed these steps :

1) Install a Debian system (sarge 3.1r2).

2) Use a 2.6.9 kernel patched with bproc 4.0.0pre8
(http://bproc.sourceforge.net);
the options CONFIG_BPROC, CONFIG_ROMFS_FS, CONFIG_BLK_DEV_RAM,
CONFIG_BLK_DEV_INITRD and CONFIG_TMPFS were activated via the
'make menuconfig' command.

3) Install bproc 4.0.0pre8.

4) Install beoboot-cm1.10.

5) Load the bproc modules.

6) Install Open MPI v1.1.1.

Hope this will help.

Thanks, 
Laurent.



--

Could you please clarify - what "Bproc kernel patch" are you referring to?

Thanks
Ralph





[OMPI users] OpenMPI 1.1.1 with Multiple Thread Support

2006-10-17 Thread Laurent . POREZ
Hi, 

Could you explain what's wrong with thread support?
Does it hang, or something else?

I'm developing an application using multiple processes with multiple threads,
and I have to use MPI to make the processes communicate. Typically, I will
have to use the following functions:
- MPI_Comm_spawn_multiple, 
- MPI_Bsend, MPI_Recv, MPI_Irecv, MPI_Test
- MPI_Barrier
- MPI_Allgather

Can this work with the current version of Open-MPI (1.1.1) or a later one, or
even with another MPI library (free or commercial)?
Do I have to think about giving up MPI?
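
For reference, a minimal sketch of the MPI-2 thread-initialization handshake
such an application relies on (the names are standard MPI; everything else is
illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int provided;

    /* ask for full multi-thread support and check what is actually granted */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: only thread level %d granted\n", provided);

    /* concurrent MPI calls from several threads are only safe when
       MPI_THREAD_MULTIPLE was granted */

    MPI_Finalize();
    return 0;
}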

Thanks, 
Laurent.



[OMPI users] Error Handling Problem

2006-10-26 Thread Laurent . POREZ
Hi, 

I developed a launcher application:
an MPI application (say main_exe) launches 2 MPI applications (say exe1 and
exe2), using MPI_Comm_spawn_multiple.

Now, I'm looking at the behavior when an exe crashes.

What I can see is the following:
1) When everybody is launched, I see the following processes, using 'ps':
- the 'mpiexec -v -d -n 1 ./main_exe' command
- the orted server used for 'main_exe' (say 'orted1')
- main_exe
- the orted server used for 'exe1' and 'exe2' (say 'orted2')
- exe1
- exe2

2) I use kill -9 to 'crash' exe2

3) orted2 and exe1 finish.

4) With ps, I see that the following processes remain: mpiexec, 'orted1',
main_exe.

5) main_exe tries to send a message to exe1, using MPI_Bsend:
main_exe gets killed by a SIGPIPE signal.

So what I see is that when a part of an MPI application crashes, the whole
application crashes!
Is there a way to get another behavior? For example, MPI_Bsend could return
an error code.

A few additional details:
- I work on Linux, with Open-MPI 1.1.1.
- I'm developing in C and C++.

Thanks, 
Laurent.







Re: [OMPI users] Error Handling Problem

2006-10-27 Thread Laurent . POREZ

> From: George Bosilca 
> Subject: Re: [OMPI users] Error Handling Problem
> To: Open MPI Users 
> Message-ID: 
> Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed
> 
> How about changing the default error handler ?

I did change the default error handler (using MPI_Comm_set_errhandler) in the
main_exe program: I replaced it with a printf.
My error handler is never called, but main_exe receives a SIGPIPE signal.
So the only solution I found is to catch SIGPIPE and ignore it.
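
A minimal sketch of that workaround (not the exact code):

#include <signal.h>
#include <mpi.h>

static void setup_error_handling(void)
{
    /* ask MPI to return error codes instead of aborting */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* the "catch SIGPIPE and forget it" part: writes to a dead peer
       can now fail with an error instead of killing the process */
    signal(SIGPIPE, SIG_IGN);
}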

> It is not supposed to work, and if you find an MPI implementation
> that supports this approach please tell me. I know the paper where
> you read about this, but even with their MPI library this approach
> does not work.

Which paper are you talking about?


> 
> Soon, Open MPI will be able to support this feature. Several
> fault-tolerant modes are under way, but no precise timeline yet.

OK, I'll keep watching for new versions of Open MPI.

Thanks, 
Laurent.



[OMPI users] spawn on a cluster with 2 Ethernet interfaces

2006-11-23 Thread Laurent . POREZ
Hi, 

I have to spawn multiple slave processes on a cluster, from a single master
process.

The Open MPI distribution I use is 1.1.2.
I'm using an HP cluster, with 2 Ethernet NICs on each machine.

My problem was a freeze of the master when calling MPI_Comm_spawn_multiple,
and of the slaves when calling MPI_Init. This happened when I tried to spawn
on multiple hosts (it worked well on a single host).


After working on the problem, I discovered that when I disabled eth1 on the
hosts, everything worked fine...
Fortunately, the same working behavior occurs when I use the
"--mca btl_tcp_if_include eth0" parameter, as sketched below.

What is strange is that the problem remains if I use any of the following:
"--mca btl_tcp_if_include eth1"
"--mca btl_tcp_if_exclude eth1"
"--mca btl_tcp_if_exclude eth0"

Is it impossible to use 2 Ethernet NICs at the same time for MPI
applications?
Will I always have to use eth0, and never eth1, for MPI communications?

Thanks, 
Laurent.


[OMPI users] Choosing the processor Id when spawning a process

2006-11-23 Thread Laurent . POREZ
Hi, 

I have to spawn a set of processes on multiple hosts, with my own mapping
pattern, including the processor ID, for example:
* process 1 on cpu0 of host 1
* process 2 on cpu1 of host 1
* process 3 on cpu1 of host 1
* process 4 on cpu0 of host 2
* process 5 on cpu1 of host 2

I see that only the "host" MPI_Info parameter can be used (see
ompi_comm_start_processes()), but other kinds of mapping could be handled:
create_app(), in orte/tools/orterun/orterun.c, may handle
ORTE_APP_CONTEXT_MAP_ARCH or ORTE_APP_CONTEXT_MAP_CN mappings, which would be
perfect for me.

Is there an upcoming release that will take care of this, or is it of no use
for most MPI users?

Thanks, 
Laurent.


[OMPI users] return from MPI_Comm_spawn

2006-11-24 Thread Laurent . POREZ
Hi, 

I see that when a master process spawns slave processes, MPI_Comm_spawn()
does not return until MPI_Init() has completed in all the slave processes.

Is there a way to set a time-out, or something to detect when an error occurs
in a slave process?
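
For reference, the per-process error codes returned by MPI_Comm_spawn() are
the standard way to see which slaves failed to launch; they do not help with
a hang, and there is no built-in timeout. A minimal sketch (binary name and
count are placeholders):

#include <stdio.h>
#include <mpi.h>

#define NSLAVES 4   /* illustrative */

static void spawn_and_check(void)
{
    MPI_Comm children;
    int errcodes[NSLAVES];
    int i;

    MPI_Comm_spawn("./slave", MPI_ARGV_NULL, NSLAVES, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, errcodes);

    /* launch failures reported by the runtime show up per slave */
    for (i = 0; i < NSLAVES; i++)
        if (errcodes[i] != MPI_SUCCESS)
            fprintf(stderr, "slave %d failed to spawn\n", i);
}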



Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-29 Thread Laurent . POREZ
I agree with this solution for the machinefile.

Using mpiexec or a spawn command, you can append the CPU number to the
hostname:
mpiexec -host [hostname]:[cpu number] -n 1 mpi_test
or, for MPI_Comm_spawn : 
MPI_Info_set( mpi_info, "host", "[hostname]:[cpu number]" );

Cheers, 
Laurent.
> 
> In the machinefile, add for each node with M cpus:
> myhost@mydomain slots=N cpus_allowed=<cpu list>,
> <cpu list> being the subset of 0..M-1 in some yours-to-decide format and
> with yours-to-decide default values.
> 
>  Best Regards,
>  Alexander Shaposhnikov
> 
> On Wednesday 29 November 2006 06:16, Jeff Squyres wrote:
> > There is not, right now.  However, this is mainly because back when I
> > implemented the processor affinity stuff in OMPI (well over a year
> > ago), no one had any opinions on exactly what interface to expose to
> > the user.  :-)
> >
> > So right now there's only this lame control:
> >
> >  http://www.open-mpi.org/faq/?category=tuning#using-paffinity
> >
> > I am not opposed to implementing more flexible processor affinity
> > controls, but the Big Discussion over the past few months is exactly
> > how to expose it to the end user.  There have been several formats
> > proposed (e.g., mpirun command line parameters, magic MPI attributes,
> > MCA parameters, etc.), but nothing that has been "good" and "right".
> > So here's the time to chime in -- anyone have any opinions on this?
> >
> > On Nov 25, 2006, at 9:31 AM, shap...@isp.nsc.ru wrote:
> > > Hello,
> > > I can't figure out: is there a way with open-mpi to bind all
> > > threads on a given node to a specified subset of CPUs.
> > > For example, on a multi-socket multi-core machine, I want to use
> > > only a single core on each CPU.
> > > Thank You.
> > >
> > > Best Regards,
> > > Alexander Shaposhnikov