[OMPI users] How to establish communication between two separate COM WORLD

2006-03-24 Thread Ali Eghlima
 Hello,

I have read the MPI-2 documents as well as the FAQ. I am confused about the best way
to establish communication
between two MPI_COMM_WORLDs that have been created by two separate mpiexec calls on
the same node.

mpiexec -conf  config1
 This starts 20 processes on 7 nodes

mpiexec -conf  config2 
  This starts 18 processes on 5 nodes

I would appreciate any comments or pointers to a document or an example.

Thanks

Ali,



Re: [OMPI users] How to establish communication between two separate COM WORLD

2006-03-27 Thread Ali Eghlima
  Thanks Ralph and Jean.

Is there any chance that this feature will be added to the next release of
mpiexec (mpirun)?

Thanks again

Ali,











Ralph Castain wrote on 03/27/2006 08:44 AM:

Actually, in a not-too-distant future release, there will be an mpirun option
called "--connect" that will let you specify that this job is to be connected
to an earlier job. The run-time environment will then spawn the new job and
exchange all required communication information between the two jobs for you.
You could therefore accomplish your desired operation with:

> nohup mpirun --np xx app1
(system returns job number to you)
> mpirun --np yy --connect job1 app2
(system starts app2 and connects it to job1)

Should be a little more transparent. No specific coding for making the 
connection would be required in your application itself.

Ralph


Jean Latour wrote: 
Hello, 

It seems to me there is only one way to create a communication channel between
two MPI_COMM_WORLDs: use MPI_Open_port to obtain a port name (a specific
IP address + port), and then MPI_Comm_connect / MPI_Comm_accept.

To ease communicating the port name, MPI_Publish_name / MPI_Lookup_name can
also be used, with the constraint that the "publish" must be done before the
"lookup"; this involves some synchronization between the processes anyway.

Simple examples can be found in the MPI handbook "Using MPI-2"
by William Gropp et al. 

Best Regards, 
Jean 

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


[OMPI users] Open MPI

2006-04-04 Thread Ali Eghlima
  Hello,

We are evaluating Open MPI for use in one of our projects. Our initial
evaluation indicates that Open MPI is a well-defined and stable
product. I am looking for a list of government projects, especially
DOD- or DOE-related projects or labs, that are using MPI.

Thanks

Ali,






Ali Eghlima, Ph.D.
Sr. Principal Engineer
Raytheon Company, Missile Defense Center
225-235 Presidential Way
Woburn, Massachusetts 01801, USA
339.645-6044 (business), 339.645-6788 (fax)
ali.eghl...@raytheon.com






[OMPI users] Release date for V1.1 or V1.0.3?

2006-05-15 Thread Ali Eghlima
Hello,

Does anyone have info on the release date for Open MPI v1.1?

Thanks

Ali, 

[OMPI users] OpenMPI on VxWorks?

2006-06-02 Thread Ali Eghlima
Hello,

Looking at the Open MPI web site, I couldn't find any reference to support
for VxWorks.
Here are my questions:

   -  Is there any plan for Open MPI to run on VxWorks?
   -  Has anyone ported/customized Open MPI to work on VxWorks?
   -  What level of effort would it take to port Open MPI to run on
VxWorks?


Thanks

Ali,







[OMPI users] Open MPI 1.1.2 stdout problem with IBM AIX 5.3

2006-12-22 Thread Ali Eghlima
Hello,

We have Open MPI 1.1.2 installed on an IBM AIX 5.3 cluster. It looks like
terminal output is broken. There are a few entries in the archive for this
problem,
with no suggested solution or real workaround.

I am posting this in the hope of getting some advice on a workaround
or a solution.



#mpirun -np 1  hostname 

   No output; piping the command to "cat" or "more" generates no output
either.
   The only way to get output from this command is to add
--debug-daemons:

#mpirun -np 1 --debug-daemons  hostname

Even this debug option does not work for a real application that
generates more output.

Looking forward to any comments.

Thanks

Ali,







Re: [OMPI users] Open MPI 1.1.2 stdout problem with IBM AIX 5.3

2006-12-22 Thread Ali Eghlima
Hi Ralph,

Thanks for the quick reply. The launch environment is rsh. I was also
puzzled when I found out that the --debug-daemons option
makes mpirun work for a simple case. 

Thanks again

Ali, 
 
 



Ralph Castain wrote on 12/22/2006 10:49 AM:

Hi Ali

I have seen this reported twice now – I think from two different sources, 
but I could be incorrect. Unfortunately, we don’t have access to an AIX 
cluster to investigate the problem. We don’t see it on any other platform 
at this time.

Could you tell me something more about your cluster? In particular, it 
would help to know your launch environment (e.g., rsh/ssh, SLURM, TM, 
etc.). The noted behavior of using --debug-daemons to resolve the problem 
has me puzzled as that flag only causes the daemons to keep their stdio 
ports open – it has nothing to do with the application processes nor the 
I/O forwarding subsystem.

I can suggest a couple of options in the interim, though I don’t know that 
they will solve the problem:

1.  You could upgrade to the 1.2 beta release. The runtime underwent 
some significant changes that might help here; or 
2.  You could try configuring Open MPI with “--disable-pty-support”. 
The I/O forwarding system is currently based upon pty’s. We have seen a 
problem on one other platform where the pty support wasn’t quite what Open 
MPI expects – disabling it solved the problem. You should first check if 
the 1.1.2 release supports this configuration option (I honestly can’t 
remember – it has been too long) - you may need to upgrade to 1.2 to do 
this.
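For reference, the second option would look something like the following when building from source (the install prefix is an arbitrary choice, and as noted, the flag may require a 1.2 source tree):

```shell
# Rebuild Open MPI without pty-based I/O forwarding
./configure --prefix=/opt/openmpi --disable-pty-support
make all install
```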

I hope that provides some help. If/when we get access to an AIX cluster, 
we’ll try to dig deeper into this issue.

Ralph






Re: [OMPI users] Open MPI 1.1.2 stdout problem with IBM AIX 5.3

2006-12-22 Thread Ali Eghlima
Hi Ralph,

I can run any tests you want me to run to diagnose the problem. 

Thanks again, and have a great holiday.

Ali 
 
 



Ralph Castain wrote on 12/22/2006 03:28 PM:

Thanks Ali. That is indeed helpful.

I personally launch using rsh (both on OSX and Linux) frequently and have 
no problem with IO forwarding. So it must be something about the AIX 
environment that is causing the issue.

Debug daemons should have nothing to do with the application process’ 
stdio channels. I’ll take another look at that code and see if there is 
some unexpected interaction that might be occurring.

As I said earlier, though, there is little we can do about AIX at this 
time due to lack of access to that environment. If we can find someone 
with access and willing to help, we will explore it further.

Ralph


