Re: [OMPI users] MPI_Recv, is it possible to switch on/off aggressive mode during runtime?
Brian Barrett wrote:
> On Jul 5, 2006, at 8:54 AM, Marcin Skoczylas wrote:
>> I saw almost the same question as mine some posts ago, but it didn't give me a satisfactory answer. I have a setup like this:
>>
>> GUI program on some machine (e.g. a laptop).
>> Head listening on a TCP/IP socket for commands from the GUI.
>> Workers waiting for commands from the Head / processing the data.
>>
>> And now it's problematic. For receiving the commands from the Head I'm using:
>>
>>     while(true) {
>>         MPI_Recv...
>>         do whatever the head said (process a small portion of the data,
>>         return the result to the head, wait for further commands)
>>     }
>>
>> So in the idle time the workers are stuck in MPI_Recv and have 100% CPU usage, even if they are just waiting for commands from the Head. Normally I would prefer not to have this situation, as I sometimes have to share the cluster with others. I would prefer not to stop the whole MPI program, but just go into an 'idle' mode, and thus make it run again soon. I would also like to have this aggressive MPI_Recv approach switched on when I'm alone on the cluster. So is it possible to switch this mode on/off during runtime? Thank you in advance!
>
> Currently, there is not a way to do this. Obviously, there's not going to be a way that is portable (i.e., compiles with MPICH), but it may be possible to add this in the future. It likely won't happen for the v1.1 release series, and I can't really speak for releases past that at this point. I'll file an enhancement request in our internal bug tracker and add you to the list of people to be notified when the ticket is updated.
>
> Brian

Is there any solution ready? Using MPI_Probe before MPI_Recv didn't help much.

greetings, Marcin
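For reference, a minimal worker-loop sketch of the usual application-level workaround: poll with MPI_Iprobe and sleep between polls, which drops the idle CPU usage at the cost of a little extra latency. This is not the MCA-level switch asked about above, and the rank/tag/command conventions here are assumptions for illustration only.

    #include <mpi.h>
    #include <time.h>

    /* Worker loop: wait for a command without spinning at 100% CPU.
     * Assumes the head (rank 0) sends an integer command with tag 0;
     * command 0 means "shut down".  These conventions are illustrative. */
    static void worker_loop(void)
    {
        while (1) {
            int flag = 0;
            MPI_Status status;

            /* Non-blocking probe: returns immediately,
             * flag != 0 if a matching message is waiting. */
            MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &status);

            if (!flag) {
                /* Nothing pending: yield the CPU for ~1 ms before
                 * polling again. */
                struct timespec ts = { 0, 1000000 };
                nanosleep(&ts, NULL);
                continue;
            }

            int command;
            MPI_Recv(&command, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            if (command == 0)
                break;          /* head told us to stop */

            /* ... process the command and send the result back ... */
        }
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank != 0)
            worker_loop();
        /* rank 0 would run the head logic here: accept GUI commands
         * and MPI_Send them to the workers. */
        MPI_Finalize();
        return 0;
    }

The sleep interval is the trade-off knob: a longer sleep means lower idle CPU but a slower reaction to new commands.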
Re: [OMPI users] Perl and MPI
Hi Renato, thanks, man! That was a detailed explanation. I got the Perl module too...

Imran

Renato Golin wrote:
> On 9/13/06, imran shaik wrote:
>> I need to run parallel jobs on a cluster, typically of size 600 nodes and running SGE, but the programmers are good at Perl, not C or C++. So I thought of MPI, but I don't know whether it has Perl support?
>
> Hi Imran,
>
> SGE will dispatch processes among the nodes of your cluster, but it does not support interprocess communication, which MPI does. If your problem is easily splittable (like parsing a large Apache log, or reading a large XML list of things) you might be able to split the data and spawn as many processes as you can. I do it using LSF (another dispatcher) and a Makefile that controls the dependencies and spawns the processes (using make's -j flag), and it works quite well. But if your job needs communication (like processing big matrices, collecting and distributing data among processes, etc.) you'll need interprocess communication, and that's what MPI is best at.
>
> In a nutshell, you'll need the MPI runtime environment to run MPI programs, just as you need SGE's runtime environment on every node to dispatch jobs and collect information.
>
> About MPI bindings for Perl, there's this module:
> http://search.cpan.org/~josh/Parallel-MPI-0.03/MPI.pm
> but it's far too young to be trustworthy, IMHO, and you'll probably need the MPI runtime on all nodes as well...
>
> cheers,
> --renato
Re: [OMPI users] Perl and MPI
Hi Renato,

Thanks for your response. Can you elaborate on this? I have a few doubts as well:

1) The Open MPI runtime supports SGE? Does it use SGE instead of the MPI runtime when it finds SGE running?
2) Is it possible to checkpoint and run MPI jobs?
3) Is it possible to add and remove processes dynamically from an MPI communicator?
5) When do we actually need many different communicators?
4) Is MPI only suitable for low-latency communication in a cluster environment?

Ralph H Castain wrote:
> I can't speak to the Perl bindings, but Open MPI's runtime already supports SGE, so all you have to do is "mpirun" like usual and we take care of the rest. You may have to check your version of Open MPI, as this capability was added in the more recent releases.
>
> Ralph
Re: [OMPI users] Perl and MPI
On 9/15/06 10:36 AM, "imran shaik" wrote:
> 1) The Open MPI runtime supports SGE? Does it use SGE instead of the MPI runtime when it finds SGE running?

SGE support will be included in Open MPI v1.2, scheduled to be released at Supercomputing in November. As with any other resource manager, if you run Open MPI in an SGE job, "mpirun" will automatically use the back-end resource manager mechanisms to launch and monitor MPI processes (as appropriate).

> 2) Is it possible to checkpoint and run MPI jobs?

This is ongoing work. Not yet, but we expect to have demonstrable versions of this at SC (November).

> 3) Is it possible to add and remove processes dynamically from an MPI communicator?

No. MPI defines communicators as a fixed set of processes.

> 5) When do we actually need many different communicators?

It's up to your applications.

> 4) Is MPI only suitable for low-latency communication in a cluster environment?

Yes and no; there's a lot of religious debate about this. ;-) Certainly, this is an extremely common environment for MPI usage, but there are many groups who are interested in using MPI in WAN kinds of scenarios, for example.

--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
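As an illustration of where separate communicators can help (a hypothetical sketch, not tied to any particular application): MPI_Comm_split carves MPI_COMM_WORLD into sub-groups, so a collective can run among a subset of ranks without involving the others.

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical example: split the world into a "head" group (rank 0)
     * and a "workers" group (everyone else), so the workers can run
     * collectives among themselves without involving the head. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* color selects the sub-group; key orders ranks within it */
        int color = (world_rank == 0) ? 0 : 1;
        MPI_Comm subcomm;
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

        if (color == 1) {
            /* A collective restricted to the workers. */
            int local = world_rank, sum = 0;
            MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, subcomm);
            printf("worker %d: sum over workers = %d\n", world_rank, sum);
        }

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }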
Re: [OMPI users] Perl and MPI
Hi Prakash,

Do I need the MPI runtime environment for sure to use those Perl modules? Can't I use some other clustering software? Where can I get MPI::Simple?

Imran

Prakash Velayutham wrote:
> Hello,
>
> My users use the Parallel::MPI and MPI::Simple Perl modules consistently without issues. But I am not sure of the support for the MPI-2 standard with either of these modules. Is there someone here who can answer that question too? Also, those modules seem to work only with MPICH now and not the other MPI distributions.
>
> Prakash
[OMPI users] MPI on large clusters
Hi folks,

Is MPI suitable for running jobs on large clusters? Or is it best suited only for SMP?

I have used MPI on a relatively small cluster, but now I have to recommend MPI for a relatively large 600-node cluster. Shall I? The nature of the jobs is, well, processing terabytes of data.

Thanks in advance.

Regards,
Imran
Re: [OMPI users] Perl and MPI
AFAIK, both of those modules work with the standard MPI API and not others. The MPI::Simple I mentioned is actually Parallel::MPI::Simple. Both Parallel::MPI and Parallel::MPI::Simple are available from CPAN.

Prakash

imran shaik wrote:
> Hi Prakash,
> Do I need the MPI runtime environment for sure to use those Perl modules?
> Can't I use some other clustering software?
> Where can I get MPI::Simple?
>
> Imran
Re: [OMPI users] Perl and MPI
On 9/15/06, imran shaik wrote:
> Where can I get MPI::Simple?

$ cpan
cpan> install Parallel::MPI::Simple

You can try other MPI implementations, but I guess MPICH is the only one that will work...

cheers,
--renato
Re: [OMPI users] Perl and MPI
Thanks, Prakash.

Cheers,
Imran

Prakash Velayutham wrote:
> AFAIK, both of those modules work with the standard MPI API and not others. The MPI::Simple I mentioned is actually Parallel::MPI::Simple. Both Parallel::MPI and Parallel::MPI::Simple are available from CPAN.
>
> Prakash
Re: [OMPI users] Perl and MPI
OK, thanks for the info, Renato. Have a nice weekend.

Imran

Renato Golin wrote:
> $ cpan
> cpan> install Parallel::MPI::Simple
>
> You can try other MPI implementations, but I guess MPICH is the only one that will work...
Re: [OMPI users] Perl and MPI
On Sep 15, 2006, at 10:36 AM, imran shaik wrote:
> Can you elaborate on this? I have a few doubts as well:
> 1) The Open MPI runtime supports SGE? Does it use SGE instead of the MPI runtime when it finds SGE running?

It's a difficult question if you expect an answer describing the deep internals of the Open MPI implementation. Let's say, from a high-level point of view, that the MPI runtime detects SGE and uses it in order to start the MPI job.

> 2) Is it possible to checkpoint and run MPI jobs?

Not with the released version. It's still work in progress. Eventually it will be one of the features of Open MPI, but not before SC2006.

> 3) Is it possible to add and remove processes dynamically from an MPI communicator?

Open MPI is MPI-2 compliant, therefore it supports dynamic processes. There is a FAQ on the web site on how to do it.

> 5) When do we actually need many different communicators?

It depends on what you plan to do. Usually, from the programmer's point of view, using multiple communicators makes the code more readable, as they allow you to have a logical view of the messages in transit. But it is not a requirement. One can write a million-line MPI application and only use MPI_COMM_WORLD.

> 4) Is MPI only suitable for low-latency communication in a cluster environment?

MPI was designed as a programming paradigm. It allows expressing parallel algorithms based on communications between peers. These communications can be point-to-point or collective. The goal is wider than just low-latency communication, as the standard allows you (as an example) to describe the memory layout of the data involved in the communication. The MPI Forum has the full documentation about all the features of the MPI-2 standard.

george.
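A minimal sketch of the MPI-2 dynamic-process route mentioned above, assuming a hypothetical worker executable named "./worker" (the Open MPI FAQ covers the full recipe): the parent spawns extra processes and broadcasts to them over the intercommunicator returned by MPI_Comm_spawn.

    #include <mpi.h>

    /* Parent side of an MPI-2 dynamic-process example (hypothetical):
     * spawn 4 copies of "./worker" and send them a value over the
     * intercommunicator returned by MPI_Comm_spawn. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Comm children;
        int errcodes[4];
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children, errcodes);

        /* In an intercommunicator broadcast, the sending group passes
         * MPI_ROOT on the root rank and MPI_PROC_NULL on its other ranks. */
        int rank, value = 42;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Bcast(&value, 1, MPI_INT,
                  rank == 0 ? MPI_ROOT : MPI_PROC_NULL, children);

        /* The spawned "worker" program would call MPI_Comm_get_parent()
         * and MPI_Bcast(&value, 1, MPI_INT, 0, parent) to receive it. */

        MPI_Comm_disconnect(&children);
        MPI_Finalize();
        return 0;
    }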
Re: [OMPI users] MPI on large clusters
I think the answer to all your questions is: "it depends on your application."

MPI is used on extremely large clusters (many thousands of nodes), but with applications that were specially written for those large numbers of nodes. You need to look at the specific requirements of your application (George said some helpful things in his mail that should help with this) and determine how you want to program your solution. Then look at spec'ing out a hardware solution (what kind of nodes? how much RAM? what processor(s)? how much disk? what kind of network? ...), which is a complicated and involved process -- originally stemming from exactly what you want to *do* with your cluster (if you have only a single app that is going to run on this cluster, you should spec out a cluster that fits the needs of that app).

On 9/15/06 11:25 AM, "imran shaik" wrote:
> Is MPI suitable for running jobs on large clusters?
> Is it best suited only for SMP?
>
> I have used MPI on a relatively small cluster, but now I have to recommend MPI for a relatively large 600-node cluster. Shall I? The nature of the jobs is, well, processing terabytes of data.

--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
[OMPI users] Inter vs Intracommunicator... which is best?
I have a simple question for you... Which is better, an intercommunicator or an intracommunicator? Is it better to use an intercommunicator with Send/Recv or Bcast, or is it better to use MPI_Intercomm_merge and then use Send/Recv or Bcast inside the newly created intracommunicator? I am talking about performance... is the inter- or the intracommunicator faster? Do they have the same performance? The same speed?

Bye,
Alfonso
Re: [OMPI users] Inter vs Intracommunicator... which is best?
> Which is better, an intercommunicator or an intracommunicator? Is it better to use an intercommunicator with Send/Recv or Bcast, or is it better to use MPI_Intercomm_merge and then use Send/Recv or Bcast inside the newly created intracommunicator? I am talking about performance... is the inter- or the intracommunicator faster? Do they have the same performance? The same speed?
>
> Bye,
> Alfonso

Usually they do not have the same implementation, due to differences in semantics: the intercommunicator operations could use the highly optimized algorithms used for intracommunicators, but generally don't. It is currently better in Open MPI to do a merge and use intracommunicator operations.

Thanks,
Graham.
----------------------------------------------------------------------
Dr Graham E. Fagg       | Distributed, Parallel and Meta-Computing
Innovative Computing Lab. PVM3.4, HARNESS, FT-MPI, SNIPE & Open MPI
Computer Science Dept   | Suite 203, 1122 Volunteer Blvd,
University of Tennessee | Knoxville, Tennessee, USA. TN 37996-3450
Email: f...@cs.utk.edu  | Phone: +1 (865) 974-5790 | Fax: +1 (865) 974-8296
Broken complex systems are always derived from working simple systems
----------------------------------------------------------------------
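A sketch of the merge-then-broadcast approach recommended above, assuming you already hold an intercommunicator (for example one obtained from MPI_Comm_spawn or MPI_Intercomm_create); the payload and ranks are placeholders.

    #include <mpi.h>

    /* Given an existing intercommunicator "inter" (however it was created),
     * merge the two groups into a single intracommunicator and use ordinary
     * collectives on it.  "high" controls which group's ranks come first in
     * the merged communicator. */
    static void merge_and_bcast(MPI_Comm inter, int i_am_in_high_group)
    {
        MPI_Comm merged;
        MPI_Intercomm_merge(inter, i_am_in_high_group, &merged);

        int rank, value = 0;
        MPI_Comm_rank(merged, &rank);
        if (rank == 0)
            value = 123;        /* placeholder payload chosen by the new rank 0 */

        /* A plain intracommunicator broadcast; per the advice above this
         * path is generally better optimized in Open MPI than the
         * intercommunicator version. */
        MPI_Bcast(&value, 1, MPI_INT, 0, merged);

        MPI_Comm_free(&merged);
    }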