Re: [OMPI users] RPM build errors when creating multiple rpms

2008-03-25 Thread Jeff Squyres
On Mar 19, 2008, at 1:48 PM, Michael Jennings wrote: On Tuesday, 18 March 2008, at 18:18:36 (-0700), Christopher Irving wrote: Well, you're half correct. You're thinking that _prefix is always defined as /usr. No, actually I'm not. :) But in the case where install_in_opt is defined they have

Re: [OMPI users] RPM build errors when creating multiple rpms

2008-03-25 Thread Jeff Squyres
Sorry for the delay in replying; I got caught up in other things... On Mar 18, 2008, at 3:15 PM, Christopher Irving wrote: Okay, I'm no longer sure to which spec file you're referring. I was referring to the one on the SVN trunk: https://svn.open-mpi.org/trac/ompi/browser/trunk/contrib/dist/l

Re: [OMPI users] Unexpected compile error setting FILE_NULL Errhandler using C++ Bindings

2008-03-25 Thread Jeff Squyres
This whole issue came up recently in the MPI Forum (the const-ness [or not] of the MPI C++ objects). I am a fervent believer that all the predefined C++ MPI objects should be const and that any MPI function that allows predefined handles as an argument should be a const argument. This go
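For context, a minimal C sketch (not from the thread) of the operation under discussion: installing an error handler on MPI_FILE_NULL, which changes the default handler inherited by files opened afterwards. The C binding shown here sidesteps the const question entirely; in the C++ bindings the equivalent call would be MPI::FILE_NULL.Set_errhandler(), which appears to be where the compile error in the subject line comes from.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Setting an error handler on MPI_FILE_NULL changes the default
           handler inherited by files opened later.  MPI_ERRORS_RETURN is
           already the default for files, so this call is purely
           illustrative. */
        MPI_File_set_errhandler(MPI_FILE_NULL, MPI_ERRORS_RETURN);

        MPI_File fh;
        int rc = MPI_File_open(MPI_COMM_WORLD, "does-not-exist",
                               MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
        if (rc != MPI_SUCCESS)
            printf("open failed, and MPI_ERRORS_RETURN let us handle it\n");
        else
            MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }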

[OMPI users] Propagate Data Transfer

2008-03-25 Thread Samir Faci
Hello All, I currently have an application that works perfectly fine on 2 cores, but I'm moving it to an 8-core machine and I was wondering whether Open MPI had a built-in solution, or whether I would have to code this out manually. Currently, I'm processing on core X and sending the data to core 0 where
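For the "every rank sends its result to rank 0" pattern described in this (truncated) message, a collective such as MPI_Gather is the usual built-in answer. A minimal sketch, assuming each rank produces a single double as its result; the payload type and values are illustrative only:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank computes one local result... */
        double local = 2.0 * rank;            /* placeholder for real work */

        /* ...and rank 0 collects all of them in one call, regardless of
           whether the job runs on 2 cores or 8. */
        double *all = NULL;
        if (rank == 0)
            all = malloc(size * sizeof(double));
        MPI_Gather(&local, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("result from rank %d: %g\n", i, all[i]);
            free(all);
        }

        MPI_Finalize();
        return 0;
    }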

Re: [OMPI users] [gent...@gmx.de: Re: 2 questions about Open MPI]

2008-03-25 Thread Andreas Schäfer
Hi, On 19:38 Tue 25 Mar , powernetfr...@surfeu.de wrote: > And am I also right in thinking that Open MPI supports task-level > parallelism? Because I think that I can create a process for each task (or create a > task pool) and let them communicate over MPI. > Am I right? You could do so
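For illustration (not from the thread itself), a bare-bones master/worker task pool of the kind described in the quoted question might look like the sketch below; the integer "tasks" and "results", the task count, and the tag values are all arbitrary placeholders:

    #include <mpi.h>
    #include <stdio.h>

    #define TAG_WORK 1
    #define TAG_STOP 2

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int ntasks = 20;                /* illustrative task count */

        if (rank == 0) {
            /* Master: hand out tasks as workers become free. */
            int sent = 0, received = 0;
            for (int w = 1; w < size && sent < ntasks; w++, sent++)
                MPI_Send(&sent, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);

            while (received < sent) {
                int result;
                MPI_Status st;
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                received++;
                if (sent < ntasks) {
                    MPI_Send(&sent, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                             MPI_COMM_WORLD);
                    sent++;
                }
            }
            /* Tell every worker to stop. */
            for (int w = 1; w < size; w++) {
                int dummy = 0;
                MPI_Send(&dummy, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
            }
        } else {
            /* Worker: receive tasks until told to stop. */
            while (1) {
                int task;
                MPI_Status st;
                MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_STOP)
                    break;
                int result = task * task;     /* placeholder work */
                MPI_Send(&result, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }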

Re: [OMPI users] [gent...@gmx.de: Re: 2 questions about Open MPI]

2008-03-25 Thread powernetfr...@surfeu.de
And am I also right in thinking that Open MPI supports task-level parallelism? Because I think that I can create a process for each task (or create a task pool) and let them communicate over MPI. Am I right? Thanks. Kind regards, Peter >Original message >From: gent...@gmx.de >Da

Re: [OMPI users] communicating with the caller

2008-03-25 Thread George Bosilca
MPI-2 standard, Chapter 5 (Process Creation and Management), Section 5.4. george. On Mar 25, 2008, at 12:37 PM, jody wrote: Could you explain what you mean by "comm accept/connect"? jody On Tue, Mar 25, 2008 at 4:06 PM, George Bosilca wrote: There is a chapter in the MPI standard about thi
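For readers following along, a rough sketch (not taken from the thread) of what comm accept/connect looks like on each side. The port name printed by the server has to be handed to the client out of band, e.g. on the command line, and will usually need quoting in a shell:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Comm inter;
        if (argc < 2) {
            /* Server side: open a port and wait for a connection. */
            char port[MPI_MAX_PORT_NAME];
            MPI_Open_port(MPI_INFO_NULL, port);
            printf("server port: %s\n", port);    /* pass this to the client */
            MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);

            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, inter, MPI_STATUS_IGNORE);
            printf("server received %d\n", value);
            MPI_Close_port(port);
        } else {
            /* Client side: connect using the port name from the command line. */
            MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);

            int value = 42;
            MPI_Send(&value, 1, MPI_INT, 0, 0, inter);
        }

        MPI_Comm_disconnect(&inter);
        MPI_Finalize();
        return 0;
    }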

Re: [OMPI users] communicating with the caller

2008-03-25 Thread jody
Could you explain what you mean by "comm accept/connect"? jody On Tue, Mar 25, 2008 at 4:06 PM, George Bosilca wrote: > There is a chapter in the MPI standard about this. Usually, people > will use comm accept/connect to do this kind of thing. No need to > have your own communication protoco

[OMPI users] [gent...@gmx.de: Re: 2 questions about Open MPI]

2008-03-25 Thread Andreas Schäfer
Hi there, On 13:34 Tue 25 Mar , powernetfr...@surfeu.de wrote: > So Open MPI is OS dependent and actually it doesn't support Windows > platforms. Equating "not running on Windows" to "OS dependent" is a bit harsh, as Open MPI will run on any Unixish OS (Linux, Solaris, BSD...). You could have s

Re: [OMPI users] communicating with the caller

2008-03-25 Thread George Bosilca
There is a chapter in the MPI standard about this. Usually, people will use comm accept/connect to do this kind of thing. No need to have your own communication protocol. george. On Mar 25, 2008, at 10:32 AM, slimti...@gmx.de wrote: I'm new to Open MPI and would like to know whether there

Re: [OMPI users] 2 questions about Open MPI

2008-03-25 Thread George Bosilca
On Mar 25, 2008, at 8:34 AM, powernetfr...@surfeu.de wrote: Hello, thanks for your help. So Open MPI is OS dependent and actually it doesn't support Windows platforms. I would like to know if (Open) MPI supports data decomposition and/or task-level parallelism. I think that MPI supports task lev
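On the data-decomposition point, a common illustration (again not from the truncated reply above) is that MPI provides the communication primitives while the decomposition itself is expressed by the programmer, e.g. by scattering an array across ranks; the chunk size and data below are arbitrary:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK 4   /* elements per rank, illustrative */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *data = NULL;
        if (rank == 0) {
            /* The full array lives on rank 0; the decomposition is explicit. */
            data = malloc(size * CHUNK * sizeof(int));
            for (int i = 0; i < size * CHUNK; i++)
                data[i] = i;
        }

        int local[CHUNK];
        MPI_Scatter(data, CHUNK, MPI_INT, local, CHUNK, MPI_INT, 0,
                    MPI_COMM_WORLD);

        int local_sum = 0;
        for (int i = 0; i < CHUNK; i++)
            local_sum += local[i];

        int total = 0;
        MPI_Reduce(&local_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("total = %d\n", total);
            free(data);
        }

        MPI_Finalize();
        return 0;
    }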

[OMPI users] communicating with the caller

2008-03-25 Thread slimtimmy
I'm new to Open MPI and would like to know whether there is a common way for a caller of mpirun to communicate with the MPI processes. Basically I have a setup where one process is responsible for distributing jobs to other MPI processes and collecting the respective results afterwards. Now for exa

Re: [OMPI users] 2 questions about Open MPI

2008-03-25 Thread powernetfr...@surfeu.de
Hello, thanks for your help. So Open MPI is OS dependent and actually it doesn't support Windows platforms. I would like to know if (Open) MPI supports data decomposition and/or task-level parallelism. I think that MPI supports task-level parallelism. But I also think that Open MPI doesn't support d

Re: [OMPI users] 2 questions about Open MPI

2008-03-25 Thread Jeff Squyres
On Mar 25, 2008, at 5:09 AM, powernetfr...@surfeu.de wrote: Hello, I need some information for my thesis and I am not sure whether what I found on the internet is right. Therefore I want to ask you if the following two sentences are right: - Open MPI is OS independent and it runs on windo

[OMPI users] 2 questions about Open MPI

2008-03-25 Thread powernetfr...@surfeu.de
Hello, I need some information for my thesis and I am not sure whether what I found on the internet is right. Therefore I want to ask you if the following two sentences are right: - Open MPI is OS independent and it runs on Windows as well as on Linux - Open MPI doesn't have data decomp