Whenever I have seen this problem, it has been because of a mismatch between
mpirun and the back-end libraries that are linked against the executable. For
example, the app was compiled/linked against Open MPI a.b.c and either some
other mpirun was used (e.g., from MPICH) or the mpirun from a different Open
MPI installation was picked up.
--td
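A quick way to rule this mismatch out (a sketch, assuming a typical Open MPI install; ./Program stands in for the actual executable) is to compare the mpirun found on the PATH with the MPI library the executable was linked against:

    which mpirun
    mpirun --version
    ompi_info | grep "Open MPI:"
    ldd ./Program | grep -i mpi        # on Linux
    otool -L ./Program | grep -i mpi   # on Mac OS X

If mpirun/ompi_info report one Open MPI version while the executable links against a different installation's libmpi, that mismatch is the likely cause.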
Date: Sun, 18 Apr 2010 17:15:04 +0200
From: Mario Ogrizek
Subject: Re: [OMPI users] Fwd: Open MPI v1.4 cant find default hostfile
To: Open MPI Users
Message-ID:
Content-Type: text/plain; charset="utf-8"
It is a parallel tools platform for the Eclipse IDE, a plugin.
I don't think it is the source of the problem.
Afraid I can't help you - I've never seen that behavior on any system, can't
replicate it anywhere, and have no idea what might cause it.
On Apr 18, 2010, at 9:15 AM, Mario Ogrizek wrote:
> It is a parallel tools platform for the Eclipse IDE, a plugin.
> I don't think it is the source of the problem.
>
>
It is a parallel tools platform for the Eclipse IDE, a plugin.
I don't think it is the source of the problem.
The same thing happens when running it from the shell. It has something to do
with mapping or something else, since it always maps for job 0, whatever
that means.
On Sun, Apr 18, 2010 at 4:50 PM, Ralph Castain wrote:
Again, what is PTP?
I can't replicate this on any system we can access, so it may be something
about this PTP thing.
On Apr 18, 2010, at 1:37 AM, Mario Ogrizek wrote:
> Of course I checked that, I have all of these things.
> I simplified the program, and it's the same.
> Nothing gave me a clue, except the more detailed output from PTP.
Of course I checked that, I have all of these things.
I simplified the program, and it's the same.
Nothing gave me a clue, except the more detailed output from PTP.
Here is the critical part of it (this is from the 1.2 install, which is correct):
[Mario.local:05548] Map for job: 1 Generated by mapping mode: byslot
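For reference, the same mapping information can be printed without PTP. Assuming an Open MPI 1.3/1.4 mpirun, and with ./Program again standing in for the executable, something like

    mpirun --display-map -np 4 ./Program

shows the process map that mpirun computes before launching, so the 1.2 and 1.4 installations can be compared directly.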
Just to check what is going on, why don't you remove that message passing code
and just call
    printf("Hello MPI World from process %d!", my_rank);
in each process? Much more direct - avoids any ambiguity.
Also, be certain that you compile this program for the specific OMPI version
you are running it under.
Of course, it's the same program; it wasn't recompiled for a week.
#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int my_rank;   /* rank of process */
    int p;         /* number of processes */
    int source;    /* rank of sender */
    int dest;      /* rank of receiver */
    int tag = 0;   /* tag for messages */
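The archive truncates the listing at this point. As a reconstruction only (assuming the classic send/receive greeting pattern that the source/dest/tag declarations and the "Num processes" output elsewhere in the thread suggest; Mario's actual body is not shown), a complete version might look like this:

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int my_rank;        /* rank of process */
    int p;              /* number of processes */
    int source;         /* rank of sender */
    int dest;           /* rank of receiver */
    int tag = 0;        /* tag for messages */
    char message[100];  /* buffer for the greeting */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (my_rank != 0) {
        /* non-root ranks send their greeting to rank 0 */
        sprintf(message, "Hello MPI World from process %d: Num processes: %d",
                my_rank, p);
        dest = 0;
        MPI_Send(message, (int)(strlen(message) + 1), MPI_CHAR, dest, tag,
                 MPI_COMM_WORLD);
    } else {
        /* rank 0 prints its own greeting, then the ones it receives */
        printf("Hello MPI World from process %d: Num processes: %d\n", my_rank, p);
        for (source = 1; source < p; source++) {
            MPI_Recv(message, 100, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
    }

    MPI_Finalize();
    return 0;
}

Ralph's suggested sanity check amounts to dropping the if/else block and keeping only the printf in every rank.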
On Apr 17, 2010, at 11:17 AM, Mario Ogrizek wrote:
> Hahaha, ok then that WAS silly! :D
> So there is no way to utilize both cores with MPI?
We are using both cores - it is just that they are on the same node. Unless
told otherwise, the processes will use shared memory for communication.
>
>
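If the goal is simply to use both cores of a single machine, the usual approach (a sketch, assuming Open MPI's hostfile syntax and a machine reachable as localhost; ./Program is a placeholder) is to list the node once with a slot count rather than inventing two node names:

    # hostfile: one node, two slots (one per core)
    localhost slots=2

Running mpirun -np 2 ./Program against such a hostfile places both processes on that node, and as noted above they will communicate through shared memory.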
Hahaha, ok then that WAS silly! :D
So there is no way to utilize both cores with MPI?
Ah well, I'll correct that.
From the console, I'm starting a job like this: mpirun -np 4 Program, where I
want to run Program on 4 processors.
I was just stumped when I got the same output 4 times, as if there were 4
separate single-process runs.
On Apr 17, 2010, at 1:16 AM, Mario Ogrizek wrote:
> I am new to MPI, so I'm sorry for any silly questions.
>
> My idea was to try to use a dual-core machine as two nodes. I have limited
> access to a cluster, so this was just for "testing" purposes.
> My default hostfile contains the usual comments and these two nodes, node0 and node1.
I am new to MPI, so I'm sorry for any silly questions.
My idea was to try to use a dual-core machine as two nodes. I have limited
access to a cluster, so this was just for "testing" purposes.
My default hostfile contains the usual comments and these two nodes:
node0
node1
I thought that each processor core would act as a separate node.
On Apr 16, 2010, at 5:08 PM, Mario Ogrizek wrote:
> I checked the default MCA param file, and found that that was where the hostfile
> was (automatically) specified as a relative path, so I changed it.
> So now it works, although something is still not right.
> It seems like it's creating only 1 process, 4 times over.
>
I checked the default MCA param file, and found that that was where the hostfile
was (automatically) specified as a relative path, so I changed it.
So now it works, although something is still not right.
It seems like it's creating only 1 process, 4 times over.
Not sure if it has something to do with my hostfile; it contains just the two nodes, node0 and node1.
I understand, so it's looking for
working_dir/usr/local/etc/openmpi-default-hostfile.
I managed to run a hello world program from the console while my working
directory was just "/", and it worked, although strangely...
Example for 4 procs:
Hello MPI World From process 0: Num processes: 1
Hello MPI World From process 0: Num processes: 1
How did you specify it? Command line? Default MCA param file?
On Apr 16, 2010, at 11:44 AM, Mario Ogrizek wrote:
> Any idea how to solve this?
>
> On Fri, Apr 16, 2010 at 7:40 PM, Timur Magomedov wrote:
> Hello.
> It looks like your hostfile path should
> be /usr/local/etc/openmpi-default-hostfile
The problem is that you gave us a relative path - is that where the file is
located?
The system is looking for usr/local/etc/openmpi-default-hostfile relative to
your current working directory. If you want us to look in /usr/local/etc, then
you have to give us that absolute path.
We don't care where the file actually lives - you just have to give us a path that points to it.
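In practice the fix is simply to make the path absolute wherever the hostfile is specified. A sketch of the two usual places, assuming a default install under /usr/local and an Open MPI 1.3/1.4 series where the parameter is named orte_default_hostfile (./Program is a placeholder):

    # on the command line
    mpirun --hostfile /usr/local/etc/openmpi-default-hostfile -np 4 ./Program

    # or in the default MCA param file, e.g. /usr/local/etc/openmpi-mca-params.conf
    orte_default_hostfile = /usr/local/etc/openmpi-default-hostfile

Either form avoids the relative-path lookup described above.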
Any idea how to solve this?
On Fri, Apr 16, 2010 at 7:40 PM, Timur Magomedov
<timur.magome...@developonbox.ru> wrote:
> Hello.
> It looks like your hostfile path should
> be /usr/local/etc/openmpi-default-hostfile, not
> usr/local/etc/openmpi-default-hostfile, but somehow Open MPI gets the
> second path.
Hello.
It looks like your hostfile path should
be /usr/local/etc/openmpi-default-hostfile, not
usr/local/etc/openmpi-default-hostfile, but somehow Open MPI gets the
second path.
On Fri, 16/04/2010 at 19:10 +0200, Mario Ogrizek wrote:
> Well, I'm not sure why I should name it /openmpi-default-hostfile
>
Well, I'm not sure why I should name it /openmpi-default-hostfile,
especially because mpirun v1.2 executes without any errors.
But I made a copy named /openmpi-default-hostfile, and still the same result.
This is the whole error message for a simple hello world program:
Open RTE was unable to open the hostfile: