in be automounted, this time by the MPI procs.
We created that option specifically to address the problem you describe.
Hope it helps.
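For reference, a minimal sketch of the MPI_Info "wdir" approach discussed in this thread; the worker path /local/folder/worker and the process count are hypothetical:

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm child;
    MPI_Info info;

    MPI_Init(&argc, &argv);
    MPI_Info_create(&info);
    /* ask the runtime to start the spawned workers in /local/folder,
       instead of inheriting the parent's working directory */
    MPI_Info_set(info, "wdir", "/local/folder");
    MPI_Comm_spawn("/local/folder/worker", MPI_ARGV_NULL, 4, info,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}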
On Apr 16, 2009, at 8:57 AM, Jerome BENOIT wrote:
Hi,
thanks for the reply.
Ralph Castain wrote:
The orteds don't pass anything from MPI_Info to srun
it be on every
remote node, etc?
All things are doable - the devil is in defining the details. :-)
On Apr 16, 2009, at 8:23 AM, Jerome BENOIT wrote:
Hi !
thanks for the reply.
On a cluster whose workers have no home directory, when the worker programs are spawned
via MPI_Comm_spawn{,_multiple},
it would be nice to specify an alternative working directory.
Jerome
On Apr 16, 2009, at 4:02 AM, Jerome BENOIT wrote:
Hi !
finally I got it:
passing the mca key/value `"plm_slurm_args"/"--chdir /local/folder"'
does the trick.
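For reference, the same MCA parameter can also be set outside the code, e.g. on the mpirun command line (a sketch; the application name is hypothetical):

  mpirun --mca plm_slurm_args "--chdir /local/folder" -np 4 ./spawner

or exported beforehand as OMPI_MCA_plm_slurm_args="--chdir /local/folder".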
As a matter of fact, my code passes the MPI_Info key/value
`"wdir"/"/local/folder"'
On Apr 15, 2009, at 11:00 PM, Jerome BENOIT wrote:
Hello List,
in FAQ Running MPI jobs, point 12, we read:
-wdir : Set the working directory of the started applications.
If not supplied, the current working directory is assumed
(or $HOME, if the current working directory does not exist on all nodes).
The working directories on the nodes of the spawned programs
are `nodes:/local/folder' as expected, but the working directory of the orteds
is the working directory of the parent program. My guess is that the MPI_Info
key/value
may also be passed to `srun'.
hth,
Jerome
Jerome BENOIT wrote:
Hello Again,
Jerome BENOIT wrote:
Hello List,
I have just noticed that, when MPI_Comm_spawn is used to launch programs around,
the orted working directory on the nodes is the working directory of the
spawning program:
can we ask orted to use another directory ?
Changing the working
Hello List,
I have just noticed that, when MPI_Comm_spawn is used to launch programs around,
the orted working directory on the nodes is the working directory of the spawning
program:
can we ask orted to use another directory ?
Thanks in advance,
Jerome
Hello List,
in FAQ Running MPI jobs, point 12, we read:
-wdir : Set the working directory of the started applications.
If not supplied, the current working directory is assumed
(or $HOME, if the current working directory does not exist on all nodes).
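For example (a sketch, with a hypothetical application name), a command such as

  mpirun -np 4 -wdir /local/folder ./phello

starts every rank with /local/folder as its working directory, per the FAQ text above.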
Is there a way to configure the default alternative directory ?
You can set MCA parameters in your environment, put them in your personal params
file, or in the master params file. Just make sure that those files are
either the same on all nodes or visible on all nodes. :-)
I had my lesson !
Thanks,
Jerome
http://www.open-mpi.org/faq/?category=tuning#setting-mca-params
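For reference, a minimal sketch of such a params file; the port value is the one from this thread, and the usual locations are $HOME/.openmpi/mca-params.conf (personal) and /etc/openmpi/openmpi-mca-params.conf (master/system-wide):

  # lowest TCP port the TCP BTL will try to bind
  btl_tcp_port_min_v4 = 49152

As noted above, make sure the file is identical on, or visible from, every node.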
On Apr 3, 2009,
error message that you showed?
On Mar 27, 2009, at 5:52 AM, Manuel Prinz wrote:
On Friday, 27.03.2009, at 20:34 +0800, Jerome BENOIT wrote:
> I have just tried the Sid package (1.3-2), but it does not work properly
> (when the firewall is off)
Though this should work, the version
Hello List,
Dirk Eddelbuettel wrote:
On 3 April 2009 at 06:35, Jerome BENOIT wrote:
| It appeared that the file /etc/openmpi/openmpi-mca-params.conf on node green
was the only one
| in the cluster to contain the line
|
| btl_tcp_port_min_v4 = 49152
Great -- so can we now put your claims
Set MCA parameter "orte_base_help_aggregate" to 0 to see all
help / error messages
[rainbow:07504] 1 more process has sent help message help-mpi-runtime /
mpi_init:startup:internal-failure
I would like to know what is to blame:
the btl_tcp_port_min_v4 (recent) feature ?
or the local SLURM setup ?
If the local SLURM setup is bad, what may be wrong ?
Hi !
Dirk Eddelbuettel wrote:
On 3 April 2009 at 03:33, Jerome BENOIT wrote:
| The above submission works the same on my clusters.
| But in fact, my issue involves the interconnection between the nodes of the
clusters:
| the above examples involve no connection between nodes.
|
| My cluster is
Hi Again !
Dirk Eddelbuettel wrote:
Works for me (though I prefer salloc), suggesting that you did something to
your network topology or Open MPI configuration:
:~$ cat /tmp/jerome_hw.c
// mpicc -o phello phello.c
// mpirun -np 5 phello
#include <stdio.h>
#include <unistd.h>
#include <mpi.h>
int main(int narg, char *args[]) {
    char host[256]; int rank;
    MPI_Init(&narg, &args);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));
    printf("Hello from rank %d on %s\n", rank, host);
    MPI_Finalize(); return 0;
}
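For completeness, a sketch of how such a test is usually launched under the SLURM setups discussed in this thread (node and process counts are arbitrary):

  salloc -n 5 mpirun ./phello

or the same mpirun line placed inside an sbatch script.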
Original Message
Subject: Re: [OMPI users] openmpi 1.3.1: bind() failed: Permission denied (13)
Date: Fri, 03 Apr 2009 02:41:01 +0800
From: Jerome BENOIT
Reply-To: ml.jgmben...@mailsnare.net
To: Dirk Eddelbuettel
CC: ml.jgmben
, Jerome BENOIT wrote:
Hello List,
I have just tried the current SVN Debian package:
it does not work even without a firewall.
Please find attached my test files and the associated outputs.
hth,
Jerome
Manuel Prinz wrote:
> On Friday, 27.03.2009, at 20:34 +0800, Jerome BENOIT wrote:
Is there a firewall somewhere ?
Jerome
Guanyinzhu wrote:
Hi!
I'm using OpenMPI 1.3 on ten nodes connected with Gigabit Ethernet on
Redhat Linux x86_64.
I ran a test like this: I just killed the orted process and the job hung
for a long time (it hung for 2~3 hours, then I killed the job).
I h
http://www.open-mpi.org/community/help/
We need to know exactly how you are invoking mpirun, what MCA parameters
have been set, etc.
On Mar 28, 2009, at 12:37 PM, Jerome BENOIT wrote:
Hello List,
I have just tried the current SVN Debian package:
it does not work even without a firewall.
Please find in
Hello List,
I have just tried the current SVN Debian package:
it does not work even without a firewall.
Please find attached my test files and the associated outputs.
hth,
Jerome
Manuel Prinz wrote:
On Friday, 27.03.2009, at 20:34 +0800, Jerome BENOIT wrote:
I have just tried the Sid
to generate the error message
that you showed?
On Mar 27, 2009, at 5:52 AM, Manuel Prinz wrote:
On Friday, 27.03.2009, at 20:34 +0800, Jerome BENOIT wrote:
> I have just tried the Sid package (1.3-2), but it does not work properly
> (when the firewall is off)
Though this should work
Hello List,
Manuel Prinz wrote:
On Friday, 27.03.2009, at 11:01 +0800, Jerome BENOIT wrote:
Finally I succeeded with the sbatch approach ... when my firewall is
stopped !
So I guess that I have to configure my firewall (I use firehol):
I have just tried but without success. I will try
Original Message
Subject: Re: [OMPI users] Configure OpenMPI and SLURM on Debian (Lenny)
Date: Fri, 27 Mar 2009 04:36:39 +0800
From: Jerome BENOIT
Reply-To: jgmben...@mailsnare.net
Organization: none
CC: Open MPI Users
References
Hello Again !
Manuel Prinz wrote:
Hi Jerome!
On Tuesday, 24.03.2009, at 16:27 +0800, Jerome BENOIT wrote:
With LAM some configuration files must be set up, I guess it is the same here.
But as SLURM is also involved, it is not clear to me right now how I must
configure both SLURM and
Hi Manuel !
I read what you said on the web before I sent my email.
But it does not work with my sample. It is an old LAM C source.
Anyway, thanks a lot for your reply.
Jerome
Manuel Prinz wrote:
Hi Jerome!
On Tuesday, 24.03.2009, at 16:27 +0800, Jerome BENOIT wrote:
With LAM some
Hello List,
I have just installed OpenMPI on a Lenny cluster where SLURM works fine.
I can compile a sample C source of mine (I used LAM a few years ago),
but I cannot run it on the cluster.
With LAM some configuration files must be set up, I guess it is the same here.
But as SLURM is also invo