Does the Open MPI mpirun command have an equivalent of LAM's "-O" option
(homogeneous universe)?
I would like to avoid the automatic byte swapping in a heterogeneous execution
environment.
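(The only workaround I can think of - and this is just a guess on my side; the
prefix path and the --disable-heterogeneous spelling are my own assumptions -
would be to rebuild Open MPI without heterogeneous support, so that no data
conversion is compiled in at all:
  ./configure --prefix=/opt/openmpi --disable-heterogeneous
  make all install
but a runtime switch equivalent to LAM's -O would be much more convenient.)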
Thanks in advance
Geoffroy
Hi,
Has anybody faced problems running Open MPI on two hosts on different
networks (with a gateway needed to reach the other)?
Let's say compil02's IP address is 172.3.9.10 and r009n001's is 10.160.4.1.
There is no problem with MPI_Init-free executables (for example, hostname):
compil02% /tmp/HALMPI/openmpi-1.2.2/bin
>
> are the 172.x.y.z nodes behind a NAT (hence the communication back
> isn't possible - only the stdout from the rsh/ssh is working in this
> case)?
>
> -- Reuti
Actually I don't know exactly; I am asking my network architect for more
information.
An interesting thing to notice is that LAM works.
Here is what netstat shows for the orted process:
76348311 20956/orted
unix 3 [ ] STREAM CONNECTED 76348310 20956/orted
I hope it'll help.
Does anyone run Open MPI programs in such an environment?
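Would restricting the TCP traffic to a single interface help to narrow this
down? Something along these lines is what I had in mind (eth0 is only a guess
at the right interface name on my nodes, ./hello is just my test binary, and
the out-of-band channel may need a matching include/exclude parameter as well):
  mpirun --mca btl_tcp_if_include eth0 -n 2 -host compil02,r009n001 ./hello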
Thanks again
Geoffroy
2008/7/2 Geoffroy Pignot :
> are the 172.x.y.z nodes behind a NAT (hence the communication back
> >>>>>>>> complains with:
> >>>>>>>>
> >>>>>>>> "There are not enough slots available in the system to satisfy
> >>>>>>>> the 4 slots t
Geoffroy Pignot
> Hi Lenny and Ralph,
>
> I saw nothing about the rankfile in the 1.3.3 press release. Does that mean
> the bug fixes are not included there?
> Thanks
>
> Geoffroy
>
> 2009/7/15
>
Hello,
I'm currently trying the new release, but I can't reproduce the 1.2.8
behaviour concerning the --wdir option.
With 1.2.8:
%% /tmp/openmpi-1.2.8/bin/mpirun -n 1 --wdir /tmp --host r003n030 pwd : \
     --wdir /scr1 -n 1 --host r003n031 pwd
/scr1
/tmp
but
%% /tmp/openmpi-1.3/bin/mpirun -n
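(assuming the 1.3 invocation mirrors the 1.2.8 one above, the full command
would be:
  %% /tmp/openmpi-1.3/bin/mpirun -n 1 --wdir /tmp --host r003n030 pwd : \
       --wdir /scr1 -n 1 --host r003n031 pwd
but it does not give the per-context working directories shown above)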
Hello,
As you can see, I am trying out the work done in this new release.
The --preload-files and --preload-binary options are very interesting to me
because I work on a cluster without any shared filesystem between the nodes.
I tried them in a basic way, but with no success. You will find the error
messages below.
If I
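For reference, the kind of invocations I am experimenting with look like this
(the binary and file names, and the hosts, are only examples, and my reading
of the two options may well be wrong):
  mpirun --preload-binary -n 2 -host r003n030,r003n031 ./hello
  mpirun --preload-files input.dat -n 2 -host r003n030,r003n031 ./hello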
Hello,
I redid a few tests with my hello world; here are my results.
First of all, my configuration:
configure --prefix=/tmp/openmpi-1.3 --libdir=/tmp/openmpi-1.3/lib64 \
  --enable-heterogeneous
You will find my "ompi_info -param all all" output attached.
compil02 and compil03 are identical RH43 64-bit nodes.
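As a quick sanity check on that build (assuming ompi_info reports a
"Heterogeneous support" line), I also run:
  ompi_info | grep -i hetero
to confirm that heterogeneous support was really compiled in.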
*Tes
ting
> your path to point at another OMPI installation. The fact that you can
> run at all would seem to indicate that things are okay, but I honestly
> have no ideas at this stage as to why you are seeing this behavior.
>
> Sorry I can't be of more help...
> Ralph
>
> On Ja
Hi,
I am currently testing the process affinity capabilities of Open MPI, and I
would like to know whether the rankfile behaviour I describe below is normal
or not.
cat hostfile.0
r011n002 slots=4
r011n003 slots=4
cat rankfile.0
rank 0=r011n002 slot=0
rank 1=r011n003 slot=1
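The command I use to launch with them is along these lines (hello.x is just my
test binary):
  mpirun -hostfile hostfile.0 -rf rankfile.0 -n 2 hello.x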
--
> --
> mpirun noticed that the job aborted, but has no info as to the process
> that caused that situation.
>
> The ranks in your rankfile must correspond to
> the eventual rank of each process in the cmd line.
>
> Unfortunately, that means you have to count ranks. In your case, you
> only have four, so that makes life easier. Your rankfile would look
> something like this:
>
> rank 0=r
Hi,
I am not sure it's a bug, but I think we expect something else to happen when
we kill a process - by the way, the signal propagation works well.
I read an explanation in a previous thread
(http://www.open-mpi.org/community/lists/users/2009/03/8514.php).
It's not important, but it could contrib
>
> Ah now, I didn't say it -worked-, did I? :-)
>
> Clearly a bug exists in the program. I'll try to take a look at it (if
> Lenny doesn't get to it first), but it won't be until later
>
> Honestly haven't had time to look at it yet - hopefully in the next
> couple of days...
>
> Sorry for delay
>
>
> On Apr 20, 2009, at 2:58 AM, Geoffroy Pignot wrote:
>
> > Do you have any news abo
Hi Lenny,
Here is the basic mpirun command I would like to run:
mpirun -rf rankfile -n 1 -host r001n001 master.x options1 : \
       -n 1 -host r001n002 master.x options2 : \
       -n 1 -host r001n001 slave.x options3 : \
       -n 1 -host r001n002 slave.x options4
with cat rankfile:
rank 0=r001n001 slot=0:*
rank 1=r001
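For illustration, a rankfile covering all four app contexts could look like
this (the slot assignments are only assumed):
rank 0=r001n001 slot=0:*
rank 1=r001n002 slot=0:*
rank 2=r001n001 slot=1:*
rank 3=r001n002 slot=1:*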
. Any
> feedback would be appreciated.
>
> Ralph
>
>
> On Apr 14, 2009, at 7:57 AM, Ralph Castain wrote:
>
> Ah now, I didn't say it -worked-, did I? :-)
>
> Clearly a bug exists in the program. I'll try to take a look at it (if
> Lenny
> doesn't g
t line 1016
Ralph, could you tell me whether my command syntax is correct or not? If it
is not, could you give me the expected one?
Regards
Geoffroy
2009/4/30 Geoffroy Pignot
> Immediately Sir !!! :)
>
> Thanks again Ralph
>
> Geoffroy
>
>
>
m r2
> or greater...such as:
>
> http://www.open-mpi.org/nightly/trunk/openmpi-1.4a1r21142.tar.gz
>
> HTH
> Ralph
>
>
> On May 4, 2009, at 2:14 AM, Geoffroy Pignot wrote:
>
> > Hi ,
> >
> > I got the openmpi-1.4a1r21095.tar.gz tarball, but
appening there.
> >
> > I'll have to dig into the code that specifically deals with parsing the
> > results to bind the processes. Afraid that will take a while longer -
> > pretty dark in that hole.
> >
> >
> >
> > On Mon, May 4, 2009 at 8:
ading, trying and deploying the next official
release
Regards
Geoffroy
2009/5/4 Geoffroy Pignot
> Hi Ralph
>
> Thanks for your extra tests. Before leaving, I just pointed out a problem
> coming from running PLPA across different RH distributions (i.e. different
> Linux kernels