t for each process separately.
Jody
On Mon, Jul 26, 2010 at 4:08 AM, Jack Bryan wrote:
> Dear All,
> I run a 6 parallel processes on OpenMPI.
> When the run-time of the program is short, it works well.
> But, if the run-time is long, I got errors:
> [n124:45521] *** Process receive
ct which ranks should open an xterm.
(Again check the man pages of mpirun)
Jody
On Mon, Jul 26, 2010 at 8:55 AM, Jack Bryan wrote:
> Thanks
> It can be installed on linux and work with gcc ?
> If I have many processes, such as 30, I have to open 30 terminal windows ?
> thanks
> Jack
Hi
@Ashley:
What is the exact semantics of an asynchronous barrier,
and is it part of the MPI specs?
Thanks
Jody
On Thu, Sep 9, 2010 at 9:34 PM, Ashley Pittman wrote:
>
> On 9 Sep 2010, at 17:00, Gus Correa wrote:
>
>> Hello All
>>
>> Gabrielle's que
Hi
I don't know if i correctly understand what you need, but have you
already tried MPI_Comm_spawn?
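Just as an illustration, a minimal spawn could look like this (the worker executable name "./worker" and the count of 4 are only placeholders):

  /* parent.c - minimal MPI_Comm_spawn sketch */
  #include <mpi.h>

  int main(int argc, char **argv) {
      MPI_Comm children;
      int errcodes[4];
      int work = 42;

      MPI_Init(&argc, &argv);

      /* launch 4 copies of "./worker"; they are reachable through a new
         intercommunicator */
      MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                     0, MPI_COMM_SELF, &children, errcodes);

      /* example: broadcast one value to all spawned workers */
      MPI_Bcast(&work, 1, MPI_INT, MPI_ROOT, children);

      MPI_Comm_disconnect(&children);
      MPI_Finalize();
      return 0;
  }

The workers would pick the value up via MPI_Comm_get_parent() and a matching MPI_Bcast.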
Jody
On Mon, Sep 20, 2010 at 11:24 PM, Mikael Lavoie wrote:
> Hi,
>
> I wanna know if it exist a implementation that permit to run a single host
> process on the master of the c
answered by trying, because it
depends strongly
on the volume of your messages and the quality of your hardware
(network and disk speed)
Jody
t looks like an OpenMPI-internal leak,
because it happens inside PMPI_Send.
On the other hand, i also call the function ConnectorBase::send()
several times from callers other than TileConnector,
but those calls don't show up in valgrind's output.
Does anybody have an idea what is happening here?
Thank You
jody
this server i could then send commands which changed the state of the
master.
Jody
On Tue, Oct 12, 2010 at 6:14 AM, Mahesh Salunkhe
wrote:
>
> Hello,
> Could you pl tell me how to connect a client(not in any mpi group ) to a
> process in a mpi group.
> (i.e. just like
I had this leak with OpenMPI 1.4.2
But in my case, there is no accumulation - when i repeat the same call,
no additional leak is reported for the second call
Jody
On Mon, Oct 18, 2010 at 1:57 AM, Ralph Castain wrote:
> There is no OMPI 2.5 - do you mean 1.5?
>
> On Oct 17, 2010, a
But shouldn't something like this show up in the other processes as well?
I only see that in the master process, but the slave processes also
send data to each other and to the master.
On Mon, Oct 18, 2010 at 2:48 PM, Ralph Castain wrote:
>
> On Oct 18, 2010, at 1:41 AM, jody wrote:
Hi
I don't know the reason for the strange behaviour, but anyway,
to measure time in an MPI application you should use MPI_Wtime(), not clock()
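For example, a minimal timing sketch (the sleep just stands in for the real work):

  #include <mpi.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);

      double t0 = MPI_Wtime();          /* wall-clock time in seconds */
      sleep(1);                         /* the work you want to time */
      double t1 = MPI_Wtime();

      printf("elapsed: %f s (timer resolution %g s)\n", t1 - t0, MPI_Wtick());

      MPI_Finalize();
      return 0;
  }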
regards
jody
On Wed, Oct 20, 2010 at 11:51 PM, Storm Zhang wrote:
> Dear all,
>
> I got confused with my recent C++ MPI program'
Hi Brandon
Does it work if you try this:
mpirun -np 2 --hostfile hosts.txt ilk
(see http://www.open-mpi.org/faq/?category=running#simple-spmd-run)
jody
On Sat, Oct 23, 2010 at 4:07 PM, Brandon Fulcher wrote:
> Thank you for the response!
>
> The code runs on my own machine as we
Where is the option 'default-hostfile' described?
It does not appear in mpirun's man page (for v. 1.4.2)
and i couldn't find anything like that with googling.
Jody
On Wed, Oct 27, 2010 at 4:02 PM, Ralph Castain wrote:
> Specify your hostfile as the default one:
>
>
gcc.i386 zlib.i386
(gdb)
I am using OpenMPI 1.4.2
Has anybody got an idea how i could find the problem?
Thank You
Jody
Hi Jack
Usually MPI_ERR_TRUNCATE means that the buffer you use in MPI_Recv
(or MPI::COMM_WORLD.Recv) is too small to hold the message coming in.
Check your code to make sure you assign enough memory to your buffers.
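For illustration, a minimal sketch (the sizes 100 and 60 are arbitrary): the count passed to MPI_Recv is the capacity of the buffer and must cover the largest message you expect.

  #include <mpi.h>
  #include <stdio.h>

  #define MAXLEN 100     /* upper bound on the number of doubles expected */

  int main(int argc, char **argv) {
      int rank, nrecv;
      double buf[MAXLEN];
      MPI_Status status;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          MPI_Send(buf, 60, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   /* 60 <= MAXLEN */
      } else if (rank == 1) {
          /* a count smaller than 60 here would raise MPI_ERR_TRUNCATE */
          MPI_Recv(buf, MAXLEN, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
          MPI_Get_count(&status, MPI_DOUBLE, &nrecv);
          printf("received %d doubles\n", nrecv);
      }
      MPI_Finalize();
      return 0;
  }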
regards
Jody
On Mon, Nov 1, 2010 at 7:26 AM, Jack Bryan wrote:
> HI,
>
Hi
On a newly installed 64bit linux (2.6.32-gentoo-r7) with gcc version 4.4.4
i can't compile even simple Open-MPI applications (OpenMPI 1.4.2).
The message is:
jody@aim-squid_0 ~/progs $ mpiCC -g -o HelloMPI HelloMPI.cpp
/usr/lib/gcc/x86_64-pc-linux-gnu/4.4.4/../../../../x86_64-pc-linux-gn
to compile:
jody@aim-squid_0 ~/progs $ mpiCC -g -o HelloMPI HelloMPI.cpp
Cannot open configuration file
/opt/openmpi-1.4.2-64/share/openmpi/mpiCC-wrapper-data.txt
Error parsing data file mpiCC: Not found
So again, it looked into the original installation directory of the
64-bit installation for some
ise diagnosis.
jody
On Mon, Nov 1, 2010 at 6:41 PM, Jack Bryan wrote:
> thanks
> I use
> double* recvArray = new double[buffersize];
> The receive buffer size
> MPI::COMM_WORLD.Recv(&(recvDataArray[0]), xVSize, MPI_DOUBLE, 0, mytaskTag);
> delete [] recvArray ;
>
an correctly start up
totalview) concerns
the hostfile and rankfile parameters of mpirun: how can i start an
open mpi application with
totalview so that my application starts the processes on the correct
processors as
defined in hostfile and rankfile?
Thank You
Jody
irun -np 5 --rankfile `rankcreate.sh 5` myApplication
May be this is of use for you
jody
On Fri, Dec 10, 2010 at 11:50 PM, Eugene Loh wrote:
> David Mathog wrote:
>
>> Also, in my limited testing --host and -hostfile seem to be mutually
>> exclusive.
>>
> No. You can u
Hi
if i remember correctly, "omp.h" is a header file for OpenMP, which is
not the same as Open MPI.
So it looks like you have to install OpenMP.
Then you can compile it with the compiler option -fopenmp (in gcc).
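Just to make the difference clear, a minimal OpenMP (not Open MPI) program looks like this; it is built with plain gcc, no mpicc involved:

  /* hello_omp.c - compile with: gcc -fopenmp hello_omp.c */
  #include <omp.h>
  #include <stdio.h>

  int main(void) {
      #pragma omp parallel
      {
          printf("thread %d of %d\n",
                 omp_get_thread_num(), omp_get_num_threads());
      }
      return 0;
  }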
Jody
On Thu, Dec 16, 2010 at 11:56 AM, Bernard Secher - SFME/LGLS
wrot
- is there a way to find this out?
Thank You
Jody
successfully; not
able to guarantee that all other processes were killed!
I think this is caused by the fact that on the 64Bit machine Open MPI
is also built as a 64 bit application.
How can i force OpenMPI to be built as a 32Bit application on a 64Bit machine?
Thank You
Jody
On Tue, Feb 1, 2011 at
Thanks all
I simply copied the 32Bit applications and now it works.
Thanks
Jody
On Wed, Feb 2, 2011 at 5:47 PM, David Mathog wrote:
> jody wrote:
>
>> How can i force OpenMPI to be built as a 32Bit application on a 64Bit
> machine?
>
> THe easiest way is not
Hi Massimo
Just to make sure: usually the MPI_ERR_TRUNCATE error is caused by
buffer sizes that are too small.
Can you verify that the buffers you are using are large enough to
hold the data they should receive?
Jody
On Sat, Feb 5, 2011 at 6:37 PM, Massimo Cafaro
wrote:
> Dear all,
>
>
Hi
At first glance i would say this is not an OpenMPI problem,
but a wrf problem (though i must admit i have no knowledge whatsoever with wrf).
Have you tried running a single instance of wrf.exe?
Have you tried to run a simple application (like a "hello world") on your nodes?
Jody
O
work
But i do have xauth data (as far as i know):
On the remote (squid_0):
jody@squid_0 ~ $ xauth list
chefli/unix:10 MIT-MAGIC-COOKIE-1 5293e179bc7b2036d87cbcdf14891d0c
chefli/unix:0 MIT-MAGIC-COOKIE-1 146c7f438fab79deb8a8a7df242b6f4b
chefli.uzh.ch:0 MIT-MAGIC-COOKIE-1 146c7f438
Hi Ralph
No, after the above error message mpirun has exited.
But i also noticed that it is not possible to ssh into squid_0 and open an xterm there:
jody@chefli ~/share/neander $ ssh -Y squid_0
Last login: Wed Apr 6 17:14:02 CEST 2011 from chefli.uzh.ch on pts/0
jody@squid_0 ~ $ xterm
xterm Xt error
ut with '-X' i still get those xauth warnings)
But the xterm option still doesn't work:
jody@chefli ~/share/neander $ mpirun -np 4 -host squid_0 -xterm 1,2
printenv | grep WORLD_RANK
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Warning: No xauth
Hi Ralph
Is there an easy way i could modify the OpenMPI code so that it would use
the -Y option for ssh when connecting to remote machines?
Thank You
Jody
On Thu, Apr 7, 2011 at 4:01 PM, jody wrote:
> Hi Ralph
> thank you for your suggestions. After some fiddling, i found that af
Hi
Unfortunately this does not solve my problem.
While i can do
ssh -Y squid_0 xterm
and this will open an xterm on my machine (chefli),
i run into problems with the -xterm option of openmpi:
jody@chefli ~/share/neander $ mpirun -np 4 -mca plm_rsh_agent "ssh
-Y" -host squid_0
Hi Ralph
Thank you for your suggestions.
I'll be happy to help you.
I'm not sure if i'll get around to this tomorrow,
but i certainly will do so on Monday.
Thanks
Jody
On Thu, Apr 28, 2011 at 11:53 PM, Ralph Castain wrote:
> Hi Jody
>
> I'm not sure when I
Hi Ralph
I rebuilt open MPI 1.4.2 with the debug option on both chefli and squid_0.
The results are interesting!
I wrote a small HelloMPI app which basically calls usleep for a pause
of 5 seconds.
Now calling it as i did before, no MPI errors appear anymore, only the
display problems:
jody
l > 0 will open xterms, but with ' -mca
plm_base_verbose 0' there are again no xterms.
Thank You
Jody
On Mon, May 2, 2011 at 2:29 PM, Ralph Castain wrote:
>
> On May 2, 2011, at 2:34 AM, jody wrote:
>
>> Hi Ralph
>>
>> I rebuilt open MPI 1.4.2 with the de
Hi
Well, the difference is that one time i call the application
'HelloMPI' with the '--xterm' option,
whereas in my previous mail i am calling the application 'xterm'
(without the '--xterm' option)
Jody
On Mon, May 2, 2011 at 4:08 PM, Ralph Castain wro
30 PM, Ralph Castain wrote:
>
> On May 2, 2011, at 8:21 AM, jody wrote:
>
>> Hi
>> Well, the difference is that one time i call the application
>> 'HelloMPI' with the '--xterm' option,
>> whereas in my previous mail i am calling the application 'x
PI Datatype in order
to fill it up to the next multiple of 8 i could work around this problem.
(not very nice, and very probably not portable)
My question: is there a way to tell MPI to automatically use the
required padding?
Thank You
Jody
/deserialize
after receiving it.
Jody
On Wed, Jun 29, 2011 at 6:18 PM, Gus Correa wrote:
> jody wrote:
>>
>> Hi
>>
>> I have noticed on my machine that a struct which i have defined as
>>
>> typedef struct {
>> short iSpeciesID;
>> char
arned if i had read that chapter more carefully...
Fortunately, i don't have to send around a lot of these structs,
so i will do the padding (using the offsetof macro Dave recommended).
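For the archives, a sketch of that approach (the struct members here are made up, not the ones from the original code):

  #include <mpi.h>
  #include <stddef.h>                 /* offsetof */

  typedef struct {
      short iID;                      /* placeholder members */
      double dValue;
  } item_t;

  /* build a datatype whose extent equals sizeof(item_t), so that the
     compiler padding at the end of the struct is taken into account */
  static MPI_Datatype make_item_type(void) {
      int          blocklens[2] = { 1, 1 };
      MPI_Aint     displs[2]    = { offsetof(item_t, iID), offsetof(item_t, dValue) };
      MPI_Datatype types[2]     = { MPI_SHORT, MPI_DOUBLE };
      MPI_Datatype tmp, result;

      MPI_Type_create_struct(2, blocklens, displs, types, &tmp);
      MPI_Type_create_resized(tmp, 0, sizeof(item_t), &result);
      MPI_Type_commit(&result);
      MPI_Type_free(&tmp);
      return result;
  }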
Thanks again
Jody
On Wed, Jun 29, 2011 at 9:52 PM, Gus Correa wrote:
> Hi Jody
>
> jody wrote:
http://www.open-mpi.org/faq/?category=running#run-prereqs)
Hope this helps
Jody
On Thu, Jul 7, 2011 at 8:44 AM, zhuangchao wrote:
> hello all :
>
> I installed the openmpi-1.4.3 on redhat as the following step :
>
> 1. ./configure --prefix=/data1/cluster/openmpi
>
sessions on your nodes,
you can execute
mpirun --hostfile hostfile -np 4 printenv
and scan the output for PATH and LD_LIBRARY_PATH.
Hope this helps
Jody
On Sat, Jul 9, 2011 at 12:25 AM, Mohan, Ashwin wrote:
> Thanks Ralph.
>
>
>
> I have emailed the network admin on the
Hi
You also must make sure that all slaves can
connect via ssh to each other and to the master
node without being asked for a password.
Jody
On Wed, Dec 21, 2011 at 3:57 AM, Shaandar Nyamtulga wrote:
> Can you clarify your answer please.
> I have one master node and other slave nodes. I created rsa key on my
Hi
I've got a really strange problem:
I've got an application which creates intercommunicators between a
master and some workers.
When i run it on our cluster with 11 processes it works,
when i run it with 12 processes it hangs inside MPI_Intercomm_create().
This is the hostfile:
squid_0.uzh.
Hi
Did you run your program with mpirun?
For example:
mpirun -np 4 ./a.out
jody
On Fri, Mar 16, 2012 at 7:24 AM, harini.s .. wrote:
> Hi ,
>
> I am very new to openMPI and I just installed openMPI 4.1.5 on Linux
> platform. Now am trying to run the examples in the folder got
for reading by the processes?
Thank You
Jody
ce for the creation of the large data block),
but unfortunately my main application is not well suited for OpenMP
parallelization...
I guess i'll have to take a more detailed look at my problem to see if i
can restructure it in a good way...
Thank You
Jody
On Mon, Apr 16, 2012 at 11:16 PM, Bria
"count".
If you expect data of 160 bytes you have to allocate a buffer
with a size greater or equal to 160 and you have to set your
"count" parameter to the size you allocated.
If you want to receive data in chunks, you have to send it in chunks.
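If the message size is not known beforehand, another option (just a sketch, not code from this thread) is to probe the message first and allocate a buffer that fits:

  #include <mpi.h>
  #include <stdlib.h>

  /* receive a byte message of unknown length from 'src'; caller frees the buffer */
  static char *recv_any_size(int src, int tag, MPI_Comm comm, int *len) {
      MPI_Status status;
      char *buf;

      MPI_Probe(src, tag, comm, &status);        /* wait for a pending message */
      MPI_Get_count(&status, MPI_BYTE, len);     /* its exact size in bytes */
      buf = malloc(*len);
      MPI_Recv(buf, *len, MPI_BYTE, src, tag, comm, MPI_STATUS_IGNORE);
      return buf;
  }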
I hope this helps
Jody
O
plm_rsh_agent "ssh -Y"' i can't open windows
from the remote:
jody@boss /mnt/data1/neander $ mpirun -np 5 -hostfile allhosts
-mca plm_base_verbose 1 --leave-session-attached xterm -hold -e
./MPIStruct
xterm: Xt error: Can't open display:
xterm: DISPLAY is not set
xter
.
Deprecated parameter: plm_rsh_agent
--
for every process that starts...
My openmpi version is 1.6 (gentoo package sys-cluster/openmpi-1.6-r1)
jody
On Tue, Aug 28, 2012 at 2:38 PM, Ralph Castain wrote:
> Guess I'm confuse
Thanks Ralph
I renamed the parameter in my script,
and now there are no more ugly messages :)
Jody
On Tue, Aug 28, 2012 at 3:17 PM, Ralph Castain wrote:
> Ah, I see - yeah, the parameter technically is being renamed to
> "orte_rsh_agent" to avoid having users need to k
processes made
it to this point and which ones did not.
Hope this helps a bit
Jody
On Tue, Sep 25, 2012 at 8:20 AM, Richard wrote:
> I have 3 computers with the same Linux system. I have setup the mpi cluster
> based on ssh connection.
> I have tested a very simple mpi program, it works on th
It is better if you accept messages from all senders (MPI_ANY_SOURCE)
instead of particular ranks and then check where the
message came from by examining the status fields
(http://www.mpi-forum.org/docs/mpi22-report/node47.htm)
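A small sketch of that pattern (the worker count is hypothetical):

  #include <mpi.h>
  #include <stdio.h>

  /* collect one integer from every worker, in whatever order they arrive */
  static void collect(int nworkers) {
      int i, value;
      MPI_Status status;

      for (i = 0; i < nworkers; i++) {
          MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                   MPI_COMM_WORLD, &status);
          printf("got %d from rank %d (tag %d)\n",
                 value, status.MPI_SOURCE, status.MPI_TAG);
      }
  }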
Hope this helps
Jody
On Mon, Feb 18, 2013 at 5:06 PM, Pradeep Jha
Hi
I think you should use the "--host" or "--hostfile" options:
http://www.open-mpi.org/faq/?category=running#simple-spmd-run
http://www.open-mpi.org/faq/?category=running#mpirun-host
Hope this helps
Jody
On Wed, Feb 26, 2014 at 8:31 AM, raha khalili wrote:
> De
=running
Jody
On Wed, Feb 26, 2014 at 10:38 AM, raha khalili wrote:
> Dear Jody
>
> Thank you for your reply. Based on hostfile examples you show me, I
> understand 'slots' is number of cpus of each node I mentioned in the file,
> am I true?
>
> Wishes
>
>
> On W
indows
over yet another ssh connection?
Thanks
Jody
Hi Tim
Thank You for your reply.
Unfortunately my workstation has died,
and even when i try to run openmpi application
in a simple way, i get errors:
jody@aim-nano_02 /home/aim-cari/jody $ mpirun -np 2 --hostfile hostfile ./a.out
bash: orted: command not found
[aim-nano_02:22145] ERROR: A
alent way) i now get errors even when i try
to run an openmpi application in a simple way:
jody@aim-nano_02 /home/aim-cari/jody $ mpirun -np 2 --hostfile hostfile ./a.out
bash: orted: command not found
[aim-nano_02:22145] ERROR: A daemon on node 130.60.49.129 failed to
start as expected.
[aim-na
Tim,
thanks for your suggestions.
There seems to be something wrong with the PATH:
jody@aim-nano_02 ~/progs $ ssh 130.60.49.128 printenv | grep PATH
PATH=/usr/bin:/bin:/usr/sbin:/sbin
which i don't understand. Logging via ssh into 130.60.49.128 i get:
jody@aim-nano_02 ~/progs
.
However, my plan failed since i am unable to create datatypes with holes in
front and at the end.
What function should i use to create the desired datatypes?
Thank You
Jody
_free(&dtWHoles);
MPI_Finalize();
}
On 7/10/07, George Bosilca wrote:
MPI_LB and MPI_UB is what you're looking for. Or better, for MPI-2
compliant libraries such as Open MPI and MPICH2, you can use
MPI_Type_create_resized. This will allow you to create the gap at the
beginning and/or
Rob, thanks for your info.
Do you know whether OpenMPI will use a newer version
of ROMIO sometime soon?
Jody
On 7/13/07, Robert Latham wrote:
On Tue, Jul 10, 2007 at 04:36:01PM +, jody wrote:
> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks
> [aim-nano_02
Brian,
I am using OpenMPI 1.2.2, so i am lagging a bit behind.
Should i update to 1.2.3 and do the test again?
Thanks for the info
Jody
On 7/16/07, Brian Barrett wrote:
Jody -
I usually update the ROMIO package before each major release (1.0,
1.1, 1.2, etc.) and then only within a major
Hi Robert
Thanks for the infos.
In the meantime I found a workaround.
Instead of resized datatypes with holes I use simple vectors
with appropriately calculated offsets in MPI_FILE_WRITE_AT.
Probably not as elegant, but seems to work OK.
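Roughly it looks like this (the record count per rank is made up for the example):

  #include <mpi.h>

  #define NREC 1000                   /* doubles written per rank, example value */

  int main(int argc, char **argv) {
      int rank, i;
      double data[NREC];
      MPI_File fh;
      MPI_Offset offset;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      for (i = 0; i < NREC; i++) data[i] = rank;

      MPI_File_open(MPI_COMM_WORLD, "out.dat",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

      /* the offset is computed per rank instead of encoding the gap
         in a resized datatype */
      offset = (MPI_Offset)rank * NREC * sizeof(double);
      MPI_File_write_at(fh, offset, data, NREC, MPI_DOUBLE, MPI_STATUS_IGNORE);

      MPI_File_close(&fh);
      MPI_Finalize();
      return 0;
  }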
Jody
On 7/18/07, Robert Latham wrote:
On Tue, Jul 10
Hi
I installed openmpi 1.2.2 on a quad core intel machine running fedora 6
(hostname plankton)
I set PATH and LD_LIBRARY_PATH in the .zshrc file:
$ echo $PATH
/opt/openmpi/bin:/usr/kerberos/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/jody/bin
$ echo $LD_LIBRARY_PATH
/opt/openmpi/lib:
When i
't change the output.
Does this message give any hints as to the problem?
Jody
On 8/14/07, Tim Prins wrote:
>
> Hi Jody,
>
> jody wrote:
> > Hi
> > I installed openmpi 1.2.2 on a quad core intel machine running fedora 6
> > (hostname plankton)
> > I set PATH an
dcast statement is reached,
i get an error message:
[nano_00][0,1,0][btl_tcp_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect]
connect() failed with errno=113
Does this still agree with your firewall hypothesis?
Thanks
Jody
On 8/14/07, Tim Prins wrote:
> Jody,
>
> jody wrote:
> > H
Hi
I would like to contribute something as well.
I have about half a year of experience with OpenMPI,
and i used LAM MPI for somewhat more than half a year before.
Jody
Hi Dino
Try
ssh saturn printenv | grep PATH
from your host sun to see what your environment variables are when
ssh is run without a shell.
On 9/27/07, Dino Rossegger wrote:
> Hi,
>
> I have a problem running a simple programm mpihello.cpp.
>
> Here is a excerp of the error and the command
> ro
on all my nodes
Jody
On 9/27/07, Dino Rossegger wrote:
> Hi Jody,
>
> Thanks for your help, it really is the case that either in PATH nor in
> LD_LIBRARY_PATH the path to the libs is set correctly. I'll try out,
> hope it works.
>
> jody schrieb:
> > Hi Dino
>
> hosts ranks
> ===
> node0 1,2,4
> node1 3,4,6
I guess there must be a typo:
You can't assign one rank (4) to two nodes,
and ranks start from 0, not from 1.
Check this site,
http://www.open-mpi.org/faq/?category=running#mpirun-host
there might be some info regarding your problem.
Jody
ation
with the --prefix option:
$mpirun -np 2 --prefix /opt/openmpi -H sun,saturn ./main
(assuming your Open MPI installation lies in /opt/openmpi
on both machines)
Jody
On 10/1/07, Dino Rossegger wrote:
> Hi Jodi,
> did the steps as you said, but it didn't work for me.
> I set LD_L
Hi Miguel
I don't know if it's a typo - but actually it should be
mpiexec -np 2 ./mpi-app config.ini
and not
> mpiexec -n 2 ./mpi-app config.ini
Jody
Hi
I'm not sure if that is a problem,
but in MPI applications you should
use MPI_Wtime() for time measurements.
Jody
On 10/25/07, 42af...@niit.edu.pk <42af...@niit.edu.pk> wrote:
> Hi all,
>I am a research assistant (RA) at NUST Pakistan in High Performance
> Scientific
Hi
Check out the FAQs
http://www.open-mpi.org/faq/?category=running#mpirun-host
and
http://www.open-mpi.org/faq/?category=running#mpirun-scheduling
You'll find some examples for hostfiles as well.
Jody
them as doubles.
You either have to use several scatter commands or "fold" your
2D-Array into a one-dimensional array.
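A sketch of the "folding" approach (one row of made-up size per process):

  #include <mpi.h>
  #include <stdlib.h>

  #define COLS 4                      /* columns per row, example value */

  int main(int argc, char **argv) {
      int rank, size, i;
      double *matrix = NULL;          /* only the root holds the full matrix */
      double myrow[COLS];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (rank == 0) {
          /* 'size' rows stored contiguously, row after row */
          matrix = malloc(size * COLS * sizeof(double));
          for (i = 0; i < size * COLS; i++) matrix[i] = i;
      }

      /* every process gets one contiguous row of COLS doubles */
      MPI_Scatter(matrix, COLS, MPI_DOUBLE, myrow, COLS, MPI_DOUBLE,
                  0, MPI_COMM_WORLD);

      free(matrix);
      MPI_Finalize();
      return 0;
  }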
Hope this helps
Jody
On Jan 8, 2008 3:54 PM, Dino Rossegger wrote:
> Hi,
> I have a problem distributing a 2 dimensional array over 3 processes.
>
> I
from the remote machine nano_00.
When i run my program normally, it works ok:
[jody]:/mnt/data1/neander:$mpirun -np 4 --hostfile testhosts ./MPITest
[aim-plankton.unizh.ch]I am #0/4 global
[aim-plankton.unizh.ch]I am #1/4 global
[aim-nano_00]I am #2/4 global
[aim-nano_00]I am #3/4 global
But when
Y=plankton:0.0 printenv
This yields
DISPLAY=plankton:0.0
OMPI_MCA_orte_precondition_transports=4a0f9ccb4c13cd0e-6255330fbb0289f9
OMPI_MCA_rds=proxy
OMPI_MCA_ras=proxy
OMPI_MCA_rmaps=proxy
OMPI_MCA_pls=proxy
OMPI_MCA_rmgr=proxy
SHELL=/bin/bash
SSH_CLIENT=130.60.49.141 59524 22
USER=jody
LD_LIBRARY_PATH=
should
> stay open.
Unfortunately this didn't work either:
[jody]:/mnt/data1/neander:$mpirun -np 4 --debug-daemons --hostfile
testhosts -x DISPLAY=plankton:0.0 xterm -hold -e ../MPITest
Daemon [0,0,1] checking in as pid 19473 on host plankton.unizh.ch
Daemon [0,0,2] checking in as pid 26531 on host
ithout xterms:
$mpirun -np 5 -hostfile testhosts ./MPITest
Does anybody have an idea why that should happen?
Thanks
Jody
' | gawk -F ":"
'{ print $1 }' | xargs ls -al
When i do
mpirun -np 5 -hostfile testhosts -x DISPLAY xterm -hold -e ./envliblist
all xterms (local & remote) display the contents of the openmpi/lib directory.
Another strange result:
I have a shell script for launching the de
ld be
large enough
to contain messages from *all* processes, and not just from the "far side"
Jody
.
Sorry!
That reply was intended to another post!
Jody
On Thu, Mar 13, 2008 at 8:21 AM, jody wrote:
> HI
> I think the recvcount argument you pass to MPI_Allgather should not be
> 1 but instead
> the number of MPI_INTs your buffer rem_rank_tbl can contain.
> As it stand
Could you explain what you mean by "comm accept/connect" ?
jody
On Tue, Mar 25, 2008 at 4:06 PM, George Bosilca wrote:
> There is a chapter in the MPI standard about this. Usually, people
> will use comm accept/connect to do such kind of things. No need to
> have you
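The accept/connect pattern George mentions looks roughly like this on the server side (a sketch with the exchange of the port string left out):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      char port[MPI_MAX_PORT_NAME];
      MPI_Comm client;

      MPI_Init(&argc, &argv);

      MPI_Open_port(MPI_INFO_NULL, port);
      printf("server listening on port: %s\n", port);  /* hand this string to the client */

      /* blocks until a client calls MPI_Comm_connect() with the same port string */
      MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

      /* ... exchange messages over the 'client' intercommunicator ... */

      MPI_Comm_disconnect(&client);
      MPI_Close_port(port);
      MPI_Finalize();
      return 0;
  }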
-fanta4 slots
Is this a bug or a feature? ;)
Jody
_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect]
connect() failed with errno=113
If i only use aim-plankton alone or aim-fanta4 alone, everything runs
as expected.
BTW: i'm using Open MPI 1.2.2
Thanks
Jody
On Thu, Apr 10, 2008 at 12:40 PM, jody wrote:
> HI
> In my network i have som
iled with errno=113
Process 2 on (aim-plankton) displays the same message twice.
Any ideas?
Thanks Jody
On Thu, Apr 10, 2008 at 1:05 PM, jody wrote:
> Hi
> Using a more realistic application than a simple "Hello, world"
> even the --host version doesn't work correctly
]
connect() failed with errno=113
Does it give an idea what could be the problem?
Jody
On Thu, Apr 10, 2008 at 2:20 PM, Rolf Vandevaart
wrote:
>
> This worked for me although I am not sure how extensive our 32/64
> interoperability support is. I tested on Solaris using the TCP
> interc
Aurelien:
What is the cause of this performance penalty?
Jody
On Fri, Apr 11, 2008 at 1:44 AM, Aurélien Bouteiller
wrote:
> Open MPI can manage heterogeneous system. Though you prefer to avoid
> this because it has a performance penalty. I suggest you compile on
> the 32bit machin
use
the same format for hostfiles as MPICH.
See the FAQ for more info
http://www.open-mpi.org/faq/?category=running#mpirun-scheduling
If you don't use a hostfile, mpirun will start
all processes on the local machine.
jody
On Tue, Apr 22, 2008 at 8:56 AM, wrote:
> Dear all,
>
> I
required.
All you do is start your application with mpirun:
mpirun --hostfile my_hostfile -np 4 my_parallel_application
jody
On Tue, May 13, 2008 at 7:07 PM, Rob Malpass wrote:
> Hi
>
> Could someone help me out with some documentation? I'm searched the faq
> and can't
pid 14927 on host aim-plankton.uzh.ch
(and nothing happens anymore)
On the remote host, i see the following three processes coming up
after i do the mpirun on the local machine:
30603 ?S 0:00 sshd: jody@notty
30604 ?Ss 0:00 bash -c PATH=/opt/openmpi/bin:$PATH ;
e
chne is a freshly installed fedora 8 (Intel Quadro).
All use a freshly installed open-mpi 1.2.5.
Before my fedora machine crashed it had fedora 6,
and everything worked great (with 1.2.2 on all machines).
Does anybody have a suggestion where i should look?
Thanks
Jody
On Tue, Jun 10, 2008 at
debug-daemons).
[jody@aim-plankton ~] $ mpirun -np 1 --debug-daemons --host
aim-nano1.uzh.ch MPITest
However, this action causes the creation of an orted process on the
other machine:
[jody@aim-nano1 ~] $ ps ax | grep orted
7680 ?Ss 0:00 /opt/openmpi/bin/orted --bootproxy 1 --name
Hi
As the FAQ only contains explanations for a small subset of all MCA parameters,
I wondered whether there is a list explaining the meaning and use of them...
Is this perhaps something the documentation group is working on?
Thanks
Jody
Hi
>
> mpiexec.openmpi -n 3 hostname
>
Here you forgot to specify the hosts, so all processes run on the local machine;
see:
http://www.open-mpi.org/faq/?category=running#mpirun-host
Jody
Hi Ryan
The message "Lamnodes Failed!" seems to indicate that you still have a
LAM/MPI installation somewhere.
You should get rid of that first.
Jody
On Tue, Aug 12, 2008 at 9:00 AM, Rayne wrote:
> Hi, thanks for your reply.
>
> I did what you said, set up the password-les
Hi Ryan
Another thing:
Have you checked if the mpiexec you call is really the one from your
Open-MPI installation?
Try 'which mpiexec' to find out.
Jody
On Tue, Aug 12, 2008 at 9:36 AM, jody wrote:
> Hi Ryan
>
> The message "Lamnodes Failed!" seems to indicate th
ons.
This should also be the case on your other machines.
BTW, since it seems you haven't correctly set your PATH variable, i
suspect you have omitted
to set LD_LIBRARY_PATH as well...
see points 1,2 and 3 in
http://www.open-mpi.org/faq/?category=running
Jody
On Tue, Aug 12, 2008 at 11:10 A
before
it will look
in the directories it would have looked in anyway.
Jody
On Tue, Aug 12, 2008 at 11:59 AM, Rayne wrote:
> My .bash_profile and .bashrc on the server are exactly the same as that on my
> PC. However, I can run mpiexec without any problems just using my PC as a
> si
erent
executables are started
on different machines, but i guess the easiest way to get things going
would be to use
32 bit versions of your program on all your machines.
Jody
On Wed, Aug 13, 2008 at 4:52 AM, Rayne wrote:
> Thank you for all the replies.
>
> Here's what I have now