An application compiled for 64-bit *IS* different from a 32-bit one.
But if both your 64-bit server and your 32-bit PC have compatible
processor types,
you can compile on the PC and run the program on the server
(as I told you in my previous mail).
Jody
On Wed, Aug 13, 2008 at 10:15 AM, Rayne wrote
Hi Anugraha
Why don't you check the FAQ first:
http://www.open-mpi.org/faq/
It answers many questions and also provides instructions on how to install
Open-MPI and build MPI applications.
And, yes, Open-MPI works with gcc.
Jody
On Fri, Aug 15, 2008 at 12:25 PM, Anugraha Sankaranarayanan
wrote:
>
Is your Open MPI bin directory listed in your PATH environment variable?
See the FAQ:
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
On Sun, Sep 14, 2008 at 6:15 AM, Shafagh Jafer wrote:
> HI,
> i installed openmpi-1.2.7 and ran the two examples ring_c and hello_c and
> they
Hi
You must close the file using
MPI_File_close(MPI_File *fh)
before calling MPI_Finalize.
By the way, I think you shouldn't do
strcat(argv[1], ".bz2");
This would overwrite any following arguments.
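Something along these lines would be safer (a minimal sketch; the buffer size,
file name handling and open mode are just my assumptions):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_File fh;
    char fname[1024];

    MPI_Init(&argc, &argv);

    if (argc > 1) {
        /* build the ".bz2" name in a private buffer instead of appending
           to argv[1], which could run into the following arguments */
        snprintf(fname, sizeof(fname), "%s.bz2", argv[1]);

        MPI_File_open(MPI_COMM_WORLD, fname,
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* ... MPI_File_write() etc. ... */

        MPI_File_close(&fh);    /* close before MPI_Finalize */
    }

    MPI_Finalize();
    return 0;
}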
Jody
On Wed, Sep 17, 2008 at 5:13 AM, Davi Vercillo C. Garcia (デビッド)
wrote
your
application with.
Jody
On Thu, Sep 25, 2008 at 1:26 PM, Ali Copey wrote:
> Hello,
>
> We have created a piece of software that is designed to work under a variety
> of conditions, one of which is using MPI.
>
> This application will preferably us a single executable
It's difficult to tell what is going on without seeing the source code,
but the error message seems to indicate that you wrote
#include "ompi.h"
instead of
#include "mpi.h"
Jody
On Thu, Oct 2, 2008 at 9:07 AM, Anugraha Sankaranarayanan
wrote:
>>Thank yo
And, yes, 1.2.7 is a rather old version - the current one is 1.3.2.
It would be good if you could update to a newer version.
Jody
On Mon, Jun 29, 2009 at 7:00 AM, Ashika Umanga
Umagiliya wrote:
> Hi Vipin ,
> Thanks alot for the reply.
> I went through the FAQ and it also answe
Hi
Are you also sure that you have the same version of Open-MPI
on every machine of your cluster, and that it is the mpicxx of this
version that is called when you run your program?
I ask because you mentioned that there was an old version of Open-MPI
present... did you remove it?
Jody
On Mon
Hi Alexey
I don't know how this error message comes about,
but have you ever considered using a newer version of Open MPI?
1.2.4 is quite ancient, the current version is 1.3.3
http://www.open-mpi.org/software/ompi/v1.3/
Jody
On Wed, Jul 22, 2009 at 9:17 AM, Alexey Sokolov wrote:
iResult will have the value 0.
Jody
On Thu, Jul 23, 2009 at 1:36 PM, vipin kumar wrote:
>
>
> On Thu, Jul 23, 2009 at 3:03 PM, Ralph Castain wrote:
>>
>> It depends on which network fails. If you lose all TCP connectivity, Open
>> MPI should abort the job as the ou
re just lucky that a 0-character
followed our string. The problem may appear again anytime if you don't
increase your message length to strlen(chdata[i])+1.
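For illustration, a minimal sketch of sending a string including the terminating
0-character (ranks, tag and buffer size are arbitrary here):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        char msg[] = "hello";
        /* strlen(msg)+1 so that the '\0' terminator is transmitted too */
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        char buf[256];
        MPI_Recv(buf, 256, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received '%s'\n", buf);
    }

    MPI_Finalize();
    return 0;
}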
Jody
On Mon, Jul 27, 2009 at 9:57 AM, Alexey Sokolov wrote:
> Hi
>
> Thank you for advising, but my problem disappeared afte
Hi Jacob
Did you set the PATH and LD_LIBRARY_PATH according to
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
Jody
On Mon, Jul 27, 2009 at 5:35 PM, jacob Balthazor wrote:
>
> Hey,
> Please help me out as I cannot figure out from all the online
> document
Hi
I guess "task-farming" could give you a certain amount of the kind of
fault-tolerance you want.
(i.e. a master process distributes tasks to idle slave processors -
however, this will only work
if the slave processes don't need to communicate with each other)
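A rough sketch of that pattern (tags, task contents and the work function are
placeholders I made up):

#include <mpi.h>

#define TAG_TASK   1
#define TAG_RESULT 2
#define TAG_STOP   3

static int work_on_task(int task) { return task * task; }   /* placeholder */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                                    /* master */
        int num_tasks = 100, next = 0, active = 0, dummy = 0;
        /* hand one task to each worker to start with */
        for (int w = 1; w < size && next < num_tasks; w++, next++, active++)
            MPI_Send(&next, 1, MPI_INT, w, TAG_TASK, MPI_COMM_WORLD);
        /* workers that got nothing can stop right away */
        for (int w = active + 1; w < size; w++)
            MPI_Send(&dummy, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
        /* feed whoever reports back, until all tasks are done */
        while (active > 0) {
            int result;
            MPI_Status st;
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT,
                     MPI_COMM_WORLD, &st);
            active--;
            if (next < num_tasks) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_TASK,
                         MPI_COMM_WORLD);
                next++;
                active++;
            } else {
                MPI_Send(&dummy, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
            }
        }
    } else {                                            /* worker */
        for (;;) {
            int task, result;
            MPI_Status st;
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            result = work_on_task(task);
            MPI_Send(&result, 1, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}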
Jody
On Mon, Aug
Hi
When I use a rankfile, I get an error message which I don't understand:
[jody@plankton tests]$ mpirun -np 3 -rf rankfile -hostfile testhosts ./HelloMPI
--
Rankfile claimed host plankton that was not allocat
Hi Lenny
Thanks - using the full names makes it work!
Is there a reason why the rankfile option treats
host names differently than the hostfile option?
Thanks
Jody
On Mon, Aug 17, 2009 at 11:20 AM, Lenny
Verkhovsky wrote:
> Hi
> This message means
> that you are trying to use host
osts (i.e. plankton instead of plankton.uzh.ch) in
the host file...
However, I encountered a new problem:
if the rankfile lists all the entries which occur in the host file
there is an error message.
In the following example, the hostfile is
[jody@plankton neander]$ cat th_02
nano_00.uzh.ch slots=2 ma
-mpi.org/faq/?category=running#mpirun-scheduling
but I couldn't find any explanation. (Furthermore, the FAQ says
"max-slots"
in one place, but "max_slots" in another.)
Thank You
Jody
On Mon, Aug 17, 2009 at 3:29 PM, Lenny
Verkhovsky wrote:
> can you try n
Hi
I had a similar problem.
Following a suggestion from Lenny,
I removed the "max-slots" entries from
my hosts file and it worked.
It seems that there still are some minor bugs in the rankfile mechanism.
See the post
http://www.open-mpi.org/community/lists/users/2009/08/10384.php
Jody
Hi
I'm not sure if I completely understand your requirements,
but have you tried MPI_Wtime?
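A minimal sketch of timing a whole run with MPI_Wtime (the barriers are just
there so that all ranks measure a comparable interval):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    double t_start, t_end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);        /* line everybody up before starting */
    t_start = MPI_Wtime();

    /* ... the actual parallel work goes here ... */

    MPI_Barrier(MPI_COMM_WORLD);        /* wait for the slowest rank */
    t_end = MPI_Wtime();

    if (rank == 0)
        printf("elapsed time: %f seconds\n", t_end - t_start);

    MPI_Finalize();
    return 0;
}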
Jody
On Fri, Sep 11, 2009 at 7:54 AM, amjad ali wrote:
> Hi all,
> I want to get the elapsed time from start to end of my parallel program
> (OPENMPI based). It should give same time for th
Did you also change the "&buffer" to buffer in your MPI_Send call?
Jody
On Tue, Sep 22, 2009 at 1:38 PM, Everette Clemmer wrote:
> Hmm, tried changing MPI_Irecv( &buffer) to MPI_Irecv( buffer...)
> and still no luck. Stack trace follows if that's help
Hi
Have look at the Open MPI FAQ:
http://www.open-mpi.org/faq/
It gives you all the information you need to start working with your cluster.
Jody
On Wed, Sep 30, 2009 at 8:25 AM, ankur pachauri wrote:
> dear all,
>
> i am new to openmpi, all that i need is to set up the cluster of
that is where i put the application.
To start your application, follow the instructions in the FAQ:
http://www.open-mpi.org/faq/?category=running
If you want to use host files, read about how to use them in the FAQ:
http://www.open-mpi.org/faq/?category=running#mpirun-host
Hope that helps
Jody
Hi
Just curious:
Is there a particular reason why you want version 1.2?
The current version is 1.3.3!
Jody
On Tue, Oct 20, 2009 at 2:48 PM, Sangamesh B wrote:
> Hi,
>
> Its required here to install Open MPI 1.2 on a HPC cluster with - Cent
> OS 5.2 Linux, Mellanox IB card, swi
Sorry, I can't help you here.
I have no experience with either Intel compilers or IB.
Jody
On Wed, Oct 21, 2009 at 4:14 AM, Sangamesh B wrote:
>
>
> On Tue, Oct 20, 2009 at 6:48 PM, jody wrote:
>>
>> Hi
>> Just curious:
>> Is there a particular reason
int you to an MPI primer or tute.
>
Have a look at the Open MPI FAQ:
http://www.open-mpi.org/faq/?category=running
It shows you how to run an Open-MPI program on single or multiple machines.
Jody
environment variable in order to
display their xterms with gdb on my workstation.
Another negative point would be the need to change the argv parameters
every time one switches between debugging and normal running.
Has anybody got some hints on how to debug spawned processes?
Thank You
Jody
Thanks for your reply
That sounds good. I have Open-MPI version 1.3.2, and mpirun seems not
to recognize the --xterm option.
[jody@plankton tileopt]$ mpirun --xterm -np 1 ./boss 9 sample.tlf
--
mpirun was unable to launch the
s the -xterm option, then that option gets
applied to the dynamically spawned procs too"
Does this passing on also apply to the -x options?
Thanks
Jody
On Wed, Dec 16, 2009 at 3:42 PM, Ralph Castain wrote:
> It is in a later version - pretty sure it made 1.3.3. IIRC, I added it at
&
Hi Ralph
I finally got around to installing version 1.4.
The xterm works fine.
And in order to get gdb going on the spawned processes, I need to add
an argument "--args"
to the argument list of the spawner so that the parameters of the
spawned processes get passed through to gdb.
Thanks again
-f77
--disable-mpi-f90 --with-threads
and afterwards made a soft link
ln -s /opt/openmpi-1.4 /opt/openmpi
This is on Fedora FC8, but I have the same problem on my Gentoo
machines (2.6.29-gentoo-r5).
Does anybody know how to replace the old man files with the new ones?
Thank You
Jody
Thanks, that did it!
BTW, in the man page for mpirun you should perhaps mention the "!"
option in xterm - the one that keeps the xterms open after the
application exits.
Thanks
Jody
On Mon, Dec 21, 2009 at 3:25 PM, Ralph Castain wrote:
> Is your MANPATH set to point to /op
e PS3 and a is your PS3 host,
and app_dell is your application compiled on the Dell, and b is your Dell host.
Check the MPI FAQs
http://www.open-mpi.org/faq/?category=running#mpmd-run
http://www.open-mpi.org/faq/?category=running#mpirun-host
Hope this helps
Jody
On Thu, Jan 28, 2010 at 3:
; count = 0
> end if
>
> if (count .gt. 0) then
> allocate(temp(count))
> temp(1) = 2122010.0d0
> end if
In C/C++ something like this would almost certainly lead to a crash,
but I don't know if this would be the case in Fortran...
jody
On Wed, Feb 24, 2010 at
Hi Gabriele
You could always pipe your output through grep:
my_app | grep "MPI_ABORT was invoked"
jody
On Wed, Feb 24, 2010 at 11:28 AM, Gabriele Fatigati
wrote:
> Hi Nadia,
>
> thanks for quick reply.
>
> But i suppose that parameter is 0 by default. Suppose
Hi
I can't answer your question about the array q offhand,
but I will try to translate your program to C and see if
it fails the same way.
Jody
On Wed, Feb 24, 2010 at 7:40 PM, w k wrote:
> Hi Jordy,
>
> I don't think this part caused the problem. For fortran, it does
Required statement
stop
end program test_MPI_write_adv2
===
Regards
jody
On Thu, Feb 25, 2010 at 2:47 AM, Terry Frankcombe wrote:
> On Wed, 2010-02-24 at 13:40 -0500, w k wrote:
>> H
I'm not sure if this is the cause of your problems:
You define the constant BUFFER_SIZE, but in the code you use a constant
called BUFSIZ... (BUFSIZ is already defined in stdio.h, so the code still
compiles, but with a value you did not intend).
Jody
On Fri, Mar 26, 2010 at 10:29 PM, Jean Potsam wrote:
> Dear All,
> I am having a problem with openmpi . I have installed op
@Trent
> the 1024 RSA has already been cracked.
Yeah but unless you've got 3 guys spending 100 hours varying the
voltage of your processors
it is still safe... :)
On Tue, Apr 6, 2010 at 11:35 AM, Reuti wrote:
> Hi,
>
> Am 06.04.2010 um 09:48 schrieb Terry Frankcombe:
>
>>> 1. Run the following
inter to the start of the array
(however, I can't exactly explain
why it worked with the hard-coded string)
Jody
On Mon, Apr 19, 2010 at 6:31 PM, Andrew Wiles wrote:
> Hi all Open MPI users,
>
> I write a simple MPI program to send a text message to another process. The
> code
I once got different results when running on a 64-bit platform instead of
a 32-bit platform - if I remember correctly, the reason was that on the
32-bit platform 80-bit extended precision floats were used, but on the 64-bit
platform only 64-bit floats.
On Sun, Apr 25, 2010 at 3:39 AM, Fabian Hänsel
on
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/fpu_wp.pdf
but I think AMD Opteron does not.
But I am no expert in this area - I only found out about this when I
mentioned to someone
the differences in the results obtained from a 32-bit platform and a
64-bit platform. Sorry.
Jo
Just to be sure:
Is there a copy of the shared library on the other host (hpcnode1)?
jody
On Mon, May 10, 2010 at 5:20 PM, Prentice Bisbal wrote:
> Are you runing thee jobs through a queuing system like PBS, Torque, or SGE?
>
> Prentice
>
> Miguel Ángel Vázquez wrote:
&g
minal window for the process you are
interested in.
Jody
On Thu, May 20, 2010 at 1:28 AM, Sang Chul Choi wrote:
> Hi,
>
> I am wondering if there is a way to run a particular process among multiple
> processes on the console of a linux cluster.
>
> I want to see the screen ou
Hi
I am really no Python expert, but it looks to me as if you were
gathering arrays filled with zeroes:
a = array('i', [0]) * n
Shouldn't this line be
a = array('i', [r])*n
where r is the rank of the process?
Jody
On Thu, May 20, 2010 at 12:00 AM, Battalgazi YILDI
ef, TaskType, 1, idMaster, MPI_ANY_TAG, &st);
if (st.MPI_TAG == TAG_STOP) {
    go_on = false;
} else {
    result = workOnTask(TaskDef, TaskLen);
    /* MPI_Send takes (buffer, count, datatype, dest, tag, comm) */
    MPI_Send(a, 1, MPI_INT, idMaster, TAG_RESULT, MPI_COMM_WORLD);
    MPI_Send(result, 1, resultType, idMaster, TAG_RESULT_CONTENT, MPI_COMM_WORLD);
}
}
I hope t
)
and react accordingly
Jody
On Tue, Jul 6, 2010 at 7:41 AM, David Zhang wrote:
> if the master receives multiple results from the same worker, how does the
> master know which result (and the associated tag) arrive first? what MPI
> commands are you using exactly?
>
> On Mon, Jul
buffer you passed to MPI_Recv.
As Zhang suggested: try to reduce your code to isolate the offending code.
Can you create a simple application with two processes exchanging data which has
the MPI_ERR_TRUNCATE problem?
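For instance, a deliberately broken two-process sketch like this one (sizes
invented here) should show MPI_ERR_TRUNCATE, because the receive count is
smaller than the incoming message:

#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    int data[100] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(data, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* 100 ints */
    } else if (rank == 1) {
        /* room for only 10 ints -> MPI_ERR_TRUNCATE */
        MPI_Recv(data, 10, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}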
Jody
On Thu, Jul 8, 2010 at 5:39 AM, Jack Bryan wrote:
> thanks
>
_buf, send_message_size, MPI_INT, RECEIVER,
TAG_DATA, MPI_COMM_WORLD);
/* clean up */
free(send_buf);
}
MPI_Finalize();
}
I hope this helps
Jody
On Sat, Jul 10, 2010 at 7:12 AM, Jack Bryan wrote:
> Dear All:
> How to find the buffer size of OpenMPI ?
> I need to t
Hi Brian
When you spawn processes with MPI_Comm_spawn(), one of the arguments
will be set to an intercommunicator between the spawner and the spawnees.
You can use this intercommunicator as the communicator argument
in the MPI functions.
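A minimal sketch of how the spawner gets and uses that intercommunicator
(the worker executable name and the number of spawned processes are
placeholders):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Comm workers;     /* intercommunicator to the spawned processes */
    int data = 42;

    MPI_Init(&argc, &argv);

    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                   MPI_COMM_WORLD, &workers, MPI_ERRCODES_IGNORE);

    /* talk to spawned rank 0 through the intercommunicator */
    MPI_Send(&data, 1, MPI_INT, 0, 0, workers);

    MPI_Finalize();
    return 0;
}

(In ./worker, MPI_Comm_get_parent() returns the matching intercommunicator.)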
Jody
On Fri, Jul 9, 2010 at 5:56 PM, Brian Budge wrote:
>
e to extend the -output-filename option in
such a way that it
would also combine job-id and rank with the output file:
work_out.1.0
for the master's output, and
work_out.2.0
work_out.2.1
work_out.2.2
...
for the workers' output?
Thank You
Jody
Yes, I'm using 1.4.2.
Thanks
Jody
On Mon, Jul 12, 2010 at 10:38 AM, Ralph Castain wrote:
>
> On Jul 12, 2010, at 2:17 AM, jody wrote:
>
>> Hi
>>
>> I have a master process which spawns a number of workers of which i'd
>> like to save the output
ugh...
Perhaps there is a Boost forum you can check out if the problem persists.
Jody
On Sun, Jul 11, 2010 at 10:13 AM, Jack Bryan wrote:
> thanks for your reply.
> The message size is 72 bytes.
> The master sends out the message package to each 51 nodes.
> Then, after doing their local w
will call mpirun or mpiexec. But somewhere you have to tell Open MPI
what to run on how many processors, etc.
I'd suggest you take a look at "MPI - The Complete Reference", Vol. I and II.
Jody
On Mon, Jul 12, 2010 at 5:07 PM, Brian Budge wrote:
> Hi Jody -
>
> Thanks for the reply.
Thanks for the patch - it works fine!
Jody
On Mon, Jul 12, 2010 at 11:38 PM, Ralph Castain wrote:
> Just so you don't have to wait for 1.4.3 to be released, here is the patch.
> Ralph
>
>
>
>
> On Jul 12, 2010, at 2:44 AM, jody wrote:
>
>> yes, i'm using
that the output does
indeed say 1.10.2)
Password-less ssh is enabled on both machines in both directions.
When I start mpirun from one machine (kraken) with a hostfile specifying
the other machine ("triops slots=8 max-slots=8"),
it works:
-
jody@kraken ~ $ mpirun -np 3 --hostfile triopshosts u
ly is caused by:
...
--
-
Again, I can call mpirun on triops from kraken and all squid_XX without a
problem...
What could cause this problem?
Thank You
Jody
On Tue, May 3, 2016 at 2:54 PM, Jeff Squyres (jsquyres)
wrote:
&
line from triops' rules, restarted iptables and now
communication works in all directions!
Thank You
Jody
On Tue, May 3, 2016 at 7:00 PM, Jeff Squyres (jsquyres)
wrote:
> Have you disabled firewalls between these machines?
>
> > On May 3, 2016, at 11:26 AM, jody wrote:
Perhaps this post in the Open-MPI archives can help:
http://www.open-mpi.org/community/lists/users/2008/01/4898.php
Jody
On Sun, Oct 26, 2008 at 4:30 AM, Davi Vercillo C. Garcia (ダヴィ)
wrote:
> Anybody !?
>
> On Thu, Oct 23, 2008 at 12:41 AM, Davi Vercillo C. Garcia (ダヴィ)
>
f the host name, but I think
on most systems it must be less than 255.
Jody
On Mon, Nov 17, 2008 at 10:31 AM, Sun, Yongqi (E F ES EN 72)
wrote:
> Hello,
>
> I still have no clue how to use the local machine by default.
>
> My /etc/hosts file and the result of ifconfig are atta
Hi Sun
I forgot to add that once you've called gethostname(), you can
determine the length of the name by using strlen() on your array
'name'.
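A small sketch (the buffer size is chosen arbitrarily):

#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    char name[256];

    /* the second parameter tells gethostname how big our buffer is */
    gethostname(name, sizeof(name));
    name[sizeof(name) - 1] = '\0';      /* make sure it is terminated */

    printf("host '%s', name length %zu\n", name, strlen(name));
    return 0;
}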
Jody
On Mon, Nov 17, 2008 at 10:45 AM, jody wrote:
> Hi Sun
>
> AFAIK, the second parameter (len) in gethostname is an input
You forgot the "mpirun":
mpirun -mca btl_openib_warn_default_gid_prefix 0
jody
On Mon, Dec 8, 2008 at 4:00 PM, Yasmine Yacoub wrote:
> Thank you for your response, but still my problem remains, I have used this
> command:
>
> -mca btl_openib_warn_default_gid_prefi
Hi Jim
If all of your workers can mount a directory on your head node,
all can access the data there.
Jody
On Sat, Jan 3, 2009 at 4:13 PM, Jim Kress wrote:
> I need to use openMPI in a mode where the input and output data reside
> on one node of my cluster while all the other nodes ar
What does FTFBS stand for?
I googled for it, and checked the acronymfinder, but found no explanation...
Jody
On Tue, Jan 6, 2009 at 10:33 PM, Adam C Powell IV wrote:
> On Tue, 2009-01-06 at 12:25 -0600, Dirk Eddelbuettel wrote:
>> I noticed that openmpi is now owner of a FTFBS ag
Hi Gupta
One way to do it is to run your application in a directory to which
all nodes have access via NFS.
And if "./" is not in your $PATH you may want to write ./a.out instead
of just a.out.
Jody
On Thu, Jan 8, 2009 at 8:02 AM, gaurav gupta <1989.gau...@googlemail.com> wr
Without any details it's difficult to make a diagnosis,
but it looks like one of your processes crashes, perhaps from a
segmentation fault.
Have you run it with a debugger?
Jody
On Thu, Jan 15, 2009 at 9:39 AM, Hana Milani wrote:
> please tell me how to get rid of the message and ho
Under 1.2.8 I could check
OMPI_MCA_ns_nds_vpid
to find out the process rank.
Under 1.3 that variable does not seem to exist anymore.
Is there an equivalent to that variable in 1.3?
Have any other environment variables changed?
Thank You
Jody
mething special about $HOSTNAME and how or when it is set?
Jody
On Tue, Jan 20, 2009 at 3:26 PM, Ralph Castain wrote:
> That was never an envar for public use, but rather one that was used
> internal to OMPI and therefore subject to change (which it did). There are a
> number of such vari
s?
I could imagine that could be done by wrapper applications which
redirect the output over a TCP
socket to a server application.
But perhaps there is an easier way, or something like this already exists?
Thank You
Jody
did I misunderstand its usage?
I quickly glanced at the code - I guess
orte_iof.pull(&target_proc, stream, 1)
is the heart of the matter. But I was unable to find where this
orte_iof struct was actually defined. Could you give me a hint?
Thanks
Jody
On Thu, Jan 22, 2009 at 2:33 PM, Ralp
ve you any help whatsoever.
Jody
On Fri, Jan 23, 2009 at 9:33 AM, Bernard Secher - SFME/LGLS
wrote:
> Hello Jeff,
>
> I don't understand what you mean by "A _detailed_ description of what is
> failing".
> The problem is a dead lock in MPI_Finalize() function. All
MPI_Recvs.
Jody
On Fri, Jan 23, 2009 at 11:08 AM, Bernard Secher - SFME/LGLS
wrote:
> Thanks Jody for your answer.
>
> I launch 2 instances of my program on 2 processes each instance, on the same
> machine.
> I use MPI_Publish_name, MPI_Lookup_name to create a global communic
it may be easier
to pinpoint the problem.
Good luck
Jody
On Fri, Jan 23, 2009 at 12:00 PM, Bernard Secher - SFME/LGLS
wrote:
> No i didn't run this program whith Open-MPI 1.2.X because one said to me
> there were many changes between 1.2.X version and 1.3 version abo
king on my "complicated" way, i.e.
wrappers redirecting output via sockets to a server.
Jody
On Sun, Jan 25, 2009 at 1:20 PM, Ralph Castain wrote:
> For those of you following this thread:
>
> I have been impressed by the various methods used to grab the output from
> processes. Si
he help to ensure things
> are working across as many platforms as possible before we put it in the
> official release!
I'll be happy to test these new features!
Jody
>> Hi
>> I have written some shell scripts which ease the output
>> to an xterm for each processor
Typo there: "xceren" stands for "screen" - sorry :)
On Mon, Jan 26, 2009 at 9:20 PM, jody wrote:
> Hi Brian
>
>>
>> I would rather not have mpirun doing an xhost command - I think that is
>> beyond our comfort zone. Frankly, if someone wants to
That's cool then - I have written a shell script
which automatically does the xhost stuff for all
nodes in my hostfile :)
On Mon, Jan 26, 2009 at 9:25 PM, Ralph Castain wrote:
>
> On Jan 26, 2009, at 1:20 PM, jody wrote:
>
>> Hi Brian
>>
>>>
>>> I
terms
- that did work for the remotes, too)
If a '-1' is given instead of a list of ranks, it fails (locally &
with remotes):
[jody@localhost neander]$ mpirun -np 4 --xterm -1 ./MPITest
--
Hi Ralph
one more thing I noticed while trying out orte_iof again.
The option --report-pid crashes mpirun:
[jody@localhost neander]$ mpirun -report-pid -np 2 ./MPITest
[localhost:31146] *** Process received signal ***
[localhost:31146] Signal: Segmentation fault (11)
[localhost:31146] Signal code
Hi Ralph
Thanks for the fixes and the "!".
--xterm:
The "!" works, but i still don't have any xterms from my remote nodes
even with all my xhost+ and -x DISPLAY tricks explained below :(
--output-filename
It creates files, but only for the local processes:
[jody@localhos
Hi Ralph
>>
>> --output-filename
>> It creates files, but only for the local processes:
>> [jody@localhost neander]$ mpirun -np 8 -hostfile testhosts
>> --output-filename gnana ./MPITest
>> ... output ...
>> [jody@localhost neander]$ ls -l gna*
>&g
, send_to, tag, MPI_COMM_WORLD);
MPI_Irecv(buffer_recv, bufferLen, MPI_INT, recv_to, tag,
MPI_COMM_WORLD, &request);
?
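For concreteness, a self-contained version of that pattern might look like
this (buffer size, tag and the ring-neighbour choice are invented here):

#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, flag = 0;
    int buffer_send[16] = {0}, buffer_recv[16];
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send_to = (rank + 1) % size;             /* right neighbour */
    int recv_from = (rank + size - 1) % size;    /* left neighbour */

    /* post the receive first, then send */
    MPI_Irecv(buffer_recv, 16, MPI_INT, recv_from, 0, MPI_COMM_WORLD, &request);
    MPI_Send(buffer_send, 16, MPI_INT, send_to, 0, MPI_COMM_WORLD);

    /* poll with MPI_Test until the receive has completed */
    while (!flag)
        MPI_Test(&request, &flag, &status);

    MPI_Finalize();
    return 0;
}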
Jody
On Thu, Feb 5, 2009 at 11:37 AM, Gabriele Fatigati wrote:
> Dear OpenMPI developer,
> i have found a very strange behaviour of MPI_Test. I'm using OpenM
I have to admit that this wasn't a theoretically well-founded suggestion.
Perhaps it really doesn't (or shouldn't) matter...
I'll try both versions with MPI 1.3 and tell you the results
Jody
On Thu, Feb 5, 2009 at 11:48 AM, Gabriele Fatigati wrote:
> Hi Jody,
> th
Hi Gabriele
In OpenMPI 1.3 it doesn't matter:
[jody@aim-plankton ~]$ mpirun -np 4 mpi_test5
aim-plankton.uzh.ch: rank 0 : MPI_Test # 0 ok. [3...3]
aim-plankton.uzh.ch: rank 1 : MPI_Test # 0 ok. [0...0]
aim-plankton.uzh.ch: rank 2 : MPI_Test # 0 ok. [1...1]
aim-plankton.uzh.ch: ra
a previous mail:
- call 'xhost +' for all nodes in my hostfile
- export DISPLAY=:0.0
- call
mpirun -np 8 -x DISPLAY --hostfile testhosts --xterm
--ranks=2,3,4,5! ./MPITest
Combining --xterm with --output-filename also worked.
Thanks again!
Jody
On Tue, Feb 3, 2009 at 11:03
Hi
In my application I use MPI_PROC_NULL
as an argument in MPI_Sendrecv to simplify the
program (i.e. no special cases for borders).
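For context, the pattern looks roughly like this (a 1-D halo exchange; the
variable names are made up for this sketch):

#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    double halo_send = 1.0, halo_recv = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* ranks at the ends talk to MPI_PROC_NULL, so there are no
       special cases for the borders */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* send to the right neighbour, receive from the left one */
    MPI_Sendrecv(&halo_send, 1, MPI_DOUBLE, right, 0,
                 &halo_recv, 1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}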
With 1.3 it works, but under 1.3.1a0r20520
I get the following error:
[jody@localhost 3D]$ mpirun -np 2 ./sr
[localhost.localdomain:29253] *** An error occurred in
Yes, it was doing no sensible work -
It was only intended to show the error message.
I now downloaded the latest nightly tarball and installed it,
and used your version of the test program. It works -
*if* I use the entire path to mpirun:
[jody@localhost 3D]$ /opt/openmpi-1.3.1a0r20534/bin
Forgot to add.
I have /opt/openmpi/bin in my $PATH.
I tried some more and found that it
also works without errors if I use
/opt/openmpi/bin/mpirun -np 2 ./sr
I don't understand this, because 'mpirun' alone should be the same thing:
[jody@localhost 3D]$ which mpirun
/opt/ope
Well, everything I check seems to confirm that only one version is present:
[jody@localhost 3D]$ ls -ld /opt/openmp*
lrwxrwxrwx 1 root root 26 2009-02-13 14:09 /opt/openmpi ->
/opt/openmpi-1.3.1a0r20534
drwxr-xr-x 7 root root 4096 2009-02-12 22:19 /opt/openmpi-1.3.1a0r20432
drwxr-xr-x 7 root root 4096 2
parently I hadn't properly removed an old openmpi version which
had been put there by Fedora...
Thanks!
Jody
On Fri, Feb 13, 2009 at 2:39 PM, Jeff Squyres wrote:
> What is your PATH / LD_LIBRARY_PATH when you rsh/ssh to other nodes?
>
> ssh othernode which mpirun
> ssh othernode e
I got this ssh message when my workstation wasn't allowed access because of the
settings in the files /etc/hosts.allow and /etc/hosts.deny on the ssh server.
Jody
On Mon, Feb 16, 2009 at 10:36 PM, Gabriele Fatigati
wrote:
> Dear OpenMPI developers,
> i'm trying to use Ope
Have you verified that $PATH and $LD_LIBRARY_PATH contain the correct values
when you make an ssh connection without a login?
Try:
2009/2/19 Abderezak MEKFOULDJI :
> Hello,
> my cluster is composed (for now) of 2 amd64 machines running the Debian 2.6
> system, "etch" version; the compiler
3/lib
Jody
2009/2/19 jody :
> Have you verified that $PATH and $LD_LIBRARY_PATH contain the correct values
> when you make an ssh connection without a login?
> Try:
>
>
> 2009/2/19 Abderezak MEKFOULDJI :
>> Hello,
>> my cluster is composed (for now) of
k You
Jody
bufcount is 0.
Jody
On Mon, Feb 23, 2009 at 9:55 PM, Eugene Loh wrote:
> I think the question is about passing NULL as a buffer pointer. E.g.,
>
> MPI_Send(NULL, 0, mytype,dst, tag,comm);
>
> vs
>
> MPI_Send(&dummy,0,mytype,dst,tag,comm);
>
> George Bosilca wrote:
&g
Perhaps you could use the Open-MPI environment variables
OMPI_COMM_WORLD_RANK
OMPI_COMM_WORLD_LOCAL_RANK
to construct your own environment variables?
(for versions >= 1.3)
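A small sketch of reading them from a C program (the fallback strings are my
own addition):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* set by Open MPI (>= 1.3) in the environment of every launched process */
    const char *rank       = getenv("OMPI_COMM_WORLD_RANK");
    const char *local_rank = getenv("OMPI_COMM_WORLD_LOCAL_RANK");

    printf("rank=%s local_rank=%s\n",
           rank       ? rank       : "(not set)",
           local_rank ? local_rank : "(not set)");
    return 0;
}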
Jody
On Fri, Feb 27, 2009 at 8:36 PM, Nicolas Deladerriere
wrote:
> Matt,
>
> Thanks for your solution, b
Hi
I don't understand why it is a problem to copy a single script to your nodes -
wouldn't the following shell script work?
#!/bin/sh
for num in `seq 128`
do
scp new_script username@host_$num:path/to/workdir/
done
jody
On Mon, Mar 2, 2009 at 10:02 AM, Nicolas Deladerri
d I think you should add a "-n 4" (for 4 processes).
Furthermore, if you want to specify a host, you have to add "-host hostname1";
if you want to specify several hosts, you have to add "-host
hostname1,hostname2,hostname3" (no spaces around the commas).
Jody
On Tue, Apr 7, 20
Hi
I don't understand the error messages, but it seems to me that your
Open-MPI version (1.2.5) is rather old.
This might also explain the discrepancies you found in the documentation.
If you can do so, I would suggest you update your Open-MPI.
Jody
On Fri, Apr 17, 2009 at 11:38 PM,
r all nodes.
Does anybody have experience in profiling parallel applications?
Is there a way to have profile data for each node separately?
If not, is there another profiling tool which can?
Thank You
Jody
ng subject would be worthy of a FAQ entry...
Thanks
Jody
On Thu, Apr 23, 2009 at 9:12 AM, Daniel Spångberg wrote:
> I have used vprof, which is free, and also works well with openmpi:
> http://sourceforge.net/projects/vprof/
>
> One might need slight code modifications to ge