On Tue, 2009-04-21 at 19:19 -0700, Ross Boylan wrote:
I'm using Rmpi (a pretty thin wrapper around MPI for R) on Debian Lenny
(amd64). My setup has a central calculator and a bunch of slaves to
which work is distributed.
The slaves wait like this:
mpi.send(as.double(0), doubleType, root, requestCode, comm=comm)
request <- request+1
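For readers more used to the C bindings, here is a rough analogue of that
slave-side request (my own sketch, not code from the post; the tag name
REQUEST_TAG and the follow-up MPI_Recv are assumptions, since the snippet is
cut off before it shows how the slave actually waits for work):

#include <mpi.h>

int main(int argc, char **argv)
{
    const int root = 0, REQUEST_TAG = 1;  /* stand-ins for 'root' and 'requestCode' */
    double dummy = 0.0, work;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    /* announce readiness to the root, mirroring mpi.send(as.double(0), ...) */
    MPI_Send(&dummy, 1, MPI_DOUBLE, root, REQUEST_TAG, MPI_COMM_WORLD);
    /* assumption: the slave then blocks until the root sends back work */
    MPI_Recv(&work, 1, MPI_DOUBLE, root, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    MPI_Finalize();
    return 0;
}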
On Apr 21, 2009, at 12:14 PM, Katz, Jacob wrote:
So, sm will never be chosen in this case in the current
implementation, correct?
Correct. This is mainly a limitation of our current implementation.
There have been some ideas kicked around on how to fix it, and I think
there's even been ...
No, we do not expose that kind of information to the upper layer. If
you really want, I can tell you how to do it in a dirty way, but only
if you really need to know...
george.
On Apr 21, 2009, at 12:14 PM, Katz, Jacob wrote:
So, sm will never be chosen in this case in the current
implementation, correct?
Hi Ankush
Ankush Kaul wrote:
@Eugene
They are OK, but we wanted something better, which would more clearly
show the difference between using a single PC and the cluster.
@Prakash
I had problems running the programs, as they were compiling with mpcc and
not mpicc.
@Gus
We are trying to figure out the HPL config ...
It depends on how you configured Open MPI (i.e., ran the "configure"
script). If you don't specify, Open MPI will install itself into
/usr/local/bin. Or you can specify where to install it via the --prefix
parameter to configure. For example:
./configure --prefix=/opt/openmpi-1.3.1
Hey, thanks a lot.
Well, I built the Open MPI package on the desktop of RHEL 4.7 and then I
followed the instructions to set the path, which I believed was written as
/etc/openmpi/bin and /etc/openmpi/lib, but there is no such path on my
Linux installation. I'm wondering if there's a tutorial that ...
So, sm will never be chosen in this case in the current implementation, correct?
Is there an API or another method to find out what BTL is currently used
(either inside the application code or externally)?
Thanks.
Jacob M. Katz | jacob.k...@intel.com | Work: +972-
On Apr 20, 2009, at 9:29 PM, ESTEBAN MENESES ROJAS wrote:
Hello.
Is there any way to automatically checkpoint/restart an
application in Open MPI? That is, checkpointing the application
without using the command ompi-checkpoint, perhaps via a function
call in the application's code itself?
Ankush Kaul wrote:
@Eugene
They are OK, but we wanted something better, which would more clearly
show the difference between using a single PC and the cluster.
Another option is the NAS Parallel Benchmarks. They are older, but
well known, self-verifying, report performance, and relatively small
and accessible.
I'm working on it - the code was not written for multiple app_contexts, and
I have to fix a few compensating errors as well.
Hope to have it in the next couple of days.
On Tue, Apr 21, 2009 at 8:24 AM, Geoffroy Pignot wrote:
> Hi Lenny,
>
> Here is the basic mpirun command I would like to run
@Eugene
They are OK, but we wanted something better, which would more clearly show the
difference between using a single PC and the cluster.
@Prakash
I had problems running the programs, as they were compiling with mpcc and not
mpicc.
@Gus
We are trying to figure out the HPL config; it's quite complicated, also the
...
Dear all,
I tried to increase the speed of a program with openmpi-1.1.3 by adding the
following four parameters to the openmpi-mca-params.conf file:
mpi_leave_pinned=1
btl_openib_eager_rdma_num=128
btl_openib_max_eager_rdma=128
btl_openib_eager_limit=1024
and then I ran my program twice (124 processes on 31 nodes) ...
With few exceptions, Open MPI will choose the best BTL. There are two
exceptions I know about:
1. sm - we didn't figure out a clean way to do it, nor did we spend too
much time trying to.
2. elan - the initialization of the device is a global operation, and
we cannot guarantee that all nodes are ...
Hi,
I did as described in the FAQ for password-less SSH, but mpirun is still
asking me for the password:
-bash-3.2$ mpirun -d -v -hostfile chosts -np 16 ./hello
[cluster-srv0.logti.etsmtl.ca:31929] procdir: /tmp/openmpi-sessions-AH72000@cluster-srv0.logti.etsmtl.ca_0/41688/0/0
...
Hi,
Can someone please tell me what this problem could be?
daemon INVALID arch ffc91200
The debug output:
[[41704,1],14] node[4].name cluster-srv4 daemon INVALID arch ffc91200
[cluster-srv3:09684] [[41704,1],13] node[0].name cluster-srv0 daemon 0 arch ffc91200
[cluster-srv3:09684] [[41704...
jody wrote:
Hi
I wanted to profile my application using gprof, and proceeded as I would
when profiling a normal application:
- compile everything with option -pg
- run the application
- call gprof
This returns normal-looking output, but I don't know
whether this is the data for node 0 only or accumulated for all nodes.
Hi Lenny,
Here is the basic mpirun command I would like to run:
mpirun -rf rankfile -n 1 -host r001n001 master.x options1 : -n 1 -host
r001n002 master.x options2 : -n 1 -host r001n001 slave.x options3 : -n 1
-host r001n002 slave.x options4
with cat rankfile
rank 0=r001n001 slot=0:*
rank 1=r001
Hi
I wanted to profile my application using gprof, and proceeded as I would
when profiling a normal application:
- compile everything with option -pg
- run the application
- call gprof
This returns normal-looking output, but I don't know
whether this is the data for node 0 only or accumulated for all nodes.
These kinds of messages are symptomatic that you compiled your
applications with one version of Open MPI and ran with another. You
might want to ensure that your examples are compiled against the same
version of Open MPI that you're running with.
On Apr 17, 2009, at 5:38 PM, Grady Laksmono wrote:
It's something in the basis, right,
I tried to investigate it yesterday and saw that for some reason
jdata->bookmark->index is 2 instead of 1 (in this example).
[dellix7:28454] [ ../../../../../orte/mca/rmaps/rank_file/rmaps_rank_file.c
+417 ] node->index = 1, jdata->bookmark->index=2
[dellix7:
Hi,
In a dynamically connected client/server-style application, where the server
uses MPI_OPEN_PORT/MPI_COMM_ACCEPT and the client uses MPI_COMM_CONNECT, what
will be the communication method (BTL) chosen by OMPI? Will the communication
through the resultant inter-communicator use TCP, or will OMPI pick a faster
BTL such as sm when the two processes are on the same machine?
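To make the scenario concrete, here is a minimal sketch of the
MPI_Open_port/MPI_Comm_accept/MPI_Comm_connect pattern being asked about (my
own illustration, not code from the post; passing the port string to the
client via argv[1] is just a convenience for the example):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;

    MPI_Init(&argc, &argv);
    if (argc > 1) {
        /* client: connect to the port string given on the command line */
        strncpy(port, argv[1], MPI_MAX_PORT_NAME);
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
    } else {
        /* server: open a port and wait for one client */
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("server port: %s\n", port);
        fflush(stdout);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Close_port(port);
    }
    /* traffic over 'inter' is exactly the communication whose BTL is in
       question here */
    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}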
On Apr 20, 2009, at 11:08 AM, Ankush Kaul wrote:
I try to run mpicc-vt -c hello.c -o hello
but it gives an error:
bash: mpicc-vt: command not found
It sounds like your Open MPI installation was not built with
VampirTrace support. Note that OMPI only included VT in Open MPI v1.3
and later.
Santolo,
The MPI standard defines reduction operations where the operand/operation
pair has a meaningful semantic. I cannot picture a well-defined semantic for
999.0 BXOR 0.009. Maybe you can, but it is not an error that the MPI standard
leaves out BXOR on floating-point types.
I'm not quite sure what you're asking. MPI_BXOR is valid on a variety
of Fortran and C integer types; see MPI-2.1 p162 for the full table.
http://www.mpi-forum.org/docs/mpi21-report.pdf
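For what it's worth, a minimal sketch (mine, not from the thread) of a legal
use of MPI_BXOR on an integer type:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, mask, result;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    mask = 1 << rank;   /* each rank contributes one bit */
    /* BXOR is defined for integer types such as MPI_INT; it is not defined
       for MPI_DOUBLE, which is why 999.0 BXOR 0.009 has no meaning */
    MPI_Reduce(&mask, &result, 1, MPI_INT, MPI_BXOR, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("bitwise XOR of all contributions: 0x%x\n", result);
    MPI_Finalize();
    return 0;
}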
On Apr 19, 2009, at 3:46 PM, Santolo Felaco wrote:
I mean the bitwise xor. Pardon for standard the