I think it's a really good approach to use MPI_Irecv/MPI_Test on the receiver
side to avoid any blocking that the sender might run into. But I'm a bit
curious: couldn't we use a special message beforehand between the sender and
receivers to let the receivers know how many messages to expect?
This way the
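For what it's worth, here is a minimal sketch of that idea in C, assuming each
rank can count up front how many lines it will route to every other rank
(function and variable names are illustrative, not from the original program):

#include <mpi.h>

/* send_counts[d] = number of messages this rank will send to rank d.
 * After the exchange, expected[s] = number of messages to expect from
 * rank s, so each receiver knows exactly when it has seen everything. */
void exchange_counts(const int *send_counts, int *expected, MPI_Comm comm)
{
    MPI_Alltoall((void *)send_counts, 1, MPI_INT,
                 expected, 1, MPI_INT, comm);
}

The receiver can then count incoming messages per source and stop polling once
every counter reaches its expected value.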
Took a deeper look into this, and I think that your first guess was
correct.
When we changed hostfile and -host to be per-app-context options, it
became necessary for you to put that info in the appfile itself. So
try adding it there. What you would need in your appfile is the
following:
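A rough sketch of what such an appfile could look like, assuming the two target
hosts are named hostB and hostC (placeholder names; see mpirun(1) for the exact
per-app-context options in your version):

-np 1 -host hostB hostname
-np 1 -host hostC hostname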
Hi all,
I want to trace my program using Vampir. I untarred Vampir and the license,
but when I ran Vampir it returned the error "can't find libXp.so.6", even
though I can find this library in /usr/lib. I have also set the ld
configuration and LD_LIBRARY_PATH, but none of it works. Has anyone run into
this situation?
On Jul 14, 2009, at 9:03 PM, Klymak Jody wrote:
On 14-Jul-09, at 5:14 PM, Robert Kubrick wrote:
Jody,
Just to make sure, you did set processor affinity during your test
right?
I'm not sure what that means in the context of OS X.
By setting processor affinity you can force execution of
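For reference, with Open MPI 1.3 processor affinity is usually requested
through an MCA parameter rather than anything in the application itself;
something like the following illustrative command line (whether it has any
effect depends on the OS providing binding support at all):

mpirun -np 4 -mca mpi_paffinity_alone 1 ./my_app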
On 14-Jul-09, at 5:14 PM, Robert Kubrick wrote:
Jody,
Just to make sure, you did set processor affinity during your test
right?
I'm not sure what that means in the context of OS X.
Hyperthreading was turned on.
Cheers, Jody
On Jul 13, 2009, at 9:28 PM, Klymak Jody wrote:
Hi Robert,
Jody,
Just to make sure, you did set processor affinity during your test
right?
On Jul 13, 2009, at 9:28 PM, Klymak Jody wrote:
Hi Robert,
Your question inspired me to run a few more tests. They are
crude, and I don't have actual CPU timing information because of a
library misma
Shaun Jackman wrote:
For my MPI application, each process reads a file and for each line
sends a message (MPI_Send) to one of the other processes determined by
the contents of that line. Each process posts a single MPI_Irecv and
uses MPI_Request_get_status to test for a received message. If a
Does use of 1.3.3 require recompilation of applications that were compiled
using 1.3.2?
Jim
-Original Message-
From: announce-boun...@open-mpi.org [mailto:announce-boun...@open-mpi.org]
On Behalf Of Ralph Castain
Sent: Tuesday, July 14, 2009 2:11 PM
To: OpenMPI Announce
Subject: [Open MPI
Hi,
For my MPI application, each process reads a file and for each line sends a message
(MPI_Send) to one of the other processes determined by the contents of that line. Each
process posts a single MPI_Irecv and uses MPI_Request_get_status to test for a received
message. If a message has been
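A minimal sketch of that receive pattern in C, with the buffer size, tag, and
function name made up for illustration; in the real program the request and
buffer would persist across the main loop:

#include <mpi.h>

void poll_for_message(void)
{
    char line[256];          /* illustrative buffer size */
    MPI_Request req;
    MPI_Status status;
    int done = 0;

    /* Post a single receive for the next line from any sender. */
    MPI_Irecv(line, sizeof line, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
              MPI_COMM_WORLD, &req);

    /* ... read the local file and MPI_Send lines to other ranks ... */

    /* Check for completion without deallocating the request. */
    MPI_Request_get_status(req, &done, &status);
    if (done) {
        /* handle 'line', then re-post the MPI_Irecv for the next message */
    }
}

Unlike MPI_Test, MPI_Request_get_status only queries completion and leaves the
request allocated, which fits the single posted receive described above.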
Again, thanks to some other posts, I think I've found a reasonable, if not
elegant, solution to the dlopen() issue with python and openmpi. Here is what
I'm doing. First I make a python binding to the following function in module
mpi_dl.py:
void mpi_dlopen()
{
void *handle;
handle = dl
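For context, the commonly used form of this trick (a sketch; the library name
is an assumption and depends on the installation) is to dlopen() libmpi with
RTLD_GLOBAL before the MPI-based extension modules are imported:

#include <dlfcn.h>
#include <stdio.h>

void mpi_dlopen(void)
{
    /* Load the MPI library with global symbol visibility so that the
     * components Open MPI dlopen()s later can resolve MPI symbols. */
    void *handle = dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
    if (handle == NULL)
        fprintf(stderr, "mpi_dlopen: %s\n", dlerror());
}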
From the Announce mailing list:
The Open MPI Team, representing a consortium of research, academic,
and industry partners, is pleased to announce the release of Open MPI
version 1.3.3. This release is mainly a bug fix release over the v1.3.2
release, but there are a few new features, including supp
No, it's not working as I expect, unless I expect something wrong.
(sorry for the long PATH, I needed to provide it)
$LD_LIBRARY_PATH=/hpc/home/USERS/lennyb/work/svn/ompi/trunk/build_x86-64/install/lib/
/hpc/home/USERS/lennyb/work/svn/ompi/trunk/build_x86-64/install/bin/mpirun
-np 2 -H witch1,
Run it without the appfile, just putting the apps on the cmd line -
does it work right then?
On Jul 14, 2009, at 10:04 AM, Lenny Verkhovsky wrote:
additional info
I am running mpirun on hostA and providing a host list with hostB and
hostC.
I expect each application to run on hostB and
additional info
I am running mpirun on hostA and providing a host list with hostB and hostC.
I expect each application to run on hostB and hostC, but I get all of them
running on hostA.
dellix7$cat appfile
-np 1 hostname
-np 1 hostname
dellix7$mpirun -np 2 -H witch1,witch2 -app appfile
dellix
Well, my previous solution has a major flaw: the Python dl module is not
available on AMD64. So I'm not really sure what to do except drop support
for openmpi. Any ideas would be greatly appreciated. Thanks.
Tom
--
Tom Evans
Radiation Transport and Shielding
Nuclear Science and Technology Divi
Hi Vipin
I have added support for these features to the OMPI trunk repository.
They are only accessible via MPI_Comm_spawn or
MPI_Comm_spawn_multiple, specified as MPI Info keys "add-host" and
"add-hostfile". Check the man pages for those functions to see how
they are used.
Quick summar
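For illustration, passing those keys would typically look like this (the host
name, executable, and process count below are placeholders; the man pages
Ralph mentions are the authoritative reference):

#include <mpi.h>

void spawn_on_new_host(void)
{
    MPI_Info info;
    MPI_Comm intercomm;
    int errcodes[2];

    MPI_Info_create(&info);
    /* Ask the runtime to add hostB to the job before spawning;
     * "add-hostfile" takes a hostfile path instead. */
    MPI_Info_set(info, "add-host", "hostB");

    MPI_Comm_spawn("./child_app", MPI_ARGV_NULL, 2, info,
                   0, MPI_COMM_WORLD, &intercomm, errcodes);
    MPI_Info_free(&info);
}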
All,
Thanks for the info. I've looked at a bunch of different options, but have
decided on the following course. Basically, my first attempt was to use SWIG
(we use swig to assemble our Python bindings) to put the following code in
all of our py modules:
%pythoncode %{
import sys, dl
sys.setdlop
Strange - let me have a look at it later today. Probably something
simple that another pair of eyes might spot.
On Jul 14, 2009, at 7:43 AM, Lenny Verkhovsky wrote:
Seems like a related problem:
I can't use a rankfile with an appfile, even after all those fixes (working
with trunk 1.4a1r21657).
Th
Seems like a related problem:
I can't use a rankfile with an appfile, even after all those fixes (working
with trunk 1.4a1r21657).
This is my case:
$cat rankfile
rank 0=+n1 slot=0
rank 1=+n0 slot=0
$cat appfile
-np 1 hostname
-np 1 hostname
$mpirun -np 2 -H witch1,witch2 -rf rankfile -app appfile
---