Thanks.
Moving one step forward: server 1, my compile server, has a number
of commercial C++ compilers (PathScale and Intel). I'd like to
compile different versions of the binary with each compiler and then
run these binaries on the Server 2 g++-compiled OMPI environments.
The FAQ says "not
Thanks guys!
I finally fixed my problem!!!
apellegr@m45-039:~$ mpirun -prefix ~/openmpi -machinefile
$OAR_FILE_NODES -mca pls_rsh_assume_same_shell 0 -mca pls_rsh_agent
"oarsh" -np 2 /n/poolfs/z/home/apellegr/mpi_test/hello_world.x86
Warning: Permanently added '[m45-039.pool]:6667' (RSA) to the list of known hosts.
Thanks for the OAR explanation!
Sorry - I should have been clearer in my comment. I was trying to
indicate that the command starting with "set" is triggering a bash syntax
error, and that is why the launch fails.
The rsh launcher uses a little "probe" technique to try and guess the
remote shell.
OMPI assumes (for faster startup) that your local shell is the same as
your remote shell. If that's not the case, try setting
pls_rsh_assume_same_shell to 0.
On Nov 6, 2008, at 3:31 PM, George Bosilca wrote:
OAR is the batch scheduler used on the Grid5K platform. As far as I
know, set is a basic shell internal command, and it is understood by
all shells. The problem here seems to be that somehow we're using
bash, but with tcsh shell code (because setenv is definitely not
something that bash understands).
I have no idea what "oar" is, but it looks to me like the rsh launcher
is getting confused about the remote shell it will use - I don't
believe that the "set" cmd shown below is proper bash syntax, and that
is the error that is causing the launch to fail.
What remote shell should it find? I
Hi all,
I'm trying to run an Open MPI application on an OAR cluster. I think the
cluster is configured correctly, but I still have problems when I run
mpirun:
apellegr@m45-037:~$ mpirun -prefix /n/poolfs/z/home/apellegr/openmpi
-machinefile $OAR_FILE_NODES -mca pls_rsh_agent "oarsh" -np 10
/n/p
George is right -- you *can* do this, but it is *not advised* (you'll
likely run out of memory or other resources pretty quickly -- if you
can run at all!). :-)
Try mpi_leave_pinned, and check out those FAQ sections that I sent,
particularly the OpenFabrics section, for how to specifically
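For what it's worth, mpi_leave_pinned is an MCA parameter, so one way to try it is directly on the mpirun command line; this is only a sketch, and the application name and process count are placeholders, not from this thread:

```sh
# Enable leave-pinned behavior, which can improve large-message bandwidth
# on OpenFabrics networks by avoiding repeated memory registration
mpirun --mca mpi_leave_pinned 1 -np 2 ./my_app
```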
Have a look at the FAQ; we discuss quite a few of these kinds of issues:
- http://www.open-mpi.org/faq/?category=tuning
- http://www.open-mpi.org/faq/?category=openfabrics
More specifically, what Eugene is saying is correct -- OMPI has made
tradeoffs for various, complicated reasons. One of t
In order to get good performance out of your test application, the
whole message has to be sent in just one fragment. The reason is that
as long as there is no progress thread in the MPI library (internal
to the library), there is no way to make progress.
Now, I can explain how to do this,
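One way to make a message of a given size go out eagerly in a single fragment is to raise the BTL's eager limit; the parameter below applies to the openib BTL and the value is only illustrative (an assumption on my part, not something stated in this thread):

```sh
# Illustrative: allow messages up to ~1 MB to be sent eagerly,
# i.e. in one fragment, over the openib BTL
mpirun --mca btl_openib_eager_limit 1048576 -np 2 ./my_app
```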
vladimir marjanovic wrote:
In
order to overlap communication and computation I don't want to use
MPI_Wait.
Right. One thing to keep in mind is that there are two ways of
overlapping communication and computation. One is you start a send
(MPI_Isend), you do a bunch of computation
From: Eugene Loh
To: Open MPI Users
Sent: Thursday, 6 November, 2008 18:08:26
Subject: Re: [OMPI users] Progress of the asynchronous messages
vladimir marjanovic wrote:
Hi,
I am a new user of Open MPI; I've used MPICH before.
There is a performance bug with the following scenario:
proc_B: MPI_Isend(...,proc_A,..,&request)
do{
sleep(1);
MPI_Test(..,&flag,&request);
count++
}while(!flag);
OMPI itself uses AC/AM to build itself, but our configure.ac and some
of our Makefile.am's are fairly complex -- I wouldn't use these as
starting points.
You probably want to start with some general AC/AM tutorials (the AM
documentation reads somewhat like a tutorial -- you might want to look
there first).
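As a concrete starting point, a minimal AC/AM skeleton for an MPI C++ program might look like the following; this is a hedged sketch, and the program and source-file names are placeholders:

```
# configure.ac -- minimal sketch
AC_INIT([hello_mpi], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am -- minimal sketch
bin_PROGRAMS = hello_mpi
hello_mpi_SOURCES = hello_mpi.cpp
```

Then bootstrap and build with the MPI wrapper compiler, e.g. `autoreconf -i && ./configure CXX=mpicxx && make`.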
As long as you compiled OMPI with support for OFED, yes. You will
need to have OFED installed on server 1 (even if you have no
OpenFabrics-capable devices) to build OMPI's OpenFabrics support.
FWIW, I do this kind of thing all the time: build OMPI on one machine
and NFS export it to all the others.
Hi Peter
Given how long it takes to hit the problem, have you checked your file
and disk quotas? Could be that the file is simply getting too big.
I'm also a tad curious how you got valgrind to work on OSX - I was
unaware that it supported that environment.
If all that looks okay, then the nex
Hi all,
I'm not sure if this is relevant to this mailing list, but I'm trying to
get autoconf/automake working with an Open MPI program I am writing (in
C++), and unfortunately I don't know where to begin. I'm new to both
tools but have them working well enough for a non-MPI program. When I
According to this FAQ, one should be able to compile on a computer
and then run the OMPI program on different hardware, as long as the
C++ compiler and OMPI versions are the same: http://www.open-mpi.org/
faq/?category=sysadmin#new-openmpi-version
I have the following situation:
Server 1
Fab