A simple program on my 4-node ROCKS cluster runs fine with the command:
/opt/openmpi/bin/mpirun -np 4 -machinefile machines ./sort_mpi6
Another, bigger program runs fine, but only on the head node, with the command:
cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/sort_mpi6
But with the command:
cd /sphere
Check that your LD_LIBRARY_PATH is getting set properly on your remote node
- it is likely missing the path to this libgdal. You might need to add the
path to your default shell profile (e.g., .bashrc).
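For example, a minimal sketch of that profile entry, assuming libgdal lives under the gdal prefix shown elsewhere in this thread (adjust the directory to wherever libgdal.so actually sits on your nodes):

# append to ~/.bashrc on the frontend and on every compute node
export LD_LIBRARY_PATH=/home/hushjian/software/gdal/lib:/opt/openmpi/lib:$LD_LIBRARY_PATH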
On Mon, Dec 2, 2013 at 9:23 PM, 胡杨 <781578...@qq.com> wrote:
> A simple program at my 4-node
How do I add the path to my default shell profile (e.g., .bashrc)? And here
is my LD_LIBRARY_PATH as exported on the ROCKS cluster, on both the
frontend and the compute nodes:
declare -x LD_LIBRARY_PATH="/usr/lib:/home/hushjian/software/gdal/lib:/opt/openmpi/lib"
and I also add
LD
Hello everyone.
I've installed the nightly 1.7.4 recently, and its Java API is completely
different from mpiJava.
Is there a guide or something similar for the new Open MPI Java bindings? Or
will one come out later (in some reasonable time)?
Specifically, I'm interested in how to port existing sources from the old
mpiJava.
Using the latest nightly snapshot (1.7.4) and only Apple compilers/tools (no
macports), I configure/build with the following:
./configure --prefix=/opt/trunk/apple-only-1.7.4 --enable-shared
--disable-static --enable-debug --disable-io-romio
--enable-contrib-no-build=vt,libtrace --enable-mpirun
Using openmpi-1.7.4, no macports, only apple compilers/tools:
mpirun -np 2 --mca btl sm,self hello_c
This hangs, also in MPI_Init().
Here’s the backtrace from the debugger:
bash-4.2$ lldb -p 4517
Attaching to process with:
process attach -p 4517
Process 4517 stopped
Executable module set t
Hmmm...are you connected to a network, or at least have a network active,
when you do this? It looks a little like the system is trying to open a
port between the process and mpirun, but is failing to do so.
On Tue, Dec 3, 2013 at 4:51 AM, Meredith, Karl
wrote:
> Using openmpi-1.7.4, no macport
The FAQ is your friend, for this and many other questions :-)
http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
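As a quick sanity check (a sketch; compute-0-0 is a placeholder node name), verify that the variable actually reaches a non-interactive shell on a compute node, since that is the kind of shell mpirun launches through:

ssh compute-0-0 'echo $LD_LIBRARY_PATH'

If the gdal directory is missing from the output, the export in your .bashrc is not being picked up on that node.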
On Tue, Dec 3, 2013 at 1:40 AM, 胡杨 <781578...@qq.com> wrote:
>
> How do I add the path to my default shell profile (e.g.,
> .bashrc)? And here is my LD_LIBRARY_P
I disconnected from our corporate network (ethernet connection) and tried
running again: same result, it stalls.
Then I also disconnected from our local wifi network and tried running again:
it worked!
bash-4.2$ mpirun -np 2 --mca btl sm,self hello_c
Hello, world, I am 0 of 2, (Open MPI v1.7.
The Java bindings were revised because (a) the old ones had significantly
bit-rotted, (b) we wanted to improve performance by making the code truly a
"binding" instead of just a wrapper around MPI calls, and (c) we wanted to
extend coverage to all of the current MPI standard. Hence the changes.
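In the meantime, a minimal sketch of building and launching against the new bindings, assuming your 1.7.4 build was configured with --enable-mpi-java and with Hello.java as a placeholder source file (note that the new API uses Java-style accessors, e.g. MPI.COMM_WORLD.getRank() where mpiJava had MPI.COMM_WORLD.Rank()):

# mpijavac is the wrapper compiler shipped with the Java bindings
/opt/openmpi/bin/mpijavac Hello.java
/opt/openmpi/bin/mpirun -np 4 java Hello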
Best guess I can offer is that they are blocking loopback on those networks
- i.e., they are configured such that you can use them to connect to a
remote machine, but not to a process on your local machine. I'll take a
look at the connection logic and see if I can get it to fail over to the
loopback interface.
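If that is indeed the cause, one possible interim workaround (a sketch, assuming the oob_tcp_if_include MCA parameter available in the 1.7 series; the loopback interface is lo0 on OS X, lo on Linux) is to pin the out-of-band channel to loopback explicitly:

mpirun -np 2 --mca btl sm,self --mca oob_tcp_if_include lo0 hello_c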
thanks ...
------------------ Original Message ------------------
From: "Ralph Castain"
Date: Tuesday, December 3, 2013, 9:03
To: "Open MPI Users"
Subject: Re: [OMPI users] can you help me please? thanks
The FAQ is your friend, for this and many other questions :-)
http://www.open-
Ok, I think we're chasing the same thing in multiple threads -- this looks like
a similar result to the one you sent in reply to Ralph.
Let's keep the other thread (with Ralph) going; this looks like some kind of
networking issue that we haven't seen before (e.g., unable to open ports to the
local
Okay. I’ll keep my responses limited to the other thread.
Thanks,
Karl
On Dec 3, 2013, at 9:54 AM, Jeff Squyres (jsquyres) wrote:
> Ok, I think we're chasing the same thing in multiple threads -- this looks
> like a similar result to the one you sent in reply to Ralph.
>
> Let's keep the other
Ralph --
Quick question: ORTE should be using local named sockets for connections to the
orted, right?
I guess what I'm asking is: if there's a
single-server-only-and-it-happens-to-be-the-local-server job, shouldn't it only
be using local named sockets, not IP sockets?
On Dec 3, 2013, at 8:
Hello Ivan,
From: Ivan Borisov <68van...@mail.ru>
Subject: [OMPI users] Several questions about new Java bindings
Date: December 3, 2013 5:22:29 AM EST
To: Open MPI users list
Reply-To: Open MPI Users
Hello everyone.
I've installed the nightly 1.7.4 recently, and its Java API is completely
diffe
Christoffer --
I somewhat dropped off email starting right before SC, and am finally plowing
through all the backlog.
Did you get your Java issues sorted out?
On Nov 19, 2013, at 5:17 AM, Christoffer Hamberg christoffer.hamb...@gmail.com
wrote:
> I see, I'm running:
>
> Ubuntu 13.04 (GNU/
There should never be more than one orted per MPI job on each server.
Do you see this happening with any specific pattern? Are you able to run
simple MPI jobs without problems (e.g., hello world and ring -- see the
examples/ subdirectory in your OMPI source tree)?
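For instance, a quick sketch of that sanity check, run from the top of the OMPI source tree with mpicc/mpirun on your PATH:

cd examples
make hello_c ring_c
mpirun -np 4 ./hello_c
mpirun -np 4 ./ring_c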
On Nov 23, 2013, at 12:23 AM