In the latest versions of libtool, the runtime library path is encoded with
a statement like:
LD_RUN_PATH="/scr_multipole/cary/facetsall/physics/uedge/par/uecxxpy/.libs:/contrib/babel-1.4.0-r6662p1-shared/lib:/scr_multipole/cary/facetsall/physics/uedge/par/uecxxpy:/scr_multipole/cary/volatile/ued
The only thing that changes is the required connectivity. It sounds to me
like you may have a firewall issue here, where cloud3 is blocking
connectivity from cloud6, but cloud6 is allowing connectivity from cloud3.
Is there a firewall in operation, perchance?
Ralph
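A hedged way to probe the suspected firewall from the command line (cloud3/cloud6 as named in the thread; `nc` and a running sshd on both nodes are assumptions):

```shell
# Check basic TCP reachability both ways via the sshd port (22).
# Note: Open MPI also opens random high TCP ports at runtime, so a
# restrictive firewall can pass ssh yet still block MPI traffic.
ssh cloud3 "nc -z -w 3 cloud6 22" && echo "cloud3 -> cloud6 ok"
ssh cloud6 "nc -z -w 3 cloud3 22" && echo "cloud6 -> cloud3 ok"
# On Fedora 11, the firewall can be stopped temporarily to test:
#   sudo service iptables stop   # run on each node, then retry mpirun
```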
On Mon, Aug 3, 2009 at 11:08 …
On Mon, Aug 3, 2009 at 9:47 AM, Ralph Castain wrote:
> You are both correct. If you simply type "mpirun pvserver", then we will
> execute pvserver on whatever machine is local.
>
> However, if you type "mpirun -n 1 -H host1 pvserver", then we will start
> pvserver on the specified host. Note that …
Task-farm or manager/worker recovery models typically depend on
intercommunicators (e.g., from MPI_Comm_spawn) and a resilient MPI
implementation. William Gropp and Ewing Lusk have a paper entitled
"Fault Tolerance in MPI Programs" that outlines how an application
might take advantage of th…
Is that kind of approach possible within an MPI framework? Perhaps a
grid approach would be better. More experienced people, speak up,
please?
(The reason I say that is that I too am interested in the solution of
that kind of problem, where an individual blade of a blade server
fails and correcting …
You are both correct. If you simply type "mpirun pvserver", then we will
execute pvserver on whatever machine is local.
However, if you type "mpirun -n 1 -H host1 pvserver", then we will start
pvserver on the specified host. Note that mpirun will still be executing on
your local machine - but pvserver …
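The placement distinction described above can be sketched as follows (host1/host2 are placeholder names; an Open MPI install on all machines is assumed):

```shell
# mpirun always executes on the machine where you type it;
# -H (or --host) controls where the launched ranks run.
mpirun -n 1 pvserver                  # pvserver runs on the local machine
mpirun -n 1 -H host1 pvserver         # pvserver runs on host1
mpirun -n 2 -H host1,host2 pvserver   # one rank on each listed host
```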
> I'm a newbie, so forgive me if I ask something stupid:
>
> why are you running the ssh command before the mpirun command? I'm
> interested in setting up a ParaView server on a LAN to post-process
> OpenFOAM simulation data.
>
> Just a total newbish comment: doesn't the mpirun in fact call for the
> ssh a…
David Doria wrote:
> I have three machines: mine (daviddoria) and two identical remote
> machines (cloud3 and cloud6). I can password-less ssh between any pair.
> The machines are all 32bit running Fedora 11. OpenMPI was installed
> identically on each. The .bashrc is identical on each. /etc/hosts is
> identical on each. …
Hi
I guess "task-farming" could give you a certain amount of the kind of
fault tolerance you want (i.e. a master process distributes tasks to idle
slave processes; however, this will only work if the slave processes
don't need to communicate with each other).
Jody
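Not MPI, but the task-farm idea above can be sketched with plain shell tools: a fixed pool of workers pulls independent tasks from a list (the task names here are made up):

```shell
# Farm four independent tasks out to a pool of 2 concurrent workers.
# Each "task" just echoes its name; a real farm would run a solver step.
printf '%s\n' task1 task2 task3 task4 \
  | xargs -n1 -P2 sh -c 'echo "finished $0"'
```

Because the tasks never communicate with each other, one failing task does not disturb the rest, which is exactly the property described above.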
On Mon, Aug 3, 2009 at 1:24 …
I have three machines: mine (daviddoria) and two identical remote machines
(cloud3 and cloud6). I can password-less ssh between any pair. The machines
are all 32bit running Fedora 11. OpenMPI was installed identically on each.
The .bashrc is identical on each. /etc/hosts is identical on each.
I w…
Thank you Dominik for all your help!!
I've solved the problem:
1. Execute: printenv > ~/.ssh/environment
2. Edit /etc/ssh/sshd_config: set PermitUserEnvironment to "yes" and
   check that UseLogin is set to "no".
3. Copy the file to the other machine:
   scp ~/.ssh/environment user@hostname:~/.ssh/environment
4. Edit sshd_config on the …
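Once the environment file is in place, a quick check that the remote side now sees the same variables (user/hostname are placeholders, and the variable names are just examples):

```shell
# Compare a few key variables locally and on the remote machine;
# the two outputs should match after the sshd_config change above.
printenv PATH LD_LIBRARY_PATH
ssh user@hostname 'printenv PATH LD_LIBRARY_PATH'
```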
Hi all,
Thanks Durga for your reply.
Jeff, you once wrote code for the Mandelbrot set to demonstrate fault
tolerance in LAM/MPI, i.e. killing any slave process doesn't affect the
others. That is exactly the behaviour I am looking for in Open MPI. I
attempted it, but had no luck. Can you please tell me how to write such a
program?
On Mon, Aug 3, 2009 at 6:13 PM, Lenny Verkhovsky wrote:
> Hi,
> you can find a lot of useful information under the FAQ section:
> http://www.open-mpi.org/faq/
> http://www.open-mpi.org/faq/?category=tuning#paffinity-defs
> Lenny.
> On Mon, Aug 3, 2009 at 11:55 AM, Lee Amy wrote:
>>
>> Hi,
>>
>> Does Op…
Hi,
you can find a lot of useful information under the FAQ section:
http://www.open-mpi.org/faq/
http://www.open-mpi.org/faq/?category=tuning#paffinity-defs
Lenny.
On Mon, Aug 3, 2009 at 11:55 AM, Lee Amy wrote:
> Hi,
>
> Does Open MPI have processor binding like the "taskset" command? For
> example, …
Hi,
Does Open MPI have processor binding like the "taskset" command? For
example, I started 16 MPI processes and I want to bind each of them to a
specific processor. How do I do that?
Thank you very much.
Best Regards,
Amy
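A sketch of both approaches from the FAQ link above (flag names vary across Open MPI versions, so treat these as assumptions to check against your install; ./my_app is a placeholder):

```shell
# Open MPI 1.2/1.3-era processor affinity via an MCA parameter:
mpirun -np 16 --mca mpi_paffinity_alone 1 ./my_app
# Later Open MPI releases expose explicit binding options instead:
#   mpirun -np 16 --bind-to core ./my_app
# Plain Linux alternative: pin a single process to CPUs 0-3 with taskset:
taskset -c 0-3 ./my_app
```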