Yes, it seems to begin to run with the firewall off (via the GUI): it starts
the requested number of mpirun processes, then terminates with no errors.
On 1/24/2014 7:48 PM, Ralph Castain wrote:
Have you tried just turning the firewall "off"? It would at least let
you know if things work.
On Jan 24, 2014, at
Have you tried just turning the firewall "off"? It would at least let you know
if things work.
On Jan 24, 2014, at 3:48 PM, Dan Hsu wrote:
> Ralph, thanks. I checked, and 'remote login' has been on.
>
> It's frustrating, like pulling-out-hair time.
>
>
> On 1/24/2014 1:11 PM, Ralph Castain
Ralph, thanks. I checked, and 'remote login' has been on.
It's frustrating, like pulling-out-hair time.
On 1/24/2014 1:11 PM, Ralph Castain wrote:
The procs attempt to open a socket back to mpirun for communication,
so the firewall has to allow TCP communication. I usually turn on the
"remote
Greg and I are chatting off list; there's something definitely weird going on
in his setup.
We'll report back to the list when we figure it out.
On Jan 24, 2014, at 1:26 PM, Gus Correa wrote:
> On 01/24/2014 12:50 PM, Fischer, Greg A. wrote:
>> Yep. That was the problem. It works beautifully
It is generally a bad idea to install OMPI in system directories like
/usr/local, as multiple versions can wind up intermingled with each other, thus
causing this kind of problem. I would suggest taking a 1.6.5 tarball and
building it with a prefix in your own home directory area. You should th
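A minimal sketch of such a home-directory build, assuming a 1.6.5 tarball and
a hypothetical install prefix of $HOME/ompi-1.6.5:

  tar xzf openmpi-1.6.5.tar.gz && cd openmpi-1.6.5
  ./configure --prefix=$HOME/ompi-1.6.5
  make -j 4 all install
  # then put this install first on both paths, on every node:
  export PATH=$HOME/ompi-1.6.5/bin:$PATH
  export LD_LIBRARY_PATH=$HOME/ompi-1.6.5/lib:$LD_LIBRARY_PATH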
The procs attempt to open a socket back to mpirun for communication, so the
firewall has to allow TCP communication. I usually turn on the "Remote Login"
feature in the "Sharing" area in System Preferences.
On Jan 23, 2014, at 4:34 PM, Dan Hsu wrote:
> Hi All
>
> Am trying to run a parallel molecul
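For reference, both suggestions above can also be applied from a Terminal on
each Mac; a sketch, assuming a stock OS X install where systemsetup and
socketfilterfw are available:

  # enable the "Remote Login" (sshd) service, same as the Sharing pane
  sudo systemsetup -setremotelogin on
  # check, then disable, the application firewall globally (testing only)
  sudo /usr/libexec/ApplicationLayerFirewall/socketfilterfw --getglobalstate
  sudo /usr/libexec/ApplicationLayerFirewall/socketfilterfw --setglobalstate off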
You are right. The problem was solved by using the full path of one MPI
version:
/home/myuser/openmpi-x/bin/mpirun -hostfile machines -np 2 ./hello
Thanks,
Edson
On 24-01-2014 16:00, Ralph Castain wrote:
Looks to me like you are picking up a different OMPI installation on
the remote node -
On 01/24/2014 12:50 PM, Fischer, Greg A. wrote:
Yep. That was the problem. It works beautifully now.
Thanks for prodding me to take another look.
With regards to openmpi-1.6.5, the system that I'm compiling and running on,
SLES10, contains some pretty dated software (e.g. Linux 2.6.x, python 2
Actually, please disregard the "Can this be done safely with.." part,
because I don't want to have to use a condition variable; I want it all to
happen by inter-process communication through Open MPI.
On Fri, Jan 24, 2014 at 11:28 AM, Kenneth Adam Miller
<kennethadammil...@gmail.com> wrote:
> I h
On Jan 24, 2014, at 12:50 PM, "Fischer, Greg A." wrote:
> Yep. That was the problem. It works beautifully now.
Great!
> Thanks for prodding me to take another look.
I'd be embarrassed to admit how many times I make the same mistake. And I've
been working on Open MPI for over 10 years. :-)
Looks to me like you are picking up a different OMPI installation on the remote
node - check that your path and ld_library_path on the remote host are being
set correctly
On Jan 24, 2014, at 9:41 AM, etcamargo wrote:
> Hi, All!
>
> Please, I have a problem to run a simple "hello world" program
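A quick way to check that, sketched with a hypothetical remote host name node2:

  ssh node2 'which mpirun; echo $PATH; echo $LD_LIBRARY_PATH'
  # note: a non-interactive ssh shell may read different startup files than a
  # login shell, which is a common cause of exactly this mismatch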
Yep. That was the problem. It works beautifully now.
Thanks for prodding me to take another look.
With regards to openmpi-1.6.5, the system that I'm compiling and running on,
SLES10, contains some pretty dated software (e.g. Linux 2.6.x, python 2.4, gcc
4.1.2). Is it possible there's simply an
Hi, All!
Please, I have a problem running a simple "hello world" program on
different hosts. The hosts are virtual machines located on the same network.
The program works fine only on one host; SSH is OK between the
machines, and NFS is OK, sharing the executable files between the
machines.
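For context, a minimal sketch of this kind of multi-host run, with
hypothetical VM names vm1 and vm2 in a hostfile named machines:

  # machines (hostfile), one line per VM
  vm1 slots=1
  vm2 slots=1

  mpirun -hostfile machines -np 2 ./hello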
I have a specific use case that I want to describe, and I'm brand new to
MPI. It's rather complex, so I want to be careful that I design it so that
there are no race conditions.
Pool A is a buffer (of type 1) handle manager that feeds buffer handles
into thread set 1, and receives old handles fro
Hmm... It looks like CMake was somehow finding openmpi-1.6.5 instead of
openmpi-1.4.3, despite the environment variables being set otherwise. This is
likely the explanation. I'll try to chase that down.
>-----Original Message-----
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Jef
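One way to pin a CMake build to a specific MPI, sketched against the
openmpi-1.4.3 tree shown elsewhere in this thread (MPI_C_COMPILER and
MPI_CXX_COMPILER are cache variables of CMake's stock FindMPI module):

  cmake \
    -DMPI_C_COMPILER=/tools/casl_sles10/vera_clean/gcc-4.6.1/toolset/openmpi-1.4.3/bin/mpicc \
    -DMPI_CXX_COMPILER=/tools/casl_sles10/vera_clean/gcc-4.6.1/toolset/openmpi-1.4.3/bin/mpicxx \
    <path-to-source>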
Hi,
I've done an operating system upgrade to OpenSUSE 13.1 and I've upgraded
OpenFOAM from 2.2.1 to 2.2.2.
Before, OpenMPI worked well.
Now, it does not work at all.
First Step
--
After decomposing the domain, I've tried to start parallel computation:
mpirun -np 8 simpleFoam -
Ok. I only mention this because the "mca_paffinity_linux.so: undefined symbol:
mca_base_param_reg_int" type of message is almost always an indicator of two
different versions being installed into the same tree.
On Jan 24, 2014, at 11:26 AM, "Fischer, Greg A." wrote:
> Version 1.4.3 and 1.6.
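A quick sketch of checking which installation is actually being picked up,
assuming the suspect tree's bin directory is on the PATH:

  which mpirun ompi_info
  ompi_info | grep 'Open MPI:'                  # reports the version string
  ls $(dirname $(which mpirun))/../lib/openmpi  # stale MCA plugins from another version would show up here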
Version 1.4.3 and 1.6.5 were and are installed in separate trees:
1003 fischega@lxlogin2[~]> ls /tools/casl_sles10/vera_clean/gcc-4.6.1/toolset/openmpi-1.*
/tools/casl_sles10/vera_clean/gcc-4.6.1/toolset/openmpi-1.4.3:
bin etc include lib share
/tools/casl_sles10/vera_clean/gcc-4.6.1/toolset
On Jan 22, 2014, at 10:21 AM, "Fischer, Greg A." wrote:
> The reason for deleting the openmpi-1.6.5 installation was that I went back
> and installed openmpi-1.4.3 and the problem (mostly) went away. Openmpi-1.4.3
> can run the simple tests without issue, but on my "real" program, I'm getting