On Mon, Aug 3, 2009 at 1:41 PM, Ralph Castain wrote:
> The only thing that changes is the required connectivity. It sounds to me
> like you may have a firewall issue here, where cloud3 is blocking
> connectivity from cloud6, but cloud6 is allowing connectivity from cloud3.
>
> Is there a firewall
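On Fedora 11, a quick way to check for and disable the default firewall is roughly the following; the service name assumes a stock iptables install, so treat this as a sketch:

  # check whether the default iptables firewall is running
  sudo service iptables status
  # stop it for the current session
  sudo service iptables stop
  # keep it from coming back at the next boot
  sudo chkconfig iptables off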
On Mon, Aug 3, 2009 at 9:47 AM, Ralph Castain wrote:
> You are both correct. If you simply type "mpirun pvserver", then we will
> execute pvserver on whatever machine is local.
>
> However, if you type "mpirun -n 1 -H host1 pvserver", then we will start
> pvserver on the specified host. Note that
> I'm a newbie, so forgive me if I ask something stupid:
>
> why are you running the ssh command before the mpirun command? I'm interested
> in setting up a ParaView server on a LAN to post-process OpenFOAM
> simulation data.
>
> Just a total newbish comment: doesn't the mpirun in fact call for the
> ssh a
I have three machines: mine (daviddoria) and two identical remote machines
(cloud3 and cloud6). I can password-less ssh between any pair. The machines
are all 32-bit, running Fedora 11. OpenMPI was installed identically on each.
The .bashrc is identical on each. /etc/hosts is identical on each.
I w
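A minimal hostfile sketch for a setup like this, using the machine names above (the slot counts are an assumption, not something stated in the thread):

  # hostfile: one line per machine, with how many processes each may run
  daviddoria slots=1
  cloud3 slots=1
  cloud6 slots=1

It could then be launched with something like: mpirun -np 3 -hostfile hostfile hello-mpi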
> Check ompi_info --param oob tcp for info on those (and other) params.
>
> Ralph
>
> On Jul 29, 2009, at 2:46 PM, David Doria wrote:
>
>
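For example, the out-of-band TCP parameters can be listed, and if a machine has several interfaces the traffic can be pinned to one of them on the mpirun line; the interface name eth0 below is only a placeholder:

  # list the oob tcp parameters and their current values
  ompi_info --param oob tcp
  # restrict out-of-band and TCP point-to-point traffic to one interface
  mpirun --mca oob_tcp_if_include eth0 --mca btl_tcp_if_include eth0 \
         -H 10.1.2.126,10.1.2.122,10.1.2.123 hello-mpi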
Machine 125 had the default Fedora firewall turned on. I turned it off, and
it now works with simply
mpirun -H 10.1.2.126,10.1.2.122,10.1.2.1
On Wed, Jul 29, 2009 at 4:15 PM, Ralph Castain wrote:
> Using direct can cause scaling issues as every process will open a socket
> to every other process in the job. You would at least have to ensure you
> have enough file descriptors available on every node.
> The most likely cause is either (a
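As a rough sketch of checking the per-process file descriptor limit on each node (the value below is just a typical Linux figure, not taken from this thread):

  # show the open-file limit for the current shell
  ulimit -n
  # raise the soft limit for this session, e.g. to 4096
  ulimit -n 4096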
On Wed, Jul 29, 2009 at 3:42 PM, Ralph Castain wrote:
> It sounds like perhaps IOF messages aren't getting relayed along the
> daemons. Note that the daemon on each node does have to be able to send TCP
> messages to all other nodes, not just mpirun.
>
> Couple of things you can do to check:
>
>
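A coarse way to confirm that TCP connections between the compute nodes themselves are not blocked outright, assuming the nc (netcat) utility is installed; the daemons use dynamic ports, so probing the ssh port only catches blanket firewall rules:

  # from one compute node, try a TCP connection to another compute node's ssh port
  ssh 10.1.2.122 nc -z 10.1.2.123 22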
I wrote a simple program to display "hello world" from each process.
When I run this (126 - my machine, 122, and 123), everything works fine:
[doriad@daviddoria MPITest]$ mpirun -H 10.1.2.126,10.1.2.122,10.1.2.123
hello-mpi
From process 1 out of 3, Hello World!
From process 2 out of 3, Hello Wor
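For reference, a minimal sketch of that kind of program; the file and variable names are assumptions, and only the printed message format is taken from the output above:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, size;
      MPI_Init(&argc, &argv);               /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process */
      MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
      printf("From process %d out of %d, Hello World!\n", rank, size);
      MPI_Finalize();
      return 0;
  }

It would be built with something like mpicc hello-mpi.c -o hello-mpi and launched with the mpirun line shown above.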
On Thu, Jul 23, 2009 at 5:47 PM, Ralph Castain wrote:
> I doubt those two would work together - however, a combination of 1.3.2 and
> 1.3.3 should.
>
> You might look at the ABI compatibility discussion threads (there have been
> several) on this list for the reasons. Basically, binary compatibilit
Is OpenMPI backwards compatible? That is, if I am running 1.3.1 on one
machine and 1.3.3 on the rest, is it supposed to work? Or do they all
need exactly the same version?
When I add the machine with the wrong version to the machine list and run a
simple "hello world from each process" type program, I see no output.