This smacks of a firewall issue. I thought you'd said you weren't using one,
but reading back through your emails I can't see anywhere that you say so. Are
you running a firewall or any iptables rules on any of the nodes? It looks to
me like you may have some setup on the worker nodes.
Ash
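
In case it is useful, one quick way to check for an active firewall on the
compute nodes (the node names below are only placeholders for your hosts, and
the commands need root):

  for h in node01 node02; do ssh $h 'iptables -L -n'; done   # list active rules
  service iptables status                                    # RHEL/Scientific Linux firewall service
  service iptables stop                                      # stop it temporarily for a test

If things start working with the firewall stopped, that points at the rules
rather than at Open MPI.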
Rolf vandeVaart wrote:
Ethan:
Can you run just "hostname" successfully? In other words, a non-MPI program.
If that does not work, then we know the problem is in the runtime. If it does
work, then there is something with the way the MPI library is setting up its
connections.
Is there more than one interface on
Prentice Bisbal wrote:
I'm assuming you already tested ssh connectivity and verified everything
is working as it should. (You did test all that, right?)
Yes. I am able to log in remotely to all nodes from the master, and to each node from each node
without a password. Each node mounts the sa
Hello,
the InfiniBand architecture has an LMC feature to assign multiple virtual
LIDs to one port, which provides multiple paths between two ports. Is there a
method in Open MPI to enable message striping over these paths to increase
bandwidth or avoid congestion?
(I don't mean the multirail fe
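
I am not certain how much of this Open MPI 1.4 exposes, but checking the
openib BTL parameters on your own install is the safest way to find out; the
grep pattern below is only a guess at the naming:

  ompi_info --param btl openib | grep -i lmc

If something like btl_openib_max_lmc shows up there, it can be set with --mca
on the mpirun command line.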
Ethan Deneault wrote:
> All,
>
> I am running Scientific Linux 5.5, with OpenMPI 1.4 installed into the
> /usr/lib/openmpi/1.4-gcc/ directory. I know this is typically
> /opt/openmpi, but Red Hat does things differently. I have my PATH and
> LD_LIBRARY_PATH set correctly; because the test program
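
For completeness, the environment for that Red Hat layout would look roughly
like this (assuming the usual bin/ and lib/ subdirectories under the install
prefix; check your own tree, and remember that non-interactive ssh shells on
the worker nodes need the same settings, e.g. via ~/.bashrc):

  export PATH=/usr/lib/openmpi/1.4-gcc/bin:$PATH
  export LD_LIBRARY_PATH=/usr/lib/openmpi/1.4-gcc/lib:$LD_LIBRARY_PATH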
Hello,
In January, I reported a problem with Open MPI 1.4.1 and PathScale 3.2
in which a simple Hello World hung during initialization
( http://www.open-mpi.org/community/lists/users/2010/01/11863.php ).
Open MPI 1.4.2 does not show this problem.
However, we are now having trouble with 1.4.2, P
Hi,
Sorry, but I got lost in what I want to do. I have built a small home cluster
with Pelican_HPC, which uses Open MPI, and I was trying to find a way to get a
multithreaded program to work in a multiprocess way without taking the time to
learn MPI. And my vision was a sort of wrapper that takes C POSIX
On 20 Sep 2010, at 22:24, Mikael Lavoie wrote:
> I want to know if there is an implementation that permits running a single
> host process on the master of the cluster, which will then spawn 1 process
> per -np X defined thread on the hosts specified in the host list. The host
> will then act as a syn
Hi
I don't know if I correctly understand what you need, but have you
already tried MPI_Comm_spawn?
Jody
On Mon, Sep 20, 2010 at 11:24 PM, Mikael Lavoie wrote:
> Hi,
>
> I want to know if there is an implementation that permits running a single
> host process on the master of the cluster, that will
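
To illustrate Jody's MPI_Comm_spawn suggestion, here is a rough sketch of a
master that launches workers at runtime. The "./worker" executable and the
process count are placeholders, not anything from Mikael's setup; the workers
would be ordinary MPI programs that call MPI_Init and can reach the master
through MPI_Comm_get_parent.

  /* spawn_master.c: launch nworkers copies of ./worker and get an
     intercommunicator back for talking to them. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      MPI_Comm intercomm;
      int nworkers = 4;   /* placeholder: one spawned process per desired worker */

      MPI_Init(&argc, &argv);

      /* Spawn the workers; an MPI_Info object could carry a host list here. */
      MPI_Comm_spawn("./worker", MPI_ARGV_NULL, nworkers, MPI_INFO_NULL,
                     0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

      printf("master: spawned %d workers\n", nworkers);

      /* Point-to-point or collective calls over intercomm reach the workers. */
      MPI_Finalize();
      return 0;
  }

The master itself is started with a plain "mpirun -np 1 ./spawn_master";
whether the spawned workers land on other nodes depends on the hostfile given
to mpirun.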