Hi justa tester,
Is your p2p1 interface an Infiniband port, or is it Ethernet?
If it is Ethernet, try removing "--mca btl_openib_if_include p2p1"
from your mpiexec command line, because it would conflict with
the other MCA parameter you chose, "--mca btl openib,sm,self".
Simpler may be better.
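For example (4 ranks and "./a.out" are just placeholders for your own command line):

    mpiexec -np 4 --mca btl openib,sm,self ./a.out

or, if p2p1 really is plain Ethernet and you want TCP instead of openib:

    mpiexec -np 4 --mca btl tcp,sm,self ./a.out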
I see you are running this via "sudo". Unfortunately, that has a peculiar
behavior - it resets the environment variables to something appropriate for
root. So I suspect that the library path is not correct as far as mpirun is
concerned.
You can check it easily enough - just do "sudo printenv"
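For example:

    sudo printenv LD_LIBRARY_PATH PATH

and compare the output with a plain "printenv". If they differ, one workaround
(just a sketch - substitute your real mpirun arguments) is to pass the variables
through explicitly:

    sudo env LD_LIBRARY_PATH=$LD_LIBRARY_PATH PATH=$PATH mpirun <your args>

or use "sudo -E" if your sudoers configuration permits it.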
Hi Castain,
You have some major problems with confused installations of MPIs. First, you cannot
compile an application against MPICH and expect to run it with OMPI - the two are
not binary compatible. You need to compile against the MPI installation you intend
to run against.
I did this, sr
Hi Markus
You have some major problems with confused installations of MPIs. First, you
cannot compile an application against MPICH and expect to run it with OMPI -
the two are not binary compatible. You need to compile against the MPI
installation you intend to run against.
Second, your errors
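A quick way to check which MPI a build actually picks up ("your_app" below is a
placeholder for your executable):

    which mpicc
    mpicc --showme            # Open MPI's wrapper prints the underlying compile line
    ldd ./your_app | grep -i mpi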
On 11/24/2011 10:08 PM, MM wrote:
Hi
I get the same error while linking against home built 1.5.4 openmpi libs on
win32.
I didn't get this error against the prebuilt libs.
I see you use Suse. There probably is an openmpi.rpm or openmpi.dpkg already
available for Suse which contains the libraries
Hi
I get the same error while linking against home built 1.5.4 openmpi libs on
win32.
I didn't get this error against the prebuilt libs.
I see you use Suse. There probably is an openmpi.rpm or openmpi.dpkg already
available for Suse which contains the libraries and you could link against
those and
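On SUSE, something along these lines should pull them in - the exact package names
vary by release, so treat them as guesses and confirm with the search first:

    zypper search openmpi
    sudo zypper install openmpi openmpi-devel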
You'll need to be a bit more specific. What you describe should work fine.
foo.h:
#include <mpi.h>
extern MPI_Datatype mydtype;   /* declared here, defined in exactly one .cc file */
foo.cc:
#include "foo.h"
MPI_Datatype mydtype;          /* the one definition */
bar.cc:
#include "foo.h"
void bogus(void) {
    MPI_Datatype foo = mydtype;
}
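The only other requirement is that exactly one translation unit actually constructs
and commits the type before anyone uses it - for instance (illustrative only, using
a contiguous type of 3 doubles):

    /* e.g. in foo.cc, sometime after MPI_Init */
    MPI_Type_contiguous(3, MPI_DOUBLE, &mydtype);
    MPI_Type_commit(&mydtype);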
On Oct 9, 2011, at 4:10 PM, Jack Bryan wrote:
> Hi,
>
> I nee
> it ?
> Any help is appreciated.
> thanks
> Jack
> july 7 2010
>
>
> From: solarbik...@gmail.com
> Date: Wed, 7 Jul 2010 17:32:27 -0700
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] Open MPI error MPI_ERR_TRUNCATE: message truncated
This error typically occurs when the received message is bigger than the
specified buffer size. You need to narrow your code down to the offending
receive command to see if this is indeed the case.
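A minimal illustration of the mismatch (my own sketch, not taken from your code -
the counts are arbitrary):

    int buf[100];
    MPI_Status status;
    /* rank 0 sends 100 ints ... */
    MPI_Send(buf, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);
    /* ... but rank 1 only leaves room for 10, so the receive fails
       with MPI_ERR_TRUNCATE */
    MPI_Recv(buf, 10, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);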
On Wed, Jul 7, 2010 at 8:42 AM, Jack Bryan wrote:
> Dear All:
>
> I need to transfer some messages f
Thank you Ralph,
I found the problem: it was because I had wrongly configured the second node's
SELinux policy (which was set to enforcing).
After disabling it, the parallel-hello works fine.
regards,
-andria
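For anyone else who hits this, the relevant commands are roughly:

    getenforce                   # show the current SELinux mode
    sudo setenforce 0            # permissive until the next reboot
    # for a permanent change, set SELINUX=permissive (or disabled)
    # in /etc/selinux/config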
On Tue, Mar 17, 2009 at 8:08 PM, Ralph Castain wrote:
> Hi Andria
>
> The problem is
Hi Andria
The problem is a permissions one - your system has been set up so that
only root has permission to open a TCP socket. I don't know what
system you are running - you might want to talk to your system admin
or someone knowledgeable on that operating system to ask them how to
revise
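If you want to confirm it yourself before involving the admin, here is a tiny
stand-alone test (my sketch, unrelated to Open MPI itself) - compile it with gcc
and run it as the ordinary user:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void) {
        /* try to create an ordinary TCP socket */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            printf("socket() failed: %s\n", strerror(errno));
            return 1;
        }
        /* bind it to an OS-chosen ephemeral port on any interface */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = 0;
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("bind() failed: %s\n", strerror(errno));
        else
            printf("socket + bind succeeded as this user\n");
        close(fd);
        return 0;
    }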
Have you tried this:
http://www.open-mpi.org/faq/?category=openfabrics#v1.2-use-early-completion
On Feb 2, 2009, at 2:52 PM, c.j@exxonmobil.com wrote:
I am using openmpi to run a job on 4 nodes, 2 processors per node. Seems
like 5 out of the 8 processors executed the app successfully
Hi Prakash
I can't really test this solution as the Torque dynamic host allocator
appears to be something you are adding to that system (so it isn't part of
the released code). However, the attached code should cleanly add any nodes
to any existing allocation known to OpenRTE.
I hope to resume work
On Apr 2, 2007, at 12:53 PM, Prakash Velayutham wrote:
prakash@wins04:~/thesis/CS/Samples>mpirun -np 4 --bynode --hostfile
machinefile ./parallel.laplace
[wins01:17699] *** An error occurred in MPI_Comm_spawn
[wins01:17699] *** on communicator MPI_COMM_WORLD
[wins01:17699] *** MPI_ERR_ARG: in
Thanks Ralph. I will wait for your Torque dynamic host addition solution.
Prakash
>>> r...@lanl.gov 04/02/07 1:00 PM >>>
Hi Prakash
This is telling you that you have an error in the comm_spawn command itself.
I am no expert there, so I'll have to let someone else identify it for you.
There are
Hi Prakash
This is telling you that you have an error in the comm_spawn command itself.
I am no expert there, so I'll have to let someone else identify it for you.
There are no limits to launching on nodes in a hostfile - they are all
automatically considered "allocated" when the file is read. If
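For comparison, a bare-bones MPI_Comm_spawn call that is known to be well-formed
looks roughly like this (the executable name and the process count of 4 are just
placeholders taken from your command line):

    MPI_Comm child;
    int errcodes[4];
    MPI_Comm_spawn("./parallel.laplace", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &child, errcodes);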
Hello,
Thanks for the patch. I still do not know the internals of Open MPI, so can't
test this right away. But here is another test I ran and that failed too.
I have now removed Torque from the equation. I am NOT requesting nodes through
Torque. I SSH to a compute node and start up the code as
No offense, but I would definitely advise against that path. There are
other, much simpler solutions to dynamically add hosts.
We *do* allow dynamic allocation changes - you just have to know how to do
them. Nobody asked before... ;-) Future variations will include an even
simpler, single API solution.
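If it helps in the meantime: later Open MPI releases document an "add-host" MPI_Info
key for MPI_Comm_spawn - whether your installed version supports it is something to
verify against its MPI_Comm_spawn man page. A sketch (the host name and executable
are placeholders):

    MPI_Info info;
    MPI_Comm child;
    MPI_Info_create(&info);
    /* "add-host" is documented for later releases - check your man page */
    MPI_Info_set(info, "add-host", "wins05");
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, info, 0, MPI_COMM_WORLD,
                   &child, MPI_ERRCODES_IGNORE);
    MPI_Info_free(&info);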
Ralph Castain wrote:
> The runtime underneath Open MPI (called OpenRTE) will not allow you to spawn
> processes on nodes outside of your allocation. This is for several reasons,
> but primarily because (a) we only know about the nodes that were allocated,
> so we have no idea how to spawn a proc
Thanks for the info, Ralph. It is as I thought, but was hoping wouldn't
be that way.
I am requesting more nodes from the resource manager from inside of my
application code using the RM's API. When I know they are available
(allocated by the RM), I am trying to split the application data across
the
The runtime underneath Open MPI (called OpenRTE) will not allow you to spawn
processes on nodes outside of your allocation. This is for several reasons,
but primarily because (a) we only know about the nodes that were allocated,
so we have no idea how to spawn a process anywhere else, and (b) most
OK. Figured that it was the wrong number of arguments to the code.
Thanks,
Prakash
Jeff Squyres (jsquyres) wrote:
I'm assuming that this is during the startup shortly after mpirun,
right? (i.e., during MPI_INIT)
It looks like MPI processes were unable to connect back to the
rendezvous point (mpir
I'm assuming that this is during the startup shortly after mpirun,
right? (i.e., during MPI_INIT)
It looks like MPI processes were unable to connect back to the
rendezvous point (mpirun) during startup. Do you have any firewalls or
port blocking running in your cluster?
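A quick way to check on a Linux cluster (this assumes iptables, which may not match
your setup):

    sudo iptables -L -n        # list the active firewall rules on each node

Temporarily flushing the rules on a test node (sudo iptables -F) is a blunt but
effective way to see whether the firewall is what is blocking the connections.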