Hi,
I have just installed OpenMPI-1.7.1 and cannot get it running.
Here are the error messages:
[cmy@gLoginNode1 test_nbc]$ mpirun -n 4 -host gnode100 ./hello
[gnode100:31789] Error: unknown option "--tree-spawn"
input in flex scanner failed
[gLoginNode1:14920] [[62542,0],0] ORTE_ERROR_LOG: A messa
Hi all,
Jeff -- I am not sure what you mean by STL, but currently I am using
mpich-3.0.4 with gcc and I don't have any problems. Is there a way to know
whether C++ still works on Mac or not? I am sure that on Mac I use C++, but I
haven't tried to use it before.
Gus -- I tried to use CXX=g++ but the confi
You aren't setting the path correctly on your backend machines, and so they are
picking up an older version of OMPI.
On Jun 14, 2013, at 2:08 AM, Zehan Cui wrote:
> Hi,
>
> I have just installed OpenMPI-1.7.1 and cannot get it running.
>
> Here is the error messages:
>
> [cmy@gLoginNode1 test_
Gus picked up the issue properly - you're setting CXX to gcc; it needs to be
g++.
If configure fails with g++, then you have a busted C++ compiler (that's a
guess; I haven't seen the output from the failed configure when you specify
CXX=g++). You can disable OMPI's use of C++ with --disable-mpi-cxx.
I think the PATH setting is ok. I forgot to mention that it runs well on the
local machine.
The PATH setting on the local machine is
[cmy@gLoginNode1 ~]$ echo $PATH
/home/cmy/clc/benchmarks/nasm-2.09.10:/home3/cmy/czh/opt/ompi-1.7.1/bin/
:/home3/cmy/czh/opt/autoconf-2.69/bin/:/home3/cmy/czh/opt/mvap
Check the PATH you get when you run non-interactively on the remote machine:
ssh gnode100 env | grep PATH
On Jun 14, 2013, at 10:09 AM, Zehan Cui wrote:
> I think the PATH setting is ok. I forgot to mention that it runs well on the
> local machine.
>
> The PATH setting on the local machine is
>
Thanks.
That's exactly the problem. When I add the prefix to the mpirun command,
everything works fine.
- Zehan Cui
On Fri, Jun 14, 2013 at 10:25 PM, Jeff Squyres (jsquyres) <
jsquy...@cisco.com> wrote:
> Check the PATH you get when you run non-interactively on the remote
> machine:
>
> ssh gnode10
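For reference, the prefixed invocation presumably looked something like the
following (the install directory is taken from the PATH shown earlier in the
thread; this is a sketch, not the poster's exact command line):

mpirun --prefix /home3/cmy/czh/opt/ompi-1.7.1 -n 4 -host gnode100 ./hello

mpirun's --prefix option makes the remote orted use that installation's bin
and lib directories, so the backend nodes stop picking up the older OMPI.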
I'd like to bump this question. I also wanted to ask: I've been
searching the archives, and it seems that in past versions of OMPI,
only MPI_THREAD_SINGLE was available from the default configuration of
OMPI. It seems that as long as calls to MPI were serialized, however,
there were no issues.
Hi!
I use OpenMPI 1.7.1 from MacPorts (+threads +gcc47). A simple hello world
program calling
MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
hangs if run on more than one process. All works fine if I
- either use MPI_THREAD_SINGLE
- or use OpenMPI
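For reference, a minimal reproducer along these lines (my sketch, not the
poster's exact program) would be:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* Request serialized threading; the runtime reports what it granted. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
    if (provided < MPI_THREAD_SERIALIZED)
        printf("warning: only thread level %d provided\n", provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("hello from rank %d of %d (thread level %d)\n", rank, size, provided);
    MPI_Finalize();
    return 0;
}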
I have no idea how MacPorts configures OMPI - did you check the output of
ompi_info to see if threading was even enabled?
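For example, something like:

ompi_info | grep -i thread

should show whether the build has MPI_THREAD_MULTIPLE support compiled in.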
On Jun 14, 2013, at 3:12 PM, Hans Ekkehard Plesser
wrote:
>
> Hi!
>
> I use OpenMPI 1.7.1 from MacPorts (+threads +gcc47). A simple hello world
> program
On Jun 14, 2013, at 9:46 AM, Brian Budge wrote:
> I'd like to bump this question. I also wanted to ask: I've been
> searching the archives, and it seems that in past versions of OMPI,
> only MPI_THREAD_SINGLE was available from the default configuration of
> OMPI. It seems that as long as cal
On Feb 4, 2013, at 9:09 PM, Roland Schulz wrote:
>
>
>
> On Mon, Jan 28, 2013 at 9:20 PM, Brian Budge wrote:
> I believe that yes, you have to compile with --enable-mpi-thread-multiple to
> get anything other than SINGLE.
>
> I just tested that compiling with --enable-opal-multi-threads also makes
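For reference, a configure invocation of the kind being discussed might look
like this sketch (the install prefix is a placeholder, not from the thread):

./configure --prefix=$HOME/opt/openmpi-1.7.1 --enable-mpi-thread-multiple
make all install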
Hello. The following problem was solved by recompiling and reinstalling Open
MPI on each node.
Thank you for your cooperation.
-
I built a Beowulf-type PC cluster (CentOS release 6.4), and I am studying
MPI (Open MPI ver. 1.6.4). I tried the following sample, which uses
MPI_REDUCE
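The sample itself was cut from the digest; a minimal MPI_Reduce example of the
same kind (my sketch, not the poster's code) is:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, local, sum;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    local = rank + 1;  /* each rank contributes rank+1 */
    /* Sum every rank's contribution onto rank 0. */
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %d\n", sum);
    MPI_Finalize();
    return 0;
}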
Hi,
OpenMPI-1.7.1 is announced to support MPI-3 functionality such as non-blocking
collectives.
I have tested MPI_Iallgatherv on an 8-node cluster; however, I got bad
performance. MPI_Iallgatherv blocks the program for even longer
than the traditional MPI_Allgatherv.
Following is the test pseudo-code:
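The pseudo-code was truncated in the digest; the usual overlap pattern for
such a test presumably resembles this sketch (counts and buffer contents are
placeholders, not the poster's code):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendcount = 1024;  /* placeholder per-rank count */
    int *sendbuf = malloc(sendcount * sizeof(int));
    int *recvbuf = malloc((size_t)size * sendcount * sizeof(int));
    int *recvcounts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    for (int i = 0; i < sendcount; i++)
        sendbuf[i] = rank;
    for (int i = 0; i < size; i++) {
        recvcounts[i] = sendcount;
        displs[i] = i * sendcount;
    }

    /* Start the non-blocking allgatherv, then do independent work that
       is intended to overlap with the communication, then wait. */
    MPI_Request req;
    MPI_Iallgatherv(sendbuf, sendcount, MPI_INT,
                    recvbuf, recvcounts, displs, MPI_INT,
                    MPI_COMM_WORLD, &req);
    /* ... independent computation here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}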