; /usr/local/cuda. So you should either not set --with-cuda or set
> --with-cuda $CUDA_HOME (no include).
>
> Sylvain
> On 10/27/2016 03:23 PM, Craig Tierney wrote:
>
> Hello,
>
> I am trying to build OpenMPI 1.10.3 with CUDA but I am unable to build the
> library that will all
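A minimal configure sketch of Sylvain's advice, assuming CUDA is installed under /usr/local/cuda (the install prefix below is illustrative, and the ompi_info check follows the usual way of confirming CUDA support):

export CUDA_HOME=/usr/local/cuda
# Point --with-cuda at the CUDA root, not at $CUDA_HOME/include
./configure --prefix=/opt/openmpi/1.10.3-cuda --with-cuda=$CUDA_HOME
make -j8 && make install
# Verify CUDA support was actually compiled in
ompi_info --parsable --all | grep mpi_built_with_cuda_support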
45 AM, Martin Siegert wrote:
> On Fri, Nov 09, 2012 at 11:05:23AM -0700, Craig Tierney wrote:
>> I just built OpenMPI 1.6.3 with ifort 12.1.4. When running ifort I am
>> getting the warning:
>>
>> ifort: command line remark #10010: option '-pthread' is deprecated
I just built OpenMPI 1.6.3 with ifort 12.1.4. When running ifort I am
getting the warning:
ifort: command line remark #10010: option '-pthread' is deprecated and
will be removed in a future release. See '-help deprecated'.
Is -pthread really needed? Is there a configure option to change this
or
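One way to see where -pthread is coming from is to ask the wrapper compiler what it adds; a short sketch (the exact flag set depends on how the library was configured):

$ mpif90 -showme:compile   # extra flags used when compiling
$ mpif90 -showme:link      # extra flags used when linking; -pthread typically shows up here

The wrapper flags can also be overridden at configure time (for example via --with-wrapper-fcflags), but whether dropping -pthread is safe depends on how the rest of the library was built.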
Jody Klymak wrote:
>
> On Mar 30, 2010, at 11:12 AM, Cristobal Navarro wrote:
>
>> I just have some questions,
>> Torque requires Moab, but from what I've read on the site you have to
>> buy Moab, right?
>
> I am pretty sure you can download torque w/o moab. I do not use moab,
> which I think i
I am trying to run WRF on 1024 cores with OpenMPI 1.3.3 and
1.4. I can get the code to run with 512 cores, but it crashes
at startup on 1024 cores. I am getting the following error message:
[n172][[43536,1],0][connect/btl_openib_connect_oob.c:463:qp_create_one] error
creating qp errno says Can
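qp_create_one failures that only appear at higher process counts are often a locked/registered-memory limit on the compute nodes rather than an application problem; a quick hedged check (the limits.conf entries are illustrative of what the Open MPI registered-memory FAQ recommends):

$ ulimit -l        # run on a compute node; should report "unlimited"
$ grep memlock /etc/security/limits.conf
# expected entries:
# *  soft  memlock  unlimited
# *  hard  memlock  unlimited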
To use tuned collectives, is adding --mca coll tuned all I have to do?
I am trying to run with:
# mpirun -np 8 --mca coll tuned --mca orte_base_help_aggregate 0 ./wrf.exe
But all the processes fail with the following message:
--
"--mca
coll_hierarch_priority 100 --mca coll_sm_priority 100". We would be
very interested in any results you get (failures, improvements,
non-improvements).
Adding these two options causes the code to segfault at startup.
Craig
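One possible reason the plain "--mca coll tuned" run aborts is that restricting the coll framework to a single component leaves out pieces every job needs (such as the "self" component used for MPI_COMM_SELF); a hedged alternative is to raise tuned's priority instead of excluding everything else:

$ mpirun -np 8 --mca coll_tuned_priority 100 ./wrf.exe
# or, if listing components explicitly, keep self and basic in the set:
$ mpirun -np 8 --mca coll self,basic,tuned ./wrf.exe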
t work'
in most cases.
I will try the above options and get back to you.
Craig
thanks,
--td
Message: 4
Date: Thu, 06 Aug 2009 17:03:08 -0600
From: Craig Tierney
Subject: Re: [OMPI users] Performance question about OpenMPI and
MVAPICH2 on IB
To: Open MPI Users
Message-ID: <4
>
I will look into this. Thanks for the ideas.
Craig
> Gus Correa
> -
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> ----
, the OpenMPI version is not scaling as well. Any
ideas on why that might be the case?
Thanks,
Craig
Craig Tierney wrote:
> I am running openmpi-1.3.3 on my cluster which is using
> OFED-1.4.1 for Infiniband support. I am comparing performance
> between this version of OpenMPI and Mvapic
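One knob often suggested for OpenMPI 1.3.x in comparisons against MVAPICH2 (which pins processes to cores by default) is processor affinity; a hedged sketch, not something confirmed to help this particular WRF case:

$ mpirun -np 1024 --mca mpi_paffinity_alone 1 ./wrf.exe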
Since ompi_info isn't complaining and the libraries are available,
I am thinking that it isn't a problem. I could be wrong.
Thanks,
Craig
--
Craig Tierney (craig.tier...@noaa.gov)
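To double-check that the openib support is really there, ompi_info can list it directly; a short sketch for the 1.3.x series:

$ ompi_info | grep btl                 # "MCA btl: openib" should appear in the output
$ ompi_info --param btl openib | less  # dumps the openib BTL's tunable parameters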
ework was renamed "plm". Secondly, the gridengine plm was
> folded into the rsh/ssh one.
>
Rolf,
Thanks for the quick reply. That solved the problem.
Craig
> A few more details at
> http://www.open-mpi.org/faq/?category=running#run-n1ge-or-sge
>
> Rolf
>
I have built OpenMPI 1.3.3 without support for SGE.
I just want to launch jobs with loose integration right
now.
Here is how I configured it:
./configure CC=pgcc CXX=pgCC F77=pgf90 F90=pgf90 FC=pgf90
--prefix=/opt/openmpi/1.3.3-pgi --without-sge
--enable-io-romio --with-openib=/opt/hjet/ofed/1
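For loose integration with a build configured --without-sge, the usual pattern is to turn SGE's $PE_HOSTFILE into a plain machinefile and let mpirun start orted over ssh; a sketch of a job-script fragment (the PE name and application are illustrative):

#!/bin/bash
#$ -pe mpi 16
#$ -cwd
# First two columns of $PE_HOSTFILE are hostname and slot count
awk '{print $1" slots="$2}' $PE_HOSTFILE > $TMPDIR/machines
/opt/openmpi/1.3.3-pgi/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./wrf.exe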
Reuti wrote:
On 14.10.2008 at 23:39, Craig Tierney wrote:
Reuti wrote:
On 14.10.2008 at 23:18, Craig Tierney wrote:
Ralph Castain wrote:
I -think- there is...at least here, it does seem to behave that way
on our systems. Not sure if there is something done locally to make
it work.
Also, though, I have noted that LD_LIBRARY_PATH does seem to be
Ralph Castain wrote:
You might consider using something like "module" - we use that
system for exactly this reason. Works quite well and solves the
multiple compiler issue.
Ralph
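A sketch of the "module" approach Ralph describes, with hypothetical modulefile names, so each compiler's OpenMPI and its runtime libraries land on PATH and LD_LIBRARY_PATH consistently on every node:

# in the job script (module names are examples only)
module load intel
module load openmpi-intel
mpirun -np 16 ./a.out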
On Oct 14, 2008, at 12:56 PM, Craig Tierney wrote:
George Bosilca wrote:
The option to expand the remote LD_LIBRARY_PATH that is used in
the current environment is not passed to orted for its own use (it is only
passed along for orted to hand to the launched executable).
Craig
Ralph
On Oct 14, 2008, at 12:56 PM, Craig Tierney wrote:
George Bosilca wrote:
The option to expand the remote LD_LIBRARY_PATH, in such a way that
I would see that this would further support the
passing of LD_LIBRARY_PATH so the OpenMPI tools can support a heterogeneous
configuration as you described.
Thanks,
Craig
On Oct 14, 2008, at 12:11 PM, Craig Tierney wrote:
George Bosilca wrote:
Craig,
This is a problem with the Intel libraries a
er to just build the OpenMPI tools statically? We also
use other compilers (PGI, Lahey) so I need a solution that works for
all of them.
Thanks,
Craig
On Oct 10, 2008, at 6:17 PM, Craig Tierney wrote:
I am having problems launching openmpi jobs on my system. I support
multiple versions
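On the "build the tools statically" idea: one hedged option with the Intel compilers is to link their runtime libraries statically into the OpenMPI executables, so orted no longer needs libintlc at run time. The flag below is the standard Intel one, but treat the whole line as a sketch (prefix is illustrative):

./configure CC=icc CXX=icpc F77=ifort FC=ifort \
            LDFLAGS="-static-intel" \
            --prefix=/opt/openmpi/intel-static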
mpirun launches
orted on each node. orted will pass LD_LIBRARY_PATH to the specified
application. mpirun does not pass LD_LIBRARY_PATH to orted, so orted itself
doesn't launch.
Craig
--
Craig Tierney (craig.tier...@noaa.gov)
/bin/orted: error while loading shared libraries: libintlc.so.5: cannot open shared object file: No
such file or directory
How do others solve this problem?
Thanks,
Craig
--
Craig Tierney (craig.tier...@noaa.gov)
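The symptom above is orted itself failing to find the Intel runtime, which is slightly different from the application failing; a hedged sketch of the two usual workarounds (paths are illustrative):

# 1) Make the remote login shells set the path before orted starts,
#    e.g. in ~/.bashrc on the compute nodes (or via a module load there):
export LD_LIBRARY_PATH=/opt/intel/lib/intel64:$LD_LIBRARY_PATH

# 2) For the application processes themselves, mpirun can forward the
#    caller's value explicitly:
mpirun -x LD_LIBRARY_PATH -np 16 -machinefile machines ./a.out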
once
for each compiler: the C compiler comes at the beginning, all other ones
near the end of the script.
Cheers,
Ralf
--
Craig Tierney (craig.tier...@noaa.gov)
.
What is the correct way for the configure process
to know that, if the compiler is lf95, it should use
--shared when compiling objects?
Thanks,
Craig
--
Craig Tierney (craig.tier...@noaa.gov)
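Configure delegates the shared-object flags to libtool, so a hedged way to check whether lf95's --shared ends up being used is to inspect the libtool script that configure generates (the prefix below is illustrative; pic_flag and archive_cmds are libtool's own variable names):

$ ./configure FC=lf95 F77=lf95 --prefix=/opt/openmpi/lahey
$ grep -E 'pic_flag|archive_cmds' ./libtool | head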
. My code does complete
successfully because both nodes are connected by both meshes.
My question is, how can I tell mpirun that I only want to use one
of the ports? I specifically want to use either port 1 or port 2, but
not bond both together.
Can this be done?
Thanks,
Craig
--
Craig Tierney (
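Selecting a single HCA port can be done through the openib BTL's include/exclude parameters; a hedged example, assuming the adapter shows up as mthca0 in ibv_devinfo (device and port numbers are illustrative):

# use only port 1 of mthca0
mpirun -np 16 --mca btl_openib_if_include mthca0:1 ./a.out

# or exclude port 2 instead
mpirun -np 16 --mca btl_openib_if_exclude mthca0:2 ./a.out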