Joe,
Here is how I am ./configuring to build OpenMPI
setenv CC icc
setenv CXX icc
setenv F77 ifort
setenv FC ifort
./configure --prefix=/opt/openmpi-1.3
I am trying some different options, like setting CCAS, but I will not report
results in this post.
Sorry for the delay. Weather isn't exactly grea
I'm using R on a Ubuntu 8.10 machine, and, in particular, quite a lot of
papply calls to analyse data. I'm currently using the LAM implementation, as
it's the only one I've got to work properly. However, while it works fine on
one PC, it fails with the error message
Error in mpi.comm.spawn(slave =
Dear OpenMPI Developer,
I have a question regarding the mpi_leave_pinned parameter. Suppose I have a simple loop:
for (int i = 0; i < 100; i++)
    MPI_Reduce(a, b, ...);
My question is: if I set mpi_leave_pinned = 1, are the buffers pinned for the
lifetime of the entire process, or only within the for loop?
When the cycl
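For reference, the leave-pinned behavior can be toggled per run from the mpirun command line rather than a config file (a sketch; the executable name ./reduce_loop and process count are placeholders, not from the original post):

```shell
# Hypothetical invocation: enable leave-pinned registration for the whole run.
# The registration cache then persists across all 100 MPI_Reduce calls,
# rather than being torn down between iterations.
mpirun --mca mpi_leave_pinned 1 -np 4 ./reduce_loop
```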
I'm trying to use the compiler_args field in the wrappers script to deal
with 32 bit compiles on our cluster.
I'm using portland group compilers and use the following for 32 bit
builds: -tp p7 (I actually tried to use -tp x32 but it does not compile
correctly. I think it has something to do wit
Hi,
I am trying to run with Open MPI 1.3 on a cluster using PBS Pro:
pbs_version = PBSPro_9.2.0.81361
However, after compiling with these options:
../configure
--prefix=/home_nfs/parma/x86_64/UNITE/packages/openmpi/1.3-intel10.1-64bit-dynamic-threads
CC=/opt/intel/cce/10.1.015/bin/icc CXX=/op
Tony,
Well, I was hoping to get you a good data point, but instead I
reconfirmed the error you got:
tar -xzf openmpi-1.3.tar.gz
setenv CC icc
setenv CXX icc
setenv F77 ifort
setenv FC ifort
cd openmpi-1.3
setenv LD_LIBRARY_PATH /opt/intel/cc/10.1.012/lib
./configure --prefix=/scrat
Hi Ray,
If you look at the Intel Series 11 compilers, there are warnings about mixing various types of compilers, although the Series 11 C++ Release Notes do talk about Eclipse Integration and C/C++ Development tools. I think that I will get in touch with Intel before I do much more.
Can y
Hi All,
I'm doing some tests on a small cluster with gigabit and infiniband
interconnects with openmpi and I'm running into the same problem as
described in the following thread:
http://www.open-mpi.org/community/lists/users/2007/04/3082.php
Basically even if I run my test with:
mpirun --mca btl
Joe,
Intel recommends setting all of the compile flags, like CCFLAGS, to -O2.
Other than that, we are doing nothing different from what Intel recommends.
When I set CCAS=ias, ./configure does not make it through the assembler stage.
When I set CCAS=ias and CCASFLAGS= (I am setting it to nothin
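For anyone following along, the assembler overrides are passed to configure the same way as the compiler variables (a sketch in the csh style used above; the prefix path and empty flags value are illustrative, not a recommendation):

```shell
# Hypothetical: point Open MPI's configure at the Intel assembler
# and clear its flags before running configure.
setenv CCAS ias
setenv CCASFLAGS ""
./configure --prefix=/opt/openmpi-1.3
```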
It is quite likely that you have IPoIB on your system. In that case,
the TCP BTL will pickup that interface and use it.
If you have a specific interface you want to use, try -mca
btl_tcp_if_include eth0 (or whatever that interface is). This tells the
TCP BTL to only use the specified interfa
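A concrete invocation might look like the following (eth0, the process count, and the executable name ./my_app are placeholders for your setup):

```shell
# Restrict the TCP BTL to eth0 so it never picks up the IPoIB
# interface (ib0) when selecting interfaces for TCP traffic.
mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -np 2 ./my_app
```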
Tony,
I had tried just the compile line using -O0, but it did not help. The last assembly I actually wrote was for a Cray Y-MP, I think, so I don't intend on delving into that.
Sorry I couldn't be more help.
I do have access to what I call a Frankenstein Cluster of Itaniums (different
On a Torque system, your job is typically started on a backend node.
Thus, you need to have the Torque libraries installed on those nodes -
or else build OMPI static, as you found.
I have never tried --enable-mca-static, so I have no idea if this
works or what it actually does. If I want st
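By way of illustration, a fully static Open MPI build (so the backend nodes don't need the Torque libraries at run time) is usually configured along these lines; this is a sketch under those assumptions, not the poster's exact command:

```shell
# Build static libraries only; components get linked into libmpi
# instead of being loaded as shared-object plugins at run time.
./configure --prefix=/opt/openmpi-1.3 --enable-static --disable-shared
```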
I have not seen this before. I assume that for some reason, the shared
memory transport layer cannot create the file it uses for communicating
within a node. Open MPI then selects some other transport (TCP, openib)
to communicate within the node so the program runs fine.
The code has not c
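One way to test whether the shared-memory transport is the component that's failing is to force it from the command line (a sketch; ./my_app is a placeholder):

```shell
# Allow only the shared-memory BTL within the node; if this run fails
# where tcp,self succeeds, the sm setup (e.g. its backing file) is broken.
mpirun --mca btl sm,self -np 2 ./my_app
```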
Hi Ralph,
* Ralph Castain [01/29/2009 14:27]:
> It is quite likely that you have IPoIB on your system. In that case, the
> TCP BTL will pickup that interface and use it.
>
> If you have a specific interface you want to use, try -mca
> btl_tcp_if_include eth0 (or whatever that interface is). Thi
What does your machinefile look like? Just curious.
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Jan 29, 2009, at 3:18 PM, Daniel De Marco wrote:
Hi Ralph,
* Ralph Castain [01/29/2009 14:27]:
It is quite likely that you have IPoIB on yo
Daniel De Marco wrote:
Hi Ralph,
* Ralph Castain [01/29/2009 14:27]:
It is quite likely that you have IPoIB on your system. In that case, the
TCP BTL will pickup that interface and use it.
If you have a specific interface you want to use, try -mca
btl_tcp_if_include eth0 (or whatever that i
* Brock Palen [01/29/2009 15:24]:
> What does your machinefile look like? Just curious.
c0-0
c0-1
Daniel.
* Joe Landman [01/29/2009 15:32]:
> ifconfig ib0
> what does it respond with?
ib0: error fetching interface information: Device not found
Daniel.
Can you send the full output described here (including all network
setup stuff):
http://www.open-mpi.org/community/help/
On Jan 29, 2009, at 3:18 PM, Daniel De Marco wrote:
Hi Ralph,
* Ralph Castain [01/29/2009 14:27]:
It is quite likely that you have IPoIB on your system. In that
c
Jeff,
I put most of the info at:
http://www.bartol.udel.edu/~ddm/ompi_debug.tgz
The tar file contains the config.log, the ifconfig for the two nodes and
the output of ompi_info --all.
As I said I was running with:
mpirun --mca btl tcp,self --prefix /share/apps/openmpi-1.3/gcc_ifort/