Sam,
The openmpi-devel-1.7.3-1.fc20 rpm provides
/usr/lib64/openmpi/bin/mpicc
This is the mpicc you want to use to build mpi4py.
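For what it's worth, here is a minimal sketch of pointing the mpi4py build at that wrapper (assuming a pip-based install; MPICC is the environment variable mpi4py's setup script checks for the compiler wrapper, and --mpicc does the same for a source build):
$ env MPICC=/usr/lib64/openmpi/bin/mpicc pip install --user mpi4py
or, from an unpacked mpi4py source tree:
$ python setup.py build --mpicc=/usr/lib64/openmpi/bin/mpicc
$ python setup.py install --user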
Of course, you can download and install a recent Open MPI version from
open-mpi.org.
If you decide to go this way, I recommend you download 1.10.3 from
https://www.o
"Mahdi, Sam" writes:
> To Dave, from the installation guide I found, it seemed I couldn't just
> directly download it from the package list, but rather I'd need to use the
> mpicc wrapper to compile and install.
That makes no sense to a maintainer of some openmpi Fedora packages, and
I actually ha
Greetings, Sean.
Yes, you are correct - when you build from the tarball, you should not need the
GNU autotools.
When tarball builds fail like this, it *usually* means that you are building on
a network filesystem, and the time is not well synchronized between the machine
on which you are building and the file server.
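One quick way to sanity-check the timestamps make cares about (just a sketch; the file names are the usual Autotools outputs shipped in the tarball):
$ cd openmpi-2.0.0
$ ls -l --time-style=full-iso configure.ac Makefile.am aclocal.m4 configure Makefile.in
The hand-written files (configure.ac, Makefile.am) should be older than, or the same age as, the generated ones (aclocal.m4, configure, Makefile.in); if they show up as newer, make will try to re-run the GNU autotools.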
Greetings, Jeff.
Sure, I could see that. But I'm trying to run on a locally mounted
filesystem in this case. I may need to run make in debug mode and see what
it thinks is out of date. See if you guys can help me track down the
dependency problem.
-Sean
--
Sean Ahern
Computational Engineering International
That's odd -- I've never seen this kind of problem happen on a locally-mounted
filesystem.
Just to make sure: you're *not* running autogen.pl, right? You're just
basically doing this:
-----
$ tar xf openmpi-2.0.0.tar.bz2
$ cd openmpi-2.0.0
$ ./configure ...
$ make ...
-----
Right?
Yep, that's it.
-Sean
--
Sean Ahern
Computational Engineering International
919-363-0883
On Thu, Sep 1, 2016 at 1:04 PM, Jeff Squyres (jsquyres) wrote:
> That's odd -- I've never seen this kind of problem happen on a
> locally-mounted filesystem.
>
> Just to make sure: you're *not* running autogen.pl, right?
Ok, weird. Try running the process again (untar, configure, make) but use
"make -d" and capture the entire output so that you can see what file(s)
is(are) triggering Automake to invoke aclocal during the build (it will be a
*LOT* of output).
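Something along these lines should narrow it down (a sketch, assuming GNU make; the "is newer than target" lines in the debug output are the ones that show which prerequisite is forcing the rebuild):
$ make -d 2>&1 | tee make-debug.log
$ grep -n "is newer than target" make-debug.log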
> On Sep 1, 2016, at 1:20 PM, Sean Ahern wrote:
>
Okay, I think I figured it out. The short answer is that version control
systems can mess up relative file system timestamps.
While I was basically doing:
tar xzf openmpi-2.0.0.tar.gz
cd openmpi-2.0.0
./configure …
make
In actuality, I stored off the source in our "third party" repo before I built it.
On Sep 1, 2016, at 2:05 PM, Sean Ahern wrote:
>
> In actuality, I stored off the source in our "third party" repo before I
> built it.
>
> svn add openmpi-2.0.0
> svn commit
>
> When I grabbed that source back on the machine I wanted to build on, the
> relative timestamps weren't the same as
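In case it helps: one workaround after a checkout like that is to reset the generated files' timestamps so make considers them up to date (just a sketch; the exact file list is assumed from a typical Autotools tree, and the touch order is what matters):
$ cd openmpi-2.0.0
$ find . -name Makefile.am -exec touch {} +   # hand-written inputs first
$ touch configure.ac
$ touch aclocal.m4                            # then the aclocal output
$ touch configure                             # then the generated configure script
$ find . -name Makefile.in -exec touch {} +   # and finally the generated Makefile.in files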
On Thu, Sep 1, 2016 at 3:14 PM, Jeff Squyres (jsquyres) wrote:
>
> FWIW, we usually store the tarballs themselves in VCSs if we want to
> preserve specific third-party tarballs. It's a little gross (i.e., storing
> a big binary tarball in a VCS), but it works. Depends on your tolerance
> level
Hola,
I'm new to MPI and OpenMPI. Relatively new to HPC as well.
I've just installed a SLURM cluster and added OpenMPI for the users to take
advantage of.
I'm just discovering that I have missed a vital part - the networking.
I'm looking over the networking options and from what I can tell we o
Hi,
FCoE is for storage, Ethernet is for the network.
I assume you can ssh into your nodes, which means you have a TCP/IP network,
and it is up and running.
I do not know the details of Cisco hardware, but you might be able to
use usnic (native btl or via libfabric) instead of the plain TCP/IP network.
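A couple of commands can help check what is available and force a given transport (just a sketch; the exact btl component names depend on your Open MPI version and build, and ./hello stands in for your own MPI program):
$ ompi_info | grep btl                           # list the BTL components in this build
$ mpirun --mca btl usnic,sm,self -np 4 ./hello   # try the usnic transport
$ mpirun --mca btl tcp,sm,self -np 4 ./hello     # fall back to plain TCP/IP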
Hello Lachlan. I think Jeff Squyres will be along in a short while! He is,
of course, the expert on Cisco.
In the meantime a quick Google turns up:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/usnic/c/deployment/2_0_X/b_Cisco_usNIC_Deployment_Guide_For_Standalone_C-SeriesServers.html