I found the problem. ROMIO added a new flag to detect external32. This
flag was never explicitly set by ROMIO itself; instead it was zeroed as a
side effect of calloc in the MPICH ROMIO glue. The Open MPI ROMIO glue
still used malloc, leaving the flag uninitialized. I fixed this on trunk
and cmr'd it to 1.7.4.
-Nathan
On Fri, Jan 17, 2014 at 01:40:43PM -0800
Thanks.
For what it is worth, it looks like I now have a successful build of
open-mpi plus hdf5, with the caveat (see the pasted note below from the
HDF5 support desk) about hdf5's make check-p.
For anyone else trying to get hdf5 going with openmpi on mavericks, here
are the configure combinations
We did update ROMIO at some point in there, so it is possible this is a ROMIO
bug that we have picked up. I've asked someone to check upstream about it.
On Jan 17, 2014, at 12:02 PM, Ronald Cohen wrote:
> Sorry, too many entries in this thread, I guess. My general goal is to get a
> working
Sorry, too many entries in this thread, I guess. My general goal is to get
a working parallel hdf5 with openmpi on Mac OS X Mavericks. At one point
in the saga I had romio disabled, which naturally doesn't work for hdf5
(which is trying to read/write files in parallel). So the hdf5 tests would
o
Can you specify exactly which issue you're referring to?
- test failing when you had ROMIO disabled
- test (sometimes) failing when you had ROMIO disabled
- compiling / linking issues
?
On Jan 17, 2014, at 1:50 PM, Ronald Cohen wrote:
> Hello Ralph and others, I just got the following back from
Hello Ralph and others, I just got the following back from the HDF-5
support group, suggesting an ompi bug. So I should either try 1.7.3 or a
recent nightly 1.7.4. Will likely opt for 1.7.3, but hopefully someone
at openmpi can look at the problem for 1.7.4. In short, the challenge is
to get
I figured that.
On Fri, Jan 17, 2014 at 10:26 AM, Jeff Squyres (jsquyres) <
jsquy...@cisco.com> wrote:
> On Jan 17, 2014, at 1:17 PM, Jeff Squyres (jsquyres)
> wrote:
>
> > 3. --enable-shared is *not* implied by --enable-static. So if you
> --enable-static without --disable-shared, you're building both libmpi.so and
On Jan 17, 2014, at 1:17 PM, Jeff Squyres (jsquyres) wrote:
> 3. --enable-shared is *not* implied by --enable-static. So if you
> --enable-static without --disable-shared, you're building both libmpi.so and
> libmpi.a (both of which will have the plugins slurped up -- no DSOs). Which
> is no
Very helpful, thanks.
On Fri, Jan 17, 2014 at 10:17 AM, Jeff Squyres (jsquyres) <
jsquy...@cisco.com> wrote:
> Ok, thanks. A few notes:
>
> 1. --enable-static implies --disable-dlopen. Specifically:
> --enable-static does two things:
>
> 1a. Build libmpi.a (and friends)
> 1b. Slurp all the OMPI plugins into libmpi.a (and friends)
Good suggestions, and thanks! But since I haven't been able to get the
problem to recur and I'm stuck now on other issues related to getting
parallel hdf5 to pass its make check, I will likely not follow up on this
particular (non-recurring) issue (except maybe I should forward your
comments to the HDF5 team).
Ok, thanks. A few notes:
1. --enable-static implies --disable-dlopen. Specifically: --enable-static
does two things:
1a. Build libmpi.a (and friends)
1b. Slurp all the OMPI plugins into libmpi.a (and friends), vs. building them
as standalone dynamic shared object (DSO) files (this is half of
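Jeff's notes on flag interaction can be summarized as a pair of configure invocations (a sketch; run from the Open MPI source tree):

```shell
# Plugins slurped into both libmpi.a AND libmpi.so (no standalone DSOs);
# both static and shared libraries are built:
./configure --enable-static

# Static library only; --disable-shared skips libmpi.so:
./configure --enable-static --disable-shared

# Note: --enable-static implies --disable-dlopen, so in either case the
# OMPI plugins are not built as separate dynamic shared object files.
```

The practical consequence he warns about: passing --enable-static alone does not turn off the shared library.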
I'm looking at your code, and I'm not actually an expert in the MPI IO stuff...
but do you have a race condition in the file close+delete and the open with
EXCL?
I'm asking because I don't know offhand if the file close+delete is
supposed to be collective and not return until the file is gu
Thanks, I've just gotten an email with some suggestions (and promise of
more help) from the HDF5 support team. I will report back here, as it may
be of interest to others trying to build hdf5 on mavericks.
On Fri, Jan 17, 2014 at 9:08 AM, Ralph Castain wrote:
> Afraid I have no idea, but hopefully
Afraid I have no idea, but hopefully someone else here with experience with
HDF5 can chime in?
On Jan 17, 2014, at 9:03 AM, Ronald Cohen wrote:
> Still a timely response, thank you. The particular problem I noted hasn't
> recurred; for reasons I will explain shortly I had to rebuild openmpi
Still a timely response, thank you. The particular problem I noted
hasn't recurred; for reasons I will explain shortly I had to rebuild
openmpi again, and this time Sample_mpio.c compiled and ran successfully
from the start.
But now my problem is trying to get parallel HDF5 to run. In my first
sorry for delayed response - just getting back from travel. I don't know why
you would get that behavior other than a race condition. Afraid that code path
is foreign to me, but perhaps one of the folks in the MPI-IO area can respond
On Jan 15, 2014, at 4:26 PM, Ronald Cohen wrote:
> Update:
Update: I reconfigured with enable_io_romio=yes, and this time -- mostly --
the test using Sample_mpio.c passes. Oddly the very first time I tried I
got errors:
% mpirun -np 2 sampleio
Proc 1: hostname=Ron-Cohen-MBP.local
Testing simple C MPIO program with 2 processes accessing file ./mpitest.d
Aha. I guess I didn't know what the io-romio option does. If you look
at my config.log you will see my configure line included
--disable-io-romio. Guess I should change --disable to --enable.
You seem to imply that the nightly build is stable enough that I should
probably switch to that rat
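For reference, the flag flip being discussed looks like this (the prefix path is a placeholder):

```shell
# Parallel HDF5 needs MPI-IO, so ROMIO must be enabled in Open MPI:
./configure --enable-io-romio --prefix=/opt/openmpi
# (--disable-io-romio removes MPI-IO support, which is why the HDF5
#  parallel tests cannot work against such a build)
```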
Oh, a word of caution on those config params - you might need to check to
ensure I don't disable romio in them. I don't normally build it as I don't
use it. Since that is what you are trying to use, just change the "no" to
"yes" (or delete that line altogether) and it will build.
On Wed, Jan 15,
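Concretely, the platform-file change Ralph describes would be along these lines (the exact contents of his platform file are not shown here, so the "no" line is illustrative):

```shell
# In contrib/platform/intel/bend/mac, change
#   enable_io_romio=no
# to
enable_io_romio=yes
# (or delete the line), then rebuild:
./configure --with-platform=intel/bend/mac
```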
You can find my configure options in the OMPI distribution at
contrib/platform/intel/bend/mac. You are welcome to use them - just
configure --with-platform=intel/bend/mac
I work on the developer's trunk, of course, but also run the head of the
1.7.4 branch (essentially the nightly tarball) on a fairly regular basis.
Ralph,
I just sent out another post with the c file attached.
If you can get that to work, and even if you can't can you tell me what
configure options you use, and what version of open-mpi? Thanks.
Ron
On Wed, Jan 15, 2014 at 10:36 AM, Ralph Castain wrote:
> BTW: could you send me your sample test code?
I neglected in my earlier post to attach the small C code that the hdf5
folks supplied; it is attached here.
On Wed, Jan 15, 2014 at 10:04 AM, Ronald Cohen wrote:
> I have been struggling trying to get a usable build of openmpi on Mac OSX
> Mavericks (10.9.1). I can get openmpi to configure
BTW: could you send me your sample test code?
On Wed, Jan 15, 2014 at 10:34 AM, Ralph Castain wrote:
> I regularly build on Mavericks and run without problem, though I haven't
> tried a parallel IO app. I'll give yours a try later, when I get back to my
> Mac.
>
>
>
> On Wed, Jan 15, 2014 at 10
I regularly build on Mavericks and run without problem, though I haven't
tried a parallel IO app. I'll give yours a try later, when I get back to my
Mac.
On Wed, Jan 15, 2014 at 10:04 AM, Ronald Cohen wrote:
> I have been struggling trying to get a usable build of openmpi on Mac OSX
> Mavericks