> On Feb 23, 2015, at 10:20, Harald Servat wrote:
>
> Hello list,
>
> we have several questions regarding calls to collectives using
> inter-communicators. The man page for MPI_Bcast contains a note for the
> inter-communicator case; its text is reproduced below our questions.
>
> If an I is an i[...]
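For reference, the inter-communicator root semantics that man-page note refers to can be shown in a short C sketch. This is standard MPI behavior, not anything Open MPI specific: in the group that contains the root, the root itself passes MPI_ROOT and its group peers pass MPI_PROC_NULL, while every process in the other group passes the root's rank within the root's group. The two-group setup below is only an illustration and assumes at least two processes:

  /* Sketch: MPI_Bcast over an inter-communicator. World is split into
     two groups bridged by MPI_Intercomm_create; group 0 owns the root. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, data = 0;
      MPI_Comm split, intercomm;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Even world ranks form group 0, odd ranks group 1; the leaders
         are world ranks 0 and 1 respectively. */
      int color = rank % 2;
      MPI_Comm_split(MPI_COMM_WORLD, color, rank, &split);
      MPI_Intercomm_create(split, 0, MPI_COMM_WORLD,
                           color == 0 ? 1 : 0, 42, &intercomm);

      if (color == 0) {
          /* Root group: the root passes MPI_ROOT, everyone else in the
             same group passes MPI_PROC_NULL. */
          int lrank;
          MPI_Comm_rank(split, &lrank);
          data = 17;
          MPI_Bcast(&data, 1, MPI_INT,
                    lrank == 0 ? MPI_ROOT : MPI_PROC_NULL, intercomm);
      } else {
          /* Remote group: pass the root's rank *within the root group*. */
          MPI_Bcast(&data, 1, MPI_INT, 0, intercomm);
          printf("rank %d received %d\n", rank, data);
      }

      MPI_Comm_free(&intercomm);
      MPI_Comm_free(&split);
      MPI_Finalize();
      return 0;
  }

Run with, e.g., mpirun -np 4 ./a.out; each odd-rank process should print 17.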
I am setting up Open MPI 1.8.4. The first time I compiled, I had the following:
version=1.8.4.I1404211913
./configure \
--disable-vt \
--prefix=/apps/share/openmpi/$version \
--disable-shared \
--enable-static \
--with-verbs \
--enable-mpirun-prefix-by-default \
--with[...]
Hello,
I'm not sure if I have my OrangeFS (2.8.8) and Open MPI (1.8.4) set up correctly.
One short question:
Is it necessary to have OrangeFS mounted through the kernel module if I want
to use MPI-IO?
My simple MPI-IO hello world program doesn't work if I haven't mounted OrangeFS.
When I mount OrangeFS, it [...]
I recently upgraded my CentOS kernel and am now running
2.6.32-504.8.1.el6.x86_64; as part of this upgrade I also decided to upgrade my
intel/openmpi codes.
I upgraded from:
intel version 13.1.2, with openmpi 1.6.5
to:
intel 15.0.2, with openmpi 1.8.4
Previously a command of "mpirun -np NP -machinefile MACH ex[...]
The --disable-dlopen option actually snips out some code from the Open MPI code
base: it disables a feature (and the code that goes along with it).
Hence, it makes sense that the resulting library would be a different size:
there's actually less code compiled in it.
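For context, what --disable-dlopen removes is runtime plugin loading: by
default Open MPI's component framework can pull components in at startup with
dlopen(), and disabling that links everything directly into the library
instead. A generic sketch of the dlopen() pattern (this is not Open MPI's
actual code; the plugin and symbol names are made up):

  /* Generic runtime plugin loading via dlopen()/dlsym().
     Build with: cc plugin_host.c -ldl */
  #include <dlfcn.h>
  #include <stdio.h>

  int main(void)
  {
      /* "plugin.so" and "plugin_init" are hypothetical names. */
      void *handle = dlopen("./plugin.so", RTLD_NOW);
      if (!handle) {
          fprintf(stderr, "dlopen failed: %s\n", dlerror());
          return 1;
      }
      int (*init)(void) = (int (*)(void)) dlsym(handle, "plugin_init");
      if (init)
          init();   /* hand control to the loaded component */
      dlclose(handle);
      return 0;
  }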
> On Feb 24, 2015, at 2:45
Ah, now that’s a “feature” :-)
Seriously, it *is* actually a new feature of the 1.8 series. We now go out and
actually sense the number of cores on the system and set the number of slots to
that value unless you tell us otherwise. It was something people continually
nagged us about, and so we m[...]
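To state it concretely: an explicit slot count in a hostfile overrides that
auto-detection, e.g. (hostname hypothetical):

  node01 slots=4

which is the same format as the machine file quoted later in this thread.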
On 02/24/2015 02:00 PM, vithanousek wrote:
Hello,
I'm not sure if I have my OrangeFS (2.8.8) and Open MPI (1.8.4) set up correctly.
One short question:
Is it necessary to have OrangeFS mounted through the kernel module if I want
to use MPI-IO?
nope!
My simple MPI-IO hello world program doesn't work, [...]
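The way this works without a mount, assuming your Open MPI build includes
ROMIO's PVFS2 driver, is to give MPI_File_open a path carrying the "pvfs2:"
prefix; ROMIO then talks to the OrangeFS servers directly through the client
library instead of going through the kernel. A minimal sketch (the volume path
is a placeholder; note that MPI-IO file errors return by default rather than
abort, so the open is checked explicitly):

  /* Minimal MPI-IO hello world. The "pvfs2:" prefix asks ROMIO's
     PVFS2/OrangeFS driver to reach the servers directly. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_File fh;
      const char msg[] = "hello from MPI-IO\n";
      int rank, rc;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      rc = MPI_File_open(MPI_COMM_WORLD, "pvfs2:/pvfs2-vol/hello.txt",
                         MPI_MODE_CREATE | MPI_MODE_WRONLY,
                         MPI_INFO_NULL, &fh);
      if (rc != MPI_SUCCESS) {
          /* File errors return by default (MPI_ERRORS_RETURN), so a
             silent failure here is easy to mistake for "doesn't work". */
          fprintf(stderr, "rank %d: MPI_File_open failed\n", rank);
          MPI_Abort(MPI_COMM_WORLD, rc);
      }
      /* Each rank writes at a distinct offset so writes don't overlap. */
      MPI_File_write_at(fh, (MPI_Offset)rank * (sizeof(msg) - 1), msg,
                        (int)(sizeof(msg) - 1), MPI_CHAR, MPI_STATUS_IGNORE);
      MPI_File_close(&fh);
      MPI_Finalize();
      return 0;
  }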
Did you mean --disable-shared instead of --disable-dlopen?
And I am still confused. With "--disable-shared" I get a bigger static library
than without it?
thanks
From: users on behalf of Jeff Squyres (jsquyres)
Sent: Tuesday, February 24, 2015 3:5[...]
Thank you sir, that fixed the first problem, hopefully the second is as easy!
I still get the second error when trying to farm out on a “large” number of
processors:
machine file (“mach_burn_24s”):
tebow
tebow121 slots=24
tebow122 slots=24
tebow123 slots=24
tebow124 slots=24
tebow125 slots=24
te[...]
On Feb 24, 2015, at 4:09 PM, Tom Wurgler wrote:
>
> Did you mean --disable-shared instead of --disable-dlopen?
Ah, sorry -- my eyes read one thing, and my brain read another. :-)
> And I am still confused. With "--disable-shared" I get a bigger static
> library than without it?
I see that L[...]
I think the error may be due to a new architecture change (brought on perhaps
by the intel compilers?). Bad wording here, but I’m really stumbling. As I
add processors to the mpirun hostname call, at ~100 processors I get the
following error, which may be informative to more seasoned eyes. Ad[...]
I don't know the reasoning for requiring --with-cma to enable CMA but I
am looking at auto-detecting CMA instead of requiring Open MPI to be
configured with --with-cma. This will likely go into the 1.9 release
series and not 1.8.
-Nathan
On Thu, Feb 19, 2015 at 09:31:43PM -0500, Eric Chamberland wrote:
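For reference, CMA here is Linux Cross Memory Attach: the process_vm_readv()
and process_vm_writev() system calls (Linux 3.2 and later) that let one
process copy another process's memory in a single step, which Open MPI's
shared-memory transport can use to avoid double copies. A bare-bones sketch of
the read side (the pid and remote address are placeholders that a real
transport would exchange during connection setup):

  /* Single-copy read from a peer's address space via CMA. */
  #define _GNU_SOURCE
  #include <sys/uio.h>
  #include <unistd.h>

  ssize_t cma_read(pid_t pid, void *remote_addr, void *local_buf, size_t len)
  {
      struct iovec local  = { .iov_base = local_buf,   .iov_len = len };
      struct iovec remote = { .iov_base = remote_addr, .iov_len = len };
      /* Returns bytes copied, or -1 with errno set if e.g. the kernel
         lacks CMA support or ptrace permissions forbid the access. */
      return process_vm_readv(pid, &local, 1, &remote, 1, 0);
  }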
It looks to me like some of the nodes don’t have the required numactl packages
installed. Why don’t you try launching the job without binding, just to see if
everything works?
Just add "--bind-to none" to your cmd line and see if things work
> On Feb 24, 2015, at 2:21 PM, Galloway, Jack D wrote:
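A full command line would look something like this (executable name
hypothetical, machine file taken from earlier in the thread):

  mpirun -np 100 -machinefile mach_burn_24s --bind-to none ./a.out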