I think Maxime's suggestion is sane and reasonable. Just in case you're taking
a ha'penny's worth from the groundlings: I would prefer not to have capability
included that we won't use.
-- bennet
On Wed, May 14, 2014 at 7:43 PM, Maxime Boissonneault wrote:
> For the scheduler issue, I would be happy with something like, if I ask for
> support for X, disable support for Y, Z and W. I am assuming that very rarely
> will someone use more than one scheduler.
Good point - will see what we can do about it.
On May 14, 2014, at 4:43 PM, Maxime Boissonneault wrote:
> For the scheduler issue, I would be happy with something like, if I ask for
> support for X, disable support for Y, Z and W. I am assuming that very rarely
> will someone use more than one scheduler.
For the scheduler issue, I would be happy with something like, if I ask
for support for X, disable support for Y, Z and W. I am assuming that
very rarely will someone use more than one scheduler.
Maxime
On 2014-05-14 19:09, Ralph Castain wrote:
Jeff and I have talked about this and are approaching a compromise. [...]
Just sniffing around the web, I found that this is a problem caused by newer
versions of gcc. One reporter stated that they resolved the problem by adding
"-fgnu89-inline" to their configuration:
"add the compiler flag "-fgnu89-inline" (because of an issue where old glibc
libraries aren't compa
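If that is indeed the cause, one way to try the workaround is to pass the flag
through CFLAGS when configuring Open MPI. A minimal sketch, with the install
prefix as a placeholder:

  ./configure CFLAGS="-fgnu89-inline" --prefix=/opt/openmpi-1.8.1
  make all install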
Jeff and I have talked about this and are approaching a compromise. Still more
thinking to do, perhaps providing new configure options to "only build what I
ask for" and/or a tool to support a menu-driven selection of what to build - as
opposed to today's "build everything you don't tell me to ..."
On May 14, 2014, at 3:21 PM, Jeff Squyres (jsquyres) wrote:
> On May 14, 2014, at 6:09 PM, Ralph Castain wrote:
>
>> FWIW: I believe we no longer build the slurm support by default, though I'd
>> have to check to be sure. The intent is definitely not to do so.
>
> The srun-based support builds by default. I like it that way. :-)
On May 14, 2014, at 6:09 PM, Ralph Castain wrote:
> FWIW: I believe we no longer build the slurm support by default, though I'd
> have to check to be sure. The intent is definitely not to do so.
The srun-based support builds by default. I like it that way. :-)
PMI-based support is a different story ...
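For reference, PMI-based support is something one asks for explicitly at
configure time. A hedged sketch, assuming SLURM's PMI headers and libraries are
in default system locations (adjust or pass a path if they are not):

  ./configure --with-slurm --with-pmi

The srun-based launch support mentioned above needs no extra flag.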
Indeed, a quick review indicates that the new policy for scheduler support was
not uniformly applied. I'll update it.
To reiterate: we will only build support for a scheduler if the user
specifically requests it. We did this because we are increasingly seeing
distros include header support for schedulers that the user may not actually be running.
FWIW: I believe we no longer build the slurm support by default, though I'd
have to check to be sure. The intent is definitely not to do so.
The plan we adjusted to a while back was to *only* build support for schedulers
upon request. Can't swear that they are all correctly updated, but that was the intent.
Here's a bit of our rationale, from the README file:
Note that for many of Open MPI's --with-<foo> options, Open MPI will,
by default, search for header files and/or libraries for <foo>. If
the relevant files are found, Open MPI will build support for <foo>;
if they are not found, Open MPI will skip building support for <foo>.
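As an illustration of the request-only behaviour being discussed, a Torque-only
build could be configured along these lines; the paths are placeholders, and
--without-loadleveler applies only if your source tree carries that component:

  ./configure --prefix=/opt/openmpi-1.8.1 --with-tm=/opt/torque \
              --without-slurm --without-loadleveler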
Hi Gus,
Oh, I know that; what I am referring to is that slurm and loadleveler
support are enabled by default, and it seems that if we're using
Torque/Moab, we have no use for slurm and loadleveler support.
My point is not that it is hard to compile it with Torque support; my
point is that it is built even when we have no use for it.
I am having the same compile time failure mentioned on the devel group here:
http://www.open-mpi.org/community/lists/devel/2014/02/14221.php
In short, the compilation of romio fails with errors about redefinition of
lstat64; I am attaching the requested configuration and make outputs. Can
anyone help?
On 05/14/2014 04:25 PM, Maxime Boissonneault wrote:
Hi,
I was compiling OpenMPI 1.8.1 today and I noticed that pretty much every
single scheduler has its support enabled by default at configure (except
the one I need, which is Torque). Is there a reason for that? Why not
have a single scheduler enabled and require that it be specified at configure time?
Hi,
I was compiling OpenMPI 1.8.1 today and I noticed that pretty much every
single scheduler has its support enabled by default at configure (except
the one I need, which is Torque). Is there a reason for that? Why not
have a single scheduler enabled and require that it be specified at configure
time?
Just committed a potential fix to the trunk - please let me know if it worked
for you
On May 14, 2014, at 11:44 AM, Siegmar Gross wrote:
> Hi Ralph,
>
>> Hmmm...well, that's an interesting naming scheme :-)
>>
>> Try adding "-mca oob_base_verbose 10 --report-uri -" on your cmd line
>> and let's see what it thinks is happening
Hi Ralph,
> Hmmm...well, that's an interesting naming scheme :-)
>
> Try adding "-mca oob_base_verbose 10 --report-uri -" on your cmd line
> and let's see what it thinks is happening
tyr fd1026 105 mpiexec -np 3 --host tyr,sunpc1,linpc1 --mca oob_base_verbose 10
--report-uri - hostname
[tyr.in
Hmmm...well, that's an interesting naming scheme :-)
Try adding "-mca oob_base_verbose 10 --report-uri -" on your cmd line and let's
see what it thinks is happening
On May 14, 2014, at 9:06 AM, Siegmar Gross wrote:
> Hi Ralph,
>
>> What are the interfaces on these machines?
>
> tyr fd1026
Hi Ralph,
> What are the interfaces on these machines?
tyr fd1026 111 ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 193.174.24.39 netmask ffffffe0 broadcast 193.174.24.63
tyr fd1026 112
tyr fd1026
Our initial thinking was the first half of June, but that is subject to change
depending on the severity of reported errors. FWIW: I don't believe we made any
romio changes between 1.8.1 and the current 1.8.2 state, so using 1.8.1 should
be a valid test.
On May 14, 2014, at 8:16 AM, Bennet Fauber wrote:
Is there an ETA for 1.8.2 general release instead of snapshot?
Thanks, -- bennet
On Wed, May 14, 2014 at 10:17 AM, Ralph Castain wrote:
> You might give it a try with 1.8.1 or the nightly snapshot from 1.8.2 - we
> updated ROMIO since the 1.6 series, and whatever fix is required may be in
> the newer version
What are the interfaces on these machines?
On May 14, 2014, at 7:45 AM, Siegmar Gross wrote:
> Hi,
>
> I just installed openmpi-1.8.2a1r31742 on my machines (Solaris 10
> Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with
> Sun C5.12 and still have the following problem.
>
> tyr
Hi,
I just installed openmpi-1.8.2a1r31742 on my machines (Solaris 10
Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with
Sun C5.12 and still have the following problem.
tyr fd1026 102 which mpiexec
/usr/local/openmpi-1.8.2_64_cc/bin/mpiexec
tyr fd1026 103 mpiexec -np 3 --host tyr,sunpc1,linpc1 ...
Hi,
I just installed openmpi-1.9a1r31750 on my machines (Solaris 10
Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1 x86_64) with
Sun C5.12 and still have the following problem.
tyr fd1026 102 which mpiexec
/usr/local/openmpi-1.9_64_cc/bin/mpiexec
tyr fd1026 103 mpiexec -np 3 --host tyr,sunpc1,linpc1 ...
We also fixed a similar bug in OMPIO roughly one year back, so I would
hope that it should work with OMPIO as well.
Thanks
Edgar
On 5/14/2014 9:17 AM, Ralph Castain wrote:
> You might give it a try with 1.8.1 or the nightly snapshot from 1.8.2 - we
> updated ROMIO since the 1.6 series, and whatever fix is required may be in
> the newer version
You might give it a try with 1.8.1 or the nightly snapshot from 1.8.2 - we
updated ROMIO since the 1.6 series, and whatever fix is required may be in the
newer version
On May 14, 2014, at 6:52 AM, CANELA-XANDRI Oriol wrote:
> Hello,
>
> I am using MPI IO for writing/reading a block cyclic distribution matrix into
> a file.
Hello,
I am using MPI IO for writing/reading a block cyclic distribution matrix into
a file.
It works fine except when there are some MPI processes with no data (i.e. when
the matrix is small enough, or the block size is big enough, that some processes
in the grid do not have any matrix block). [...]
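For reference, a minimal sketch of the zero-data pattern being described (the
file name and the toy decomposition are invented): ranks that own no block
still join the collective MPI-IO calls, just with count = 0.

  /* Sketch: every rank participates in the collectives; empty ranks pass count 0. */
  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Toy decomposition: only even ranks own a block of 4 doubles;
         odd ranks own nothing but must still make the collective calls. */
      int count = (rank % 2 == 0) ? 4 : 0;
      double *block = malloc((count > 0 ? count : 1) * sizeof(double));
      for (int i = 0; i < count; i++)
          block[i] = rank + i / 10.0;

      MPI_File fh;
      MPI_File_open(MPI_COMM_WORLD, "matrix.dat",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

      /* Even rank r writes at element offset (r/2)*4; the displacement on
         empty ranks is irrelevant because they write zero elements. */
      MPI_Offset disp = (MPI_Offset)(rank / 2) * 4 * (MPI_Offset)sizeof(double);
      MPI_File_set_view(fh, disp, MPI_DOUBLE, MPI_DOUBLE, "native", MPI_INFO_NULL);
      MPI_File_write_all(fh, block, count, MPI_DOUBLE, MPI_STATUS_IGNORE);

      MPI_File_close(&fh);
      free(block);
      MPI_Finalize();
      return 0;
  }

A real block-cyclic layout would normally build the file view with
MPI_Type_create_darray rather than a plain displacement, but the zero-count
participation is the part that exercises the case described above.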
What version are you talking about?
On May 13, 2014, at 11:13 PM, Hamed Mortazavi wrote:
> Hi all,
>
> in make check for openmpi on a mac I see the following error message; has
> anybody ever run into this error? Any solutions?
>
> Best,
>
> Hamed,
> raw extraction in 1 microsec
>
> Example 3.26 type1 correct
Hi all,
in make check for openmpi on a mac I see the following error message; has
anybody ever run into this error? Any solutions?
Best,
Hamed,
raw extraction in 1 microsec
Example 3.26 type1 correct
Example 3.26 type1 correct
Example 3.26 type2 correct
type3 correct
hindexed ok
indexed ok
hvector ...