Because we’ve screwed up in the past? I think the ompi_message_null was me,
and I was in a hurry to prototype for the MPI Forum. And then it stuck.
Brian
On 2/1/23, 3:16 AM, "users on behalf of Jeff Hammond via users"
<users-boun...@lists.open-mpi.org> wrote:
Sent: Tuesday, November 29, 2022 3:36 AM
To: Gestió Servidors via users
Cc: Gilles Gouaillardet
Subject: Re: [OMPI users] Question about "mca" parameters
Hi,
Simply add
btl = tcp,self
If the openib error message persists, try also adding
osc_rdma_btls = ugni,uct,ucp
or simply
osc = ^rdma
Cheers,
Gilles
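The settings above can also be made persistent in the per-user MCA parameter file instead of being repeated on every command line; a sketch, assuming the conventional location $HOME/.openmpi/mca-params.conf:

```
# $HOME/.openmpi/mca-params.conf -- per-user MCA defaults, read on every run
btl = tcp,self
osc = ^rdma
```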
On 11/29/2022 5:16 PM, Gestió Servidors via users wrote:
Hi,
If I run “mpirun --mca btl tcp,self --mca allow_ib 0 -n 12 ./my_prog”
This appears to be a legit bug with the use of MPI_T in the test/example
monitoring app, so I'm going to move the discussion to the Github issue so that
we can track it properly:
https://github.com/open-mpi/ompi/issues/9260
To answer Jong's question: ob1 is one of Open MPI's point-to-point messaging layers (PMLs).
Thank you for the information. I don't know what ob1 is and possible other
choices are. Is there any way for me to check?
Anyhow, I tried a few things but got the same error. Here is a bit more
verbose output:
shell$ mpirun -n 1 --allow-run-as-root --mca pml_base_verbose 10 --mca
mtl_base_verbose 10
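ompi_info can answer the "possible other choices" question above by listing the PML components a given install provides; a hedged sketch (the grep pattern assumes the usual "MCA pml:" prefix in ompi_info component output):

```shell
# List the point-to-point messaging layer (PML) components available
ompi_info | grep "MCA pml"
```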
You need to enable the monitoring PML in order to get access to the
pml_monitoring_messages_count MPI_T. For this you need to know what PML you
are currently using and add monitoring to the pml MCA variable. As an
example if you use ob1 you should add the following to your mpirun command
"--mca pml
o output) when
> I specified btl_base_verbose 100.
>
> I will try using the CIDR for the below hosts as an experiment.
>
> Regards,
> Vipul
>
> From: Jeff Squyres (jsquyres) [mailto:jsquy...@cisco.com]
> Sent: Tuesday, June 23, 2020 1:36 PM
> To: Open MPI User's List
> Cc: Kulshrestha, Vipul
> Subject: Re: [OMPI users] Question about virtual interface
https://www.open-mpi.org/faq/?category=tcp#ip-virtual-ip-interfaces is
referring to interfaces like "eth0:0", where the Linux kernel will have the
same index for both "eth0" and "eth0:0". This will cause Open MPI to get
confused (because it identifies Ethernet interfaces by their kernel indexes
> On Mar 13, 2020, at 9:33 AM, Jeffrey Layton via users
> wrote:
>
> Good morning,
>
> I've compiled a hello world MPI code and when I run it, I get some messages
> I'm not familiar with. The first one is,
>
> --
> WARNIN
All:
Whoops.
My apologies to everybody. Accidentally pressed the wrong combination of
buttons on the keyboard and sent this email out prematurely.
Please disregard.
Thank you.
Sincerely,
Ewen
From: users on behalf of Ewen Chan via users
Sent: July 25, 2019
Yes, yes, and yes. I built everything, including Open MPI, using icc for the C
compiler, icpc for C++, and ifort for Fortran. All point to the same
installation. My application is built using the installed Open MPI front ends,
mpicc, mpicxx, and mpifort, which all report that they use the Intel compilers.
John W
On Aug 2, 2018, at 4:40 PM, Grove, John W wrote:
>
> I am compiling an application using openmpi 3.1.1. The application is mixed
> Fortran/C/C++. I am using the intel compiler on a mac pro running OS 10.13.6.
> When I try to use the mpi_f08 interface I get unresolved symbols at load
> time, sp
There is already a nice solution for the useful special case of ABI
portability where one wants to use more than one MPI library with an
application binary, but only one MPI library for a given application
invocation:
https://github.com/cea-hpc/wi4mpi
They document support for the Intel MPI and Open MPI ABIs.
Don't forget that there's a lot more to "binary portability" between MPI
implementations than just the ABI (wire protocols, run-time interfaces,
...etc.). This is the main (set of) reasons that ABI standardization of the
MPI specification never really took off -- so much would need to be
standardized.
On 09/20/17 23:39, Jeff Hammond wrote:
I assume that anyone who is using Fortran 2003 or later has the good sense to
never use compiler flags to change the size of the INTEGER type, because this
is evil.
Actually, only changing INTEGER size without adjusting REAL size is evil (i.e.
breaks assu
This discussion started getting into an interesting question: ABI
standardization for portability by language. It makes sense to have ABI
standardization for portability of objects across environments. At the same
time it does mean that everyone follows the exact same recipe for low level
implement
On Wed, Sep 20, 2017 at 5:55 AM, Dave Love
wrote:
> Jeff Hammond writes:
>
> > Please separate C and C++ here. C has a standard ABI. C++ doesn't.
> >
> > Jeff
>
> [For some value of "standard".] I've said the same about C++, but the
> current GCC manual says its C++ ABI is "industry standard",
On Wed, Sep 20, 2017 at 6:26 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> On Tue, Sep 19, 2017 at 11:58 AM, Jeff Hammond
> wrote:
>
> > Fortran is a legit problem, although if somebody builds a standalone
> Fortran
> > 2015 implementation of the MPI interface, it would be decoupled from the
> > MPI library compilation.
On Tue, Sep 19, 2017 at 11:58 AM, Jeff Hammond wrote:
> Fortran is a legit problem, although if somebody builds a standalone Fortran
> 2015 implementation of the MPI interface, it would be decoupled from the MPI
> library compilation.
Is this even doable without making any assumptions?
For example
Jeff Hammond writes:
> Intel compilers support GOMP runtime interoperability, although I don't
> believe it is the default. You can use the Intel/LLVM OpenMP runtime with
> GCC such that all three OpenMP compilers work together.
For what it's worth, it's trivial to make a shim with a compatible
Jeff Hammond writes:
> Please separate C and C++ here. C has a standard ABI. C++ doesn't.
>
> Jeff
[For some value of "standard".] I've said the same about C++, but the
current GCC manual says its C++ ABI is "industry standard", and at least
Intel document compatibility with recent GCC on GNU/
Intel compilers support GOMP runtime interoperability, although I don't
believe it is the default. You can use the Intel/LLVM OpenMP runtime with
GCC such that all three OpenMP compilers work together.
Fortran is a legit problem, although if somebody builds a standalone
Fortran 2015 implementation
OMP is yet another source of incompatibility between GNU and Intel
environments. So compiling say Fortran OMP code into a library and trying
to link it with Intel Fortran codes just aggravates the problem.
Michael
On Mon, Sep 18, 2017 at 7:35 PM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com
Hello OpenMPI team,
Thank you for the insightful feedback. I am not claiming in any way that it
is a meaningful practice to build the OpenMPI stack with one compiler and
then just try to convince / force it to use another compilation environment
to build MPI applications. There are occasions, though
Please separate C and C++ here. C has a standard ABI. C++ doesn't.
Jeff
On Mon, Sep 18, 2017 at 5:39 PM Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> Even if i do not fully understand the question, keep in mind Open MPI
> does not use OpenMP, so from that point of view, Open MPI
Even if I do not fully understand the question, keep in mind Open MPI
does not use OpenMP, so from that point of view, Open MPI is
independent of the OpenMP runtime.
Let me emphasize what Jeff already wrote: use different installs
of Open MPI (and you can use modules or lmod in order to choose
I think Jeff Squyres summed it up.
Sent via the Samsung Galaxy S8 active, an AT&T 4G LTE smartphone
Original message From: Michael Thomadakis
Date: 9/18/17 4:57 PM (GMT-05:00)
To: Open MPI Users
Cc: n8tm
Subject: Re: [OMPI users] Question concerning compatibilit
Thanks for the note. How about OMP runtimes though?
Michael
On Mon, Sep 18, 2017 at 3:21 PM, n8tm via users
wrote:
> On Linux and Mac, Intel C and C++ are sufficiently compatible with gcc and
> g++ that this should be possible. This is not so for Fortran libraries or
> Windows.
FWIW, we always encourage you to use the same compiler to build Open MPI and
your application.
Compatibility between gcc and Intel *usually* works for C and C++, but a)
doesn't work for Fortran, and b) there have been bugs in the past where C/C++
compatibility broke in corner cases. My $0.02:
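To follow the advice above, it helps to know which compilers an existing Open MPI install was built with; ompi_info records them at configure time. A sketch (the grep filter is an assumption about the output wording):

```shell
# Show the C, C++, and Fortran compilers this Open MPI build was configured with
ompi_info | grep -i compiler
```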
On Linux and Mac, Intel C and C++ are sufficiently compatible with gcc and g++
that this should be possible. This is not so for Fortran libraries or Windows.
Siegmar,
a noticeable difference is that hello_1 does *not* sleep, whereas
hello_2_slave *does*.
Simply comment out the sleep(...) line, and performance will be identical.
Gilles
On 7/31/2017 9:16 PM, Siegmar Gross wrote:
Hi,
I have two versions of a small program. In the first one th
Thank you for the explanation! I understand what is going on now: there
is a process list for each node whose order is dependent on the mapping
policy, and the ranker, when using "slot," walks through that list.
Makes sense.
Thank you again!
David
On 11/30/2016 04:46 PM, r...@open-mpi.org wro
“slot” never became equivalent to “socket”, or to “core”. Here is what happened:
*for your first example: the mapper assigns the first process to the first node
because there is a free core there, and you said to map-by core. It goes on to
assign the second process to the second core, and the th
Hello Ralph,
I do understand that "slot" is an abstract term and isn't tied down to
any particular piece of hardware. What I am trying to understand is how
"slot" came to be equivalent to "socket" in my second and third example,
but "core" in my first example. As far as I can tell, MPI ranks s
I think you have confused “slot” with a physical “core”. The two have
absolutely nothing to do with each other.
A “slot” is nothing more than a scheduling entry in which a process can be
placed. So when you --rank-by slot, the ranks are assigned round-robin by
scheduler entry - i.e., you assign
Additionally:
- When Open MPI migrated to github, we only brought over relevant open Trac
tickets to Github. As such, many old 1.10 and 1.8 (and earlier) issues were
not brought over.
- Trac is still available in a read-only manner at
https://svn.open-mpi.org/trac/ompi/report.
> On Oct 5, 20
Edwin,
changes are summarized in the NEWS file.
We used to have two github repositories, and they were "merged" recently.
With github, you can list the closed PRs for a given milestone:
https://github.com/open-mpi/ompi-release/milestones?state=closed
then you can click on a milestone, and list
What kind of system was this on? ssh, slurm, ...?
> On Jul 28, 2016, at 1:55 PM, Blosch, Edwin L wrote:
>
> I am running cases that are starting just fine and running for a few hours,
> then they die with a message that seems like a startup type of failure.
> Message shown below. The messag
We already do that as a check, but it came after the 1.6 series - and so you
get the old error message if you mix with the 1.6 series or older versions.
> On May 16, 2016, at 8:22 AM, Gilles Gouaillardet
> wrote:
>
> or this could be caused by a firewall ...
> v1.10 and earlier uses tcp for oob,
or this could be caused by a firewall ...
v1.10 and earlier uses tcp for oob,
from v2.x, unix sockets are used
detecting consistent versions is a good idea;
printing them (mpirun, orted and a.out) can be a first step.
my idea is
mpirun invokes orted with '--ompi_version=x.y.z'
orted checks it is
Ralph Castain writes:
> This usually indicates that the remote process is using a different OMPI
> version. You might check to ensure that the paths on the remote nodes are
> correct.
That seems quite a common problem with non-obvious failure modes.
Is it not possible to have a mechanism that c
the internet and I also performed the following command to find the mpirun
path and added it to the .bashrc file. However, this had no effect.
[user@localhost ~]$ which mpirun
/usr/lib64/openmpi/bin/mpirun
Any ideas? Thanks in advance!
Subject: Re: [OMPI users] Question about mpirun mca_oob_tcp_recv_handler error.
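One common first step in situations like the one above is to make the directory reported by `which mpirun` visible in non-interactive shells as well, by exporting it from the shell startup file; a sketch, assuming bash and the path from the message (adjust to your install, and note the file is ~/.bashrc, not ~/.bashcr):

```shell
# Prepend the Open MPI bin directory (path taken from the thread's `which`
# output) so mpirun is found in future shells too; put this line in ~/.bashrc.
export PATH=/usr/lib64/openmpi/bin:$PATH
```

Note this alone does not fix a genuine version mismatch between nodes; the same PATH must resolve to the same Open MPI version on every host.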
From: Ralph Castain (rhc_at_[hidden])
Date: 2016-05-10
This usually indicates that the remote process is using a different OMPI
version. You might check to ensure that the paths on the remote nodes are
correct.
On Tue, May 10, 2016 at 8:46 AM, lzfneu wrote:
> Hi everyone,
>
> I have a problem to consult you, when I cd to the /examples folder
> cont
Subject: Re: [OMPI users] Question on MPI_Comm_spawn timing
I honestly don’t think anyone has been concerned about the speed of
MPI_Comm_spawn, and so there hasn’t been any effort made to optimize it
> On Apr 3, 2016, at 2:52 AM, Gilles Gouaillardet
> wrote:
>
> Hi,
>
> performance of MPI_Comm_spawn in the v1.8/v1.10 series is known to be poor,
> especially compared to v1.6
Hi,
performance of MPI_Comm_spawn in the v1.8/v1.10 series is known to be poor,
especially compared to v1.6
generally speaking, I cannot recommend v1.6 since it is no longer maintained.
That being said, if performance is critical, you might want to give it a
try.
I did not run any performance measurements
I forgot to include a link to the official announcement of the change,
and that info might be helpful in navigating the different versions and
backwards compatibility:
https://www.open-mpi.org/community/lists/announce/2015/06/0069.php
Thanks,
David
On 02/26/2016 10:43 AM, David Shrader wrote:
Hey Edwin,
The versioning scheme changed with 2.x. Prior to 2.x the "Minor" version
had a different definition and did not mention backwards compatibility
at all (at least in my 1.6.x tarballs). As it turned out for 1.8.x and
1.6.x, 1.8.x was not backwards compatible with 1.6.x, so the behavio
Sure:
$ ompi_info --param hwloc all -l 9
…
MCA hwloc: parameter "hwloc_base_cpu_set" (current value: "",
           data source: default, level: 9 dev/all, type: string)
           Comma-separated list of ranges specifying lo
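The parameter shown above is one way to answer the "avoid a core" question in this thread; a hedged sketch (the 8-core node, the CPU numbering, and ./a.out are assumptions, not from the thread):

```shell
# Restrict Open MPI processes to logical CPUs 1-7, leaving CPU 0 free
# (assumes an 8-core node; ./a.out is a placeholder program)
mpirun --mca hwloc_base_cpu_set 1-7 -n 7 ./a.out
```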
Thank you and one last question. Is it possible to avoid a core and
instruct OMPI to use only the other cores?
On Mon, Dec 22, 2014 at 2:08 PM, Ralph Castain wrote:
>
> On Dec 22, 2014, at 10:45 AM, Saliya Ekanayake wrote:
>
> Hi Ralph,
>
> Yes the report bindings show the correct binding as ex
> On Dec 22, 2014, at 10:45 AM, Saliya Ekanayake wrote:
>
> Hi Ralph,
>
> Yes the report bindings show the correct binding as expected for the
> processes. The doubt I am having is, say I spawn a thread within my process.
> If I don't specify affinity for it, is it possible for it to get sche
Hi Ralph,
Yes the report bindings show the correct binding as expected for the
processes. The doubt I am having is, say I spawn a thread within my
process. If I don't specify affinity for it, is it possible for it to get
scheduled to run in a core outside that of the process?
Second question is,
FWIW: it looks like we are indeed binding to core if PE is set, so if you are
seeing something different, then we may have a bug somewhere.
If you add --report-bindings to your cmd line, you should see where we bound the
procs - does that look correct?
> On Dec 22, 2014, at 9:49 AM, Ralph Casta
They will be bound to whatever level you specified - I believe by default we
bind to socket when mapping by socket. If you want them bound to core, you
might need to add --bind-to core.
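The suggestion above can be sketched as a full command line (the mapping policy, PE count, and process count are illustrative, not from the thread):

```shell
# Map by socket with PE=2 processing elements per rank, bind each rank to
# cores, and print the resulting bindings so the placement can be verified
mpirun --map-by socket:PE=2 --bind-to core --report-bindings -n 4 ./a.out
```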
I can take a look at it - I *thought* we had reset that to bind-to core when
PE=N was specified, but maybe tha
Ah, yes - so here is what is happening. When no slot info is provided, we use
the number of detected cores on each node as the #slots. So if you want to
load-balance across the nodes, you need to set --map-by node
Or add slots=1 to each line of your host file to override the default behavior
> On
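Ralph's two alternatives can be sketched as follows (hostnames and process counts are placeholders):

```shell
# Alternative 1: round-robin ranks across nodes instead of filling each node
mpirun --map-by node -n 4 ./a.out

# Alternative 2: override the detected core count with explicit slot counts
cat > hostfile <<'EOF'
node01 slots=1
node02 slots=1
EOF
mpirun --hostfile hostfile -n 2 ./a.out
```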
I'm not aware of any way to tell using ompi_info, I'm afraid. I'd have to
ponder a bit as to how we could do so since it's a link to a library down below
the one we directly use.
On Jul 21, 2014, at 3:00 PM, Blosch, Edwin L wrote:
> In making the leap from 1.6 to 1.8, how can I check whether
Open MPI is distributed under the modified BSD license. Here’s a link to the
v1.8 LICENSE file:
https://svn.open-mpi.org/trac/ompi/browser/branches/v1.8/LICENSE
As long as you abide by the terms of that license, you are fine.
On Jun 17, 2014, at 4:41 AM, Victor Vysotskiy
wrote:
> Dear
> >> Nada, zilch, nothing on standard OS X install. I do not want to put an extra
> >> requirement on my users. Nor do I want something as simple-minded as CMake.
> >> autotools works great for me.
> >>
> >> -Nathan
From: users [users-boun...@open-mpi.org] on behalf of Ralph Castain
[r...@open-mpi.org]
Sent: Friday, May 16, 2014 2:07 PM
To: Open MPI Users
Subject: Re: [OMPI users] Question about scheduler support
On May 16, 2014, at 1:03 PM, Fabricio Cannini <fcann...@gmail.com> wrote:
On 16-05-2014 17:07, Ralph Castain wrote:
FWIW, simply for my own curiosity's sake, if someone could confirm or
deny whether cmake:
1. Supports the following compiler suites: GNU (that's a given, I
assume), Clang, OS X native (which is variants of GNU and Clang),
Absoft, PGI, Intel, Cray, HP-UX,
want something as simple-minded as CMake.
> autotools works great for me.
>
> -Nathan
From: users [users-boun...@open-mpi.org] on behalf of Ralph Castain
[r...@open-mpi.org]
Sent: Friday, May 16, 2014 2:07 PM
To: Open MPI Users
Subject: Re: [OMPI users] Question about scheduler support
On May 16, 2014
On May 16, 2014, at 1:03 PM, Fabricio Cannini wrote:
> On 16-05-2014 10:06, Jeff Squyres (jsquyres) wrote:
>> On May 15, 2014, at 8:00 PM, Fabricio Cannini
>> wrote:
>>
Nobody is disagreeing that one could find a way to make CMake
work - all we are saying is that (a) CMake has iss
On 16-05-2014 10:06, Jeff Squyres (jsquyres) wrote:
On May 15, 2014, at 8:00 PM, Fabricio Cannini
wrote:
Nobody is disagreeing that one could find a way to make CMake
work - all we are saying is that (a) CMake has issues too, just
like autotools, and (b) we have yet to see a compelling reas
On 2014-05-16 09:06, Jeff Squyres (jsquyres) wrote:
On May 15, 2014, at 8:00 PM, Fabricio Cannini wrote:
Nobody is disagreeing that one could find a way to make CMake work - all we are
saying is that (a) CMake has issues too, just like autotools, and (b) we have
yet to see a compelling re
On May 15, 2014, at 8:00 PM, Fabricio Cannini wrote:
>> Nobody is disagreeing that one could find a way to make CMake work - all we
>> are saying is that (a) CMake has issues too, just like autotools, and (b) we
>> have yet to see a compelling reason to undertake the transition...which
>> woul
On 15-05-2014 20:48, Ralph Castain wrote:
Nobody is disagreeing that one could find a way to make CMake work - all we are
saying is that (a) CMake has issues too, just like autotools, and (b) we have
yet to see a compelling reason to undertake the transition...which would have
to be a *very
Nobody is disagreeing that one could find a way to make CMake work - all we are
saying is that (a) CMake has issues too, just like autotools, and (b) we have
yet to see a compelling reason to undertake the transition...which would have
to be a *very* compelling one.
On May 15, 2014, at 4:45 PM
On 15-05-2014 20:15, Maxime Boissonneault wrote:
On 2014-05-15 18:27, Jeff Squyres (jsquyres) wrote:
On May 15, 2014, at 6:14 PM, Fabricio Cannini wrote:
Alright, but now I'm curious as to why you decided against it.
Could please elaborate on it a bit ?
OMPI has a long, deep history wi
On May 15, 2014, at 4:15 PM, Maxime Boissonneault
wrote:
> On 2014-05-15 18:27, Jeff Squyres (jsquyres) wrote:
>> On May 15, 2014, at 6:14 PM, Fabricio Cannini wrote:
>>
>>> Alright, but now I'm curious as to why you decided against it.
>>> Could please elaborate on it a bit ?
>> OMPI has
On 2014-05-15 18:27, Jeff Squyres (jsquyres) wrote:
On May 15, 2014, at 6:14 PM, Fabricio Cannini wrote:
Alright, but now I'm curious as to why you decided against it.
Could please elaborate on it a bit ?
OMPI has a long, deep history with the GNU Autotools. It's a very long,
complicated
On May 15, 2014, at 6:14 PM, Fabricio Cannini wrote:
> Alright, but now I'm curious as to why you decided against it.
> Could please elaborate on it a bit ?
OMPI has a long, deep history with the GNU Autotools. It's a very long,
complicated story, but the high points are:
1. The GNU Autotools
On 15-05-2014 18:40, Ralph Castain wrote:
On May 15, 2014, at 2:34 PM, Fabricio Cannini wrote:
On 15-05-2014 07:29, Jeff Squyres (jsquyres) wrote:
I think Ralph's email summed it up pretty well -- we unfortunately have (at
least) two distinct groups of people who install OMPI:
a) tho
On May 15, 2014, at 2:34 PM, Fabricio Cannini wrote:
> On 15-05-2014 07:29, Jeff Squyres (jsquyres) wrote:
>> I think Ralph's email summed it up pretty well -- we unfortunately have (at
>> least) two distinct groups of people who install OMPI:
>>
>> a) those who know exactly what they want
On Thu, May 15, 2014 at 06:34:20PM -0300, Fabricio Cannini wrote:
> On 15-05-2014 07:29, Jeff Squyres (jsquyres) wrote:
> >I think Ralph's email summed it up pretty well -- we unfortunately have (at
> >least) two distinct groups of people who install OMPI:
> >
> >a) those who know exactly what
Please allow me to chip in my $0.02 and suggest not reinventing the
wheel, but instead consider migrating the build system to cmake:
http://www.cmake.org/
I agree that menu-wise, CMake does a pretty good job with ccmake, and is
much, much easier to create than autoconf/automake/m4 stuff (
On 15-05-2014 07:29, Jeff Squyres (jsquyres) wrote:
I think Ralph's email summed it up pretty well -- we unfortunately have (at
least) two distinct groups of people who install OMPI:
a) those who know exactly what they want and don't want anything else
b) those who don't know exactly what th
Hi Gus
The issue is that you have to work thru all the various components (leafing
thru the code base) to construct a list of all the things you *don't* want to
build. By default, we build *everything*, so there is no current method to
simply "build only what I want".
For those building static
Hi List
Sorry, but I confess I am having a hard time understanding
all the fuss about this.
At least in OMPI 1.6.5 there are already
two configure options that just knock out support for slurm and
loadleveler if they are set to "no", hopefully for the joy of everybody
that want lean and mean OMP
These are all good points -- thanks for the feedback.
Just to be clear: my point about the menu system was to generate a file that
could be used for subsequent installs, very specifically targeted at those who
want/need scriptable installations.
One possible scenario could be: you download OMPI
I’m not sure how this would apply to other options, but for the scheduler, what
about this: no scheduler-related options defaults to everything enabled (like
before), but any explicit scheduler enable option disables all the other
schedulers by default? Multiple explicit enable options would en
A file would do the trick, but from my experience of building programs,
I always prefer configure options. Maybe just an option
--disable-optional
that disables anything that is optional and not explicitly requested.
Maxime
On 2014-05-15 08:22, Bennet Fauber wrote:
Would a separate file t
Would a separate file that contains each scheduler option and is
included by configure do the trick? It might read
include-slurm=YES
include-torque=YES
etc.
If all options are set to default to YES, then the people who want no
options are satisfied, but those of us who would like to change the
c
On 2014-05-15 06:29, Jeff Squyres (jsquyres) wrote:
I think Ralph's email summed it up pretty well -- we unfortunately have (at
least) two distinct groups of people who install OMPI:
a) those who know exactly what they want and don't want anything else
b) those who don't know exactly what th
I think Ralph's email summed it up pretty well -- we unfortunately have (at
least) two distinct groups of people who install OMPI:
a) those who know exactly what they want and don't want anything else
b) those who don't know exactly what they want and prefer to have everything
installed, and hav
I think Maxime's suggestion is sane and reasonable. Just in case
you're taking ha'penny's worth from the groundlings. I think I would
prefer not to have capability included that we won't use.
-- bennet
On Wed, May 14, 2014 at 7:43 PM, Maxime Boissonneault
wrote:
> For the scheduler issue, I
Good point - will see what we can do about it.
On May 14, 2014, at 4:43 PM, Maxime Boissonneault
wrote:
> For the scheduler issue, I would be happy with something like, if I ask for
> support for X, disable support for Y, Z and W. I am assuming that very rarely
> will someone use more than o
For the scheduler issue, I would be happy with something like, if I ask
for support for X, disable support for Y, Z and W. I am assuming that
very rarely will someone use more than one scheduler.
Maxime
On 2014-05-14 19:09, Ralph Castain wrote:
Jeff and I have talked about this and are app
Jeff and I have talked about this and are approaching a compromise. Still more
thinking to do, perhaps providing new configure options to "only build what I
ask for" and/or a tool to support a menu-driven selection of what to build - as
opposed to today's "build everything you don't tell me to
On May 14, 2014, at 3:21 PM, Jeff Squyres (jsquyres) wrote:
> On May 14, 2014, at 6:09 PM, Ralph Castain wrote:
>
>> FWIW: I believe we no longer build the slurm support by default, though I'd
>> have to check to be sure. The intent is definitely not to do so.
>
> The srun-based support buil
On May 14, 2014, at 6:09 PM, Ralph Castain wrote:
> FWIW: I believe we no longer build the slurm support by default, though I'd
> have to check to be sure. The intent is definitely not to do so.
The srun-based support builds by default. I like it that way. :-)
PMI-based support is a differen
Indeed, a quick review indicates that the new policy for scheduler support was
not uniformly applied. I'll update it.
To reiterate: we will only build support for a scheduler if the user
specifically requests it. We did this because we are increasingly seeing
distros include header support for
FWIW: I believe we no longer build the slurm support by default, though I'd
have to check to be sure. The intent is definitely not to do so.
The plan we adjusted to a while back was to *only* build support for schedulers
upon request. Can't swear that they are all correctly updated, but that was
Here's a bit of our rational, from the README file:
Note that for many of Open MPI's --with-<foo> options, Open MPI will,
by default, search for header files and/or libraries for <foo>. If
the relevant files are found, Open MPI will build support for <foo>;
if they are not found, Open MPI will s
Hi Gus,
Oh, I know that, what I am refering to is that slurm and loadleveler
support are enabled by default, and it seems that if we're using
Torque/Moab, we have no use for slurm and loadleveler support.
My point is not that it is hard to compile it with torque support, my
point is that it i
On 05/14/2014 04:25 PM, Maxime Boissonneault wrote:
Hi,
I was compiling OpenMPI 1.8.1 today and I noticed that pretty much every
single scheduler has its support enabled by default at configure (except
the one I need, which is Torque). Is there a reason for that ? Why not
have a single scheduler