Jeff and I have talked about this and are approaching a compromise. Still more thinking to do - perhaps providing new configure options to "only build what I ask for" and/or a tool to support menu-driven selection of what to build, as opposed to today's "build everything you don't tell me not to build".
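For illustration only, such an "only build what I ask for" mode might look something like the sketch below (the first flag is purely hypothetical and does not exist in any current configure):

    # hypothetical flag -- today's configure has nothing like it
    ./configure --disable-auto-detect --with-tm=/opt/torque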
Tough set of compromises, as it depends on the target audience. Sys admins prefer the "build only what I say" approach, while users (who frequently aren't that familiar with the innards of a system) prefer the "build all" mentality.

On May 14, 2014, at 3:16 PM, Ralph Castain <r...@open-mpi.org> wrote:

> Indeed, a quick review indicates that the new policy for scheduler support was not uniformly applied. I'll update it.
>
> To reiterate: we will only build support for a scheduler if the user specifically requests it. We did this because we are increasingly seeing distros include header support for various schedulers, and so just finding the required headers isn't enough to know that the scheduler is intended for use. So we wind up building a bunch of useless modules.
>
> On May 14, 2014, at 3:09 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
>> FWIW: I believe we no longer build the SLURM support by default, though I'd have to check to be sure. The intent is definitely not to do so.
>>
>> The plan we adjusted to a while back was to *only* build support for schedulers upon request. Can't swear that they are all correctly updated, but that was the intent.
>>
>> On May 14, 2014, at 2:52 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>>
>>> Here's a bit of our rationale, from the README file:
>>>
>>> Note that for many of Open MPI's --with-<foo> options, Open MPI will,
>>> by default, search for header files and/or libraries for <foo>. If
>>> the relevant files are found, Open MPI will build support for <foo>;
>>> if they are not found, Open MPI will skip building support for <foo>.
>>> However, if you specify --with-<foo> on the configure command line and
>>> Open MPI is unable to find relevant support for <foo>, configure will
>>> assume that it was unable to provide a feature that was specifically
>>> requested and will abort so that a human can resolve the issue.
>>>
>>> In some cases, we don't need header or library files. For example, with SLURM and LSF, our native support is actually just fork/exec'ing the SLURM/LSF executables under the covers (e.g., as opposed to using rsh/ssh). So we can basically *always* build them. So we do.
>>>
>>> In general, OMPI builds support for everything that it can find, on the rationale that a) we can't know ahead of time exactly what people want, and b) most people want to just "./configure && make -j 32 install" and be done with it -- so build as much as possible.
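To make the --with-<foo> behavior above concrete, the two modes look roughly like this (the Torque path is only an example; check ./configure --help for the exact option names in your version):

    # Default: probe for whatever scheduler support can be found and build it.
    ./configure --prefix=/opt/openmpi

    # Explicit request: if the Torque/TM headers and libraries cannot be found
    # under the given directory, configure aborts instead of silently skipping
    # the feature.
    ./configure --prefix=/opt/openmpi --with-tm=/opt/torque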
>>> On May 14, 2014, at 5:31 PM, Maxime Boissonneault <maxime.boissonnea...@calculquebec.ca> wrote:
>>>
>>>> Hi Gus,
>>>> Oh, I know that. What I am referring to is that SLURM and LoadLeveler support are enabled by default, and it seems that if we're using Torque/Moab, we have no use for SLURM and LoadLeveler support.
>>>>
>>>> My point is not that it is hard to compile it with Torque support; my point is that it is compiling support for many schedulers, while I'm rather convinced that very few sites actually use multiple schedulers at the same time.
>>>>
>>>> Maxime
>>>>
>>>> On 2014-05-14 16:51, Gus Correa wrote:
>>>>> On 05/14/2014 04:25 PM, Maxime Boissonneault wrote:
>>>>>> Hi,
>>>>>> I was compiling Open MPI 1.8.1 today and I noticed that pretty much every single scheduler has its support enabled by default at configure time (except the one I need, which is Torque). Is there a reason for that? Why not have a single scheduler enabled and require it to be specified at configure time?
>>>>>>
>>>>>> Is there any reason for me to build with LoadLeveler or SLURM if we're using Torque?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Maxime Boissonneault
>>>>>
>>>>> Hi Maxime
>>>>>
>>>>> I haven't tried 1.8.1 yet. However, for all previous versions of OMPI I tried, up to 1.6.5, all it took to configure it with Torque support was to point configure to the Torque installation directory (which is non-standard in my case):
>>>>>
>>>>> --with-tm=/opt/torque/bla/bla
>>>>>
>>>>> My two cents,
>>>>> Gus Correa
>>>>
>>>> --
>>>> Maxime Boissonneault
>>>> Computational Analyst - Calcul Québec, Université Laval
>>>> Ph.D. in Physics
>>>
>>> --
>>> Jeff Squyres
>>> jsquy...@cisco.com
>>> For corporate legal information go to:
>>> http://www.cisco.com/web/about/doing_business/legal/cri/
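Putting Gus's --with-tm suggestion together with Maxime's goal of skipping the schedulers he doesn't use, a Torque-only build could look roughly like the following; the --without-* names follow the usual autoconf convention but should be verified against ./configure --help for the Open MPI version in use:

    # paths are site-specific; verify the option names for your version
    ./configure --with-tm=/opt/torque \
                --without-slurm \
                --without-loadleveler
    make -j 32 install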