Re: [O-MPI users] Further thoughts
Hello,

This has been an interesting discussion to follow. Here are my thoughts on the RPM packaging...

On 6/16/05, Jeff Squyres wrote:
[snip]
> We've also got the "announce" mailing list -- a low volume list just
> for announcing new releases (and *exciting* messages about products you
> might be interested in... just kidding.). ;-)
[snip]
> We actually got a lot of help in this area from Greg Kurtzer from LBL
> (of cAos/Warewulf/CentOS fame). He helped us a bunch with our
> [previously extremely lame] LAM/MPI .spec file, and then offered to
> write one for Open MPI (which he did about a month or two ago).
>
> I have some random user questions about RPMs, though:
>
> 1. Would you prefer an all-in-one Open MPI RPM, or would you prefer
> multiple RPMs (e.g., openmpi-doc, openmpi-devel, openmpi-runtime,
> ...etc.)?

I prefer split RPMs. The fine-grained split you mention works well for thin/diskless nodes, but a simple split of runtime vs. everything-else would be "good enough". The primary problem with an all-in-one RPM would be the footprint of the non-MPI packages that satisfy MPI's dependency tree, especially the compilers.

> 2. We're definitely going to provide an SRPM suitable for "rpmbuild
> --rebuild". However, we're not 100% sure that it's worthwhile to
> provide binary RPMs because everyone's cluster/development systems seem
> to be "one off" from standard Linux distros. Do you want a binary
> RPM(s)? If so, for which distros? (this is one area where vendors
> tend to have dramatically different views than academics/researchers)

If you supply fairly clean SRPMs, I think the distros can do the binary RPM building themselves. At least, that is easy enough for cAos to do. I guess the problem lies in the disparity between the distribution release cycles and Open MPI's expected release cycle.
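[The runtime/devel/doc split discussed above could be sketched in a spec file roughly as follows. This is an illustrative fragment only -- the sub-package names, summaries, and file lists are assumptions for the sake of example, not taken from the actual Open MPI spec file.]

```spec
# Illustrative sub-package split for an MPI implementation.
# Names and file lists are hypothetical, not the real Open MPI spec.

%package runtime
Summary: Runtime libraries and process launcher

%package devel
Summary: Headers and wrapper compilers for building MPI applications
Requires: %{name}-runtime = %{version}-%{release}

%package doc
Summary: Man pages and documentation

%files runtime
%{_libdir}/*.so.*
%{_bindir}/mpirun

%files devel
%{_includedir}/mpi.h
%{_bindir}/mpicc
%{_libdir}/*.so

%files doc
%{_mandir}/man*/*
```

[Thin/diskless nodes would then install only the runtime sub-package, while developer workstations pull in devel and doc as well.]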
Certain Red Hat distribution versions shipped with amazingly old versions of LAM/MPI, which I recall caused no end of trouble on the LAM/MPI mailing lists, with questions about long-ago-fixed bugs. How much is it worth to the Open MPI team to be able to answer those questions with:

rpm -Uvh http://open-mpi.org//open-mpi-1.0-fixed.x86_64.rpm

rather than having to explain how to do "rpmbuild --rebuild"?

I'll suggest that eventually you will want binary RPMs for SUSE 9.3 and CentOS 4 and/or Scientific Linux 4, in both i386 & x86_64 flavors. I'm sure you will get demand for a lot of Fedora Core flavors, but I think that road leads to madness... I think it might work out better to try to get Open MPI into Dag Wieers' RPM/APT/YUM repositories; see:
http://dag.wieers.com/home-made/apt/
or the still-under-construction RPMforge site:
http://rpmforge.net/

That's more than my two cents...
--
Tim Mattox - tmat...@gmail.com
http://homepage.mac.com/tmattox/
I'm a bright... http://www.the-brights.net/
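[For comparison, the "rpmbuild --rebuild" path that would otherwise need explaining to every user looks roughly like this. The SRPM filename and the output directory are assumptions for illustration; the RPMS path in particular varies by distribution and rpm version.]

```shell
# Rebuild a binary RPM from the source RPM (filename is hypothetical):
rpmbuild --rebuild openmpi-1.0-1.src.rpm

# The resulting binary RPM lands in an arch-specific build directory;
# on Red Hat-style systems of that era this was typically:
rpm -Uvh /usr/src/redhat/RPMS/x86_64/openmpi-1.0-1.x86_64.rpm
```

[Two commands instead of one, plus a compiler and build dependencies on the target machine -- which is exactly the support burden a hosted binary RPM avoids.]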
Re: [O-MPI users] re build time
Please paste the quoted text (appropriately expanded) into a README or INSTALL or some other prominent doc location/appendix as soon as possible, if it isn't there already. Details like this matter a lot to a few of us, and many of us haven't completely drunk the 3000 gallons of twisted logic that is the autotools conventions.

thanks,
ben

On Thu, Jun 16, 2005 at 08:44:48PM -0400, Jeff Squyres wrote:
>
> The default build is to make libmpi be a shared library and build all
> the components as dynamic shared objects (think "plugins").
>
> But we currently use Autoconf+Automake+Libtool, so to build everything
> static, the standard flags suffice:
>
> ./configure --enable-static --disable-shared
>
> This will make libmpi.a, all the components are statically linked into
> libmpi.a, etc. There are more esoteric configure flags that allow
> building some components as DSOs and others statically linked into
> libmpi, but most people want entirely one way or the other, so I won't
> provide the [uninteresting] details here.
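[A quick way to confirm which way a tree was actually built is to inspect the installed products. This is a minimal sketch, assuming a standard "make install" into $PREFIX; the exact library filenames depend on platform and libtool version.]

```shell
# After "./configure --enable-static --disable-shared" and "make install":

# The static archive should exist...
ls "$PREFIX/lib/libmpi.a"

# ...and no shared libmpi should have been installed:
ls "$PREFIX/lib"/libmpi.so* 2>/dev/null || echo "no shared libmpi (static build)"
```

[With the default (shared) build, the test reverses: libmpi.so* is present and libmpi.a is not.]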
Re: [O-MPI users] Further thoughts
Having been a vict^H^H^H^Hproducer of RPMs for HPC apps, and from what I've seen of your installed files (which isn't an extremely large set), I vote as follows:

1) All-in-one. Given the current state of HPC, nearly all "users" are also developers.

2) I'm in favor of source RPMs, most particularly if you include the spec files in the source tarball (not just hidden inside the SRPM). The more examples of the proper invocation of configure on specific architectures and network layers, the happier I'm going to be. One could argue that the proper place for collecting such examples is a wiki, but in the source is good too. Binary RPMs should be the responsibility of the distribution makers (Red Hat, whoever else), not the developers.

Ben

On Thu, Jun 16, 2005 at 09:01:41PM -0400, Jeff Squyres wrote:
> I have some random user questions about RPMs, though:
>
> 1. Would you prefer an all-in-one Open MPI RPM, or would you prefer
> multiple RPMs (e.g., openmpi-doc, openmpi-devel, openmpi-runtime,
> ...etc.)?
>
> 2. We're definitely going to provide an SRPM suitable for "rpmbuild
> --rebuild". However, we're not 100% sure that it's worthwhile to
> provide binary RPMs because everyone's cluster/development systems seem
> to be "one off" from standard Linux distros. Do you want a binary
> RPM(s)? If so, for which distros? (this is one area where vendors
> tend to have dramatically different views than academics/researchers)
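[The kind of per-site configure invocations Ben wants collected might look like the sketch below. The interconnect flag names here are purely illustrative assumptions -- they have not been verified against the actual Open MPI configure script, so check ./configure --help before borrowing any of them.]

```shell
# Hypothetical examples of per-architecture configure invocations
# worth collecting alongside the spec file (flag names are illustrative):

# Plain TCP cluster, fully static build:
./configure --prefix=/opt/openmpi --enable-static --disable-shared

# Cluster with a vendor interconnect, pointing configure at the
# vendor's driver installation (path and flag are assumptions):
./configure --prefix=/opt/openmpi --with-gm=/opt/gm
```

[Keeping these in the tarball means each site's working recipe travels with the source, instead of being rediscovered on the mailing list.]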
Re: [O-MPI users] re build time
'Tis already in the README. Someday we'll have nice glossy PDFs like LAM, but for the beta, the README is what you get. :-)

On Jun 17, 2005, at 1:11 PM, Ben Allan wrote:

> Please paste the quoted text (appropriately expanded) into a README or
> INSTALL or some other prominent doc location/appendix as soon as
> possible, if it isn't there already. Details like this matter a lot to
> a few of us, and many of us haven't completely drunk the 3000 gallons
> of twisted logic that is the autotools conventions.
>
> thanks,
> ben
>
> On Thu, Jun 16, 2005 at 08:44:48PM -0400, Jeff Squyres wrote:
>> The default build is to make libmpi be a shared library and build all
>> the components as dynamic shared objects (think "plugins").
>>
>> But we currently use Autoconf+Automake+Libtool, so to build everything
>> static, the standard flags suffice:
>>
>> ./configure --enable-static --disable-shared
>>
>> This will make libmpi.a, all the components are statically linked into
>> libmpi.a, etc. There are more esoteric configure flags that allow
>> building some components as DSOs and others statically linked into
>> libmpi, but most people want entirely one way or the other, so I won't
>> provide the [uninteresting] details here.
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users

--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/