Re: [O-MPI users] shell interaction
> > Also on the fantasy wish list is for the libraries to
> > be installed in libtool form (unless you go away from autotools
> > altogether).
>
> Do you mean install the .la files as well as the usual .so and .a
> files? If so, we already do that. If you mean something else, could
> you explain what you mean?

Just that precisely (oops, oh, there they are indeed). Only if you do decide to move toward scons or something and away from autotools, I would encourage you to consider keeping libtool in the mix.

thanks
ben
Re: [O-MPI users] re build time
[An aside for the mailing list admin before the main message: I want to subscribe a secondary address and then check the box that says nomail in the mailman membership list. The secondary address can post mail but runs no server to accept inbound mail, which tends to squish the confirm portion of the subscribe dialog.]

On Wed, Jun 15, 2005 at 06:17:09PM -0400, Jeff Squyres wrote:
>
> The ompi_info command was directly derived from the LAM/MPI laminfo
> command. However, I've never liked the fact that there's a "_" in the
> name. Should it be renamed? Options I see are:

I, for obvious reasons (mainly to do with 'well, most projects name it that'), will vote for open-mpi-config and/or openmpi-config.

In perusing the output of -all from ompi_info, some oddities:

1) For us sedders, the MCA base section seems to have several instances of

     $heading : parameter $name default {linebreak} $somevalue

   which might be awk friendly, but I'm not sure how sed-amateur friendly it is. Typically this happens around long path names.

2) A nice catalog of flags used at compile time, prefix dirs, etc. is provided, thank god and/or Jeff. Of course ompi_info --help didn't tell me that. However, the compiler variables specify unadorned and hence unuseful names like

     C++ compiler: g++
     Fortran77 compiler: g77
     Fortran90 compiler: none

   which just cause problems on the *very good* chance that the user has a different path or installs new compilers. I can't count the number of times I've "debugged" some user trying to compile C++ code with a mismatched mpic[xx,++] wrapper. Please, extract the full path names of the compilers your wrappers are going to invoke and put them in ompi_info.

thanks, (an incrementally happier) Ben

> 1. ompi_info (the current name)
> 2. ompi-info
> 3. ompiinfo
> 4. something else entirely
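The colon-separated field layout that the thread eventually converges on is awk friendly in exactly the way asked for here. A minimal sketch, using a hypothetical sample of such output (real field names, values, and paths vary by release and installation):

```shell
# Hypothetical sample of "ompi_info --parsable"-style output, written to a
# file so the parsing can be tried without an Open MPI install.
cat > /tmp/ompi_info_sample.txt <<'EOF'
compiler:c:command:gcc
compiler:c:absolute:/usr/bin/gcc
compiler:cxx:command:g++
compiler:cxx:absolute:/usr/bin/g++
compiler:f90:command:none
compiler:f90:absolute:none
EOF

# Pull out the absolute path of the C compiler by splitting on ':'.
# Note this would break if a path ever contained a ':' itself.
awk -F: '$1 == "compiler" && $2 == "c" && $3 == "absolute" { print $4 }' \
    /tmp/ompi_info_sample.txt
```

Grepping on `compiler:c:` in the same spirit collects every field about the C compiler at once.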
Re: [O-MPI users] re build time
On Wed, Jun 15, 2005 at 08:27:58PM -0400, Jeff Squyres wrote:
> On Jun 15, 2005, at 7:02 PM, Ben Allan wrote:
>
> Ah -- I thought that that would be a different issue (I presume you're
> speaking of the compile/lib flags command, like gnome-config et
> al.)...? Are you saying that the compile/lib flags should be
> accessible from ompi_info in a fine-grained fashion as well? (they're
> not right now -- only "compile flags" and "link flags")

As much info as we can get, in as flexible a format as possible, is the best thing I can imagine. And of course some attention is needed so that from release to release, information output stays consistent with prior releases, i.e. the labels don't change gratuitously. What's called "C compiler" in one release shouldn't be called "ISO-C-compiler" in the next. New labels can be added, of course.

> That would obviate using the frameworks like gnome-config (which can
> read arbitrary *Conf.sh files), or ... er... I swear there was another
> one, but I can't seem to find it at the moment. I'm not saying that
> this is necessarily a Bad Thing; it's just something else that would
> need to be implemented.

Actually, I'm perfectly happy with an ompiConf.sh provided there's an open-mpi-config that will tell me where it sits (and maybe even query it for me).

> Did you look at the output when you run with the -parsable flag? (see
> my other mail about this)

I tried that just now and it doesn't look different. Will check your other mail (which apparently I haven't reached yet in the mail reader). I take that back: apparently -parseable isn't recognized, but -parsable is. A :-separated output results. Tolerable. :)

Kudos to you all, actually. I haven't seen anything this useful from the mpich team yet. {now matt can correct me...}

> I'm still not sure that you're getting what you want, though.
> Note that there's two sets of flags provided by "ompi_info -all" -- the
> flags that Open MPI was built with and the flags that are added by the
> wrapper compilers. Are you just extracting the wrapper compiler flags?
> Are they sufficient?

It is useful to know both.

> Also note that the wrapper compilers will report their flags to you as
> well:
>
> mpicc -showme
> mpicc -showme:compile
> mpicc -showme:link

As usual, mpicc --help doesn't show showme as an option.

> Finally, is there a reason you can't just use the wrapper compilers
> themselves? They can even be layered with other compilers if
> necessary. Unless there's a technical reason that you can't, I would
> strongly advise using the wrapper compilers -- we wrote them for
> exactly this purpose.

I love it when compiler wrappers work. But in the context of multi-language builds, cranky C++ and Fortran compilers competing for who gets to link either the executable or construct the shared library, mis-installations by sysadmins, portability to horrors like AIX, etc., all wrappers are taken with a grain of salt. My users expect to combine C, C++, Fortran, Python, Java(!), and fortran-variant-x all in the same executable on a diversity of platforms. And when it doesn't work, they don't go to you, they tell me "hey, make it work, my mpi isn't broken -- it runs my vanilla C code all the time."

The real issue is, of course, the utter insanity of history that is the linker. The workaround always involves reverse-engineering the compiler wrappers and assembling the link line details explicitly. Far better that this kind of insanity be testable and the work-arounds picked out by my configure scripts than all the users coming back to me for individual attention.

> help messages as of yet. Don't worry; they will be there in the
> not-distant future (look at LAM's documentation and verbose help
> messages as an example: I believe in good error messages).

Looking forward to it.
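When the wrappers can't be used directly, the -showme output is still the authoritative source of flags for a configure script to record. A sketch of extracting the link flags from it -- the showme line below is a hypothetical sample captured as a string, not real Open MPI output, so the parsing can be tried without an MPI install:

```shell
# Hypothetical output of "mpicc -showme:link"; a real installation
# prints its own compiler name, paths, and libraries.
showme_link='gcc -pthread -L/opt/openmpi/lib -lmpi -lorte -lopal -ldl'

# Keep only the -L and -l tokens, i.e. what a configure script usually
# wants to store as MPI_LIBS.  Relying on word splitting here, so this
# assumes no flag contains embedded whitespace.
mpi_libs=$(printf '%s\n' $showme_link | grep '^-[Ll]' | xargs)
echo "$mpi_libs"
```

In practice one would run `mpicc -showme:link` itself and feed the result through the same filter; the grain-of-salt caveat above still applies to whatever the wrapper reports.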
> > Please, extract the full path name to the compilers your
> > wrappers are going to invoke and put them in ompi_info.
>
> Actually, it is whatever was given to configure. In this case, only
> "gcc" was given to configure. For example, if you configure with:
>
> ./configure CC=/path/to/gcc
>
> You'll see:
>
> C compiler: /path/to/gcc
>
> And that's also the name that mpicc will fork/exec to compile C
> applications. I'd prefer to leave it this way for the following
> reasons:
>
> - this is the way that I've seen most Autoconf-enabled build systems
>   work
> - if people want to use absolute names for compilers, they can
> - those who don't want absolute names aren't forced to (there's many an
>   in
Re: [O-MPI users] Questions on status
> > It takes time to incorporate a new mpi implementation (and yet
> > another set of awful build requirement peculiarities) into a
> > package like mine that is expected to be portable and to cope
> > seamlessly with every mpi that comes along.
>
> What is your tool, BTW?

Well, several. Primarily ccaffeine (Sandia) and babel (LLNL), which are, respectively, a generic HPC component framework and the DOE language interoperability tool.

> other mails today have indicated, OMPI has fully functional wrapper
> compilers (mpicc, mpiCC, mpif77, mpif90) and an ompi_info command
> (analogous to, but greatly superseding, LAM's laminfo command).

I'm looking forward to seeing how well the wrappers interact with babel+libtool.

Ben
Re: [O-MPI users] re build time
On Thu, Jun 16, 2005 at 06:33:51PM -0400, Jeff Squyres wrote:
> On Jun 16, 2005, at 2:58 PM, Ben Allan wrote:
>
> The only reason to have something like ompiConf.sh is to use the
> frameworks that already exist (like the gnome-conf thingy). I was only
> tossing that out as an example -- I didn't know if you were looking to
> use a standardized tool or didn't really care where the info came from
> as long as there was a defined interface to it.
>
> It sounds like the latter.

I'm sure any standardized tool I assume won't be there, so yes, the latter.

> > As usual, mpicc --help doesn't show showme as an option.
>
> The *only* flag that mpicc (and friends) recognizes is --showme.
> *Everything* else is passed to the underlying compiler. We didn't want
> to take the chance, for example, that --help was actually a valid flag
> for the underlying compiler.

So is this called out in the mpi-2 standard? Please god let it be so; I haven't noticed it yet if it is.

> This kinda hamstrings the ability to add features into wrapper
> compilers, but we can figure out something safe if we need to.

Yah. Wouldn't it be nice if we could reserve --mpi-* for the compiler wrapper guys? I'm guessing there aren't *too many* existing serial compilers that already use --mpi-$x.

> > portability to horrors like AIX, etc, all wrappers are
> > taken with a grain of salt.
>
> I guess I still don't understand why -- all the reasons you cited above
> are going to be problematic regardless of whether you're using a
> wrapper compiler or not. All that mpicc (and friends) do is add the
> proper -I, -L, -l, and other flags (like -pthread). That's it. Which
> you use to link your application / create your shared library is still
> up to you (e.g., mpicc/c++/f77/f90), for example.

Well, unfortunately not. Some of those flags and libraries linked by an arbitrary (not necessarily your) mpi wrapper may make very definite assumptions about things like c++ or dynamic loading.
That's fine when all the code in all the components in the final executable is built with your compiler wrapper. Unfortunately people have the habit of using serial libraries, built ahead of time with non-mpi compilers, in parallel applications. They even still do things like use pvm in mpi applications. Frequently it's much easier to extract CXXFLAGS and propagate them than it is to convince some third-party configure script to accept mpicxx as a compiler.

> I remember that MPI wrapper compilers in the mid- and late-90's were
> pretty crappy. But I think they've all gotten pretty reasonable of
> late (I could be wrong here, though...?).

I pray you're right, but am skeptical.

> Let me know if the wrappers work for you (ditto for LAM/MPI; the
> wrappers in OMPI are basically the same -- but slightly expanded -- as
> the wrappers from LAM/MPI). I have only seen one situation (extremely
> recently) where a LAM user *couldn't* use the wrapper compilers, but
> they wanted to intercept MPI calls in a fairly non-standard way, so we
> judged that an acceptable failure for the wrappers (i.e., the user was
> satisfied with --showme:foo).

On my list to test. That's why the bootleg source. I'm optimistic from what I've seen, but busy getting out some papers just now.

> Are you saying that your configure re-orders the flags that you're
> getting back from MPI installations?

Well, I very much prefer to avoid doing that kind of thing, but S*** happens. Usually I try to convince people to go through CXXFLAGS in the env or something rather than rehack multiple configure scripts.

> Ah, ok. That's easy enough to do (too late for beta, I'll commit this
> on the trunk tonight -- we try not to make configure.ac changes during
> the work day; it keeps peer developer frustration down ;-) ).
> Slight rename, though:
>
> -
> shell$ ./ompi_info --parsable -c | grep compiler: | egrep ':command:|:absolute:'
> compiler:c:command:gcc
> compiler:c:absolute:/usr/i686-pc-linux-gnu/gcc-bin/3.3.5-20050130/gcc
> compiler:cxx:command:g++
> compiler:cxx:absolute:/usr/i686-pc-linux-gnu/gcc-bin/3.3.5-20050130/g++
> compiler:f77:command:g77
> compiler:f77:absolute:/usr/i686-pc-linux-gnu/gcc-bin/3.3.5-20050130/g77
> compiler:f90:command:none
> compiler:f90:absolute:none
> -
>
> That's two minor changes:
>
> 1. Making the second field stay the name for easy grouped grepping
> (e.g., grepping on "compiler:c:" gets all info about the C compiler);
> make the 3rd field be different.
>
> 2. Change it from "which" to "absolute", because "which" reflects a
> command that not everyo
Re: [O-MPI users] re build time
Please paste the quoted text (appropriately expanded) into a readme or install or some other prominent doc location/appendix as soon as possible, if it isn't there already. Details like this matter a lot to a few of us, and many of us haven't completely drunk the 3000 gallons of twisted logic that is the autotool conventions.

thanks, ben

On Thu, Jun 16, 2005 at 08:44:48PM -0400, Jeff Squyres wrote:
>
> The default build is to make libmpi be a shared library and build all
> the components as dynamic shared objects (think "plugins").
>
> But we currently use Autoconf+Automake+Libtool, so to build everything
> static, the standard flags suffice:
>
> ./configure --enable-static --disable-shared
>
> This will make libmpi.a, all the components are statically linked into
> libmpi.a, etc. There's more esoteric configure flags that allow
> building some components as DSOs and others statically linked into
> libmpi, but most people want entirely one way or the other, so I won't
> provide the [uninteresting] details here.
Re: [O-MPI users] Further thoughts
Having been a vict^H^H^H^Hproducer of rpms for hpc apps, and from what I've seen of your installed files (which isn't an extremely large set), I vote as follows:

1) All-in-one. Given the current state of HPC, nearly all "users" are also developers.

2) I'm in favor of source rpms, most particularly if you include the spec files in the source tarball (not just hidden inside the SRPM). The more examples of the proper invocation of configure on specific architectures and network layers, the happier I'm going to be. One could argue the proper place for collecting such examples is a wiki, but in the source is good too. Binary rpms should be the responsibility of the distribution makers (redhat, whoever else), not developers.

Ben

On Thu, Jun 16, 2005 at 09:01:41PM -0400, Jeff Squyres wrote:
> I have some random user questions about RPMs, though:
>
> 1. Would you prefer an all-in-one Open MPI RPM, or would you prefer
> multiple RPMs (e.g., openmpi-doc, openmpi-devel, openmpi-runtime,
> ...etc.)?
>
> 2. We're definitely going to provide an SRPM suitable for "rpmbuild
> --rebuild". However, we're not 100% sure that it's worthwhile to
> provide binary RPMs because everyone's cluster/development systems seem
> to be "one off" from standard Linux distros. Do you want a binary
> RPM(s)? If so, for which distros? (this is one area where vendors
> tend to have dramatically different views than academics/researchers)
Re: [O-MPI users] late comers not welcome?
Please see

  Re: [O-MPI users] Questions on status -- Jeff Squyres (2005-06-15 19:56:09)

in http://www.open-mpi.org/community/lists/users/2005/06/date.php and the long related message thread for clarification on the limitations of the alpha testing and the plans for beta testing. In the meantime, mpich2 (http://www-unix.mcs.anl.gov/mpi/mpich2/) is open for business.

Ben

On Mon, Jul 04, 2005 at 11:33:45AM +0300, Koray Berk wrote:
> Hello,
> This is Koray Berk, from Istanbul Technical University.
> We have a high performance computing lab, with diverse platforms and
> therefore interested in various mpi developments/projects.
>
> I have been instructed by my vice dean to start playing around with open
> mpi, to get a feeling and understanding of it.
> However, as far as I understand, right now, there is nothing for me to
> do but wait until further releases are made, the web site says, you dont
> accept new alpha testers anymore. Is this really the case, or am I
> missing something?
>
> Do you have an estimate of a next release?
> Best Regards,
> Koray
> istanbul
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
[O-MPI users] java?
Hi again,

We have a number of applications interested in mixing java and mpi on clusters. (I know, shudder, but we do...) I thought before I go off and hack that I should see if work is already on-going, if quiet, for open-mpi to have a java and/or gcj binding. Clearly there might be some issues lurking around mpi datatypes... Anyone?

thanks, Ben
Re: [O-MPI users] java?
I went through trying most of those wrapping efforts out a few years ago, with very little luck. Most were out of date or single-platform even then. And mpi2 has since appeared. I was hoping some of those original mpi/java folk might be participating in open-mpi, too. I'm thinking open-mpi might present a chance to create a de facto standard in preparation for extending the standard to include java. Clearly not high on everybody's list of favorite things to think about.

Ben

On Fri, Jul 15, 2005 at 11:18:31AM -0400, Jeff Squyres wrote:
> On Jul 15, 2005, at 10:55 AM, Ben Allan wrote:
>
> > We have a number of applications interested in mixing java and
> > mpi on clusters. (I know, shudder, but we do...)
> > I thought before I go off and hack that I should see if work
> > is already on-going, if quiet, for open-mpi to have a java and/or gcj
> > binding. Clearly there might be some issues lurking around mpi
> > datatypes...
>
> There were actually a few projects some years ago that came up with
> Java bindings for MPI (because Java bindings are not officially part of
> MPI). IIRC, they worked with most MPI's out there (i.e., they were a
> layer over the MPI itself). You might want to google around and see
> what you come up with.
>
> Subject to a little bit rot, they may actually "just work" if they used
> the wrapper compilers to compile themselves, etc...?
>
> --
> {+} Jeff Squyres
> {+} The Open MPI Project
> {+} http://www.open-mpi.org/
Re: [O-MPI users] java?
I'm dealing with mixed-language requirements, the babel interoperability tool from LLNL, gcj, and whatever other mpis or javas I may have to resort to. Good to hear others more or less hit the same issues with the mpi-java prototypes that are published.

Ben

On Fri, Jul 15, 2005 at 04:04:27PM -0400, Scott Robert Ladd wrote:
>
> Are you locked into Java for entire packages, or can you use a mixed
> model with C/C++/Fortran engines wrapped in Java interfaces? I've found
> that approach to be rather rewarding.
>
> ..Scott
[O-MPI users] mpi opaque object fortran type
Hi,

I deal with mixed-language c/c++/fortran codes, and it appears I might be able to define an inter-language opaque reference (e.g. a Comm) as C int64_t for passing to fortran, using the MPI_Comm_c2f/f2c macros to encode/decode it on the C side. The MPI standard says that on the FORTRAN side the object handles are of type INTEGER. Presumably, then, to make sure things are not done accidentally, the FORTRAN interface receiving such an integer from C would declare it INTEGER*8, not default the integer size to the whim of the FORTRAN compiler. On the fortran side, one might need to step the int64_t down to an int32_t (INTEGER*4) before calling into MPI on some of the compilers I know of today.

My question for the MPI implementation wizards is: does anyone know of a current platform where f90 INTEGER is *bigger* than C int64_t/INTEGER*8 (e.g. default is INTEGER*16, yikes!), or where a misplaced fortran compiler option might make that true?

Due to an automated code generator in the process (babel), I have to pick one of INTEGER*4 or INTEGER*8 and stick to it. I'm guessing INTEGER*4 would be a poor choice for MPI opaque objects when calling into some MPI implementations.

Ben
[OMPI users] mpi.h macro naming
Thanks in advance if this is already fixed in a later release I've not caught up to; I'm at 1.2.3.

Is there some subtle reason that ompi's mpi.h leaves the following macros both unguarded with an #ifndef and un-prefixed with OMPI_? This produces considerable amounts of compiler whinage for other codes that include mpi.h. As always, extraneous whinage makes real errors harder to find. (And yes, those other codes also need *their* definitions of HAVE_LONG_LONG, etc. properly protected.) And of course who knows how the answer was defined for any given unprotected appearance of these macros?

/* Define to 1 if the system has the type `long long'. */
#undef HAVE_LONG_LONG

/* The size of a `bool', as computed by sizeof. */
#undef SIZEOF_BOOL

/* The size of a `int', as computed by sizeof. */
#undef SIZEOF_INT

If it's simply a matter of developer hours, I can post a patch somewhere to address this. It appears that of these, only SIZEOF_INT affects more than a few source files.

thanks, Ben Allan
Re: [OMPI users] mpi.h macro naming
On Wed, Feb 20, 2008 at 06:15:27AM -0700, Jeff Squyres wrote:
> The #defines that are in mpi.h are limited to the ones that we need for
> that file itself. More specifically: the majority of the #defines that
> are generated via OMPI's configure are not in mpi.h.

And that's much appreciated.

> Our assumption was that if some other package defined these values, they
> would either likely be coming from the same standard autoconf tests or
> use the same #define conventions as the autoconf tests. As such, the
> values that they are #defined to would be the same (and compilers don't
> whine about multiple #defines of the same macro to the same value -- they
> only whine if the values are different).

The particular offending packages in question are indeed using autoconf/autoheader; however, ompi's defines say

  #define HAVE_LONG_LONG 1

while the others only say

  #define HAVE_LONG_LONG

More ac version madness?

> There's two places that would need to be changed:
>
> - the relevant parts of OMPI's configure script to *also* define an
> OMPI_* equivalent of the macro (which will sometimes mean extracting
> non-public information from the Autoconf tests -- usually a risky
> proposition because Autoconf can change their internals at any time).
> The only safe way I can think of would be to AC_TRY_RUN and write the
> #define'd value out to a temp file. This, of course, won't work for
> cross-compiling environments, though.
>
> - modify mpi.h.in to use the new OMPI_* macros.
>
> Keep in mind that mpi.h only has a small subset of the #defines from
> OMPI's configure script. opal_config.h (an internal OMPI file that is
> not installed) has *all* the #defines; that's what's used to compile the
> OMPI code base. mpi.h replicates a small number of these defines that
> are used by OMPI's public interface.

I will think about this guidance and see what kind of patches and alternative patches I can suggest. I did not detect autoheader being used in the process of building mpi.h; is that correct?
It would make some simpler workarounds easier.

Ben
Re: [OMPI users] signal handling
A build-related question about 1.1.4: is parallel make usage (make -j 8) supported (at least if make is gnu)?

thanks, Ben
Re: [OMPI users] how to identify openmpi in configure script
What you really want, after configure has confirmed openmpi with the macro check, is to extract the needed libraries listed in the output of

  mpif90 -v test.f

Ideally someone could update ompi_config to output the link flags for 3 cases:

  C++ linking without using mpicxx
  F90 linking without using mpif90
  F90 linking without f90

but that's a nontrivial bit of work. The fortran lib extraction process can be rather hairy if done portably -- the babel team at llnl has largely solved it in

  babel-$VERSION/runtime/m4/llnl_confirm_babel_f90_support.m4

and related files, obtainable from http://www.llnl.gov/CASC/components/docs/babel-1.1.0.tar.gz

Ben
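The extraction step can be sketched like this. The mpif90 -v fragment below is a hypothetical sample (real compiler banners vary widely, which is exactly why the babel m4 machinery exists), so treat the parsing as a starting point, not a portable solution:

```shell
# Hypothetical fragment of "mpif90 -v test.f" output; real compilers
# print far more.  The interesting part is the final link command line.
mpif90_v_output='COLLECT_GCC_OPTIONS=...
 /usr/libexec/gcc/collect2 -o a.out test.o -L/usr/lib/gcc -lgfortran -lm -lgcc'

# Keep the last line (the link command) and pull out its -l libraries,
# which are the Fortran runtime libs a C- or C++-driven link would need.
flibs=$(printf '%s\n' "$mpif90_v_output" | tail -n 1 \
        | tr ' ' '\n' | grep '^-l' | xargs)
echo "$flibs"
```

A real configure test would also need the matching -L directories and would have to cope with compilers whose verbose output puts the link step on a different line.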
Re: [OMPI users] how to identify openmpi in configure script
Jeff is right -- if you've already confirmed ompi, just use the ompi-specific arguments to get the MPI LDFLAGS out. I withdraw the comment about adding a feature to ompi_info.

It is unfortunate, but true, that the mpi compiler wrappers give no hint of the existence of -show, --showme:link, etc. before passing the --help argument on to the underlying compilers. Granted, on a proper installation 'man mpif90' will tell you about the -show switches, but there's an awful lot of private installations of mpi where $mpiprefix/man doesn't end up in the manpath and $mpiprefix/bin doesn't end up in the regular PATH.

Ben
Re: [OMPI users] Issues with DL POLY
Are you saying t(single-process execution) < t(4-process execution) for identical problems on each (same total amount of data)? There's rarely a speedup in such a case -- processing the same amount of data while shipping some fraction of it over a slow network between processing steps is almost certain to be slower. Where things get interesting (and encouraging) is if you increase the total data being processed, holding the quantity of data per node constant.

ben allan

On Thu, Jun 07, 2007 at 08:24:03PM -0400, Aaron Thompson wrote:
> Hello,
> Does anyone have experience using DL POLY with OpenMPI? I've gotten
> it to compile, but when I run a simulation using mpirun with two dual-
> processor machines, it runs a little *slower* than on one CPU on one
> machine! Yet the program is running two instances on each node. Any
> ideas? The test programs included with OpenMPI show that it is
> running correctly across multiple nodes.
> Sorry if this is a little off-topic, I wasn't able to find help on
> the official DL POLY mailing list.
>
> Thank you!
>
> Aaron Thompson
> Vanderbilt University
> aaron.p.thomp...@vanderbilt.edu