On Thu, Mar 10, 2005 at 11:49:52AM -0500, Larry Stewart wrote:
> The presentation ignores the issue of instruction set. Even within
> the x86 family we have IA-32, EM64T, and AMD64.
Larry,
Thanks for sending some interesting comments.
The presentation wasn't intended to be all things to all people. [...]
On Fri, Mar 11, 2005 at 02:21:58PM +1100, Stuart Midgley wrote:
> One major implementation issue is the equivalent of mpirun (which I
> assume would be part of an ABI?) - or the startup requirements of
> different MPI's.
This may or may not be part of an ABI.
The reason to not include it is [...]
Jeff,
One of the interesting aspects of an MPI ABI is that some folks won't
be convinced that it's interesting until it's been proven to work,
both technically and socially, which is putting the cart before the
horse to a certain extent. It's also not helpful that all I
distributed [...]
On Thu, Mar 17, 2005 at 12:29:22PM, Neil Storer wrote:
> Be careful when you say: [...]
Neil,
I think that you'll find that pathf90 accepts -1 for TRUE, so this is
easily handled by the binding for MPI. I'd have to write some test
programs to be sure, and I'll get back to you on that. [...]
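As an illustration of the binding-side fix, here is a hypothetical
sketch in C (not from any real MPI; FORTRAN_TRUE_VALUE stands in for a
constant that configure would detect). The only assumption is that
.FALSE. is 0 and .TRUE. is some nonzero value:

#include <mpi.h>

#define FORTRAN_TRUE_VALUE (-1)        /* assumption: detected by configure;
                                          1 for many compilers, -1 for
                                          pathf90-style compilers */

static int fortran_logical_to_c(MPI_Fint f)
{
    return f != 0;                     /* any nonzero LOGICAL is true */
}

static MPI_Fint c_to_fortran_logical(int flag)
{
    return flag ? (MPI_Fint)FORTRAN_TRUE_VALUE : 0;
}

With that, the rest of the binding never cares which bit pattern the
Fortran compiler picked.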
> Create a new software project (preferably open source, preferably with
> a BSD-like license so that ISV's can incorporate this software into
> their products) that provides a compatibility layer for all the
> different MPI implementations out there. Let's call it MorphMPI.
Jeff,
A similar [...]
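To make the MorphMPI idea concrete, here is a rough sketch (all
morph_* names are invented for illustration; this is nobody's real
code). The application links once against the shim's fixed types and
symbols, and each copy of the shim forwards to whichever real MPI it
was compiled against:

#include <mpi.h>    /* the real MPI this copy of the shim was built against */

typedef int morph_Comm;            /* fixed-size handle in the stable layer */
#define MORPH_COMM_WORLD 0

static MPI_Comm handle_table[16];  /* shim handle -> native handle */

int morph_Init(int *argc, char ***argv)
{
    int err = MPI_Init(argc, argv);
    handle_table[MORPH_COMM_WORLD] = MPI_COMM_WORLD;
    return err;
}

int morph_Comm_rank(morph_Comm comm, int *rank)
{
    return MPI_Comm_rank(handle_table[comm], rank);
}

int morph_Finalize(void)
{
    return MPI_Finalize();
}

Rebuilding just this thin layer per MPI implementation, while the
application binary stays untouched, is the whole trick.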
On Fri, Mar 25, 2005 at 02:59:18PM -0500, Patrick Geoffray wrote:
> I don't see it that way. First, the implementations of the translation
> layers will be done by each MPI implementation.
In which case it's basically the same as doing an ABI. Or did I miss
something? Does this somehow save [...]
On Fri, Mar 25, 2005 at 06:03:15PM -0500, Patrick Geoffray wrote:
> What Jeff thought is a nightmare, I believe, is to have to decide a
> common interface and then force the MPI implementations to adopt this
> interface internally instead of having them translate on the fly.
Ah. But no one ever [...]
On Fri, Mar 25, 2005 at 05:49:13PM -0500, Jeff Squyres wrote:
> MorphMPI (or, as Patrick suggests, we need a cooler name -- PatrickMPI?
> ;-) ) is the work of 1 clever grad student (or anyone else industrious
> enough). Elapsed time: a few months.
Right.
> Making even 2 MPI implementations agree [...]
On Sat, Mar 26, 2005 at 06:47:41AM -0500, Jeff Squyres wrote:
> Regardless of which way you choose, your statement "No internals have
> to change" is inaccurate. At a minimum, *EVERY* MPI API function in
> somebody's implementation will have to change.
That's what I call the interface, yes. [...]
On Sun, Apr 03, 2005 at 02:19:39PM -0400, Jeff Squyres wrote:
> If so, are we therefore in agreement that a MorphMPI-like approach is a
> good first step?
No, because we apparently disagree about what MorphMPI is. You claim
it's a lot less work than an ABI; I claim it's about the same. We
both [...]
On Sat, Sep 10, 2005 at 04:45:48PM -0500, Brian Barrett wrote:
> I think that this is a fairly easy fix - Irix identifies any MIPS
> chip as a mips-* from config.guess, but Linux apparently makes a
> distinction between mips and mips64.
That's because there is a difference between mips and mips64. [...]
On Sun, Sep 11, 2005 at 09:24:02AM -0500, Brian Barrett wrote:
> I'll admit that my only interaction with the MIPS architectures is
> the R4K and above in SGI systems, so I'm a bit out of my league.
The R4K was the first MIPS64 chip. I bought a second-hand manual for
it online for a few bucks. [...]
On Mon, Sep 12, 2005 at 07:24:43PM -0500, Brian Barrett wrote:
> By the way, to clarify, the assembly
> has been tested on a MIPS R14K in 64 bit mode (and 32 bit mode using
> SGI's n32 ABI -- it will not work with their o32 ABI).
In Linux/binutils lingo, that means it's MIPS64 code, and not MIPS32. [...]
> ignoring the politics for a moment, what are the technical sticking points?
MPI types
values of constants
Fortran name-mangling
Fortran LOGICAL
program startup (optional)
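The first two items are easy to see for yourself. The toy program
below (my example, not from the thread) prints visibly different
things depending on the mpi.h it was compiled against: with
MPICH-derived implementations MPI_Comm is an int and MPI_COMM_WORLD a
small magic constant, while with Open MPI it is a pointer to an
internal structure.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* Neither the size nor the value of a communicator handle is
     * portable across implementations, so a binary compiled against
     * one mpi.h cannot simply be pointed at another library. */
    printf("sizeof(MPI_Comm) = %zu\n", sizeof(MPI_Comm));
    printf("MPI_COMM_WORLD   = %#lx\n", (unsigned long)MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}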
> for instance, I have the impression that the Linux x86_64 ABI is reasonably
> well-defined,
It mostly works. We have run [...]
> The government is one of the few forces that could mandate a proper MPI
> ABI at this point in time;
They certainly aren't the only ones -- vendors of proprietary
applications that use MPI, plus vendors of interconnect hardware, get
significant benefits from an ABI.
Anyone who wants to distribute [...]
On Wed, Oct 12, 2005 at 12:05:13PM +0100, Ashley Pittman wrote:
> Thirdly is the performance issue: any MPI vendor worth his salt tries
> very hard to reduce the number of function calls and libraries between
> the application and the network, and adding another one is a step in
> the wrong direction.
On Wed, Oct 12, 2005 at 07:06:54PM +0100, Ashley Pittman wrote:
> As it turns out I'm in a position to measure this fairly easily. Our MPI
> sits on top of a library called libelan, which does all the tag matching
> at a very low level; all MPI does is convert the communicator into a bit
> pattern, [...]
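For what it's worth, the pure function-call cost of one extra
forwarding layer is easy to bound with a generic micro-benchmark like
the one below (my sketch, nothing to do with libelan itself; needs
gcc or clang for the noinline attribute):

#include <stdio.h>
#include <time.h>

#define N 100000000UL

/* noinline keeps the compiler from collapsing the two layers. */
static __attribute__((noinline)) unsigned long base_call(unsigned long x)
{
    return x + 1;
}

static __attribute__((noinline)) unsigned long wrapped_call(unsigned long x)
{
    return base_call(x);               /* the extra layer */
}

static double bench(unsigned long (*f)(unsigned long))
{
    struct timespec t0, t1;
    unsigned long acc = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < N; i++)
        acc = f(acc);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("(acc=%lu)\n", acc);        /* keep the loop live */
    return (t1.tv_sec - t0.tv_sec) + 1e-9 * (t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
    printf("direct:  %.2f ns/call\n", 1e9 * bench(base_call) / N);
    printf("wrapped: %.2f ns/call\n", 1e9 * bench(wrapped_call) / N);
    return 0;
}

The delta is typically on the order of a nanosecond per call, which is
the right order of magnitude to weigh against a network latency.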
Dries,
We have built Open MPI 1.0 with our compilers, but we don't build the SVN trunk.
In your case you seem to have some kind of SUSE weirdness going on. Is
it possible you have a gcc-3.3 compatibility package of some kind
installed? We should probably take this conversation over to
supp...@pathscale.com, but [...]
On Fri, Feb 24, 2006 at 04:44:19PM +0100, Benoit Semelin wrote:
> Well I have actually tried with 2 different versions: 8.0.034 and
> 8.1.018. The config.log is attached (I hope this works on the mailing
> list...)
Looks like ifort on your system is generating 32-bit .o files by
default. [...]
On Wed, Mar 01, 2006 at 09:56:38AM -0500, George Bosilca wrote:
> But, even if it was
> right to do so, it should at least print a warning message to inform
> me that I was stupid enough to make such basic mistakes ...
Most compilers don't, because they don't want to print warnings for
codes which [...]
On Thu, Mar 09, 2006 at 09:13:46PM -0500, Brian Barrett wrote:
> I think I have this fixed on the trunk. It looks like PGI tried to
> make the 6.1 compilers support GCC inline assembly, but it doesn't
> look like it's 100% correct, [...]
... and that's no surprise. The spec in the gcc info pages [...]
On Tue, Jul 11, 2006 at 12:23:16PM -0400, George Bosilca wrote:
> I doubt that icc should know anything about
> the gxx_personality. In fact it looks like icc is trying to use some
> libraries compiled with g++.
As an aside, both Intel C++ and PathScale C++ are 100% g++ compatible.
Symbols like __gxx_personality_v0 [...]
On Tue, Sep 05, 2006 at 11:50:54AM -0400, George Bosilca wrote:
> Yes and yes. However, these architectures fit better on a different
> programming model. If you want to get the max performance out of
> them, an OMP approach (instead of MPI) is more suitable.
Eh? You mean all of those examples [...]
On Tue, Oct 03, 2006 at 12:01:37PM -0600, Troy Telford wrote:
> I can't claim to know which ones are *known* to work, but I've never seen
> an IB HCA that didn't work with Open MPI.
Ditto. Ours works fine with the OFED stack, and there's also
"accelerated" support for our PSM software interface.
On Tue, Nov 07, 2006 at 05:02:54PM +, Miguel Figueiredo Mascarenhas Sousa
Filipe wrote:
> if your application is on one given node, sharing data is better than
> copying data.
Unless sharing data repeatedly leads you to false sharing and a loss
in performance.
> the MPI model assumes you don't [...]
On Wed, Nov 08, 2006 at 09:57:18AM -0500, Hugh Merz wrote:
> The conventional wisdom of pure MPI being as good as hybrid models
> is primarily driven by the fact that people haven't had much incentive
> to re-write their algorithms to support both models.
Actually, I was thinking of apps where [...]
On Wed, Nov 08, 2006 at 12:25:20PM +, Miguel Figueiredo Mascarenhas Sousa
Filipe wrote:
> > Unless sharing data repeatedly leads you to false sharing and a loss
> > in performance.
>
> what does that mean? I did not understand that.
Google indexes a bunch of good webpages on "false sharing".
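Since the question came up, here is a self-contained toy (my example;
the 64-byte cache-line size is an assumption). Two threads increment
two *different* counters, but in layout 0 the counters sit in the same
cache line, so the line ping-pongs between cores; layout 1 pads them
apart. Run it under time(1) with argument 0 and then 1 -- the padded
layout is typically several times faster.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 200000000UL
#define LINE  64                       /* assumed cache-line size */

static struct { volatile unsigned long a, b; } packed;          /* one line */
static struct { volatile unsigned long v; char pad[LINE]; } spread[2];

static volatile unsigned long *slot[2][2] = {
    { &packed.a,    &packed.b    },    /* layout 0: false sharing */
    { &spread[0].v, &spread[1].v },    /* layout 1: padded apart  */
};
static int layout;

static void *worker(void *arg)
{
    volatile unsigned long *p = slot[layout][arg != NULL];
    for (unsigned long i = 0; i < ITERS; i++)
        (*p)++;
    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t t0, t1;
    layout = (argc > 1) ? atoi(argv[1]) : 0;
    pthread_create(&t0, NULL, worker, (void *)0);
    pthread_create(&t1, NULL, worker, (void *)1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("layout %d finished\n", layout);
    return 0;
}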
On Fri, Dec 01, 2006 at 11:51:24AM +0100, Peter Kjellstrom wrote:
> This might be a bit naive but, if you spawn two procs on a dual-core dual-
> socket system then the Linux kernel should automagically schedule them this
> way.
No, we checked this for OpenMP and MPI, and in both cases wiring the
processes to specific cores explicitly made a difference. [...]
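By "wiring" I mean explicit binding. On Linux the core of it is one
call; a bare-bones sketch follows, where the rank-to-core mapping is
an arbitrary illustrative choice, not what any particular MPI does:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to a single core.  Returns 0 on success. */
static int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return sched_setaffinity(0, sizeof(set), &set);   /* pid 0 = self */
}

int main(void)
{
    if (pin_to_core(0) != 0)        /* e.g. local rank % cores per node */
        perror("sched_setaffinity");
    /* ...from here on, the process stays on core 0... */
    return 0;
}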
On Sat, Dec 02, 2006 at 10:31:30AM -0500, Jeff Squyres wrote:
> FWIW, especially on NUMA machines (like AMDs), physical access to
> network resources (such as NICs / HCAs) can be much faster on
> specific sockets.
Yes, the penalty is actually 50 ns per hop, and you pay it on both
sides. [...]