Re: [OMPI users] flex.exe

2010-01-21 Thread Manuel Prinz
On Thursday, 21 Jan 2010 at 11:52 -0500, Michael Di Domenico wrote:
> openmpi-1.4.1/contrib/platform/win32/bin/flex.exe
> 
> I understand this file might be required for building on Windows;
> since I'm not, I can just delete the file without issue.
> 
> However, for those of us under import restrictions, where binaries are
> not allowed in, this file causes me to open the tarball and delete the
> file (not a big deal, i know, i know).
> 
> But, can I put up a vote for a pure source only tree?

I'm very much in favor of that, since we can't ship this binary in
Debian. We'd have to delete it from the tarball and repack it with every
release, which is quite cumbersome. If these tools could be shipped in a
separate tarball, that would be great!

Best regards
Manuel



Re: [OMPI users] ABI stabilization/versioning

2010-01-25 Thread Manuel Prinz
On Monday, 25 Jan 2010 at 12:11 +0000, Dave Love wrote:
> I assumed that the libraries would then be versioned (at least for ELF
> -- I don't know about other formats) and we could remove a major source
> of grief from dynamically linking against the wrong thing, and I think
> Jeff said that would happen.  However, the current sources don't seem to
> be trying to set libtool version info, though I'm not sure what
> determines them producing .so.0.0.1 instead of .0.0.0 in other binaries
> I have.  This doesn't seem to have been addressed in the Debian or
> Fedora packaging, either.

The ABI has been stable since 1.3.2. OMPI 1.4.x does set the libtool
version info; versions were bumped to 0.0.1 for libmpi, which has no
effect on dynamic linking.

Could you please elaborate on what needs to be addressed? Debian does
not have 1.4.1 yet, though I'm planning to upload it really soon. The ABI
did not change (at least not in an incompatible way, AFAICS). If you know of
any issues, I'd be glad if you could tell us, so we can find a solution
before any damage is done. Thanks in advance!

Best regards
Manuel



Re: [OMPI users] [Pkg-openmpi-maintainers] Open MPI and mpi-defaults

2009-01-07 Thread Manuel Prinz
On Wednesday, 7 Jan 2009 at 06:29 -0500, Jeff Squyres wrote:
> It sounds like we need an alpha implementation (and mips?) of our  
> assembly code...

A patch for MIPS can be found in the Debian bug tracker [1]. There are
still some issues with it; maybe one of you has a clue. The patch
did not apply cleanly to 1.2.8, so I updated it. I do not have access to
a MIPS machine at the moment, so I can't test whether or not the patch
is correct/working. Nevertheless, I hope it may be useful.

Best regards
Manuel

[1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=489173




Re: [OMPI users] [Pkg-openmpi-maintainers] Open MPI and mpi-defaults

2009-01-07 Thread Manuel Prinz
On Tuesday, 6 Jan 2009 at 16:33 -0500, Adam C Powell IV wrote:
> Okay, found it.  This function is inline assembly in timer.h, which
> exists in opal/sys/amd64, ia32, ia64, powerpc and sparcv9 but not alpha,
> mips, sparc or win32.  That said, timer.h in opal/sys has:
> 
> #ifndef OPAL_HAVE_SYS_TIMER_GET_CYCLES
> #define OPAL_HAVE_SYS_TIMER_GET_CYCLES 0
> 
> which somehow is working on sparc (no reference to this function in the
> buildd log) but not alpha.  (On mips, there are a bunch of assembler
> errors of the form "opcode not supported on this processor".)

It probably works on sparc because Debian only supports sparcv9
(TTBOMK), which has the timer functions implemented in
opal/include/opal/sys/sparcv9/timer.h. The build log also reads:

checking for asssembly architecture... SPARCV9_32

> That's about what I have time for now.  Don't worry about mpi-defaults,
> it's not trying to get into Lenny; but we should worry about OpenMPI not
> building on alpha.

The reasoning for reassigning was that Open MPI built fine on alpha
before (first breakage in 1.2.8), so something has changed. I'm still
trying to find out what.

Thanks for investigating the issue!

Best regards
Manuel




Re: [OMPI users] Configure OpenMPI and SLURM on Debian (Lenny)

2009-03-24 Thread Manuel Prinz
Hi Jerome!

On Tuesday, 24 Mar 2009 at 16:27 +0800, Jerome BENOIT wrote:
> With LAM some configuration files must be set up, I guess it is the same here.
> But as SLURM is also involved, it is not clear to me right now how I must
> configure both SLURM and OpenMPI to make them work together. Any hint is
> welcome!

Open MPI integrates nicely with SLURM in Lenny. All you need to do is
launch your job with mpiexec or mpirun; you do not need to set up anything
besides SLURM. If you use sbatch, you could create a file like this:

cat >test.sbatch <
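The example above is cut off in the archive; as an illustration (the job name, task count, and program name are all hypothetical), such a batch script might look like:

```shell
#!/bin/sh
#SBATCH --job-name=test       # hypothetical job name
#SBATCH --ntasks=4            # hypothetical number of MPI processes

# Open MPI detects the SLURM allocation automatically, so mpirun
# needs neither a hostfile nor an explicit -np here.
mpirun ./my_mpi_program
```

Submitting it with `sbatch test.sbatch` should then start the requested number of processes on the allocated nodes.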

Re: [OMPI users] Configure OpenMPI and SLURM on Debian (Lenny)

2009-03-25 Thread Manuel Prinz
On Wednesday, 25 Mar 2009 at 00:38 +0800, Jerome BENOIT wrote:
> Is there a way to check that SLURM and OpenMPI communicate as expected?

You can check whether mpirun starts as many instances as you requested
via SLURM. Also, you could check whether the hostnames of the hosts your
job ran on match those allocated to you by SLURM. There are probably more
sophisticated methods, but I would start with that.
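Both checks can be done with standard tools; a sketch (node and task counts are arbitrary) assuming a plain SLURM/Open MPI setup:

```shell
# Let every MPI rank print its host; the line count and the hostnames
# should match what SLURM allocated:
salloc -N 2 -n 4 mpirun hostname

# Compare against the node list SLURM reports for your jobs:
squeue -u $USER -o "%N"
```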

Best regards
Manuel



Re: [OMPI users] [Fwd: Re: Configure OpenMPI and SLURM on Debian (Lenny)]

2009-03-27 Thread Manuel Prinz
On Friday, 27 Mar 2009 at 11:01 +0800, Jerome BENOIT wrote:
> Finally I succeeded with the sbatch approach ... when my firewalls are
> stopped!
> So I guess that I have to configure my firewall (I use firehol):
> I have just tried but without success. I will try again later.
> Are there any other ports than the SLURM ones which are involved?

On the SLURM side, you have to open the ports to the SLURM control
daemons and the service that handles the credentials. On Debian systems
that's MUNGE.

Also, you need to open ports for the MPI processes to communicate. The
port range is rather wide, so the easiest setup (I guess) is not to use
a firewall between computing nodes.
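If running without a firewall between nodes is not an option, Open MPI's TCP traffic can be confined to a fixed range that the firewall then opens explicitly. A sketch (the port values are purely illustrative; check `ompi_info --param btl tcp` for the parameters your version supports):

```shell
# Confine the TCP BTL to ports 10000-10099 (illustrative range),
# then allow exactly that range between the compute nodes:
mpirun --mca btl_tcp_port_min_v4 10000 \
       --mca btl_tcp_port_range_v4 100 \
       ./my_mpi_program
```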

Best regards
Manuel



Re: [OMPI users] [Fwd: Re: Configure OpenMPI and SLURM on Debian (Lenny)]

2009-03-27 Thread Manuel Prinz
On Friday, 27 Mar 2009 at 20:34 +0800, Jerome BENOIT wrote:
> I have just tried the Sid package (1.3-2), but it does not work properly
> (when the firewalls are off).

Though this should work, the version in Sid is broken in other respects.
I do not recommend using it.

> I have just read that the current stable version for OpenMPI is now 1.3.1:
> I will give it a try once it is packaged in Sid.

I'm the Open MPI maintainer in Debian and am planning to upload a fixed
version soon, possibly around the middle of next week. (It has to be
coordinated with the release team.) There is a working version available
in SVN (try "debcheckout openmpi"). You can either build it yourself or
I could build it for you; you can mail me in private if you'd like me to
do so. Please note that installing the new version will break other
MPI-related Debian packages. I can explain the details in private
mail, since I think this is off-topic for the list.

Best regards
Manuel