[OMPI users] vers 1.6.1

2012-05-24 Thread Ricardo Reis
Hi. When is 1.6.1 expected to go public? best, Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist.utl.pt Cultural Instigator @ Rádio

Re: [OMPI users] MPI-IO puzzlement

2012-05-16 Thread Ricardo Reis
all problems gone, thanks for the input and assistance. cheers, Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist.utl.pt Cultural Instigator @

Re: [OMPI users] MPI-IO puzzlement

2012-05-16 Thread Ricardo Reis
the other one) Anyway, although it becomes obvious after tracking it down, I think it can be a common pitfall for the unaware... best, Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Computational Fluid Dynamics, High Performance Computi

Re: [OMPI users] MPI-IO puzzlement

2012-05-15 Thread Ricardo Reis
On Tue, 15 May 2012, Jeff Squyres wrote: On May 15, 2012, at 2:19 PM, Ricardo Reis wrote: INTEGER(kind=MPI_OFFSET_KIND) :: offset MPI_OFFSET_KIND is insufficient to represent my offset... Is it not a 64-bit integer for your compiler? I'm still interested in the answer to this que
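For illustration, a quick way to answer that question on a given system; a minimal sketch, assuming the Fortran mpi module and a Fortran 2008 compiler (program name hypothetical):

    program check_offset_kind
       use mpi
       implicit none
       integer :: ierr
       integer(kind=MPI_OFFSET_KIND) :: offset
       call MPI_Init(ierr)
       ! On essentially all current platforms MPI_OFFSET_KIND selects an
       ! 8-byte integer kind, so huge(offset) prints 9223372036854775807.
       print *, 'MPI_OFFSET_KIND     =', MPI_OFFSET_KIND
       print *, 'storage size (bits) =', storage_size(offset)
       print *, 'largest offset      =', huge(offset)
       call MPI_Finalize(ierr)
    end program check_offset_kind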

Re: [OMPI users] MPI-IO puzzlement

2012-05-15 Thread Ricardo Reis
On Tue, 15 May 2012, Jeff Squyres wrote: On May 15, 2012, at 10:53 AM, Ricardo Reis wrote: My problem is rather that INTEGER(kind=MPI_OFFSET_KIND) :: offset MPI_OFFSET_KIND is insufficient to represent my offset... Is it not a 64-bit integer for your compiler? There *is* a bug in OMPI at

Re: [OMPI users] MPI-IO puzzlement

2012-05-15 Thread Ricardo Reis
My problem is rather that INTEGER(kind=MPI_OFFSET_KIND) :: offset MPI_OFFSET_KIND is insufficient to represent my offset... best, Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Computational Fluid Dynamics, High Performance

Re: [OMPI users] MPI-IO puzzlement

2012-05-15 Thread Ricardo Reis
ffset -2045256448 offset is of type MPI_OFFSET_KIND, which seems insufficient to hold the correct size for the offset. So... am I condemned to write my own MPI data type so I can write the files? ideas...? best regards, Ricardo Reis 'Non Serviam' PhD/MSc Mechanical En
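A negative offset like the one above is likely the classic symptom of 32-bit overflow in the offset arithmetic, not of MPI_OFFSET_KIND itself being too small; a minimal sketch of the pitfall and its fix, with hypothetical rank and element count:

    program offset_overflow
       use mpi
       implicit none
       integer :: my_rank, n
       integer(kind=MPI_OFFSET_KIND) :: offset
       my_rank = 4          ! hypothetical rank
       n = 75000000         ! hypothetical per-rank count of 8-byte reals
       ! Pitfall: the right-hand side is evaluated in default 32-bit INTEGER
       ! arithmetic; it overflows (wrapping to a negative value on common
       ! compilers) before the assignment widens it to MPI_OFFSET_KIND.
       offset = my_rank * n * 8
       print *, 'overflowed offset:', offset
       ! Fix: force 64-bit arithmetic from the first operand onwards.
       offset = int(my_rank, MPI_OFFSET_KIND) * n * 8
       print *, 'correct offset:   ', offset    ! 2400000000
    end program offset_overflow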

Re: [OMPI users] MPI-IO puzzlement

2012-05-10 Thread Ricardo Reis
what file system is this on? Gluster connected by InfiniBand. All disks are in the same machine; everything speaks over InfiniBand. Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Computational Fluid Dynamics, High Performance Computing,

Re: [OMPI users] MPI-IO puzzlement

2012-05-10 Thread Ricardo Reis
have some feedback. Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist.utl.pt Cultural Instigator @ Rádio Zero http://www.radiozero.pt http://ww

[OMPI users] MPI-IO puzzlement

2012-05-10 Thread Ricardo Reis
parently in the MPI_write_at_all call. Any ideas of what to do or where to look are welcome. best, Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist
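For readers landing on this thread: the call under discussion takes the file offset as an explicit argument, so the whole offset computation has to stay in MPI_OFFSET_KIND. A minimal self-contained sketch (file name and sizes hypothetical):

    program write_at_all_demo
       use mpi
       implicit none
       integer, parameter :: n = 1000
       integer :: ierr, my_rank, fh
       integer(kind=MPI_OFFSET_KIND) :: offset
       real(kind=8) :: buf(n)
       call MPI_Init(ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
       buf = real(my_rank, 8)
       ! Each rank writes its block at a rank-dependent byte offset,
       ! computed entirely in MPI_OFFSET_KIND arithmetic.
       offset = int(my_rank, MPI_OFFSET_KIND) * n * 8
       call MPI_File_open(MPI_COMM_WORLD, 'out.dat', &
            IOR(MPI_MODE_WRONLY, MPI_MODE_CREATE), MPI_INFO_NULL, fh, ierr)
       call MPI_File_write_at_all(fh, offset, buf, n, MPI_DOUBLE_PRECISION, &
            MPI_STATUS_IGNORE, ierr)
       call MPI_File_close(fh, ierr)
       call MPI_Finalize(ierr)
    end program write_at_all_demo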

Re: [OMPI users] OpenMPI with SGE: "-np N" for mpirun needed?

2012-05-09 Thread Ricardo Reis
E=hostfile awk '{print $1" cpu="$2}' ${PE_HOSTFILE} > ${HOSTFILE} mpirun -machinefile ${HOSTFILE} -np ${NSLOTS} ${EXEC} ) best (sorry if I extended the answer) Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Compu

Re: [OMPI users] OpenMPI with SGE: "-np N" for mpirun needed?

2012-05-09 Thread Ricardo Reis
of processors from SGE, but I would like to have some more solid confirmation. You might want to use a smaller number of processors than those made available by SGE. best, Ricardo Reis 'Non Serviam' PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering Computational Fl

Re: [OMPI users] MPI_File_Read_all and large file

2011-03-02 Thread Ricardo Reis
he file size per process must be lower than 4 GB. There was a discussion a short time ago about this... best, Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist.utl.pt Cultural Instiga
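The limit comes from the count argument being a default (32-bit) INTEGER, not from the file size as such; the usual workaround is to pack many elements into one derived datatype so the count stays small. A minimal sketch, with hypothetical sizes:

    program big_read
       use mpi
       implicit none
       integer, parameter :: chunk = 1000000   ! doubles per datatype element
       integer :: ierr, fh, chunktype, nchunks
       real(kind=8), allocatable :: buf(:)
       call MPI_Init(ierr)
       nchunks = 600                           ! hypothetical: 4.8 GB of doubles
       allocate(buf(int(nchunks, 8) * chunk))
       ! 'chunk' doubles become ONE element of 'chunktype', so the 32-bit
       ! count argument only has to hold 'nchunks', not nchunks*chunk.
       call MPI_Type_contiguous(chunk, MPI_DOUBLE_PRECISION, chunktype, ierr)
       call MPI_Type_commit(chunktype, ierr)
       call MPI_File_open(MPI_COMM_WORLD, 'big.dat', MPI_MODE_RDONLY, &
            MPI_INFO_NULL, fh, ierr)
       call MPI_File_read_all(fh, buf, nchunks, chunktype, &
            MPI_STATUS_IGNORE, ierr)
       call MPI_File_close(fh, ierr)
       call MPI_Type_free(chunktype, ierr)
       call MPI_Finalize(ierr)
    end program big_read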

Re: [OMPI users] OpenMPI LAM ?

2010-12-17 Thread Ricardo Reis
ppens if you do `mpirun --version` in the said script (or node)? (What I mean is: how can you be sure that it is not LAM's mpirun being called on that particular node?) best, Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Co

Re: [OMPI users] mpi-io, fortran, going crazy... (ADDENDUM)

2010-11-17 Thread Ricardo Reis
uter 2010. Hard to catch their attention right now, but eventually somebody will clarify this. oh, just a small grain of sand... doesn't seem worth stopping the whole machine for it... :) many thanks all Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computation

Re: [OMPI users] mpi-io, fortran, going crazy... (ADDENDUM)

2010-11-17 Thread Ricardo Reis
is exactly 2^31-1). Thanks for the explanation. Then this should be updated in the spec, no...? cheers! Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist.utl.pt Cultural Instigator @

Re: [OMPI users] mpi-io, fortran, going crazy... (ADDENDUM)

2010-11-17 Thread Ricardo Reis
Big files with normal Fortran shouldn't this behaviour be found with MPI-IO? And, more to the point, if not, shouldn't it be documented somewhere? Does anyone know if this carries over to other MPI implementations (or is the answer "download, try it and tell us?") b

Re: [OMPI users] mpi-io, fortran, going crazy... (ADDENDUM)

2010-11-17 Thread Ricardo Reis
On Tue, 16 Nov 2010, Gus Correa wrote: Ricardo Reis wrote: and sorry to be such a nuisance... but any motive for an MPI-IO "wall" between 2.0 and 2.1 GB? Greetings, Ricardo Reis! Is this "wall" perhaps the 2GB Linux file size limit on 32-bit systems? No. This is a

Re: [OMPI users] mpi-io, fortran, going crazy... (ADDENDUM)

2010-11-16 Thread Ricardo Reis
and sorry to be such a nuisance... but any motive for an MPI-IO "wall" between 2.0 and 2.1 GB? (1 MPI process) best, Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.i
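The arithmetic fits the observed wall: assuming counts are held in a default 32-bit signed INTEGER (as the later replies in this thread confirm), the largest value is 2^31 - 1 = 2147483647 bytes, which is exactly 2.0 GiB and so lands between 2.0 and 2.1 when file sizes are read in binary gigabytes. A two-line check:

    program int_wall
       implicit none
       ! largest value of a default (signed 32-bit) INTEGER:
       print *, huge(1)    ! 2147483647 = 2**31 - 1, i.e. 2.0 GiB in bytes
    end program int_wall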

Re: [OMPI users] mpi-io, fortran, going crazy...

2010-11-16 Thread Ricardo Reis
pe for writing... ideas? Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist.utl.pt Cultural Instigator @ Rádio Zero http://www.radiozero.pt Keep them Flying! Help Aero Fénix!

[OMPI users] mpi-io, fortran, going crazy...

2010-11-16 Thread Ricardo Reis
one... doesn't write! the code is here: http://aero.ist.utl.pt/~rreis/test_io.f90 can some kind soul just look at it and give some input? or, simply, point me to where the meaning of Fortran error no. 3 is explained? best and many thanks for your time, Ricardo Reis 'Non

Re: [OMPI users] OS X - Can't find the absoft directory

2010-04-19 Thread Ricardo Reis
ly sh is just a symlink to bash... 2028.0 $ ls -l /bin/sh lrwxrwxrwx 1 root root 4 Sep 7 2009 /bin/sh -> bash Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist.utl.pt Cultural I

[OMPI users] OpenMPI, OpenMP, threads and hybrid programming...

2010-04-17 Thread Ricardo Reis
e something I should be on the watch for to make this work? I've already taken care of making the send and receive buffers THREAD_PRIVATE. cheers and thanks for your input, Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Computing
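One thing worth checking in this kind of hybrid setup is the thread-support level actually granted at initialization; a minimal sketch, assuming the Fortran mpi module:

    program hybrid_init
       use mpi
       implicit none
       integer :: ierr, provided
       ! Ask for full thread support and check what the library grants;
       ! with anything below MPI_THREAD_MULTIPLE, concurrent MPI calls
       ! from several OpenMP threads are not safe.
       call MPI_Init_thread(MPI_THREAD_MULTIPLE, provided, ierr)
       if (provided < MPI_THREAD_MULTIPLE) then
          print *, 'only thread level', provided, 'is provided'
       end if
       call MPI_Finalize(ierr)
    end program hybrid_init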

Re: [OMPI users] OS X - Can't find the absoft directory

2010-04-17 Thread Ricardo Reis
environment variable set, for instance, in the init file of your shell. Please read http://www.absoft.com/Support/FAQ/lixfaq_installation.htm Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.

Re: [OMPI users] Best way to reduce 3D array

2010-04-05 Thread Ricardo Reis
On Mon, 5 Apr 2010, Rob Latham wrote: On Tue, Mar 30, 2010 at 11:51:39PM +0100, Ricardo Reis wrote: If using the master/slave IO model, would it be better to cycle through all the processes and have each one write its part of the array into the file. This file would be open in "st

Re: [OMPI users] Best way to reduce 3D array

2010-03-31 Thread Ricardo Reis
On Tue, 30 Mar 2010, Gus Correa wrote: Greetings, Ricardo Reis! How is Radio Zero doing? :) busy, busy, busy. we are preparing to celebrate Yuri's Night, April the 12th! Doesn't this serialize the I/O operation across the processors, whereas MPI_Gather followed by rank_0 I/O may pe

Re: [OMPI users] Best way to reduce 3D array

2010-03-30 Thread Ricardo Reis
write_to_file
   closefile
endif
call MPI_Barrier(world, ierr)
enddo
cheers, Ricardo Reis 'Non Serviam' PhD candidate @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence http://www.lasef.ist.utl.pt Cultural Instigator @ Rádio Zero http:
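A self-contained sketch of the pattern quoted above (file name and chunk size hypothetical): each rank in turn appends its block to a stream-access file, with a barrier ensuring rank i finishes before rank i+1 starts.

    program serial_writer
       use mpi
       implicit none
       integer :: ierr, my_rank, nprocs, i
       real(kind=8) :: my_chunk(1000)
       call MPI_Init(ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
       call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
       my_chunk = real(my_rank, 8)
       do i = 0, nprocs - 1
          if (my_rank == i) then
             if (i == 0) then   ! first writer creates/truncates the file
                open(10, file='field.dat', form='unformatted', &
                     access='stream', status='replace')
             else               ! later writers append their block
                open(10, file='field.dat', form='unformatted', &
                     access='stream', position='append')
             end if
             write(10) my_chunk
             close(10)
          end if
          call MPI_Barrier(MPI_COMM_WORLD, ierr)  ! serialize the writers
       end do
       call MPI_Finalize(ierr)
    end program serial_writer

As the reply above notes, this serializes the I/O across processes: it trades bandwidth for simplicity and low memory use compared with MPI_Gather followed by rank-0 I/O.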

Re: [OMPI users] MPI-IO, providing buffers

2009-12-19 Thread Ricardo Reis
make a copy yourself and allow the original buffer to be freed. Thanks. So in an asynchronous write, the old buffer would only be available after the I/O has ended. So maybe I really need to think about setting some process aside just for I/O... Ricardo Reis 'Non Serviam' PhD candidate @
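A minimal sketch of that conclusion using the nonblocking MPI_File_iwrite (file name and sizes hypothetical): the private copy lets the computation overwrite its own array while the I/O is in flight, and the copy itself is reusable only after the matching MPI_Wait.

    program iwrite_copy
       use mpi
       implicit none
       integer, parameter :: n = 1000
       integer :: ierr, fh, request
       real(kind=8) :: field(n), iobuf(n)
       call MPI_Init(ierr)
       call MPI_File_open(MPI_COMM_WORLD, 'snap.dat', &
            IOR(MPI_MODE_WRONLY, MPI_MODE_CREATE), MPI_INFO_NULL, fh, ierr)
       field = 1.0d0
       iobuf = field        ! private copy: 'field' may now change freely
       call MPI_File_iwrite(fh, iobuf, n, MPI_DOUBLE_PRECISION, request, ierr)
       field = 2.0d0        ! computation continues while the write runs
       call MPI_Wait(request, MPI_STATUS_IGNORE, ierr)  ! now iobuf is free
       call MPI_File_close(fh, ierr)
       call MPI_Finalize(ierr)
    end program iwrite_copy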

[OMPI users] MPI-IO, providing buffers

2009-12-17 Thread Ricardo Reis
Hi all I have a question. I'm starting to use MPI-IO and was wondering if I can use MPI_BUFFER_ATTACH to provide the necessary IO buffer (or will it use the array I'm passing to MPI_Write...?) many thanks, Ricardo Reis 'Non Serviam' PhD candidate @ Lasef

Re: [OMPI users] fortran and MPI_Barrier, not working?

2009-11-15 Thread Ricardo Reis
n flush it. Yes, I know. But this should work if the Barrier were working as supposed. I've seen it working previously and I'm seeing it work in other MPI implementations (MVAPICH). So, what's the catch? A big hug to a connoisseur of Pessoa and inhabitant of the land of Wal
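The usual catch here: MPI_Barrier orders when each rank executes its print, but mpirun forwards each rank's stdout asynchronously, so lines can still arrive interleaved. Passing a token at least serializes the prints themselves; a minimal sketch (the standard gives no hard ordering guarantee even then):

    program ordered_print
       use mpi
       implicit none
       integer :: ierr, my_rank, nprocs, token
       call MPI_Init(ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
       call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
       token = 0
       ! wait for the previous rank before printing...
       if (my_rank /= 0) then
          call MPI_Recv(token, 1, MPI_INTEGER, my_rank - 1, 0, &
               MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
       end if
       print *, 'rank', my_rank, 'reporting'
       flush(6)
       ! ...then hand the token to the next one
       if (my_rank /= nprocs - 1) then
          call MPI_Send(token, 1, MPI_INTEGER, my_rank + 1, 0, &
               MPI_COMM_WORLD, ierr)
       end if
       call MPI_Finalize(ierr)
    end program ordered_print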

[OMPI users] fortran and MPI_Barrier, not working?

2009-11-14 Thread Ricardo Reis
_rank 1 idest 3
ISTEP 2 IDX 2 my_rank 2 idest 0
ISTEP 2 IDX 3 my_rank 3 idest 1
*
ISTEP 3 IDX 0 my_rank 0 idest 3
ISTEP 3 IDX 1 my_rank 1 idest 2
ISTEP 3 IDX 2 my_rank 2 idest 1
ISTEP 3 IDX 3 my_rank 3 idest 0
- < expected output - cut he

Re: [OMPI users] open-mpi 1.2.3 on Linux ia32 and Intel 10.0.25

2007-07-25 Thread Ricardo Reis
On Wed, 25 Jul 2007, Jeff Squyres wrote: I'm still awaiting access to the Intel 10 compilers to try to reproduce this problem myself. Sorry for the delay... What do you need for this to happen? The Intel packages? I can give you access to a machine if you want to try it out.

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-16 Thread Ricardo Reis
EMT64 and it worked. ompi gives what is asked, no problem... greets, Ricardo Reis 'Non Serviam' PhD student @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence <http://www.lasef.ist.utl.pt> & Cultural Instigator @ Rádio Zero http://radio.ist.utl.pt

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-13 Thread Ricardo Reis
Intel Corporation. All rights reserved. Do the Intel compilers come with any error checking tools to give more diagnostics? yes, they come with their own debugger. I'll try to use it and send more info when done. thanks! Ricardo Reis 'Non Serviam' PhD student @ Lasef Comp

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-12 Thread Ricardo Reis
On Wed, 11 Jul 2007, Jeff Squyres wrote: LAM uses C++ for the laminfo command and its wrapper compilers (mpicc and friends). Did you use those successfully? yes, no problem. attached output from laminfo -all and strace laminfo greets, Ricardo Reis 'Non Serviam'

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-11 Thread Ricardo Reis
. As I said previously, I can compile and use LAM MPI with my Intel compiler installation. I believe that LAM uses C++ inside, no? greets, Ricardo Reis 'Non Serviam' PhD student @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence <http://www.las

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-11 Thread Ricardo Reis
r non-trivial C++ apps to compile on this machine... Do you want to suggest some? (hello_world works...) greets, Ricardo Reis 'Non Serviam' PhD student @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence <http://www.lasef.ist.utl.pt> &

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-10 Thread Ricardo Reis
already loaded for /opt/intel/cc/10.0.023/lib/libintlc.so.5 (gdb) | ------ | greets, Ricardo Reis 'Non Serviam' PhD student @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence <http://www.lasef.ist.utl.pt> & Cultural Instigator @ Rádio Zero http://radio.ist.utl.pt

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-05 Thread Ricardo Reis
Symbols already loaded for /lib/i686/cmov/libc.so.6
Symbols already loaded for /lib/i686/cmov/libdl.so.2
Symbols already loaded for /opt/intel/cc/10.0.023/lib/libimf.so
Symbols already loaded for /opt/intel/cc/10.0.023/lib/libintlc.so.5
Ricardo Reis 'Non Serviam' PhD student @ Lasef Com

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-05 Thread Ricardo Reis
pirun -np ) gives a segmentation fault. ompi_info gives output and then segfaults. ompi_info --all segfaults immediately. Added ompi_info log (without --all). Added strace ompi_info --all log. Added strace mpirun log. greets, Ricardo Reis 'Non Serviam' PhD student @ Lasef Comp

Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-04 Thread Ricardo Reis
so added config.log and make.log) I have compiled LAM 7.1.3 with this set of compilers and have no problem at all. thanks, Ricardo Reis 'Non Serviam' PhD student @ Lasef Computational Fluid Dynamics, High Performance Computing, Turbulence <http://www.lasef.ist.utl.pt>

[OMPI users] mpi with icc,icpc and ifort :: segfault

2007-07-03 Thread Ricardo Reis
Debian Linux box, 32-bit, no flags given to the compilers. 4999.0 $ uname -a Linux umdrum 2.6.21.5-rt17 #2 SMP PREEMPT RT Mon Jun 25 23:02:11 WEST 2007 i686 GNU/Linux 5003.0 $ ldd --version ldd (GNU libc) 2.5 help? Ricardo Reis 'Non Serviam' PhD student @ Lasef Comp