Hi
When is 1.6.1 expected to go public?
best,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
http://www.radiozero.pt
all problems gone, thanks for the input and assistance.
cheers,
Ricardo Reis
the other one)
Anyway, although it becomes obvious after tracking it down, I think it can be
a common pitfall for the unaware...
best,
Ricardo Reis
On Tue, 15 May 2012, Jeff Squyres wrote:
On May 15, 2012, at 2:19 PM, Ricardo Reis wrote:
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
Is it not a 64 bit integer for your compiler?
I'm still interested in the answer to this question.
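One way to check, as a minimal sketch (compile with the same mpif90 used for the application; the program name is illustrative), is to print the width and range of an MPI_OFFSET_KIND integer:

   program check_offset_kind
     use mpi
     implicit none
     ! An integer of the kind Open MPI uses for file offsets.
     integer(kind=MPI_OFFSET_KIND) :: offset
     print *, 'MPI_OFFSET_KIND        = ', MPI_OFFSET_KIND
     print *, 'bits in offset integer = ', bit_size(offset)
     print *, 'largest representable  = ', huge(offset)
   end program check_offset_kind

If bit_size reports 64, the kind itself is wide enough and the problem is more likely in how the offset value is computed before it is stored.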
On Tue, 15 May 2012, Jeff Squyres wrote:
On May 15, 2012, at 10:53 AM, Ricardo Reis wrote:
My problem is rather that
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
Is it not a 64 bit integer for your compiler?
There *is* a bug in OMPI at
My problem is rather that
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
best,
Ricardo Reis
offset -2045256448
offset is of type MPI_OFFSET_KIND, which seems insufficient to hold the
correct size for the offset.
So... am I condemned to write my own MPI data type so I can write the
files? ideas... ?
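A common cause of a negative offset like the one above is that the offset expression is evaluated in default (32-bit) integer arithmetic before it is assigned. A minimal sketch, with illustrative variable names and sizes rather than the original code, of keeping the whole product in MPI_OFFSET_KIND:

   program offset_arith
     use mpi
     implicit none
     integer(kind=MPI_OFFSET_KIND) :: offset
     integer :: my_rank, n          ! default (usually 32-bit) integers

     my_rank = 3
     n       = 100000000            ! 100M real(8) elements per rank

     ! Wrong: my_rank*n*8 is evaluated as default INTEGER and can wrap to a
     ! negative value before it is ever assigned to offset.
     ! offset = my_rank * n * 8

     ! Safer: promote the factors so the product is done in 64-bit arithmetic.
     offset = int(my_rank, MPI_OFFSET_KIND) * int(n, MPI_OFFSET_KIND) &
              * 8_MPI_OFFSET_KIND
     print *, 'offset = ', offset
   end program offset_arith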
best regards,
Ricardo Reis
What file system is this on?
Gluster connected by InfiniBand. All disks are in the same machine; everything
speaks over InfiniBand.
Ricardo Reis
have some feedback.
Ricardo Reis
apparently in the MPI_write_at_all call.
Any ideas about what to do or where to look are welcome.
best,
Ricardo Reis
HOSTFILE=hostfile
# Build an Open MPI machinefile from the SGE-provided PE_HOSTFILE
# (host name in column 1, slot count in column 2).
awk '{print $1" cpu="$2}' ${PE_HOSTFILE} > ${HOSTFILE}
mpirun -machinefile ${HOSTFILE} -np ${NSLOTS} ${EXEC}
)
best (sorry if the answer ran long)
Ricardo Reis
of processors
from SGE, but I would like to have some more solid confirmation.
You might want to use a smaller number of processors than those made
available by SGE.
best,
Ricardo Reis
The file size per process must be lower than 4 GB.
There was a discussion a short time ago about this...
best,
Ricardo Reis
What happens if you do `mpirun --version` in the said script (or node)?
(What I mean is: how can you be sure that it is not LAM's mpirun that is
being called on that particular node?)
best,
Ricardo Reis
uter 2010.
Hard to catch their attention right now,
but eventually somebody will clarify this.
Oh, just a small grain of sand... it doesn't seem worth stopping the whole
machine for it...
:)
many thanks all
Ricardo Reis
is exactly 2^31-1).
Thanks for the explanation. Then this should be updated in the spec, no...?
cheers!
Ricardo Reis
Big files with normal Fortran
shouldn't this behaviour be found with MPI-IO? And, more to the point, if
not, shouldn't it be documented somewhere?
Does anyone know if this carries over to other MPI implementations (or is
the answer "download it, try it and tell us"?)
best,
On Tue, 16 Nov 2010, Gus Correa wrote:
Ricardo Reis wrote:
and sorry to be such a nuisance...
but is there any reason for an MPI-IO "wall" between 2.0 and 2.1 GB?
Greetings, Ricardo Reis!
Is this "wall" perhaps the 2GB Linux file size limit on 32-bit systems?
No. This is a
and sorry to be such a nuisance...
but is there any reason for an MPI-IO "wall" between 2.0 and 2.1 GB?
(1 MPI process)
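If the wall really is the 2^31-1 bytes a single call can move, one workaround is to split the write into chunks below that limit and advance an explicit offset. A minimal sketch, assuming a contiguous real(8) buffer and an already opened file handle (all names here are illustrative):

   subroutine write_in_chunks(fh, data, ndata)
     use mpi
     implicit none
     integer, intent(in) :: fh                           ! opened MPI file handle
     integer(kind=MPI_OFFSET_KIND), intent(in) :: ndata  ! total real(8) elements
     real(8), intent(in) :: data(ndata)
     integer(kind=MPI_OFFSET_KIND) :: offset, done, chunk
     integer :: status(MPI_STATUS_SIZE), ierr, cnt

     chunk  = 128_MPI_OFFSET_KIND * 1024 * 1024   ! 128M reals = 1 GiB per call
     done   = 0
     offset = 0
     do while (done < ndata)
        cnt = int(min(chunk, ndata - done))
        call MPI_File_write_at(fh, offset, data(done+1), cnt, &
                               MPI_DOUBLE_PRECISION, status, ierr)
        offset = offset + int(cnt, MPI_OFFSET_KIND) * 8
        done   = done + cnt
     end do
   end subroutine write_in_chunks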
best,
Ricardo Reis
pe for writing...
ideas?
Ricardo Reis
one... doesn't write!
the code is here:
http://aero.ist.utl.pt/~rreis/test_io.f90
Can some kind soul just look at it and give some input?
Or, simply, point me to where the meaning of Fortran error no. 3 is
explained?
best and many thanks for your time,
Ricardo Reis
Actually, sh is just a symlink to bash...
2028.0 $ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Sep 7 2009 /bin/sh -> bash
Ricardo Reis
Is there something I should watch out for to make this work? I've
already taken care of making the send and receive buffers THREAD_PRIVATE.
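For what it is worth, a minimal sketch of that setup (buffer names and sizes are illustrative, not the original code): per-thread buffers marked THREADPRIVATE, and MPI initialised with MPI_Init_thread requesting MPI_THREAD_MULTIPLE so that several OpenMP threads may call MPI concurrently:

   program hybrid_sketch
     use mpi
     implicit none
     integer, parameter :: n = 1024
     real(8), dimension(n), save :: sendbuf, recvbuf
     !$omp threadprivate(sendbuf, recvbuf)
     integer :: ierr, provided, my_rank

     ! Request full thread support and check what the library really provides.
     call MPI_Init_thread(MPI_THREAD_MULTIPLE, provided, ierr)
     if (provided < MPI_THREAD_MULTIPLE) then
        print *, 'warning: MPI thread support level is only ', provided
     end if
     call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)

     !$omp parallel
        sendbuf = real(my_rank, 8)   ! each thread fills its own private copy
        ! ... per-thread MPI communication would go here ...
     !$omp end parallel

     call MPI_Finalize(ierr)
   end program hybrid_sketch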
cheers and thanks for your input,
Ricardo Reis
environment variable set, for instance, in
the init file of your shell.
Please read http://www.absoft.com/Support/FAQ/lixfaq_installation.htm
Ricardo Reis
On Mon, 5 Apr 2010, Rob Latham wrote:
On Tue, Mar 30, 2010 at 11:51:39PM +0100, Ricardo Reis wrote:
If using the master/slave I/O model, would it be better to cycle
through all the processes, with each one writing its part of the
array into the file? This file would be opened in "st
On Tue, 30 Mar 2010, Gus Correa wrote:
Greetings, Ricardo Reis!
How is Rádio Zero doing?
:) Busy, busy, busy. We are preparing to celebrate Yuri's Night, April the
12th!
Doesn't this serialize the I/O operation across the processors,
whereas MPI_Gather followed by rank-0 I/O may perform
write_to_file
closefile
endif
call MPI_Barrier(world,ierr)
enddo
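For comparison, a minimal sketch of one MPI-IO alternative to the loop above (file name and data layout are illustrative: equal-sized contiguous real(8) slices per rank), where a single collective call replaces the barrier-per-rank serialization:

   subroutine parallel_write_sketch(local, nlocal, my_rank)
     use mpi
     implicit none
     integer, intent(in) :: nlocal, my_rank
     real(8), intent(in) :: local(nlocal)
     integer :: fh, ierr
     integer :: status(MPI_STATUS_SIZE)
     integer(kind=MPI_OFFSET_KIND) :: offset

     call MPI_File_open(MPI_COMM_WORLD, 'field.dat', &
                        MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, &
                        fh, ierr)

     ! Each rank's slice starts right after the slices of the lower ranks;
     ! the arithmetic is done in MPI_OFFSET_KIND to avoid 32-bit overflow.
     offset = int(my_rank, MPI_OFFSET_KIND) * int(nlocal, MPI_OFFSET_KIND) * 8

     call MPI_File_write_at_all(fh, offset, local, nlocal, &
                                MPI_DOUBLE_PRECISION, status, ierr)
     call MPI_File_close(fh, ierr)
   end subroutine parallel_write_sketch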
cheers,
Ricardo Reis
make a copy yourself and allow the original buffer to be freed.
Thanks. So in an asynchronous write, the old buffer would only be
available after the I/O has ended. So maybe I really need to think about
setting some process aside just for I/O...
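As a minimal sketch of that constraint (fh, buf and disp are illustrative names, not the original code): with a nonblocking write the buffer handed to MPI may only be reused or freed after the matching MPI_Wait completes:

   subroutine async_write_sketch(fh, buf, n, disp)
     use mpi
     implicit none
     integer, intent(in) :: fh, n
     real(8), intent(in) :: buf(n)
     integer(kind=MPI_OFFSET_KIND), intent(in) :: disp
     integer :: request, ierr
     integer :: status(MPI_STATUS_SIZE)

     ! Start the write; the call returns immediately.
     call MPI_File_iwrite_at(fh, disp, buf, n, MPI_DOUBLE_PRECISION, &
                             request, ierr)

     ! ... computation can overlap here, but buf must not be modified ...

     ! Only after MPI_Wait returns is buf free to be reused or deallocated.
     call MPI_Wait(request, status, ierr)
   end subroutine async_write_sketch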
Ricardo Reis
Hi all
I have a question. I'm starting to use MPI-IO and was wondering if I can use
MPI_BUFFER_ATTACH to provide the necessary I/O buffer (or will it use
the array I'm passing to MPI_Write...??)
many thanks,
Ricardo Reis
n flush it.
Yes, I know. But this should work if the Barrier were working as
supposed. I've seen it working previously and I'm seeing it work in
other MPI implementations (MVAPICH).
So, what's the catch?
A big hug to a connoisseur of Pessoa and inhabitant of the country of Wal
ISTEP 2 IDX 1 my_rank 1 idest 3
ISTEP 2 IDX 2 my_rank 2 idest 0
ISTEP 2 IDX 3 my_rank 3 idest 1
*
ISTEP 3 IDX 0 my_rank 0 idest 3
ISTEP 3 IDX 1 my_rank 1 idest 2
ISTEP 3 IDX 2 my_rank 2 idest 1
ISTEP 3 IDX 3 my_rank 3 idest 0
- < expected output - cut here
On Wed, 25 Jul 2007, Jeff Squyres wrote:
I'm still awaiting access to the Intel 10 compilers to try to
reproduce this problem myself. Sorry for the delay...
What do you need for this to happen? The Intel packages? I can give you
access to a machine if you want to try it out.
EMT64 and it worked. ompi gives what is asked, no problem...
greets,
Ricardo Reis
Do the intel compilers come with any error checking tools to give
more diagnostics?
yes, they come with their own debugger. I'll try to use it and send more
info when done.
thanks!,
Ricardo Reis
On Wed, 11 Jul 2007, Jeff Squyres wrote:
LAM uses C++ for the laminfo command and its wrapper compilers (mpicc
and friends). Did you use those successfully?
yes, no problem.
attached output from laminfo -all
strace laminfo
greets,
Ricardo Reis
As I said previously, I can compile and use
LAM MPI with my Intel compiler installation. I believe that LAM uses C++
inside, no?
greets,
Ricardo Reis
r non-trivial C++ apps to compile in this machine...
Do you want to suggest some? (hello_world works...)
greets,
Ricardo Reis
already loaded for /opt/intel/cc/10.0.023/lib/libintlc.so.5
(gdb)
greets,
Ricardo Reis
Symbols already loaded for /lib/i686/cmov/libc.so.6
Symbols already loaded for /lib/i686/cmov/libdl.so.2
Symbols already loaded for /opt/intel/cc/10.0.023/lib/libimf.so
Symbols already loaded for /opt/intel/cc/10.0.023/lib/libintlc.so.5
Ricardo Reis
mpirun -np ) gives segmentation fault.
ompi_info gives output and then segfaults. ompi_info --all segfaults
immediately.
Added ompi_info log (without --all)
Added strace ompi_info --all log
Added strace mpirun log
greets,
Ricardo Reis
so I added config.log and make.log)
I have compiled lam 7.1.3 with this set of compilers and have no problem
at all.
thanks,
Ricardo Reis
Debian Linux box, 32-bit, no flags given to the compilers.
4999.0 $ uname -a
Linux umdrum 2.6.21.5-rt17 #2 SMP PREEMPT RT Mon Jun 25 23:02:11 WEST 2007
i686 GNU/Linux
5003.0 $ ldd --version
ldd (GNU libc) 2.5
help?
Ricardo Reis