Re: [OMPI users] MPI_ERR_TRUNCATE returned from MPI_Test

2010-02-24 Thread Brian Budge
Thanks for confirming. We'll try valgrind next :) On Wed, Feb 24, 2010 at 6:35 PM, Jeff Squyres wrote: > On Feb 24, 2010, at 8:17 PM, Brian Budge wrote: > >> We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after >> enabling the RETURN error handler). I'm confused as to what might …

Re: [OMPI users] MPI_ERR_TRUNCATE returned from MPI_Test

2010-02-24 Thread Jeff Squyres
On Feb 24, 2010, at 8:17 PM, Brian Budge wrote: > We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after > enabling the RETURN error handler). I'm confused as to what might > cause this, as I was assuming that this generally resulted from a recv > call being made requesting fewer byte
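A minimal sketch of the failure mode under discussion (buffer sizes, counts, and datatypes are assumed for illustration, not taken from Brian's actual code): rank 0 sends 8 doubles, rank 1 posts a receive for only 4, and with MPI_ERRORS_RETURN installed the error surfaces from MPI_Test instead of aborting the job. Build with mpicc and run with mpirun -np 2.

```c
/* Hypothetical reproducer: the receive count is smaller than the
 * matching send, so completion reports MPI_ERR_TRUNCATE. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[8] = {0};
    if (rank == 0) {
        MPI_Send(buf, 8, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        /* Posting only 4 elements against an 8-element send. */
        MPI_Irecv(buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
        int done = 0, err = MPI_SUCCESS;
        while (!done && err == MPI_SUCCESS)
            err = MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (err != MPI_SUCCESS) {
            int eclass;
            MPI_Error_class(err, &eclass);
            if (eclass == MPI_ERR_TRUNCATE)
                fprintf(stderr, "rank 1: message was truncated\n");
        }
    }
    MPI_Finalize();
    return 0;
}
```

If a receive is *never* knowingly posted short, the same error can come from an unintended message matching the posted receive (wrong tag or source wildcards), which is one thing valgrind or extra logging can help rule out.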

Re: [OMPI users] [btl_tcp_frag.c:216:mca_btl_tcp_frag_recv] mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)

2010-02-24 Thread Jeff Squyres
*Usually*, I have seen these "readv failed: ..." kinds of error messages as a side effect of an MPI process exiting abnormally. The "readv..." messages come from the remaining peers, whose sockets suddenly closed unexpectedly (because of the dead peer). Check into the signal 11 message (tha…

Re: [OMPI users] Using dynamic process management without mpirun/mpiexec

2010-02-24 Thread Damien Hocking
Yes, that's right. It will launch a singleton, and then add slaves as required. Thank you. Damien On 24/02/2010 6:17 PM, Ralph Castain wrote: Let me see if I understand your question. You want to launch an initial MPI code using mpirun or as a singleton. This code will then determine availa

Re: [OMPI users] problems on parallel writing

2010-02-24 Thread Terry Frankcombe
On Wed, 2010-02-24 at 13:40 -0500, w k wrote: > Hi Jordy, > > I don't think this part caused the problem. For fortran, it doesn't > matter if the pointer is NULL as long as the count requested from the > processor is 0. Actually I tested the code and it passed this part > without problem. I believ

Re: [OMPI users] Using dynamic process management without mpirun/mpiexec

2010-02-24 Thread Ralph Castain
Let me see if I understand your question. You want to launch an initial MPI code using mpirun or as a singleton. This code will then determine available resources and use MPI_Comm_spawn to launch the "real" MPI job. Correct? If so, then yes - you can do that. When you do the comm_spawn, you nee
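A sketch of the pattern Ralph describes, with details assumed rather than taken from the thread (the `./worker` executable name and the count of 4 are made up): the master is started directly as a singleton, then spawns the "real" job with MPI_Comm_spawn.

```c
/* Singleton master: run as ./master with no mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Comm intercomm;
    int errcodes[4];

    /* Spawn 4 copies of a hypothetical ./worker executable; they share
     * a new intercommunicator with this singleton parent. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    int nworkers;
    MPI_Comm_remote_size(intercomm, &nworkers);
    printf("spawned %d workers\n", nworkers);

    MPI_Finalize();
    return 0;
}
```

On the worker side, MPI_Comm_get_parent returns the same intercommunicator, which the two sides can then use to communicate; an MPI_Info object passed in place of MPI_INFO_NULL can carry keys such as the target hosts once the master has discovered the available resources.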

[OMPI users] MPI_ERR_TRUNCATE returned from MPI_Test

2010-02-24 Thread Brian Budge
Hi all - We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after enabling the RETURN error handler). I'm confused as to what might cause this, as I was assuming that this generally resulted from a recv call being made requesting fewer bytes than were sent. Can anyone shed some light o

[OMPI users] Using dynamic process management without mpirun/mpiexec

2010-02-24 Thread Damien Hocking
Hi all, Does OpenMPI support dynamic process management without launching through mpirun or mpiexec? I need to use some MPI code in a shared-memory environment where I don't know the resources in advance. Damien

Re: [OMPI users] problems on parallel writing

2010-02-24 Thread jody
Hi I can't answer your question about the array q offhand, but i will try to translate your program to C and see if it fails the same way. Jody On Wed, Feb 24, 2010 at 7:40 PM, w k wrote: > Hi Jordy, > > I don't think this part caused the problem. For fortran, it doesn't matter > if the pointer

Re: [OMPI users] problems on parallel writing

2010-02-24 Thread w k
Hi Jordy, I don't think this part caused the problem. For fortran, it doesn't matter if the pointer is NULL as long as the count requested from the processor is 0. Actually I tested the code and it passed this part without problem. I believe it aborted at MPI_FILE_SET_VIEW part. Just curious, how

[OMPI users] mpicc failure

2010-02-24 Thread Jeff Squyres
On Feb 24, 2010, at 11:04 AM, Rodolfo Chua wrote: > I've successfully installed openMPI on other PC. But when I tried to install > it > on my laptop and typed 'mpicc' , the response was: Please do not reply off-topic -- please start a new thread with a different subject if you have an unrelate

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Rodolfo Chua
I've successfully installed openMPI on other PC. But when I tried to install it on my laptop and typed 'mpicc' , the response was: The program 'mpicc' can be found in the following packages: * lam4-dev * libmpich-mpd1.0-dev * libmpich-shmem1.0-dev * libmpich1.0-dev * libopenmpi-dev * mpich2

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Nadia Derbey
On Wed, 2010-02-24 at 07:36 -0700, Ralph Castain wrote: > I'm afraid not. We are working on alternative error response > mechanisms, but nothing is released at this time. Don't know if this would work, but why not do the following: 1. set a signal handler in your application. This is where you woul…
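A sketch of the signal-handler idea Nadia is outlining (the handler body and the choice of signals are assumptions for illustration): the application installs its own handler, does whatever local cleanup it wants, and only then calls MPI_Abort, so the shutdown sequence is under application control.

```c
/* Hypothetical sketch: controlled shutdown via a signal handler. */
#include <mpi.h>
#include <signal.h>
#include <stdio.h>

static void crash_handler(int sig)
{
    /* Strictly, only async-signal-safe work belongs in a handler;
     * this is a sketch of the intent, not production code. */
    fprintf(stderr, "caught signal %d, aborting job\n", sig);
    /* ... flush logs, mark a checkpoint, notify peers, etc. ... */
    MPI_Abort(MPI_COMM_WORLD, sig);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    signal(SIGSEGV, crash_handler);
    signal(SIGTERM, crash_handler);

    /* ... application work ... */

    MPI_Finalize();
    return 0;
}
```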

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Ralph Castain
I'm afraid not. We are working on alternative error response mechanisms, but nothing is released at this time. On Feb 24, 2010, at 7:17 AM, Gabriele Fatigati wrote: > Mm, > i'm trying to explain better. > > My target is, when a MPI process dead for some reason, after launched > MPI_Abort i wou

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Gabriele Fatigati
Mm, I'm trying to explain better. My target is: when an MPI process dies for some reason and launches MPI_Abort, I would like to control this behaviour. Example: rank 0 dies and launches MPI_Abort; I would like to do something before the other processes die. So I want to control the shutdown of my MPI appli…

Re: [OMPI users] configure error

2010-02-24 Thread Rainer Keller
Dear Rockhee Sung, it is not clear which variant (1, 2, or 3) gave which error message; I assume the output you provided is from variant 1. Not having an Apple Mac at hand, the F77 compiler gfortran here complains about: configure:35830: gfortran -o c…

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Ralph Castain
I don't believe the error handler will help suppress the messages you are trying to avoid as they don't originate in the MPI layer. They are actually generated in the RTE layer as mpirun is exiting. You could try adding the --quiet option to your mpirun cmd line. This will help eliminate some (

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Jed Brown
On Wed, 24 Feb 2010 14:21:02 +0100, Gabriele Fatigati wrote: > Yes, of course, > > but i would like to know if there is any way to do that with openmpi See the error handler docs, e.g. MPI_Comm_set_errhandler. Jed
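A sketch of the mechanism Jed points to (handler name and the one-line message format are made up): registering a custom error handler on a communicator lets the application decide what gets printed when an MPI call fails, rather than relying on the default abort-with-banner behaviour.

```c
/* Hypothetical custom error handler that prints one line and aborts. */
#include <mpi.h>
#include <stdio.h>

static void quiet_handler(MPI_Comm *comm, int *err, ...)
{
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(*err, msg, &len);
    fprintf(stderr, "MPI error: %s\n", msg);  /* one line, no banner */
    MPI_Abort(*comm, *err);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Errhandler eh;
    MPI_Comm_create_errhandler(quiet_handler, &eh);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);

    /* ... application work ... */

    MPI_Finalize();
    return 0;
}
```

Note that, as Ralph points out elsewhere in this thread, this governs errors raised in the MPI layer; the MPI_ABORT banner itself is emitted by the runtime as mpirun exits, which an error handler alone may not suppress.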

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Gabriele Fatigati
Yes, of course, but i would like to know if there is any way to do that with openmpi 2010/2/24 jody > Hi Gabriele > you could always pipe your output through grep > > my_app | grep "MPI_ABORT was invoked" > > jody > > On Wed, Feb 24, 2010 at 11:28 AM, Gabriele Fatigati > wrote: > > Hi Nadia,

[OMPI users] configure error

2010-02-24 Thread Admin
Hi there, I tried 3 different ways. (1) ./configure (2) ./configure CFLAGS='-arch x86_64' CXXFLAGS='-arch x86_64' (3) ./configure FFLAGS='-arch x86_64' CFLAGS='-arch x86_64' CXXFLAGS='-arch x86_64' (1) and (2) gave the same error, but for (3) the error shows as below. Does it mean different def…

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread jody
Hi Gabriele you could always pipe your output through grep my_app | grep "MPI_ABORT was invoked" jody On Wed, Feb 24, 2010 at 11:28 AM, Gabriele Fatigati wrote: > Hi Nadia, > > thanks for the quick reply. > > But i suppose that parameter is 0 by default. Suppose i have the following > output: …

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Gabriele Fatigati
Hi Nadia, thanks for the quick reply. But I suppose that parameter is 0 by default. Suppose I have the following output: --> MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 4. <-- NOTE: invokin…

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Nadia Derbey
On Wed, 2010-02-24 at 09:55 +0100, Gabriele Fatigati wrote: > > Dear Openmpi users and developers, > > i have a question about the MPI_Abort error message. I have a program > written in C++. Is there a way to decrease the verbosity of this error? > When this function is called, openmpi prints a lot of info…

[OMPI users] MPi Abort verbosity

2010-02-24 Thread Gabriele Fatigati
Dear Openmpi users and developers, I have a question about the MPI_Abort error message. I have a program written in C++. Is there a way to decrease the verbosity of this error? When this function is called, Open MPI prints a lot of information like the stack trace, the rank of the processor that called MPI_Abort, etc. Bu…

Re: [OMPI users] problems on parallel writing

2010-02-24 Thread jody
Hi I know nearly nothing about fortran but it looks to me as the pointer 'temp' in > call MPI_FILE_WRITE(FH, temp, COUNT, MPI_REAL8, STATUS, IERR) is not defined (or perhaps NULL?) for all processors except processor 0 : > if ( myid == 0 ) then > count = 1 > else > count = 0 > end if
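A C sketch of the Fortran pattern jody is examining (filename, offsets, and datatypes are assumed): every rank takes part in the collective MPI_File_set_view, then rank 0 writes one element while the other ranks pass count = 0 with a buffer that may be unset. Passing an undefined or null buffer with a zero count is legal, which matches w k's point that 'temp' being defined only on rank 0 should not itself be the problem.

```c
/* Hypothetical C translation of the discussed Fortran I/O pattern. */
#include <mpi.h>
#include <stddef.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective: every rank must make this call with matching args;
     * a mismatch here is one plausible cause of an abort at the
     * MPI_FILE_SET_VIEW step. */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, MPI_DOUBLE, "native",
                      MPI_INFO_NULL);

    double temp = 1.0;                         /* defined on rank 0 only
                                                  in the original code */
    int count = (rank == 0) ? 1 : 0;           /* only rank 0 has data */
    MPI_File_write(fh, rank == 0 ? &temp : NULL, count,
                   MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```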