Thanks for confirming. We'll try valgrind next :)
On Wed, Feb 24, 2010 at 6:35 PM, Jeff Squyres wrote:
> On Feb 24, 2010, at 8:17 PM, Brian Budge wrote:
>
>> We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after
>> enabling the RETURN error handler). I'm confused as to what might
>> cause this, as I was assuming that this generally resulted from a recv
>> call being made requesting fewer bytes than were sent.
>
On Feb 24, 2010, at 8:17 PM, Brian Budge wrote:
> We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after
> enabling the RETURN error handler). I'm confused as to what might
> cause this, as I was assuming that this generally resulted from a recv
> call being made requesting fewer bytes than were sent.
*Usually*, I have seen these "readv failed: ..." kinds of error messages as a
side effect of an MPI process exiting abnormally. The "readv..." messages come
from the remaining peers, whose sockets suddenly closed unexpectedly (because
of the dead peer).
Check into the signal 11 message (tha
Yes, that's right. It will launch a singleton, and then add slaves as
required. Thank you.
Damien
On 24/02/2010 6:17 PM, Ralph Castain wrote:
Let me see if I understand your question. You want to launch an initial MPI code using
mpirun or as a singleton. This code will then determine available resources
and use MPI_Comm_spawn to launch the "real" MPI job.
On Wed, 2010-02-24 at 13:40 -0500, w k wrote:
> Hi Jordy,
>
> I don't think this part caused the problem. For fortran, it doesn't
> matter if the pointer is NULL as long as the count requested from the
> processor is 0. Actually I tested the code and it passed this part
> without problem. I believe it aborted at the MPI_FILE_SET_VIEW part.
Let me see if I understand your question. You want to launch an initial MPI
code using mpirun or as a singleton. This code will then determine available
resources and use MPI_Comm_spawn to launch the "real" MPI job.
Correct?
If so, then yes - you can do that. When you do the comm_spawn, you nee
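A minimal C sketch of the pattern Ralph describes, assuming a master started
as a singleton (or via mpirun) that spawns a hypothetical "worker" binary on
four slots -- the binary name and the count are placeholders, not from the
thread:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    int errcodes[4];

    MPI_Init(&argc, &argv);

    /* Discover the available resources here, then spawn the "real"
     * job.  "worker" and the count of 4 are placeholders. */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    /* ... talk to the spawned job over the intercommunicator ... */

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}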
Hi all -
We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after
enabling the RETURN error handler). I'm confused as to what might
cause this, as I was assuming that this generally resulted from a recv
call being made requesting fewer bytes than were sent.
Can anyone shed some light on this?
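For what it's worth, Brian's assumption matches the textbook way to trigger
MPI_ERR_TRUNCATE out of MPI_Test: post a receive with room for fewer elements
than were sent. A minimal sketch (run with 2 ranks; buffer sizes are arbitrary
illustrations):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, flag = 0, err = MPI_SUCCESS;
    int sendbuf[10] = {0}, recvbuf[5];
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    if (rank == 0) {
        /* 10 ints sent ... */
        MPI_Send(sendbuf, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ... but only room for 5 on the receiving side */
        MPI_Irecv(recvbuf, 5, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        do {
            err = MPI_Test(&req, &flag, &status);
        } while (err == MPI_SUCCESS && !flag);
        if (err != MPI_SUCCESS) {
            int eclass;
            MPI_Error_class(err, &eclass);
            printf("MPI_Test failed, class %d (MPI_ERR_TRUNCATE = %d)\n",
                   eclass, MPI_ERR_TRUNCATE);
        }
    }
    MPI_Finalize();
    return 0;
}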
Hi all,
Does OpenMPI support dynamic process management without launching
through mpirun or mpiexec? I need to use some MPI code in a
shared-memory environment where I don't know the resources in advance.
Damien
Hi
I can't answer your question about the array q offhand,
but I will try to translate your program to C and see if
it fails the same way.
Jody
On Wed, Feb 24, 2010 at 7:40 PM, w k wrote:
> Hi Jordy,
>
> I don't think this part caused the problem. For fortran, it doesn't matter
> if the pointer is NULL as long as the count requested from the processor is 0.
Hi Jordy,
I don't think this part caused the problem. For fortran, it doesn't matter
if the pointer is NULL as long as the count requested from the processor is
0. Actually I tested the code and it passed this part without problem. I
believe it aborted at the MPI_FILE_SET_VIEW part.
Just curious, how
On Feb 24, 2010, at 11:04 AM, Rodolfo Chua wrote:
> I've successfully installed Open MPI on another PC. But when I tried to
> install it on my laptop and typed 'mpicc', the response was:
Please do not reply off-topic -- please start a new thread with a different
subject if you have an unrelated question.
I've successfully installed Open MPI on another PC. But when I tried to
install it on my laptop and typed 'mpicc', the response was:
The program 'mpicc' can be found in the following packages:
* lam4-dev
* libmpich-mpd1.0-dev
* libmpich-shmem1.0-dev
* libmpich1.0-dev
* libopenmpi-dev
* mpich2
On Wed, 2010-02-24 at 07:36 -0700, Ralph Castain wrote:
> I'm afraid not. We are working on alternative error response
> mechanisms, but nothing is released at this time.
Don't know if this would work, but why not do the following:
1. set a signal handler in your application. This is where you would
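A minimal sketch of that step, assuming the handler is only meant for
last-minute cleanup before the process goes down; the signals chosen here
(SIGTERM, SIGSEGV) are assumptions, not from the thread:

#include <signal.h>
#include <unistd.h>
#include <mpi.h>

static void cleanup_handler(int sig)
{
    /* Only async-signal-safe calls are legal in a handler; write() is,
     * printf() and MPI calls are not. */
    const char msg[] = "caught signal, cleaning up\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    /* ... placeholder for last-minute application cleanup ... */
    _exit(1);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    signal(SIGTERM, cleanup_handler);  /* mpirun terminates peers with SIGTERM */
    signal(SIGSEGV, cleanup_handler);
    /* ... application work ... */
    MPI_Finalize();
    return 0;
}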
I'm afraid not. We are working on alternative error response mechanisms, but
nothing is released at this time.
On Feb 24, 2010, at 7:17 AM, Gabriele Fatigati wrote:
> Mm,
> I'm trying to explain better.
>
> My target is: when an MPI process dies for some reason and MPI_Abort is
> launched, I would like to control this behaviour.
Mm,
I'm trying to explain better.
My target is: when an MPI process dies for some reason and MPI_Abort is
launched, I would like to control this behaviour. Example:
rank 0 dies and launches MPI_Abort;
I would like to do something before the other processes die. So I want to
control the shutdown of my MPI application.
Dear Rockhee Sung,
It is not clear from your explanation which variant (1, 2, or 3) gave which
error message; I assume the output you provided is from variant 1.
I don't have an Apple Mac at hand, but the F77 compiler gfortran here
complains about:
configure:35830: gfortran -o c
I don't believe the error handler will help suppress the messages you are
trying to avoid as they don't originate in the MPI layer. They are actually
generated in the RTE layer as mpirun is exiting.
You could try adding the --quiet option to your mpirun cmd line. This will help
eliminate some (
On Wed, 24 Feb 2010 14:21:02 +0100, Gabriele Fatigati wrote:
> Yes, of course,
>
> but I would like to know if there is any way to do that with Open MPI.
See the error handler docs, e.g. MPI_Comm_set_errhandler.
Jed
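What Jed is pointing at, as a minimal sketch: with MPI_ERRORS_RETURN
installed, MPI hands errors back as return codes and the application decides
how loudly to report them. The deliberately invalid destination rank is just
an assumed way to provoke an error:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char errstr[MPI_MAX_ERROR_STRING];
    int err, len, size, dummy = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rank "size" does not exist, so this send fails -- but the error
     * now comes back as a return code instead of aborting the job. */
    err = MPI_Send(&dummy, 1, MPI_INT, size, 0, MPI_COMM_WORLD);
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, errstr, &len);
        fprintf(stderr, "send failed: %s\n", errstr);
    }
    MPI_Finalize();
    return 0;
}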
Yes, of course,
but I would like to know if there is any way to do that with Open MPI.
2010/2/24 jody
> Hi Gabriele
> you could always pipe your output through grep
>
> my_app | grep "MPI_ABORT was invoked"
>
> jody
>
> On Wed, Feb 24, 2010 at 11:28 AM, Gabriele Fatigati wrote:
> > Hi Nadia,
Hi there,
I tried 3 different ways:
(1) ./configure
(2) ./configure CFLAGS='-arch x86_64' CXXFLAGS='-arch x86_64'
(3) ./configure FFLAGS='-arch x86_64' CFLAGS='-arch x86_64' CXXFLAGS='-arch
x86_64'
(1) and (2) gave the same error, but for (3) the error shows as below.
Does it mean different def
Hi Gabriele
you could always pipe your output through grep
my_app | grep "MPI_ABORT was invoked"
jody
On Wed, Feb 24, 2010 at 11:28 AM, Gabriele Fatigati wrote:
> Hi Nadia,
>
> thanks for the quick reply.
>
> But I suppose that parameter is 0 by default. Suppose I have the following
> output:
>
> --
Hi Nadia,
thanks for the quick reply.
But I suppose that parameter is 0 by default. Suppose I have the following
output:
--
--> MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 4. <--
NOTE: invokin
On Wed, 2010-02-24 at 09:55 +0100, Gabriele Fatigati wrote:
>
> Dear Open MPI users and developers,
>
> I have a question about the MPI_Abort error message. I have a program
> written in C++. Is there a way to decrease the verbosity of this error?
> When this function is called, Open MPI prints a lot of info
Dear Open MPI users and developers,
I have a question about the MPI_Abort error message. I have a program written
in C++. Is there a way to decrease the verbosity of this error? When this
function is called, Open MPI prints a lot of information, like the stack
trace and the rank of the processor that called MPI_Abort, etc. But
Hi
I know nearly nothing about fortran
but it looks to me as if the pointer 'temp' in
> call MPI_FILE_WRITE(FH, temp, COUNT, MPI_REAL8, STATUS, IERR)
is not defined (or perhaps NULL?) for all processors except processor 0:
> if ( myid == 0 ) then
> count = 1
> else
> count = 0
> end if
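For reference, a C sketch of that pattern along the lines jody proposes to
try: every rank makes the I/O calls, but only rank 0 passes a nonzero count,
so the buffer is never dereferenced on the other ranks (which is w k's point).
The filename and value are placeholders:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Status status;
    double temp = 1.0;
    int myid, count;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    count = (myid == 0) ? 1 : 0;   /* same logic as the Fortran snippet */

    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_DOUBLE, MPI_DOUBLE, "native",
                      MPI_INFO_NULL);

    /* MPI_REAL8 in the Fortran code corresponds to MPI_DOUBLE here.
     * With count == 0 the write must not touch the buffer at all. */
    MPI_File_write(fh, &temp, count, MPI_DOUBLE, &status);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}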