By the way, I am using Open MPI 1.6.5 and programming with C++.

On Thu, Sep 18, 2014 at 4:43 PM, XingFENG <xingf...@cse.unsw.edu.au> wrote:

> Dear all,
>
> I am new to MPI. Please forgive me if I ask a redundant question.
>
> I am now programming about graph processing using MPI. I get two problems
> as described below.
>
> a. How can I get more information about errors? I get output like the text
> shown below, which says the program exited abnormally in *MPI_Test()*, but
> is there a way to learn more about what actually went wrong?
>
> *** An error occurred in MPI_Test
> *** on communicator MPI_COMM_WORLD
> *** MPI_ERR_TRUNCATE: message truncated
> *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 2 with PID 4341 on
> node xing-HP-Compaq-Elite-8300-SFF exiting improperly. There are two reasons
> this could occur:
>
> 1. this process did not call "init" before exiting, but others in
> the job did. This can cause a job to hang indefinitely while it waits
> for all processes to call "init". By rule, if one process calls
> "init", then ALL processes must call "init" prior to termination.
>
> 2. this process called "init", but exited without calling "finalize".
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
>
> This may have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Test
> *** on communicator MPI_COMM_WORLD
> *** MPI_ERR_TRUNCATE: message truncated
> *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> *** An error occurred in MPI_Test
> *** on communicator MPI_COMM_WORLD
> *** MPI_ERR_TRUNCATE: message truncated
> *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 2 with PID 4378 on
> node SFF exiting improperly.
>
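> For concreteness on question (a): I understand that one can make MPI
> return error codes instead of aborting and then translate the code with
> MPI_Error_string. The following is only a rough, untested sketch of that
> idea, not code from my actual program:
>
>   #include <mpi.h>
>   #include <cstdio>
>
>   int main(int argc, char** argv) {
>       MPI_Init(&argc, &argv);
>
>       // Ask MPI to return error codes instead of aborting the job.
>       MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
>
>       MPI_Request request = MPI_REQUEST_NULL;
>       int flag = 0;
>       int rc = MPI_Test(&request, &flag, MPI_STATUS_IGNORE);
>       if (rc != MPI_SUCCESS) {
>           char msg[MPI_MAX_ERROR_STRING];
>           int len = 0;
>           MPI_Error_string(rc, msg, &len);  // human-readable error text
>           std::fprintf(stderr, "MPI_Test failed: %s\n", msg);
>       }
>
>       MPI_Finalize();
>       return 0;
>   }
>
> Is that the intended way to get more detail about such failures, or is
> there a better mechanism?
>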
> b. Is there anything in particular to watch out for with asynchronous
> communication? I use MPI_Isend, MPI_Irecv and MPI_Test to implement
> asynchronous communication. My program works well on small data sets
> (graphs with 10K nodes), but it exits abnormally on large data sets
> (graphs with 1M nodes).
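>
> To illustrate the pattern I mean, here is a stripped-down sketch of the
> kind of Isend/Irecv/Test loop I am describing; the message length, tag
> and buffer sizes are made up for this example and are not my real values:
>
>   #include <mpi.h>
>   #include <vector>
>
>   // Post a non-blocking receive and send, then poll the receive with
>   // MPI_Test so other graph work can continue in the meantime.
>   void exchange_with(int peer) {
>       const int N = 1000;              // example message length
>       const int TAG = 0;               // example tag
>       std::vector<int> send_buf(N, 1);
>       std::vector<int> recv_buf(N);    // receive buffer must be large
>                                        // enough for the incoming message
>
>       MPI_Request send_req, recv_req;
>       MPI_Irecv(&recv_buf[0], N, MPI_INT, peer, TAG,
>                 MPI_COMM_WORLD, &recv_req);
>       MPI_Isend(&send_buf[0], N, MPI_INT, peer, TAG,
>                 MPI_COMM_WORLD, &send_req);
>
>       int done = 0;
>       MPI_Status status;
>       while (!done) {
>           MPI_Test(&recv_req, &done, &status);  // poll for completion
>           // ... do other work on the graph here ...
>       }
>       MPI_Wait(&send_req, MPI_STATUS_IGNORE);
>   }
>
> On the large graphs, could the MPI_ERR_TRUNCATE simply mean that one of
> my posted receives is smaller than the matching incoming message?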
>
> Any help would be greatly appreciated!
>
>
> --
> Best Regards.
>



-- 
Best Regards.
