Hi guys,
I just came across this thread while googling when I faced a similar
problem with a certain code - after scratching my head for a bit, it
turns out the solution is pretty simple. My guess is that Jeff's code
has its own copy of 'mpif.h' in its source directory, and in all
likelihood, i
On Thu, 2007-06-21 at 14:14 -0500, Anthony Chan wrote:
> What test are you referring to?
>
> config.log contains the test results of the features that configure is
> looking for. Failure of some thread test does not mean OpenMPI can't
> support threads. In fact, I was able to run a simple/correct
What test are you referring to?
config.log contains the test results of the features that configure is
looking for. Failure of some thread test does not mean OpenMPI can't
support threads. In fact, I was able to run a simple/correct
MPI_THREAD_MULTIPLE program that uses pthreads with openmpi-1.2.3 an
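(Not from Anthony's mail, but for anyone trying to reproduce the test: a
minimal MPI_THREAD_MULTIPLE + pthreads program of the kind described above
might look like the sketch below; the structure and names are my own
guesses, not his actual test program.)

    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    /* Each rank starts one extra thread that sends its rank to rank 0;
     * rank 0's main thread collects one message per rank.  This only
     * works correctly if MPI_THREAD_MULTIPLE was actually granted. */
    static void *worker( void *arg )
    {
        int rank = *(int *) arg;
        MPI_Send( &rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD );
        return NULL;
    }

    int main( int argc, char **argv )
    {
        int provided, rank, size, i, buf;
        pthread_t tid;

        MPI_Init_thread( &argc, &argv, MPI_THREAD_MULTIPLE, &provided );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        MPI_Comm_size( MPI_COMM_WORLD, &size );

        if ( provided < MPI_THREAD_MULTIPLE ) {
            if ( rank == 0 )
                printf( "MPI_THREAD_MULTIPLE not provided (got %d)\n", provided );
            MPI_Finalize();
            return 0;
        }

        pthread_create( &tid, NULL, worker, &rank );

        if ( rank == 0 )
            for ( i = 0; i < size; i++ )
                MPI_Recv( &buf, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE );

        pthread_join( tid, NULL );
        MPI_Finalize();
        return 0;
    }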
Looks like everything is fine ... I had the MPI parameter check
option disabled, which is why it didn't complain about calling free on
MPI_COMM_NULL. If I activate the check, the program now fails as
expected (i.e. it complains and gives up on the MPI_Comm_free).
Thanks,
george.
On Jun
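(An aside for readers who land here later: one way to watch that check fire
instead of aborting is to ask MPI to return error codes. The fragment below
is my own illustration, not George's program, and whether the bad free is
actually flagged depends on how the library was built, i.e. on the parameter
check option mentioned above.)

    #include <mpi.h>
    #include <stdio.h>

    int main( int argc, char **argv )
    {
        MPI_Comm null_comm = MPI_COMM_NULL;
        int err;

        MPI_Init( &argc, &argv );

        /* Return error codes instead of aborting, so the check is visible. */
        MPI_Comm_set_errhandler( MPI_COMM_WORLD, MPI_ERRORS_RETURN );

        err = MPI_Comm_free( &null_comm );   /* erroneous: freeing MPI_COMM_NULL */
        if ( err != MPI_SUCCESS )
            printf( "MPI_Comm_free on MPI_COMM_NULL was rejected (err=%d)\n", err );

        MPI_Finalize();
        return 0;
    }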
On Thu, 2007-06-21 at 13:27 -0500, Anthony Chan wrote:
> It seems the hang only occurs when OpenMPI is built with
> --enable-mpi-threads --enable-progress-threads. [My OpenMPI builds use
> gcc (GCC) 4.1.2 (Ubuntu 4.1.2-0ubuntu4)]. Probably
> --enable-mpi-threads is the relevant option to cause the hang.
It seems the hang only occurs when OpenMPI is built with
--enable-mpi-threads --enable-progress-threads. [My OpenMPI builds use
gcc (GCC) 4.1.2 (Ubuntu 4.1.2-0ubuntu4)]. Probably
--enable-mpi-threads is the relevant option to cause the hang.
Also, control-c at the terminal that launches mpiexec
I was using the latest trunk. Now that you raised the issue about the
code ... I read it. You're right, for the server process (rank n-1 on
MPI_COMM_WORLD) there are 2 calls to MPI_Comm_free for the
counter_comm, and [obviously] the second one *should* fail. I'll take
a look in the Open MPI
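(A minimal illustration, for later readers, of why that second call has to
fail: MPI_Comm_free resets the caller's handle to MPI_COMM_NULL, so calling
it again passes a null communicator. The sketch is mine, not the nxtval code
being discussed.)

    #include <mpi.h>
    #include <stdio.h>

    int main( int argc, char **argv )
    {
        MPI_Comm dup;

        MPI_Init( &argc, &argv );
        MPI_Comm_dup( MPI_COMM_WORLD, &dup );

        MPI_Comm_free( &dup );           /* ok: frees and resets the handle */
        if ( dup == MPI_COMM_NULL )
            printf( "handle is MPI_COMM_NULL after the first free\n" );

        /* A second MPI_Comm_free( &dup ) would pass MPI_COMM_NULL and is
         * erroneous; with parameter checking enabled the library reports
         * it instead of silently continuing. */

        MPI_Finalize();
        return 0;
    }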
Hi George,
It does seem to me that your execution is correct.
Here are my stats...
-bash-3.00$ mpicc -V
Intel(R) C Compiler for Intel(R) EM64T-based applications, Version
9.1  Build 20060925  Package ID: l_cc_c_9.1.044
Copyright (C) 1985-2006 Intel Corporation. All rights reserved.
-bash-3.00
Forgot to mention, to get Jeff's program to work, I modified
MPE_Counter_free() to check for MPI_COMM_NULL, i.e.
    if ( *counter_comm != MPI_COMM_NULL ) {    /* new line */
        MPI_Comm_rank( *counter_comm, &myid );
        ...
        MPI_Comm_free( counter_comm );
    }
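(Spelled out as a self-contained helper, the same guard looks roughly like
the sketch below; the function name is my own placeholder, and the part that
notifies the counter server is elided exactly as in the snippet above.)

    #include <mpi.h>

    /* Free the counter communicator only if it is still valid; safe to
     * call again afterwards because MPI_Comm_free resets the handle to
     * MPI_COMM_NULL. */
    void counter_comm_free( MPI_Comm *counter_comm )
    {
        int myid;

        if ( *counter_comm != MPI_COMM_NULL ) {
            MPI_Comm_rank( *counter_comm, &myid );
            /* ... tell the counter server to shut down, as in
             *     MPE_Counter_free ... */
            MPI_Comm_free( counter_comm );
        }
    }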
Hi George,
Just out of curiosity, what version of OpenMPI that you used works fine
with Jeff's program (after adding MPI_Finalize)? The program aborts with
either mpich2-1.0.5p4 or OpenMPI-1.2.3 on an AMD x86_64 box (Ubuntu 7.04)
because MPI_Comm_rank() is called with MPI_COMM_NULL.
With OpenMPI:
Jeff,
With the proper MPI_Finalize added at the end of the main function,
your program works fine with the current version of Open MPI up to 32
processors. Here is the output I got for 4 processors:
I am 2 of 4 WORLD processors
I am 3 of 4 WORLD processors
I am 0 of 4 WORLD processors
I am 1 of
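(For readers picking this up later: the fix referred to is just the usual
pairing of MPI_Init and MPI_Finalize around the program body; the minimal
sketch below is mine, not Jeff's actual test.)

    #include <mpi.h>
    #include <stdio.h>

    int main( int argc, char **argv )
    {
        int rank, size;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        MPI_Comm_size( MPI_COMM_WORLD, &size );

        printf( "I am %d of %d WORLD processors\n", rank, size );

        /* ... counter/nxtval test would go here ... */

        MPI_Finalize();   /* the call that was missing */
        return 0;
    }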
Hi,
ANL suggested I post this question to you. This is my second
posting, but now with the proper attachments.
--- Begin Message ---
Hello All,
This will probably turn out to be my fault as I haven't used MPI in a
few years.
I am attempting to use an MPI implementation of a "nxtval" (see
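(For later readers: "nxtval" usually refers to the shared-counter example
from Gropp and Lusk's "Using MPI", where one rank acts as a counter server
and the others request successive values from it. The sketch below is my own
rough reconstruction of that pattern, with made-up tags and names, not the
code attached to the original post.)

    #include <mpi.h>
    #include <stdio.h>

    #define TAG_REQUEST 1
    #define TAG_STOP    2

    /* Rank size-1 serves a shared counter until every client has told it
     * to stop; each request is answered with the current value. */
    static void counter_server( MPI_Comm comm, int nclients )
    {
        int value = 0, stops = 0, dummy;
        MPI_Status status;

        while ( stops < nclients ) {
            MPI_Recv( &dummy, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                      comm, &status );
            if ( status.MPI_TAG == TAG_STOP ) {
                stops++;
            } else {
                MPI_Send( &value, 1, MPI_INT, status.MPI_SOURCE,
                          TAG_REQUEST, comm );
                value++;
            }
        }
    }

    /* Clients call this to fetch the next counter value from the server. */
    static int nxtval( MPI_Comm comm, int server )
    {
        int dummy = 0, value;
        MPI_Send( &dummy, 1, MPI_INT, server, TAG_REQUEST, comm );
        MPI_Recv( &value, 1, MPI_INT, server, TAG_REQUEST, comm,
                  MPI_STATUS_IGNORE );
        return value;
    }

    int main( int argc, char **argv )
    {
        int rank, size, server, dummy = 0;

        MPI_Init( &argc, &argv );
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        MPI_Comm_size( MPI_COMM_WORLD, &size );

        server = size - 1;
        if ( rank == server ) {
            counter_server( MPI_COMM_WORLD, size - 1 );
        } else {
            printf( "rank %d got counter value %d\n",
                    rank, nxtval( MPI_COMM_WORLD, server ) );
            MPI_Send( &dummy, 1, MPI_INT, server, TAG_STOP, MPI_COMM_WORLD );
        }

        MPI_Finalize();
        return 0;
    }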