What test are you referring to?
config.log contains the test results for the features that configure
looks for. Failure of some thread test does not mean OpenMPI can't
support threads. In fact, I was able to run a simple/correct
MPI_THREAD_MULTIPLE program that uses pthreads with openmpi-1.2.3 an
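For reference, here is a minimal sketch of the kind of check I mean;
it is an illustration, not the exact program from the thread. It asks
the library at runtime what thread level it actually provides,
regardless of what config.log recorded at configure time.

    /* Illustrative check: what thread level does the MPI library provide? */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE not provided (level %d)\n", provided);
        else
            printf("MPI_THREAD_MULTIPLE supported\n");
        MPI_Finalize();
        return 0;
    }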
Looks like everything is fine ... I had the MPI parameter check
option disabled; that's why it didn't complain about calling free on
MPI_COMM_NULL. If I activate the check, the program now fails as
expected (i.e., it complains and gives up on the MPI_Comm_free).
Thanks,
george.
On Jun
On Thu, 2007-06-21 at 13:27 -0500, Anthony Chan wrote:
> It seems the hang only occurs when OpenMPI is built with
> --enable-mpi-threads --enable-progress-threads. [My OpenMPI builds use
> gcc (GCC) 4.1.2 (Ubuntu 4.1.2-0ubuntu4)]. Probably
> --enable-mpi-threads is the relevant option to cause the hang.
It seems the hang only occurs when OpenMPI is built with
--enable-mpi-threads --enable-progress-threads. [My OpenMPI builds use
gcc (GCC) 4.1.2 (Ubuntu 4.1.2-0ubuntu4)]. Probably
--enable-mpi-threads is the relevant option to cause the hang.
Also, control-c at the terminal that launches mpiexec
I was using the latest trunk. Now that you raised the issue about the
code ... I read it. You're right: for the server process (rank n-1 in
MPI_COMM_WORLD) there are 2 calls to MPI_Comm_free for the
counter_comm, and [obviously] the second one *should* fail. I'll take
a look in the Open MPI
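A minimal standalone sketch of why the second free must fail; this is
my illustration, not the MPE or Open MPI code. MPI_Comm_free sets the
caller's handle to MPI_COMM_NULL, so a second call on the same handle
passes MPI_COMM_NULL and is erroneous.

    /* Sketch: double MPI_Comm_free on the same handle. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm dup;

        MPI_Init(&argc, &argv);
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);

        MPI_Comm_free(&dup);  /* ok: dup is set to MPI_COMM_NULL         */
        MPI_Comm_free(&dup);  /* erroneous: frees MPI_COMM_NULL; caught
                                 when MPI parameter checking is enabled  */
        MPI_Finalize();
        return 0;
    }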
mpi4py has emerged as the best Python MPI bindings. It has the
best coverage of the MPI spec, the best test coverage, and fantastic
performance:
http://www.cimec.org.ar/ojs/index.php/cmm/article/view/8/11
Brian
On 6/20/07, de Almeida, Valmor F. wrote:
Hello list,
I would appreciate rec
On Tue, Jun 19, 2007 at 11:28:33AM -0700, George Bosilca wrote:
>
> Does the deadlock happen with or without your patch? If it's with your
> patch, the problem might come from the fact that you start 2
> processes on each node and they will share the port range (because of
> your patch).
If pro
On Tue, Jun 19, 2007 at 03:40:36PM -0400, Jeff Squyres wrote:
> On Jun 19, 2007, at 2:24 PM, George Bosilca wrote:
>
> > 1. I don't believe the OS releases the binding when we close the
> > socket. As an example, on Linux the kernel sockets are released at a
> > later moment. That means the so
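An illustrative sketch of the effect being described here; this is
generic POSIX socket code, not Open MPI internals. On Linux,
connections on a just-closed port can linger in TIME_WAIT, so an
immediate re-bind can fail with EADDRINUSE unless SO_REUSEADDR is set
before bind().

    /* Sketch: re-binding a recently closed port. */
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int listen_on(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        struct sockaddr_in addr;

        /* Without this, bind() may fail while old connections
         * on the port are still in TIME_WAIT. */
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            close(fd);
            return -1;
        }
        listen(fd, 16);
        return fd;
    }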
Thanks for all your replies and sorry for the delay in getting back to you.
On Tue, Jun 19, 2007 at 01:40:21PM -0400, Jeff Squyres wrote:
> On Jun 19, 2007, at 9:18 AM, Chris Reeves wrote:
>
> > Also attached is a small patch that I wrote to work around some firewall
> > limitations on the node
Thanks for the info Jeff! All of my "test" nodes are temporarily busy,
but I should be able to play with this some more tomorrow.
I'll update the post if I have more questions or find any additional
tips ;-)
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville
Hi George,
It does seem to me that your execution is correct.
Here are my stats...
-bash-3.00$ mpicc -V
Intel(R) C Compiler for Intel(R) EM64T-based applications, Version
9.1 Build 20060925 Package ID: l_cc_c_9.1.044
Copyright (C) 1985-2006 Intel Corporation. All rights reserved.
-bash-3.00
On Jun 16, 2007, at 3:22 AM, Francesco Pietra wrote:
The question is whether, in compiling openmpi, the flag
libnuma is needed or simply useful also in the special
arrangement of the Tyan S2895 Thunder K8WE with two
dual-core Opterons and eight memory modules of two GB
each.
At worst, it is not
Ick; I'm surprised that we don't have this info on the FAQ. I'll try
to rectify that shortly.
How are you launching your jobs through SLURM? OMPI currently does
not support the "srun -n X my_mpi_application" model for launching
MPI jobs. You must either use the -A option to srun (i.e., g
Forgot to mention, to get Jeff's program to work, I modified
MPE_Counter_free() to check for MPI_COMM_NULL, i.e.
    if ( *counter_comm != MPI_COMM_NULL ) {   /* new line */
        MPI_Comm_rank( *counter_comm, &myid );
        ...
        MPI_Comm_free( counter_comm );
    }
Hi George,
Just out of curiosity, what version of OpenMPI did you use that works
fine with Jeff's program (after adding MPI_Finalize)? The program aborts
with either mpich2-1.0.5p4 or OpenMPI-1.2.3 on an AMD x86_64 box (Ubuntu
7.04) because MPI_Comm_rank() is called with MPI_COMM_NULL.
With OpenMPI: