On Wed, Jun 9, 2010 at 7:58 AM, Jeff Squyres wrote:
> On Jun 8, 2010, at 12:33 PM, David Turner wrote:
>
> > Please verify: if using openib BTL, the only threading model is
> > MPI_THREAD_SINGLE?
>
> Up to MPI_THREAD_SERIALIZED.
>
> > Is there a timeline for full support of MPI_THREAD_MULTIPLE in Open MPI's
> > openib BTL?
>
> IBM has been making some good strides
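As background to the thread-level question above, an application requests a threading level at startup and must check what the library actually granted. A minimal sketch using the standard MPI API (not tied to any particular BTL; requires an MPI installation to build with `mpicc`):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    /* Request full multi-threading; MPI reports what it can deliver. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        /* e.g. with the openib BTL the library may grant only
         * MPI_THREAD_SERIALIZED: multiple threads may call MPI,
         * but only one at a time. */
        printf("got thread level %d; serializing MPI calls\n", provided);
    }

    MPI_Finalize();
    return 0;
}
```

The key point is that `provided` may be lower than the requested level, and a portable application must adapt rather than assume the request was honored.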
I once had a crash in libpthread something like the one below. The
very un-obvious cause was a stack overflow on subroutine entry - large
automatic array.
HTH,
Douglas.
On Wed, Mar 04, 2009 at 03:04:20PM -0500, Jeff Squyres wrote:
> On Feb 27, 2009, at 1:56 PM, Mahmoud Payami wrote:
I am using intel lc_prof-11 (and its own mkl) and have built
openmpi-1.3.1 with the configure options "FC=ifort F77=ifort CC=icc
CXX=icpc". Then I built my application.
The Linux box is a dual-socket quad-core AMD64 machine. In the middle of
running my application
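For reference, the build described above corresponds roughly to the following; the install prefix and job count are assumptions, not from the original message:

```shell
# Build Open MPI 1.3.1 with the Intel compilers, as described above.
# --prefix is an assumption; point it wherever you install software.
./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=$HOME/openmpi-1.3.1
make -j4 all
make install
```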
Open MPI currently makes minimal use of hidden "progress" threads, but
we will likely be experimenting with more usage of them over time
(previous MPI implementations have shown that progress threads can be
a big performance win for large messages, although they do tend to
add a bit of latency).
I have used POSIX threading and Open MPI without problems on our Opteron
2216 Cluster (4 cores per node). Moving to core-level parallelization
with multi threading resulted in significant performance gains.
Sam Adams wrote:
I have been looking, but I haven't really found a good answer about
sy