On Wed, Feb 27, 2008 at 10:01:06AM -0600, Brian W. Barrett wrote:
> The only solution to this problem is to suck it up and audit all the code
> to eliminate calls to opal_progress() in situations where infinite
> recursion can result. It's going to be long and painful, but there's no quick
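A schematic of the recursion being described, as I read the thread (the
function names below are illustrative of the call chain, not the exact
Open MPI symbols):

    /* Illustrative call chain, assuming the stack discussed in this
     * thread (names are schematic, not exact Open MPI symbols):
     *
     *   MPI_Recv()
     *     -> opal_progress()             poll the network for events
     *       -> recv_frag_callback()      unexpected fragment arrives
     *         -> OMPI_FREE_LIST_WAIT()   free list empty, so it ...
     *           -> opal_progress()       ... progresses again: recursion
     */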
To clean this up for the web archives, we were able to get it to work by
using '--disable-dlopen'
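For anyone finding this in the archives, the workaround amounts to
something like the following (the install prefix here is illustrative):

    ./configure --prefix=/opt/openmpi-1.2.5 --disable-dlopen
    make all install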
Tim
Tim Prins wrote:
Scott,
I can replicate this on big red. Seems to be a libtool problem. I'll
investigate...
Thanks,
Tim
Teige, Scott W wrote:
Hi all,
Attempting a build of 1.2.5 on a p
Hi, and thanks for the feedback everyone.
George Bosilca wrote:
Brian is completely right. Here is a more detailed description of this
problem.
[...]
On the other hand, I hope that not many users write such applications.
This is the best way to completely kill the performance of any MPI
imp
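As an illustration of the application behavior being discussed (this
sketch is mine, not from the original mail): a receiver that pre-posts
its receives lets arriving messages match immediately, while a receiver
that falls behind a flood of sends forces the library to buffer every
message as an unexpected fragment. The buffer sizes, tag, and request
count below are made up.

    #include <mpi.h>

    /* Pre-post a batch of receives so arriving messages match posted
     * requests instead of piling up in the unexpected-message queue. */
    void drain_posted(void)
    {
        enum { NREQ = 64, LEN = 1024 };
        static char bufs[NREQ][LEN];
        MPI_Request reqs[NREQ];
        int i;

        for (i = 0; i < NREQ; i++) {
            MPI_Irecv(bufs[i], LEN, MPI_CHAR, MPI_ANY_SOURCE,
                      0, MPI_COMM_WORLD, &reqs[i]);
        }
        MPI_Waitall(NREQ, reqs, MPI_STATUSES_IGNORE);
    }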
On Thu, 28 Feb 2008, Gleb Natapov wrote:
> The trick is to call progress only from functions that are called
> directly by a user process. Never call progress from a callback function.
> The main offenders of this rule are calls to OMPI_FREE_LIST_WAIT(). They
> should be changed to OMPI_FREE_LIST
In this particular case, I don't think the solution is that obvious.
If you look at the stack in the original email, you will notice how we
get into this. The problem here is that FREE_LIST_WAIT is used to
get a fragment to store an unexpected message. If this macro returns
NULL (in oth
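A sketch of the change being debated, assuming the 1.2-era free list
macros (OMPI_FREE_LIST_GET being the non-blocking counterpart of
OMPI_FREE_LIST_WAIT; the list and item names here are hypothetical):

    /* Non-blocking variant: returns immediately with item set to NULL
     * if the list is empty, instead of spinning in opal_progress() the
     * way OMPI_FREE_LIST_WAIT does. (Sketch; error handling elided.) */
    ompi_free_list_item_t *item;
    int rc;

    OMPI_FREE_LIST_GET(&frag_list, item, rc);
    if (NULL == item) {
        /* Out of fragments: the unexpected message cannot be buffered
         * right now, so the caller must fail or retry here rather than
         * recurse into the progress engine. */
    }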
On Feb 28, 2008, at 2:45 PM, John Markus Bjørndalen wrote:
Hi, and thanks for the feedback everyone.
George Bosilca wrote:
Brian is completely right. Here is a more detailed description of this
problem.
[...]
On the other hand, I hope that not many users write such applications.
This i
Dear All,
I am a graduate student working on molecular dynamics simulation. My
professor/adviser is planning to buy Linux-based clusters. But before that he
wanted me to parallelize a serial code on molecular dynamics simulations and
test it on an Intel Core 2 Duo machine with Fedora 8 on it. I ha
Hey Folks,
Anyone got ScaLAPACK and BLACS working, and not just compiled, under
OS X 10.5 in 64-bit mode?
The FAQ site directions were followed and everything compiles just
fine. But ALL of the single precision routines and many of the double
precision routines in the TESTING directory fail w
On Feb 28, 2008, at 5:32 PM, Chembeti, Ramesh (S&T-Student) wrote:
Dear All,
I am a graduate student working on molecular dynamics simulation. My
professor/adviser is planning to buy Linux-based clusters. But
before that he wanted me to parallelize a serial code on molecular
dynamics sim
Dear Mr. Palen
Thank you very much for your prompt reply. I will let you know if I
face any problems in the future.
Ramesh
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Brock Palen
Sent: Thursday, February 28, 2008 4:51 PM
To: Open MPI U
On Thu, 2008-02-28 at 16:32 -0600, Chembeti, Ramesh (S&T-Student) wrote:
> Dear All,
>
> I am a graduate student working on molecular dynamics simulation. My
> professor/adviser is planning to buy Linux-based clusters. But before that he
> wanted me to parallelize a serial code on molecular dyna
Yes, I have used MPI subroutines to parallelize it. If you want me to
send my code I can do that; this is my first effort at parallel
computing, so your suggestions and ideas are valuable to me.
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.o