I'm not sure - just fishing for possible answers. When we see high CPU usage,
it usually occurs during MPI communications - when a process is waiting for a
message to arrive, it polls at a high rate to keep the latency as low as
possible. Since you have one process "sleep" before calling finalize, the
other processes are likely polling like that while they wait for it.
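For illustration, here is a minimal sketch (not code from this thread; the
30-second sleep and the barrier are just stand-ins for the real application):
every rank except rank 0 enters MPI_Barrier right away and spins at close to
100% CPU while rank 0 sleeps, because the waiting ranks poll at a high rate.

    #include <mpi.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            sleep(30);            /* rank 0 is "late" on purpose */
        }

        /* The other ranks sit here busy-polling, which shows up as high CPU */
        MPI_Barrier(MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }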
Thanks all for your comments
Ralph
What I was initially looking for is a tool (or an option of orte-clean) that
cleans up the mess you are talking about, but only the mess created by a
single mpirun command. As far as I understand, orte-clean cleans up all the
mess on a node associated with all mpirun jobs, not just one.
Hi Ralph, thank you for your comment.
I understand what you mean. As you pointed out, I have one process sleep
before finalize, so the MUMPS finalize might affect the behavior.
I will remove the MUMPS finalize (and/or initialize) calls from my test
program and try again next Monday to make sure.
Open MPI doesn't really do much file I/O at all. We do a little during startup
/ shutdown, but during the majority of the MPI application run, there's
little/no file I/O from the MPI layer.
Note that the above statements assume that you are not using the MPI I/O
function calls. If your application does use them, that of course changes the
picture.
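To make that concrete, here is a minimal sketch (the file name "out.dat" and
the data are just placeholders) of the kind of MPI I/O calls meant above -
only when an application uses routines like these does the MPI layer itself
touch files during the run:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        value = rank;

        /* Collective open and write: each rank writes one int at its own offset */
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, (MPI_Offset)(rank * sizeof(int)),
                              &value, 1, MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }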
Dear all,
I would like to use Open MPI on Windows for a CFD solver instead of MPICH2.
My solver is developed in Fortran 77 and driven by a C++ interface; both
levels call MPI functions.
So, I installed OpenMPI-1.6.2-x64 on my system and compiled my code
successfully, but it crashes at runtime.
>You can usually resolve that by configuring with --disable-dlopen
OK, I will try.
So what is the purpose of enabling dlopen? Why is dlopen not disabled by
default? I mean, why is the high-traffic configuration enabled by default?
Regards,
Mahmood