Why are you using ompi-clean for this purpose instead of a simple ctrl-c?
It wasn't intended for killing jobs, but only for attempting cleanup of lost
processes in extremity (i.e., when everything else short of rebooting the node
fails). So it isn't robust by any means.
On May 6, 2011, at 11:5
Hi.
I am having a problem when I try to kill a spawned process. I am using ompi
1.4.3. I use the command ompi-clean to kill all the processes I have
running, but the ones that were dynamically spawned are not killed.
Any idea?
Thanks in advance.
On 5/6/2011 10:22 AM, Tim Hutt wrote:
On 6 May 2011 16:45, Tim Hutt wrote:
On 6 May 2011 16:27, Tim Prince wrote:
If you want to use the MPI Fortran library, don't convert your Fortran to C.
It's difficult to understand why you would consider f2c a "simplest way,"
but at least it should all
On 6 May 2011 16:45, Tim Hutt wrote:
> On 6 May 2011 16:27, Tim Prince wrote:
>> If you want to use the MPI Fortran library, don't convert your Fortran to C.
>> It's difficult to understand why you would consider f2c a "simplest way,"
>> but at least it should allow you to use ordinary C MPI fun
Please find test program...
mar_f_dp.f
Please find callstack of test program...
Please find the callstack of my application...
Greetings!!!
I am observing a crash in the MPI_Allreduce() call from my actual application.
After debugging, I found that MPI_Allreduce() with MPI_DOUBLE_PRECISION
returns NULL for the following code in op.h:
if (0 != (op->o_flags & OMPI_OP_FLAGS_INTRINSIC)) {
op->o_func.intrinsic.fns[ompi_op_ddt_map[
On 6 May 2011 16:27, Tim Prince wrote:
> If you want to use the MPI Fortran library, don't convert your Fortran to C.
> It's difficult to understand why you would consider f2c a "simplest way,"
> but at least it should allow you to use ordinary C MPI function calls.
Sorry, maybe I wasn't clear.
Thank you Jed, sounds like the log_summary should be sufficient for my
needs!
I appreciate your help :)
Have a great weekend!
Paul Monday
On 5/6/11 3:38 AM, Jed Brown wrote:
On Thu, May 5, 2011 at 23:15, Paul Monday (Parallel Scientific)
<paul.mon...@parsci.com> wrote:
Hi, I'm h
On 5/6/2011 7:58 AM, Tim Hutt wrote:
Hi,
I'm trying to use PARPACK in a C++ app I have written. This is a
FORTRAN MPI routine used to calculate SVDs. The simplest way I found
to do this is to use f2c to convert it to C, and then call the
resulting functions from my C++ code.
However PARPACK re
Hi,
I'm trying to use PARPACK in a C++ app I have written. This is a
FORTRAN MPI routine used to calculate SVDs. The simplest way I found
to do this is to use f2c to convert it to C, and then call the
resulting functions from my C++ code.
However PARPACK requires that I write some user-defined o
Hi Peter,
Thanks that helped a lot.
BTW: please let me know if you have any comments on the "Windows:
MPI_Allreduce() crashes when using MPI_DOUBLE_PRECISION" thread.
Thank you.
-Hiral
2011/5/4 Peter Kjellström:
> On Wednesday, May 04, 2011 04:04:37 PM hi wrote:
>> Greetings !!!
>>
>> I am observing foll
On Thu, May 5, 2011 at 23:15, Paul Monday (Parallel Scientific) <
paul.mon...@parsci.com> wrote:
> Hi, I'm hoping someone can help me locate a SpMV benchmark that runs w/
> Open MPI so I can benchmark how my systems are interacting with the network
> as I add nodes / cores to the pool of systems.