Surely this is a problem with the scheduler your system uses,
rather than with MPI?
On Wed, 2010-03-03 at 00:48, abc def wrote:
> Hello,
>
> I wonder if someone can help.
>
> The situation is that I have an MPI-parallel Fortran program. I run it
> and it's distributed over N cores, and each
It works after creating a new PE, and even from the command prompt without
using SGE.
Thanks
Rangam
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] On Behalf Of
Reuti [re...@staff.uni-marburg.de]
Sent: Tuesday, March 02, 2010 12:35 PM
To: Open MPI Users
Hello,
I wonder if someone can help.
The situation is that I have an MPI-parallel Fortran program. I run it
and it's distributed over N cores, and each of these processes must
call an external program.
This external program is also an MPI program; however, I want to run it
in serial, on the co
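One way each rank could launch the external program as its own single-process
MPI job is MPI_Comm_spawn with maxprocs = 1; a minimal C sketch (the name
./external.e is a placeholder, and the same call exists in the Fortran
bindings for the Fortran caller):

/* Hedged sketch: each rank spawns one single-process instance of an
 * external MPI program, connected through an intercommunicator. */
#include <mpi.h>

void run_external(void)
{
    MPI_Comm child;

    /* "./external.e" stands in for the real external executable. */
    MPI_Comm_spawn("./external.e", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    /* ... optionally exchange data with the child over 'child' ... */
    MPI_Comm_disconnect(&child);  /* matches the child's own disconnect */
}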
On 02.03.2010, at 19:26, Eugene Loh wrote:
Eugene Loh wrote:
Addepalli, Srirangam V wrote:
I tried using the following syntax with a machinefile:
mpirun -np 14 -npernode 7 -machinefile machinefile ven_nw.e
It "works" for me. I'm not using SGE, though.
When it's tightly integrated with SGE
Eugene Loh wrote:
Addepalli, Srirangam V wrote:
I tried using the following syntax with a machinefile:
mpirun -np 14 -npernode 7 -machinefile machinefile ven_nw.e
It "works" for me. I'm not using SGE, though.
% cat machinefile
% mpirun -tag-output -np 14 -npernode 7 -machinefile machinefile h
Correct, I was not clear. It spawns more than 7 processes per node (it spawns
8 of them).
Rangam
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] On Behalf Of
Ralph Castain [r...@open-mpi.org]
Sent: Tuesday, March 02, 2010 11:55 AM
To: Open MPI Users
Addepalli, Srirangam V wrote:
I tried using the following syntax with a machinefile:
mpirun -np 14 -npernode 7 -machinefile machinefile ven_nw.e
It "works" for me. I'm not using SGE, though.
% cat machinefile
node0
node0
node0
node0
node0
node0
node0
node1
node1
node1
node1
node1
node1
node1
%
When you say "it fails", what do you mean? That it doesn't run at all, or
that it still fills each node, or...?
On Tue, Mar 2, 2010 at 9:49 AM, Addepalli, Srirangam V <
srirangam.v.addepa...@ttu.edu> wrote:
> Hello All.
> I am trying to run a parallel application that should use one core less
>
Hello All.
I am trying to run a parallel application that should use one core less than
the number of cores that are available on the system. Are there any flags that
I can use to specify this?
I tried using the following syntax with a machinefile:
openmpi-1.4-BM/bin/mpirun -np 14 -npernode 7 -machinefile machinefile ven_nw.e
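For what it's worth, the same effect can also be expressed by capping the slot
count in the hostfile instead of listing each host seven times (a sketch,
assuming the two nodes node0 and node1 from the machinefile quoted above each
have 8 cores):

% cat machinefile
node0 slots=7
node1 slots=7
% mpirun -np 14 -machinefile machinefile ./ven_nw.e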
On Sun, Feb 28, 2010 at 11:11 PM, Fernando Lemos wrote:
> Hello,
>
>
> I'm trying to come up with a fault-tolerant Open MPI setup for research
> purposes. I'm doing some tests now, but I'm stuck with a segfault when
> I try to restart my test program from a checkpoint.
>
> My test program is the "r
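For context, the checkpoint/restart cycle being tested is roughly the
following (a sketch, assuming a BLCR-enabled build; ./testprog, the PID, and
the snapshot handle are placeholders, the real handle being the one printed
by ompi-checkpoint):

% mpirun -np 2 -am ft-enable-cr ./testprog
% ompi-checkpoint <PID of mpirun>
% ompi-restart ompi_global_snapshot_<PID>.ckpt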
Hello,
Yes, I compiled Open MPI with --enable-heterogeneous. More precisely, I built
it with:
$ ./configure --prefix=/tmp/openmpi --enable-heterogeneous
--enable-cxx-exceptions --enable-shared
--enable-orterun-prefix-by-default
$ make all install
I attach the output of ompi_info from my two machines.
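As a quick cross-check (the exact label may differ between Open MPI versions),
the heterogeneous build should also be visible in that output:

% ompi_info | grep -i hetero
  Heterogeneous support: yes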
Did you configure Open MPI with --enable-heterogeneous?
On Feb 28, 2010, at 1:22 PM, TRINH Minh Hieu wrote:
> Hello,
>
> I have some problems running MPI on my heterogeneous cluster. More
> precisely, I get a segmentation fault when sending a large array (about
> 1) of double from an i686 machine
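For reference, the transfer being described boils down to something like the
following (a minimal sketch; the array length and the ranks are placeholders,
not the reporter's actual values):

/* Hedged sketch: rank 0 (e.g. the i686 box) sends a large array of
 * doubles to rank 1 (e.g. the x86_64 box). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    const int n = 100000;              /* placeholder size */
    double *buf = malloc(n * sizeof(double));  /* contents irrelevant here */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(buf);
    MPI_Finalize();
    return 0;
}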
Hi,
I've recently been trying to develop a client-server distributed file
system (for my thesis) using MPI. The communication between the
machines is working great; however, whenever the MPI_Comm_accept()
function is called, the server starts consuming 100% of the CPU.
One interest
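For reference, the accept path in question typically looks something like this
(a sketch, not the actual thesis code); the near-100% CPU while blocked in
MPI_Comm_accept() is the expected behaviour of Open MPI's default busy-polling
progress engine:

/* Hedged sketch of a server blocking in MPI_Comm_accept(). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port);
    printf("server listening on %s\n", port);

    /* Blocks here, spinning on the progress engine, until a client connects. */
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

    /* ... serve the client ... */
    MPI_Comm_disconnect(&client);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}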
I found the problem - the orted wasn't whacking any lingering session
directories when it exited. Missing one line...sigh.
Rolf: I have submitted a patch for the 1.4 branch. Can you please review? It is
a trivial fix.
David: Thanks for bringing it to my attention. Sorry for the problem.
Ralph