Thanks for the reply.
When I modify the code, it still fails with a segmentation fault.
My latest code looks like:
#include "mpi.h"
#include <stdio.h>   /* the original header names were stripped by the archive; */
#include <stdlib.h>  /* standard C headers are shown here only as placeholders  */
#include <string.h>
#include <math.h>
#include <time.h>
#include <unistd.h>
#define MASTER 0
#define ARRAYSIZE 4000
int *master
Hi,
I have written a parallel program, and when I run it on 4, 8, and 16 nodes I
calculate the execution time at the master node using MPI_Wtime. The problem
is that the execution time increases rapidly: the non-parallel program takes
55 sec, while the parallel program takes 60 sec on 2 nodes and 74 sec on 4
nodes.
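For reference, here is a minimal sketch of measuring wall-clock time at the
master with MPI_Wtime (the structure and variable names are illustrative, not
taken from the original program):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank;
    double t_start = 0.0, t_end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);          /* synchronize before starting the clock */
    if (rank == 0)
        t_start = MPI_Wtime();

    /* ... distribute work, compute, collect results ... */

    MPI_Barrier(MPI_COMM_WORLD);          /* wait until every rank has finished */
    if (rank == 0) {
        t_end = MPI_Wtime();
        printf("elapsed time: %f seconds\n", t_end - t_start);
    }

    MPI_Finalize();
    return 0;
}

Note that a time measured this way includes the cost of distributing and
collecting the data, so for a small problem the communication overhead can
outweigh the parallel speedup.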
Hi, could you also attach your current code?
Regards
Björn
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of seshendra seshu
Sent: 3 May 2012 13:49
To: Open MPI Users
Subject: [OMPI users] Regarding the execution time calculation
Hi,
I have written a parallel program and, when I run it on 4, 8, and 16 nodes,
the execution time measured at the master with MPI_Wtime increases as I add
nodes.
At 12:51 03/05/2012, you wrote:
Thanks for the reply.
When I modify the code, it still fails with a segmentation fault.
Are you running it on different servers, or on the same server?
If you are testing on one server, perhaps your GPU is out of memory.
Check your cudaMalloc calls, perhaps memory is
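For what it's worth, a minimal sketch of that kind of check, testing the
return code of each cudaMalloc (the buffer name and size below are only
illustrative):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    float *d_buf = NULL;
    size_t bytes = (size_t)4000 * sizeof(float);   /* illustrative size */

    cudaError_t err = cudaMalloc((void **)&d_buf, bytes);
    if (err != cudaSuccess) {
        /* cudaGetErrorString reports e.g. "out of memory" */
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    /* ... use d_buf ... */

    cudaFree(d_buf);
    return 0;
}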
I have solved this issue. All the paths were correct but I still had to use
mpirun -x LD_LIBRARY_PATH while executing the job.
Another option is to update your .bashrc/.cshrc.
Just add the LD_LIBRARY_PATH to the file and the updated variable will be
available on remote machines as well.
(I ma
Hello,
I have a problem when running an MPI program with the Open MPI library. I did
the following.
1.- I installed OFED 1.5.4 from RHEL. The hardware is QLogic 7340 IB cards.
2.- I am using Open MPI 1.4.3, the one that comes with OFED 1.5.4.
3.- I have checked the Open MPI website, and I have all
You apparently are running on a cluster that uses Torque, yes? If so, it won't
use ssh to do the launch - it uses Torque to do it, so the passwordless ssh
setup is irrelevant.
Did you ensure that your LD_LIBRARY_PATH includes the OMPI install lib location?
On May 3, 2012, at 9:59 AM, Acero Fer
Not related to this question, but just curious: is MPI_Wtime context-switch safe?
--
Sent from my iPhone
On May 3, 2012, at 4:48 AM, seshendra seshu wrote:
> Hi,
> I have written a parallel program and when I run my program on 4, 8, and 16 nodes
> and calculate the execution time at the master using MPI_Wtime
I'm attempting to use MPI over TCP; the attached (rather trivial) code
gets stuck in MPI_Send. The TCP dumps indicate that the TCP
connection is made successfully to the right port, but the actual data
doesn't appear to be sent.
I'm beginning to suspect that there's some basic problem with
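The attachment is not preserved in the archive; a minimal sketch of the kind
of trivial send/receive test described here (the value and tag are
illustrative) would be:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* rank 0 sends one int to rank 1 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

Running it across two different hosts (e.g. mpirun -np 2 -hostfile
yourhostfile ./a.out) forces Open MPI to use the TCP BTL rather than shared
memory, which is what exercises the problem described below.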
I tried your program on a single node and it worked fine. Yes, TCP message
passing in Open MPI has been working well for some time.
I have a few suggestions.
1. Can you run something like hostname successfully (mpirun -np 10 -hostfile
yourhostfile hostname)?
2. If that works, then you can also ru
On Thu, 03 May 2012, Rolf vandeVaart wrote:
> I tried your program on a single node and it worked fine.
It works fine on a single node, but deadlocks when it communicates
between nodes. Single-node communication doesn't use TCP by default.
> Yes, TCP message passing in Open MPI has been working well for some time.