Hi,
I read some how-tos at the Open MPI official site but I still have some problems here.
I built a Kerrighed cluster with 4 nodes so that they look like one big SMP
machine. Every node has 1 processor with a single core.
1) Can MPI programs run on this kind of machine? If
yes, could anyone sh
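One way to find out is to try a minimal MPI program under mpirun. A sketch, assuming Open MPI and its Fortran wrapper compiler (mpif90) are installed on the single-system-image node; the file name hello.f90 is illustrative:

program hello
  ! Minimal test: each rank reports itself.
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  print *, 'rank', rank, 'of', nprocs
  call MPI_FINALIZE(ierr)
end program hello

Build and run with, for example: mpif90 hello.f90 -o hello && mpirun -np 4 ./hello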
amjad ali wrote:
Please tell me: if we have multiple such ISEND-RECV calls sequentially for
sending and receiving data, do we need to declare separate
status(MPI_STATUS_SIZE) arrays, for example status1, status2, ...; or
will a single declaration work for all?
First of all, it really is good
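On the status question quoted above, a minimal sketch (not the poster's code; the ring neighbours and buffer names are illustrative assumptions): one statuses array with a column per request, passed to MPI_WAITALL, covers several ISEND/IRECV calls, so separate status1, status2, ... declarations are possible but not required.

program status_sketch
  use mpi
  implicit none
  integer, parameter :: n = 100
  double precision :: sbuf(n), rbuf(n)
  ! One request and one status column per outstanding nonblocking call.
  integer :: requests(2), statuses(MPI_STATUS_SIZE, 2)
  integer :: ierr, rank, nprocs, left, right

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  left  = mod(rank - 1 + nprocs, nprocs)
  right = mod(rank + 1, nprocs)
  sbuf = dble(rank)

  ! Post both nonblocking calls, then complete them together; the second
  ! dimension of "statuses" only needs to match the number of requests.
  call MPI_IRECV(rbuf, n, MPI_DOUBLE_PRECISION, left,  0, MPI_COMM_WORLD, requests(1), ierr)
  call MPI_ISEND(sbuf, n, MPI_DOUBLE_PRECISION, right, 0, MPI_COMM_WORLD, requests(2), ierr)
  call MPI_WAITALL(2, requests, statuses, ierr)

  call MPI_FINALIZE(ierr)
end program status_sketch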
On Fri, Aug 14, 2009 at 1:32 AM, Eugene Loh wrote:
> amjad ali wrote:
>
> I am parallelizing a CFD 2D code in FORTRAN+OPENMPI. Suppose that the grid
> (all triangles) is partitioned among 8 processes using METIS. Each process
> has a different number of neighboring processes. Suppose each process
Our wiki is open -- it does not require authentication. You can get
the CA for the SSL certificate, if you care, from here (for the entire
Computer Science Department at Indiana University):
http://www.cs.indiana.edu/Facilities/FAQ/Mail/csci.crt
On Aug 13, 2009, at 4:17 PM, Kritiraj S
amjad ali wrote:
I am parallelizing a CFD 2D code in FORTRAN+OPENMPI. Suppose that the grid
(all triangles) is partitioned among 8 processes using METIS. Each process
has a different number of neighboring processes. Suppose each process has n
elements/faces whose data it need
Hi Josh,
I can't access the link you gave. It's a secure link and I think it needs
authentication.
Thanks
Raj
--- On Thu, 8/13/09, Josh Hursey wrote:
> From: Josh Hursey
> Subject: Re: [OMPI users] configure OPENMPI with DMTCP
> To: "Open MPI Users"
> Date: Thursday, August 13, 2009,
Hi, all,
I am parallelizing a CFD 2D code in FORTRAN+OPENMPI. Suppose that the grid
(all triangles) is partitioned among 8 processes using METIS. Each process
has a different number of neighboring processes. Suppose each process has n
elements/faces whose data it needs to send to corresponding ne
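A sketch of the exchange pattern being described, under stated assumptions (nnbr, nbr, cnt and the packed per-neighbour buffers are illustrative names, not the poster's actual data structures): post one IRECV and one ISEND per neighbouring process, then complete everything with a single MPI_WAITALL.

program neighbour_exchange
  use mpi
  implicit none
  integer, parameter :: maxnbr = 8, maxface = 1000
  integer :: nnbr, nbr(maxnbr), cnt(maxnbr)
  double precision :: sendbuf(maxface, maxnbr), recvbuf(maxface, maxnbr)
  integer :: requests(2*maxnbr), statuses(MPI_STATUS_SIZE, 2*maxnbr)
  integer :: i, nreq, ierr, rank

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! In the real code, nnbr, nbr(1:nnbr), cnt(1:nnbr) and sendbuf would come
  ! from the METIS partition; left empty here so the sketch stands alone.
  nnbr = 0

  ! One IRECV and one ISEND per neighbour; a single requests/statuses pair
  ! sized for all of them is enough.
  nreq = 0
  do i = 1, nnbr
     nreq = nreq + 1
     call MPI_IRECV(recvbuf(1,i), cnt(i), MPI_DOUBLE_PRECISION, nbr(i), 0, &
                    MPI_COMM_WORLD, requests(nreq), ierr)
     nreq = nreq + 1
     call MPI_ISEND(sendbuf(1,i), cnt(i), MPI_DOUBLE_PRECISION, nbr(i), 0, &
                    MPI_COMM_WORLD, requests(nreq), ierr)
  end do
  call MPI_WAITALL(nreq, requests, statuses, ierr)

  call MPI_FINALIZE(ierr)
end program neighbour_exchange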
On Aug 12, 2009, at 7:09 PM, Ralph Castain wrote:
Hmmm...well, I'm going to ask our TCP friends for some help here.
Meantime, I do see one thing that stands out. Port 4 is an awfully
low port number that usually sits in the reserved range. I checked
the /etc/services file on my Mac, and
Agreed -- ports 4 and 260 should be in the reserved ports range. Are
you running as root, perchance?
On Aug 12, 2009, at 10:09 PM, Ralph Castain wrote:
Hmmm...well, I'm going to ask our TCP friends for some help here.
Meantime, I do see one thing that stands out. Port 4 is an awfully
low
On Thu, Aug 13, 2009 at 1:51 AM, Eugene Loh wrote:
>>Is this behavior expected? Are there any tunables to get the OpenMPI
>>sockets up near HP-MPI?
>
> First, I want to understand the configuration. It's just a single node. No
> interconnect (InfiniBand or Ethernet or anything). Right?
Yes, th
On Aug 12, 2009, at 3:35 PM, Kritiraj Sajadah wrote:
Hi,
I want to configure OPENMPI to checkpoint MPI applications using
DMTCP. Does anyone know how to specify the path to the DMTCP
application when installing OPENMPI?
I have not experimented with Open MPI using DMTCP. If I understand
Hi David
You are quite correct. IIRC, we didn't bother checking the local_err
because we found it to be unreliable -- all Torque checks is whether the
program exec's. It doesn't report back an error if it segfaults
instantly, for example, or aborts because it fails to find a required
library.
Just a couple of data points:
1. So we don't confuse folks: there is nothing legal behind the space in
"Open MPI". Heck, most of us developers drop the space in our
discussions. It was put in there to avoid confusion with OpenMP. While
the more marketing-oriented worry about it, the rest of the
Hi,
1.
Mellanox has newer firmware for those HCAs:
http://www.mellanox.com/content/pages.php?pg=firmware_table_IH3Lx
I am not sure if it will help, but newer firmware usually has some bug fixes.
2.
Try to disable leave_pinned during the run. It's on by default in 1.3.3.
Lenny.
On Thu, Aug 13, 2009 at 5:1
I was away on vacation for two weeks and therefore missed most of this
thread, but I'm quite interested.
Michael Di Domenico wrote:
>I'm not sure I understand what's actually happened here. I'm running
>IMB on an HP superdome, just comparing the PingPong benchmark
>
>HP-MPI v2.3
>Max ~ 700-800
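For reference, a bare-bones pingpong along the lines of what IMB measures (a sketch only, not IMB itself; the 1 MB message size, repetition count, and the simple bandwidth formula are illustrative assumptions). Run it on the single node with two ranks, e.g. mpirun -np 2 ./pingpong.

program pingpong
  use mpi
  implicit none
  integer, parameter :: nbytes = 1048576, reps = 100
  character(len=1) :: buf(nbytes)
  integer :: ierr, rank, i, status(MPI_STATUS_SIZE)
  double precision :: t0, t1

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  buf = 'x'

  ! Rank 0 and rank 1 bounce the buffer back and forth "reps" times.
  t0 = MPI_WTIME()
  do i = 1, reps
     if (rank == 0) then
        call MPI_SEND(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, ierr)
        call MPI_RECV(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, status, ierr)
     else if (rank == 1) then
        call MPI_RECV(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, status, ierr)
        call MPI_SEND(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, ierr)
     end if
  end do
  t1 = MPI_WTIME()

  ! Two transfers of nbytes per iteration; report an approximate MB/s figure.
  if (rank == 0) print *, 'approx MB/s:', 2.0d0 * nbytes * reps / (t1 - t0) / 1.0d6
  call MPI_FINALIZE(ierr)
end program pingpong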
Maybe this should go to the devel list, but I'll start here.
In tracking the way the PBS tm API propagates error information
back to clients, I noticed that Open MPI is making an incorrect
assumption. (I'm looking at 1.3.2.) The relevant code in
orte/mca/plm/tm/plm_tm_module.c is:
/* TM poll f