You are absolutely right, sir -- thanks!
I have committed the fix to the trunk and filed a ticket to get it
moved over to the upcoming v1.2.4 release.
On Jul 23, 2007, at 3:18 PM, Jeff Dusenberry wrote:
I'm trying to use MPI_TYPE_MATCH_SIZE (Fortran interface) and no matter
what I give it, it always fails with MPI_ERR_ARG.
On Jul 23, 2007, at 6:43 AM, Biagio Cosenza wrote:
I'm working on a parallel real-time renderer: an embarrassingly
parallel problem where latency is the main barrier to high performance.
Two observations:
1) I did a simple "ping-pong" test (the master does a Bcast + an
IRecv for each node + a Waitall), similar to the actual renderer workload.
On Jul 23, 2007, at 5:11 PM, Bert Wesarg wrote:
I'm not sure what these command line switches do...? "-openmpi" is
not a switch that our configure supports.
No, he is trying to configure the application "Amber9", so this is not
the Open MPI configure.
Ah, I misread this completely.
I'm unfort
Hi,
Running over conventional TCP/IP everything is safe AFAICS - all processes
will be killed on all involved nodes. The problem arises with OFED, with
which we also see this behavior using MVAPICH.
Unfortunately we have only a limited number of nodes with InfiniBand,
and hence limited time to test and develop
> On Jul 23, 2007, at 4:31 PM, Francesco Pietra wrote:
>
>> openmpi-1.2.3 compiled on Debian Linux amd64 etch with
>>
>> ./configure CC=/opt/intel/cce/9.1.042/bin/icc
>> CXX=/opt/intel/cce/9.1.042/bin/icpc F77=/opt/intel/fce/9.1.036/bin/ifort
>> FC=/opt/intel/fce/9.1.036/bin/ifort --with-libnuma=/usr/lib
It *should* work. We stopped developing for the Cisco (mVAPI) stack
a while ago, but as far as we know, it still works fine. See:
http://www.open-mpi.org/faq/?category=openfabrics#vapi-support
That being said, your approach of "if it ain't broke, don't fix it" is
certainly quite reasonable.
Hmmm... compilation SEEMED to go OK with the following ./configure...
./configure --prefix=/nfsutil/openmpi-1.2.3
--with-mvapi=/usr/local/topspin/ CC=icc CXX=icpc F77=ifort FC=ifort
CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64
And the following looks promising...
./ompi_info | grep mvapi
On Jul 23, 2007, at 4:31 PM, Francesco Pietra wrote:
openmpi-1.2.3 compiled on Debian Linux amd64 etch with
./configure CC=/opt/intel/cce/9.1.042/bin/icc
CXX=/opt/intel/cce/9.1.042/bin/icpc F77=/opt/intel/fce/9.1.036/bin/ifort
FC=/opt/intel/fce/9.1.036/bin/ifort --with-libnuma=/usr/lib
ompi_info |grep libnuma
openmpi-1.2.3 compiled on Debian Linux amd64 etch with
./configure CC=/opt/intel/cce/9.1.042/bin/icc
CXX=/opt/intel/cce/9.1.042/bin/icpc F77=/opt/intel/fce/9.1.036/bin/ifort
FC=/opt/intel/fce/9.1.036/bin/ifort --with-libnuma=/usr/lib
ompi_info |grep libnuma
ompi_info |grep maffinity
reported O
Hi Henk,
SLIM H.A. wrote:
Dear Pak Lui
I can delete the (sge) job with qdel -f such that it disappears from the
job list but the application processes keep running, including the
shepherds. I have to kill them with -15
For some reason the kill -15 does not reach mpirun. (We use such a
parameter to mpirun on our myrinet mx
I'm trying to use MPI_TYPE_MATCH_SIZE (Fortran interface) and no matter
what I give it, it always fails with MPI_ERR_ARG.
The last line of code in type_match_size_f.c seems to be the source of
the problem, as it always calls the error handler:
(void)OMPI_ERRHANDLER_INVOKE(MPI_COMM_WORLD, MPI_
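For comparison, the C binding of the same call can be exercised with a
minimal sketch along these lines (the REAL typeclass and the 8-byte size
here are just illustrative values, not taken from the report above):

/* Minimal sketch: ask MPI for a datatype of typeclass REAL whose size
 * matches 8 bytes (e.g. a Fortran REAL*8). Illustrative values only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Datatype dtype;
    int err = MPI_Type_match_size(MPI_TYPECLASS_REAL, 8, &dtype);
    if (err != MPI_SUCCESS)
        fprintf(stderr, "MPI_Type_match_size failed\n");
    else
        printf("matched a REAL datatype of size 8 bytes\n");

    MPI_Finalize();
    return 0;
}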
Dear Pak Lui
I can delete the (sge) job with qdel -f such that it disappears from the
job list but the application processes keep running, including the
shepherds. I have to kill them with -15
For some reason the kill -15 does not reach mpirun. (We use such a
parameter to mpirun on our myrinet mx
Thanks, Brian. That did the trick.
-Ken
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Brian Barrett
> Sent: Thursday, July 19, 2007 3:39 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_File_set_view rejecting subarray v
Yes...it would indeed.
On 7/23/07 9:03 AM, "Kelley, Sean" wrote:
> Would this logic be in the bproc pls component?
> Sean
>
>
> From: users-boun...@open-mpi.org on behalf of Ralph H Castain
> Sent: Mon 7/23/2007 9:18 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] orterun --bynode/--byslot problem
Would this logic be in the bproc pls component?
Sean
From: users-boun...@open-mpi.org on behalf of Ralph H Castain
Sent: Mon 7/23/2007 9:18 AM
To: Open MPI Users
Subject: Re: [OMPI users] orterun --bynode/--byslot problem
No, byslot appears to be working just fine on our bproc clusters (it is the
default mode).
Good morning all,
I have been very impressed so far with Open MPI on one of our smaller
clusters running GNU compilers and Gig-E interconnects, so I am
considering a build on our large cluster. The potential problem is that
the compilers are Intel 8.1 versions and the InfiniBand is supported by
Hi Henk,
The SGE script should not require any extra parameter. The qdel command
should send the kill signal to mpirun and also remove the SGE-allocated
tmp directory (something like /tmp/174.1.all.q/), which contains the
OMPI session dir for the running job, and in turn would cause orted a
> > From: Jeff Squyres
> >
> > Can you be a bit more specific than "it dies"? Are you talking about
> > mpif90/mpif77, or your app?
>
> Sorry, stupid me. When executing mpif90 or mpif77 I get a segfault and it
> doesn't compile. I've tried both with and without input (i.e., giving it
> something t
No, byslot appears to be working just fine on our bproc clusters (it is the
default mode). As you probably know, bproc is a little strange in how we
launch - we have to launch the procs in "waves" that correspond to the
number of procs on a node.
In other words, the first "wave" launches a proc on
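As a rough illustration of that wave scheme (a toy sketch only, not Open
MPI's actual bproc launcher; the node and slot counts are invented):

/* Toy model of launching in "waves": wave k starts the (k+1)-th local
 * process on every node that still has processes left to launch. */
#include <stdio.h>

int main(void)
{
    int nnodes = 3;
    int procs_per_node[] = { 4, 2, 3 };   /* invented slot counts */

    int max_procs = 0;
    for (int n = 0; n < nnodes; n++)
        if (procs_per_node[n] > max_procs)
            max_procs = procs_per_node[n];

    for (int wave = 0; wave < max_procs; wave++) {
        printf("wave %d:", wave);
        for (int n = 0; n < nnodes; n++)
            if (wave < procs_per_node[n])
                printf(" start local proc %d on node %d;", wave, n);
        printf("\n");
    }
    return 0;
}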
Hi,
We are experiencing a problem with process allocation on our Open MPI
cluster. We are using Scyld 4.1 (BPROC), the OFED 1.2 Topspin InfiniBand
drivers, and Open MPI 1.2.3 + patch (to run processes on the head node). The
hardware consists of a head node and N blades on a private Ethernet
Hi
I am in the process of moving a parallel program from our old 32-bit
(Xeon @ 2.8 GHz) Linux cluster to a new EM64T-based (Intel Xeon 5160 @
3.00 GHz) Linux cluster.
The OS on the old cluster is Red Hat 9; the new cluster runs Fedora 7.
I have installed the Intel Fortran compiler version
Hello,
I'm working on a parallel real-time renderer: an embarrassingly parallel
problem where latency is the main barrier to high performance.
Two observations:
1) I did a simple "ping-pong" test (the master does a Bcast + an IRecv for
each node + a Waitall), similar to the actual renderer workload. Usin
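For reference, a minimal C sketch of that Bcast + Irecv-per-node + Waitall
pattern (the tag, payload type, and reply contents are made up for
illustration and are not taken from the actual renderer):

/* Master broadcasts a request, posts one non-blocking receive per worker,
 * then waits for all replies; workers answer with a single int. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int request_msg = 0;   /* payload broadcast by the master */
    const int TAG = 42;    /* arbitrary tag for the replies   */

    MPI_Bcast(&request_msg, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        int nworkers = size - 1;
        int *replies = malloc(nworkers * sizeof(int));
        MPI_Request *reqs = malloc(nworkers * sizeof(MPI_Request));

        for (int i = 0; i < nworkers; i++)   /* one Irecv per worker node */
            MPI_Irecv(&replies[i], 1, MPI_INT, i + 1, TAG,
                      MPI_COMM_WORLD, &reqs[i]);

        MPI_Waitall(nworkers, reqs, MPI_STATUSES_IGNORE);
        free(replies);
        free(reqs);
    } else {
        int reply = rank;   /* stand-in for rendered data */
        MPI_Send(&reply, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}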
Call for Participation: EuroPVM/MPI'07
http://www.pvmmpi07.org
Please join us for the 14th European PVM/MPI Users' Group
conference, which will be held in Paris, France from
September 30 to October 3. This conference is a forum for
the discussion and presentation of recent advances
I am using Open MPI 1.2.3 with SGE 6.0u7 over InfiniBand (OFED 1.2),
following the recommendation in the Open MPI FAQ:
http://www.open-mpi.org/faq/?category=running#run-n1ge-or-sge
The job runs, but when the user tries to delete the job with the qdel
command, this fails. Does the mpirun command
mpi