that
shouldn't impact OMPI in a Torque environment).
In any case, I went from version 12.1.0.233 (Build 20110811) to 12.1.4.319
(Build 20120410), and rebuilt Open MPI 1.6. After that, all tests worked,
for any number of tasks.
--
Edmund Sumbar
University of Alberta
+1 780 492 9360
> ...compiled with Torque support?
> If not, I wonder if clauses like '-bynode' would work at all.
> Jeff may correct me if I am wrong, but if your
> OpenMPI lacks Torque support,
> you may need to pass to mpirun
> the $PBS_NODEFILE as your hostfile.
>
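The suggestion above can be sketched as a job-script fragment. This is a hedged illustration, not a verified recipe: the node names are fake, and the `mpirun` invocation is only echoed so the sketch runs outside a Torque job (inside a job, Torque sets `$PBS_NODEFILE` for you).

```shell
# Sketch: if Open MPI lacks Torque support, pass the Torque node list
# explicitly via --hostfile. We fake PBS_NODEFILE here (illustrative names)
# so the sketch can run outside a real job.
PBS_NODEFILE=${PBS_NODEFILE:-/tmp/fake_nodefile.$$}
printf 'node01\nnode01\nnode02\nnode02\n' > "$PBS_NODEFILE"

# One rank per line in the nodefile, as Torque allocates slots.
NP=$(wc -l < "$PBS_NODEFILE" | tr -d ' ')

# In a real job you would drop the echo and run this directly.
echo "would run: mpirun --hostfile $PBS_NODEFILE -np $NP ./a.out"
```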
the procs=24 case failed,
but then it worked a few seconds later, with the same list of allocated
nodes. So there's definitely something amiss with the cluster, although I
wouldn't know where to start investigating. Perhaps there is a
pre-installed OMPI somewhere that's interfering, but I'm doubtful.
By the way, thanks for all the support.
962,0],8] ORTE_ERROR_LOG: Data unpack would read past
end of buffer in file base/odls_base_default_fns.c at line 2342
--
mpiexec noticed that process rank 77 with PID 5142 on node cl2n005 exited
on signal 11 (Segmentation fault).
--
fault).
--
On Thu, May 31, 2012 at 2:54 PM, Jeff Squyres wrote:
> This type of error usually means that you are inadvertently mixing
> versions of Open MPI (e.g., version A.B.C on one node and D.E.F on another
> node).
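One way to check for the mixed-version situation Jeff describes is to compare the Open MPI version reported on every allocated node. The sketch below is hedged: the `ssh` loop is shown only in a comment (it needs a live allocation), and the uniformity check is exercised with illustrative version strings.

```shell
# Sketch: succeed only if every version string passed in is identical.
check_uniform() {
    first=$1
    for v in "$@"; do
        [ "$v" = "$first" ] || return 1
    done
    return 0
}

# In a real Torque job you might collect the strings like this (illustrative):
#   for h in $(sort -u "$PBS_NODEFILE"); do
#       ssh "$h" 'mpirun --version 2>&1 | head -1'
#   done

check_uniform "1.6.0" "1.6.0" "1.6.0" && echo "uniform" || echo "mixed"
```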
v1.6)
MCA grpcomm: hier (MCA v2.0, API v2.0, Component v1.6)
MCA notifier: command (MCA v2.0, API v1.0, Component v1.6)
MCA notifier: syslog (MCA v2.0, API v1.0, Component v1.6)
> ...line 1: syntax error: unexpected end of file
> bash: error importing function definition for `module'
>
The interface to MPI_Bcast does not specify an assumed-shape-array dummy
first argument. Consequently, as David points out, the compiler makes a
contiguous temporary copy of the array section to pass to the routine. If
using ifort, try the "-check arg_temp_created" compiler option to verify
creation of the temporary.
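A minimal Fortran sketch of the situation (untested here; it assumes the mpi module and an MPI launch environment):

```fortran
program bcast_section
  use mpi
  implicit none
  integer :: ierr, a(10)
  call MPI_Init(ierr)
  ! a(1:10:2) is a strided, non-contiguous section. Because MPI_Bcast's
  ! dummy argument is not assumed-shape, the compiler builds a contiguous
  ! temporary copy and passes that instead. Compiling with ifort's
  ! "-check arg_temp_created" flags the copy at run time.
  call MPI_Bcast(a(1:10:2), 5, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
  call MPI_Finalize(ierr)
end program bcast_section
```

Note the copy-in/copy-out also means the buffer MPI sees is not the original array, which matters for non-blocking calls.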
So, what exactly does mpirun call which might trigger this error?
This seems to be a known problem for gridengine...
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=238562
MPI_SHORT and the other data types are actually macros that resolve to
MPI_Datatype, which is a pointer to a struct.
[...]
On Wed, Jul 30, 2008 at 01:15:54PM -0700, Scott Beardsley wrote:
> Brock Palen wrote:
> > On all MPI's I have always used there was only MPI
> >
> > use mpi;
>
> Please excuse my admittedly gross ignorance of all things Fortran but
> why does "include 'mpif.h'" work but "use mpi" does not? When
I'm trying to run skampi-5.0.1-r0191 under PBS
over IB with the command line
mpirun -np 2 ./skampi -i coll.ski -o coll_ib.sko
The pt2pt and mmisc tests run to completion.
The coll and onesided tests, on the other hand,
start to produce output but then seem to hang.
Actually, the CPUs appear to
John Borchardt wrote:
I was hoping someone could help me with the following situation. I have a
program which has no MPI support that I'd like to run "in parallel" by
running a portion of my total task on N CPUs of a PBS/Maui/Open-MPI
cluster. (The algorithm is such that there is no real need f
sebastien.he...@external.thalesgroup.com wrote:
Hi,
Is there any way to reduce the shared RAM used by MPI?
For a very simple application, I have about 500 MB of shared RAM.
Try

mpirun --mca mpool_sm_size <size> -np <nprocs> ./a.out