On Fri, 04 Nov 2005 16:45:59 -0700, Troy Telford wrote:
The 'globalop' test was a dog on 4 nodes (some 360-odd times slower on
mvapi than on mx); it'll take a while to verify whether it tickles the
65-process issue or not.
Globalop runs fine on 100 processes.
--
Troy Telford
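For poking at this without the full Presta suite, here is a minimal sketch of a global-reduction timing loop in the same spirit as globalop (not the actual benchmark; ITERS and COUNT are arbitrary choices here):

/* allreduce_time.c -- minimal global-reduction timing sketch.
 * Not the real 'globalop' test; just enough to compare transports. */
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000
#define COUNT 1024                /* doubles per reduction */

int main(int argc, char **argv)
{
    double in[COUNT], out[COUNT], t0, t1;
    int i, rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (i = 0; i < COUNT; i++)
        in[i] = (double)rank;

    MPI_Barrier(MPI_COMM_WORLD);              /* sync before timing */
    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++)
        MPI_Allreduce(in, out, COUNT, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d procs: %g us per allreduce\n",
               nprocs, (t1 - t0) / ITERS * 1.0e6);

    MPI_Finalize();
    return 0;
}

Run once per transport by selecting the btl, e.g. 'mpirun --mca btl mvapi,self -np 4 ./allreduce_time' versus '--mca btl mx,self'; a 360x gap in the per-call times should be hard to miss.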
(Using svn 'trunk' revision 7927 of Open MPI):
I've found an interesting issue with Open MPI and the mvapi btl mca: most
of the benchmarks I've tried (HPL, HPCC, Presta, IMB) do not seem to run
properly when the number of processes is sufficiently large (the barrier
seems to be at 65 processes).
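A reproducer for bisecting that threshold doesn't need a benchmark at all; init, one barrier, one print should do. A sketch, with a hypothetical file name:

/* barrier_test.c -- minimal sketch for probing the 65-process hang:
 * if this completes at -np 64 but hangs at -np 65 over the mvapi btl,
 * the problem sits below the benchmark layer. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0)
        printf("barrier completed with %d processes\n", nprocs);

    MPI_Finalize();
    return 0;
}

Something like 'mpirun --mca btl mvapi,self -np 64 ./barrier_test', then the same again with -np 65, should show whether the boundary really sits at 65.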
Hi,
I have been using Open MPI in conjunction with PETSc on OS X 10.4, and
have been having trouble with undefined symbols when running the PETSc
tests:
/usr/bin/ld: Undefined symbols:
_pmpi_wtick__
_pmpi_wtime__
After playing around with things for a while, I realized that these
undefined symbols
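The two missing symbols look like g77-style Fortran manglings of PMPI_WTICK and PMPI_WTIME: lowercase, with a trailing double underscore because the names already contain an underscore, plus the leading underscore the OS X linker adds to every C-visible symbol. Assuming that diagnosis is right, a throwaway shim that forwards the mangled names to the C profiling entry points should at least confirm it (a sketch, not a proper fix; configuring Open MPI and PETSc with the same Fortran compiler settings would be the cleaner cure):

/* pmpi_shim.c -- hypothetical workaround: satisfy the Fortran-mangled
 * PMPI timer symbols by forwarding to the C profiling interface.
 * Build with 'mpicc -c pmpi_shim.c' and add pmpi_shim.o to the link. */
#include <mpi.h>

double pmpi_wtick__(void) { return PMPI_Wtick(); }
double pmpi_wtime__(void) { return PMPI_Wtime(); }

If the link then succeeds and the timers return sensible values, the mismatch is in the Fortran name-mangling settings rather than in PETSc itself.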
On Nov 4, 2005, at 3:09 PM (GMT +2), Jeff Squyres wrote:
> Which "pool" are you referring to? The number of nodes, size of
> memory, etc.?
Nodes (and processors).
> Open MPI jobs have been run on a few thousand nodes (2k, I believe?) on
> Lawrence Livermore machines. We've still got some scalability work to do.

On Nov 3, 2005, at 5:05 PM, Sebastian Forsman wrote:
> Are there any "hard coded" limits in the size of an Open MPI pool?
Which "pool" are you referring to? The number of nodes, size of
memory, etc.?
> How about the maximum number of nodes running a single job?
Open MPI jobs have been run on a few thousand nodes (2k, I believe?) on
Lawrence Livermore machines. We've still got some scalability work to do.