Hello,
I'm having problems running Open MPI jobs under PBS Pro 10.2. I've configured
and built Open MPI 1.4.1 with the Intel 11.1 compiler on Linux with --with-tm
support, and the build runs fine. I've also built with static libraries, per the
FAQ suggestion, since libpbs is static. However, [...]
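For reference, a configure invocation matching that description would look
roughly like the sketch below; the install prefix and the PBS path are
placeholders and need to point at the local installs, in particular wherever
tm.h and libpbs actually live:

    # sketch only: --prefix and --with-tm paths are placeholders
    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        --prefix=/opt/openmpi-1.4.1 \
        --with-tm=/opt/pbs/default \
        --enable-static --disable-shared
    make all install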
[...]
Ralph
On Feb 12, 2010, at 8:50 AM, Repsher, Stephen J wrote:
> Hello,
> I'm having problems running Open MPI jobs under PBS Pro 10.2. I've configured
> and built Open MPI 1.4.1 with the Intel 11.1 compiler on Linux with --with-tm
> support, and the build runs fine. I've also built with static libraries, per
> the FAQ suggestion, since libpbs is static. [...]
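A quick way to confirm that tm support actually made it into such a build is to
look for the tm components in the ompi_info output, for example:

    # the plm (launcher) and ras (allocation) frameworks should each list a tm component
    ompi_info | grep tm

If nothing shows up there, mpirun will typically fall back to ssh/rsh launching
instead of using the PBS allocation.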
[...] PBS Pro has done something unusual.
On Feb 12, 2010, at 1:41 PM, Repsher, Stephen J wrote:
> Yes, the failure seems to be in mpirun; it never even gets to my application.
> The prototype for tm_init looks like this:
>     int tm_init(void *info, struct tm_roots *roots);
> where the struct has 6 elements: 2 x tm_task_id [...]
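For reference, the corresponding declarations in a typical PBS/TORQUE tm.h look
roughly like this (a sketch; the field order and the exact typedef can differ
between PBS Pro releases):

    typedef unsigned int tm_task_id;    /* opaque task handle */

    struct tm_roots {
        tm_task_id  tm_parent;      /* task id of the task that spawned us  */
        tm_task_id  tm_me;          /* task id of this (calling) task       */
        int         tm_nnodes;      /* number of nodes allocated to the job */
        int         tm_ntasks;      /* number of tasks currently running    */
        int         tm_taskpoolid;  /* task pool identifier                 */
        tm_task_id *tm_tasklist;    /* list of task ids in the pool         */
    };

    int tm_init(void *info, struct tm_roots *roots);

That is, two tm_task_id fields, three ints, and a tm_task_id pointer; a mismatch
between this layout and what the PBS Pro 10.2 library expects would be enough to
make tm_init fail inside mpirun.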
Hello again,
Hopefully this is an easier question.
My cluster uses InfiniBand interconnects (Mellanox InfiniHost III and some
ConnectX). I'm seeing terrible and sporadic latency (on the order of ~1000
microseconds) as measured by the subounce code
(http://sourceforge.net/projects/subounce/), but [...]
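One way to cross-check a number like that is a plain two-node ping-pong pinned
to the InfiniBand BTL, independent of subounce; for example, with osu_latency
from the OSU micro-benchmarks (the host names below are placeholders):

    # restrict Open MPI to the openib BTL (plus self) so an accidental
    # fallback to TCP shows up as an error rather than as inflated latency
    mpirun -np 2 --host node01,node02 \
        --mca btl openib,self ./osu_latency

InfiniHost III / ConnectX hardware should come in around a few microseconds for
small messages on a test like that, not ~1000.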
Well, the "good" news is I can end your debate over binding here... setting
mpi_paffinity_alone 1 did nothing. (And personally, as a user, I don't care what
the default is so long as the info is readily apparent in the main docs... and I
did see the FAQs on it.)
It did lead me to try another parameter [...]
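For completeness, the binding experiment above amounts to something like the
following (the process count and executable are placeholders; the parameter can
also be set in $HOME/.openmpi/mca-params.conf instead of on the command line):

    # ask Open MPI to bind each rank to a processor for the whole run
    mpirun -np 16 --mca mpi_paffinity_alone 1 ./my_app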
> [...]ing that it only seems to
> be doing 10 iterations on each size. For small sizes, this might well not be
> enough to be accurate. Have you tried increasing it? Or using a different
> benchmark app, such as NetPIPE, osu_latency, etc.?
>
> On Feb 16, 2010, at 8:4 [...]
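On the iteration-count point: with only 10 repetitions per message size, a
single scheduling hiccup or a cold connection dominates the small-message
numbers. A minimal ping-pong along the lines below (a sketch, not the subounce
code) shows the usual approach of discarding warm-up rounds and averaging over
many timed iterations:

    /* pingpong.c - minimal MPI latency sketch.
     * Build: mpicc pingpong.c -o pingpong    Run: mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    #define WARMUP 100      /* un-timed rounds to settle connections/caches  */
    #define ITERS  10000    /* timed rounds; 10 is far too few at the us scale */
    #define MSG    8        /* message size in bytes */

    int main(int argc, char **argv)
    {
        char buf[MSG] = {0};
        int rank, i;
        double t0 = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 0; i < WARMUP + ITERS; i++) {
            if (i == WARMUP) {              /* start the clock after warm-up */
                MPI_Barrier(MPI_COMM_WORLD);
                t0 = MPI_Wtime();
            }
            if (rank == 0) {
                MPI_Send(buf, MSG, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            double elapsed = MPI_Wtime() - t0;
            /* one-way latency = average round-trip time / 2 */
            printf("avg one-way latency: %.2f us\n",
                   elapsed / ITERS / 2.0 * 1e6);
        }

        MPI_Finalize();
        return 0;
    }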