FWIW: I don’t think this actually has anything to do with the number of procs you are
trying to run. Instead, I expect it has to do with confusion over how many
cores it can bind across. When you tell it --use-hwthread-cpus, you are asking
us to map processes to hwthreads, and not cores. I don’t know wh
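For illustration, the contrast being described is roughly between invocations like these (the executable name is a placeholder, not taken from the thread; both options exist in mpirun):

mpirun --use-hwthread-cpus -np 132 --report-bindings ./hello   # map/bind units are hardware threads
mpirun --map-by core -np 132 --report-bindings ./hello         # map/bind units are cores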
I'm having a strange problem with OpenMPI 1.8.6. If I run my OpenMPI test code
(compiled against OpenMPI 1.8.6 libraries) on fewer than 131 slots I get no issues.
Anything over 131 errors out:
mpirun -np 132 -report-bindings --prefix /hpc/apps/mpi/openmpi/1.8.6/
--hostfile hostfile-single --mca btl_tcp_if_in
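(The command above is cut off in the archive. For reference only, a complete invocation of that shape would look roughly like the following; the interface name and executable are my own placeholders, not values from the original post.)

mpirun -np 132 -report-bindings --prefix /hpc/apps/mpi/openmpi/1.8.6/ \
    --hostfile hostfile-single --mca btl_tcp_if_include eth0 ./hello   # eth0 is a placeholder interface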
Thanks, will try it on Sunday (won't have access to the system till then)
On 06/18/2015 04:36 PM, Gilles Gouaillardet wrote:
This is really odd...
You can run
ompi_info --all
and search for coll_ml_priority; it will display the current value and its origin
(e.g. default, system-wide config, user config, cli, environment variable).
Cheers,
Gilles
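(One concrete way to do that search; piping through grep is my addition, not something spelled out above:)

ompi_info --all | grep coll_ml_priority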
On Thursday, June 18, 2015, Daniel Letai wrote:
No, that's the issue.
I had to disable it to get things working.
That's why I included my config settings - I couldn't figure out which
option had enabled it, so that I could remove it from the configuration...
On 06/18/2015 02:43 PM, Gilles Gouaillardet wrote:
Daniel,
The ML module is not ready for production and is disabled by default.
Did you explicitly enable this module?
If so, I encourage you to disable it.
Cheers,
Gilles
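(In practice, excluding the coll/ml component is usually done either on the command line or in the per-user MCA parameter file; the run command below is illustrative and the file path is the standard default location, so verify it for your install.)

mpirun --mca coll ^ml -np 132 ./hello                    # exclude the ml collective component for one run
echo "coll = ^ml" >> $HOME/.openmpi/mca-params.conf      # or persist it in the user-level MCA config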
On Thursday, June 18, 2015, Daniel Letai wrote:
given a simple hello.c:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    int size, rank, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &len);
    /* the archived post is cut off here; a minimal completion of the usual pattern follows */
    printf("Hello from rank %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}
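(Not part of the original post: a typical way to build and launch the example above, with the process count purely illustrative.)

mpicc hello.c -o hello
mpirun -np 4 ./hello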