Jeff,
Are you looking for a pkg of the latest official release (1.10.1 at this time),
or a pkg with the latest snapshots (and in that case, which branch: v1.10? v2.x?
master?)
Cheers,
Gilles
Jeff Hammond wrote:
>I set up Travis CI support for ARMCI-MPI but the available version in whatever
>Ubuntu
Thanks for the answer.
Isn't it the same thing with 2-dim arrays?
I mean m1.length*m1.length, for example:
MPI.COMM_WORLD.send(m1, m1.length*m1.length, MPI.INT, 1, tag);
But I get this exception: ArrayIndexOutOfBoundsException.
What should I write to avoid this exception?
Best regards
Maybe not...
I do not remember how Java treats 2-dim arrays (e.g. a matrix vs. an array of
arrays).
As a first step, you can try
int[][] m = new int[2][3];
and print m.length;
it could be 2, 3 or 6 ...
Bottom line: you might have to use one send per row, or use a datatype, or
pack and send (see the sketch after this message).
Cheers,
Gilles
On Thu
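For reference, a minimal sketch of the per-row approach suggested above, assuming
the Open MPI Java bindings (import mpi.*); the class name, tag value and matrix
dimensions are illustrative, not from the original thread. In Java, int[][] is an
array of arrays, so m.length is the number of rows (2 for new int[2][3]) and each
row m[i] is a contiguous one-dimensional int[] that can be passed to send directly.

import mpi.*;

public class SendMatrix {
    public static void main(String[] args) throws MPIException {
        args = MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int rows = 2, cols = 3, tag = 99;    // illustrative sizes and tag
        int[][] m = new int[rows][cols];     // m.length == rows, m[i].length == cols

        if (rank == 0) {
            // one send per row: each row is a 1-dim int[] of length cols
            for (int i = 0; i < rows; i++) {
                MPI.COMM_WORLD.send(m[i], cols, MPI.INT, 1, tag);
            }
        } else if (rank == 1) {
            for (int i = 0; i < rows; i++) {
                MPI.COMM_WORLD.recv(m[i], cols, MPI.INT, 0, tag);
            }
        }
        MPI.Finalize();
    }
}

As far as I know, the bindings expect a one-dimensional array of a primitive type
(or a Buffer) as the message buffer, so passing the int[][] itself with count
m1.length*m1.length does not work, which would be consistent with the
ArrayIndexOutOfBoundsException reported above.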
Inquiring about how btl_openib_receive_queues actually gets its default
setting, since what I am seeing is not jibing with the documentation. We are using
Open MPI 1.6.5, but I gather the version is moot.
Below is from ompi_info:
$ ompi_info --all | grep btl_openib_receive
MCA btl:
Hello Open MPI community,
We have a smaller Linux GPU cluster here at Boise State University which is
running the following:
CentOS 6.5
Bright Cluster Manager 6.1
PBS Pro 11.2
Open MPI versions:
1.6.5
1.8.4
1.8.5
On our cluster, we allow th
Fabian Wein writes:
>># hwloc-bind node1:1 hwloc-ps | grep hwloc
>>13425 NUMANode:1 hwloc-ps
>
> I don't understand what you mean:
>
> /opt/hwloc-1.11.1/bin/hwloc-bind
> /opt/hwloc-1.11.1/bin/hwloc-bind: nothing to do!
>
> /opt/hwloc-1.11.1/bin/hwloc-bind node1:1
> /opt/hw
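For context (an editorial note, not from the thread): in its basic usage hwloc-bind
takes a location and a command, binds itself to that location and then executes the
command under that binding; run without a command to execute, it reports that there
is nothing to do, as the error shown above illustrates. A sketch combining the two
quoted invocations, reusing the location syntax from the first quote:

/opt/hwloc-1.11.1/bin/hwloc-bind node1:1 hwloc-ps

This should run hwloc-ps bound to NUMA node 1, matching the "13425 NUMANode:1
hwloc-ps" line quoted at the top.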
Nick Papior writes:
> This is what I do to successfully get the best performance for my
> application using OpenMP and OpenMPI:
>
> (note this is for 8 cores per socket)
>
> mpirun -x OMP_PROC_BIND=true --report-bindings -x OMP_NUM_THREADS=8
> --map-by ppr:1:socket:pe=8
>
> It assigns 8 cores pe
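For readers following along, a hedged sketch of a complete command line assembled
from the flags quoted above (the executable name ./hybrid_app is a placeholder, and
the numbers assume 8 cores per socket as stated):

mpirun -x OMP_PROC_BIND=true -x OMP_NUM_THREADS=8 --map-by ppr:1:socket:pe=8 --report-bindings ./hybrid_app

Here ppr:1:socket launches one MPI process per socket, pe=8 binds each process to
8 processing elements (cores), and OMP_PROC_BIND=true asks the OpenMP runtime to
bind its 8 threads within that 8-core mask.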
2015-11-05 18:51 GMT+01:00 Dave Love :
> Nick Papior writes:
>
> > This is what I do to successfully get the best performance for my
> > application using OpenMP and OpenMPI:
> >
> > (note this is for 8 cores per socket)
> >
> > mpirun -x OMP_PROC_BIND=true --report-bindings -x OMP_NUM_THREADS=8
Jason Cook writes:
>- 2. Since we allow sharing of the compute nodes among multiple
>jobs, I noticed that if users utilize the bind-to-core option, Open MPI starts
>with CPU core 0 and works its way up sequentially, as stated in the man pages
>for this option. Since we do allow sharing
Below is from ompi_info:
$ ompi_info --all | grep btl_openib_receive
MCA btl: parameter "btl_openib_receive_queues" (current value: , data source: default value)
This tuning does make sense to me.
#receive_queues = P,128,256,192,128:S,65536,256,192,128
Most likely it w
I did some code-digging and I found the answer.
If the MCA parameter btl_openib_receive_queues is not set on the mpirun
command line and not specified in
MPI_HOME/share/openmpi/mca-btl-openib-device-params.ini (via the receive_queues
parameter), then Open MPI derives the default setting from the
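As an editorial aside (not part of the original message): if you want the value set
explicitly rather than derived, both override points mentioned above can be used.
The queue specification below is simply the one quoted earlier in this thread, not
a recommendation, and ./app is a placeholder executable.

# on the mpirun command line:
mpirun --mca btl_openib_receive_queues P,128,256,192,128:S,65536,256,192,128 ./app

# or per device, via the receive_queues key in
# MPI_HOME/share/openmpi/mca-btl-openib-device-params.ini:
receive_queues = P,128,256,192,128:S,65536,256,192,128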