Hi,
For development purposes, I built and installed Open MPI 5.0.2 on my
workstation. As I understand it, to use OpenSHMEM, one has to build
with UCX support. I configured with
./configure --build=x86_64-linux-gnu
--prefix=/usr/local/openmpi/5.0.2_gcc-12.2.0 --with-ucx
--with-pmix=internal --with-li
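To sanity-check such a build, a minimal OpenSHMEM program might look
like the sketch below (assuming the oshcc/oshrun wrappers from this
install are on PATH; file and program names are placeholders):

#include <stdio.h>
#include <shmem.h>

int main(void) {
    shmem_init();                 /* start the OpenSHMEM runtime */
    int me = shmem_my_pe();       /* this PE's index */
    int npes = shmem_n_pes();     /* total number of PEs */
    printf("Hello from PE %d of %d\n", me, npes);
    shmem_finalize();
    return 0;
}

Compiled with "oshcc hello.c -o hello" and launched with "oshrun -np 4
./hello". If UCX was not detected at configure time, the SHMEM layer
generally fails to find a usable transport at startup.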
Hi all,
There are several bug reports on 4.1.x describing MPI_Win_create failing
for various architectures. I too am seeing this with 4.1.0-10, the
version packaged for Debian 11, on a standard workstation where at
least the vader, tcp, self, and sm BTLs are identified (not sure which
are being used
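For anyone reproducing, a minimal window creation of the kind reported
to fail might look like this sketch (buffer size is arbitrary):

#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    const int n = 1024;
    std::vector<int> buf(n, 0);   // local memory to expose

    MPI_Win win;
    // The call that the 4.1.x reports describe as failing.
    MPI_Win_create(buf.data(), n * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}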
Hi,
I'm trying to get a better understanding of coordinating
(non-overlapping) local stores with remote puts when using
passive-target synchronization for RMA. I understand that the window
should be locked for a local store, but can it be a shared lock? In my
example, each process retrieves and in
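A minimal sketch of the pattern I have in mind, assuming each rank
touches a disjoint slot so the shared locks never conflict
(displacements and values are only illustrative):

#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::vector<int> base(size, -1);   // one slot per rank
    MPI_Win win;
    MPI_Win_create(base.data(), size * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    // Local store under a shared lock on our own window: fine as
    // long as no concurrent access overlaps this slot.
    MPI_Win_lock(MPI_LOCK_SHARED, rank, 0, win);
    base[rank] = rank;
    MPI_Win_unlock(rank, win);

    // Remote put into a disjoint slot of the next rank's window,
    // also under a shared lock.
    int peer = (rank + 1) % size;
    MPI_Win_lock(MPI_LOCK_SHARED, peer, 0, win);
    MPI_Put(&rank, 1, MPI_INT, peer, rank /* displacement */, 1,
            MPI_INT, win);
    MPI_Win_unlock(peer, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}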
cgio_open_file:H5Dwrite:write to node data failed
The file system is NFS, and the build is openmpi-v3.0.x-201711220306-2399e85.
Stephen
Stephen Guzik, Ph.D.
Assistant Professor, Department of Mechanical Engineering
Colorado State University
On 01/18/2018 04:17 PM, Jeff Squyres (jsquyres) wrote:
> and working on, that
> might trigger this behavior (although it should actually work for
> collective I/O even in that case).
>
> Try setting something like
>
> mpirun --mca io romio314 ...
>
> Thanks
>
> Edgar
>
>
> On 10/12/2017 8:26 PM, Stephen Guzik wrote:
>
running the job across the two workstations seems to work fine.
- on a single node, everything works as expected in all cases. In the
case described above where I get an error, the error is only observed
with processes on two nodes.
- code follows.
Thanks,
Stephen Guzik
--
#include
#incl
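For context, a minimal collective write of the kind this thread
concerns might look like the following sketch; the output path and
block size are placeholders:

#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 256;
    std::vector<int> data(n, rank);   // each rank writes its own block

    MPI_File fh;
    // "out.dat" stands in for a file on the shared NFS mount.
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset off = (MPI_Offset)rank * n * sizeof(int);
    // Collective write into disjoint, rank-ordered blocks.
    MPI_File_write_at_all(fh, off, data.data(), n, MPI_INT,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}

Running it with and without "--mca io romio314" on the NFS mount should
show whether the failure is specific to the default I/O component.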
Yes, I can confirm that openmpi 3.0.0 builds without issue when
libnl-route-3-dev is installed.
Thanks,
Stephen
Stephen Guzik, Ph.D.
Assistant Professor, Department of Mechanical Engineering
Colorado State University
On 09/21/2017 12:55 AM, Gilles Gouaillardet wrote:
> Stephen,
>
>
>
v3... ibverbs nl-3
so I wonder if perhaps there is something more serious going on. Any
suggestions?
Thanks,
Stephen Guzik
e4:30795] Signal code: Address not mapped (1)
Stephen
On 02/11/2016 05:30 PM, Stephen Guzik wrote:
Hi,
I would like to divide n processes between the sockets on a node, with
one process per core, and bind them to a hwthread. Consider a system
with 2 sockets, 10 cores per socket, and 2 hwthreads per core. If I enter
-np 20 --map-by ppr:1:core --bind-to hwthread
then this works as I intend.
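For checking placement, the same line with --report-bindings added
(the program name is a placeholder) prints each process's binding at
launch:

mpirun -np 20 --map-by ppr:1:core --bind-to hwthread --report-bindings ./a.out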
Hi,
To the Devs. I just noticed that MPI::BOTTOM requires a cast. Not sure
if that was intended.
Compiling 'MPI::COMM_WORLD.Bcast(MPI::BOTTOM, 1, someDataType, 0);'
results in:
error: invalid conversion from ‘const void*’ to ‘void*’
error: initializing argument 1 of ‘virtual void MPI::Comm::
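For what it's worth, a const_cast satisfies the compiler, though
whether casting away the const here is the intended usage is exactly
my question:

MPI::COMM_WORLD.Bcast(const_cast<void*>(MPI::BOTTOM), 1, someDataType, 0);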