Ryan,
What filesystem are you running on?
Open MPI defaults to the ompio component, except on Lustre filesystems,
where ROMIO is used.
(If the issue is related to ROMIO, that could explain why you did not
see any difference; in that case, you might want to try another
filesystem (local filesystem o
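For what it's worth, you can also force a given io component to narrow
this down. A rough sketch (component names vary across releases;
ompi_info shows what your build actually has):

    # list the io components available in this build
    ompi_info | grep "MCA io"
    # force ompio (or the ROMIO component reported above)
    mpirun --mca io ompio -np 2 ./a.out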
Simone,
If you want to run a single MPI task, you can use any of the following
(a minimal test program is sketched after this list):
- mpirun -np 1 ./a.out (this is the most standard option)
- ./a.out (this is singleton mode. Note that a.out will fork&exec an
orted daemon under the hood; this is needed, for example, if your app
calls MPI_Comm_spawn().)
- OMPI_MCA_ess
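A minimal sketch of a test program for either launch mode (plain C;
build with mpicc):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* with a single task this should print "rank 0 of 1" */
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }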
Hi,
For testing purposes I run some MPI+OpenMP benchmarks with `mpirun -np 1
./a.out`, and I am using OpenMPI 3.1.3.
As far as I understand, `mpirun` sets an affinity mask, and the OpenMP runtime
(in my case the LLVM OpenMP RT) respects this mask and only sees 1 physical
core.
In my case, I am
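One way to see the mask the process (and hence the OpenMP runtime)
inherits is a sketch like the following (Linux-specific;
sched_getaffinity and CPU_COUNT are glibc extensions, not part of MPI
or OpenMP):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        /* query the affinity mask of the calling process */
        if (sched_getaffinity(0, sizeof(mask), &mask) == 0)
            printf("allowed CPUs: %d\n", CPU_COUNT(&mask));
        return 0;
    }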
I verified that it makes it through to a bash prompt, but I’m a little less
confident that it isn’t cleared by something "make test" does. Any
recommendation for a way to verify?
In any case, no change, unfortunately.
Sent from my iPhone
> On Feb 16, 2019, at 08:13, Gabriel, Edgar wrote:
>
> [...]
Probably not. I think this is now fixed. Might be worth trying master to
verify.
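In case it helps, building master is roughly the usual autotools flow
(prefix below is just a placeholder):

    git clone https://github.com/open-mpi/ompi.git
    cd ompi
    ./autogen.pl
    ./configure --prefix=$HOME/ompi-master
    make -j8 install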
> On Feb 16, 2019, at 7:01 AM, Bart Janssens wrote:
>
> Hi Gilles,
>
> Thanks, that works (I had to put quotes around the ^rdma). Should I file a
> github issue?
>
> Cheers,
>
> Bart
Hi Gilles,
Thanks, that works (I had to put quotes around the ^rdma). Should I file a
github issue?
Cheers,
Bart
On 16 Feb 2019, 14:05 +0100, Gilles Gouaillardet wrote:
> Bart,
>
> It looks like a bug that involves the osc/rdma component.
>
> Meanwhile, you can
> mpirun --mca osc ^rdma ...
>
What file system are you running on?
I will look into this, but it might be later next week. I just wanted to
emphasize that we regularly run the parallel HDF5 tests with ompio, and I
am not aware of any outstanding items that are supposed to work but do
not. That being said, I r
Bart,
It looks like a bug that involves the osc/rdma component.
Meanwhile, you can
mpirun --mca osc ^rdma ...
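Side note: if the caret is awkward to quote in your shell, the same
selection can be set through the environment, since every MCA parameter
maps to an OMPI_MCA_ variable:

    export OMPI_MCA_osc=^rdma
    mpirun -np 2 ./a.out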
Cheers,
Gilles
On Sat, Feb 16, 2019 at 8:43 PM b...@bartjanssens.org wrote:
>
> Hi,
>
> Running the following test code on two processes:
>
> [...]
Hi,
Running the following test code on two processes:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 2

int main(int argc, char **argv)
{
    int i, rank, num_procs, len, received[N], buf[N];
    MPI_Aint addrbuf[1], recvaddr[1];
    MPI_Win win, awin;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_C