Charles,
are you saying that even if you
mpirun --mca pml ob1 ...
(i.e. force the ob1 component of the pml framework) the memory leak is
still present?
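For anyone trying to reproduce this, a quick way to confirm which PML was actually selected is the pml framework's verbose MCA parameter (a sketch; ./my_app and the process count are placeholders):

```shell
# Force the ob1 PML (bypassing UCX) and print PML selection details
# to stderr. ./my_app and -np 2 are placeholders.
mpirun --mca pml ob1 --mca pml_base_verbose 10 -np 2 ./my_app
```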
As a side note, we strongly recommend avoiding
configure --with-FOO=/usr
Instead,
configure --with-FOO
should be used (otherwise you will end up ...
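In other words (a sketch; the paths, prefix, and component names are illustrative), when the dependency lives under the default /usr prefix:

```shell
# Recommended: let configure find headers/libs in the default system paths.
./configure --with-ucx --prefix=$HOME/opt/openmpi

# Discouraged: passing /usr explicitly injects -I/usr/include and
# -L/usr/lib into the build flags, which can shadow other dependencies.
# ./configure --with-ucx=/usr ...
```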
Posting this on UCX list.
On Thu, Oct 4, 2018 at 4:42 PM Charles A Taylor wrote:
>
> We are seeing a gaping memory leak when running OpenMPI 3.1.x (or 2.1.2,
> for that matter) built with UCX support. The leak shows up
> whether the “ucx” PML is specified for the run or not. The applications
"Gabriel, Edgar" writes:
> It was originally for performance reasons, but this should be fixed at
> this point. I am not aware of correctness problems.
>
> However, let me try to clarify your question: what do you
> precisely mean by "MPI I/O on Lustre mounts without flock"? Was the
> Lustre filesystem mounted without flock?
Oops! We had a typo in yesterday's fix -- fixed:
https://github.com/open-mpi/ompi/pull/5847
Ralph also put double extra super protection to make triple sure that this
error can't happen again in:
https://github.com/open-mpi/ompi/pull/5846
Both of these should be in tonight's nightly snapshots.
It was originally for performance reasons, but this should be fixed at this
point. I am not aware of correctness problems.
However, let me try to clarify your question: what do you precisely mean
by "MPI I/O on Lustre mounts without flock"? Was the Lustre filesystem mounted
without flock?
Is romio preferred over ompio on Lustre for performance or correctness?
If it's relevant, the context is MPI-IO on Lustre mounts without flock,
which ompio doesn't seem to require.
Thanks.
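For context, both I/O implementations can be selected at run time via the io framework (a sketch; the exact component names vary by release, e.g. romio314 in the 3.1.x series, and ./my_mpiio_app is a placeholder):

```shell
# List the I/O components available in this build.
ompi_info | grep "MCA io"

# Run with ROMIO instead of the default OMPIO.
mpirun --mca io romio314 -np 4 ./my_mpiio_app

# A Lustre client mounted with flock support looks like this (illustrative):
# mount -t lustre -o flock mgs@tcp:/fsname /mnt/lustre
```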
___
users mailing list
users@lists.open-mpi.org
https://lists.open
Please send Jeff and me the opal/mca/pmix/pmix4x/pmix/config.log again - we’ll
need to see why it isn’t building. The patch definitely is not in the v4.0
branch, but it should have been in master.
> On Oct 5, 2018, at 2:04 AM, Siegmar Gross
> wrote:
>
> Hi Ralph, hi Jeff,
>
>
> On 10/3/18 8:14 PM, Ralph H Castain wrote:
Hi Ralph, hi Jeff,
On 10/3/18 8:14 PM, Ralph H Castain wrote:
Jeff and I talked and believe the patch in
https://github.com/open-mpi/ompi/pull/5836 should fix the problem.
Today I've installed openmpi-master-201810050304-5f1c940 and
openmpi-v4.0.x-201810050241-c079666. Unfortunately, I still ...
Hi,
I've tried to install openmpi-master-201810050304-5f1c940 on my "SUSE Linux
Enterprise Server 12.3 (x86_64)" with Sun C 5.15 (Oracle Developer Studio
12.6). Unfortunately, I get the following error.
loki openmpi-master-201810050304-5f1c940-Linux.x86_64.64_cc 128 head -7
config.log | tail -