Any chance this is all due to an OS X security setting? Apple has
been putting locked doors on many, many things lately.
On Thu, May 5, 2022 at 8:57 AM Jeff Squyres (jsquyres) via users
wrote:
>
> Scott --
>
> Sorry; something I should have clarified in my original email: I meant you to
> run
If you are running this on a cluster or other professionally supported
machine, your system administrator may be able to help.
You should also check whether you should be running LS-DYNA directly at
all. I believe you should be running mpirun or mpiexec followed by the
name of the LS-DYNA executable.
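For example, assuming the MPP executable is called mppdyna (the executable
name, process count, and input deck below are only placeholders for
whatever your installation actually uses):
$ mpirun -np 8 mppdyna i=input.k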
Luis,
Can you install OpenMPI into your home directory (or other shared
filesystem) and use that? You may also want to contact your cluster
admins to see if they can help do that or offer another solution.
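A minimal sketch of a home-directory build, assuming a 4.1.x tarball (the
version and paths are placeholders; adjust as needed):
$ tar xzf openmpi-4.1.1.tar.gz && cd openmpi-4.1.1
$ ./configure --prefix=$HOME/opt/openmpi
$ make -j4 all && make install
$ export PATH=$HOME/opt/openmpi/bin:$PATH
$ export LD_LIBRARY_PATH=$HOME/opt/openmpi/lib:$LD_LIBRARY_PATH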
On Wed, Jan 26, 2022 at 3:21 PM Luis Alfredo Pires Barbosa via users
wrote:
>
> Hi Ralph,
>
definition of `ompi_op_avx_3buff_functions_avx2'
>
> ./.libs/liblocal_ops_avx2.a(liblocal_ops_avx2_la-op_avx_functions.o):/project/muno/OpenMPI/BUILD/SRC/openmpi-4.1.1/ompi/mca/op/avx/op_avx_functions.c:651:
> first defined here
> make[2]: *** [mca_op_avx.la] Error 2
> make[2]: Le
Ray,
If all the errors about not being compiled with -fPIC are still appearing,
there may be a bug that is preventing the option from getting through to
the compiler(s). It might be worth looking through the logs for the full
compile command of one or more of the affected files to see whether that
is the case.
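For example, something along these lines would show whether the flag is
actually making it onto the compile lines (the log file name is a
placeholder for however you captured the build output, and the source file
is whichever one the linker complained about):
$ grep -- '-fPIC' config.log | head
$ grep 'the_offending_file.c' build.log | head -1
If it is missing, forcing it at configure time is one way to test:
$ ./configure CFLAGS=-fPIC CXXFLAGS=-fPIC FCFLAGS=-fPIC ...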
We are getting this message when OpenMPI starts up.
--
WARNING: There was an error initializing an OpenFabrics device.
Local host: gls801
Local device: mlx5_0
Thomas,
I think OpenMPI is installed correctly. This
$ mpiexec -mca btl ^openib -N 5 gcc --version
asks OpenMPI to run `gcc --version` once per launched process, so if you
did NOT get 5 copies of the output, something would be wrong.
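For example, a successful run would print something like this (the exact
compiler banner will differ on your system), five copies in total, one per
launched process:
gcc (GCC) 8.3.1 ...
gcc (GCC) 8.3.1 ...
...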
From your error message, it looks to me as th
It covers a good deal more than MPI, but there is at least one full
chapter on MPI in
Scientific Programming and Computer Architecture, Divakar
Viswanath (MIT Press, 2017)
also available online at
https://divakarvi.github.io/bk-spca/spca.html
We are getting errors on our system that indicate that we should
export OMPI_MCA_btl_vader_single_copy_mechanism=none
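For reference, the environment variable and the command-line option are
two equivalent ways of setting the same MCA parameter (./a.out and the
process count are placeholders):
$ export OMPI_MCA_btl_vader_single_copy_mechanism=none
$ mpirun -np 16 ./a.out
or, per invocation:
$ mpirun --mca btl_vader_single_copy_mechanism none -np 16 ./a.out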
Our user originally reported
> This occurs for both GCC and PGI. The errors we get if we do not set this
> indicate something is going wrong in our communication which uses
This is what CentOS installed.
$ yum list installed hwloc\*
Loaded plugins: langpacks
Installed Packages
hwloc.x86_64          1.11.8-4.el7    @os
hwloc-devel.x86_64    1.11.8-4.el7    @os
hwloc-libs.x86_64
scheduler (Slurm),
PMIx, and OpenMPI, so I am a bit muddled about how all the moving
pieces work yet.
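As far as I can tell, the pieces get wired together at build time roughly
like this (the paths are placeholders for wherever PMIx and UCX live on
our system):
$ ./configure --prefix=$HOME/opt/openmpi \
    --with-slurm \
    --with-pmix=/opt/pmix \
    --with-ucx=/opt/ucx
with Slurm doing the launching, PMIx doing the process wire-up, and UCX
(or openib) carrying the actual MPI traffic, but corrections are welcome.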
On Sun, Feb 2, 2020 at 4:16 PM Jeff Squyres (jsquyres)
wrote:
>
> Bennet --
>
> Just curious: is there a reason you're not using UCX?
>
>
> > On Feb 2, 2020, a
We get these warnings/errors from OpenMPI, versions 3.1.4 and 4.0.2
--
WARNING: No preset parameters were found for the device that Open MPI
detected:
Local host: gl3080
Device name: mlx5_0
Device ven
Setting UCX_LOG_LEVEL=error suppresses the messages.
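For example, either exporting it before the run or forwarding it through
mpirun works (the application name and process count are placeholders):
$ export UCX_LOG_LEVEL=error
$ mpirun -np 8 ./my_app
or
$ mpirun -x UCX_LOG_LEVEL=error -np 8 ./my_app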
There may be release eager messages.
If anyone is interested, this is the GitHub Issue:
https://github.com/openucx/ucx/issues/4175
On Sun, Sep 8, 2019 at 11:37 AM Bennet Fauber wrote:
>
> I am posting this here, first, as I think these quest
I am posting this here, first, as I think these questions are probably
OpenMPI related and not related specifically to parallel HDF5.
I am trying to get parallel HDF5 installed, but in the `make check`, I
am getting many, many warnings of the form
-
mpool.c:38 UCX WARN object 0x2afbefc67f
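For reference, the build recipe is roughly the standard parallel one (the
install prefix is a placeholder):
$ CC=mpicc ./configure --enable-parallel --prefix=$HOME/opt/hdf5
$ make
$ make check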