Hi,
I'm on a single-socket, 20-thread machine.
I'm trying to run an "nwchem" job in parallel processing mode (with
load balancing).
I was trying with: "mpirun -np 4 nwchem my_file.nw"
But this is launching the same job 4 times in a row and resulting in a
crash. Am I going wrong somewhere in this scenario?
It seems that siesta builds its own MPI library named libmpi_f90.a, which has
the same name as one of the MPI libraries. I solved it. Thanks for all the suggestions.
Regards,
Mahmood
That typically occurs if nwchem is linked with MPICH and you are using
Open MPI's mpirun.
First, I recommend you double check your environment and run
ldd nwchem
to make sure the very same Open MPI is used everywhere.
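For example, something like this (path and output are only illustrative; the point is to see which MPI implementation the libraries resolve to):

ldd $(which nwchem) | grep -i mpi
# if libmpi resolves to an MPICH install rather than this Open MPI, nwchem was
# linked against a different MPI, and Open MPI's mpirun will simply start the
# binary 4 times as independent singletons instead of one 4-rank job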
Cheers,
Gilles
Hi,
Here is the problem with statically linking an application with MPI
by specifying the library names:
FC=/export/apps/siesta/openmpi-1.8.8/bin/mpifort
FFLAGS=-g -Os
FPPFLAGS= -DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT
LDFLAGS=-static
MPI1=/export/apps/siesta/openmpi-1.8.8/lib/libmpi_mpifh.a
Mahmood,
try to prepend /export/apps/siesta/openmpi-1.8.8/lib to your
$LD_LIBRARY_PATH
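for example, with a bash-like shell (the binary name below is only an example):

export LD_LIBRARY_PATH=/export/apps/siesta/openmpi-1.8.8/lib:$LD_LIBRARY_PATH
ldd transiesta | grep mpi    # the Open MPI .so files should now resolve under /export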
note this is not required when Open MPI is configure'd with
--enable-mpirun-prefix-by-default
Cheers,
Gilles
Well I want to omit LD_LIBRARY_PATH. For that reason I am building the
binary statically.
> note this is not required when Open MPI is configure'd with
>--enable-mpirun-prefix-by-default
I really did that. Using Rocks-6, I installed the application and openmpi
on the shared file system (/export).
Mahmood,
It looks like it is dlopen that is complaining. What happens if you
configure with --disable-dlopen?
in this case, you should configure Open MPI with
--disable-shared --enable-static --disable-dlopen
then you can manually run the mpifort link command with --showme,
so you get the fully expanded gfortran link command.
then you can edit this command so the non-system libs (e.g. lapack, openmpi,
siesta) are linked statically.
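a rough illustration (library names and output below are only indicative; the real --showme output will be longer):

/export/apps/siesta/openmpi-1.8.8/bin/mpifort --showme -o transiesta libfdf.a libSiestaXC.a
# prints the expanded command, roughly:
#   gfortran -o transiesta libfdf.a libSiestaXC.a \
#     -L/export/apps/siesta/openmpi-1.8.8/lib -lmpi_usempi -lmpi_mpifh -lmpi ...
# replace the -l flags of the non-system libs with the full paths to the .a
# archives (e.g. .../lib/libmpi_mpifh.a), then run the edited command by hand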
Do you mean --disable-dl-dlopen? The last lines of configure are
+++ Configuring MCA framework dl
checking for no configure components in framework dl...
checking for m4 configure components in framework dl... libltdl, dlopen
--- MCA component dl:dlopen (m4 configuration macro, priority 80)
check
Mahmood,
See Gilles's subsequent reply which contains more details, but, no, I
think we mean --disable-dlopen. configure recognizes two negations of
a feature, which you can see from the ./configure --help output
--disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no)
The
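Concretely, both of the invocations below ask configure for the same thing (dlopen used as the example feature):

./configure --disable-dlopen
./configure --enable-dlopen=no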
Mahmood,
I meant
--disable-dlopen
I am not aware of a --disable-dl-dlopen option, which does not mean it does
not exist.
Cheers,
Gilles
So, I used
./configure --prefix=/export/apps/siesta/openmpi-1.8.8
--enable-mpirun-prefix-by-default --enable-static --disable-shared
--disable-dlopen
and added -static to LDFLAGS, but I get:
/export/apps/siesta/openmpi-1.8.8/bin/mpifort -o transiesta -static
libfdf.a libSiestaXC.a \
I installed libibverbs-devel-static.x86_64 via yum:

root@cluster:tpar# yum list libibverb*
Installed Packages
libibverbs.x86_64                  1.1.8-4.el6   @base
libibverbs-devel.x86_64            1.1.8-4.el6   @base
libibverbs-devel-static.x86_64     1.1.8-4.el6
We have a new high-speed component for RMA in 2.0.x called osc/rdma. Since the
component is doing direct RDMA on the target, we are much more strict about the
ranges. osc/pt2pt doesn't bother checking at the moment.
Can you build Open MPI with --enable-debug and add -mca osc_base_verbose 100 to
your mpirun command line?
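Something along these lines (process count and test binary are placeholders):

./configure --enable-debug ...   # plus your usual configure options
make install
mpirun -np 2 --mca osc_base_verbose 100 ./your_rma_test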
Thanks for reporting this! There are a number of things going on here.
It seems there may be a problem with the Java bindings checked by CReqops.Java
because the C test passes. I'll take a look at that. The issue can be found
at: https://github.com/open-mpi/ompi/issues/2081
MPI_Compare_an
This error was the result of a typo which caused an incorrect range check when
the compare-and-swap was on a memory region less than 8 bytes away from the end
of the window. We never caught this because in general no apps create a window
as small as that MPICH test (4 bytes). We are adding the
Good news :)
> If I drop -static, the error is gone... However, the ldd command shows that
> the binary cannot access those two MPI libraries.
In the previous installation, I kept both .so and .a files. Therefore, it
first searched for the .so files, and that was the reason why ldd failed.
Forget about dlopen.