Dear all,
ompi_info reports pml components are available:
$ /usr/mpi/gcc/openmpi-3.1.0rc2/bin/ompi_info -a | grep pml
MCA pml: v (MCA v2.1.0, API v2.0.0, Component v3.1.0)
MCA pml: monitoring (MCA v2.1.0, API v2.0.0, Component v3.1.0)
MCA pml: ya
ping
2018-06-01 22:29 GMT+03:00 Dmitry N. Mikushin :
> Dear all,
>
> Looks like I have a weird issue never encountered before. While trying to
> run simplest "Hello world" program, I get:
>
> $ cat hello.c
> #include <mpi.h>
>
> int main(int argc, char
Dear all,
Looks like I have a weird issue never encountered before. While trying to
run simplest "Hello world" program, I get:
$ cat hello.c
#include <mpi.h>
int main(int argc, char* argv[])
{
MPI_Init(&argc, &argv);
MPI_Finalize();
return 0;
}
$ mpicc hello.c -o hello
$ mpirun -np 1 ./hello
Hi Justin,
If you can build the application in debug mode, try inserting valgrind into
your MPI command line; it's usually very good at tracking down the origins of
failing memory allocations (see the example command after the quote below).
Kind regards,
- Dmitry.
2017-06-20 1:10 GMT+03:00 Sylvain Jeaugey :
> Justin, can you try setting mpi_leave_pinned to
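For example (binary name hypothetical; the valgrind flags shown are just
common defaults):
$ mpirun -np 2 valgrind --leak-check=full --track-origins=yes ./app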
Hi Juraj,
Although the MPI infrastructure may technically support forking, it's known
that not all system resources replicate correctly into the forked
process. For example, forking inside an MPI program with an active CUDA
driver will result in a crash.
Why not compile the MATLAB code down into a n
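A minimal sketch of the safer ordering (the helper command is hypothetical;
the point is that fork() happens before MPI or CUDA own any state):

#include <mpi.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char* argv[])
{
    /* Fork the helper before MPI (and CUDA) create any driver state,
       since such state generally does not survive into the child. */
    pid_t pid = fork();
    if (pid == 0) {
        execlp("sleep", "sleep", "1", (char*)NULL); /* stand-in for real helper work */
        _exit(1);
    }
    MPI_Init(&argc, &argv);
    /* ... MPI and CUDA work happen in the parent only ... */
    MPI_Finalize();
    waitpid(pid, NULL, 0);
    return 0;
}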
Not sure if this is related, but:
I've seen a case of performance degradation on NFS and Lustre when
writing NetCDF files. The reason was that the file was being filled by a
loop writing one 4-byte record at a time. Performance became close to that
of a local hard drive when I simply introduced buffering of reco
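A sketch of that idea (helper names hypothetical; assumes records are only a
few bytes each, as in the NetCDF case):

#include <stdio.h>
#include <string.h>

#define BUFSZ (1 << 20) /* flush in 1 MB chunks instead of 4-byte writes */

static char buf[BUFSZ];
static size_t fill = 0;

/* Queue one small record; flush the accumulated chunk when the buffer fills. */
static void write_record(FILE* f, const void* rec, size_t len)
{
    if (fill + len > BUFSZ) {
        fwrite(buf, 1, fill, f);
        fill = 0;
    }
    memcpy(buf + fill, rec, len);
    fill += len;
}

/* Write out whatever is still buffered (call once at the end). */
static void flush_records(FILE* f)
{
    fwrite(buf, 1, fill, f);
    fill = 0;
}

With plain stdio, much the same effect can be had with
setvbuf(f, NULL, _IOFBF, BUFSZ).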
Hi Justin,
Quick grepping reveals several cuMemcpy calls in OpenMPI. Some of them are
even synchronous, meaning they run on stream 0.
I think the best way of exploring this sort of behavior is to run the
OpenMPI runtime (thanks to its open-source nature!) under a debugger. Rebuild
OpenMPI with -g -O0, add some
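For reference, a debug rebuild along these lines should do (prefix
hypothetical; --enable-debug additionally enables Open MPI's internal
checking):
$ ./configure --prefix=$HOME/openmpi-debug --enable-debug CFLAGS="-g -O0"
$ make all install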
Hi,
Modern Fortran has a feature called ISO_C_BINDING. It essentially
allows you to declare a binding to an external function so that it can be
called from a Fortran program; you only need to provide a corresponding
interface. The ISO_C_BINDING module also contains C-like extensions to the
type system, but you don't need them, as yo
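To illustrate, a sketch of such a binding (all names hypothetical); the
Fortran interface is shown in the comment, with the C implementation it
binds to below:

/* Matching Fortran side:
 *
 *   interface
 *     subroutine scale_array(a, n) bind(C, name="scale_array")
 *       use iso_c_binding, only: c_double, c_int
 *       real(c_double) :: a(*)
 *       integer(c_int), value :: n
 *     end subroutine
 *   end interface
 */
void scale_array(double* a, int n)
{
    for (int i = 0; i < n; i++)
        a[i] *= 2.0; /* any C-side work */
}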
Hi Zbigniew,
> a) I noticed that on my 6-GPU 2-CPU platform the initialization of CUDA 4.2
> takes a long time, approx 10 seconds.
> Do you think I should report this as a bug to nVidia?
This is the expected time for creating driver contexts on so many
devices. I'm sure NVIDIA has already got
Dear Syed,
Why do you think it is related to MPI?
You seem to be compiling the COSMO model, which depends on the netcdf
library, but the symbols are not passed to the linker for some reason. Two
likely causes are: (1) the library linking flag is missing (check that you
have something like -lnetcdf -lnetcdff in your
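For reference, a working link line typically ends with something like this
(paths and object names hypothetical; note -lnetcdff must precede -lnetcdf):
$ mpif90 cosmo.o -L/opt/netcdf/lib -lnetcdff -lnetcdf -o cosmo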
> *To:* Open MPI Users
> *Cc:* Олег Рябков
> *Subject:* Re: [OMPI users] NVCC mpi.h: error: attribute "__deprecated__"
> does not take arguments
>
> Hi Dmitry:
>
> Let me look into this.
>
> Rolf
>
Yeah, definitely. Thank you, Jeff.
- D.
2012/6/18 Jeff Squyres
> On Jun 18, 2012, at 10:41 AM, Dmitry N. Mikushin wrote:
>
> > No, I'm configuring with gcc, and for openmpi-1.6 it works with nvcc
> without a problem.
>
> Then I think Rolf (from Nvidia) should fig
ler and then trying to compile
> with another (like the command line in your mail implies), all bets are off
> because Open MPI has tuned itself to the compiler that it was configured
> with.
>
> On Jun 18, 2012, at 10:20 AM, Dmitry N. Mikushin wrote:
>
> >
Hello,
With openmpi svn trunk as of
Repository Root: http://svn.open-mpi.org/svn/ompi
Repository UUID: 63e3feb5-37d5-0310-a306-e8a459e722fe
Revision: 26616
we are observing the following strange issue (see below). What do you think:
is it a problem of NVCC or of OpenMPI?
Thanks,
- Dima.
[dmikushin
Hi Ghobad,
The error message means that OpenMPI wants to use cl.exe, the compiler
from Microsoft Visual Studio.
Here, http://www.open-mpi.org/software/ompi/v1.5/ms-windows.php, it is stated:
This is the first binary release for Windows, with basic MPI libraries
and executables. The supported platf
e the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1
Command failed: ./autogen.sh
Does it work for you with 2.67?
Thanks,
- D.
2011/12/30 Ralph Castain :
>
> On Dec 29, 2011, at 3:39 PM, Dmitry N. Mikushin wrote:
>
>> No, that was autoREconf, and all
y too old for us. However, what you just sent now shows
> 2.67, which would be fine.
>
> Why the difference?
>
>
> On Dec 29, 2011, at 3:27 PM, Dmitry N. Mikushin wrote:
>
>> Hi Ralph,
>>
>> URL: http://svn.open-mpi.org/svn/ompi/trunk
>> Repository Root: http:
red
> levels? The requirements differ by version.
>
> On Dec 29, 2011, at 2:52 PM, Dmitry N. Mikushin wrote:
>
>> Dear Open MPI Community,
>>
>> I need a custom OpenMPI build. While running ./autogen.pl on Debian
>> Squeeze, there is an error:
>>
>> ---
Dear Open MPI Community,
I need a custom OpenMPI build. While running ./autogen.pl on Debian
Squeeze, there is an error:
--- Found autogen.sh; running...
autoreconf2.50: Entering directory `.'
autoreconf2.50: configure.in: not using Gettext
autoreconf2.50: running: aclocal --force -I m4
autorecon
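When chasing errors like this, a first step is to check which autotools
versions autogen actually picks up:
$ autoconf --version
$ automake --version
$ libtool --version
$ m4 --version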
an check by adding a sleep or
> something like that), but they are not synchronized and do not know each
> other. This is what MPI_Init is used for.
>
>
>
> Matthieu Brucher
>
> 2011/12/14 Dmitry N. Mikushin
>
> Dear colleagues,
>
> For GPU Winter School powered
Dear colleagues,
For the GPU Winter School powered by the Moscow State University cluster
"Lomonosov", OpenMPI 1.7 was built to test and popularize the CUDA
capabilities of MPI. There is one strange warning I cannot understand: the
OpenMPI runtime suggests initializing CUDA prior to MPI_Init. Sorry,
but how
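(For context, the initialization order the warning asks for would presumably
look like this sketch; cudaFree(0) is just a common idiom to force context
creation:)

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char* argv[])
{
    cudaSetDevice(0); /* a real code would map ranks to GPUs */
    cudaFree(0);      /* force CUDA context creation before MPI_Init */
    MPI_Init(&argc, &argv);
    /* ... CUDA-aware MPI communication ... */
    MPI_Finalize();
    return 0;
}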
> CUDA is an Nvidia-only technology, so it might be a bit limiting in some
> cases.
I think here it's more a question of compatibility (which scales roughly as
1.0 / [magnitude of effort]), rather than corporate selfishness >:) Consider
the memory buffer implementation: in contrast to CUDA, in OpenCL they are
some ab
Hi,
Maybe Mickaël means that load balancing could be achieved simply by
spawning a varying number of MPI processes, depending on how many cores a
particular node has? This should be possible, but the accuracy of such
balancing will be task-dependent due to other factors, like memory
operations and communicatio
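For example, with Open MPI the per-node process count can be expressed in a
hostfile (host names and slot counts hypothetical):
node1 slots=16
node2 slots=8
$ mpirun --hostfile myhosts ./app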
ic}}
3) Save the specs file into the compiler's folder,
/usr/lib/gcc/<target>/<version>/. For example, in case of Ubuntu 10.10
with gcc 4.6.1 it's /usr/lib/gcc/x86_64-linux-gnu/4.6.1/
With this change there are no unresolvable relocations anymore! (The full
command sequence is sketched after the quote below.)
- D.
2011/10/3 Dmitry N. Mikushin :
> Hi,
>
> Here's a repr
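The specs-file steps above amount to roughly the following (a sketch; the
specs edit itself is the part elided above):
$ gcc -dumpspecs > specs
$ # edit specs as described, then install it into the compiler's folder:
$ sudo cp specs /usr/lib/gcc/x86_64-linux-gnu/4.6.1/specs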
nu
Thread model: posix
gcc version 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3)
2011/9/28 Dmitry N. Mikushin :
> Hi,
>
> Interestingly, the errors are gone after I removed "-g" from the app
> compile options.
>
> I tested again on the fresh Ubuntu 11.10 install: both 1.4.3 and 1.
-bit.
- D.
2011/9/24 Jeff Squyres :
> Check the output from when you ran Open MPI's configure and "make all" -- did
> it decide to build the F77 interface?
>
> Also check that gcc and gfortran output .o files of the same bitness / type.
>
>
> On Sep 24, 2011, at 8
you compile / link simple OMPI applications without this problem?
>
> On Sep 24, 2011, at 7:54 AM, Dmitry N. Mikushin wrote:
>
>> Hi Jeff,
>>
>> Today I've verified this application on Fedora 15 x86_64, where
>> I'm usually building OpenMPI from source us
>
> Try running "file" on the Open MPI libraries and/or your target application
> .o files to see what their bitness is, etc.
>
>
> On Sep 22, 2011, at 3:15 PM, Dmitry N. Mikushin wrote:
>
>> Hi Jeff,
>>
>> You're right because I also tried 1.4.3, a
y to link
> them together).
>
> Can you verify that everything was built with all the same 32/64?
>
>
> On Sep 22, 2011, at 1:21 PM, Dmitry N. Mikushin wrote:
>
>> Hi,
>>
>> OpenMPI 1.5.4 compiled with gcc 4.6.1 and linked with target app gives
>> a load o
Same error when configured with --with-pic --with-gnu-ld
2011/9/22 Dmitry N. Mikushin :
> Hi,
>
> OpenMPI 1.5.4 compiled with gcc 4.6.1 and linked with target app gives
> a load of linker messages like this one:
>
> /usr/bin/ld: ../../lib/libutil.a(parallel_utilities.o)(
Hi,
OpenMPI 1.5.4 compiled with gcc 4.6.1 and linked with target app gives
a load of linker messages like this one:
/usr/bin/ld: ../../lib/libutil.a(parallel_utilities.o)(.debug_info+0x529d):
unresolvable R_X86_64_64 relocation against symbol
`mpi_fortran_argv_null_
There are a lot of similar me
link: invalid option -- 'd'
Try `link --help' for more information.
link: invalid option -- 'd'
Try `link --help' for more information.
configure: error: unknown naming convention:
2011/8/24 Barrett, Brian W :
> On 8/24/11 11:29 AM, "Dmitry N. Mikushin"
Hi,
Quick question: is there an easy switch to compile and install both
32-bit and 64-bit OpenMPI libraries into a single tree? E.g. 64-bit in
/prefix/lib64 and 32-bit in /prefix/lib.
Thanks,
- D.
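(For what it's worth, in the absence of a single switch, a two-pass build
into one prefix might look like this sketch, untested:)
$ ./configure --prefix=/prefix --libdir=/prefix/lib64 CFLAGS=-m64 CXXFLAGS=-m64 FCFLAGS=-m64
$ make all install && make distclean
$ ./configure --prefix=/prefix --libdir=/prefix/lib CFLAGS=-m32 CXXFLAGS=-m32 FCFLAGS=-m32
$ make all install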
BasitAli,
Signal 15 apparently means that one of WRF's MPI processes has been
unexpectedly terminated, maybe by a program decision. Whether it is
OpenMPI-specific or not, the issue needs to be tracked down somehow to get
more details. Ideally, the best thing is to get a debugger attached
once the pro
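(For the record, attaching usually amounts to the following, once the PID of
the affected rank is known; the PID here is a placeholder:)
$ gdb -p <pid>
(gdb) thread apply all backtrace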
Sorry, disregard this, the issue was created by my own buggy compiler wrapper.
- D.
2011/7/10 Dmitry N. Mikushin :
> Hi,
>
> Maybe it would be useful to report that the openmpi 1.5.3 archive currently
> has a strange issue when installing on Fedora 15 x86_64 (gcc 4.6)
> that *does n
Hi,
Maybe it would be useful to report that the openmpi 1.5.3 archive currently
has a strange issue when installing on Fedora 15 x86_64 (gcc 4.6)
that *does not* happen with 1.4.3:
$ ../configure --prefix=/opt/openmpi_kgen-1.5.3 CC=gcc CXX=g++
F77=gfortran FC=gfortran
...
$ sudo make install
...
;
>> b) when you built/installed Open MPI, it couldn't find a working C++ /
>> Fortran compiler, so it skipped building support for them.
>>
>>
>>
>> On Jun 22, 2011, at 12:05 PM, Dmitry N. Mikushin wrote:
>>
>>> Here's mine produced from
> libltdl support: yes
> Heterogeneous support: no
> mpirun default --prefix: no
> MPI I/O support: yes
> MPI_WTIME support: gettimeofday
> Symbol visibility support: yes
> ..
>
>
> On Wed, Jun 22, 2011 at 12:34 PM, Dmitry N. Mikushin
> wrote:
Alexandre,
Did you have a working Fortran compiler on the system at the time of OpenMPI
compilation? In my experience Fortran bindings are always compiled by
default. How did you configure it, and did you notice any messages
regarding Fortran support in the configure output?
- D.
2011/6/22 Alexandre Souza :
>
inherit everything from your environment.
>
> I advised the user to "sudo -s" and then set up the compiler environment and
> then run make install.
>
> Sent from my phone. No type good.
>
> On May 7, 2011, at 9:37 PM, "Dmitry N. Mikushin" wrote:
>
>
f ./configure
CC=/full/path/to/icc, then both "make" and "make install" work.
Nothing needs to be searched, icc is already in PATH, since
compilervars are sourced in profile.d. Or am I missing something?
Thanks,
- D.
2011/5/8 Tim Prince :
> On 5/7/2011 2:35 PM, Dmitry N. Miku
> didn't find the icc compiler
Jeff, on 1.4.3 I saw the same issue, even more generally: "make
install" cannot find the compiler if it is a non-default compiler (i.e.
not the system gcc); the same happens for Intel or LLVM, for example.
The workaround is to specify full paths to the compilers with CC=...
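i.e., something like (prefix hypothetical):
$ ./configure --prefix=/opt/openmpi-intel CC=$(which icc) CXX=$(which icpc) F77=$(which ifort) FC=$(which ifort)
$ make && sudo make install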
Eric,
You have a link-time error complaining about the absence of some
libraries. At least two of them, libm and libdl, must be provided by the
system, not by the MPI implementation. Could you locate them in
/usr/lib64? It would also be useful to figure out whether the problem is
global or specific to HPL: do y
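e.g.:
$ ls /usr/lib64/libm.so* /usr/lib64/libdl.so*
$ ldconfig -p | grep -E 'libm\.so|libdl\.so'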
I checked that this issue is not caused by using different compile
options for different libraries. There is a set of libraries and an
executable compiled with mpif90, and this warning comes up for the
executable's object and one of the libraries...
2011/3/25 Dmitry N. Mikushin :
> Hi,
>
> I&
Hi,
I'm wondering if anybody has seen something similar: have you
succeeded in running an application compiled with openmpi-pgi-1.4.2 that
produces the following warnings?
/usr/bin/ld: Warning: size of symbol `mpi_fortran_errcodes_ignore_'
changed from 4 in foo.o to 8 in lib/libfoolib2.so
/usr/bin/ld: Wa