Hello,
I am trying to implement graceful application shutdown in case mpirun
receives a SIGTERM. With Open MPI 4.x this works just fine: SIGTERM is
forwarded. With Open MPI 5.x I now struggle, as prte does not seem to
forward SIGTERM by default. If I try to include this in the list of SIGN
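For context, a minimal sketch (not the original code) of the application-side handler such a setup relies on - the idea being that each rank catches the forwarded SIGTERM, sets a flag, finishes its current work item, and then shuts down via MPI_Finalize. The work loop and names are illustrative assumptions:

/* graceful_term.c - illustrative sketch: catch a forwarded SIGTERM,
 * finish the current iteration, then shut down cleanly. */
#include <mpi.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigterm = 0;

static void term_handler(int sig)
{
    (void)sig;
    got_sigterm = 1;   /* only set a flag; no MPI calls inside the handler */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Install the handler after MPI_Init so it is not replaced by anything
     * the MPI library sets up during initialization. */
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = term_handler;
    sigaction(SIGTERM, &sa, NULL);

    while (!got_sigterm) {
        /* ... one unit of real work per iteration ... */
        sleep(1);
    }

    if (rank == 0)
        printf("SIGTERM received, shutting down cleanly\n");
    MPI_Finalize();
    return 0;
}

This only helps, of course, if the runtime actually delivers the signal to the ranks, which is exactly the forwarding behaviour in question here.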
The linked pastebin includes the following version information:
[1,0]:package:Open MPI spackapps@eu-c7-042-03 Distribution
[1,0]:ompi:version:full:4.0.2
[1,0]:ompi:version:repo:v4.0.2
[1,0]:ompi:version:release_date:Oct 07, 2019
[1,0]:orte:version:full:4.0.2
[1,0]:orte:version:repo:v4.0.2
[1,0]:or
The best way to solve this is to update your application to the new mpi_f08 module.
I know this may end up being a lot of work, with bugs to find along the way. ;)
Best regards
Christoph Niethammer
- Original Message -
From: "Open MPI Users"
To: "Open MPI Users"
C
Hello Carlo,
If you execute multiple mpirun commands, they will not know about each other's
resource bindings.
E.g., if you bind to cores, each mpirun will again start assigning from the same
first core.
This then results in oversubscription of those cores, which slows down your
programs - as you did
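One way to see this happening (besides mpirun's --report-bindings output) is to let every rank print the cores it is actually bound to. A small Linux-only sketch, purely illustrative:

/* print_affinity.c - Linux-only sketch: each rank reports its CPU
 * affinity mask, so overlapping bindings from concurrently started
 * mpirun invocations become visible. */
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cpu_set_t mask;
    CPU_ZERO(&mask);
    sched_getaffinity(0, sizeof(mask), &mask);   /* 0 = calling process */

    char cores[8192] = "";
    for (int c = 0; c < CPU_SETSIZE; c++)
        if (CPU_ISSET(c, &mask))
            snprintf(cores + strlen(cores), sizeof(cores) - strlen(cores), " %d", c);

    printf("rank %d bound to cores:%s\n", rank, cores);
    MPI_Finalize();
    return 0;
}

If two concurrently started jobs report the same core numbers, they are competing for those cores.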
Hi,
MTT is a testing infrastructure that automates building MPI libraries and tests,
running the tests, and collecting the results, but it does not come with MPI
test suites itself.
Best
Christoph
- Original Message -
From: "Open MPI Users"
To: "Open MPI Users"
Cc: "Joseph Schuchart"
Sent: Frid
Hello,
What do you want to test in detail?
If you are interested in testing combinations of datatypes and communicators,
the mpi_test_suite [1] may be of interest to you.
Best
Christoph Niethammer
[1] https://projects.hlrs.de/projects/mpitestsuite/
- Original Message -
From: "
local file (mpirun wrapper.sh, in which wrapper.sh sets
the output file based on $PMIX_RANK or $$, and then exec strace ...
Cheers,
Gilles
On Sat, Mar 30, 2019 at 6:29 PM Christoph Niethammer wrote:
>
> Hello,
>
> I was trying to investigate some processes with strace under Open MPI.
> However I have some issues when MPI I/O
" calls.
However, the program works fine without strace.
I tried with Open MPI 3.x and 4.0.1, switching between ompio and romio, on
different operating systems (CentOS 7.6, SLES 12).
I'd appreciate any hints which help me to understand what is going on.
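For reference, a minimal MPI I/O program of the kind involved (an illustrative sketch, not the original test case - the file name, sizes and the particular write call are assumptions) that one could launch under strace via mpirun:

/* mpiio_min.c - illustrative sketch: each rank writes one block into a
 * shared file through MPI I/O. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int buf[1024];
    for (int i = 0; i < 1024; i++)
        buf[i] = rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "testfile.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset off = (MPI_Offset)rank * sizeof(buf);
    MPI_File_write_at(fh, off, buf, 1024, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    if (rank == 0)
        printf("done\n");
    MPI_Finalize();
    return 0;
}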
Best
Christoph
--
Christoph
his happens and/or how to debug this?
>
> In case this helps, the NFS mount flags are:
> (rw,nosuid,nodev,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=,mountvers=3,mountport=<port>,mountproto=udp,local_lock=none,addr=)
>
> than valgrind.
Best
Christoph Niethammer
[1]
http://www.springer.com/cda/content/document/cda_downloaddocument/9783642373480-c1.pdf?SGWID=0-0-45-1397615-p175067491
- Original Message -
From: "Dave Love"
To: "Open MPI Users"
Sent: Thursday, August 24, 2017 1:22:
n" according to the mpirun
man page and how I can achieve the desired behaviour with Open MPI.
Thanks for your help.
Best
Christoph Niethammer
Hello,
Which MPI version are you using?
This looks to me like it triggers https://github.com/open-mpi/ompi/issues/2399
You can check whether you are running into this problem by playing around with
the mca_io_ompio_cycle_buffer_size parameter.
Best
Christoph Niethammer
--
Christoph Niethammer
Hi,
The behaviour is reproducible on our systems:
* Linux Cluster (Intel Xeon E5-2660 v3, Scientific Linux release 6.8 (Carbon),
Kernel 2.6.32, nightly 2.x branch)
The error is independent of the btl combination used on the cluster (tested
'sm,self,vader', 'sm,self,openib', 'sm,self', 'vader,s
Hello,
The error is not 100% reproducible for me every time, but it seems to disappear
entirely if one excludes the rdma osc component or the openib btl component,
i.e. runs with
-mca osc ^rdma
or
-mca btl ^openib
The error is present in 2.0.0 and also 2.0.1rc1.
Best
Christoph Niethammer
- Original Message -
From: "Joseph Schuchart
_CHECK ::"
#define OMPI_FORTRAN_IGNORE_TKR_TYPE
#define OMPI_FORTRAN_HAVE_IGNORE_TKR 1
configure:10267: result: yes (mpif.h, mpi and mpi_f08 modules)
configure:10417: checking which 'use mpi_f08' implementation to use
configure:58804: checking which mpi_f08 implementation to build
' bindings
configure:56608: result: yes
configure:57983: checking if building Fortran 'use mpi_f08' bindings
configure:57990: result: no
With Intel 14 the mpi_f08 module is built correctly.
Any ideas where the problem could come from and how to solve it?
Best regards
Christoph Niethammer
Hello,
Find attached a minimal example - hopefully doing what you intended.
Regards
Christoph
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer
- Original Message -
From: "Pradeep Jha"
To: "
limit" (current value: <4096>, data source: default value)
Maximum size (in bytes) of "short" messages (must be >= 1)
Regards
Christoph Niethammer
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-6
Hello,
I am currently investigating the new nonblocking collectives introduced in
MPI-3, which are implemented in Open MPI 1.7.1. As a first try I took
MPI_Ibcast.
According to the MPI-3 spec, my understanding is that MPI_Ibcast + MPI_Wait
should be equivalent to MPI_Bcast - except that the a
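As a minimal illustration of that equivalence (a sketch with arbitrary buffer contents; only the call pattern matters here):

/* ibcast_vs_bcast.c - sketch: MPI_Ibcast + MPI_Wait used where a
 * blocking MPI_Bcast would otherwise be called. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int data = (rank == 0) ? 42 : -1;

    /* Blocking variant:
     *   MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);
     * Nonblocking variant - independent work may overlap the transfer: */
    MPI_Request req;
    MPI_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);
    /* ... computation not involving 'data' could go here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d got %d\n", rank, data);
    MPI_Finalize();
    return 0;
}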
which won't work because the app is
> >> being direct-launched.
> >>
> >> Alternatively, he could launch using "mpirun" and then it should work
> >> just fine.
> >>
> >> On Wed, Oct 10, 2012 at 7:59 AM, Nathan Hjelm wrote:
/optimized-nopanasas
--prefix=$HOME/bin/mpi/openmpi/1.7a1r27416
I would be very happy if anybody has an idea what I could have missed during
installation or at runtime.
Thanks in advance.
Best regards
Christoph
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse