[OMPI users] catching SIGTERM in Open MPI 5.x

2025-05-07 Thread Christoph Niethammer
Hello, I am trying to implement some graceful application shutdown in case mpirun receives a SIGTERM. With Open MPI 4.x, this works just fine, and SIGTERM is forwarded. With Open MPI 5.x I now struggle as prte seems not to forward SIGTERM by default. If I try to include this in the list of SIGN
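
A minimal sketch of the application side of such a graceful shutdown - a SIGTERM handler that sets a flag the main loop polls (this assumes the launcher does forward the signal to the ranks; all names here are illustrative):

    #include <mpi.h>
    #include <signal.h>
    #include <string.h>

    static volatile sig_atomic_t got_sigterm = 0;

    static void on_sigterm(int sig) { (void)sig; got_sigterm = 1; }

    int main(int argc, char **argv) {
        struct sigaction sa;
        MPI_Init(&argc, &argv);
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigterm;
        sigaction(SIGTERM, &sa, NULL);  /* installed after MPI_Init on purpose */
        while (!got_sigterm) {
            /* ... one iteration of real work per loop pass ... */
        }
        /* checkpoint / flush results here, then shut down cleanly */
        MPI_Finalize();
        return 0;
    }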

Re: [OMPI users] Error using rankfile to bind multiple cores on the same node for threaded OpenMPI application

2022-02-02 Thread Christoph Niethammer via users
The linked pastebin includes the following version information: [1,0]:package:Open MPI spackapps@eu-c7-042-03 Distribution [1,0]:ompi:version:full:4.0.2 [1,0]:ompi:version:repo:v4.0.2 [1,0]:ompi:version:release_date:Oct 07, 2019 [1,0]:orte:version:full:4.0.2 [1,0]:orte:version:repo:v4.0.2 [1,0]:or
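
For readers hitting the same issue: a rankfile that gives each rank several cores on one node looks roughly like this (hostname and socket:core numbers are illustrative; check the mpirun man page of your Open MPI version for the exact syntax):

    rank 0=node001 slot=0:0-1
    rank 1=node001 slot=0:2-3

launched with something like: mpirun -np 2 --rankfile myrankfile ./app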

Re: [OMPI users] weird mpi error report: Type mismatch between arguments

2021-02-17 Thread Christoph Niethammer via users
here. The best way to solve this is to update your application to the new mpi_f08 module. I know this may end up in a lot of work and finding bugs along the way. ;) Best regards Christoph Niethammer

Re: [OMPI users] OMPI 4.0.4 how to use mpirun properly in numa architecture

2020-08-20 Thread Christoph Niethammer via users
Hello Carlo, If you execute multiple mpirun commands they will not know about each other's resource bindings. E.g. if you bind to cores, each mpirun will start assigning from the same first core again. This then results in oversubscription of the cores, which slows down your programs - as you did
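
One way out is to hand each mpirun a disjoint set of cores explicitly, sketched below (core ranges are illustrative, and --cpu-set support varies between Open MPI versions; --bind-to none also avoids the overlap, at the cost of no binding at all):

    mpirun --cpu-set 0-7  --bind-to core -np 8 ./app_a &
    mpirun --cpu-set 8-15 --bind-to core -np 8 ./app_b &
    wait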

Re: [OMPI users] MPI test suite

2020-07-24 Thread Christoph Niethammer via users
Hi, MTT is a testing infrastructure to automate building MPI libraries and tests, running tests, and collecting test results, but it does not come with MPI test suites itself. Best Christoph

Re: [OMPI users] MPI test suite

2020-07-24 Thread Christoph Niethammer via users
Hello, What do you want to test in detail? If you are interested in testing combinations of datatypes and communicators, the mpi_test_suite [1] may be of interest for you. Best Christoph Niethammer [1] https://projects.hlrs.de/projects/mpitestsuite/

Re: [OMPI users] Using strace with Open MPI on Cray

2019-04-02 Thread Christoph Niethammer
local file (mpirun wrapper.sh, in which wrapper.sh sets the output file based on $PMIX_RANK or $$, and then exec strace ... Cheers, Gilles
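
A sketch of the wrapper Gilles describes (the strace options are illustrative):

    #!/bin/sh
    # wrapper.sh - one strace log per rank, named by PMIX_RANK if set, else by PID
    out="strace.${PMIX_RANK:-$$}.log"
    exec strace -o "$out" "$@"

used as: mpirun -np 4 ./wrapper.sh ./a.out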

Re: [OMPI users] Using strace with Open MPI on Cray

2019-03-31 Thread Christoph Niethammer
wrapper.sh sets the output file based on $PMIX_RANK or $$, and then exec strace ... Cheers, Gilles On Sat, Mar 30, 2019 at 6:29 PM Christoph Niethammer wrote: Hello, I was trying to investigate some processes with strace under Open MPI. However I have some issues when MPI I/O

Re: [OMPI users] Using strace with Open MPI on Cray

2019-03-30 Thread Christoph Niethammer
which help me to understand what is going on. Best Christoph

[OMPI users] Using strace with Open MPI on Cray

2019-03-30 Thread Christoph Niethammer
" calls. However, the program works fine without strace. I tried with Open MPI 3.x and 4.0.1 switching between ompi and romio on different operating systems (CentOS 7.6, SLES 12). I'd appreciate any hints which help me to understand what is going on. Best Christoph

Re: [OMPI users] Output redirection: missing output from all but one node

2018-02-09 Thread Christoph Niethammer
this happens and/or how to debug this? In case this helps, the NFS mount flags are: (rw,nosuid,nodev,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<addr>,mountvers=3,mountport=<port>,mountproto=udp,local_lock=none,addr=<addr>)

Re: [OMPI users] built-in memchecker support

2017-08-24 Thread Christoph Niethammer
than valgrind. Best Christoph Niethammer [1] http://www.springer.com/cda/content/document/cda_downloaddocument/9783642373480-c1.pdf?SGWID=0-0-45-1397615-p175067491

[OMPI users] MIMD execution with global "--map-by node"

2017-08-21 Thread Christoph Niethammer
n" according to the mpirun man page and how I can achieve the desired behaviour with Open MPI. Thanks for your help. Best Christoph Niethammer ___ users mailing list users@lists.open-mpi.org https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] MPI I/O gives undefined behavior if the amount of bytes described by a filetype reaches 2^32

2017-04-28 Thread Christoph Niethammer
Hello, Which MPI version are you using? This looks to me like it triggers https://github.com/open-mpi/ompi/issues/2399 You can check if you are running into this problem by playing around with the mca_io_ompio_cycle_buffer_size parameter. Best Christoph Niethammer
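
I.e. something along these lines (the value is illustrative):

    mpirun --mca io_ompio_cycle_buffer_size 536870912 -np 4 ./a.out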

Re: [OMPI users] Shared Windows and MPI_Accumulate

2017-03-06 Thread Christoph Niethammer
Hi, The behaviour is reproducible on our systems: * Linux Cluster (Intel Xeon E5-2660 v3, Scientific Linux release 6.8 (Carbon), Kernel 2.6.32, nightly 2.x branch) The error is independent of the btl combination used on the cluster (tested 'sm,self,vader', 'sm,self,openib', 'sm,self', 'vader,s

Re: [OMPI users] Regression: multiple memory regions in dynamic windows

2016-08-25 Thread Christoph Niethammer
Hello, The error is not 100% reproducible for me every time, but it seems to disappear entirely if one excludes the rdma osc or the openib btl component via -mca osc ^rdma or -mca btl ^openib. The error is present in 2.0.0 and also in 2.0.1rc1. Best Christoph Niethammer
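
Spelled out, the two workarounds from above (the caret excludes the named component):

    mpirun --mca osc ^rdma   -np 2 ./a.out
    mpirun --mca btl ^openib -np 2 ./a.out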

Re: [OMPI users] GCC 4.9 and MPI_F08?

2014-08-14 Thread Christoph Niethammer
_CHECK ::" #define OMPI_FORTRAN_IGNORE_TKR_TYPE #define OMPI_FORTRAN_HAVE_IGNORE_TKR 1 configure:10267: result: yes (mpif.h, mpi and mpi_f08 modules) configure:10417: checking which 'use mpi_f08' implementation to use configure:58804: checking which mpi_f08 implementation to build

[OMPI users] GNU 4.8.x and (no) mpi_f08 module

2014-02-06 Thread Christoph Niethammer
' bindings configure:56608: result: yes configure:57983: checking if building Fortran 'use mpi_f08' bindings configure:57990: result: no With Intel 14 the mpi_f08 module is built correctly. Any ideas where the problem could come from and how to solve it? Best regards Christoph Niethammer

[OMPI users] Open MPI and multiple Torque versions

2014-01-27 Thread Christoph Niethammer
Christoph Niethammer

Re: [OMPI users] Calling a variable from another processor

2014-01-16 Thread Christoph Niethammer
Hello, Find attached a minimal example - hopefully doing what you intended. Regards Christoph
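
The attachment is not part of the archive preview; a minimal sketch of the usual pattern - rank 1 owns a variable, rank 0 fetches it - might look like this (C shown here; the original attachment may well have been Fortran):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 1) {
            value = 42;  /* the variable owned by rank 1 */
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank 1\n", value);
        }
        MPI_Finalize();
        return 0;
    }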

Re: [OMPI users] Calling a variable from another processor

2014-01-09 Thread Christoph Niethammer
. Regards Christoph

Re: [OMPI users] Configuration for rendezvous and eager protocols: two-sided comm

2013-12-16 Thread Christoph Niethammer
limit" (current value: <4096>, data source: default value) Maximum size (in bytes) of "short" messages (must be >= 1) Regards Christoph Niethammer -- Christoph Niethammer High Performance Computing Center Stuttgart (HLRS) Nobelstrasse 19 70569 Stuttgart Tel: ++49(0)711-6

[OMPI users] Open MPI 1.7.1 and nonblocking bcast questions

2013-04-24 Thread Christoph Niethammer
Hello, Currently I am investigating the new nonblocking collectives introduced in MPI-3, which are implemented in Open MPI 1.7.1. As a first try I took MPI_Ibcast. According to the MPI-3 spec my understanding is that MPI_Ibcast + MPI_Wait should be equivalent to an MPI_Bcast - except that the a
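
The equivalence being tested, as a minimal sketch:

    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, buf[4] = {0};
        MPI_Request req;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) buf[0] = 42;  /* data to broadcast from the root */
        /* nonblocking broadcast from root 0, completed right away ... */
        MPI_Ibcast(buf, 4, MPI_INT, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        /* ... should behave like MPI_Bcast(buf, 4, MPI_INT, 0, MPI_COMM_WORLD) */
        MPI_Finalize();
        return 0;
    }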

Re: [OMPI users] Open MPI on Cray XE6 / Gemini

2012-10-17 Thread Christoph Niethammer
which won't work because the app is being direct-launched. Alternatively, he could launch using "mpirun" and then it should work just fine.

[OMPI users] Open MPI on Cray XE6 / Gemini

2012-10-10 Thread Christoph Niethammer
/optimized-nopanasas --prefix=$HOME/bin/mpi/openmpi/1.7a1r27416 I would be very happy if anybody has an idea what I could have missed during installation/runtime. Thanks in advance. Best regards Christoph