Re: [OMPI users] Regarding the usage of MPI-One sided communications in HPC applications

2025-01-17 Thread 'Joseph Schuchart' via Open MPI users
Hi Arun, The strength of RMA (low synchronization overhead) is also its main weakness (lack of synchronization). It's easy to move data between processes but hard to get the synchronization right so that processes read the right data. RMA has yet to find a good solution to the synchronization

Re: [OMPI users] Seg error when using v5.0.1

2024-01-30 Thread Joseph Schuchart via users
Hello, This looks like memory corruption. Do you have more details on what your app is doing? I don't see any MPI calls inside the call stack. Could you rebuild Open MPI with debug information enabled (by adding `--enable-debug` to configure)? If this error occurs on singleton runs (1 process

Re: [OMPI users] A make error when build openmpi-5.0.0 using the gcc 14.0.0 (experimental) compiler

2023-12-19 Thread Joseph Schuchart via users
Thanks for the report Jorge! I opened a ticket to track the build issues with GCC-14: https://github.com/open-mpi/ompi/issues/12169 Hopefully we will have Open MPI build with GCC-14 before it is released. Cheers, Joseph On 12/17/23 06:03, Jorge D'Elia via users wrote: Hi there, I already ove

Re: [OMPI users] MPI_Get is slow with structs containing padding

2023-03-30 Thread Joseph Schuchart via users
Hi Antoine, That's an interesting result. I believe the problem with datatypes with gaps is that MPI is not allowed to touch the gaps. My guess is that for the RMA version of the benchmark the implementation either has to revert back to an active message packing the data at the target and send
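For illustration, a minimal sketch (in C) of the kind of "datatype with gaps" under discussion; the struct layout and names are assumptions, not the benchmark from this thread:

```c
#include <mpi.h>
#include <stddef.h>

/* Illustration only: a struct whose int member is followed by compiler-inserted
 * padding before the double, i.e. a datatype with a gap. */
typedef struct {
    int    key;      /* 4 bytes, typically followed by 4 bytes of padding */
    double value;    /* 8 bytes */
} item_t;

/* Build a matching MPI datatype; the resize sets its extent to sizeof(item_t),
 * but the padding bytes remain a gap that MPI must not touch. */
static MPI_Datatype make_item_type(void) {
    int          blocklens[2] = {1, 1};
    MPI_Aint     displs[2]    = {offsetof(item_t, key), offsetof(item_t, value)};
    MPI_Datatype types[2]     = {MPI_INT, MPI_DOUBLE};
    MPI_Datatype tmp, item_type;

    MPI_Type_create_struct(2, blocklens, displs, types, &tmp);
    MPI_Type_create_resized(tmp, 0, sizeof(item_t), &item_type);
    MPI_Type_commit(&item_type);
    MPI_Type_free(&tmp);
    return item_type;
}
```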

Re: [OMPI users] Tracing of openmpi internal functions

2022-11-16 Thread Joseph Schuchart via users
Arun, You can use a small wrapper script like this one to store the perf data in separate files:
```
$ cat perfwrap.sh
#!/bin/bash
exec perf record -o perf.data.$OMPI_COMM_WORLD_RANK $@
```
Then do `mpirun -n <nprocs> ./perfwrap.sh ./a.out` to run all processes under perf. You can also select a subs

Re: [OMPI users] MPI_THREAD_MULTIPLE question

2022-09-10 Thread Joseph Schuchart via users
Timesir, It sounds like you're using the 4.0.x or 4.1.x release. The one-sided components were cleaned up in the upcoming 5.0.x release and the component in question (osc/pt2pt) was removed. You could also try to compile Open MPI 4.0.x/4.1.x against UCX and use osc/ucx (by passing `--mca osc

[OMPI users] 1st Future of MPI RMA Workshop: Call for Short Talks and Participation

2022-05-29 Thread Joseph Schuchart via users
lZ6r-DZww. All participation is free. *Organizing committee* Joseph Schuchart, University of Tennessee, Knoxville James Dinan, Nvidia Inc. Bill Gropp, University of Illinois Urbana Champaign

Re: [OMPI users] Check equality of a value in all MPI ranks

2022-02-17 Thread Joseph Schuchart via users
Hi Niranda, A pattern I have seen in several places is to allreduce the pair p = {-x,x} with MPI_MIN or MPI_MAX. If in the resulting pair p[0] == -p[1], then everyone has the same value. If not, at least one rank had a different value. Example: ``` bool is_same(int x) {   int p[2];   p[0] =
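A hedged completion of the truncated snippet above; the communicator (MPI_COMM_WORLD) is an assumption:

```c
#include <mpi.h>
#include <stdbool.h>

/* Pattern described above: allreduce the pair {-x, x} with MPI_MAX. Afterwards
 * p[0] == -min(x) and p[1] == max(x) over all ranks, so p[0] == -p[1] holds
 * exactly when every rank passed the same value. */
bool is_same(int x) {
    int p[2];
    p[0] = -x;
    p[1] = x;
    MPI_Allreduce(MPI_IN_PLACE, p, 2, MPI_INT, MPI_MAX, MPI_COMM_WORLD);
    return p[0] == -p[1];
}
```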

Re: [OMPI users] Using OSU benchmarks for checking Infiniband network

2022-02-11 Thread Joseph Schuchart via users
Analysis and Profiling Tool is provided: OSU-INAM Is there something equivalent using openMPI? Best Denis

Re: [OMPI users] Using OSU benchmarks for checking Infiniband network

2022-02-08 Thread Joseph Schuchart via users
Hi Denis, Sorry if I missed it in your previous messages but could you also try running a different MPI implementation (MVAPICH) to see whether Open MPI is at fault or the system is somehow to blame for it? Thanks Joseph On 2/8/22 03:06, Bertini, Denis Dr. via users wrote: Hi Thanks for a

Re: [OMPI users] OpenMPI and maker - Multiple messages

2021-02-18 Thread Joseph Schuchart via users
Thomas, The post you are referencing suggests running mpiexec -mca btl ^openib -n 40 maker -help but you are running mpiexec -mca btl ^openib -N 5 gcc --version which will run 5 instances of GCC. The output you're seeing is totally to be expected. I don't think anyone here can help you wit

Re: [OMPI users] Issue with MPI_Get_processor_name() in Cygwin

2021-02-09 Thread Joseph Schuchart via users
Martin, The name argument to MPI_Get_processor_name is a character string of length at least MPI_MAX_PROCESSOR_NAME, which in OMPI is 256. You are providing a character string of length 200, so OMPI is free to write past the end of your string and into some of your stack variables, hence you
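A minimal sketch of the correct usage described above, sizing the buffer with MPI_MAX_PROCESSOR_NAME:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    /* MPI_MAX_PROCESSOR_NAME includes room for the terminating '\0',
     * so a buffer of this size is always large enough. */
    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);
    printf("running on %s (name length %d)\n", name, len);
    MPI_Finalize();
    return 0;
}
```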

Re: [OMPI users] mpirun on Kubuntu 20.4.1 hangs

2020-10-22 Thread Joseph Schuchart via users
Hi Jorge, Can you try to get a stack trace of mpirun using the following command in a separate terminal? sudo gdb -batch -ex "thread apply all bt" -p $(ps -C mpirun -o pid= | head -n 1) Maybe that will give some insight where mpirun is hanging. Cheers, Joseph On 10/21/20 9:58 PM, Jorge SI

Re: [OMPI users] Limiting IP addresses used by OpenMPI

2020-09-01 Thread Joseph Schuchart via users
Charles, What is the machine configuration you're running on? It seems that there are two MCA parameter for the tcp btl: btl_tcp_if_include and btl_tcp_if_exclude (see ompi_info for details). There may be other knobs I'm not aware of. If you're using UCX then my guess is that UCX has its own

Re: [OMPI users] Is the mpi.3 manpage out of date?

2020-08-31 Thread Joseph Schuchart via users
Andy, Thanks for pointing this out. We have merged a fix that corrects that stale comment in master :) Cheers Joseph On 8/25/20 8:36 PM, Riebs, Andy via users wrote: In searching to confirm my belief that recent versions of Open MPI support the MPI-3.1 standard, I was a bit surprised to fi

Re: [OMPI users] Silent hangs with MPI_Ssend and MPI_Irecv

2020-07-25 Thread Joseph Schuchart via users
Hi Sean, Thanks for the report! I have a few questions/suggestions: 1) What version of Open MPI are you using? 2) What is your network? It sounds like you are on an IB cluster using btl/openib (which is essentially discontinued). Can you try the Open MPI 4.0.4 release with UCX instead of openi

Re: [OMPI users] MPI test suite

2020-07-24 Thread Joseph Schuchart via users
You may want to look into MTT: https://github.com/open-mpi/mtt Cheers Joseph On 7/23/20 8:28 PM, Zhang, Junchao via users wrote: Hello,   Does OMPI have a test suite that can let me validate MPI implementations from other vendors?   Thanks --Junchao Zhang

Re: [OMPI users] Vader - Where to Look for Shared Memory Use

2020-07-22 Thread Joseph Schuchart via users
Hi John, Depending on your platform the default behavior of Open MPI is to mmap a shared backing file that is either located in a session directory under /dev/shm or under $TMPDIR (I believe under Linux it is /dev/shm). You will find a set of files there that are used to back shared memory. Th

Re: [OMPI users] Coordinating (non-overlapping) local stores with remote puts form using passive RMA synchronization

2020-06-02 Thread Joseph Schuchart via users
Hi Stephen, Let me try to answer your questions inline (I don't have extensive experience with the separate model and from my experience most implementations support the unified model, with some exceptions): On 5/31/20 1:31 AM, Stephen Guzik via users wrote: Hi, I'm trying to get a better u
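As a side note to the unified/separate discussion, a small sketch for querying which model a window provides via the standard MPI_WIN_MODEL attribute; the helper name is made up:

```c
#include <mpi.h>
#include <stdio.h>

/* Query whether a window uses the unified or separate memory model.
 * The attribute value is a pointer to an int set by the implementation. */
static void print_win_model(MPI_Win win) {
    int *model = NULL;
    int flag = 0;
    MPI_Win_get_attr(win, MPI_WIN_MODEL, &model, &flag);
    if (flag && *model == MPI_WIN_UNIFIED)
        printf("unified memory model\n");
    else if (flag && *model == MPI_WIN_SEPARATE)
        printf("separate memory model\n");
    else
        printf("model attribute not set\n");
}
```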

Re: [OMPI users] RMA in openmpi

2020-04-27 Thread Joseph Schuchart via users
e saying, but I just wanted to confirm. Thanks again Claire On 27/04/2020, 07:50, "Joseph Schuchart via users" wrote: Claire, > Is it possible to use the one-sided communication without combining it with synchronization calls? What exactly do y

Re: [OMPI users] RMA in openmpi

2020-04-26 Thread Joseph Schuchart via users
Claire, > Is it possible to use the one-sided communication without combining it with synchronization calls? What exactly do you mean by "synchronization calls"? MPI_Win_fence is indeed synchronizing (basically flush+barrier) but MPI_Win_lock (and the passive target synchronization interface
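For illustration, a hedged sketch of passive-target synchronization (lock/flush/unlock) with no MPI_Win_fence; the values and layout are made up, and it needs at least two ranks:

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: rank 0 puts a value into rank 1's window using only passive-target
 * synchronization, no MPI_Win_fence. Run with >= 2 ranks. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *base;
    MPI_Win win;
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    *base = -1;
    MPI_Barrier(MPI_COMM_WORLD);              /* window contents initialized everywhere */

    if (rank == 0) {
        int value = 42;
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_flush(1, win);                /* put complete at origin and target */
        MPI_Win_unlock(1, win);
    }
    MPI_Barrier(MPI_COMM_WORLD);              /* order rank 1's read after the put */

    if (rank == 1) {
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        printf("rank 1 received %d\n", *base);
        MPI_Win_unlock(1, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```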

[OMPI users] Question about UCX progress throttling

2020-02-07 Thread Joseph Schuchart via users
safe to always set them to 1? Thanks Joseph

Re: [OMPI users] mpirun --output-filename behavior

2019-11-01 Thread Joseph Schuchart via users
tderr, but unfortunately also in the resulting output file. Where does Open MPI intercept stdout/stderr to tee it to the output file? Cheers Joseph Cheers, Gilles On 11/1/2019 7:43 AM, Joseph Schuchart via users wrote: On 10/30/19 2:06 AM, Jeff Squyres (jsquyres) via users wrote: Oh,

Re: [OMPI users] mpirun --output-filename behavior

2019-10-31 Thread Joseph Schuchart via users
On 10/30/19 2:06 AM, Jeff Squyres (jsquyres) via users wrote: Oh, did the prior behavior *only* output to the file and not to stdout/stderr?  Huh. I guess a workaround for that would be:     mpirun  ... > /dev/null Just to throw in my $0.02: I recently found that the output to stdout/std

[OMPI users] CPC only supported when the first QP is a PP QP?

2019-08-05 Thread Joseph Schuchart via users
node GUID -- mlx5_0 08003800013c773b ``` Any help would be much appreciated! Thanks, Joseph

Re: [OMPI users] growing memory use from MPI application

2019-06-20 Thread Joseph Schuchart via users
Noam, Another idea: check for stale files in /dev/shm/ (or a subdirectory that looks like it belongs to UCX/OpenMPI) and SysV shared memory using `ipcs -m`. Joseph On 6/20/19 3:31 PM, Noam Bernstein via users wrote: On Jun 20, 2019, at 4:44 AM, Charles A Taylor > w

Re: [OMPI users] Latencies of atomic operations on high-performance networks

2019-05-09 Thread Joseph Schuchart via users
both node types. Joseph [1] https://github.com/open-mpi/ompi/issues/6536 On 5/9/19 9:10 AM, Benson Muite via users wrote: Hi, Have you tried anything with OpenMPI 4.0.1? What are the specifications of the Infiniband system you are using? Benson On 5/9/19 9:37 AM, Joseph Schuchart via users

Re: [OMPI users] Latencies of atomic operations on high-performance networks

2019-05-08 Thread Joseph Schuchart via users
on't help you with IB if you are using UCX unless you set this (master only right now): btl_uct_transports=dc_mlx5 btl=self,vader,uct osc=^ucx Though there may be a way to get osc/ucx to enable the same sort of optimization. I don't know. -Nathan On Nov 06, 2018, at 09:38 AM,

Re: [OMPI users] Cannot catch std::bad_alloc?

2019-04-03 Thread Joseph Schuchart
Zhen, The "problem" you're running into is memory overcommit [1]. The system will happily hand you a pointer to memory upon calling malloc without actually allocating the pages (that's the first step in std::vector::resize) and then terminate your application as soon as it tries to actually a
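A hedged illustration of the overcommit behaviour described above, assuming 64-bit Linux with default overcommit settings; the 1 TiB size is arbitrary:

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* malloc() of a huge region typically succeeds immediately because pages are
 * only reserved virtually; the OOM killer may terminate the process later,
 * when the pages are first touched. */
int main(void) {
    size_t sz = (size_t)1 << 40;          /* 1 TiB of virtual memory */
    char *p = malloc(sz);
    if (p == NULL) { perror("malloc"); return 1; }
    printf("allocation 'succeeded'\n");
    memset(p, 0, sz);                     /* touching the pages is what actually commits them */
    free(p);
    return 0;
}
```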

Re: [OMPI users] Latencies of atomic operations on high-performance networks

2018-11-08 Thread Joseph Schuchart
you using? -Nathan On Nov 08, 2018, at 11:10 AM, Joseph Schuchart wrote: While using the mca parameter in a real application I noticed a strange effect, which took me a while to figure out: It appears that on the Aries network the accumulate operations are not atomic anymore. I am attaching a

Re: [OMPI users] Latencies of atomic operations on high-performance networks

2018-11-08 Thread Joseph Schuchart
that I am missing? Thanks in advance, Joseph On 11/6/18 1:15 PM, Joseph Schuchart wrote: Thanks a lot for the quick reply, setting osc_rdma_acc_single_intrinsic=true does the trick for both shared and exclusive locks and brings it down to <2us per operation. I hope that the info key will
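Relatedly, a hedged sketch of the standard-MPI way to state such a promise at window creation, via the predefined "accumulate_ops" info key; this is not the Open MPI MCA parameter or info key discussed in this thread, and the helper name is made up:

```c
#include <mpi.h>

/* Create a window while promising (via "accumulate_ops") that accumulates use
 * only a single op or MPI_NO_OP, which can allow the implementation to use
 * native hardware atomics. */
static MPI_Win allocate_atomic_win(MPI_Aint size, MPI_Comm comm, void **base) {
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "accumulate_ops", "same_op_no_op");

    MPI_Win win;
    MPI_Win_allocate(size, /*disp_unit=*/1, info, comm, base, &win);
    MPI_Info_free(&info);
    return win;
}
```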

Re: [OMPI users] Latencies of atomic operations on high-performance networks

2018-11-06 Thread Joseph Schuchart
nly right now): btl_uct_transports=dc_mlx5 btl=self,vader,uct osc=^ucx Though there may be a way to get osc/ucx to enable the same sort of optimization. I don't know. -Nathan On Nov 06, 2018, at 09:38 AM, Joseph Schuchart wrote: All, I am currently experimenting with MPI atomic operat

[OMPI users] Latencies of atomic operations on high-performance networks

2018-11-06 Thread Joseph Schuchart
ecause the hardware operations are not actually atomic? I'd be grateful for any insight on this. Cheers, Joseph

[OMPI users] pt2pt osc required for single-node runs?

2018-09-06 Thread Joseph Schuchart
All, I installed Open MPI 3.1.2 on my laptop today (up from 3.0.0, which worked fine) and ran into the following error when trying to create a window: ``` -- The OSC pt2pt component does not support MPI_THREAD_MULTIPLE in

Re: [OMPI users] MPI Windows: performance of local memory access

2018-05-24 Thread Joseph Schuchart
/5193 -Nathan On May 24, 2018, at 7:09 AM, Nathan Hjelm wrote: Ok, thanks for testing that. I will open a PR for master changing the default backing location to /dev/shm on linux. Will be PR’d to v3.0.x and v3.1.x. -Nathan On May 24, 2018, at 6:46 AM, Joseph Schuchart wrote: Thank you all for

Re: [OMPI users] MPI Windows: performance of local memory access

2018-05-24 Thread Joseph Schuchart
04 PM, Nathan Hjelm wrote: What Open MPI version are you using? Does this happen when you run on a single node or multiple nodes? -Nathan On May 23, 2018, at 4:45 AM, Joseph Schuchart wrote: All, We are observing some strange/interesting performance issues in accessing memory that has been all

Re: [OMPI users] MPI Windows: performance of local memory access

2018-05-23 Thread Joseph Schuchart
this happen when you run on a single node or multiple nodes? -Nathan On May 23, 2018, at 4:45 AM, Joseph Schuchart wrote: All, We are observing some strange/interesting performance issues in accessing memory that has been allocated through MPI_Win_allocate. I am attaching our test case,

[OMPI users] MPI Windows: performance of local memory access

2018-05-23 Thread Joseph Schuchart
cket: mpirun -n 12 --bind-to socket --map-by ppr:12:socket - 24 cores / 2 sockets: mpirun -n 24 --bind-to socket and verified the binding using --report-bindings. Any help or comment would be much appreciated. Cheers Joseph

Re: [OMPI users] MPI-3 RMA on Cray XC40

2018-05-17 Thread Joseph Schuchart
v3.1.x but will be in v4.0.0. -Nathan On May 9, 2018, at 1:26 AM, Joseph Schuchart wrote: Nathan, Thank you, I can confirm that it works as expected with master on our system. I will stick to this version then until 3.1.1 is out. Joseph On 05/08/2018 05:34 PM, Nathan Hjelm wrote: Looks l

Re: [OMPI users] MPI-3 RMA on Cray XC40

2018-05-11 Thread Joseph Schuchart
ate with master. I will also be bringing some code that greatly improves the multi-threaded RMA performance on Aries systems (at least with benchmarks— github.com/hpc/rma-mt). That will not make it into v3.1.x but will be in v4.0.0. -Nathan On May 9, 2018, at 1:26 AM, Joseph Schuchart wrote:

Re: [OMPI users] MPI-3 RMA on Cray XC40

2018-05-09 Thread Joseph Schuchart
is to bring all the master changes into v3.1.1. This includes a number of bug fixes. -Nathan On May 08, 2018, at 08:25 AM, Joseph Schuchart wrote: Nathan, Thanks for looking into that. My test program is attached. Best Joseph On 05/08/2018 02:56 PM, Nathan Hjelm wrote: I will take a look

Re: [OMPI users] MPI-3 RMA on Cray XC40

2018-05-08 Thread Joseph Schuchart
Nathan, Thanks for looking into that. My test program is attached. Best Joseph On 05/08/2018 02:56 PM, Nathan Hjelm wrote: I will take a look today. Can you send me your test program? -Nathan On May 8, 2018, at 2:49 AM, Joseph Schuchart wrote: All, I have been experimenting with using

Re: [OMPI users] User-built OpenMPI 3.0.1 segfaults when storing into an atomic 128-bit variable

2018-05-03 Thread Joseph Schuchart
cess rank 0 with PID 0 on node kamenice exited on signal 9 (Killed).

[OMPI users] Window memory alignment not suitable for long double

2018-03-09 Thread Joseph Schuchart
[1] https://www.mail-archive.com/users@lists.open-mpi.org/msg30621.html

Re: [OMPI users] Output redirection: missing output from all but one node

2018-02-13 Thread Joseph Schuchart
Joseph Schuchart wrote: All, I am trying to debug my MPI application using good ol' printf and I am running into an issue with Open MPI's output redirection (using --output-filename). The system I'm running on is an IB cluster with the home directory mounted through NFS. 1) Some

[OMPI users] Output redirection: missing output from all but one node

2018-02-09 Thread Joseph Schuchart
e commands with MPICH, which gives me the expected output for all processes on all nodes. Any help would be much appreciated! Cheers, Joseph

Re: [OMPI users] Progress issue with dynamic windows

2017-11-01 Thread Joseph Schuchart
9:49 PM, Joseph Schuchart wrote: All, I came across what I consider another issue regarding progress in Open MPI: consider one process (P1) polling locally on a regular window (W1) for a local value to change (using MPI_Win_lock+MPI_Get+MPI_Win_unlock) while a second process (P2) tries to read

[OMPI users] Progress issue with dynamic windows

2017-11-01 Thread Joseph Schuchart
(Linux Mint 18.2, gcc 5.4.1, Linux 4.10.0-38-generic). Many thanks in advance! Joseph

Re: [OMPI users] Issues with Large Window Allocations

2017-09-09 Thread Joseph Schuchart
is has been tedious and the user in general shouldn't have to care about how shared memory is allocated (and administrators don't always seem to care, see above). Any feedback is highly appreciated. Joseph

Re: [OMPI users] Issues with Large Window Allocations

2017-09-08 Thread Joseph Schuchart
ious and the user in general shouldn't have to care about how shared memory is allocated (and administrators don't always seem to care, see above). Any feedback is highly appreciated. Joseph On 09/04/2017 03:13 PM, Joseph Schuchart wrote: Jeff, all, Unfortunately, I (as a user) have no

Re: [OMPI users] Issues with Large Window Allocations

2017-09-04 Thread Joseph Schuchart
children) in the case of Open MPI, MPI tasks are siblings, so this is not an option. You are right, it doesn't work the way I expected. Should have tested it before :) Best Joseph Cheers, Gilles On Mon, Sep 4, 2017 at 10:13 PM, Joseph Schuchart wrote: Jeff, all, Unfortunately, I (as a

Re: [OMPI users] Issues with Large Window Allocations

2017-09-04 Thread Joseph Schuchart
hugetlbfs working. Jeff On Tue, Aug 29, 2017 at 6:15 AM, Joseph Schuchart <schuch...@hlrs.de> wrote: Jeff, all, Thanks for the clarification. My measurements show that global memory allocations do not require the backing file if there is only one process per

Re: [OMPI users] Issues with Large Window Allocations

2017-08-29 Thread Joseph Schuchart
ingle-process job because MPI_Win_allocate_shared(MPI_COMM_SELF) ~= MPI_Alloc_mem(). However, it would help debugging if MPI implementers at least had an option to take the code path that allocates shared memory even when np=1. Jeff On Thu, Aug 24, 2017 at 7:41 AM, Joseph Schuchart <mailt

Re: [OMPI users] Issues with Large Window Allocations

2017-08-24 Thread Joseph Schuchart
karound, i suggest you use this as the shared-memory backing directory /* i am afk and do not remember the syntax, ompi_info --all | grep backing is likely to help */ Cheers, Gilles On Thu, Aug 24, 2017 at 10:31 PM, Joseph Schuchart wrote: All, I have been experimenting with large window a

[OMPI users] Issues with Large Window Allocations

2017-08-24 Thread Joseph Schuchart
provide additional details if needed. Best Joseph

[OMPI users] Remote progress in MPI_Win_flush_local

2017-06-23 Thread Joseph Schuchart
All, We employ the following pattern to send signals between processes: ``` int com_rank, root = 0; // allocate MPI window MPI_Win win = allocate_win(); // do some computation ... // Process 0 waits for a signal if (com_rank == root) { do { MPI_Fetch_and_op(NULL, &res, MPI_INT, com_r
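A hedged, self-contained completion of the truncated pattern above. The window setup, the MPI_NO_OP poll, and the MPI_SUM increment on the signalling side are assumptions based on the description, not the original code; run with at least two ranks:

```c
#include <mpi.h>
#include <stdio.h>

/* Rank 0 polls an integer in its own window until another rank atomically
 * increments it. Run with >= 2 ranks. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, root = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *flag;
    MPI_Win win;
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &flag, &win);
    *flag = 0;
    MPI_Win_lock_all(0, win);
    MPI_Barrier(MPI_COMM_WORLD);              /* everyone sees the zero-initialized flag */

    if (rank == root) {
        int res = 0;
        do {                                  /* atomically read the local flag */
            MPI_Fetch_and_op(NULL, &res, MPI_INT, root, 0, MPI_NO_OP, win);
            MPI_Win_flush(root, win);
        } while (res == 0);
        printf("signal received\n");
    } else if (rank == 1) {
        int one = 1, res;
        MPI_Fetch_and_op(&one, &res, MPI_INT, root, 0, MPI_SUM, win);
        MPI_Win_flush(root, win);             /* remote completion; flush_local would only
                                               * guarantee local completion of the operation */
    }

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```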

Re: [OMPI users] MPI_Win_allocate: Memory alignment

2017-05-17 Thread Joseph Schuchart
01 PM, Joseph Schuchart wrote: Hi, We have been experiencing strange crashes in our application that mostly works on memory allocated through MPI_Win_allocate and MPI_Win_allocate_shared. We eventually realized that the application crashes if it is compiled with -O3 or -Ofast and run with an odd numb

Re: [OMPI users] [Open MPI Announce] Open MPI v2.1.1 released

2017-05-10 Thread Joseph Schuchart
is relying on it or stumbling across that. Verified using 2.1.1 just now. Cheers Joseph P.S.: The reply-to field in the announcement email states us...@open-mpi.org where it should be users@lists.open-mpi.org :)

Re: [OMPI users] Shared Windows and MPI_Accumulate

2017-03-09 Thread Joseph Schuchart
lock (see above) as well. Hope this helps. Regards, Steffen On 03/01/2017 04:03 PM, Joseph Schuchart wrote: Hi all, We are seeing issues in one of our applications, in which processes in a shared communicator allocate a shared MPI window and execute MPI_Accumulate simultaneously on it to iterativ

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-03-08 Thread Joseph Schuchart
t I doubt MPICH errors on this code.). Jeff On Mon, Mar 6, 2017 at 8:30 AM, Joseph Schuchart <schuch...@hlrs.de> wrote: Ping :) I would really appreciate any input on my question below. I crawled through the standard but cannot seem to find the wording

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-03-06 Thread Joseph Schuchart
us. Best regards, Joseph On 02/20/2017 09:23 AM, Joseph Schuchart wrote: Nathan, Thanks for your clarification. Just so that I understand where my misunderstanding of this matter comes from: can you please point me to the place in the standard that prohibits thread-concurrent window

[OMPI users] Shared Windows and MPI_Accumulate

2017-03-01 Thread Joseph Schuchart
hread-multiple and --with-threads but the application is not multi-threaded. Please let me know if you need any other information. Cheers Joseph

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-02-20 Thread Joseph Schuchart
(target) or MPI_Win_flush_all(). If your program is doing that it is not a valid MPI program. If you want to ensure a particular put operation is complete try MPI_Rput instead. -Nathan On Feb 19, 2017, at 2:34 PM, Joseph Schuchart wrote: All, We are trying to combine MPI_Put and MPI_Win_flush
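A small sketch of the MPI_Rput alternative suggested above; it assumes the caller already holds a passive-target lock on the window (e.g. via MPI_Win_lock_all), and the helper name is made up:

```c
#include <mpi.h>

/* Request-based put: each thread can complete its own operation with MPI_Wait
 * instead of a thread-concurrent MPI_Win_flush. MPI_Wait indicates local
 * completion only; remote completion still requires a flush or unlock. */
static void put_one_int(int value, int target, MPI_Aint disp, MPI_Win win) {
    MPI_Request req;
    MPI_Rput(&value, 1, MPI_INT, target, disp, 1, MPI_INT, win, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
```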

[OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-02-19 Thread Joseph Schuchart
if this is a valid use case and whether I can provide you with additional information if required. Many thanks in advance! Cheers Joseph

Re: [OMPI users] OMPI users] MPI_THREAD_MULTIPLE: Fatal error on MPI_Win_create

2017-02-19 Thread Joseph Schuchart
oseph, Would you mind trying again with export OMPI_MCA_osc=^pt2pt export OMPI_MCA_osc_base_verbose=10 If it still does not work, then please post the output Cheers, Gilles Joseph Schuchart wrote: Hi Howard, Thanks for your quick reply and your suggestions. I exported both variables as you sugg

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error on MPI_Win_create

2017-02-19 Thread Joseph Schuchart
ut in to this OMPI release because this part of the code has known problems when used multi-threaded. Joseph Schuchart <schuch...@hlrs.de> wrote on Sat, Feb 18, 2017 at 04:02: All, I am seeing a fatal error with OpenMPI 2.0.2 if requesting support for MPI_THREAD_M

[OMPI users] MPI_THREAD_MULTIPLE: Fatal error on MPI_Win_create

2017-02-18 Thread Joseph Schuchart
able-mpi-thread-multiple and --prefix configure parameters. I am attaching the output of ompi_info. Please let me know if you need any additional information. Cheers, Joseph

Re: [OMPI users] MPI_Win_allocate: Memory alignment

2017-02-15 Thread Joseph Schuchart

[OMPI users] MPI_Win_allocate: Memory alignment

2017-02-14 Thread Joseph Schuchart
1.10.5 and 2.0.2. I also tested with MPICH, which provides correct alignment. Cheers, Joseph

Re: [OMPI users] Valgrind errors related to MPI_Win_allocate_shared

2016-11-21 Thread Joseph Schuchart
n-mpi/ompi/pull/2418.patch Cheers, Gilles On Tue, Nov 15, 2016 at 1:52 AM, Joseph Schuchart wrote: Hi Luke, Thanks for your reply. From my understanding, the wrappers mainly help catch errors on the MPI API level. The errors I reported are well below the API layer (please correct me if I

Re: [OMPI users] Valgrind errors related to MPI_Win_allocate_shared

2016-11-15 Thread Joseph Schuchart
iwrap from your installation. If it’s not there then you can rebuild valgrind with CC=mpicc to have it built. Hope this helps move you towards a solution. Luke On Nov 14, 2016, at 5:49 AM, Joseph Schuchart wrote: All, I am investigating an MPI application using Valgrind and see a load of mem

[OMPI users] Valgrind errors related to MPI_Win_allocate_shared

2016-11-14 Thread Joseph Schuchart
i-valgrind.supp ./mpi_dynamic_win_free Best regards, Joseph

Re: [OMPI users] Regression: multiple memory regions in dynamic windows

2016-08-26 Thread Joseph Schuchart
://github.com/open-mpi/ompi/commit/e53de7ecbe9f034ab92c832330089cf7065181dc.patch -Nathan On Aug 25, 2016, at 07:31 AM, Joseph Schuchart wrote: Gilles, Thanks for your fast reply. I did some last minute changes to the example code and didn't fully check the consistency of the output. Also, thank

Re: [OMPI users] Regression: multiple memory regions in dynamic windows

2016-08-25 Thread Joseph Schuchart
you meant to use disp_set2 I will try to reproduce this crash. which compiler (vendor and version) are you using? which compiler options do you pass to mpicc? Cheers, Gilles On Thursday, August 25, 2016, Joseph Schuchart <schuch...@hlrs.de> wrote: All, It seems there is a

[OMPI users] Regression: multiple memory regions in dynamic windows

2016-08-25 Thread Joseph Schuchart
le is not standard compliant. Best regards Joseph