https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40362

Thomas Koenig <tkoenig at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |tkoenig at gcc dot gnu.org

--- Comment #16 from Thomas Koenig <tkoenig at gcc dot gnu.org> ---
Looking at a simple OpenMP program, I now get conflicts reported by
valgrind's helgrind for which I see no cause in the program itself:

program main
  implicit none
  integer, parameter :: n = 100
  real, dimension(n) :: a
  real, dimension(n,n) :: b
  integer :: i, j, n1, n2
  call random_number (a)
!$omp parallel private(i,j)
!$omp do
  do j=1,n
     do i=1,n
        b(i,j) = a(i) * a(j)   ! outer product; each thread writes its own columns of b
     end do
  end do
!$omp end parallel
  read (*,*) n1, n2
  print *,b(n1, n2)
end program main
$ gfortran -g -fopenmp -fcheck=all do.f90
$ echo 10 10 | valgrind --tool=helgrind ./a.out  2>&1 | head -60
==4359== Helgrind, a thread error detector
==4359== Copyright (C) 2007-2017, and GNU GPL'd, by OpenWorks LLP et al.
==4359== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==4359== Command: ./a.out
==4359== 
==4359== ---Thread-Announcement------------------------------------------
==4359== 
==4359== Thread #1 is the program's root thread
==4359== 
==4359== ---Thread-Announcement------------------------------------------
==4359== 
==4359== Thread #16 was created
==4359==    at 0x5F9BE4E: clone (in /lib64/libc-2.22.so)
==4359==    by 0x5C993AF: create_thread (in /lib64/libpthread-2.22.so)
==4359==    by 0x5C9AECA: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.22.so)
==4359==    by 0x4C313C9: pthread_create_WRK (hg_intercepts.c:427)
==4359==    by 0x4C324BF: pthread_create@* (hg_intercepts.c:460)
==4359==    by 0x561167A: gomp_team_start (team.c:836)
==4359==    by 0x5609CFC: GOMP_parallel (parallel.c:169)
==4359==    by 0x400B45: MAIN__ (do.f90:8)
==4359==    by 0x400D1E: main (do.f90:18)
==4359== 
==4359== ----------------------------------------------------------------
==4359== 
==4359== Possible data race during write of size 4 at 0x645BC00 by thread #1
==4359== Locks held: none
==4359==    at 0x56138DB: gomp_barrier_wait_end (bar.c:40)
==4359==    by 0x56138DB: gomp_barrier_wait_end (bar.c:35)
==4359==    by 0x5611AA0: gomp_simple_barrier_wait (simple-bar.h:60)
==4359==    by 0x5611AA0: gomp_team_start (team.c:850)
==4359==    by 0x5609CFC: GOMP_parallel (parallel.c:169)
==4359==    by 0x400B45: MAIN__ (do.f90:8)
==4359==    by 0x400D1E: main (do.f90:18)
==4359== 
==4359== This conflicts with a previous read of size 4 by thread #16
==4359== Locks held: none
==4359==    at 0x561392B: gomp_barrier_wait_start (bar.h:98)
==4359==    by 0x561392B: gomp_barrier_wait (bar.c:56)
==4359==    by 0x5611032: gomp_simple_barrier_wait (simple-bar.h:60)
==4359==    by 0x5611032: gomp_thread_start (team.c:117)
==4359==    by 0x4C315BD: mythread_wrapper (hg_intercepts.c:389)
==4359==    by 0x5C9A723: start_thread (in /lib64/libpthread-2.22.so)
==4359==    by 0x5F9BE8C: clone (in /lib64/libc-2.22.so)
==4359==  Address 0x645bc00 is 128 bytes inside a block of size 192 alloc'd
==4359==    at 0x4C2B831: malloc (vg_replace_malloc.c:309)
==4359==    by 0x56040E8: gomp_malloc (alloc.c:38)
==4359==    by 0x5611278: gomp_get_thread_pool (pool.h:42)
==4359==    by 0x5611278: get_last_team (team.c:150)
==4359==    by 0x5611278: gomp_new_team (team.c:169)
==4359==    by 0x5609CE5: GOMP_parallel (parallel.c:169)
==4359==    by 0x400B45: MAIN__ (do.f90:8)
==4359==    by 0x400D1E: main (do.f90:18)
==4359==  Block was alloc'd by thread #1
==4359== 
==4359== ----------------------------------------------------------------
==4359== 
==4359== Possible data race during write of size 4 at 0x645BBC4 by thread #1
==4359== Locks held: none
==4359==    at 0x56138E1: gomp_barrier_wait_end (bar.c:41)
==4359==    by 0x56138E1: gomp_barrier_wait_end (bar.c:35)

... and so on.

I would have to dig a bit deeper into this to see whether these reports are valid.

This is with current trunk and valgrind 3.15.0 on x86_64-pc-linux-gnu.
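
One plausible explanation (from the helgrind section of the valgrind manual):
libgomp builds its barriers out of atomic instructions and futex system calls,
which helgrind cannot see through, so it flags the barrier itself as a race
unless GCC was configured with --disable-linux-futex. Assuming the two reports
above are that known false positive, they could be filtered with a suppression
file along these lines (file name and frame choices are my guesses, untested):

# gomp-barrier.supp: hide helgrind reports whose innermost frame
# is one of libgomp's own barrier routines.
{
   libgomp-barrier-wait-end
   Helgrind:Race
   fun:gomp_barrier_wait_end
   ...
}
{
   libgomp-barrier-wait-start
   Helgrind:Race
   fun:gomp_barrier_wait_start
   ...
}

$ echo 10 10 | valgrind --tool=helgrind --suppressions=gomp-barrier.supp ./a.out 2>&1 | head -60

Anything still reported after that would be worth a closer look, since it
would implicate the user code rather than the runtime's own synchronization.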
