On 19/09/20 11:50 +0100, Mike Crowe wrote:
On 29/05/20 07:17 +0100, Mike Crowe via Libstdc++ wrote:
> > diff --git a/libstdc++-v3/include/bits/atomic_futex.h b/libstdc++-v3/include/bits/atomic_futex.h
> > index 5f95ade..aa137a7 100644
> > --- a/libstdc++-v3/include/bits/atomic_futex.h
> > +++ b/libstdc++-v3/include/bits/atomic_futex.h
> > @@ -219,8 +219,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >      _M_load_when_equal_for(unsigned __val, memory_order __mo,
> >         const chrono::duration<_Rep, _Period>& __rtime)
> >      {
> > +     using __dur = typename __clock_t::duration;
> >       return _M_load_when_equal_until(__val, __mo,
> > -                                     __clock_t::now() + __rtime);
> > +                 __clock_t::now() + chrono::__detail::ceil<__dur>(__rtime));
> >      }
> >
> >    // Returns false iff a timeout occurred.
> > @@ -233,7 +234,8 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
> >       do {
> >         const __clock_t::time_point __s_entry = __clock_t::now();
> >         const auto __delta = __atime - __c_entry;
> > -       const auto __s_atime = __s_entry + __delta;
> > +       const auto __s_atime = __s_entry +
> > +           chrono::__detail::ceil<_Duration>(__delta);

On Friday 11 September 2020 at 18:22:04 +0100, Jonathan Wakely wrote:
I'm testing the attached patch to fix the C++11 constexpr error, but
while re-looking at the uses of __detail::ceil I noticed this is using
_Duration as the target type. Shouldn't that be __clock_t::duration
instead? Why do we care about the duration of the user's time_point
here, rather than the preferred duration of the clock we're about to
wait against?

I think you're right. I've attached a patch to fix it and also add a test
that would have failed at least some of the time if run on a machine with
an uptime greater than 208.5 days with:

void test_pr91486_wait_until(): Assertion 'float_steady_clock::call_count <= 3' failed.

If we later implement the optimisation of not re-checking against the custom
clock once the wait completes when is_steady == true, then the test would
also have started failing, because the wait would not have been long enough.

(I used a couple of the GCC farm machines that have high uptimes to test
this.)

Also pushed to master. Thanks!


Thanks.

Mike.

From fa4decc00698785fb9e07aa36c0d862414ca5ff9 Mon Sep 17 00:00:00 2001
From: Mike Crowe <m...@mcrowe.com>
Date: Wed, 16 Sep 2020 16:55:11 +0100
Subject: [PATCH 2/2] libstdc++: Use correct duration for atomic_futex wait on
custom clock [PR 91486]

As Jonathan Wakely pointed out[1], my change in commit
f9ddb696a289cc48d24d3d23c0b324cb88de9573 should have been rounding to
the target clock duration type rather than the input clock duration type
in __atomic_futex_unsigned::_M_load_when_equal_until just as (e.g.)
condition_variable does.

As well as fixing this, let's create a rather contrived test that fails
with the previous code, but unfortunately only when run on a machine
with an uptime of over 208.5 days, and even then not always.

[1] https://gcc.gnu.org/pipermail/libstdc++/2020-September/051004.html

libstdc++-v3/ChangeLog:

        * include/bits/atomic_futex.h
        (__atomic_futex_unsigned::_M_load_when_equal_until): Use the
        target clock duration type when rounding.
        * testsuite/30_threads/async/async.cc (test_pr91486_wait_for):
        Rename from test_pr91486.
        (float_steady_clock): New class for test.
        (test_pr91486_wait_until): New test.
---
libstdc++-v3/include/bits/atomic_futex.h      |  2 +-
.../testsuite/30_threads/async/async.cc       | 62 ++++++++++++++++++-
2 files changed, 61 insertions(+), 3 deletions(-)

diff --git a/libstdc++-v3/include/bits/atomic_futex.h b/libstdc++-v3/include/bits/atomic_futex.h
index aa137a7b64e..6093be0fbc7 100644
--- a/libstdc++-v3/include/bits/atomic_futex.h
+++ b/libstdc++-v3/include/bits/atomic_futex.h
@@ -235,7 +235,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
          const __clock_t::time_point __s_entry = __clock_t::now();
          const auto __delta = __atime - __c_entry;
          const auto __s_atime = __s_entry +
-             chrono::__detail::ceil<_Duration>(__delta);
+             chrono::__detail::ceil<__clock_t::duration>(__delta);
          if (_M_load_when_equal_until(__val, __mo, __s_atime))
            return true;
          __c_entry = _Clock::now();
diff --git a/libstdc++-v3/testsuite/30_threads/async/async.cc b/libstdc++-v3/testsuite/30_threads/async/async.cc
index 46f8d2f327d..1c779bfbcad 100644
--- a/libstdc++-v3/testsuite/30_threads/async/async.cc
+++ b/libstdc++-v3/testsuite/30_threads/async/async.cc
@@ -157,7 +157,7 @@ void test04()
  }
}

-void test_pr91486()
+void test_pr91486_wait_for()
{
  future<void> f1 = async(launch::async, []() {
      std::this_thread::sleep_for(std::chrono::seconds(1));
@@ -171,6 +171,63 @@ void test_pr91486()
  VERIFY( elapsed_steady >= std::chrono::seconds(1) );
}

+// This is a clock with a very recent epoch which ensures that the difference
+// between now() and one second in the future is representable in a float so
+// that when the generic clock version of
+// __atomic_futex_unsigned::_M_load_when_equal_until calculates the delta it
+// gets a duration of 1.0f.  When chrono::steady_clock has moved sufficiently
+// far from its epoch (about 208.5 days in my testing - about 2^54ns because
+// there's a promotion to double happening too somewhere) adding 1.0f to the
+// current time has no effect.  Using this clock ensures that
+// __atomic_futex_unsigned::_M_load_when_equal_until is using
+// chrono::__detail::ceil correctly so that the function actually sleeps rather
+// than spinning.
+struct float_steady_clock
+{
+  using duration = std::chrono::duration<float>;
+  using rep = typename duration::rep;
+  using period = typename duration::period;
+  using time_point = std::chrono::time_point<float_steady_clock, duration>;
+  static constexpr bool is_steady = true;
+
+  static chrono::steady_clock::time_point epoch;
+  static int call_count;
+
+  static time_point now()
+  {
+    ++call_count;
+    auto real = std::chrono::steady_clock::now();
+    return time_point{real - epoch};
+  }
+};
+
+chrono::steady_clock::time_point float_steady_clock::epoch = chrono::steady_clock::now();
+int float_steady_clock::call_count = 0;
+
+void test_pr91486_wait_until()
+{
+  future<void> f1 = async(launch::async, []() {
+      std::this_thread::sleep_for(std::chrono::seconds(1));
+    });
+
+  float_steady_clock::time_point const now = float_steady_clock::now();
+
+  std::chrono::duration<float> const wait_time = std::chrono::seconds(1);
+  float_steady_clock::time_point const expire = now + wait_time;
+  VERIFY( expire > now );
+
+  auto const start_steady = chrono::steady_clock::now();
+  auto status = f1.wait_until(expire);
+  auto const elapsed_steady = chrono::steady_clock::now() - start_steady;
+
+  // This checks that we didn't come back too soon
+  VERIFY( elapsed_steady >= std::chrono::seconds(1) );
+
+  // This checks that wait_until didn't busy wait checking the clock more times
+  // than necessary.
+  VERIFY( float_steady_clock::call_count <= 3 );
+}
+
int main()
{
  test01();
@@ -179,6 +236,7 @@ int main()
  test03<std::chrono::steady_clock>();
  test03<steady_clock_copy>();
  test04();
-  test_pr91486();
+  test_pr91486_wait_for();
+  test_pr91486_wait_until();
  return 0;
}
--
2.28.0

