On Fri, May 24, 2013 at 08:26:32PM -0700, Benjamin De Kosnik wrote:
> It scraps the renaming/aliasing approach, and just creates a
> compatibility-chrono.cc that mimics the default configuration in 4.8.0.

Yeah, I think that is reasonable, with one nit, see below.

> Users who specially configured a build with --enable-libstdcxx-time
> configure options in 4.8.0 are breaking an experimental C++ ABI anyway,
> so I'm not going to solve for anything but the default case.
>
> For chrono.cc, there is an inline namespace that mangles system_clock
> and steady_clock in a new way to prevent ambiguous uses. In addition, I
> make a controversial decision and just forward-declare clocks that
> libstdc++ cannot do correctly, instead of using typedefs that clearly
> are only going to get us into trouble as we change things down the
> road. That's probably only appropriate for trunk.

> +    // To support the (forward) evolving definition of the library's
> +    // defined clocks, wrap inside inline namespace so that these
> +    // types are uniquely mangled. This way, new code can use the
> +    // current clocks, while the library can contain old definitions.
> +    // At some point, when these clocks settle down, the inlined
> +    // namespaces can be removed.
> +    // XXX GLIBCXX_ABI Deprecated
> +    inline namespace _V2 {
> +
>     /// system_clock
>     struct system_clock
>     {

In this _V2 inline namespace, I think we should change that:

    struct system_clock
    {
#ifdef _GLIBCXX_USE_CLOCK_REALTIME
      typedef chrono::nanoseconds duration;
#elif defined(_GLIBCXX_USE_GETTIMEOFDAY)
      typedef chrono::microseconds duration;
#else
      typedef chrono::seconds duration;
#endif

into:

    struct system_clock
    {
      typedef chrono::nanoseconds duration;

Won't the templates then DTRT in:

      return time_point(duration(chrono::seconds(tv.tv_sec)
                        + chrono::microseconds(tv.tv_usec)));

(multiply microseconds by 1000 and seconds by 1000000000), and in:

      return system_clock::from_time_t(__sec);

(multiply seconds by 1000000000)?
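To make that concrete, here is a minimal standalone illustration of the
conversion arithmetic (a hypothetical example, not code from the patch):
both operand periods divide nanoseconds exactly, so the implicit duration
conversions are allowed and value-preserving.

    // Hypothetical standalone demo, not from the patch.
    #include <chrono>
    #include <iostream>

    int main()
    {
      using namespace std::chrono;

      // The proposed unconditional typedef.
      typedef nanoseconds duration;

      // Mimics the gettimeofday()-based now(): tv_sec = 5, tv_usec = 250.
      // seconds and microseconds widen implicitly to nanoseconds.
      duration d(seconds(5) + microseconds(250));

      std::cout << d.count() << "ns\n";   // prints 5000250000ns

      return 0;
    }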
While this change isn't really necessary for Linux, where
_GLIBCXX_USE_CLOCK_REALTIME will usually be defined in 4.8.1+ and thus
nanosecond resolution will be used anyway, on other OSes it might
already be a problem: on Solaris or FreeBSD, GCC 4.8.1 will likely
default to microsecond resolution, while the trunk will use nanoseconds.
By making the clock always count in nanoseconds, even if on some OSes
the low 3 or 9 decimal digits will always be zero, we'd be prepared to
change the system_clock::now() implementation at any time to provide
better resolution, as a libstdc++-internal implementation detail.  Or is
this undesirable for users for some reason?

> @@ -742,10 +751,18 @@ _GLIBCXX_END_NAMESPACE_VERSION
>        now() noexcept;
>      };
>  #else
> -    typedef system_clock steady_clock;
> +    // Forward declare only.
> +    struct steady_clock;
>  #endif
>
> +#ifdef _GLIBCXX_USE_CLOCK_REALTIME
>      typedef system_clock high_resolution_clock;
> +#else
> +    // Forward declare only.
> +    struct high_resolution_clock;
> +#endif
> +
> +  } // end inline namespace _V2

Yeah, I bet this hunk should be left out of the 4.8.1 version of the
patch.

Perhaps we also want some testcase that will roughly test it: grab
time(NULL), std::chrono::system_clock::now() and
std::chrono::steady_clock::now(), then sleep(3), grab both clocks again
and time(NULL) last.  Then, if the difference between the time() values
is at least 2 seconds and less than, say, 10, verify that the difference
between each pair of clock samples is in, say, a 1 to 20 second interval
(yeah, it could fail badly if one changes the system time in between,
once or several times).  But maybe it would be too flaky.

	Jakub
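A rough sketch of the kind of testcase described above, with the
thresholds chosen arbitrarily (hypothetical code, not part of the
patch); the time() guard and the wide 1..20 second window are there to
reduce, though not eliminate, the flakiness concern:

    // Hypothetical sketch; all thresholds are arbitrary.
    #include <chrono>
    #include <ctime>
    #include <unistd.h>

    int main()
    {
      using namespace std::chrono;

      std::time_t t0 = std::time(NULL);
      system_clock::time_point s0 = system_clock::now();
      steady_clock::time_point m0 = steady_clock::now();

      sleep(3);

      steady_clock::time_point m1 = steady_clock::now();
      system_clock::time_point s1 = system_clock::now();
      std::time_t t1 = std::time(NULL);

      // Only judge the clocks when time() confirms the sleep was sane;
      // otherwise skip, to avoid spurious failures on loaded machines.
      if (t1 - t0 >= 2 && t1 - t0 < 10)
        {
          seconds sys = duration_cast<seconds>(s1 - s0);
          seconds mono = duration_cast<seconds>(m1 - m0);

          // Both differences should land in a generous 1..20s window.
          if (sys.count() < 1 || sys.count() > 20)
            return 1;
          if (mono.count() < 1 || mono.count() > 20)
            return 1;
        }

      return 0;
    }

A system time change during the sleep could still make the system_clock
check fail, which is why the window is deliberately wide.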