http://gcc.gnu.org/bugzilla/show_bug.cgi?id=61075
Jonathan Wakely <redi at gcc dot gnu.org> changed:

           What    |Removed                                  |Added
----------------------------------------------------------------------------
           Host    |Linux 3.13.5-gentoo #10 SMP              |
                   |Fri Apr 25 16:12:35 CEST 2014            |
                   |x86_64 Intel(R) Xeon(R) CPU              |
                   |W3690 @ 3.47GHz GenuineIntel             |
                   |GNU/Linux                                |

--- Comment #3 from Jonathan Wakely <redi at gcc dot gnu.org> ---
I'm not sure this is easily fixable.

When running in parallel we split the range into N sub-ranges, accumulate
over each sub-range, then accumulate the results. This means we need an
"init" value to start accumulating each sub-range, which we get by
dereferencing the first iterator in the sub-range. The parallel accumulate
therefore has an additional requirement not imposed by the serial
accumulate:

  is_convertible<iterator_traits<Iter>::value_type, T>

(It also implicitly assumes that the functor is associative.)

It might be possible to make it work if we relax the specification,
requiring the functor to be commutative and saying it is unspecified
whether we call binary_op(init, *first) or binary_op(*first, init), but
then the algorithm isn't really std::accumulate any more (it becomes more
like the std::experimental::reduce algorithm from the
http://open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3850.pdf draft).
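For illustration, here is a minimal sketch of the splitting scheme described
above. This is not the libstdc++ parallel-mode code; the function name
parallel_accumulate_sketch, the thread-per-chunk structure, and num_chunks
are assumptions made for the example. Note where *first is used as the
per-chunk init value, which is what introduces the extra is_convertible
requirement and the implicit associativity assumption:

    #include <iterator>
    #include <numeric>
    #include <thread>
    #include <type_traits>
    #include <vector>

    // Illustrative sketch only: each worker seeds its partial result by
    // dereferencing the first iterator of its sub-range, so the value_type
    // must be convertible to T, and binary_op must effectively be
    // associative because the partial results are combined in a different
    // grouping than a strict left fold would use.
    template <typename Iter, typename T, typename BinaryOp>
    T parallel_accumulate_sketch(Iter first, Iter last, T init, BinaryOp op,
                                 unsigned num_chunks = 4)
    {
        static_assert(
            std::is_convertible<
                typename std::iterator_traits<Iter>::value_type, T>::value,
            "extra requirement not present in serial std::accumulate");

        const auto len = std::distance(first, last);
        if (len == 0)
            return init;
        if (num_chunks == 0 || static_cast<decltype(len)>(num_chunks) > len)
            num_chunks = 1;

        const auto chunk = len / num_chunks;
        std::vector<T> partial(num_chunks);   // assumes T is default-constructible
        std::vector<std::thread> workers;

        Iter begin = first;
        for (unsigned i = 0; i < num_chunks; ++i) {
            Iter end = (i + 1 == num_chunks) ? last : std::next(begin, chunk);
            workers.emplace_back([begin, end, op, &partial, i] {
                Iter it = begin;
                T acc = *it++;              // "init" for this sub-range is *first
                for (; it != end; ++it)
                    acc = op(acc, *it);
                partial[i] = acc;
            });
            begin = end;
        }
        for (auto& t : workers)
            t.join();

        // Combine the per-chunk results, folding in the caller's init once.
        return std::accumulate(partial.begin(), partial.end(), init, op);
    }

With a non-associative or non-commutative binary_op this can give a
different answer than serial std::accumulate, which is why relaxing the
specification pushes the algorithm toward something like
std::experimental::reduce rather than accumulate.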