I can't see how my commit could have had any impact on the code below.

I don't think the libstdc++ hash table has any relation to the gcc hash table.

Still, I'm rebuilding gcc at my revision to confirm.

On 10/11/21 1:05 am, H.J. Lu wrote:
On Mon, Nov 8, 2021 at 1:37 PM François Dumont via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
Yet another version, this time with only one guard implementation. The
predicate to invalidate the safe iterators has been externalized.

Ok to commit ?

This may have broken GCC bootstrap on Linux/x86-64:

https://gcc.gnu.org/pipermail/gcc-regression/2021-November/075734.html

In file included from ../../src-master/gcc/sanopt.c:22:
In member function ‘hash_table<Descriptor, Lazy,
Allocator>::value_type* hash_table<Descriptor, Lazy,
Allocator>::alloc_entries(size_t) const [with Descriptor =
hash_map<tree_node*, auto_vec<gimple*> >::hash_entry; bool Lazy =
false; Allocator = xcallocator]’,
     inlined from ‘void hash_table<Descriptor, Lazy,
Allocator>::expand() [with Descriptor = hash_map<tree_node*,
auto_vec<gimple*> >::hash_entry; bool Lazy = false; Allocator =
xcallocator]’ at ../../src-master/gcc/hash-table.h:802:40:
../../src-master/gcc/system.h:784:34: error: section type conflict
with ‘void hash_table<Descriptor, Lazy, Allocator>::expand() [with
Descriptor = hash_map<tree_node*, auto_vec<gimple*> >::hash_entry;
bool Lazy = false; Allocator = xcallocator]’
   784 |    ((void)(!(EXPR) ? fancy_abort (__FILE__, __LINE__,
__FUNCTION__), 0 : 0))
       |                      ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../src-master/gcc/hash-table.h:715:3: note: in expansion of macro
‘gcc_assert’
   715 |   gcc_assert (nentries != NULL);
       |   ^~~~~~~~~~
In file included from ../../src-master/gcc/coretypes.h:482,
                  from ../../src-master/gcc/sanopt.c:23:
../../src-master/gcc/hash-table.h: In member function ‘void
hash_table<Descriptor, Lazy, Allocator>::expand() [with Descriptor =
hash_map<tree_node*, auto_vec<gimple*> >::hash_entry; bool Lazy =
false; Allocator = xcallocator]’:
../../src-master/gcc/hash-table.h:779:1: note: ‘void
hash_table<Descriptor, Lazy, Allocator>::expand() [with Descriptor =
hash_map<tree_node*, auto_vec<gimple*> >::hash_entry; bool Lazy =
false; Allocator = xcallocator]’ was declared here
   779 | hash_table<Descriptor, Lazy, Allocator>::expand ()
       | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On 06/11/21 2:51 pm, François Dumont wrote:
You were right to delay your reply. Here is a new version with less
code duplication and a bug fix in the new _UContMergeGuard where we
were using it->second rather than it->first to get the key.
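
For illustration only (a standalone sketch, not the _UContMergeGuard code): in
a map-like container the value_type is a pair, so a lookup deciding whether a
node is still in the source has to use the key, i.e. it->first.

    #include <string>
    #include <unordered_map>

    int main()
    {
      // value_type of unordered_map<int, std::string> is
      // std::pair<const int, std::string>.
      std::unordered_map<int, std::string> source{ {1, "one"}, {2, "two"} };
      auto it = source.begin();
      // A predicate deciding whether a node was transferred must look the
      // *key* up in the source:
      bool still_there = source.count(it->first) != 0;   // correct
      // source.count(it->second) would check the mapped value instead (and
      // does not even compile here, since the key type is int).
      return still_there ? 0 : 1;
    }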

Note also that the change to the _M_merge_multi implementation is important,
because otherwise we might try to extract a key from an already-destroyed
node.
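
The rough idea, sketched here with public std:: APIs rather than the internal
_Hashtable code (merge_multi_sketch and its parameter names are made up):

    #include <iterator>
    #include <unordered_set>

    template<typename _MultiSet>
      void
      merge_multi_sketch(_MultiSet& __dest, _MultiSet& __source)
      {
        for (auto __it = __source.begin(); __it != __source.end();)
          {
            auto __next = std::next(__it);
            // Compute the destination hash code while the node is still
            // owned by the source (discarded in this sketch; the real code
            // would reuse it for the insertion). If the hash functor throws
            // here, the node has simply not been extracted yet.
            (void) __dest.hash_function()(*__it);
            // The key was read before extraction, so we never touch a node
            // after it has been handed over to the destination.
            __dest.insert(__source.extract(__it));
            __it = __next;
          }
      }

For example, this can be exercised with two std::unordered_multiset<int>
objects.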

     libstdc++: [_GLIBCXX_DEBUG] Implement unordered container merge

     The _GLIBCXX_DEBUG unordered containers need a dedicated merge
     implementation so that any existing iterator on the transferred nodes is
     properly invalidated.

     Add typedef/using declarations for everything used as-is from the normal
     implementation.

     libstdc++-v3/ChangeLog:

        * include/bits/hashtable_policy.h (__distance_fw): Replace class
        keyword with typename.
        * include/bits/hashtable.h (_Hashtable<>::_M_merge_unique): Remove
        noexcept qualification. Use const_iterator for node
        extraction/reinsert.
        (_Hashtable<>::_M_merge_multi): Likewise. Compute new hash code
        before extract.
        * include/debug/safe_container.h (_Safe_container<>): Make all
        methods protected.
        * include/debug/safe_unordered_container.h
        (_Safe_unordered_container<>::_UContMergeGuard<_ExtractKey,
        _Source>): New.
        (_Safe_unordered_container<>::_S_uc_guard<_ExtractKey, _Source>):
        New.
        (_Safe_unordered_container<>::_UMContMergeGuard<_ExtractKey,
        _Source>): New.
        (_Safe_unordered_container<>::_S_umc_guard<_ExtractKey, _Source>):
        New.
        (_Safe_unordered_container<>::_M_invalidate_all): Make public.
        (_Safe_unordered_container<>::_M_invalidate_if): Likewise.
        (_Safe_unordered_container<>::_M_invalidate_local_if): Likewise.
        * include/debug/unordered_map
        (unordered_map<>::mapped_type, pointer, const_pointer): New typedef.
        (unordered_map<>::reference, const_reference, difference_type): New
        typedef.
        (unordered_map<>::get_allocator, empty, size, max_size): Add usings.
        (unordered_map<>::bucket_count, max_bucket_count, bucket): Add
        usings.
        (unordered_map<>::hash_function, key_equal, count, contains): Add
        usings.
        (unordered_map<>::operator[], at, rehash, reserve): Add usings.
        (unordered_map<>::merge): New.
        (unordered_multimap<>::mapped_type, pointer, const_pointer): New
        typedef.
        (unordered_multimap<>::reference, const_reference, difference_type):
        New typedef.
        (unordered_multimap<>::get_allocator, empty, size, max_size): Add
        usings.
        (unordered_multimap<>::bucket_count, max_bucket_count, bucket): Add
        usings.
        (unordered_multimap<>::hash_function, key_equal, count, contains):
        Add usings.
        (unordered_multimap<>::rehash, reserve): Add usings.
        (unordered_multimap<>::merge): New.
        * include/debug/unordered_set
        (unordered_set<>::mapped_type, pointer, const_pointer): New typedef.
        (unordered_set<>::reference, const_reference, difference_type): New
        typedef.
        (unordered_set<>::get_allocator, empty, size, max_size): Add usings.
        (unordered_set<>::bucket_count, max_bucket_count, bucket): Add
        usings.
        (unordered_set<>::hash_function, key_equal, count, contains): Add
        usings.
        (unordered_set<>::rehash, reserve): Add usings.
        (unordered_set<>::merge): New.
        (unordered_multiset<>::mapped_type, pointer, const_pointer): New
        typedef.
        (unordered_multiset<>::reference, const_reference, difference_type):
        New typedef.
        (unordered_multiset<>::get_allocator, empty, size, max_size): Add
        usings.
        (unordered_multiset<>::bucket_count, max_bucket_count, bucket): Add
        usings.
        (unordered_multiset<>::hash_function, key_equal, count, contains):
        Add usings.
        (unordered_multiset<>::rehash, reserve): Add usings.
        (unordered_multiset<>::merge): New.
        * testsuite/23_containers/unordered_map/debug/merge1_neg.cc: New test.
        * testsuite/23_containers/unordered_map/debug/merge2_neg.cc: New test.
        * testsuite/23_containers/unordered_map/debug/merge3_neg.cc: New test.
        * testsuite/23_containers/unordered_map/debug/merge4_neg.cc: New test.
        * testsuite/23_containers/unordered_multimap/debug/merge1_neg.cc: New test.
        * testsuite/23_containers/unordered_multimap/debug/merge2_neg.cc: New test.
        * testsuite/23_containers/unordered_multimap/debug/merge3_neg.cc: New test.
        * testsuite/23_containers/unordered_multimap/debug/merge4_neg.cc: New test.
        * testsuite/23_containers/unordered_multiset/debug/merge1_neg.cc: New test.
        * testsuite/23_containers/unordered_multiset/debug/merge2_neg.cc: New test.
        * testsuite/23_containers/unordered_multiset/debug/merge3_neg.cc: New test.
        * testsuite/23_containers/unordered_multiset/debug/merge4_neg.cc: New test.
        * testsuite/23_containers/unordered_set/debug/merge1_neg.cc: New test.
        * testsuite/23_containers/unordered_set/debug/merge2_neg.cc: New test.
        * testsuite/23_containers/unordered_set/debug/merge3_neg.cc: New test.
        * testsuite/23_containers/unordered_set/debug/merge4_neg.cc: New test.
        * testsuite/util/testsuite_abi.h: [_GLIBCXX_DEBUG] Use normal
        unordered container implementation.
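
As a usage illustration of the invalidation described above (a sketch,
assuming the program is built with -D_GLIBCXX_DEBUG and -std=c++17):

    #include <unordered_set>

    int main()
    {
      std::unordered_set<int> source{ 1, 2, 3 };
      std::unordered_set<int> dest{ 3 };

      auto it = source.find(1);   // iterator into source
      dest.merge(source);         // nodes 1 and 2 are transferred, 3 stays

      // 'it' referred to a transferred node, so it is now invalid; the
      // dedicated _GLIBCXX_DEBUG merge marks it as such, so a dereference
      // like the one below is diagnosed instead of silently reading
      // through a stale iterator.
      // return *it;
      return 0;
    }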

Tested under Linux x86_64.

Ok to commit ?

François

On 25/10/21 8:08 pm, François Dumont wrote:
New patch with the proposed workaround below.

I also slightly changed the _M_merge_multi implementation so that if the
computation of the new hash code raises an exception, the node is simply not
extracted rather than extracted and then released. This way, if it happens on
the first moved node, the _GLIBCXX_DEBUG mode won't try to invalidate
anything, because the source size won't have changed.

Ok to commit ?

François


On 16/10/21 4:52 pm, Jonathan Wakely wrote:

On Sat, 16 Oct 2021, 14:49 François Dumont via Libstdc++,
<libstd...@gcc.gnu.org> wrote:

     Hi

          Here is the new proposal. My only concern is that we are also using
     hash or equal_to functors in the guard destructor.



Can we catch any exception there, invalidate all iterators, and not
rethrow the exception?
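
Something like this, perhaps (just a sketch of that fallback, reusing the
_Guard shape sketched further down in this thread; everything except the
existing _M_invalidate_* helpers is made up):

    ~_Guard()
    {
      if (_M_source.size() == _M_size)
        return;                     // nothing was transferred
      try
        {
          // count() uses the hash and equal_to functors, so it can throw.
          auto __pred = [this](auto __it)
                        { return _M_source.count(*__it) == 0; };
          _M_source._M_invalidate_if(__pred);
          _M_source._M_invalidate_local_if(__pred);
        }
      catch (...)
        {
          // We can no longer tell which iterators are stale, so invalidate
          // them all and swallow the exception rather than let it escape
          // the destructor.
          _M_source._M_invalidate_all();
        }
    }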


          I am going to enhance the normal merge implementation to make use
     of the cached hash code when the hash functors are the same between the
     source and destination of the nodes. Maybe I'll be able to make use of
     it in the Debug implementation too.
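
     A possible shape for that dispatch (a sketch with made-up names, not the
     actual _Hashtable interface): when the two hash functor types are the
     same, the code cached in the source node is already valid for the
     destination and the functor never needs to be re-invoked.

         #include <cstddef>
         #include <type_traits>

         // _SrcHash must be supplied explicitly at the call site, e.g.
         // __hash_code_sketch<SourceHasher>(dest_hasher, cached_code, key).
         template<typename _SrcHash, typename _DstHash, typename _Key>
           std::size_t
           __hash_code_sketch(const _DstHash& __dst_hash,
                              std::size_t __cached_code, const _Key& __key)
           {
             if constexpr (std::is_same_v<_SrcHash, _DstHash>)
               return __cached_code;     // reuse the cached code, cannot throw
             else
               return __dst_hash(__key); // different hasher: recompute
           }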

     François


     On 14/10/21 10:23 am, Jonathan Wakely wrote:
     > On Wed, 13 Oct 2021 at 18:10, François Dumont via Libstdc++
     > <libstd...@gcc.gnu.org> wrote:
     >> Hi
     >>
     >>       libstdc++: [_GLIBCXX_DEBUG] Implement unordered
     container merge
     >>
     >>       The _GLIBCXX_DEBUG unordered containers need a
     dedicated merge
     >> implementation
     >>       so that any existing iterator on the transferred nodes
     is properly
     >> invalidated.
     >>
     >>       Add typedef/using declaration for everything used as-is
     from normal
     >> implementation.
     >>
     >>       libstdc++-v3/ChangeLog:
     >>
     >>               * include/debug/safe_container.h
     (_Safe_container<>): Make
     >> all methods
     >>               protected.
     >>               * include/debug/safe_unordered_container.h
     >>  (_Safe_unordered_container<>::_M_invalidate_all): Make public.
     >>  (_Safe_unordered_container<>::_M_invalidate_if): Likewise.
     >> (_Safe_unordered_container<>::_M_invalidate_local_if): Likewise.
     >>               * include/debug/unordered_map
     >>  (unordered_map<>::mapped_type, pointer, const_pointer): New
     >> typedef.
     >>  (unordered_map<>::reference, const_reference,
     >> difference_type): New typedef.
     >>  (unordered_map<>::get_allocator, empty, size, max_size):
     >> Add usings.
     >>  (unordered_map<>::bucket_count, max_bucket_count, bucket):
     >> Add usings.
     >>  (unordered_map<>::hash_function, key_equal, count,
     >> contains): Add usings.
     >>  (unordered_map<>::operator[], at, rehash, reserve): Add usings.
     >>               (unordered_map<>::merge): New.
     >>  (unordered_multimap<>::mapped_type, pointer,
     >> const_pointer): New typedef.
     >>  (unordered_multimap<>::reference, const_reference,
     >> difference_type): New typedef.
     >>  (unordered_multimap<>::get_allocator, empty, size,
     >> max_size): Add usings.
     >>  (unordered_multimap<>::bucket_count, max_bucket_count,
     >> bucket): Add usings.
     >>  (unordered_multimap<>::hash_function, key_equal, count,
     >> contains): Add usings.
     >>  (unordered_multimap<>::rehash, reserve): Add usings.
     >>  (unordered_multimap<>::merge): New.
     >>               * include/debug/unordered_set
     >>  (unordered_set<>::mapped_type, pointer, const_pointer): New
     >> typedef.
     >>  (unordered_set<>::reference, const_reference,
     >> difference_type): New typedef.
     >>  (unordered_set<>::get_allocator, empty, size, max_size):
     >> Add usings.
     >>  (unordered_set<>::bucket_count, max_bucket_count, bucket):
     >> Add usings.
     >>  (unordered_set<>::hash_function, key_equal, count,
     >> contains): Add usings.
     >>               (unordered_set<>::rehash, reserve): Add usings.
     >>               (unordered_set<>::merge): New.
     >>  (unordered_multiset<>::mapped_type, pointer,
     >> const_pointer): New typedef.
     >>  (unordered_multiset<>::reference, const_reference,
     >> difference_type): New typedef.
     >>  (unordered_multiset<>::get_allocator, empty, size,
     >> max_size): Add usings.
     >>  (unordered_multiset<>::bucket_count, max_bucket_count,
     >> bucket): Add usings.
     >>  (unordered_multiset<>::hash_function, key_equal, count,
     >> contains): Add usings.
     >>  (unordered_multiset<>::rehash, reserve): Add usings.
     >>  (unordered_multiset<>::merge): New.
     >>               *
     >> testsuite/23_containers/unordered_map/debug/merge1_neg.cc:
     New test.
     >>               *
     >> testsuite/23_containers/unordered_map/debug/merge2_neg.cc:
     New test.
     >>               *
     >> testsuite/23_containers/unordered_map/debug/merge3_neg.cc:
     New test.
     >>               *
     >> testsuite/23_containers/unordered_map/debug/merge4_neg.cc:
     New test.
     >>               *
     >>
     testsuite/23_containers/unordered_multimap/debug/merge1_neg.cc:
     New test.
     >>               *
     >>
     testsuite/23_containers/unordered_multimap/debug/merge2_neg.cc:
     New test.
     >>               *
     >>
     testsuite/23_containers/unordered_multimap/debug/merge3_neg.cc:
     New test.
     >>               *
     >>
     testsuite/23_containers/unordered_multimap/debug/merge4_neg.cc:
     New test.
     >>               *
     >>
     testsuite/23_containers/unordered_multiset/debug/merge1_neg.cc:
     New test.
     >>               *
     >>
     testsuite/23_containers/unordered_multiset/debug/merge2_neg.cc:
     New test.
     >>               *
     >>
     testsuite/23_containers/unordered_multiset/debug/merge3_neg.cc:
     New test.
     >>               *
     >>
     testsuite/23_containers/unordered_multiset/debug/merge4_neg.cc:
     New test.
     >>               *
     >> testsuite/23_containers/unordered_set/debug/merge1_neg.cc:
     New test.
     >>               *
     >> testsuite/23_containers/unordered_set/debug/merge2_neg.cc:
     New test.
     >>               *
     >> testsuite/23_containers/unordered_set/debug/merge3_neg.cc:
     New test.
     >>               *
     >> testsuite/23_containers/unordered_set/debug/merge4_neg.cc:
     New test.
     >>               * testsuite/util/testsuite_abi.h:
     [_GLIBCXX_DEBUG] Use
     >> normal unordered container implementation.
     >>
     >> Tested under Linux x86_64.
     >>
     >> Ok to commit ?
     > Yes, thanks. But ...
     >
     > This looks like an improvement over what we have now, but not 100%
     > correct. The merge functions can exit via exception (if any hash
     > function or equality predicate throws), and if that happens the safe
     > iterators will not get invalidated. I think we need to call
     > _Base::merge in a try-block, and do the iterator invalidation whether
     > we return normally or via an exception.
     >
     > Something like:
     >
     >    template<typename _H2, typename _P2>
     >      void
     >      merge(unordered_set<_Value, _H2, _P2, _Alloc>& __source)
     >      {
     >        struct _Guard
     >        {
     >          _Guard(unordered_set& __source) noexcept
     >          : _M_source(__source), _M_size(__source.size())
     >          { }
     >
     >          ~_Guard()
     >          {
     >            const size_type __size = _M_source.size();
     >            if (__size != _M_size)
     >              {
     >                if (__size == 0)
     >                  _M_source._M_invalidate_all();
     >                else
     >                  {
     >                    auto __pred = [this](auto __it)
     >                                  { return _M_source.count(*__it) == 0; };
     >                    _M_source._M_invalidate_if(__pred);
     >                    _M_source._M_invalidate_local_if(__pred);
     >                  }
     >              }
     >          }
     >
     >          _Guard(const _Guard&) = delete;
     >
     >          unordered_set& _M_source;
     >          const size_type _M_size;
     >        };
     >        _Guard __guard(__source);
     >        _Base::merge(__source._M_base());
     >      }
     >


