https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104191

--- Comment #2 from frankhb1989 at gmail dot com ---
(In reply to Jonathan Wakely from comment #1)
> (In reply to frankhb1989 from comment #0)
> > and it should be solely determined by the internal node count type.
> 
> What is the internal node count type? You mean the size_type? That would be
> wrong, because there's no way you can create
> numeric_limits<size_type>::max() nodes, as each node requires tens of bytes.
> 

For the current std::list implementation with _GLIBCXX_USE_CXX11_ABI enabled,
it is the type of _List_node_header::_M_size, which is explicitly declared as
std::size_t. Otherwise, it is the size_type. Both are effectively size_t.

Allocating that many nodes will generally fail with usual allocators, but what
about allocators with fancy pointers, which can point to objects allocated
outside the main memory in the usual flat address space (e.g. via Microsoft
Windows's Address Windowing Extensions)?

There is one more related problem: can the allocator's size_type be greater
than size_t? If so, the implementations of max_size() are immediately more
flawed due to the truncation of the allocator's size_type to the container's
size_type (size_t), e.g. when the allocator's max_size() is provided by the
default in allocator_traits (since the fix of LWG 2466).

As per https://stackoverflow.com/a/12499353, the standard used to allow such a
size_type. But I then found the referenced wording is invalidated by the side
effects of the changes in LWG 2384 and P0593R6. I'm not confident those changes
are intentional.

> I agree using the allocator's max_size() is wrong, but it doesn't really
> matter because all max_size() functions on containers and allocators are
> useless in practice.

This is true for most user code, but it still seems too easy to break, as this
case shows.

Further, it exposes an internal inconsistency in the implementation. MSVC's
std::list checks the size and throws std::length_error on insertion to avoid
the inconsistency. I don't think such a cost should be introduced here.

The simplest fix is just to return numeric_limits<size_t>::max() in the
container's max_size()... well, only if PR 78448 is not taken into account; it
catastrophically blocks the way to this simple fix.

Perhaps there could be an LWG issue proposing changes to the definitions of
size() and max_size() to lift the range limit of difference_type, at least for
containers not requiring contiguous memory (so that
[container.requirements.general]/5 kicks in instead)... then the simple fix
could be applied.
