On Mon, 28 Apr 2025 15:23:18 GMT, kabutz <d...@openjdk.org> wrote:

> We logged several bugs on the LinkedBlockingDeque. This aggregates them into 
> a single bug report and PR.
> 
> 1. LinkedBlockingDeque does not immediately throw InterruptedException in 
> put/take
> 
> The LinkedBlockingDeque does not behave consistently with other concurrency 
> components. If we call putFirst(), putLast(), takeFirst(), or takeLast() with 
> a thread that is interrupted, it does not immediately throw an 
> InterruptedException, the way that ArrayBlockingQueue and LinkedBlockingQueue 
> do, because instead of lockInterruptibly(), we call lock(). It will only 
> throw an InterruptedException if the queue is full (on put) or empty (on 
> take). Since interruptions are frequently used as a shutdown mechanism, this 
> might prevent code from ever shutting down.
> 
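The fail-fast behaviour described above can be checked directly against a queue
that does use lockInterruptibly(). The sketch below pre-interrupts the current
thread and calls take() on a non-empty ArrayBlockingQueue; because
lockInterruptibly() checks the interrupt status on entry, the call throws even
though no blocking would have been necessary (class and method names here are
illustrative, not from the JDK):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class InterruptDemo {
    // Returns true if take() throws InterruptedException even though the
    // queue is non-empty, i.e. the interrupt status is checked up front.
    static boolean takeFailsFastWhenInterrupted() {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        q.offer(42);                        // non-empty: take() need not block
        Thread.currentThread().interrupt(); // pre-interrupt the current thread
        try {
            q.take();                       // lockInterruptibly() fails fast here
            return false;
        } catch (InterruptedException expected) {
            return true;
        } finally {
            Thread.interrupted();           // clear the flag for later code
        }
    }

    public static void main(String[] args) {
        System.out.println(takeFailsFastWhenInterrupted()); // true
    }
}
```

Running the same experiment against a pre-fix LinkedBlockingDeque.takeFirst()
would return the element instead, because plain lock() ignores the interrupt
status when the lock is free.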
> 2. LinkedBlockingDeque.clear() should preserve weakly-consistent iterators
> 
> LinkedBlockingDeque.clear() should preserve weakly-consistent iterators by 
> linking f.prev and f.next back to f, allowing the iterators to continue from 
> the first or the last node, respectively. This would be consistent with how 
> the other node-based weakly consistent queues (LinkedBlockingQueue, 
> LinkedTransferQueue, and ConcurrentLinkedQueue/Deque) work.
> 
> The LBD already supports self-linking, since that is done by the 
> unlinkFirst() and unlinkLast() methods, and the iterators and spliterator 
> thus all support self-linking.
> 
> This can be fixed very easily by linking both f.prev and f.next back to f.
> 
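To make the self-linking idea concrete, here is a toy sketch (not the JDK
source; all names are illustrative) of a doubly-linked deque whose clear()
links both f.prev and f.next back to f. An iterator still holding an old node
can then detect the self-link and restart from the new head rather than
walking a dead chain:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch: clear() self-links removed nodes so a weakly-consistent
// iterator can detect removal and continue from the current head.
public class SelfLinkDemo {
    static final class Node {
        Integer item; Node prev, next;
        Node(Integer item) { this.item = item; }
    }
    Node first, last;

    void addLast(Integer item) {
        Node n = new Node(item);
        n.prev = last;
        if (last == null) first = n; else last.next = n;
        last = n;
    }

    // Mirrors the proposed fix: link both f.prev and f.next back to f.
    void clear() {
        for (Node f = first; f != null; ) {
            Node next = f.next;
            f.item = null;
            f.prev = f;   // self-link marks the node as removed
            f.next = f;
            f = next;
        }
        first = last = null;
    }

    static List<Integer> demo() {
        SelfLinkDemo d = new SelfLinkDemo();
        d.addLast(1); d.addLast(2); d.addLast(3);
        List<Integer> seen = new ArrayList<>();
        Node cur = d.first;
        seen.add(cur.item);   // iterator consumes 1
        d.clear();            // concurrent clear(): cur is now self-linked
        d.addLast(7);         // deque is repopulated
        Node next = cur.next;
        cur = (next == cur) ? d.first : next; // self-link seen: restart at head
        if (cur != null) seen.add(cur.item);
        return seen;          // [1, 7]
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Without the self-link, cur.next would still be the defunct second node and the
iterator would keep reporting cleared elements.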
> 3. LinkedBlockingDeque offer() creates nodes even if capacity has been reached
> 
> In the JavaDoc of LinkedBlockingDeque, it states: "Linked nodes are 
> dynamically created upon each insertion unless this would bring the deque 
> above capacity." However, in the current implementation, nodes are always 
> created, even if the deque is full. This is because count is non-volatile, 
> and we only check inside the linkFirst/Last() methods whether the queue is 
> full. At this point we have already locked and have created the Node. 
> Instead, the count could be volatile, and we could check before locking.
> 
> In the current version, calling offer() on a full LinkedBlockingDeque creates 
> unnecessary objects and contention. Similarly for poll() and peek(), we could 
> exit prior to locking by checking the count field.
> 
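The pre-check proposed above can be sketched as follows. This is not the JDK
implementation, and the field and method names are illustrative; it only shows
the shape of the fast path: read a volatile count before locking, so offer()
on a full deque returns false without allocating a Node or contending on the
lock, and re-check under the lock before linking:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the proposed fast path: volatile count read before locking.
public class OfferFastPath {
    static final class Node {
        final int item; Node next;
        Node(int item) { this.item = item; }
    }

    final ReentrantLock lock = new ReentrantLock();
    final int capacity;
    volatile int count;      // volatile so the unlocked pre-check sees recent writes
    Node first, last;        // real code would also maintain prev links

    OfferFastPath(int capacity) { this.capacity = capacity; }

    boolean offerLast(int e) {
        if (count >= capacity)     // fast path: full, so no Node and no lock
            return false;
        Node node = new Node(e);   // allocate only once insertion may succeed
        lock.lock();
        try {
            if (count >= capacity) // re-check: count may have grown meanwhile
                return false;
            if (first == null) first = node; else last.next = node;
            last = node;
            count = count + 1;     // safe: writes happen only under the lock
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        OfferFastPath q = new OfferFastPath(2);
        System.out.println(q.offerLast(1)); // true
        System.out.println(q.offerLast(2)); // true
        System.out.println(q.offerLast(3)); // false: rejected before allocation
    }
}
```

The same unlocked read of count would let poll() and peek() return null early
on an empty deque.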
> 4. LinkedBlockingDeque allows us to overflow size with addAll()
> 
> In LinkedBlockingDeque.addAll() we first build up the chain of nodes and then 
> add that chain in bulk to the existing nodes. We count the nodes in "int n" 
> and then whilst holding the lock, we check that we haven't exceed...

src/java.base/share/classes/java/util/concurrent/LinkedBlockingDeque.java line 865:

> 863:         long n = 0;
> 864:         for (E e : c) {
> 865:             Objects.requireNonNull(e);

This makes me wonder: does it make sense to create new nodes if we don't track 
whether they will still fit within the capacity?
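Regarding the `long n` in the quoted code, one way to see why the wider
counter matters is the overflow hazard from point 4: if the chain built in
addAll() were counted in an int, count + n could wrap negative and slip past a
capacity check. The sketch below is illustrative (the values and method names
are not from the JDK) and just contrasts the two arithmetic choices:

```java
// Contrast int-counted vs long-counted capacity checks for a bulk add.
public class AddAllOverflow {
    static boolean fitsWithIntCount(int count, int n, int capacity) {
        return count + n <= capacity;        // int addition can overflow
    }
    static boolean fitsWithLongCount(int count, long n, int capacity) {
        return (long) count + n <= capacity; // widened: no wraparound
    }
    public static void main(String[] args) {
        int count = 10, capacity = Integer.MAX_VALUE;
        int n = Integer.MAX_VALUE;           // a pathologically large chain
        System.out.println(fitsWithIntCount(count, n, capacity));  // true (wrong: wrapped)
        System.out.println(fitsWithLongCount(count, n, capacity)); // false (correct)
    }
}
```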

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/24925#discussion_r2079229217
