divijvaidya commented on PR #13285: URL: https://github.com/apache/kafka/pull/13285#issuecomment-1450075461
I am afraid the latest approach will not work either. `ArrayBuffer` may be performing an internal size expansion while we read `dataPlaneProcessors.size` or iterate over it with `map`, and this concurrent access could lead to undefined results (notably, this would not have been a problem if we were using a fixed-size array). Also, note that the current implementation works for `controlPlaneAcceptorOpt` because it does not access the `processors` ArrayBuffer outside the lock.

When I mentioned option 1, fine-grained locking, I actually meant locking on the processors object instead of locking on the entire SocketServer object. If we go down this path, we will have to change other places in the file to acquire this processor lock when mutating, and we also have to ensure that no deadlock can occur when acquiring both the SocketServer lock and the processors lock. Hence, my suggestion would be to opt for a lock-free, concurrently readable data structure for storing processors. Here are our requirements for such a data structure:

- we don't mutate the data structure frequently, so even if writes are slow, we are ok with that
- we require lock-free concurrent reads, since we perform a read on every connection setup and every time we emit a metric
- the size of the data structure is going to be small, in the tens to low hundreds of entries
- the data structure should be able to expand its size, since we allow dynamic shrinking and expanding

Based on the above, we could use a `ConcurrentHashMap` or a `CopyOnWriteArrayList` for storing processors. A rough sketch of the `CopyOnWriteArrayList` option is shown below.
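To illustrate the idea, here is a minimal sketch of the `CopyOnWriteArrayList` option. This is not the actual SocketServer code: the `Processor` case class and the `addProcessors`/`removeProcessors`/`idleMetric` names are hypothetical stand-ins, and in the real server the writes would still run under the existing SocketServer lock; the point is only that the read paths no longer need any lock.

```scala
import java.util.concurrent.CopyOnWriteArrayList
import scala.jdk.CollectionConverters._

// Hypothetical stand-in for the real Processor class; all names here are illustrative only.
final case class Processor(id: Int)

object ProcessorRegistrySketch {
  // Lock-free reads: every read (connection setup, metric emission) iterates over an
  // immutable snapshot of the backing array, so a concurrent resize can never leave a
  // reader observing a half-expanded buffer.
  private val dataPlaneProcessors = new CopyOnWriteArrayList[Processor]()

  // Writes copy the whole backing array, which is acceptable because resizes of the
  // network-thread pool are rare and the list stays small (tens to low hundreds of entries).
  // In the real server these mutations could remain guarded by the SocketServer lock.
  def addProcessors(newProcessors: Seq[Processor]): Unit =
    dataPlaneProcessors.addAll(newProcessors.asJava)

  def removeProcessors(count: Int): Unit =
    (0 until count).foreach(_ => dataPlaneProcessors.remove(dataPlaneProcessors.size() - 1))

  // Example read paths: no lock is needed here.
  def processorCount: Int = dataPlaneProcessors.size()

  def idleMetric(idleFraction: Processor => Double): Double =
    dataPlaneProcessors.asScala.map(idleFraction).sum
}
```

As a design note, `CopyOnWriteArrayList` keeps the index-based access that round-robin processor selection relies on, while a `ConcurrentHashMap` keyed by processor id would make removing a specific processor simpler; either would satisfy the lock-free read requirement above.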
-- 
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.