patrickstuedi commented on a change in pull request #10798:
URL: https://github.com/apache/kafka/pull/10798#discussion_r703439679



##########
File path: 
streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java
##########
@@ -505,6 +506,14 @@ private void closeOpenIterators() {
         }
     }
 
+    private ByteBuffer createDirectByteBufferAndPut(byte[] bytes) {
+        ByteBuffer directBuffer = ByteBuffer.allocateDirect(bytes.length);

Review comment:
       Re (1), if all use cases are single-threaded then yes, we can allocate 
some buffer(s) as part of the store. Otherwise, if you need to support multiple 
concurrent ops, you could pre-populate a queue with N buffers, and N 
becomes the maximum number of concurrent requests you can serve. If your 
queue is empty, then a request would have to wait until at least one of the 
outstanding requests completes and returns its buffer to the queue. Again, that 
might all not be needed given the API is single-threaded. 
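
   The queue-based pooling idea above could be sketched roughly like this 
(a hypothetical illustration, not code from the PR; the class and method 
names are invented):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: pre-allocate N direct buffers in a blocking queue.
// A request takes a buffer (blocking if all N are in use, which caps the
// number of concurrent requests at N) and returns it when it completes.
public class DirectBufferPool {
    private final BlockingQueue<ByteBuffer> pool;

    public DirectBufferPool(int n, int bufferSize) {
        pool = new ArrayBlockingQueue<>(n);
        for (int i = 0; i < n; i++) {
            pool.add(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    // Blocks until a buffer is free; clears it so it is ready for reuse.
    public ByteBuffer acquire() throws InterruptedException {
        ByteBuffer b = pool.take();
        b.clear();
        return b;
    }

    // Return the buffer so a waiting request can proceed.
    public void release(ByteBuffer buffer) {
        pool.offer(buffer);
    }

    public static void main(String[] args) throws Exception {
        DirectBufferPool bufs = new DirectBufferPool(2, 64);
        ByteBuffer b = bufs.acquire();
        b.put("key".getBytes());
        b.flip();
        System.out.println("remaining=" + b.remaining()); // prints remaining=3
        bufs.release(b);
    }
}
```

   Note this assumes a fixed upper bound on the serialized size (the Re (2) 
question below); a buffer smaller than a request's payload would still need 
a fallback allocation.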
    
   Re (2), is there a maximum size, maybe given by the configured maximum 
Kafka message size (if such a limit exists and is not too big)? 
   
   If we don't want to change the API (I guess it would be the RocksDBStore 
interface that would change, which is not exposed I think, but still), then 
splitting this work into a part I where we copy heap bytes into direct buffers, 
and a part II where we serialize directly into direct buffers, is the way to go.
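
   For reference, the truncated helper in the hunk above presumably continues 
along these lines (a sketch under that assumption, not the PR's actual code):

```java
import java.nio.ByteBuffer;

public class DirectCopyExample {
    // "Part I" style copy: allocate a direct buffer and copy the heap bytes
    // into native memory, then flip so it is ready for reading by JNI code.
    static ByteBuffer createDirectByteBufferAndPut(byte[] bytes) {
        ByteBuffer directBuffer = ByteBuffer.allocateDirect(bytes.length);
        directBuffer.put(bytes);  // heap-to-native copy
        directBuffer.flip();      // position=0, limit=length
        return directBuffer;
    }

    public static void main(String[] args) {
        ByteBuffer b = createDirectByteBufferAndPut(new byte[]{1, 2, 3});
        System.out.println(b.isDirect() + " " + b.remaining()); // prints true 3
    }
}
```

   "Part II" would skip the intermediate heap array entirely by serializing 
straight into the direct buffer.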




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

