viirya commented on issue #885:
URL: 
https://github.com/apache/datafusion-comet/issues/885#issuecomment-2318704751

   I think it is because query execution is always triggered from the JVM side 
(the producer). If the array and schema structures were allocated by the native 
side, the process would become:
   
   1. The JVM calls the native side to get array and schema structures for a 
new batch
   2. The JVM fills the array and schema structures
   3. The JVM calls the native side again to execute the query
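   The three-step handshake above can be sketched as follows. This is a 
minimal illustration in plain Rust: the `FfiArray`/`FfiSchema` structs, the 
function names, and the field layout are all stand-ins for the real Arrow 
`FFI_ArrowArray`/`FFI_ArrowSchema` C structures, not the actual Comet code.

   ```rust
   // Stand-ins for the Arrow C data interface structs (illustrative only).
   #[derive(Default)]
   struct FfiArray { len: usize }
   #[derive(Default)]
   struct FfiSchema { format: String }

   // Step 1: native allocates empty structures and returns raw addresses
   // to the JVM (over JNI, the addresses would travel as jlong values).
   fn alloc_batch() -> (*mut FfiArray, *mut FfiSchema) {
       (
           Box::into_raw(Box::new(FfiArray::default())),
           Box::into_raw(Box::new(FfiSchema::default())),
       )
   }

   // Step 2 happens on the JVM side: the producer fills the structures
   // in place through the addresses it was given.
   fn jvm_fill(array: *mut FfiArray, schema: *mut FfiSchema) {
       unsafe {
           (*array).len = 3;
           (*schema).format = "i".to_string(); // "i" = int32 in Arrow C format strings
       }
   }

   // Step 3: native imports from the addresses it allocated earlier and
   // executes. Taking ownership back here mirrors how the execution
   // context releases the structures when the next batch is produced.
   fn execute(array: *mut FfiArray, schema: *mut FfiSchema) -> usize {
       let (array, schema) = unsafe { (Box::from_raw(array), Box::from_raw(schema)) };
       assert_eq!(schema.format, "i");
       array.len // pretend "execution" just reports the row count
   }

   fn main() {
       let (a, s) = alloc_batch(); // step 1
       jvm_fill(a, s);             // step 2
       let rows = execute(a, s);   // step 3
       println!("rows={rows}");
   }
   ```

   Note that this is why native has to keep the allocated structures alive 
between the two JNI calls: the JVM writes into them after step 1 and before 
step 3.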
   
   > On the native side, the allocated base structures were kept in the 
[ffi_arrays field of the execution 
context](https://github.com/apache/datafusion-comet/blob/0.2.0/native/core/src/execution/jni_api.rs#L311-L312),
 and will be released when the next batch is produced or when the execution 
context is released by Native.releasePlan.
   
   For the output batch, we can provide the array and schema structures from 
the JVM to the native side when the JVM calls it to execute the query, and use 
them for importing. This doesn't change the query flow; it only adds JNI 
parameters to step 3 above.
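   A sketch of that variant, again with illustrative stand-in types rather 
than the real Arrow FFI structs or the actual Comet JNI signature: the JVM 
allocates the output structures up front and passes their addresses down with 
the existing execute call, so native writes the result batch into JVM-owned 
memory instead of keeping its own allocations in `ffi_arrays`.

   ```rust
   // Stand-ins for the Arrow C data interface structs (illustrative only).
   #[derive(Default)]
   struct FfiArray { len: usize }
   #[derive(Default)]
   struct FfiSchema { format: String }

   // Hypothetical native entry point: the two extra parameters are the
   // JVM-provided output addresses added to step 3. Native exports the
   // output batch into them rather than allocating its own structures.
   fn execute_plan(out_array: *mut FfiArray, out_schema: *mut FfiSchema) {
       unsafe {
           (*out_array).len = 42;
           (*out_schema).format = "l".to_string(); // "l" = int64 in Arrow C format strings
       }
   }

   fn main() {
       // JVM side: allocate the output structures, then pass their
       // addresses along with the execute call.
       let mut out_array = FfiArray::default();
       let mut out_schema = FfiSchema::default();
       execute_plan(&mut out_array, &mut out_schema);
       println!("len={} format={}", out_array.len, out_schema.format);
   }
   ```

   With this shape, the lifetime of the output structures is tied to the 
JVM-side caller, so native no longer needs to hold them until the next batch 
or until `Native.releasePlan`.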


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

