acking-you commented on issue #15720:
URL: https://github.com/apache/datafusion/issues/15720#issuecomment-2859288602

   After implementing Rows reuse, I found no improvement in the overall execution of `SortPreservingMergeExec`. @Dandandan
   
   Therefore, I measured the runtime of the `convert_batch` function and 
performed some analysis.
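
   A minimal sketch of how such timings can be collected (the helper name and shape are illustrative, not the actual DataFusion code; the closure stands in for the row conversion call inside `RowCursorStream::convert_batch`):
   
   ```rust
   use std::time::Instant;
   
   // Illustrative only: wrap the row conversion with a wall-clock timer and
   // print the elapsed time, roughly the pattern used to gather the numbers below.
   fn time_convert<T>(convert: impl FnOnce() -> T) -> T {
       let start = Instant::now();
       let out = convert();
       println!("RowCursorStream::convert_batch took {:?}", start.elapsed());
       out
   }
   ```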
   
   ### large string view
   Time taken before optimization:
   ```
   RowCursorStream::convert_batch took 256.61774ms
   RowCursorStream::convert_batch took 250.452429ms
   RowCursorStream::convert_batch took 253.081069ms
   ```
   
   Time taken after optimization:
   ```
   RowCursorStream::convert_batch took 399.365625ms
   RowCursorStream::convert_batch took 555.25968ms
   RowCursorStream::convert_batch took 568.33112ms
   ```
   
   ### large string
   before:
   ```
   RowCursorStream::convert_batch took 257.428132ms
   RowCursorStream::convert_batch took 250.226582ms
   RowCursorStream::convert_batch took 256.96748ms
   ```
   
   after:
   ```
   RowCursorStream::convert_batch took 256.529257ms
   RowCursorStream::convert_batch took 259.770932ms
   RowCursorStream::convert_batch took 264.070756ms
   ```
   
   ### Summary
   There is essentially no improvement. My analysis of the reasons:
   - Reserving capacity on a `Vec` avoids repeated reallocation and copying as the vector grows. Since `Rows` stores all row data contiguously in a single `Vec<u8>`, that kind of copying would be very expensive.
   - However, `Rows` already reserves the full capacity up front and then appends into it, so the copying overhead caused by not specifying a capacity does not exist in the first place (see the sketch after this list).
   - What remains is that `Rows` allocates fresh memory every time via `Vec::with_capacity`. That allocation is what this issue aimed to eliminate, but in practice its overhead turned out to be almost negligible (no significant change for either large strings or `u64`).
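
   A minimal sketch of the allocation pattern described above (plain Rust, not the actual arrow-rs code), assuming the encoded length of every row is known up front:
   
   ```rust
   // The full size is reserved once, then data is appended into it, so there is
   // no repeated grow-and-copy that a reused buffer could save -- only the single
   // allocation itself, which the measurements show is cheap.
   fn build_rows_buffer(encoded_lens: &[usize]) -> Vec<u8> {
       let total: usize = encoded_lens.iter().sum();
       // One exact-size allocation (analogous to Vec::with_capacity in Rows).
       let mut data = Vec::with_capacity(total);
       for &len in encoded_lens {
           // Appending stays within the reserved capacity, so previously written
           // bytes are never copied to a new allocation here.
           data.resize(data.len() + len, 0u8);
       }
       data
   }
   ```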
   
   However, I found some interesting data that might lead to further performance improvements:
   1. With a single column vs. three columns of `u64`, the time taken is `28ms/64ms`.
   2. With a single column vs. three columns of `large string`, the time taken is `212ms/2.1s`.
   3. With a single column vs. three columns of `large string view`, the time taken is `124ms/1.1s`.
   4. The overhead of calling `RowCursorStream::convert_batch` is around 250ms for both `large string` and `large string view`.
   
   When the `ORDER BY` column is a relatively long string, the overhead of constructing Rows is still quite large: the roughly 250ms above accounts for about 1/4 of the total time for `large string view` and about 1/8 for `large string`.
   
   This part of the overhead cannot be improved by reusing Rows. I believe its fundamental cause is memory copying: `convert_columns` has to read the source data and re-encode it into Rows.
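
   For reference, this is roughly how the row format is used (a minimal standalone example of the arrow-rs `RowConverter` API, not the DataFusion code path): `convert_columns` re-encodes every value, including all string bytes, into the Rows buffer, which is where I believe the copying cost above comes from.
   
   ```rust
   use std::sync::Arc;
   
   use arrow::array::{ArrayRef, StringArray};
   use arrow::datatypes::DataType;
   use arrow::row::{RowConverter, Rows, SortField};
   
   // Encode a single string column into the row format. Every string's bytes
   // are copied into the contiguous Rows buffer during convert_columns.
   fn encode(strings: Vec<&str>) -> Rows {
       let converter = RowConverter::new(vec![SortField::new(DataType::Utf8)]).unwrap();
       let col: ArrayRef = Arc::new(StringArray::from(strings));
       converter.convert_columns(&[col]).unwrap()
   }
   ```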
   
   ### Attempts currently being made
   
   My optimization idea is to detect cases where the `ORDER BY` column values are very long and, in such scenarios, skip constructing Rows and instead compare directly against the original Array. I am currently experimenting to see whether this speeds things up; a rough sketch of the idea follows.
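
   A rough sketch of that comparison, assuming a single ascending `ORDER BY` column of string type (nulls and sort options are ignored for brevity; the function name is mine, not existing DataFusion code):
   
   ```rust
   use std::cmp::Ordering;
   
   use arrow::array::StringArray;
   
   // Compare one row from each side directly through the original arrays,
   // instead of first encoding both batches into the row format.
   fn compare_rows(left: &StringArray, l_idx: usize, right: &StringArray, r_idx: usize) -> Ordering {
       left.value(l_idx).cmp(right.value(r_idx))
   }
   ```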

