2010YOUY01 commented on code in PR #14823:
URL: https://github.com/apache/datafusion/pull/14823#discussion_r1967214234
##########
datafusion/physical-plan/src/sorts/sort.rs:
##########
@@ -414,6 +419,66 @@ impl ExternalSorter {
         Ok(used)
     }
 
+    /// Reconstruct `self.in_mem_batches` to organize the payload buffers of each
+    /// `StringViewArray` in sequential order by calling `gc()` on them.
+    ///
+    /// # Rationale
+    /// After (merge-based) sorting, all batches will be sorted into a single run,
+    /// but physically this sorted run is chunked into many small batches. For
+    /// `StringViewArray`s inside each sorted run, the inner buffers are not
+    /// re-constructed by default, leading to non-sequential payload locations
+    /// (permuted by the `interleave()` Arrow kernel). A single payload buffer might
+    /// be shared by multiple `RecordBatch`es.
+    /// When writing each batch to disk, the writer has to write all referenced buffers,
+    /// because they have to be read back one by one to reduce memory usage. This
+    /// causes extra disk reads and writes, and potentially execution failure.
+    ///
+    /// # Example
+    /// Before sorting:
+    /// batch1 -> buffer1
+    /// batch2 -> buffer2
+    ///
+    /// After sorting:
+    /// sorted_batch1 -> buffer1
+    ///               -> buffer2
+    /// sorted_batch2 -> buffer1
+    ///               -> buffer2
+    ///
+    /// Then when spilling each batch, the writer has to write all referenced buffers
+    /// repeatedly.
+    fn organize_stringview_arrays(&mut self) -> Result<()> {
+        let mut organized_batches = Vec::with_capacity(self.in_mem_batches.len());
+
+        for batch in self.in_mem_batches.drain(..) {
+            let mut new_columns: Vec<Arc<dyn Array>> =
+                Vec::with_capacity(batch.num_columns());
+
+            let mut arr_mutated = false;
+            for array in batch.columns() {
+                if let Some(string_view_array) =
+                    array.as_any().downcast_ref::<StringViewArray>()
+                {
+                    let new_array = string_view_array.gc();

Review Comment:
   Good point, there is some inefficiency here; I filed https://github.com/apache/arrow-rs/issues/7184. Once that is done on the `arrow` side, we can remove the copies here to speed things up. IMO it won't cause a regression for DataFusion: the copy only happens when a spill is triggered, and skipping it would cause larger inefficiency and correctness issues.
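
   For readers who want to see the effect in isolation, here is a minimal standalone sketch (not part of this PR; the array contents and printed buffer counts are illustrative). It assumes the behavior described in the doc comment above: `interleave()` carries the data buffers of all its inputs into each output `StringViewArray`, and `gc()` copies only the referenced bytes into fresh, sequential buffers:

   ```rust
   // Minimal sketch of why `gc()` helps before spilling (illustrative, not PR code).
   // Strings are longer than 12 bytes so their payloads live in out-of-line data buffers.
   use arrow::array::{Array, StringViewArray};
   use arrow::compute::interleave;
   use arrow::error::ArrowError;

   fn main() -> Result<(), ArrowError> {
       // Two input arrays, each backed by its own payload buffer.
       let a = StringViewArray::from_iter_values(vec!["a_string_longer_than_12_bytes"; 4]);
       let b = StringViewArray::from_iter_values(vec!["b_string_longer_than_12_bytes"; 4]);

       // Simulate a merge: output rows alternate between the two inputs.
       let merged = interleave(&[&a, &b], &[(0, 0), (1, 0), (0, 1), (1, 1)])?;
       let merged_views = merged
           .as_any()
           .downcast_ref::<StringViewArray>()
           .expect("interleave keeps the StringView type");

       // The merged array still references payload buffers from both inputs.
       println!("data buffers before gc: {}", merged_views.data_buffers().len());

       // `gc()` rewrites the payload into freshly allocated, sequential buffer(s),
       // so a spill writer no longer has to serialize every shared buffer for
       // every batch that references it.
       let compacted = merged_views.gc();
       println!("data buffers after gc:  {}", compacted.data_buffers().len());

       Ok(())
   }
   ```

   As noted above, the extra copy in the sketch (and in `organize_stringview_arrays`) is only paid on the spill path.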