EmilyMatt commented on PR #19695:
URL: https://github.com/apache/datafusion/pull/19695#issuecomment-3723707998

   > I have a question: let's say we have only 100MB of memory left, and there 
is a 1GB batch arriving at the `SortExec`, and this PR makes it possible to 
sort this batch in memory and write it to a single spill file.
   > 
   > Sorting it in memory and incrementally appending it to the spill file 
still needs extra memory. The amount should be the memory usage of the sort 
columns in the original large batch, so in the worst case it is also around 
1GB. This should not be possible. Are we trying to ignore the memory limit and 
sort and spill anyway in this PR?
   > 
   > I believe that, for internal operators, outputting batches in `batch_size` 
is a convention. This convention can greatly simplify operator implementation; 
otherwise, all operators have to consider extremely large or extremely small 
input batches, which would make long-term maintenance very hard.
   > 
   > The root cause of this issue, I think, is that `AggregateExec` is not 
respecting this convention and can potentially output batches that are much 
larger than `batch_size`. What do you think about moving this fix to 
`AggregateExec` instead, so it has internal spilling to ensure it does not 
output large batches? This seems like an issue that should be addressed outside 
`SortExec`.
   
   @2010YOUY01 I agree, this PR does not exactly resolve the issue I opened.
   While it makes sense to do this in `AggregateExec`, and in every other operator that emits batches, it does not resolve the underlying problem.
   
   @Nachiket-Roy
   `sort_batch_chunked` is eager, meaning it will still peak at 2x the batch memory.
   The idea is to calculate the sort indices once, then each time `take` the next `batch_size` indices and spill the resulting chunk immediately. That way we only need the original batch's reservation, plus an additional smaller reservation for the `batch_size` rows we `take` (see the sketch below).
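
   As a rough illustration, here is a minimal sketch of that idea built on the arrow-rs `lexsort_to_indices` and `take` kernels; this is not the PR's code, and the `spill` callback is a hypothetical stand-in for `SortExec`'s spill-file writer:

   ```rust
   use arrow::array::UInt32Array;
   use arrow::compute::{lexsort_to_indices, take, SortColumn};
   use arrow::error::ArrowError;
   use arrow::record_batch::RecordBatch;

   /// Sort `batch` once, then emit it in `batch_size` chunks, spilling each
   /// chunk as soon as it is produced. Beyond the original batch, only the
   /// permutation indices plus one small chunk are alive at any point.
   fn sort_and_spill_chunked(
       batch: &RecordBatch,
       sort_columns: &[SortColumn],
       batch_size: usize,
       mut spill: impl FnMut(RecordBatch) -> Result<(), ArrowError>,
   ) -> Result<(), ArrowError> {
       assert!(batch_size > 0);
       // One extra allocation proportional to the row count: the permutation.
       let indices: UInt32Array = lexsort_to_indices(sort_columns, None)?;

       for start in (0..indices.len()).step_by(batch_size) {
           let len = batch_size.min(indices.len() - start);
           // Zero-copy view of the next `batch_size` sorted positions.
           let chunk = indices.slice(start, len);
           // `take` materializes only `len` rows per column.
           let columns = batch
               .columns()
               .iter()
               .map(|c| take(c.as_ref(), &chunk, None))
               .collect::<Result<Vec<_>, _>>()?;
           spill(RecordBatch::try_new(batch.schema(), columns)?)?;
       }
       Ok(())
   }
   ```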
   
   We can estimate how much memory that is, but generally we're talking about a few megabytes of extra memory, which is nothing compared to the practically unlimited size of the original batch.
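
   One possible estimate (an assumption on my part, not something this PR implements) is to pro-rate the batch's total Arrow buffer size by the chunk's row count:

   ```rust
   use arrow::record_batch::RecordBatch;

   /// Rough per-chunk memory estimate: pro-rate the batch's Arrow buffer
   /// size by the rows in one chunk. Variable-length columns make this
   /// inexact, but it is close enough to reserve the few extra megabytes
   /// a chunk needs before calling `take`.
   fn estimate_chunk_bytes(batch: &RecordBatch, batch_size: usize) -> usize {
       let rows = batch.num_rows().max(1);
       batch.get_array_memory_size() * batch_size.min(rows) / rows
   }
   ```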


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

