steveloughran commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2256520565

   * The updated PR has the new field; it still needs documenting, though.
   * Pulled out the fault injector class for reuse.
   
   Based on @shameersss1's comments, I've reviewed how S3ABlockOutputStream aborts:
   
   * Use our own FutureIO to wait for results; this unwraps exceptions
     for us.
   * On InterruptedIOException, the upload is aborted but no attempt is made
     to cancel the requests (things are being interrupted, after all).
   * An atomic boolean, stopFutureUploads, signals to future uploads that they
     should skip uploading but still clean up their data.
   * When the wait for a future IO operation is interrupted, no attempt
     is made to cancel/interrupt the uploads, but that flag is still set.
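
   The wait/unwrap/flag behaviour described above can be sketched in plain Java. FutureIO is Hadoop's real helper, but this self-contained stand-in (the `UploadWaiter`/`awaitUpload` names are hypothetical) shows the pattern: unwrap ExecutionException into the underlying IOException, and on interruption set the stop flag rather than cancelling anything:

   ```java
   import java.io.IOException;
   import java.io.InterruptedIOException;
   import java.io.UncheckedIOException;
   import java.util.concurrent.ExecutionException;
   import java.util.concurrent.Future;
   import java.util.concurrent.atomic.AtomicBoolean;

   /**
    * Illustrative stand-in for the pattern described above; not the
    * actual FutureIO API.
    */
   class UploadWaiter {
     /** Signals queued uploads to skip the network call but still clean up. */
     final AtomicBoolean stopFutureUploads = new AtomicBoolean(false);

     <T> T awaitUpload(Future<T> future) throws IOException {
       try {
         return future.get();
       } catch (InterruptedException e) {
         // No attempt to cancel the in-flight request: just tell future
         // uploads to stop, then surface the interruption as IO.
         stopFutureUploads.set(true);
         Thread.currentThread().interrupt();
         InterruptedIOException ioe = new InterruptedIOException(e.toString());
         ioe.initCause(e);
         throw ioe;
       } catch (ExecutionException e) {
         // Unwrap so callers see the real failure rather than the wrapper.
         Throwable cause = e.getCause();
         if (cause instanceof IOException) {
           throw (IOException) cause;
         }
         if (cause instanceof UncheckedIOException) {
           throw ((UncheckedIOException) cause).getCause();
         }
         throw new IOException(cause);
       }
     }
   }
   ```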
     
   I'm now unsure what the best policy is to avoid ever leaking buffers
   if an upload is cancelled.
   
   1. Should we ever use future.cancel(), or just set stopFutureUploads,
      knowing the uploads will be skipped?
   2. Would we want the upload stream to somehow trigger a failure which gets
      through the SDK (i.e. no retries) and then exits?
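
   For option 1, the worker side of the flag-only approach might look like the sketch below: each queued part upload checks the shared flag before doing any network work, and the block buffer is released on every path. All names here are illustrative, not the actual S3A classes:

   ```java
   import java.util.concurrent.Callable;
   import java.util.concurrent.atomic.AtomicBoolean;
   import java.util.concurrent.atomic.AtomicInteger;

   /**
    * Hypothetical part-upload task: consults stopFutureUploads before
    * uploading; frees its buffer whether or not the upload was skipped.
    */
   class PartUploadTask implements Callable<Boolean> {
     private final AtomicBoolean stopFutureUploads;
     private final AtomicInteger buffersFreed;

     PartUploadTask(AtomicBoolean stop, AtomicInteger freed) {
       this.stopFutureUploads = stop;
       this.buffersFreed = freed;
     }

     @Override
     public Boolean call() {
       try {
         if (stopFutureUploads.get()) {
           return false;               // skip the PUT entirely
         }
         // ... perform the part upload here ...
         return true;
       } finally {
         buffersFreed.incrementAndGet();  // buffer released on every path
       }
     }
   }
   ```

   The attraction over future.cancel() is that a skipped task still runs its cleanup, so buffers never leak even when the executor has already dequeued the task.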
    
   We could do this now that we have our own content provider: raise a 
nonrecoverable AwsClientException...
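
For option 2, the content-provider side could fail fast once cancelled, so the SDK's read loop aborts instead of retrying. Raising a genuinely nonrecoverable client-side exception needs the SDK's own exception types; in this self-contained sketch a plain IOException stands in, and the class name is invented:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Hypothetical content stream that throws as soon as the upload has
 * been cancelled, rather than continuing to serve bytes to the SDK.
 */
class CancellableContentStream extends InputStream {
  private final InputStream inner;
  private final AtomicBoolean cancelled;

  CancellableContentStream(InputStream inner, AtomicBoolean cancelled) {
    this.inner = inner;
    this.cancelled = cancelled;
  }

  @Override
  public int read() throws IOException {
    if (cancelled.get()) {
      // With the real content provider this would be the point to raise
      // a nonrecoverable client exception so the SDK does not retry.
      throw new IOException("upload cancelled: aborting stream");
    }
    return inner.read();
  }
}
```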


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

