steveloughran commented on PR #3204:
URL: https://github.com/apache/parquet-java/pull/3204#issuecomment-2841482161

   * the stream flush() ensures that all buffered data is pushed into whatever 
the final upload buffers of the client are, so close() is guaranteed to have 
all its data.
   
   * hflush() pushes the data out to all the HDFS data nodes, which is done 
immediately afterwards in close() anyway.
   * s3afs tells people off for hflush(), as it's part of the Syncable 
interface, whose hsync() method is absolutely unsupported: code trying to use 
that (hbase, some spark streaming committers) does not get the semantics it 
requires.
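   The flush()-into-upload-buffers behaviour in the first point can be 
sketched with a toy stream. This is a simplified stand-in, not the real 
S3ABlockOutputStream: flush() moves locally buffered bytes into pending 
upload blocks, so by the time close() runs, every byte written is already 
queued for upload.

   ```java
   import java.io.ByteArrayOutputStream;
   import java.io.OutputStream;
   import java.util.ArrayList;
   import java.util.List;

   // Toy sketch only: a stream whose flush() drains the local buffer into
   // "upload blocks", mirroring how a client's flush() hands buffered data
   // to its final upload buffers before close() completes the write.
   class ToyBlockOutputStream extends OutputStream {
       private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
       final List<byte[]> uploadedBlocks = new ArrayList<>();
       private boolean closed;

       @Override
       public void write(int b) {
           buffer.write(b);             // data lands in the local buffer first
       }

       @Override
       public void flush() {
           if (buffer.size() > 0) {     // push buffered data into an upload block
               uploadedBlocks.add(buffer.toByteArray());
               buffer.reset();
           }
       }

       @Override
       public void close() {
           if (!closed) {
               flush();                 // close() picks up anything still buffered
               closed = true;
           }
       }
   }
   ```

   The point is that flush() only guarantees the data has reached the 
client's upload buffers; unlike hflush()/hsync(), it promises nothing about 
durability on the far end.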
   
   In https://github.com/apache/hadoop/pull/7662 I downgrade 
S3ABlockOutputStream.hflush() to just incrementing statistics and logging at 
debug level (it's not the critical method), but that will be in 3.4.2+ only.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

