hvanhovell commented on code in PR #52610:
URL: https://github.com/apache/spark/pull/52610#discussion_r2429749495
##########
sql/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectAddArtifactsHandler.scala:
##########
@@ -259,11 +279,25 @@ class SparkConnectAddArtifactsHandler(val responseObserver: StreamObserver[AddAr
def close(): Unit = {
if (artifactSummary == null) {
- checksumOut.close()
+ // Close streams defensively: even if outer stream close fails, try inner streams.
+ // Normally checksumOut.close() cascades to inner streams, but if it throws an
+ // exception, we still attempt to close the inner streams to prevent resource leaks.
+ var closeException: Throwable = null
+ try {
+ checksumOut.close()
Review Comment:
While I find it commendable that you are trying to close the inner output
streams, I think this is slightly over-engineered. If you look at the
implementations of both CheckedOutputStream and CountingOutputStream,
you'll see that close is guaranteed to be called on their inner stream. So we
don't really have to go through this dance. Alternatively, you can close
`fileOut` directly.
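
The cascading behavior the reviewer relies on can be demonstrated in isolation. Both stream classes extend `java.io.FilterOutputStream`, whose `close()` is guaranteed to close the wrapped stream (since JDK 8 it does so via try-with-resources, so the inner stream is closed even if `flush()` throws). The sketch below uses a hypothetical `TrackingOutputStream` to observe the propagation; the `CheckedOutputStream`/`CRC32` pairing mirrors the handler's `checksumOut`, but the names here are illustrative, not the actual Spark code.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.CRC32;
import java.util.zip.CheckedOutputStream;

public class CascadeCloseDemo {
    // Hypothetical wrapper that records whether close() was called on it.
    static class TrackingOutputStream extends FilterOutputStream {
        boolean closed = false;

        TrackingOutputStream(OutputStream out) {
            super(out);
        }

        @Override
        public void close() throws IOException {
            closed = true;
            super.close(); // FilterOutputStream.close() closes the wrapped stream
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the file stream, wrapped the same way checksumOut wraps its inner streams.
        TrackingOutputStream inner = new TrackingOutputStream(new ByteArrayOutputStream());
        CheckedOutputStream outer = new CheckedOutputStream(inner, new CRC32());

        outer.close(); // closing only the outermost stream...

        System.out.println(inner.closed); // ...also closed the inner stream: prints "true"
    }
}
```

Because the cascade is guaranteed by `FilterOutputStream` itself, a single `checksumOut.close()` already releases the underlying file handle, which is why the extra try/catch bookkeeping in the diff is unnecessary.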
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]