ForVic commented on code in PR #51882:
URL: https://github.com/apache/spark/pull/51882#discussion_r2298889743


##########
core/src/main/scala/org/apache/spark/SparkContext.scala:
##########
@@ -2934,8 +2934,15 @@ class SparkContext(config: SparkConf) extends Logging {
     _driverLogger.foreach(_.startSync(_hadoopConfiguration))
   }
 
-  /** Post the application end event */
+  /** Post the application end event and report the final heartbeat */
   private def postApplicationEnd(exitCode: Int): Unit = {
+    try {
+      _heartbeater.doReportHeartbeat()

Review Comment:
   Yes, the goal here is best effort: since we are already in a 'failure' 
scenario it may not always succeed, but when it does, we get a better signal. 
We already report memory statistics every `EXECUTOR_HEARTBEAT_INTERVAL` 
seconds, regardless of memory pressure: 
https://github.com/apache/spark/blob/bc36a7db43f287af536bb2767d7d9f1d70bc799f/core/src/main/scala/org/apache/spark/SparkContext.scala#L617
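   The best-effort pattern under discussion can be sketched as follows. This 
is an illustrative standalone example, not Spark's actual code: the 
`reportFinalHeartbeat` helper and its signature are hypothetical, standing in 
for the `try`-wrapped `_heartbeater.doReportHeartbeat()` call in the diff. The 
point is that a failed final heartbeat is swallowed (and logged) so that 
posting the application-end event is never blocked.

```scala
// Hypothetical sketch of the best-effort final heartbeat: attempt the
// report, but never let a failure there prevent shutdown from proceeding.
object BestEffortHeartbeat {
  // Runs `report`; returns true if it succeeded, false if it threw.
  // The caller continues either way (e.g. to post the application-end event).
  def reportFinalHeartbeat(report: () => Unit): Boolean = {
    try {
      report() // may fail: we are already in a 'failure' scenario
      true
    } catch {
      case e: Exception =>
        // Swallow and log; shutdown must still complete.
        println(s"Final heartbeat failed (best effort): ${e.getMessage}")
        false
    }
  }
}
```

   When the report succeeds, the final memory statistics give a better signal 
than the last periodic heartbeat, which may be up to 
`EXECUTOR_HEARTBEAT_INTERVAL` seconds stale.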



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

