[ https://issues.apache.org/jira/browse/HIVE-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16388881#comment-16388881 ]
Sahil Takiar commented on HIVE-18831:
-------------------------------------

Before this patch the console output would look like:

{code}
Job failed with org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: An error occurred when trying to close the Operator running your custom script.
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
{code}

Now it looks like:

{code}
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed due to Spark task failures: Job failed with org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: An error occurred when trying to close the Operator running your custom script.
{code}

So the patch pretty much just combines these two lines into one and cleans up the error message a bit; a rough sketch of the idea follows the quoted issue below.

> Differentiate errors that are thrown by Spark tasks
> ---------------------------------------------------
>
>                 Key: HIVE-18831
>                 URL: https://issues.apache.org/jira/browse/HIVE-18831
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Priority: Major
>         Attachments: HIVE-18831.1.patch
>
> We propagate exceptions from Spark task failures to the client well, but we
> don't differentiate between errors from HS2 / RSC vs. errors thrown by
> individual tasks.
> The main motivation is that when the client sees a propagated Spark exception, it's
> difficult to know which part of the execution threw the exception.
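For illustration only, here is a minimal, self-contained sketch of the general idea, not the actual HIVE-18831 patch: walk the failure's cause chain to the innermost exception and fold its message into the single SparkTask error line. The class and method names below are made up for the example.

{code}
// Illustrative sketch only -- not the actual HIVE-18831 patch. All names
// here (SparkTaskErrorFormatter, formatTaskFailure) are hypothetical.
public final class SparkTaskErrorFormatter {

  // Return code SparkTask reports for runtime job failures, per the
  // console output quoted above.
  private static final int RUNTIME_FAILURE_CODE = 3;

  private SparkTaskErrorFormatter() {}

  // Walk the cause chain to the innermost exception, guarding against cycles.
  private static Throwable rootCause(Throwable t) {
    Throwable cause = t;
    while (cause.getCause() != null && cause.getCause() != cause) {
      cause = cause.getCause();
    }
    return cause;
  }

  // Fold the task-level root cause into the single SparkTask error line.
  public static String formatTaskFailure(Throwable sparkJobError) {
    return "FAILED: Execution Error, return code " + RUNTIME_FAILURE_CODE
        + " from org.apache.hadoop.hive.ql.exec.spark.SparkTask."
        + " Spark job failed due to Spark task failures: "
        + rootCause(sparkJobError).getMessage();
  }

  public static void main(String[] args) {
    // Simulate a task failure wrapped the way the quoted output shows it.
    Throwable taskError = new RuntimeException("job aborted",
        new RuntimeException("Job failed with"
            + " org.apache.hadoop.hive.ql.metadata.HiveException:"
            + " [Error 20003]: An error occurred when trying to close the"
            + " Operator running your custom script."));
    System.out.println(formatTaskFailure(taskError));
  }
}
{code}

Keeping the root-cause message on the same FAILED line means a client that only captures the final status line still sees why the tasks failed, which is the differentiation the issue asks for.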