Github user davies commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3003#discussion_r19636342
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
    @@ -210,25 +213,26 @@ private[spark] class Executor(
             val resultSize = serializedDirectResult.limit
     
             // directSend = sending directly back to the driver
    -        val (serializedResult, directSend) = {
    -          if (resultSize >= akkaFrameSize - AkkaUtils.reservedSizeBytes) {
    +        val serializedResult = {
    +          if (resultSize > maxResultSize) {
    +            logInfo(s"Finished $taskName (TID $taskId). result is too large (${resultSize} bytes),"
    --- End diff --
    
    This logging happens on the executor, so it is not directly user-facing in
    cluster mode.
    
    The logging in the driver is more important: one error log and a message for
    abort(). I will improve those two, but keep this one simple.
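    For context, the size check in the quoted diff can be sketched as a
    standalone decision function. This is only an illustration, not the patch
    itself: the `Route` type, the `maxDirectResultSize` threshold, and the
    function name are hypothetical; only the `resultSize > maxResultSize`
    comparison mirrors the diff.

    ```scala
    // Hypothetical sketch of the size-based routing decision for a task result.
    object ResultRouting {
      sealed trait Route
      case object DroppedTooLarge extends Route  // result exceeds the configured cap
      case object ViaBlockManager extends Route  // too big to send inline over RPC
      case object DirectSend      extends Route  // small enough to send directly

      def route(resultSize: Long,
                maxResultSize: Long,
                maxDirectResultSize: Long): Route =
        if (maxResultSize > 0 && resultSize > maxResultSize) DroppedTooLarge
        else if (resultSize > maxDirectResultSize) ViaBlockManager
        else DirectSend
    }
    ```

    Under this sketch, only the first branch would produce the executor-side
    log line shown in the diff; the driver would separately log an error when
    it sees a dropped result.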


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
