GitHub user markhamstra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/152#discussion_r10637484
  
    --- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
    @@ -35,13 +35,21 @@ private[spark] case class GetMapOutputStatuses(shuffleId: Int)
       extends MapOutputTrackerMessage
     private[spark] case object StopMapOutputTracker extends MapOutputTrackerMessage
     
    -private[spark] class MapOutputTrackerMasterActor(tracker: MapOutputTrackerMaster)
    +private[spark] class MapOutputTrackerMasterActor(tracker: MapOutputTrackerMaster, conf: SparkConf)
       extends Actor with Logging {
    +  val maxAkkaFrameSize = AkkaUtils.maxFrameSizeBytes(conf)
    +
       def receive = {
         case GetMapOutputStatuses(shuffleId: Int) =>
           val hostPort = sender.path.address.hostPort
           logInfo("Asked to send map output locations for shuffle " + shuffleId + " to " + hostPort)
    -      sender ! tracker.getSerializedMapOutputStatuses(shuffleId)
    +      val mapOutputStatuses = tracker.getSerializedMapOutputStatuses(shuffleId)
    +      val serializedSize = mapOutputStatuses.size
    +      if (serializedSize > maxAkkaFrameSize) {
    +        throw new SparkException(s"Map output statuses were $serializedSize bytes which " +
    +          s"exceeds spark.akka.frameSize ($maxAkkaFrameSize bytes).")
    --- End diff ---
    
    What are you expecting to happen to this thrown exception? From what I'm
seeing, nothing will happen except that this actor gets killed and a new one
is started in its place; the requester never receives a reply.
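
    For reference, a minimal sketch of the concern, assuming Akka's default
supervision strategy: an exception thrown inside `receive` never reaches the
remote requester; the supervisor just restarts the actor, and the asking side
waits until its ask timeout fires. The `StatusesTooLarge` message and the
other names below are hypothetical, not part of this patch; replying with an
explicit failure message is one way the requester could actually learn about
the problem.

        import akka.actor.Actor

        // Hypothetical stand-ins for the real MapOutputTracker protocol.
        case class GetStatuses(shuffleId: Int)
        case class Statuses(bytes: Array[Byte])
        case class StatusesTooLarge(actualSize: Int, maxSize: Int)

        class TrackerActor(maxFrameSize: Int) extends Actor {
          // Stand-in for tracker.getSerializedMapOutputStatuses(shuffleId).
          private def serializedStatuses(shuffleId: Int): Array[Byte] =
            Array.fill(16 * 1024 * 1024)(0: Byte)

          def receive = {
            case GetStatuses(shuffleId) =>
              val bytes = serializedStatuses(shuffleId)
              if (bytes.length > maxFrameSize) {
                // Reply with an explicit error so the requester is told about
                // the failure. Throwing here instead would only kill this
                // actor; the default supervision strategy restarts it, and
                // the sender would block until its ask timeout fires.
                sender ! StatusesTooLarge(bytes.length, maxFrameSize)
              } else {
                sender ! Statuses(bytes)
              }
          }
        }

    Whether the reply is a custom error message, an akka.actor.Status.Failure
(which an ask converts into a failed Future on the caller's side), or a
truncated payload is a design choice; the point is just that the requester
needs some signal it can act on.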

