Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11886#discussion_r56989619
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -443,13 +443,12 @@ private[spark] class MapOutputTrackerMaster(conf: SparkConf)
statuses = mapStatuses.getOrElse(shuffleId, Array[MapStatus]())
epochGotten = epoch
}
- }
- // If we got here, we failed to find the serialized locations in the cache, so we pulled
- // out a snapshot of the locations as "statuses"; let's serialize and return that
- val bytes = MapOutputTracker.serializeMapStatuses(statuses)
- logInfo("Size of output statuses for shuffle %d is %d bytes".format(shuffleId, bytes.length))
- // Add them into the table only if the epoch hasn't changed while we were working
- epochLock.synchronized {
+
+ // If we got here, we failed to find the serialized locations in the cache, so we pulled
--- End diff --
Are you sure this is related to your problem? You've now moved the serialization
into code that holds the lock, which has a downside: other threads are blocked
for the duration of the serialization. I'm not clear from your description what
the problem is. What do you mean by "serialize Mapstatus is in serial model from
different shuffle stage"?
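For context, the pre-PR code follows a common pattern: snapshot the mutable state under the lock, do the expensive serialization outside the lock, then re-acquire the lock and cache the result only if the epoch is unchanged. A minimal, self-contained sketch of that pattern (the `EpochCache` object and its toy integer payload are hypothetical, standing in for Spark's `MapOutputTrackerMaster` and `MapStatus` machinery):

```scala
import java.util.concurrent.ConcurrentHashMap

// Hypothetical sketch, not Spark's actual API: snapshot under the lock,
// serialize OUTSIDE the lock, then cache under the lock only if the epoch
// did not change while we were serializing.
object EpochCache {
  private val epochLock = new Object
  private var epoch: Long = 0L
  private var statuses: Seq[Int] = Seq(1, 2, 3)  // stand-in for MapStatus data
  private val cache = new ConcurrentHashMap[Long, Array[Byte]]()

  def serializeStatuses(): Array[Byte] = {
    // 1) Snapshot the mutable state under the lock; remember the epoch we saw.
    val (snapshot, epochGotten) = epochLock.synchronized {
      (statuses, epoch)
    }
    // 2) Potentially slow serialization runs without holding the lock,
    //    so concurrent readers and epoch bumps are not blocked.
    val bytes = snapshot.mkString(",").getBytes("UTF-8")
    // 3) Publish to the cache only if no epoch change raced with step 2.
    epochLock.synchronized {
      if (epoch == epochGotten) cache.put(epochGotten, bytes)
    }
    bytes
  }
}
```

Moving step 2 inside a single `synchronized` block (as the PR does) simplifies the epoch re-check away, but at the cost the comment above points out: the lock is held for the full serialization time.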