Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11886#discussion_r57118942
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -443,13 +443,12 @@ private[spark] class MapOutputTrackerMaster(conf: SparkConf)
        statuses = mapStatuses.getOrElse(shuffleId, Array[MapStatus]())
        epochGotten = epoch
      }
-    }
-    // If we got here, we failed to find the serialized locations in the cache, so we pulled
-    // out a snapshot of the locations as "statuses"; let's serialize and return that
-    val bytes = MapOutputTracker.serializeMapStatuses(statuses)
-    logInfo("Size of output statuses for shuffle %d is %d bytes".format(shuffleId, bytes.length))
-    // Add them into the table only if the epoch hasn't changed while we were working
-    epochLock.synchronized {
+
+    // If we got here, we failed to find the serialized locations in the cache, so we pulled
--- End diff --
OK, to paraphrase: everyone finds the cache is empty one by one very quickly, and everyone proceeds to load the cached value. Isn't it simpler to put this load in the `cachedSerializedStatuses.get(shuffleId) match ... None =>` block then?

This code is pretty old, but CC @mateiz or @JoshRosen in case they have an opinion, since they touched it last.
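For context, the restructuring being suggested can be sketched as follows. This is a hypothetical, simplified version, not the actual Spark code: `StatusCacheSketch` and `serialize` are stand-in names, and the epoch-invalidation logic the real method performs is omitted. The point it illustrates is doing the expensive serialization directly in the `None =>` branch of the cache lookup, rather than falling through the match and doing the work afterwards.

```scala
import scala.collection.mutable

// Hypothetical sketch only: StatusCacheSketch and serialize are stand-ins,
// not Spark APIs; the real code also invalidates the cache on epoch changes.
object StatusCacheSketch {
  private val epochLock = new Object
  private val cachedSerializedStatuses = mutable.Map.empty[Int, Array[Byte]]

  // Stand-in for MapOutputTracker.serializeMapStatuses(statuses).
  private def serialize(shuffleId: Int): Array[Byte] =
    s"statuses-for-$shuffleId".getBytes("UTF-8")

  def getSerializedStatuses(shuffleId: Int): Array[Byte] = epochLock.synchronized {
    cachedSerializedStatuses.get(shuffleId) match {
      case Some(bytes) =>
        bytes // cache hit: return immediately
      case None =>
        // Cache miss: do the load/serialize right here, in the None branch,
        // instead of falling through and redoing the lookup logic outside.
        val bytes = serialize(shuffleId)
        cachedSerializedStatuses(shuffleId) = bytes
        bytes
    }
  }
}
```

One caveat this sketch glosses over: serializing while holding `epochLock` blocks all other callers for the duration, whereas the real method deliberately serializes outside the lock, which may be part of why the code is structured the way it is.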