changgyoopark-db commented on code in PR #49584:
URL: https://github.com/apache/spark/pull/49584#discussion_r1936815612


##########
sql/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SessionHolder.scala:
##########
@@ -440,46 +443,64 @@ case class SessionHolder(userId: String, sessionId: String, session: SparkSessio
    * `spark.connect.session.planCache.enabled` is true.
    * @param rel
    *   The relation to transform.
-   * @param cachePlan
-   *   Whether to cache the result logical plan.
    * @param transform
    *   Function to transform the relation into a logical plan.
    * @return
-   *   The logical plan.
+   *   The logical plan and a flag indicating that the plan cache was hit.
    */
-  private[connect] def usePlanCache(rel: proto.Relation, cachePlan: Boolean)(
-      transform: proto.Relation => LogicalPlan): LogicalPlan = {
-    val planCacheEnabled = Option(session)
-      .forall(_.sessionState.conf.getConf(Connect.CONNECT_SESSION_PLAN_CACHE_ENABLED, true))
-    // We only cache plans that have a plan ID.
-    val hasPlanId = rel.hasCommon && rel.getCommon.hasPlanId
-
-    def getPlanCache(rel: proto.Relation): Option[LogicalPlan] =
-      planCache match {
-        case Some(cache) if planCacheEnabled && hasPlanId =>
-          Option(cache.getIfPresent(rel)) match {
-            case Some(plan) =>
-              logDebug(s"Using cached plan for relation '$rel': $plan")
-              Some(plan)
-            case None => None
-          }
-        case _ => None
-      }
-    def putPlanCache(rel: proto.Relation, plan: LogicalPlan): Unit =
-      planCache match {
-        case Some(cache) if planCacheEnabled && hasPlanId =>
-          cache.put(rel, plan)
-        case _ =>
+  private[connect] def usePlanCache(rel: proto.Relation)(
+      transform: proto.Relation => LogicalPlan): (LogicalPlan, Boolean) =
+    planCache match {
+      case Some(cache) if planCacheEnabled(rel) =>
+        Option(cache.getIfPresent(rel)) match {

Review Comment:
   That's a good point, but unfortunately it's a Guava cache, which lacks `computeIfAbsent`. Even with `computeIfAbsent`, you can't fully avoid concurrent recomputation, since the key only becomes visible once the value is ready; avoiding duplicate work requires a mutex.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

