zhengruifeng commented on code in PR #50013:
URL: https://github.com/apache/spark/pull/50013#discussion_r1970737074


##########
sql/connect/server/src/main/scala/org/apache/spark/sql/connect/ml/MLHandler.scala:
##########
@@ -125,6 +127,15 @@ private[connect] object MLHandler extends Logging {
         val dataset = MLUtils.parseRelationProto(fitCmd.getDataset, sessionHolder)
         val estimator =
           MLUtils.getEstimator(sessionHolder, estimatorProto, Some(fitCmd.getParams))
+
+        // pre-training model size check
+        val maxSize = conf.getConf(Connect.CONNECT_SESSION_ML_CACHE_SINGLE_ITEM_SIZE)
+        if (maxSize > 0) {
+          val estimatedSize = estimator.estimateModelSize(dataset)

Review Comment:
   That depends; we are trying to estimate an upper bound of the model size. For example:
   
   > - Given a LogisticRegression estimator, assume the coefficients are dense, even though the actual fitted model might be sparse (e.g. due to an L1 penalty).
   > - Given a tree model, assume all underlying trees are complete binary trees, even though some branches might be pruned or truncated.
   
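[Editor's note] Below is a minimal Scala sketch of the kind of upper-bound heuristics the two bullets above describe. It is not the PR's actual `estimateModelSize` implementation; the object and method names, the per-node byte cost, and the 8-bytes-per-Double figure are illustrative assumptions only.

```scala
// Hypothetical sketch of pre-training upper-bound sizing heuristics.
object ModelSizeUpperBound {

  /** LogisticRegression: assume a dense coefficient matrix plus intercepts,
   *  even if the fitted model ends up sparse (e.g. under an L1 penalty).
   *  Each Double is counted as 8 bytes. */
  def logisticRegressionBytes(numFeatures: Long, numClasses: Long): Long = {
    val coefficients = numFeatures * numClasses
    val intercepts = numClasses
    (coefficients + intercepts) * 8L
  }

  /** Tree ensemble: assume every tree is a complete binary tree of depth
   *  `maxDepth`, even if some branches are pruned or truncated, so the node
   *  count per tree is 2^(maxDepth + 1) - 1. `bytesPerNode` is a rough
   *  per-node cost covering split feature, threshold, prediction, impurity,
   *  and child references (an assumed constant). */
  def treeEnsembleBytes(numTrees: Long, maxDepth: Int, bytesPerNode: Long = 64L): Long = {
    val nodesPerTree = (1L << (maxDepth + 1)) - 1
    numTrees * nodesPerTree * bytesPerNode
  }
}

// Example: 100 trees of depth 10 -> 100 * 2047 nodes * 64 bytes ≈ 12.5 MiB upper bound,
// which the handler could compare against the configured single-item cache limit.
```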



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

