lifulong opened a new pull request, #52701:
URL: https://github.com/apache/spark/pull/52701

   …proximate quantile computation, significantly improving merge performance
   
   
   ### What changes were proposed in this pull request?
   Use the Apache DataSketches quantile sketch to replace Spark's default Greenwald-Khanna (GK) algorithm, speeding up ApproximatePercentile:
   https://datasketches.apache.org/
   https://github.com/apache/datasketches-java
   Spark already uses DataSketches elsewhere, so why not use it for approximate quantiles as well?
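
   For intuition, here is a minimal sketch of the merge path, assuming the KLL doubles sketch from datasketches-java (method names are per my reading of that library's API, not code from this PR):

   import org.apache.datasketches.kll.KllDoublesSketch

   // Build one sketch per partition, then merge. A KLL merge combines two
   // bounded summaries cheaply, which is what keeps the distributed merge
   // fast regardless of upstream parallelism.
   val left = KllDoublesSketch.newHeapInstance()
   val right = KllDoublesSketch.newHeapInstance()
   (1 to 50).foreach(v => left.update(v.toDouble))
   (51 to 100).foreach(v => right.update(v.toDouble))
   left.merge(right)
   println(left.getQuantile(0.5)) // approximate median of the combined data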
   
   ### Why are the changes needed?
   https://issues.apache.org/jira/browse/SPARK-47836
   https://issues.apache.org/jira/browse/SPARK-46706
   https://issues.apache.org/jira/browse/SPARK-40499
   Multiple issues have reported the Spark 3.x ApproximatePercentile performance problem, which was introduced by this bug fix: https://issues.apache.org/jira/browse/SPARK-29336
   The performance problem exists because the GK algorithm is not designed for distributed systems: its merge performance is poor, so higher upstream stage parallelism leads to worse performance.
   <img width="2554" height="1137" alt="image" 
src="https://github.com/user-attachments/assets/a22b050a-4c42-458f-8ad6-831439e41dcc";
 />
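
   To make the parallelism sensitivity concrete, a minimal repro sketch (the input size and partition counts below are illustrative, not from our job):

   // Time percentile_approx at two parallelism levels; with the GK-based
   // implementation the final merge gets slower as partitions increase.
   val df = spark.range(0, 100000000L).selectExpr("CAST(id % 1000000 AS DOUBLE) AS v")
   for (parts <- Seq(200, 2000)) {
     val t0 = System.nanoTime()
     df.repartition(parts)
       .selectExpr("percentile_approx(v, array(0.5, 0.9, 0.99), 9999) AS q")
       .collect()
     println(s"partitions=$parts took ${(System.nanoTime() - t0) / 1e9}s")
   }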
   
   
   Take one of our production Spark jobs as an example: it processes 60 billion source records, samples with ratio 0.06, groups by key (the key has 4 distinct values), then computes the 1st through 100th percentiles with accuracy 999 for 40 columns, with spark.sql.shuffle.partitions=2000 and 28g memory / 6 cores per executor.
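
   For context, a hedged sketch of the query shape (the table and column names here are hypothetical, not our real schema):

   import org.apache.spark.sql.functions.expr

   // Hypothetical shape of the job described above: sample, group by a
   // low-cardinality key, compute approximate percentiles. The real job
   // passes 100 quantiles and repeats the aggregate for ~40 columns.
   spark.table("source")
     .sample(0.06)
     .groupBy("key")
     .agg(expr("percentile_approx(col1, array(0.25, 0.5, 0.75), 999)").as("p_col1"))
     .show()
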
   Run with spark-2.4.3, the final merge stage costs 5 min.
   Run with spark-3.5.2, the final merge stage costs 2.8 h.
   <img width="1084" height="143" alt="image" 
src="https://github.com/user-attachments/assets/fd4195f3-49d2-4dcb-9154-ef7f3bb1e069";
 />
   
   After adjusting spark.sql.shuffle.partitions to 500:
   Run with spark-3.5.2, the final merge stage costs 11 min, but because the data is large, the upstream stage takes much longer and more data is spilled to disk.
   <img width="2471" height="450" alt="image" 
src="https://github.com/user-attachments/assets/f725f376-1132-40a1-9007-a4a704fdff98";
 />
   
   
   When using the DataSketches quantile sketch:
   Run with spark-3.5.2, the final merge stage costs less than 1 min with spark.sql.shuffle.partitions=2000.
   <img width="1212" height="118" alt="image" 
src="https://github.com/user-attachments/assets/78db9fe9-3455-4c0b-90cc-86606a02acd1";
 />
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   val values = (1 to 100).toArray
   val percents = (1 to 100).toArray
   val all_quantiles = percents.indices.map(i => (i + 1).toDouble / percents.length).toArray
   val all_quantiles_str = s"ARRAY(${all_quantiles.toList.mkString(",")})"
   for (n <- 0 until 5) {
     val df = spark.sparkContext.makeRDD(values).toDF("value").repartition(5)
     df.createOrReplaceTempView("data_table")
     val sql = s"select PERCENTILE_APPROX(cast(value as DOUBLE), $all_quantiles_str, 90) as values from data_table"
     // The query returns a single row holding the array of approximate quantiles.
     val all_answers = spark.sql(sql).collect()(0).getSeq[Double](0)
     // Map each returned quantile back to its rank in the sorted input.
     val all_answered_ranks = all_answers.map(ans => values.indexOf(ans.toInt)).toArray
     val error = all_answered_ranks.zipWithIndex.map { case (answer, expected) => Math.abs(expected - answer) }
     val max_error = error.max
     println(max_error)
   }
   With the test code above, the max_error is always 1, which is better than expected (accuracy 90 bounds the rank error to roughly 100/90 ≈ 1.1 rows).
   
   val values = (1 to 10000).toArray
   val percents = (1 to 100).toArray
   val all_quantiles = percents.indices.map(i => (i + 1).toDouble / percents.length).toArray
   val all_quantiles_str = s"ARRAY(${all_quantiles.toList.mkString(",")})"
   for (n <- 0 until 5) {
     val df = spark.sparkContext.makeRDD(values).toDF("value").repartition(5)
     df.createOrReplaceTempView("data_table")
     val sql = s"select PERCENTILE_APPROX(cast(value as DOUBLE), $all_quantiles_str, 9999) as values from data_table"
     // The query returns a single row holding the array of approximate quantiles.
     val all_answers = spark.sql(sql).collect()(0).getSeq[Double](0)
     val all_answered_ranks = all_answers.map(ans => values.indexOf(ans.toInt)).toArray
     // The i-th percentile should land at 1-based rank (i + 1) * 100 in the
     // 10000-element input; compare 1-based ranks on both sides.
     val error = all_answered_ranks.zipWithIndex.map { case (answer, expected) => Math.abs((expected + 1) * 100 - (answer + 1)) }
     val max_error = error.max
     println(max_error)
   }
   
   With the test code above, the max_error is always 1, as expected (accuracy 9999 bounds the rank error to roughly 10000/9999 ≈ 1 row).
   
   Also tested with a user's production job to verify the performance improvement.
   
   
   ### Was this patch authored or co-authored using generative AI tooling?
   No

