[ https://issues.apache.org/jira/browse/SPARK-51666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ASF GitHub Bot updated SPARK-51666:
-----------------------------------
    Labels: pull-request-available  (was: )

> Fix sparkStageCompleted executorRunTime metric calculation
> -----------------------------------------------------------
>
>                 Key: SPARK-51666
>                 URL: https://issues.apache.org/jira/browse/SPARK-51666
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 4.1.0
>            Reporter: Weichen Xu
>            Priority: Major
>              Labels: pull-request-available
>
> Fix the sparkStageCompleted executorRunTime metric calculation:
> When a Spark task uses multiple CPUs, the CPU-seconds metric should capture
> the total execution time across all CPUs. For example, if a stage sets
> cpus-per-task to 48 and each CPU runs for 10 seconds, the total for one task
> of that stage should be 10 seconds x 1 task x 48 CPUs = 480 CPU-seconds. If
> another task uses only 1 CPU, its total is 10 seconds x 1 CPU = 10
> CPU-seconds.
> *This is an important fix because Spark supports stage-level scheduling
> (tasks in different stages can be configured with different numbers of
> CPUs). Without it, the data pipeline revenue calculation spreads DBUs
> evenly across these tasks.*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
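The corrected aggregation described in the issue can be sketched as follows. This is a minimal illustration of the arithmetic only, not the actual Spark Core implementation; the task dictionaries and the `stage_cpu_seconds` helper are hypothetical stand-ins.

```python
# Sketch of the corrected CPU-seconds calculation: weight each task's
# run time by the number of CPUs it was scheduled with, instead of
# counting run time alone. (Hypothetical data structures, for illustration.)

def stage_cpu_seconds(tasks):
    """Total CPU-seconds for a stage: sum over tasks of
    run_time_sec * cpus_per_task."""
    return sum(t["run_time_sec"] * t["cpus_per_task"] for t in tasks)

# A stage whose single task is configured with 48 CPUs
# (as with stage-level scheduling):
wide_stage = [{"run_time_sec": 10, "cpus_per_task": 48}]

# A stage whose single task uses 1 CPU:
narrow_stage = [{"run_time_sec": 10, "cpus_per_task": 1}]

print(stage_cpu_seconds(wide_stage))    # 480 CPU-seconds
print(stage_cpu_seconds(narrow_stage))  # 10 CPU-seconds
```

Without the per-task CPU weighting, both stages above would report the same 10 seconds, which is the skew the issue describes.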