Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10835#discussion_r50643047
--- Diff: core/src/test/scala/org/apache/spark/executor/TaskMetricsSuite.scala ---
@@ -17,12 +17,345 @@
package org.apache.spark.executor
-import org.apache.spark.SparkFunSuite
+import org.apache.spark._
+import org.apache.spark.storage.{BlockId, BlockStatus, StorageLevel, TestBlockId}
+
class TaskMetricsSuite extends SparkFunSuite {
- test("[SPARK-5701] updateShuffleReadMetrics: ShuffleReadMetrics not added when no shuffle deps") {
- val taskMetrics = new TaskMetrics()
- taskMetrics.mergeShuffleReadMetrics()
- assert(taskMetrics.shuffleReadMetrics.isEmpty)
+ import AccumulatorParam._
+ import InternalAccumulator._
+ import StorageLevel._
+ import TaskMetricsSuite._
+
+ test("create") {
--- End diff ---
Minor nit, but I think that this test is trying to do too much: the fact
that you had to resort to names like `internalAccums1`, `internalAccums2`, etc.
suggests to me that this could be split into much tinier unit tests, one for
each of the separate properties.
For example, why not split the "initial accums must be named" assert into a
separate test, like
```scala
test("initial accums must be named") {
val unnamedAccum = new Accumulator(0, IntAccumulatorParam, None, internal = true)
intercept[AssertionError] { new TaskMetrics(Seq(unnamedAccum)) }
}
```
This is more intuitively understandable to me than what you have now, where
several distinct tests are intermingled and the setup used by each test is
spread far apart.
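The other properties could be pulled out the same way. As a sketch (the "must be internal" invariant, the `externalAccum` name, and the exact `TaskMetrics(Seq[Accumulator[_]])` constructor are assumptions based on this diff, not verified API):

```scala
// Hypothetical sketch: each invariant of the TaskMetrics constructor gets
// its own tiny test, so a failure pinpoints exactly which property broke.
test("initial accums must be internal") {
  val externalAccum =
    new Accumulator(0, IntAccumulatorParam, Some("myAccum"), internal = false)
  intercept[AssertionError] { new TaskMetrics(Seq(externalAccum)) }
}
```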