Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/12184#discussion_r59015588
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1510,6 +1511,30 @@ class Dataset[T] private[sql](
}
+  /**
+   * Returns a Java list that contains randomly split [[Dataset]] with the provided weights.
+   *
+   * @param weights weights for splits, will be normalized if they don't sum to 1.
+   * @param seed Seed for sampling.
+   *
+   * @group typedrel
+   * @since 2.0.0
+   */
+  def randomSplitAsList(weights: Array[Double], seed: Long): java.util.List[Dataset[T]] = {
+    // It is possible that the underlying dataframe doesn't guarantee the ordering of rows in its
+    // constituent partitions each time a split is materialized which could result in
+    // overlapping splits. To prevent this, we explicitly sort each input partition to make the
+    // ordering deterministic.
+    val sorted = Sort(logicalPlan.output.map(SortOrder(_, Ascending)), global = false, logicalPlan)
--- End diff ---
Why duplicate the implementation? This should just call the method above and translate the result.
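The delegation pattern suggested here can be sketched in plain Java (a simplified illustration, not Spark's actual code): an existing method returns the splits as an array, and the list-returning variant just calls it and wraps the result instead of re-implementing the split logic. The class and element types below are hypothetical stand-ins for `Dataset.randomSplit`.

```java
import java.util.Arrays;
import java.util.List;

public class SplitDelegation {
    // Stand-in for the existing randomSplit(weights, seed), which returns an
    // array of splits. The element type is simplified to String here; Spark's
    // version returns Dataset<T>[].
    static String[] randomSplit(double[] weights, long seed) {
        String[] splits = new String[weights.length];
        for (int i = 0; i < weights.length; i++) {
            splits[i] = "split-" + i;
        }
        return splits;
    }

    // The suggested shape: delegate to the array-returning method above and
    // translate its result into a java.util.List, so the splitting logic
    // (normalization, deterministic sort, sampling) lives in one place.
    static List<String> randomSplitAsList(double[] weights, long seed) {
        return Arrays.asList(randomSplit(weights, seed));
    }

    public static void main(String[] args) {
        List<String> splits = randomSplitAsList(new double[] {0.7, 0.3}, 42L);
        System.out.println(splits);  // prints [split-0, split-1]
    }
}
```

Keeping a single implementation means any later fix to the split logic (such as the deterministic-sort workaround in the diff) automatically applies to both the array and list variants.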