It can be done easily with an RDD: rdd.zipWithIndex.map(_.swap).partitionBy(new YourCustomPartitioner) should give you your items. (The swap is needed because zipWithIndex puts the index in the value position, while partitionBy keys a pair RDD by its first element.) Here YourCustomPartitioner knows how to pick sample items from each partition.
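For illustration, a minimal untested sketch of that approach; IndexPartitioner, the partition count, and the per-partition sample size are placeholders I made up:

    import org.apache.spark.Partitioner
    import org.apache.spark.sql.SparkSession

    // Stand-in for "YourCustomPartitioner": spreads rows round-robin
    // by the Long index that zipWithIndex assigned to each item.
    class IndexPartitioner(parts: Int) extends Partitioner {
      override def numPartitions: Int = parts
      override def getPartition(key: Any): Int =
        (key.asInstanceOf[Long] % parts).toInt
    }

    val spark = SparkSession.builder().appName("limit-demo").getOrCreate()
    val rdd = spark.sparkContext.parallelize(1 to 1000)

    // zipWithIndex returns (item, index); swap so the index becomes
    // the key, because partitionBy partitions a pair RDD by its key.
    val byIndex = rdd.zipWithIndex.map(_.swap)
                     .partitionBy(new IndexPartitioner(8))

    // e.g. keep the first two indexed items from each partition
    val sample = byIndex.mapPartitions(_.take(2)).values.collect()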
If you want to stick to DataFrames, you can always repartition the data after you apply the limit.
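Roughly like this (reusing the spark session from the sketch above; the "events" table name and the partition count are placeholders):

    // LIMIT collapses the result into a single partition, so
    // repartitioning afterwards restores parallelism downstream.
    val limited = spark.sql("SELECT * FROM events LIMIT 100000")
    val spread = limited.repartition(8)
    println(spread.rdd.getNumPartitions) // 8

..Manas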