> In our case, the row has about 80 columns, which exceeds the 22-field limit on Scala case classes.

Starting with Spark 1.1 you'll also be able to use the applySchema API <https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala#L126>, which lets you define the schema programmatically instead of deriving it from a case class.
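A minimal sketch of that approach against the Spark 1.1 API, assuming tab-separated string input; the column names, input path, and table name are hypothetical placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql._

object ApplySchemaExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("applySchema-example"))
    val sqlContext = new SQLContext(sc)

    // Build the schema programmatically -- no 22-field case-class limit,
    // so it scales to 80+ columns. (All columns assumed StringType here.)
    val columnNames = (1 to 80).map(i => s"col$i")
    val schema = StructType(
      columnNames.map(name => StructField(name, StringType, nullable = true)))

    // Parse each input line into a Row with the same arity as the schema.
    val rowRDD = sc.textFile("hdfs://.../input.tsv")
      .map(_.split("\t"))
      .map(fields => Row(fields: _*))

    // applySchema pairs the RDD[Row] with the StructType, yielding a
    // SchemaRDD that can be registered and queried with SQL.
    val table = sqlContext.applySchema(rowRDD, schema)
    table.registerTempTable("wide_table")
    sqlContext.sql("SELECT col1 FROM wide_table LIMIT 10").collect()
  }
}
```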