Perfect! That's what I was looking for. Thanks, Sun!
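For anyone who finds this thread later, here is a complete, self-contained version of the snippet with Sun's fix applied. It is only a sketch of what worked for me, assuming Spark 2.x; the object name and the local-mode session builder are just for illustration, not part of the original messages.

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.catalyst.encoders.RowEncoder

object MapPartitionsRowExample {
  def main(args: Array[String]): Unit = {
    // Local session just for the example; use your own builder settings.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("mapPartitions-on-DataFrame")
      .getOrCreate()
    import spark.implicits._

    val df: DataFrame = Seq((1, "one"), (2, "two")).toDF("id", "name")

    // spark.implicits._ provides encoders for primitives and case classes,
    // but not for Row. RowEncoder derives an encoder from the DataFrame's
    // schema, and making it implicit lets mapPartitions resolve it.
    implicit val rowEncoder = RowEncoder(df.schema)

    // Keep only the first row of each partition; the result is a Dataset[Row].
    val firstPerPartition = df.mapPartitions(_.take(1))
    firstPerPartition.show()

    spark.stop()
  }
}

Note that show() prints at most one row per non-empty partition, so depending on how the two input rows are split across partitions you may see one row or both.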
On Tue, Aug 2, 2016 at 6:58 PM, Sun Rui <sunrise_...@163.com> wrote:

> import org.apache.spark.sql.catalyst.encoders.RowEncoder
> implicit val encoder = RowEncoder(df.schema)
> df.mapPartitions(_.take(1))
>
> On Aug 3, 2016, at 04:55, Dragisa Krsmanovic <dragi...@ticketfly.com> wrote:
>
> I am trying to use mapPartitions on a DataFrame.
>
> Example:
>
> import spark.implicits._
> val df: DataFrame = Seq((1, "one"), (2, "two")).toDF("id", "name")
> df.mapPartitions(_.take(1))
>
> I am getting:
>
> Unable to find encoder for type stored in a Dataset. Primitive types
> (Int, String, etc) and Product types (case classes) are supported by
> importing spark.implicits._ Support for serializing other types will be
> added in future releases.
>
> Since a DataFrame is a Dataset[Row], I was expecting the encoder for Row
> to be there.
>
> What's wrong with my code?

--
Dragiša Krsmanović | Platform Engineer | Ticketfly
dragi...@ticketfly.com
@ticketfly <https://twitter.com/ticketfly> | ticketfly.com/blog | facebook.com/ticketfly