You need either
  .map { row =>
    (row(0).asInstanceOf[Float], row(1).asInstanceOf[Float], ...)
  }
or
  .map { case Row(f0: Float, f1: Float, ...) =>
    (f0, f1, ...)
  }
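For reference, a minimal sketch of both options, assuming Spark 1.x, an existing SparkContext named sc, and an already-registered "products" table whose columns are FloatType. Only the first two columns are shown; extend the tuple to all twelve. The pattern-match version needs the org.apache.spark.sql.Row import:

    import org.apache.spark.sql.{Row, SQLContext}

    val sqlContext = new SQLContext(sc)

    // Option 1: cast each field explicitly, since row(i) is typed as Any
    val productsRdd = sqlContext.table("products").map { row =>
      (row(0).asInstanceOf[Float], row(1).asInstanceOf[Float])
    }

    // Option 2: pattern match on Row to bind typed fields directly
    val productsRdd2 = sqlContext.table("products").map {
      case Row(f0: Float, f1: Float) => (f0, f1)
    }

Note that if the CSV load actually registered the columns as strings, both versions will fail at runtime; in that case row.getString(0).toFloat would be the closest equivalent of Python's float(row[0]).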
On 3/23/15 9:08 AM, Minnow Noir wrote:
I'm following an online tutorial written in Python and trying to
convert a Spark SQL table object to an RDD in Scala.
The Spark SQL code just loads a simple table from a CSV file. The
tutorial says to convert the table to an RDD.
The Python is
products_rdd = sqlContext.table("products").map(lambda row: (
    float(row[0]), float(row[1]), float(row[2]), float(row[3]),
    float(row[4]), float(row[5]), float(row[6]), float(row[7]),
    float(row[8]), float(row[9]), float(row[10]), float(row[11])))
The Scala is *not*
val productsRdd = sqlContext.table("products").map( row => (
  row(0).toFloat, row(1).toFloat, row(2).toFloat, row(3).toFloat,
  row(4).toFloat, row(5).toFloat, row(6).toFloat, row(7).toFloat,
  row(8).toFloat, row(9).toFloat, row(10).toFloat, row(11).toFloat
))
I know this because, for each of the row(x).toFloat calls, Spark reports:
"error: value toFloat is not a member of Any"
Does anyone know the proper syntax for this?
Thank you