Hello all:
I am attempting to persist a SchemaRDD of nested case classes to a Parquet file.

Creating the SchemaRDD seems to work fine, but an exception is thrown when
I attempt to save it to Parquet.

My code:

  import org.apache.spark.{SparkConf, SparkContext}

  case class Trivial(trivial: String = "trivial", lt: LessTrivial)
  case class LessTrivial(i: Int = 1)

  val conf = new SparkConf()
    .setMaster("local[1]")
    .setAppName("test")

  val sc = new SparkContext(conf)
  val sqlContext = new org.apache.spark.sql.SQLContext(sc)

  import sqlContext._

  // No exceptions here.
  val rdd = sqlContext.createSchemaRDD(
    sc.parallelize(Seq(Trivial("s", LessTrivial(1)), Trivial("T", LessTrivial(2)))))

  // Throws: java.lang.RuntimeException: Unsupported datatype
  //   StructType(List(StructField(i,IntegerType,true)))
  rdd.saveAsParquetFile("trivial.parquet1")
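
In case it helps narrow things down, here is a minimal sketch of the workaround I am considering if nested structs turn out to be unsupported: flattening the nested fields into a top-level case class before saving, so the schema contains no StructType. FlatTrivial and its lt_i field are names I made up for this example, not anything from Spark:

  // Hypothetical workaround: pull LessTrivial's fields up to the top level
  // so the resulting schema is flat (no nested StructType).
  case class FlatTrivial(trivial: String, lt_i: Int)

  val flatRdd = sqlContext.createSchemaRDD(
    sc.parallelize(Seq(Trivial("s", LessTrivial(1)), Trivial("T", LessTrivial(2))))
      .map(t => FlatTrivial(t.trivial, t.lt.i)))

  flatRdd.saveAsParquetFile("trivial_flat.parquet") // flat schema only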


Is persisting SchemaRDDs containing nested case classes to Parquet files
supported?


