Re: Is there an efficient way to append new data to a registered Spark SQL Table?

2014-12-11 Thread Rakesh Nair
> In addition, it is inefficient to insert a single row every time.
>
> I do know that somebody built a system similar to the one I want (an
> ad-hoc query service over a growing system log), so there must be an
> efficient way. Does anyone know one?

--
Regards
Rakesh Nair
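One common approach from this era of Spark SQL (1.1/1.2, SchemaRDD API) is to batch new records, convert each batch once, and re-register the union instead of inserting row by row. The sketch below is illustrative, not from the thread; the paths, table name, and sample record are made up, and it assumes an existing `SparkContext` named `sc` and the 1.2-era `SQLContext` methods (`jsonFile`, `jsonRDD`, `unionAll`, `registerTempTable`):

```scala
import org.apache.spark.sql.SQLContext

// Assumes an existing SparkContext `sc`.
val sqlContext = new SQLContext(sc)

// Initial data set, registered as a queryable table (path is illustrative).
val logs = sqlContext.jsonFile("hdfs:///logs/day1")
logs.registerTempTable("logs")

// Instead of inserting one row at a time, accumulate new records into a
// batch, convert the whole batch once (reusing the inferred schema so
// Spark skips a second inference pass), and re-register the union.
val newRecords = sc.parallelize(Seq("""{"level":"INFO","msg":"started"}"""))
val newBatch = sqlContext.jsonRDD(newRecords, logs.schema)
logs.unionAll(newBatch).registerTempTable("logs")
```

The key point is amortization: one conversion and one re-registration per batch, rather than per row.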

Compare performance of sqlContext.jsonFile and sqlContext.jsonRDD

2014-12-10 Thread Rakesh Nair
...ma.
2. Save the schema somewhere.
3. For later data sets, create an RDD[String] and then use the "jsonRDD"
   method to convert the RDD[String] to a SchemaRDD.

2. What is the best way to store a schema? Or rather, how can I serialize
   a StructType and store it in HDFS, so that I can load it later?

--
Regards
Rakesh Nair
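For the serialization question, one possibility in the Spark 1.1/1.2 time frame is plain Java serialization: `StructType` is a serializable Scala case class, so it can be written to and read from HDFS with `ObjectOutputStream`/`ObjectInputStream`. This is a hedged sketch, not an official recipe — the helper names (`saveSchema`, `loadSchema`) and the path are made up, and the `StructType` import location assumes the 1.1/1.2 package layout (it moved to `org.apache.spark.sql.types` in 1.3, which also added a JSON round-trip via `StructType.json` and `DataType.fromJson`):

```scala
import java.io.{ObjectInputStream, ObjectOutputStream}
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.StructType  // o.a.s.sql.types.StructType in Spark 1.3+

// Write the inferred schema to HDFS using Java serialization
// (StructType is a serializable case class).
def saveSchema(fs: FileSystem, path: Path, schema: StructType): Unit = {
  val out = new ObjectOutputStream(fs.create(path))
  try out.writeObject(schema) finally out.close()
}

// Read the schema back so later batches can skip inference entirely.
def loadSchema(fs: FileSystem, path: Path): StructType = {
  val in = new ObjectInputStream(fs.open(path))
  try in.readObject().asInstanceOf[StructType] finally in.close()
}

// Later data sets then reuse the stored schema with the jsonRDD overload
// that accepts one, avoiding the inference cost of jsonFile:
// val schemaRdd = sqlContext.jsonRDD(newLines, loadSchema(fs, schemaPath))
```

This is also where the jsonFile-vs-jsonRDD performance difference in the subject line comes from: `jsonFile` (and `jsonRDD` without a schema) must scan the data to infer one, while `jsonRDD` with an explicit schema parses in a single pass.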