Thanks Michael.
I used Parquet files, and that solved my initial problem to some
extent (i.e. loading data from one context and reading it from another
context).
But there I ran into another issue: I need to load the Parquet file
every time I create the JavaSQLContext, using parquetFile.
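To illustrate, this is roughly the pattern (a minimal sketch assuming the
Spark 1.1.0 Java API; the file path and table names are placeholders, not
from my actual code):

    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.api.java.JavaSQLContext;
    import org.apache.spark.sql.api.java.JavaSchemaRDD;

    JavaSparkContext sc = new JavaSparkContext("local", "parquet-example");
    JavaSQLContext sqlContext = new JavaSQLContext(sc);

    // A fresh context starts empty, so the Parquet file has to be
    // re-read here before it can be queried.
    JavaSchemaRDD data = sqlContext.parquetFile("/data/people.parquet");
    data.registerTempTable("people");
    JavaSchemaRDD adults =
        sqlContext.sql("SELECT name FROM people WHERE age >= 18");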
It can, but currently that method uses the default Hive SerDe, which is
not very robust (it does not deal well with \n in strings) and probably
is not super fast. You'll also need to be using a HiveContext for it to
work.
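If you do want to try it, this is roughly what it would look like (a
minimal sketch assuming the 1.1.0 Java API; sc is your JavaSparkContext,
and the path and table name are placeholders):

    import org.apache.spark.sql.api.java.JavaSchemaRDD;
    import org.apache.spark.sql.hive.api.java.JavaHiveContext;

    // saveAsTable needs a HiveContext; a plain JavaSQLContext won't work.
    JavaHiveContext hiveContext = new JavaHiveContext(sc);
    JavaSchemaRDD data = hiveContext.parquetFile("/data/people.parquet");
    // Writes through the default Hive SerDe (see the caveats above).
    data.saveAsTable("people");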
On Tue, Nov 4, 2014 at 8:20 PM, vdiwakar.malladi wrote:
Thanks Michael for your response.
Just now, I saw the saveAsTable method on the JavaSchemaRDD object (in the
Spark 1.1.0 API), but I couldn't find the corresponding documentation. Will
that help?
Please let me know.
Thanks in advance.
Temporary tables are local to the context that creates them (just like
RDDs). I'd recommend saving the data out as Parquet to share it between
contexts.
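Something like this, assuming the 1.1.0 Java API (the paths and table
names are placeholders):

    // In the context that produced the data:
    JavaSchemaRDD results = sqlContextA.sql("SELECT * FROM my_temp_table");
    results.saveAsParquetFile("/data/shared.parquet");

    // In any other context, possibly in a different application:
    JavaSchemaRDD shared = sqlContextB.parquetFile("/data/shared.parquet");
    shared.registerTempTable("shared");
    JavaSchemaRDD rows = sqlContextB.sql("SELECT * FROM shared");

Parquet stores the schema alongside the data, so the second context gets
back a SchemaRDD with the same structure without any extra setup.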
On Tue, Nov 4, 2014 at 3:18 AM, vdiwakar.malladi wrote:
> Hi,
>
> There is a need in my application to query the data loaded into the
> sparkcontext.