Thanks Takuya, works like a charm.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/SparkSQL-Language-Integrated-query-OR-clause-and-IN-clause-tp9298p9303.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi,
Any suggestions on how to implement the OR clause and the IN clause in SparkSQL's
language-integrated queries?
For example:
'SELECT name FROM people WHERE age >= 10 AND month = 2' can be written as
val teenagers = people.where('age >= 10).where('month === 2).select('name)
How do we write 'SELE
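For reference, a minimal sketch of how OR and IN can be expressed in Spark 1.0's language-integrated query DSL. This assumes the `||` and `in` operators provided by the Catalyst DSL implicits (brought in via `import sqlContext._`); verify the exact operator set against your Spark version before relying on it:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.catalyst.expressions.Literal

// Top-level case class so the schema implicits can find a TypeTag.
case class Person(name: String, age: Int, month: Int)

object OrInExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "or-in-example")
    val sqlContext = new SQLContext(sc)
    import sqlContext._ // createSchemaRDD and the symbol-based query DSL

    val people = sc.parallelize(Seq(
      Person("Ann", 15, 2), Person("Bob", 8, 2), Person("Cid", 30, 7)))

    // OR clause: combine both predicates with || inside a single where()
    val orQuery = people.where('age >= 10 || 'month === 2).select('name)

    // IN clause: the DSL's `in` operator over a list of literal expressions
    // (hedged -- check catalyst.dsl in your version for the exact signature)
    val inQuery = people.where('month in (Literal(2), Literal(7))).select('name)

    orQuery.collect().foreach(println)
    inQuery.collect().foreach(println)
  }
}
```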
Hi,
Yes, I am caching the RDDs by calling the cache method.
May I ask how you are sharing RDDs across jobs in the same context? By the RDD
name? I tried printing the RDDs of the SparkContext, and when
referenceTracking is enabled, I get an empty list after the cleanup.
Thanks,
Prem
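One common pattern for sharing RDDs across jobs in a long-running context is to hold a strong reference in a server-side registry keyed by name, so the ContextCleaner (active when referenceTracking is enabled) cannot reclaim them between jobs. A minimal sketch; the `NamedRdds` object and its method names are hypothetical, not a Spark or Job Server API:

```scala
import org.apache.spark.rdd.RDD
import scala.collection.concurrent.TrieMap

// Hypothetical registry: keeps strong references so cached RDDs are not
// reclaimed by the ContextCleaner once local references go out of scope.
object NamedRdds {
  private val rdds = TrieMap.empty[String, RDD[_]]

  // Cache the RDD and pin it under a name for later jobs in the same context.
  def put[T](name: String, rdd: RDD[T]): RDD[T] = {
    rdds.put(name, rdd.cache())
    rdd
  }

  // Look up a previously registered RDD by name.
  def get[T](name: String): Option[RDD[T]] =
    rdds.get(name).map(_.asInstanceOf[RDD[T]])
}
```

A later job running in the same context would then call `NamedRdds.get("people")` instead of recomputing the RDD.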
Michael,
Thanks for the response. Yes, moving the case class solved the issue.
Thanks,
Prem
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/SparkSQL-registerAsTable-No-TypeTag-available-Error-tp7623p9183.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
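For readers hitting the same "No TypeTag available" error: the fix described above is to define the case class at the top level of the file, not inside a method, because scalac cannot generate a TypeTag for a class defined in an inner scope. A minimal sketch using the Spark 1.0 API:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Define the case class at the TOP LEVEL -- not inside main() or any
// other method -- so the compiler can generate its TypeTag.
case class Person(name: String, age: Int)

object SqlExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("sql-example"))
    val sqlContext = new SQLContext(sc)
    import sqlContext._ // brings in createSchemaRDD for RDD[Person]

    val people = sc.parallelize(Seq(Person("Ann", 15), Person("Bob", 25)))
    // This is the call that fails with "No TypeTag available" when
    // Person is declared inside main() instead of at the top level.
    people.registerAsTable("people")

    val teens = sqlContext.sql(
      "SELECT name FROM people WHERE age >= 13 AND age <= 19")
    teens.collect().foreach(println)
  }
}
```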
Hi,
I am using Spark 1.0.0 with the Ooyala Job Server for a low-latency query
system. Basically, a long-running context is created, which makes it possible
to run multiple jobs under the same context and hence share the data.
It was working fine in 0.9.1. However, in the Spark 1.0 release, the RDDs
created
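If the Spark 1.0 ContextCleaner is reclaiming cached RDDs between jobs, one workaround is to disable reference tracking when building the long-running context. This is a sketch assuming the `spark.cleaner.referenceTracking` flag read by the 1.0 cleaner; it is not part of the documented configuration, so confirm it against your Spark version:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: turn off the 1.0 ContextCleaner's reference tracking so cached
// RDDs are not cleaned up when their local references are garbage-collected.
val conf = new SparkConf()
  .setAppName("jobserver-context")
  .set("spark.cleaner.referenceTracking", "false")
val sc = new SparkContext(conf)
```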
Hi,
I am trying to run the Spark SQL example provided in the programming guide at
https://spark.apache.org/docs/latest/sql-programming-guide.html as a
standalone program.
When I try to compile the program, I get the error below:
Done updating.
Compiling 1 Scala source to
C:\Work\Dev\scala\wo