SQLContext and "stable identifier required"

2016-05-03 Thread Koert Kuipers
With the introduction of SparkSession, SQLContext changed from being a lazy val to a def. However, this is troublesome if you want to do: import someDataset.sqlContext.implicits._ because it is no longer a stable identifier, I think? I get: "stable identifier required, but someDataset.sqlContext.imp…"
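The rule being hit here can be reproduced without Spark at all. A minimal plain-Scala sketch (Holder, implicits, and answer are made-up names for illustration):

```scala
// Minimal illustration of Scala's "stable identifier required" rule,
// with no Spark involved. An import path must consist of stable
// identifiers (vals/objects); a def is not stable, so importing
// through it is a compile error.
object StableIdDemo {
  class Holder {
    object implicits {
      implicit val answer: Int = 42
    }
  }

  val stable: Holder   = new Holder // a val is a stable identifier
  def unstable: Holder = new Holder // a def is not

  def main(args: Array[String]): Unit = {
    import stable.implicits._ // compiles: every prefix in the path is stable
    // import unstable.implicits._ // error: stable identifier required
    println(implicitly[Int]) // prints 42
  }
}
```

This is exactly why changing sqlContext from a lazy val to a def breaks import someDataset.sqlContext.implicits._: the compiler can no longer prove the import path refers to a single, fixed object.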

Re: SQLContext and "stable identifier required"

2016-05-03 Thread Ted Yu
Have you tried the following?

scala> import spark.implicits._
import spark.implicits._

scala> spark
res0: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@323d1fa2

Cheers

Re: SQLContext and "stable identifier required"

2016-05-03 Thread Koert Kuipers
Yes, it works fine if I switch to using the implicits on the SparkSession (which is a val). But do we want to break the old way of doing the import?

Re: SQLContext and "stable identifier required"

2016-05-03 Thread Koert Kuipers
Sure, I can do that.

Re: SQLContext and "stable identifier required"

2016-05-03 Thread Reynold Xin
Probably not. Want to submit a pull request?

Unable To Find Proto Buffer Class Error With RDD

2016-05-03 Thread kyle
Hi, I ran into an issue when using protocol buffers in a Spark RDD. I googled this and it seems to be a known compatibility issue. Has anyone run into the same issue before and found a solution? The detailed description can be found at this link: https://qnalist.com/questions/5156782/unable-t…

Re: Ever increasing physical memory for a Spark Application in YARN

2016-05-03 Thread Nitin Goyal
Hi Daniel, I could indeed discover the problem in my case, and it turned out to be a bug on the Parquet side. I raised and contributed to the following issue: https://issues.apache.org/jira/browse/PARQUET-353 Hope this helps! Thanks, Nitin

Re: Cross Validator to work with K-Fold value of 1?

2016-05-03 Thread Yanbo Liang
Here are the JIRA and PR for supporting PolynomialExpansion with degree 1, and it has been merged. https://issues.apache.org/jira/browse/SPARK-13338 https://github.com/apache/spark/pull/11216
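For context on why degree 1 matters: polynomial expansion with degree 1 is just the identity on the feature vector (no new terms), which is what makes it useful as the "no expansion" point in a CrossValidator parameter grid. A toy plain-Scala sketch of the idea (not Spark's PolynomialExpansion implementation; term ordering differs, and the names here are made up):

```scala
// Toy polynomial expansion: all monomials of total degree 1..degree over
// the input features, built via combinations with repetition. Illustrates
// why degree = 1 should behave as the identity transform.
object PolyExpand {
  def expand(xs: List[Double], degree: Int): List[Double] = {
    require(degree >= 1, "degree must be at least 1")
    // monomials of exact total degree d over the remaining features
    def monos(rem: List[Double], d: Int): List[Double] =
      if (d == 0) List(1.0)
      else rem match {
        case Nil    => Nil
        case h :: t => monos(t, d) ++ monos(rem, d - 1).map(_ * h)
      }
    (1 to degree).toList.flatMap(d => monos(xs, d))
  }
}
```

With degree = 1 the output is just the input features rearranged (no new terms), so a parameter grid over degrees 1..n cleanly includes the untransformed baseline.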