Ah right. This is because I'm running Java 8. This was fixed in
SPARK-3329 
(https://github.com/apache/spark/commit/2b7ab814f9bde65ebc57ebd04386e56c97f06f4a#diff-7bfd8d7c8cbb02aa0023e4c3497ee832).
Consider back-porting it if other reasons arise, but this is specific
to tests and to Java 8.

On Thu, Nov 13, 2014 at 8:01 PM, Andrew Or <and...@databricks.com> wrote:
> Yeah, this seems to be somewhat environment specific too. The same test has
> been passing here for a while:
> https://amplab.cs.berkeley.edu/jenkins/job/Spark-1.1-Maven-pre-YARN/hadoop.version=1.0.4,label=centos/lastBuild/consoleFull
>
> 2014-11-13 11:26 GMT-08:00 Michael Armbrust <mich...@databricks.com>:
>
>> Hey Sean,
>>
>> Thanks for pointing this out. Looks like a bad test: we should be doing a
>> Set comparison instead of an Array comparison.
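The fix described here can be sketched as follows. This is a minimal illustration in Python, not the actual Scala assertion in HiveQuerySuite; the point is only that a positional (Array-style) comparison fails when the same entries come back in a different order, while a Set comparison passes:

```python
# Illustration only: the two results in the failure below contain the
# same entries in a different order.
expected = [
    "spark.sql.key.usedfortestonly=test.val.0",
    "spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0",
]
actual = list(reversed(expected))  # same entries, different order

# Positional comparison is order-sensitive and fails:
print(expected == actual)            # False

# Set comparison ignores ordering and passes:
print(set(expected) == set(actual))  # True
```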
>>
>> Michael
>>
>> On Thu, Nov 13, 2014 at 2:05 AM, Sean Owen <so...@cloudera.com> wrote:
>>>
>>> LICENSE and NOTICE are fine. The signature and checksum are fine. I
>>> unzipped the plain source distribution, and it built successfully.
>>>
>>> However I am seeing a consistent test failure with "mvn -DskipTests
>>> clean package; mvn test". In the Hive module:
>>>
>>> - SET commands semantics for a HiveContext *** FAILED ***
>>>   Expected Array("spark.sql.key.usedfortestonly=test.val.0",
>>>     "spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0"),
>>>   but got Array(
>>>     "spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0",
>>>     "spark.sql.key.usedfortestonly=test.val.0") (HiveQuerySuite.scala:544)
>>>
>>> Anyone else seeing this?
>>>
>>>
>>> On Thu, Nov 13, 2014 at 8:18 AM, Krishna Sankar <ksanka...@gmail.com>
>>> wrote:
>>> > +1
>>> > 1. Compiled OSX 10.10 (Yosemite) mvn -Pyarn -Phadoop-2.4
>>> > -Dhadoop.version=2.4.0 -DskipTests clean package 10:49 min
>>> > 2. Tested pyspark, MLlib
>>> > 2.1. statistics OK
>>> > 2.2. Linear/Ridge/Lasso Regression OK
>>> > 2.3. Decision Tree, Naive Bayes OK
>>> > 2.4. KMeans OK
>>> > 2.5. rdd operations OK
>>> > 2.6. recommendation OK
>>> > 2.7. Good work! In 1.1.0, there was an error and my program would
>>> > consistently hang (over memory allocation) while running validation
>>> > using itertools to compute the optimum rank, lambda, and number of
>>> > iterations by RMSE; data: MovieLens medium dataset (1 million
>>> > records). It works well in 1.1.1!
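The kind of validation loop described above can be sketched roughly as follows. This is a hypothetical illustration: the parameter grids are invented, and `train_and_rmse` is a placeholder standing in for actually training ALS (e.g. in pyspark.mllib) on the MovieLens data and computing RMSE on a held-out split:

```python
import itertools

# Hypothetical parameter grids; the real values are not given in the thread.
ranks = [8, 12]
lambdas = [0.01, 0.1]
num_iters = [10, 20]

def train_and_rmse(rank, lam, iters):
    # Placeholder score; a real run would train ALS on MovieLens and
    # compute RMSE on a validation split.
    return abs(rank - 12) * 0.01 + lam + 1.0 / iters

# itertools.product enumerates every (rank, lambda, iterations)
# combination; keep the one with the lowest RMSE.
best = min(itertools.product(ranks, lambdas, num_iters),
           key=lambda params: train_and_rmse(*params))
print("best (rank, lambda, iterations):", best)
```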
>>> > Cheers
>>> > <k/>
>>> > P.S.: Missed Reply All the first time.
>>> >
>>> > On Wed, Nov 12, 2014 at 8:35 PM, Andrew Or <and...@databricks.com>
>>> > wrote:
>>> >
>>> >> I will start the vote with a +1
>>> >>
>>> >> 2014-11-12 20:34 GMT-08:00 Andrew Or <and...@databricks.com>:
>>> >>
>>> >> > Please vote on releasing the following candidate as Apache Spark
>>> >> > version 1.1.1.
>>> >> >
>>> >> > This release fixes a number of bugs in Spark 1.1.0. Some of the
>>> >> > notable ones are:
>>> >> > - [SPARK-3426] Sort-based shuffle compression settings are incompatible
>>> >> > - [SPARK-3948] Stream corruption issues in sort-based shuffle
>>> >> > - [SPARK-4107] Incorrect handling of Channel.read() led to data truncation
>>> >> > The full list is at http://s.apache.org/z9h and in the CHANGES.txt
>>> >> > attached.
>>> >> >
>>> >> > The tag to be voted on is v1.1.1-rc1 (commit 72a4fdbe):
>>> >> > http://s.apache.org/cZC
>>> >> >
>>> >> > The release files, including signatures, digests, etc can be found
>>> >> > at:
>>> >> > http://people.apache.org/~andrewor14/spark-1.1.1-rc1/
>>> >> >
>>> >> > Release artifacts are signed with the following key:
>>> >> > https://people.apache.org/keys/committer/andrewor14.asc
>>> >> >
>>> >> > The staging repository for this release can be found at:
>>> >> >
>>> >> > https://repository.apache.org/content/repositories/orgapachespark-1034/
>>> >> >
>>> >> > The documentation corresponding to this release can be found at:
>>> >> > http://people.apache.org/~andrewor14/spark-1.1.1-rc1-docs/
>>> >> >
>>> >> > Please vote on releasing this package as Apache Spark 1.1.1!
>>> >> >
>>> >> > The vote is open until Sunday, November 16, at 04:30 UTC and passes
>>> >> > if
>>> >> > a majority of at least 3 +1 PMC votes are cast.
>>> >> > [ ] +1 Release this package as Apache Spark 1.1.1
>>> >> > [ ] -1 Do not release this package because ...
>>> >> >
>>> >> > To learn more about Apache Spark, please see
>>> >> > http://spark.apache.org/
>>> >> >
>>> >> > Cheers,
>>> >> > Andrew
>>> >> >
>>> >>
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: dev-h...@spark.apache.org
>>>
>>
>
