First pass of feedback on the RC: all the sigs, hashes, etc. are fine.
Licensing is up to date to the best of my knowledge.

I'm hitting test failures, some of which may be spurious. Just putting
them out there to see if they ring bells. This is Java 8 on Ubuntu 16.


- spilling with compression *** FAILED ***
  java.lang.Exception: Test failed with compression using codec
org.apache.spark.io.SnappyCompressionCodec:
assertion failed: expected cogroup to spill, but did not
  at scala.Predef$.assert(Predef.scala:170)
  at org.apache.spark.TestUtils$.assertSpilled(TestUtils.scala:170)
  at 
org.apache.spark.util.collection.ExternalAppendOnlyMapSuite.org$apache$spark$util$collection$ExternalAppendOnlyMapSuite$$testSimpleSpilling(ExternalAppendOnlyMapSuite.scala:263)
...

I feel like I've seen this before, and I see some possibly relevant
fixes, but they're already in 2.0.0:
https://github.com/apache/spark/pull/10990
Is this a case where a native library needs to be installed?
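
For what it's worth, here's a quick standalone probe I'd use to answer
the native-library half of that question. This is just a sketch, not part
of the suite; it only assumes snappy-java is on the classpath, which it
is for any Spark build:

    // Probe whether snappy-java can load its bundled native library.
    // snappy-java ships the native lib inside its jar and extracts it to
    // java.io.tmpdir on first use, so nothing should need to be installed
    // separately; if extraction or loading fails, the first call throws.
    object SnappyProbe {
      def main(args: Array[String]): Unit = {
        import org.xerial.snappy.Snappy
        println("snappy native version: " + Snappy.getNativeLibraryVersion)
        // Round-trip a small payload to exercise the JNI path end to end.
        val in = "spark".getBytes("UTF-8")
        val out = Snappy.uncompress(Snappy.compress(in))
        assert(new String(out, "UTF-8") == "spark")
        println("snappy round trip OK")
      }
    }

If that passes, the failure above is more likely about the spilling
heuristics than about the codec itself.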


- to UTC timestamp *** FAILED ***
  "2016-03-13 [02]:00:00.0" did not equal "2016-03-13 [10]:00:00.0"
(DateTimeUtilsSuite.scala:506)

I know, we talked about this for the 1.6.2 RC, but I reproduced it
locally too. I'll investigate; it could still be spurious.
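
For context on why this one is timezone-sensitive: 2016-03-13 02:00 falls
in the US "spring forward" DST gap, so what that string parses to depends
on the JVM default time zone. A minimal sketch of the effect, e.g. in a
Scala REPL (not the actual test; the printed values assume the zones
shown):

    import java.sql.Timestamp
    import java.util.TimeZone

    // 02:00 on 2016-03-13 does not exist in US Pacific local time; the
    // lenient calendar behind Timestamp.valueOf rolls it forward an hour.
    TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"))
    println(Timestamp.valueOf("2016-03-13 02:00:00.0"))  // 2016-03-13 03:00:00.0

    // In a zone without that gap, the same string parses as written.
    TimeZone.setDefault(TimeZone.getTimeZone("UTC"))
    println(Timestamp.valueOf("2016-03-13 02:00:00.0"))  // 2016-03-13 02:00:00.0

So depending on the zone of the box running the tests, the expected and
actual values can come out shifted like the ones above.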


StateStoreSuite:
- maintenance *** FAILED ***
  The code passed to eventually never returned normally. Attempted 627
times over 10.000180116 seconds. Last failure message:
StateStoreSuite.this.fileExists(provider, 1L, false) was true earliest
file not deleted. (StateStoreSuite.scala:395)

No idea.


- offset recovery *** FAILED ***
  The code passed to eventually never returned normally. Attempted 197
times over 10.040864806 seconds. Last failure message:
strings.forall({
    ((x$1: Any) => DirectKafkaStreamSuite.collectedData.contains(x$1))
  }) was false. (DirectKafkaStreamSuite.scala:250)

This may also be something that was already fixed for 2.0.0 and that I
just back-ported into 1.6; it could be just a very similar failure.
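
One more note on this one and the StateStoreSuite failure above: both are
ScalaTest "eventually" timeouts (~10 seconds of retries), so a cheap way
to separate "slow or loaded machine" from "real failure" is to rerun with
a more generous patience. A sketch of the pattern, with a placeholder
condition standing in for each suite's actual assertion:

    import org.scalatest.concurrent.Eventually._
    import org.scalatest.time.SpanSugar._

    // Placeholder for the real check, e.g. "earliest file deleted" in
    // StateStoreSuite or "all strings in collectedData" in the Kafka suite.
    def conditionHolds: Boolean = ???

    // Retry for up to 60s instead of ~10s; a failure that survives a
    // generous timeout is much more likely to be real than flaky.
    eventually(timeout(60.seconds), interval(200.millis)) {
      assert(conditionHolds)
    }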

On Wed, Jun 22, 2016 at 2:26 AM, Reynold Xin <r...@databricks.com> wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 2.0.0. The vote is open until Friday, June 24, 2016 at 19:00 PDT and passes
> if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 2.0.0
> [ ] -1 Do not release this package because ...
>
>
> The tag to be voted on is v2.0.0-rc1
> (0c66ca41afade6db73c9aeddd5aed6e5dcea90df).
>
> This release candidate resolves ~2400 issues:
> https://s.apache.org/spark-2.0.0-rc1-jira
>
> The release files, including signatures, digests, etc. can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-2.0.0-rc1-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1187/
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-2.0.0-rc1-docs/
>
>
> =======================================
> == How can I help test this release? ==
> =======================================
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running it on this release candidate, then
> reporting any regressions from 1.x.
>
> ================================================
> == What justifies a -1 vote for this release? ==
> ================================================
> Critical bugs impacting major functionality.
>
> Bugs already present in 1.x, missing features, or bugs related to new
> features will not necessarily block this release. Note that, historically,
> Spark documentation has been published on the website separately from the
> main release, so we do not need to block the release due to documentation
> errors either.
>
>
