I'm getting the following failure when running ./dev/run-tests from the
extracted source tarball (it doesn't happen on master). Anyone else seeing this?

error: Could not access 'fc0a1475ef'
**********************************************************************
File "./dev/run-tests.py", line 69, in __main__.identify_changed_files_from_git_commits
Failed example:
    [x.name for x in determine_modules_for_files(
        identify_changed_files_from_git_commits("fc0a1475ef", target_ref="5da21f07"))]
Exception raised:
    Traceback (most recent call last):
      File "/Users/nick/miniconda2/lib/python2.7/doctest.py", line 1315, in __run
        compileflags, 1) in test.globs
      File "<doctest __main__.identify_changed_files_from_git_commits[0]>", line 1, in <module>
        [x.name for x in determine_modules_for_files(
            identify_changed_files_from_git_commits("fc0a1475ef", target_ref="5da21f07"))]
      File "./dev/run-tests.py", line 86, in identify_changed_files_from_git_commits
        universal_newlines=True)
      File "/Users/nick/miniconda2/lib/python2.7/subprocess.py", line 573, in check_output
        raise CalledProcessError(retcode, cmd, output=output)
    CalledProcessError: Command '['git', 'diff', '--name-only', 'fc0a1475ef', '5da21f07']' returned non-zero exit status 1
error: Could not access '50a0496a43'
**********************************************************************
File "./dev/run-tests.py", line 71, in __main__.identify_changed_files_from_git_commits
Failed example:
    'root' in [x.name for x in determine_modules_for_files(
        identify_changed_files_from_git_commits("50a0496a43", target_ref="6765ef9"))]
Exception raised:
    Traceback (most recent call last):
      File "/Users/nick/miniconda2/lib/python2.7/doctest.py", line 1315, in __run
        compileflags, 1) in test.globs
      File "<doctest __main__.identify_changed_files_from_git_commits[1]>", line 1, in <module>
        'root' in [x.name for x in determine_modules_for_files(
            identify_changed_files_from_git_commits("50a0496a43", target_ref="6765ef9"))]
      File "./dev/run-tests.py", line 86, in identify_changed_files_from_git_commits
        universal_newlines=True)
      File "/Users/nick/miniconda2/lib/python2.7/subprocess.py", line 573, in check_output
        raise CalledProcessError(retcode, cmd, output=output)
    CalledProcessError: Command '['git', 'diff', '--name-only', '50a0496a43', '6765ef9']' returned non-zero exit status 1
**********************************************************************
1 items had failures:
   2 of   2 in __main__.identify_changed_files_from_git_commits
***Test Failed*** 2 failures.
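
For context, here is roughly what the failing doctest boils down to. This is a
paraphrased sketch of the helper in dev/run-tests.py, not its exact code: it
shells out to git, and in an extracted source tarball those commits aren't in
any local history, so git exits non-zero and check_output raises:

    import subprocess

    def identify_changed_files_from_git_commits(patch_sha, target_ref):
        # `git diff` exits non-zero when either revision cannot be
        # resolved, and check_output turns that into the
        # CalledProcessError seen in the doctest output above.
        raw = subprocess.check_output(
            ['git', 'diff', '--name-only', patch_sha, target_ref],
            universal_newlines=True)
        return [line.strip() for line in raw.splitlines() if line.strip()]

So the doctests can only pass from a full git clone where those SHAs are
reachable.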



On Fri, 24 Jun 2016 at 06:59 Yin Huai <yh...@databricks.com> wrote:

> -1 because of https://issues.apache.org/jira/browse/SPARK-16121.
>
> This JIRA was resolved after 2.0.0-RC1 was cut. Without the fix, Spark
> SQL effectively uses only the driver to list files when loading datasets,
> and the driver-side file listing is very slow for datasets with many
> files and partitions. Since this bug causes a serious performance
> regression, I am giving -1.
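>
> As a rough illustration (not Spark's internals) of why serial, driver-side
> listing hurts with many partition directories, compare a single loop with
> a fanned-out listing; `paths` here is a hypothetical list of partition
> directories:
>
>     import os
>     from concurrent.futures import ThreadPoolExecutor
>
>     def list_serially(paths):
>         # One process visits every partition directory in turn, which is
>         # effectively what the driver does without the SPARK-16121 fix.
>         return [f for p in paths for f in os.listdir(p)]
>
>     def list_in_parallel(paths, workers=32):
>         # Fanning the listing out (Spark distributes it across the
>         # cluster) amortizes the per-directory latency over many workers.
>         with ThreadPoolExecutor(max_workers=workers) as pool:
>             return [f for files in pool.map(os.listdir, paths) for f in files]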
>
> On Thu, Jun 23, 2016 at 1:25 AM, Pete Robbins <robbin...@gmail.com> wrote:
>
>> I'm also seeing some of these same failures:
>>
>> - spilling with compression *** FAILED ***
>> I have seen this occasionally.
>>
>> - to UTC timestamp *** FAILED ***
>> This was fixed yesterday in branch-2.0 (
>> https://issues.apache.org/jira/browse/SPARK-16078)
>>
>> - offset recovery *** FAILED ***
>> Haven't seen this for a while and thought the flaky test was fixed, but it
>> popped up again in one of our builds.
>>
>> StateStoreSuite:
>> - maintenance *** FAILED ***
>> I've just seen that this has been failing for the last two days on one
>> build machine (Linux amd64).
>>
>>
>> On 23 June 2016 at 08:51, Sean Owen <so...@cloudera.com> wrote:
>>
>>> First pass of feedback on the RC: all the sigs, hashes, etc. are fine.
>>> Licensing is up to date to the best of my knowledge.
>>>
>>> I'm hitting test failures, some of which may be spurious. Just putting
>>> them out there to see if they ring bells. This is Java 8 on Ubuntu 16.
>>>
>>>
>>> - spilling with compression *** FAILED ***
>>>   java.lang.Exception: Test failed with compression using codec
>>>   org.apache.spark.io.SnappyCompressionCodec: assertion failed:
>>>   expected cogroup to spill, but did not
>>>   at scala.Predef$.assert(Predef.scala:170)
>>>   at org.apache.spark.TestUtils$.assertSpilled(TestUtils.scala:170)
>>>   at org.apache.spark.util.collection.ExternalAppendOnlyMapSuite.org$apache$spark$util$collection$ExternalAppendOnlyMapSuite$$testSimpleSpilling(ExternalAppendOnlyMapSuite.scala:263)
>>> ...
>>>
>>> I feel like I've seen this before, and I see some possibly relevant
>>> fixes, but they're already in 2.0.0:
>>> https://github.com/apache/spark/pull/10990
>>> Is this a case where a native library needs to be installed?
>>>
>>>
>>> - to UTC timestamp *** FAILED ***
>>>   "2016-03-13 [02]:00:00.0" did not equal "2016-03-13 [10]:00:00.0"
>>>   (DateTimeUtilsSuite.scala:506)
>>>
>>> I know, we talked about this for the 1.6.2 RC, but I reproduced it
>>> locally too. I will investigate; it could still be spurious.
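>>>
>>> For what it's worth, the two values are the same instant: 02:00 in
>>> America/Los_Angeles on 2016-03-13 is the spring-forward boundary, which
>>> is 10:00 UTC (the 8-hour gap suggests that zone is in play). A quick
>>> sanity check in plain Python with pytz; this is just an illustration,
>>> not the Spark code under test:
>>>
>>>     from datetime import datetime
>>>     import pytz  # assumes pytz is installed
>>>
>>>     # 10:00 UTC on 2016-03-13 is the PST -> PDT transition instant,
>>>     # so Los Angeles clocks jump from 02:00 straight to 03:00.
>>>     utc = pytz.utc.localize(datetime(2016, 3, 13, 10, 0, 0))
>>>     la = pytz.timezone('America/Los_Angeles')
>>>     print(utc.astimezone(la))  # 2016-03-13 03:00:00-07:00
>>>
>>> So a machine whose default timezone differs from the one the test
>>> assumes could plausibly produce the unconverted 02:00 value.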
>>>
>>>
>>> StateStoreSuite:
>>> - maintenance *** FAILED ***
>>>   The code passed to eventually never returned normally. Attempted 627
>>>   times over 10.000180116 seconds. Last failure message:
>>>   StateStoreSuite.this.fileExists(provider, 1L, false) was true
>>>   earliest file not deleted. (StateStoreSuite.scala:395)
>>>
>>> No idea.
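>>>
>>> For anyone not familiar with it, ScalaTest's eventually just re-runs the
>>> block until it passes or a timeout elapses; a rough Python sketch of the
>>> idea (not ScalaTest's implementation):
>>>
>>>     import time
>>>
>>>     def eventually(assertion, timeout=10.0, interval=0.015):
>>>         # Re-run `assertion` until it stops raising or `timeout`
>>>         # elapses; the suite above made 627 attempts in ~10 seconds
>>>         # before giving up.
>>>         deadline = time.time() + timeout
>>>         attempts, last = 0, None
>>>         while time.time() < deadline:
>>>             attempts += 1
>>>             try:
>>>                 return assertion()
>>>             except AssertionError as exc:
>>>                 last = exc
>>>                 time.sleep(interval)
>>>         raise AssertionError('gave up after %d attempts: %s' % (attempts, last))
>>>
>>> So this failure just means the earliest store file still existed after
>>> the full ten seconds of polling.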
>>>
>>>
>>> - offset recovery *** FAILED ***
>>>   The code passed to eventually never returned normally. Attempted 197
>>>   times over 10.040864806 seconds. Last failure message:
>>>   strings.forall({
>>>     ((x$1: Any) => DirectKafkaStreamSuite.collectedData.contains(x$1))
>>>   }) was false. (DirectKafkaStreamSuite.scala:250)
>>>
>>> This is also something that was possibly fixed already for 2.0.0 and
>>> that I just back-ported into 1.6. It could be just a very similar failure.
>>>
>>> On Wed, Jun 22, 2016 at 2:26 AM, Reynold Xin <r...@databricks.com> wrote:
>>> > Please vote on releasing the following candidate as Apache Spark version
>>> > 2.0.0. The vote is open until Friday, June 24, 2016 at 19:00 PDT and passes
>>> > if a majority of at least 3 +1 PMC votes are cast.
>>> >
>>> > [ ] +1 Release this package as Apache Spark 2.0.0
>>> > [ ] -1 Do not release this package because ...
>>> >
>>> >
>>> > The tag to be voted on is v2.0.0-rc1
>>> > (0c66ca41afade6db73c9aeddd5aed6e5dcea90df).
>>> >
>>> > This release candidate resolves ~2400 issues:
>>> > https://s.apache.org/spark-2.0.0-rc1-jira
>>> >
>>> > The release files, including signatures, digests, etc. can be found at:
>>> > http://people.apache.org/~pwendell/spark-releases/spark-2.0.0-rc1-bin/
>>> >
>>> > Release artifacts are signed with the following key:
>>> > https://people.apache.org/keys/committer/pwendell.asc
>>> >
>>> > The staging repository for this release can be found at:
>>> > https://repository.apache.org/content/repositories/orgapachespark-1187/
>>> >
>>> > The documentation corresponding to this release can be found at:
>>> > http://people.apache.org/~pwendell/spark-releases/spark-2.0.0-rc1-docs/
>>> >
>>> >
>>> > =======================================
>>> > == How can I help test this release? ==
>>> > =======================================
>>> > If you are a Spark user, you can help us test this release by taking an
>>> > existing Spark workload, running it on this release candidate, and
>>> > reporting any regressions from 1.x.
>>> >
>>> > ================================================
>>> > == What justifies a -1 vote for this release? ==
>>> > ================================================
>>> > Critical bugs impacting major functionality.
>>> >
>>> > Bugs already present in 1.x, missing features, or bugs related to new
>>> > features will not necessarily block this release. Note that historically
>>> > Spark documentation has been published on the website separately from the
>>> > main release, so we do not need to block the release due to documentation
>>> > errors either.
>>> >
>>> >
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: dev-h...@spark.apache.org
>>>
>>>
>>
>
