Hi Ted,

We haven't observed a StreamingContextSuite failure on our test
infrastructure recently. Given that we cannot reproduce it even locally,
it is unlikely that this failure reflects a real bug. Even if it does, I
would not block the release on it, because many in the community are
waiting for a few important fixes. In general, there will always be
outstanding issues in Spark, and we cannot address all of them in every
release.

-Andrew

2015-06-29 14:29 GMT-07:00 Ted Yu <yuzhih...@gmail.com>:

> The test passes when run alone on my machine as well.
>
> Please run the full test suite.
>
> Thanks
>
> On Mon, Jun 29, 2015 at 2:01 PM, Tathagata Das <
> tathagata.das1...@gmail.com> wrote:
>
>> @Ted, I ran the following two commands.
>>
>> mvn -Phadoop-2.4 -Dhadoop.version=2.7.0 -Pyarn -Phive -DskipTests \
>>   clean package
>>
>> mvn -Phadoop-2.4 -Dhadoop.version=2.7.0 -Pyarn -Phive \
>>   -DwildcardSuites=org.apache.spark.streaming.StreamingContextSuite test
>>
>> Using Java version "1.7.0_51", the tests passed normally.
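>>
>> For reference, a minimal sketch of the kind of start/stop lifecycle
>> check that StreamingContextSuite exercises (my own illustration in
>> shell-style Scala, not the actual suite code):
>>
>>   import org.apache.spark.SparkConf
>>   import org.apache.spark.rdd.RDD
>>   import org.apache.spark.streaming.{Seconds, StreamingContext}
>>   import scala.collection.mutable.Queue
>>
>>   // Local two-thread context with a 1-second batch interval.
>>   val conf = new SparkConf().setMaster("local[2]").setAppName("smoke")
>>   val ssc = new StreamingContext(conf, Seconds(1))
>>
>>   // queueStream needs no external services; a context must have at
>>   // least one output operation registered before start().
>>   val input = ssc.queueStream(Queue.empty[RDD[Int]])
>>   input.count().print()
>>
>>   ssc.start()
>>   ssc.awaitTerminationOrTimeout(2000)  // wait up to 2 seconds
>>   ssc.stop(stopSparkContext = true)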
>>
>>
>>
>> On Mon, Jun 29, 2015 at 1:05 PM, Krishna Sankar <ksanka...@gmail.com>
>> wrote:
>>
>>> +1 (non-binding, of course)
>>>
>>> 1. Compiled on OS X 10.10 (Yosemite) OK. Total time: 13:26 min
>>>      mvn clean package -Pyarn -Phadoop-2.6 -DskipTests
>>> 2. Tested pyspark, mllib
>>> 2.1. statistics (min, max, mean, Pearson, Spearman) OK
>>> 2.2. Linear/Ridge/Lasso Regression OK
>>> 2.3. Decision Tree, Naive Bayes OK
>>> 2.4. KMeans OK
>>>        Center And Scale OK
>>> 2.5. RDD operations OK
>>>       State of the Union Texts - MapReduce, Filter, sortByKey (word count)
>>> 2.6. Recommendation (Movielens medium dataset ~1 M ratings) OK
>>>        Model evaluation/optimization (rank, numIter, lambda) with
>>> itertools OK
>>> 3. Scala - MLlib
>>> 3.1. statistics (min, max, mean, Pearson, Spearman) OK (see sketch below)
>>> 3.2. LinearRegressionWithSGD OK
>>> 3.3. Decision Tree OK
>>> 3.4. KMeans OK
>>> 3.5. Recommendation (Movielens medium dataset ~1 M ratings) OK
>>> 3.6. saveAsParquetFile OK (round-trip sketch below)
>>> 3.7. Read and verify the 3.6 save (above) - sqlContext.parquetFile,
>>> registerTempTable, sql OK
>>> 3.8. result = sqlContext.sql("SELECT
>>> OrderDetails.OrderID, ShipCountry, UnitPrice, Qty, Discount FROM Orders
>>> INNER JOIN OrderDetails ON Orders.OrderID = OrderDetails.OrderID") OK
>>> 4.0. Spark SQL from Python OK
>>> 4.1. result = sqlContext.sql("SELECT * from people WHERE State = 'WA'")
>>> OK
>>> 5.0. Packages
>>> 5.1. com.databricks.spark.csv - read/write OK
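>>>
>>> A rough Scala sketch of the 3.1 statistics and 3.4 KMeans checks
>>> (toy in-memory data standing in for the real datasets, assuming an
>>> existing SparkContext sc, as in spark-shell):
>>>
>>>   import org.apache.spark.mllib.clustering.KMeans
>>>   import org.apache.spark.mllib.linalg.Vectors
>>>   import org.apache.spark.mllib.stat.Statistics
>>>
>>>   val data = sc.parallelize(Seq(
>>>     Vectors.dense(1.0, 2.0), Vectors.dense(2.0, 4.1),
>>>     Vectors.dense(9.0, 9.5), Vectors.dense(10.0, 10.2)))
>>>
>>>   // Column summaries plus a Pearson correlation matrix;
>>>   // Statistics.corr(data, "spearman") gives the Spearman variant.
>>>   val summary = Statistics.colStats(data)
>>>   println(s"min=${summary.min} max=${summary.max} mean=${summary.mean}")
>>>   println(Statistics.corr(data, "pearson"))
>>>
>>>   // Fit k-means (k = 2, maxIterations = 10), report the cost.
>>>   val model = KMeans.train(data, 2, 10)
>>>   println(s"cost=${model.computeCost(data)}")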
>>>
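>>> And a similar sketch of the 3.6-3.8 Parquet round trip; the table,
>>> column names, and path below are made up for illustration:
>>>
>>>   import org.apache.spark.sql.SQLContext
>>>
>>>   val sqlContext = new SQLContext(sc)
>>>   import sqlContext.implicits._
>>>
>>>   val orders = Seq((1, "WA", 10.0), (2, "CA", 12.5))
>>>     .toDF("OrderID", "ShipCountry", "UnitPrice")
>>>   orders.saveAsParquetFile("/tmp/orders.parquet")
>>>
>>>   // Read it back, register a temp table, and query it with SQL.
>>>   val back = sqlContext.parquetFile("/tmp/orders.parquet")
>>>   back.registerTempTable("Orders")
>>>   sqlContext.sql(
>>>     "SELECT OrderID, ShipCountry FROM Orders WHERE UnitPrice > 11").show()
>>>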
>>> Cheers
>>> <k/>
>>>
>>> On Tue, Jun 23, 2015 at 10:37 PM, Patrick Wendell <pwend...@gmail.com>
>>> wrote:
>>>
>>>> Please vote on releasing the following candidate as Apache Spark
>>>> version 1.4.1!
>>>>
>>>> This release fixes a handful of known issues in Spark 1.4.0, listed
>>>> here:
>>>> http://s.apache.org/spark-1.4.1
>>>>
>>>> The tag to be voted on is v1.4.1-rc1 (commit 60e08e5):
>>>> https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=60e08e50751fe3929156de956d62faea79f5b801
>>>>
>>>> The release files, including signatures, digests, etc. can be found at:
>>>> http://people.apache.org/~pwendell/spark-releases/spark-1.4.1-rc1-bin/
>>>>
>>>> Release artifacts are signed with the following key:
>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>
>>>> The staging repository for this release can be found at:
>>>> [published as version: 1.4.1]
>>>> https://repository.apache.org/content/repositories/orgapachespark-1118/
>>>> [published as version: 1.4.1-rc1]
>>>> https://repository.apache.org/content/repositories/orgapachespark-1119/
>>>>
>>>> The documentation corresponding to this release can be found at:
>>>> http://people.apache.org/~pwendell/spark-releases/spark-1.4.1-rc1-docs/
>>>>
>>>> Please vote on releasing this package as Apache Spark 1.4.1!
>>>>
>>>> The vote is open until Saturday, June 27, at 06:32 UTC and passes
>>>> if a majority of at least 3 +1 PMC votes are cast.
>>>>
>>>> [ ] +1 Release this package as Apache Spark 1.4.1
>>>> [ ] -1 Do not release this package because ...
>>>>
>>>> To learn more about Apache Spark, please see
>>>> http://spark.apache.org/
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>>>> For additional commands, e-mail: dev-h...@spark.apache.org
>>>>
>>>>
>>>
>>
>
