The Scala 2.11 issue should be fixed, but it doesn't need to be a blocker,
since the Maven build works fine. The sbt build is stricter, to make sure
we catch warnings.



On Wed, Aug 26, 2015 at 10:01 AM, Sean Owen <so...@cloudera.com> wrote:

> My quick take: no blockers at this point, except for one potential
> issue. There are still some 'critical' bugs worth a look. The release
> seems to pass tests, but I get a lot of spurious failures; it took about
> 16 hours of running tests to get everything to pass at least once.
>
>
> Current score: 56 issues targeted at 1.5.0, of which 14 are bugs; of
> those, none are blockers and 8 are critical.
>
> This one might be a blocker, as it seems to mean that SBT + Scala 2.11
> does not compile:
> https://issues.apache.org/jira/browse/SPARK-10227
>
> It's a pretty simple issue, but please weigh in on the PR:
> https://github.com/apache/spark/pull/8433
>
> For reference, here are the Critical ones:
>
> Key         | Component   | Summary                                                                  | Assignee
> SPARK-6484  | Spark Core  | Ganglia metrics xml reporter doesn't escape correctly                    | Josh Rosen
> SPARK-6701  | Tests, YARN | Flaky test: o.a.s.deploy.yarn.YarnClusterSuite Python application        | (unassigned)
> SPARK-7420  | Tests       | Flaky test: o.a.s.streaming.JobGeneratorSuite "Do not clear received block data too soon" | Tathagata Das
> SPARK-8119  | Spark Core  | HeartbeatReceiver should not adjust application executor resources       | Andrew Or
> SPARK-8414  | Spark Core  | Ensure ContextCleaner actually triggers clean ups                        | Andrew Or
> SPARK-8447  | Shuffle     | Test external shuffle service with all shuffle managers                  | (unassigned)
> SPARK-10224 | Streaming   | BlockGenerator may lost data in the last block                           | (unassigned)
> SPARK-10287 | SQL         | After processing a query using JSON data, Spark SQL continuously refreshes metadata of the table | (unassigned)
> Total: 8 issues
>
>
> I'm seeing the following tests fail intermittently, with "-Phive
> -Phive-thriftserver -Phadoop-2.6" on Ubuntu 15 / Java 7:
>
> - security mismatch password *** FAILED ***
>   Expected exception java.io.IOException to be thrown, but
> java.nio.channels.CancelledKeyException was thrown.
> (ConnectionManagerSuite.scala:123)
>
>
> DAGSchedulerSuite:
> ...
> - misbehaved resultHandler should not crash DAGScheduler and
> SparkContext *** FAILED ***
>   java.lang.UnsupportedOperationException: taskSucceeded() called on a
> finished JobWaiter was not instance of
> org.apache.spark.scheduler.DAGSchedulerSuiteDummyException
> (DAGSchedulerSuite.scala:861)
>
> HeartbeatReceiverSuite:
> ...
> - normal heartbeat *** FAILED ***
>   3 did not equal 2 (HeartbeatReceiverSuite.scala:104)
>
>
> - Unpersisting HttpBroadcast on executors only in distributed mode ***
> FAILED ***
>   ...
> - Unpersisting HttpBroadcast on executors and driver in distributed
> mode *** FAILED ***
>   ...
> - Unpersisting TorrentBroadcast on executors only in distributed mode
> *** FAILED ***
>   ...
> - Unpersisting TorrentBroadcast on executors and driver in distributed
> mode *** FAILED ***
>
>
> StreamingContextSuite:
> ...
> - stop gracefully *** FAILED ***
>   1749735 did not equal 1190429 Received records = 1749735, processed
> records = 1190428 (StreamingContextSuite.scala:279)
>
>
> DirectKafkaStreamSuite:
> - offset recovery *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 193
> times over 10.010808486 seconds. Last failure message:
> strings.forall({
>     ((elem: Any) => DirectKafkaStreamSuite.collectedData.contains(elem))
>   }) was false. (DirectKafkaStreamSuite.scala:249)
>
> On Wed, Aug 26, 2015 at 5:28 AM, Reynold Xin <r...@databricks.com> wrote:
> > Please vote on releasing the following candidate as Apache Spark version
> > 1.5.0. The vote is open until Friday, Aug 29, 2015 at 5:00 UTC and passes
> > if a majority of at least 3 +1 PMC votes are cast.
> >
> > [ ] +1 Release this package as Apache Spark 1.5.0
> > [ ] -1 Do not release this package because ...
> >
> > To learn more about Apache Spark, please see http://spark.apache.org/
> >
> >
> > The tag to be voted on is v1.5.0-rc2:
> > https://github.com/apache/spark/tree/727771352855dbb780008c449a877f5aaa5fc27a
> >
> > The release files, including signatures, digests, etc. can be found at:
> > http://people.apache.org/~pwendell/spark-releases/spark-1.5.0-rc2-bin/
> >
> > Release artifacts are signed with the following key:
> > https://people.apache.org/keys/committer/pwendell.asc
> >
> > The staging repository for this release (published as 1.5.0-rc2) can be
> > found at:
> > https://repository.apache.org/content/repositories/orgapachespark-1141/
> >
> > The staging repository for this release (published as 1.5.0) can be found
> > at:
> > https://repository.apache.org/content/repositories/orgapachespark-1140/
> >
> > The documentation corresponding to this release can be found at:
> > http://people.apache.org/~pwendell/spark-releases/spark-1.5.0-rc2-docs/
> >
> >
> > =======================================
> > How can I help test this release?
> > =======================================
> > If you are a Spark user, you can help us test this release by taking an
> > existing Spark workload and running it on this release candidate, then
> > reporting any regressions.
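> >
> > A smoke test can be as small as the sketch below, a made-up word count
> > (the app name and input path are placeholders; it assumes the RC's
> > spark-core on the classpath):
> >
> >   import org.apache.spark.{SparkConf, SparkContext}
> >
> >   object Rc2Smoke {
> >     def main(args: Array[String]): Unit = {
> >       val sc = new SparkContext(
> >         new SparkConf().setAppName("rc2-smoke").setMaster("local[2]"))
> >       val counts = sc.textFile("README.md")      // placeholder input
> >         .flatMap(_.split("\\s+"))
> >         .map((_, 1))
> >         .reduceByKey(_ + _)
> >       println(counts.take(5).mkString("\n"))     // sanity check only
> >       sc.stop()
> >     }
> >   }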
> >
> >
> > ================================================
> > What justifies a -1 vote for this release?
> > ================================================
> > This vote is happening towards the end of the 1.5 QA period, so -1 votes
> > should only occur for significant regressions from 1.4. Bugs already
> > present in 1.4, minor regressions, or bugs related to new features will
> > not block this release.
> >
> >
> > ===============================================================
> > What should happen to JIRA tickets still targeting 1.5.0?
> > ===============================================================
> > 1. It is OK for documentation patches to target 1.5.0 and still go into
> > branch-1.5, since documentation will be packaged separately from the
> > release.
> > 2. New features for non-alpha-modules should target 1.6+.
> > 3. Non-blocker bug fixes should target 1.5.1 or 1.6.0, or drop the target
> > version.
> >
> >
> > ==================================================
> > Major changes to help you focus your testing
> > ==================================================
> >
> > As of today, Spark 1.5 contains more than 1000 commits from 220+
> > contributors. I've curated a list of important changes for 1.5. For the
> > complete list, please refer to the Apache JIRA changelog.
> >
> > RDD/DataFrame/SQL APIs
> >
> > - New UDAF interface
> > - DataFrame hints for broadcast join
> > - expr function for turning a SQL expression into a DataFrame column
> >   (both sketched just after this list)
> > - Improved support for NaN values
> > - StructType now supports ordering
> > - TimestampType precision is reduced to 1us
> > - 100 new built-in expressions, including date/time, string, and math
> > functions
> > - Memory and local-disk-only checkpointing
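> >
> > A hedged sketch of the broadcast hint and expr, assuming two existing
> > DataFrames with hypothetical names largeDF and smallDF:
> >
> >   import org.apache.spark.sql.functions.{broadcast, expr}
> >
> >   // Hint that smallDF should be broadcast to executors for the join,
> >   // then add a column computed from a SQL expression string.
> >   val joined = largeDF.join(broadcast(smallDF), "key")
> >     .withColumn("doubled", expr("value * 2"))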
> >
> > DataFrame/SQL Backend Execution
> >
> > - Code generation on by default
> > - Improved join, aggregation, shuffle, and sorting, using cache-friendly
> > and external algorithms
> > - Improved window function performance
> > - Better metrics instrumentation and reporting for DF/SQL execution plans
> >
> > Data Sources, Hive, Hadoop, Mesos and Cluster Management
> >
> > - Dynamic allocation support in all resource managers (Mesos, YARN,
> > Standalone)
> > - Improved Mesos support (framework authentication, roles, dynamic
> > allocation, constraints)
> > - Improved YARN support (dynamic allocation with preferred locations)
> > - Improved Hive support (metastore partition pruning, metastore
> > connectivity for 0.13 through 1.2, internal Hive upgrade to 1.2)
> > - Support persisting data in Hive compatible format in metastore
> > - Support data partitioning for JSON data sources
> > - Parquet improvements (upgrade to 1.7, predicate pushdown, faster
> > metadata discovery and schema merging, support reading non-standard
> > legacy Parquet files generated by other libraries)
> > - Faster and more robust dynamic partition insert
> > - DataSourceRegister interface for external data sources to specify short
> > names (sketched just below)
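> >
> > A minimal, hypothetical sketch of the new short-name hook (the format
> > name "myformat" and the relation logic are placeholders):
> >
> >   import org.apache.spark.sql.SQLContext
> >   import org.apache.spark.sql.sources.{BaseRelation, DataSourceRegister,
> >     RelationProvider}
> >
> >   class DefaultSource extends RelationProvider with DataSourceRegister {
> >     // Lets users write sqlContext.read.format("myformat") instead of
> >     // the fully qualified provider class name.
> >     override def shortName(): String = "myformat"
> >
> >     override def createRelation(
> >         sqlContext: SQLContext,
> >         parameters: Map[String, String]): BaseRelation = {
> >       ??? // construct and return the data source's BaseRelation here
> >     }
> >   }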
> >
> > SparkR
> >
> > - YARN cluster mode in R
> > - GLMs with R formula, binomial/Gaussian families, and elastic-net
> > regularization
> > - Improved error messages
> > - Aliases to make DataFrame functions more R-like
> >
> > Streaming
> >
> > - Backpressure for handling bursty input streams (enabling it is
> > sketched after this list).
> > - Improved Python support for streaming sources (Kafka offsets, Kinesis,
> > MQTT, Flume)
> > - Improved Python streaming machine learning algorithms (K-Means, linear
> > regression, logistic regression)
> > - Native reliable Kinesis stream support
> > - Input metadata like Kafka offsets made visible in the batch details UI
> > - Better load balancing and scheduling of receivers across cluster
> > - Include streaming storage in web UI
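> >
> > Backpressure is worth exercising while testing; a hedged sketch of
> > enabling it via SparkConf, assuming the spark.streaming.backpressure.enabled
> > flag introduced with this feature (the app name is a placeholder):
> >
> >   import org.apache.spark.SparkConf
> >
> >   // Let the ingestion rate adapt to the observed processing speed.
> >   val conf = new SparkConf()
> >     .setAppName("backpressure-demo")
> >     .set("spark.streaming.backpressure.enabled", "true")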
> >
> > Machine Learning and Advanced Analytics
> >
> > - Feature transformers: CountVectorizer, Discrete Cosine Transform (DCT),
> > MinMaxScaler, NGram, PCA, RFormula, StopWordsRemover, and VectorSlicer
> > (a few are combined in the pipeline sketch after this list).
> > - Estimators under pipeline APIs: naive Bayes, k-means, and isotonic
> > regression.
> > - Algorithms: multilayer perceptron classifier, PrefixSpan for sequential
> > pattern mining, association rule generation, 1-sample Kolmogorov-Smirnov
> > test.
> > - Improvements to existing algorithms: LDA, trees/ensembles, GMMs
> > - More efficient Pregel API implementation for GraphX
> > - Model summary for linear and logistic regression.
> > - Python API: distributed matrices, streaming k-means and linear models,
> > LDA, power iteration clustering, etc.
> > - Tuning and evaluation: train-validation split and multiclass
> > classification evaluator.
> > - Documentation: document the release version of public API methods
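> >
> > A hedged pipeline sketch combining a couple of the new feature
> > transformers (assumes an existing SQLContext named sqlContext; the
> > data is made up):
> >
> >   import org.apache.spark.ml.Pipeline
> >   import org.apache.spark.ml.feature.{CountVectorizer, StopWordsRemover,
> >     Tokenizer}
> >
> >   val df = sqlContext.createDataFrame(Seq(
> >     (0, "spark one five adds backpressure"),
> >     (1, "the quick brown fox")
> >   )).toDF("id", "text")
> >
> >   val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
> >   val remover = new StopWordsRemover()
> >     .setInputCol("words").setOutputCol("filtered")
> >   val vectorizer = new CountVectorizer()
> >     .setInputCol("filtered").setOutputCol("features")
> >
> >   // Fit the whole pipeline, then inspect the vectorized output.
> >   val model = new Pipeline()
> >     .setStages(Array(tokenizer, remover, vectorizer)).fit(df)
> >   model.transform(df).select("id", "features").show()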
> >
>
