PR is here: https://github.com/apache/spark/pull/18112


> On May 25, 2017, at 10:28 AM, Michael Allman <mich...@videoamp.com> wrote:
> 
> Michael,
> 
> If you haven't started cutting the new RC, I'm working on a documentation PR 
> right now that I'm hoping we can get into Spark 2.2 as a migration note, even 
> if it's just a mention: https://issues.apache.org/jira/browse/SPARK-20888.
> 
> Michael
> 
> 
>> On May 22, 2017, at 11:39 AM, Michael Armbrust <mich...@databricks.com> wrote:
>> 
>> I'm waiting for SPARK-20814 (https://issues.apache.org/jira/browse/SPARK-20814) 
>> at Marcelo's request, and I'd also like to include SPARK-20844 
>> (https://issues.apache.org/jira/browse/SPARK-20844). I think we should be 
>> able to cut another RC midweek.
>> 
>> On Fri, May 19, 2017 at 11:53 AM, Nick Pentreath <nick.pentre...@gmail.com> wrote:
>> All the outstanding ML QA doc and user guide items are done for 2.2, so from 
>> that side we should be good to cut another RC :)
>> 
>> 
>> On Thu, 18 May 2017 at 00:18 Russell Spitzer <russell.spit...@gmail.com> wrote:
>> Seeing an issue with DataSourceScanExec and some of our integration tests for 
>> the SCC (Spark Cassandra Connector). Running DataFrame reads and writes from 
>> the shell seems fine, but the redaction code seems to get a None when doing 
>> SparkSession.getActiveSession.get in our integration tests. I'm not sure why, 
>> but I'll dig into this later if I get a chance.
>> 
>> Example Failed Test:
>> https://github.com/datastax/spark-cassandra-connector/blob/v2.0.1/spark-cassandra-connector/src/it/scala/com/datastax/spark/connector/sql/CassandraSQLSpec.scala#L311
>> 
>> ```
>> [info]   org.apache.spark.SparkException: Job aborted due to stage failure: 
>> Task serialization failed: java.util.NoSuchElementException: None.get
>> [info] java.util.NoSuchElementException: None.get
>> [info]       at scala.None$.get(Option.scala:347)
>> [info]       at scala.None$.get(Option.scala:345)
>> [info]       at org.apache.spark.sql.execution.DataSourceScanExec$class.org$apache$spark$sql$execution$DataSourceScanExec$$redact(DataSourceScanExec.scala:70)
>> [info]       at org.apache.spark.sql.execution.DataSourceScanExec$$anonfun$4.apply(DataSourceScanExec.scala:54)
>> [info]       at org.apache.spark.sql.execution.DataSourceScanExec$$anonfun$4.apply(DataSourceScanExec.scala:52)
>> ```
>> 
>> Again, this only seems to reproduce in our IT suite, so I'm not sure if this 
>> is a real issue.
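[Editorial aside: the `None.get` failure above is plain Scala `Option` behavior rather than anything Spark-specific. A minimal sketch, using an `Option[String]` as a stand-in for `SparkSession.getActiveSession` (an assumption for illustration, not the actual Spark API), shows both the failing pattern and a defensive alternative:]

```scala
// Minimal sketch: calling .get on an empty Option throws
// NoSuchElementException, which is exactly the "None.get" message
// seen in the stack trace from the integration test.
object NoneGetDemo {
  def main(args: Array[String]): Unit = {
    // Stand-in for SparkSession.getActiveSession returning None
    // (e.g. when no session is registered on the current thread).
    val activeSession: Option[String] = None

    // The failing pattern: .get on None throws.
    val threw =
      try { activeSession.get; false }
      catch { case _: NoSuchElementException => true }
    println(s"None.get threw NoSuchElementException: $threw")

    // A defensive alternative: supply a fallback instead of calling .get.
    println(activeSession.getOrElse("fallback-session"))
  }
}
```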
>> 
>> 
>> On Tue, May 16, 2017 at 1:40 PM Joseph Bradley <jos...@databricks.com> wrote:
>> All of the ML/Graph/SparkR QA blocker JIRAs have been resolved.  Thanks to 
>> everyone who helped out on those!
>> 
>> We still have open ML/Graph/SparkR JIRAs targeted at 2.2, but they are 
>> essentially all for documentation.
>> 
>> Joseph
>> 
>> On Thu, May 11, 2017 at 3:08 PM, Marcelo Vanzin <van...@cloudera.com> wrote:
>> Since you'll be creating a new RC, I'd wait until SPARK-20666 is
>> fixed, since the change that caused it is in branch-2.2. Probably a
>> good idea to raise it to blocker at this point.
>> 
>> On Thu, May 11, 2017 at 2:59 PM, Michael Armbrust <mich...@databricks.com> wrote:
>> > I'm going to -1 given the outstanding issues and lack of +1s.  I'll create
>> > another RC once ML has had time to take care of the more critical problems.
>> > In the meantime please keep testing this release!
>> >
>> > On Tue, May 9, 2017 at 2:00 AM, Kazuaki Ishizaki <ishiz...@jp.ibm.com> wrote:
>> >>
>> >> +1 (non-binding)
>> >>
>> >> I tested it on Ubuntu 16.04 and OpenJDK8 on ppc64le. All of the tests for
>> >> core have passed.
>> >>
>> >> $ java -version
>> >> openjdk version "1.8.0_111"
>> >> OpenJDK Runtime Environment (build
>> >> 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
>> >> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>> >> $ build/mvn -DskipTests -Phive -Phive-thriftserver -Pyarn -Phadoop-2.7
>> >> package install
>> >> $ build/mvn -Phive -Phive-thriftserver -Pyarn -Phadoop-2.7 test -pl core
>> >> ...
>> >> Run completed in 15 minutes, 12 seconds.
>> >> Total number of tests run: 1940
>> >> Suites: completed 206, aborted 0
>> >> Tests: succeeded 1940, failed 0, canceled 4, ignored 8, pending 0
>> >> All tests passed.
>> >> [INFO]
>> >> ------------------------------------------------------------------------
>> >> [INFO] BUILD SUCCESS
>> >> [INFO]
>> >> ------------------------------------------------------------------------
>> >> [INFO] Total time: 16:51 min
>> >> [INFO] Finished at: 2017-05-09T17:51:04+09:00
>> >> [INFO] Final Memory: 53M/514M
>> >> [INFO]
>> >> ------------------------------------------------------------------------
>> >> [WARNING] The requested profile "hive" could not be activated because it
>> >> does not exist.
>> >>
>> >>
>> >> Kazuaki Ishizaki
>> >>
>> >>
>> >>
>> >> From:        Michael Armbrust <mich...@databricks.com>
>> >> To:        "dev@spark.apache.org" <dev@spark.apache.org>
>> >> Date:        2017/05/05 02:08
>> >> Subject:        [VOTE] Apache Spark 2.2.0 (RC2)
>> >> ________________________________
>> >>
>> >>
>> >>
>> >> Please vote on releasing the following candidate as Apache Spark version
>> >> 2.2.0. The vote is open until Tues, May 9th, 2017 at 12:00 PST and passes
>> >> if a majority of at least 3 +1 PMC votes are cast.
>> >>
>> >> [ ] +1 Release this package as Apache Spark 2.2.0
>> >> [ ] -1 Do not release this package because ...
>> >>
>> >>
>> >> To learn more about Apache Spark, please see http://spark.apache.org/
>> >>
>> >> The tag to be voted on is v2.2.0-rc2
>> >> (1d4017b44d5e6ad156abeaae6371747f111dd1f9)
>> >>
>> >> List of JIRA tickets resolved can be found with this filter.
>> >>
>> >> The release files, including signatures, digests, etc. can be found at:
>> >> http://home.apache.org/~pwendell/spark-releases/spark-2.2.0-rc2-bin/
>> >>
>> >> Release artifacts are signed with the following key:
>> >> https://people.apache.org/keys/committer/pwendell.asc
>> >>
>> >> The staging repository for this release can be found at:
>> >> https://repository.apache.org/content/repositories/orgapachespark-1236/
>> >>
>> >> The documentation corresponding to this release can be found at:
>> >> http://people.apache.org/~pwendell/spark-releases/spark-2.2.0-rc2-docs/
>> >>
>> >>
>> >> FAQ
>> >>
>> >> How can I help test this release?
>> >>
>> >> If you are a Spark user, you can help us test this release by taking an
>> >> existing Spark workload and running on this release candidate, then
>> >> reporting any regressions.
>> >>
>> >> What should happen to JIRA tickets still targeting 2.2.0?
>> >>
>> >> Committers should look at those and triage. Extremely important bug fixes,
>> >> documentation, and API tweaks that impact compatibility should be worked on
>> >> immediately. Please retarget everything else to 2.3.0 or 2.2.1.
>> >>
>> >> But my bug isn't fixed!??!
>> >>
>> >> In order to make timely releases, we will typically not hold the release
>> >> unless the bug in question is a regression from 2.1.1.
>> >>
>> >
>> 
>> 
>> 
>> --
>> Marcelo
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>> 
>> 
>> 
>> 
>> -- 
>> Joseph Bradley
>> Software Engineer - Machine Learning
>> Databricks, Inc.
>> http://databricks.com/
>> 
> 
