I am not sure any other process makes sense. What are you suggesting should
happen?

On Sat, Jul 2, 2016, 22:27 Jacek Laskowski <ja...@japila.pl> wrote:

> Hi,
>
> Thanks Sean! It makes sense.
>
> I'm not fully convinced that's how it should be, so I apologize in
> advance if I ever ask about version management in Spark again :)
>
> Pozdrawiam,
> Jacek Laskowski
> ----
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
> On Sat, Jul 2, 2016 at 11:19 PM, Sean Owen <so...@cloudera.com> wrote:
> > Because a 2.0.0 release candidate is out. If for some reason the
> > release candidate becomes the 2.0.0 release, then anything merged to
> > branch-2.0 after it is necessarily fixed in 2.0.1 at best. At this
> > stage we know that RC1 will not be 2.0.0, so really that vote should be
> > formally cancelled. Then we just mark anything fixed for 2.0.1 as
> > fixed for 2.0.0 and make another RC.
> >
> > master is not what will be released as 2.0.0. branch-2.0 is what will
> > contain that release.
> >
> > On Sat, Jul 2, 2016 at 10:11 PM, Jacek Laskowski <ja...@japila.pl> wrote:
> >> Hi Sean, devs,
> >>
> >> How is it possible that Fix Version/s is 2.0.1 given that 2.0.0 has
> >> not been released yet? Why is it that master is not what's going to
> >> be released and so eventually becomes 2.0.0? I don't get it. I'd
> >> appreciate any guidance. Thanks.
> >>
> >> Pozdrawiam,
> >> Jacek Laskowski
> >> ----
> >> https://medium.com/@jaceklaskowski/
> >> Mastering Apache Spark http://bit.ly/mastering-apache-spark
> >> Follow me at https://twitter.com/jaceklaskowski
> >>
> >>
> >> On Sat, Jul 2, 2016 at 5:30 PM, Sean Owen (JIRA) <j...@apache.org> wrote:
> >>>
> >>> [ https://issues.apache.org/jira/browse/SPARK-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
> >>>
> >>> Sean Owen resolved SPARK-16345.
> >>> -------------------------------
> >>>        Resolution: Fixed
> >>>     Fix Version/s: 2.0.1
> >>>
> >>> Issue resolved by pull request 14015
> >>> [https://github.com/apache/spark/pull/14015]
> >>>
> >>>> Extract graphx programming guide example snippets from source files
> instead of hard-coding them
> >>>>
> ---------------------------------------------------------------------------------------------
> >>>>
> >>>>                 Key: SPARK-16345
> >>>>                 URL: https://issues.apache.org/jira/browse/SPARK-16345
> >>>>             Project: Spark
> >>>>          Issue Type: Improvement
> >>>>          Components: Documentation, Examples, GraphX
> >>>>    Affects Versions: 2.0.0
> >>>>            Reporter: Weichen Xu
> >>>>             Fix For: 2.0.1
> >>>>
> >>>>
> >>>> Currently, all example snippets in the graphx programming guide are
> hard-coded, which makes them hard to update and verify. In contrast, the
> ML documentation pages use the include_example Jekyll plugin to extract
> snippets from actual source files under the examples sub-project. That
> way we can guarantee that the Java and Scala code compiles, and it
> becomes much easier to verify the example snippets since they are part
> of complete Spark applications.
> >>>> A similar task is SPARK-11381.
> >>>
> >>>
> >>>
> >>
>
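
P.S. On the include_example mechanism mentioned in the quoted JIRA issue:
roughly, the guide page pulls a delimited region out of a real example
program under the examples sub-project, so the snippet is compiled and run
along with everything else. A minimal sketch of what such an example file
could look like (the object name, file path and data path below are made up
for illustration; the $example$ markers are what the plugin looks for, as
far as I recall):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.graphx.GraphLoader

    // Hypothetical file, e.g.
    // examples/src/main/scala/org/apache/spark/examples/graphx/PageRankSnippetExample.scala
    object PageRankSnippetExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("PageRankSnippetExample"))
        // $example on$
        // Only the region between the markers is rendered in the programming guide.
        val graph = GraphLoader.edgeListFile(sc, "data/graphx/followers.txt")
        val ranks = graph.pageRank(0.0001).vertices
        // $example off$
        println(ranks.collect().mkString("\n"))
        sc.stop()
      }
    }

The guide markdown would then reference it with something like
{% include_example scala/org/apache/spark/examples/graphx/PageRankSnippetExample.scala %}
instead of a hard-coded snippet, so the docs cannot drift away from code
that actually compiles.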
