+1. I think having a 4-month window instead of a 3-month window sounds good.

However, I think figuring out a timeline for maintenance releases would
also be good. This is a common concern that comes up in many user
threads, and it would be better to have some structure around it. It
doesn't need to be strict, but something like the first maintenance
release for the latest 2.x.0 release within 2 months, and a second
maintenance release within 6 months, or something along those lines.

Thanks
Shivaram

On Tue, Sep 27, 2016 at 12:06 PM, Reynold Xin <r...@databricks.com> wrote:
> We are 2 months past releasing Spark 2.0.0, an important milestone for the
> project. Spark 2.0.0 deviated (it took 6 months) from the regular release
> cadence we had for the 1.x line, and we never explicitly discussed what the
> release cadence should look like for 2.x. Thus this email.
>
> During Spark 1.x, we made a new 1.x feature release roughly every three
> months (e.g. 1.5.0 came out three months after 1.4.0). Development happened
> primarily in the first two months, a release branch was cut at the end of
> month 2, and the last month was reserved for QA and release preparation.
>
> During 2.0.0 development, I really enjoyed the longer release cycle because
> there were a lot of major changes happening, and the longer time was
> critical for thinking through architectural changes as well as API design.
> While I don't expect the same degree of drastic changes in a 2.x feature
> release, I do think it'd make sense to increase the length of the release
> cycle so we can arrive at better designs.
>
> My strawman proposal is to maintain a regular release cadence, as we did in
> Spark 1.x, but increase the cycle from 3 months to 4 months. With one month
> still reserved for QA, this effectively gives us ~50% more time to develop
> (from ~2 months to ~3; in reality it'd be slightly less than 50%, since
> longer dev time also means longer QA time). As for maintenance releases, I
> think those should still be cut on-demand, as in Spark 1.x, but more
> aggressively.
>
> To put this into perspective, 4-month cycle means we will release Spark
> 2.1.0 at the end of Nov or early Dec (and branch cut / code freeze at the
> end of Oct).
>
> I am curious what others think.
