I created ZEPPELIN-3635 for dropping support for Spark versions before 1.6.
If you have any concerns, please comment on that JIRA.
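For context, the version cutoff being proposed could be enforced with a simple
minimum-version guard. The sketch below is purely illustrative; the
`SparkVersionGuard` class and its methods are hypothetical and not part of
Zeppelin's actual code:

```java
// Hypothetical sketch of a minimum-Spark-version guard (not Zeppelin's real API).
public class SparkVersionGuard {

    // Parse "major.minor" from a Spark version string such as "1.6.3".
    static int[] parse(String version) {
        String[] parts = version.split("\\.");
        return new int[] {Integer.parseInt(parts[0]), Integer.parseInt(parts[1])};
    }

    // Returns true if the given Spark version is 1.6 or newer.
    static boolean isSupported(String version) {
        int[] v = parse(version);
        return v[0] > 1 || (v[0] == 1 && v[1] >= 6);
    }

    public static void main(String[] args) {
        System.out.println(isSupported("1.5.2")); // false
        System.out.println(isSupported("1.6.3")); // true
        System.out.println(isSupported("2.3.1")); // true
    }
}
```

Such a check could fail fast at interpreter startup instead of relying on
untested legacy code paths for old Spark releases.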



Clemens Valiente <clemens.valie...@trivago.com> wrote on Tue, Jul 17, 2018 at 4:05 PM:

> As far as I know, the Cloudera distribution of Hadoop still comes with
> Spark 1.6 out of the box, so I believe quite a few people are still
> stuck on it.
>
> On Tue, 2018-07-17 at 10:40 +0900, Jongyoul Lee wrote:
>
> I think the current release is good enough for Spark 1.6.x. For future
> releases, it would be better to focus on 2.x only.
>
> As for versions older than 1.6, I fully agree; personally, I think we
> should drop them.
>
> On Tue, Jul 17, 2018 at 10:16 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
>
> This might be a little risky, but it depends on how many people still use
> Spark 1.6. At the very least, I would suggest dropping support for any
> Spark release before 1.6. There is a lot of legacy code in Zeppelin to
> support very old versions of Spark (e.g. 1.5, 1.4). We don't even have a
> Travis job for any Spark before 1.6, so we don't know whether that legacy
> code still works. Maintaining it is an extra effort for the community, so
> I would suggest dropping support for Spark before 1.6 at a minimum.
>
>
> Jongyoul Lee <jongy...@gmail.com> wrote on Tue, Jul 17, 2018 at 9:10 AM:
>
> Hi,
>
> Today, I found that the Apache Spark 1.6.3 distribution had been removed
> from the Apache CDN. We can still get the link for 1.6.3 from the Apache
> Spark site, but it is only available for download from the Apache archive.
> I'm not sure how many people still use Spark 1.6.3 with Apache Zeppelin,
> but in my opinion this means Spark 1.6.3 is no longer active.
>
> As far as I know, regarding which versions of Apache Spark we support, we
> have so far followed Spark's own policy.
>
> I suggest that we also remove Spark 1.6.3 from the officially supported
> versions for the next Apache Zeppelin major release - 0.9.0 or 1.0.0. If
> we could focus on supporting Spark 2.x only, we could make the
> SparkInterpreter implementation more solid.
>
> WDYT?
>
> Best regards,
> JL
>
> --
> 이종열, Jongyoul Lee, 李宗烈
> http://madeng.net
>
