Re: MLPC model can not be saved

2016-03-24 Thread HanPan
Hi Alexander,

 

 Thanks for your reply. The pull request shows that
MultilayerPerceptronClassifier implements the default params writable
interface. I will try that.

 

Thanks

Pan

 

From: Ulanov, Alexander [mailto:alexander.ula...@hpe.com] 
Sent: March 22, 2016 1:38
To: HanPan; dev@spark.apache.org
Subject: RE: MLPC model can not be saved

 

Hi Pan,

 

There is a pull request that is supposed to fix the issue:

https://github.com/apache/spark/pull/9854

 

There is a workaround for saving/loading a model (however, I am not sure if
it will work for the pipeline): 

sc.parallelize(Seq(model), 1).saveAsObjectFile("path")

val sameModel = sc.objectFile[YourCLASS]("path").first()
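A rough Java adaptation of the same idea, applied to just the MLPC stage of a
fitted pipeline, might look like the sketch below (illustration only, not from
the original thread: class, method, and path names are made up, and it assumes
the model serializes cleanly with plain Java serialization, as the object-file
workaround above already requires):

import java.util.Collections;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.Transformer;
import org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel;

public class MlpcObjectFileWorkaround {

  // Pull the MLPC stage out of the fitted pipeline and save it as an object file.
  public static void saveMlpcStage(JavaSparkContext jsc, PipelineModel pipelineModel, String path) {
    MultilayerPerceptronClassificationModel mlpc = null;
    for (Transformer stage : pipelineModel.stages()) {
      if (stage instanceof MultilayerPerceptronClassificationModel) {
        mlpc = (MultilayerPerceptronClassificationModel) stage;
      }
    }
    if (mlpc == null) {
      throw new IllegalArgumentException("Pipeline has no MLPC stage");
    }
    jsc.parallelize(Collections.singletonList(mlpc), 1).saveAsObjectFile(path);
  }

  // Read the stage back later for prediction; any other stages must be restored separately.
  public static MultilayerPerceptronClassificationModel loadMlpcStage(JavaSparkContext jsc, String path) {
    return (MultilayerPerceptronClassificationModel) jsc.objectFile(path).first();
  }
}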

 

 

Best regards, Alexander

 

From: HanPan [mailto:pa...@thinkingdata.cn] 
Sent: Sunday, March 20, 2016 8:32 PM
To: dev@spark.apache.org  
Cc: pa...@thinkingdata.cn  
Subject: MLPC model can not be saved

 

 

Hi Guys,

 

     I built an ML pipeline that includes a multilayer perceptron
classifier, and I got the following error message when I tried to save the
pipeline model. It seems the MLPC model cannot be saved, which means I have
no way to persist the trained model. Is there any way to save the model so
that I can use it for future prediction?

 

 Exception in thread "main" java.lang.UnsupportedOperationException: Pipeline write will fail on this Pipeline because it contains a stage which does not implement Writable. Non-Writable stage: mlpc_2d8b74f6da60 of type class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
    at org.apache.spark.ml.Pipeline$SharedReadWrite$$anonfun$validateStages$1.apply(Pipeline.scala:218)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$$anonfun$validateStages$1.apply(Pipeline.scala:215)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at org.apache.spark.ml.Pipeline$SharedReadWrite$.validateStages(Pipeline.scala:215)
    at org.apache.spark.ml.PipelineModel$PipelineModelWriter.<init>(Pipeline.scala:325)
    at org.apache.spark.ml.PipelineModel.write(Pipeline.scala:309)
    at org.apache.spark.ml.util.MLWritable$class.save(ReadWrite.scala:130)
    at org.apache.spark.ml.PipelineModel.save(Pipeline.scala:280)
    at cn.thinkingdata.nlp.spamclassifier.FFNNSpamClassifierPipeLine.main(FFNNSpamClassifierPipeLine.java:76)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
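For reference, a minimal sketch of the kind of code that hits this on Spark 1.6
(hypothetical class name, columns, and layer sizes; this is not the actual
FFNNSpamClassifierPipeLine source):

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier;
import org.apache.spark.sql.DataFrame;

public class MlpcPipelineSaveSketch {

  // "train" is assumed to already have "features" and "label" columns.
  public static void fitAndSave(DataFrame train) throws java.io.IOException {
    MultilayerPerceptronClassifier mlpc = new MultilayerPerceptronClassifier()
        .setLayers(new int[]{100, 50, 2})   // made-up layer sizes
        .setMaxIter(100);

    PipelineModel model = new Pipeline()
        .setStages(new PipelineStage[]{mlpc})
        .fit(train);

    // This is the call that throws UnsupportedOperationException, because the
    // fitted MLPC stage does not implement Writable.
    model.save("/tmp/spam-classifier-pipeline");
  }
}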

 

Thanks

Pan



[discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Reynold Xin
About a year ago we decided to drop Java 6 support in Spark 1.5. I am
wondering if we should also just drop Java 7 support in Spark 2.0 (i.e.
Spark 2.0 would require Java 8 to run).

Oracle ended public updates for JDK 7 about a year ago (Apr 2015), and
removed public downloads for JDK 7 in July 2015. In the past I've actually
been against dropping Java 7, but today I ran into an issue with the new
Dataset API not working well with Java 8 lambdas, and that changed my
opinion on this.

I've been thinking more about this issue today and also talked with a lot
of people offline to gather feedback, and I actually think the pros outweigh
the cons, for the following reasons (in some rough order of importance):

1. It is complicated to test how well Spark APIs work for Java lambdas if
we support Java 7. Jenkins machines need to have both Java 7 and Java 8
installed and we must run through a set of test suites in 7, and then the
lambda tests in Java 8. This complicates build environments/scripts, and
makes them less robust. Without good testing infrastructure, I have no
confidence in building good APIs for Java 8.

2. Dataset/DataFrame performance will be between 1x and 10x slower in Java
7. The primary APIs we want users to use in Spark 2.x are
Dataset/DataFrame, and this impacts pretty much everything from machine
learning to structured streaming. We have made great progress in their
performance through extensive use of code generation. (In many dimensions
Spark 2.0 with DataFrames/Datasets looks more like a compiler than a
MapReduce or query engine.) These optimizations don't work well in Java 7
due to broken code cache flushing. This problem has been fixed by Oracle in
Java 8. In addition, Java 8 comes with better support for Unsafe and SIMD.

3. Scala 2.12 will come out soon, and we will want to add support for that.
Scala 2.12 only works on Java 8. If we do support Java 7, we'd have a
fairly complicated compatibility matrix and testing infrastructure.

4. There are libraries that I've looked into in the past that support only
Java 8. This is more common in high performance libraries such as Aeron (a
messaging library). Having to support Java 7 means we are not able to use
these. It is not that big of a deal right now, but will become increasingly
more difficult as we optimize performance.


The downside of not supporting Java 7 is also obvious. Some organizations
are stuck with Java 7, and they wouldn't be able to use Spark 2.0 without
upgrading Java.
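To make point 1 above concrete, here is a minimal illustration (not from the
original message) of the Java 7 anonymous-class style versus the Java 8 lambda
style against the same Spark Java API; both would need to be covered by tests:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

public class LambdaStyles {

  // Java 7: the transformation has to be spelled out as an anonymous inner class.
  public static JavaRDD<Integer> doubledJava7(JavaRDD<Integer> nums) {
    return nums.map(new Function<Integer, Integer>() {
      @Override
      public Integer call(Integer x) {
        return x * 2;
      }
    });
  }

  // Java 8: the same transformation as a lambda; this is the style that needs
  // its own test pass on a Java 8 JVM.
  public static JavaRDD<Integer> doubledJava8(JavaRDD<Integer> nums) {
    return nums.map(x -> x * 2);
  }
}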


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Reynold Xin
One other benefit that I didn't mention is that we'd be able to use Java
8's Optional class to replace our built-in Optional.
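For context, a minimal example of java.util.Optional, which only exists on
Java 8 and later (illustration only; it does not show Spark's own built-in
Optional):

import java.util.Optional;

public class OptionalExample {

  // Wrap a possibly-null value and fall back to a default without explicit null checks.
  public static String displayName(String maybeName) {
    return Optional.ofNullable(maybeName)
        .map(String::trim)
        .orElse("anonymous");
  }
}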




RE: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Raymond Honderdors
Very good points

Going to support java 8 looks like a good direction
2.0 would be a good release to start with that

Raymond Honderdors
Team Lead Analytics BI
Business Intelligence Developer
raymond.honderd...@sizmek.com
T +972.7325.3569
Herzliya






Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Mridul Muralidharan
+1
Agree, dropping support for java 7 is long overdue - and 2.0 would be
a logical release to do this on.

Regards,
Mridul





Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Ram Sriharsha
+1, yes Java 7 has been end of life for a year now, 2.0 is a good time to
upgrade to Java 8




-- 
Ram Sriharsha
Architect, Spark and Data Science
Hortonworks, 2550 Great America Way, 2nd Floor
Santa Clara, CA 95054
Ph: 408-510-8635
email: har...@apache.org




Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Sean Owen
I generally favor this for the simplification. I didn't realize there
were actually some performance wins and important bug fixes.

I've had lots of trouble with scalac 2.10 + Java 8. I don't know if
it's still a problem since 2.11 + 8 seems OK, but for a long time the
sql/ modules would never compile in this config. If it's actually
required for 2.12, makes sense.

As ever my general stance is that nobody has to make a major-version
upgrade; Spark 1.6 does not stop working for those that need Java 7. I
also think it's reasonable for anyone to expect that major-version
upgrades require major-version dependency updates. Also remember that
not removing Java 7 support means committing to it here for a couple
more years. It's not just about the situation on release day.




Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Reynold Xin
I actually talked quite a bit with an engineer on the Scala compiler team
tonight, and the Scala 2.10 + Java 8 combo should be OK. The latest
Scala 2.10 release should have all the important fixes that are needed for
Java 8.



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Sean Owen
Maybe so; I think we have a ticket open to update to 2.10.6, which
maybe fixes it.

It brings up a different point: supporting multiple Scala versions is
much more painful than Java versions because of mutual
incompatibility. Right now I get the sense there's an intent to keep
supporting 2.10, 2.11, and 2.12 later in Spark 2. That seems like
considerably more trouble. In the same breath -- why not remove 2.10
support anyway? It's also EOL, 2.11 also brought big improvements,
etc.

On Thu, Mar 24, 2016 at 9:04 AM, Reynold Xin  wrote:
> I actually talked quite a bit with an engineer on the Scala compiler team
> tonight, and the Scala 2.10 + Java 8 combo should be OK. The latest
> Scala 2.10 release should have all the important fixes that are needed for
> Java 8.




Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Steve Loughran

> On 24 Mar 2016, at 07:27, Reynold Xin  wrote:
> 
> About a year ago we decided to drop Java 6 support in Spark 1.5. I am 
> wondering if we should also just drop Java 7 support in Spark 2.0 (i.e. Spark 
> 2.0 would require Java 8 to run).
> 
> Oracle ended public updates for JDK 7 about a year ago (Apr 2015), and removed 
> public downloads for JDK 7 in July 2015.

Still there, Jan 2016 was the last public one.

> In the past I've actually been against dropping Java 7, but today I ran into 
> an issue with the new Dataset API not working well with Java 8 lambdas, and 
> that changed my opinion on this.
> 
> I've been thinking more about this issue today and also talked with a lot of 
> people offline to gather feedback, and I actually think the pros outweigh 
> the cons, for the following reasons (in some rough order of importance):
> 
> 1. It is complicated to test how well Spark APIs work for Java lambdas if we 
> support Java 7. Jenkins machines need to have both Java 7 and Java 8 
> installed and we must run through a set of test suites in 7, and then the 
> lambda tests in Java 8. This complicates build environments/scripts, and 
> makes them less robust. Without good testing infrastructure, I have no 
> confidence in building good APIs for Java 8.

+complicates the test matrix for problems: if something works on java 8 and 
fails on java 7, is that a java 8 problem or a java 7 one?
+most developers would want to be on java 8 on their desktop if they could; the 
risk is that people accidentally code for java 8 even if they don't realise it 
just by using java 8 libraries, etc

> 
> 2. Dataset/DataFrame performance will be between 1x and 10x slower in Java 7. 
> The primary APIs we want users to use in Spark 2.x are Dataset/DataFrame, and 
> this impacts pretty much everything from machine learning to structured 
> streaming. We have made great progress in their performance through extensive 
> use of code generation. (In many dimensions Spark 2.0 with 
> DataFrames/Datasets looks more like a compiler than a MapReduce or query 
> engine.) These optimizations don't work well in Java 7 due to broken code 
> cache flushing. This problem has been fixed by Oracle in Java 8. In addition, 
> Java 8 comes with better support for Unsafe and SIMD.
> 
> 3. Scala 2.12 will come out soon, and we will want to add support for that. 
> Scala 2.12 only works on Java 8. If we do support Java 7, we'd have a fairly 
> complicated compatibility matrix and testing infrastructure.
> 
> 4. There are libraries that I've looked into in the past that support only 
> Java 8. This is more common in high performance libraries such as Aeron (a 
> messaging library). Having to support Java 7 means we are not able to use 
> these. It is not that big of a deal right now, but will become increasingly 
> more difficult as we optimize performance.
> 
> 
> The downside of not supporting Java 7 is also obvious. Some organizations are 
> stuck with Java 7, and they wouldn't be able to use Spark 2.0 without 
> upgrading Java.
> 


One thing you have to consider here is: will the organisations that don't want 
to upgrade to Java 8 want to be upgrading to Spark 2.0 anyway?

If there is a price, it means all apps that use any remote Spark APIs will also 
have to be Java 8. Something like a REST API is less of an issue, but anything 
loading a JAR in the group org.apache.spark will have to be Java 8+. That's 
what held Hadoop back on Java 7 in 2015: Twitter made the case that it 
shouldn't be the Hadoop cluster forcing them to upgrade all their client apps 
just to use the IPC and filesystem code. I don't believe that's so much of a 
constraint on Spark.

Finally, Java 8 lines you up better for worrying about Java 9, which is on the 
horizon.




Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Jean-Baptiste Onofré
+1 to support Java 8 (and future) *only* in Spark 2.0, and end support 
of Java 7. It makes sense.


Regards
JB





--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com




Re: Spark 1.6.1 Hadoop 2.6 package on S3 corrupt?

2016-03-24 Thread Nicholas Chammas
Just checking in on this again as the builds on S3 are still broken. :/

Could it have something to do with us moving release-build.sh?

On Mon, Mar 21, 2016 at 1:43 PM Nicholas Chammas 
wrote:

> Is someone going to retry fixing these packages? It's still a problem.
>
> Also, it would be good to understand why this is happening.
>
> On Fri, Mar 18, 2016 at 6:49 PM Jakob Odersky  wrote:
>
>> I just realized you're using a different download site. Sorry for the
>> confusion, the link I get for a direct download of Spark 1.6.1 /
>> Hadoop 2.6 is
>> http://d3kbcqa49mib13.cloudfront.net/spark-1.6.1-bin-hadoop2.6.tgz
>>
>> On Fri, Mar 18, 2016 at 3:20 PM, Nicholas Chammas
>>  wrote:
>> > I just retried the Spark 1.6.1 / Hadoop 2.6 download and got a corrupt ZIP
>> > file.
>> >
>> > Jakob, are you sure the ZIP unpacks correctly for you? Is it the same Spark
>> > 1.6.1/Hadoop 2.6 package you had a success with?
>> >
>> > On Fri, Mar 18, 2016 at 6:11 PM Jakob Odersky 
>> wrote:
>> >>
>> >> I just experienced the issue, however retrying the download a second
>> >> time worked. Could it be that there is some load balancer/cache in
>> >> front of the archive and some nodes still serve the corrupt packages?
>> >>
>> >> On Fri, Mar 18, 2016 at 8:00 AM, Nicholas Chammas
>> >>  wrote:
>> >> > I'm seeing the same. :(
>> >> >
>> >> > On Fri, Mar 18, 2016 at 10:57 AM Ted Yu  wrote:
>> >> >>
>> >> >> I tried again this morning :
>> >> >>
>> >> >> $ wget
>> >> >>
>> >> >>
>> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.6.tgz
>> >> >> --2016-03-18 07:55:30--
>> >> >>
>> >> >>
>> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.6.tgz
>> >> >> Resolving s3.amazonaws.com... 54.231.19.163
>> >> >> ...
>> >> >> $ tar zxf spark-1.6.1-bin-hadoop2.6.tgz
>> >> >>
>> >> >> gzip: stdin: unexpected end of file
>> >> >> tar: Unexpected EOF in archive
>> >> >> tar: Unexpected EOF in archive
>> >> >> tar: Error is not recoverable: exiting now
>> >> >>
>> >> >> On Thu, Mar 17, 2016 at 8:57 AM, Michael Armbrust
>> >> >> 
>> >> >> wrote:
>> >> >>>
>> >> >>> Patrick reuploaded the artifacts, so it should be fixed now.
>> >> >>>
>> >> >>> On Mar 16, 2016 5:48 PM, "Nicholas Chammas"
>> >> >>> 
>> >> >>> wrote:
>> >> 
>> >>  Looks like the other packages may also be corrupt. I’m getting the
>> >>  same
>> >>  error for the Spark 1.6.1 / Hadoop 2.4 package.
>> >> 
>> >> 
>> >> 
>> >> 
>> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.4.tgz
>> >> 
>> >>  Nick
>> >> 
>> >> 
>> >>  On Wed, Mar 16, 2016 at 8:28 PM Ted Yu 
>> wrote:
>> >> >
>> >> > On Linux, I got:
>> >> >
>> >> > $ tar zxf spark-1.6.1-bin-hadoop2.6.tgz
>> >> >
>> >> > gzip: stdin: unexpected end of file
>> >> > tar: Unexpected EOF in archive
>> >> > tar: Unexpected EOF in archive
>> >> > tar: Error is not recoverable: exiting now
>> >> >
>> >> > On Wed, Mar 16, 2016 at 5:15 PM, Nicholas Chammas
>> >> >  wrote:
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.6.tgz
>> >> >>
>> >> >> Does anyone else have trouble unzipping this? How did this
>> happen?
>> >> >>
>> >> >> What I get is:
>> >> >>
>> >> >> $ gzip -t spark-1.6.1-bin-hadoop2.6.tgz
>> >> >> gzip: spark-1.6.1-bin-hadoop2.6.tgz: unexpected end of file
>> >> >> gzip: spark-1.6.1-bin-hadoop2.6.tgz: uncompress failed
>> >> >>
>> >> >> Seems like a strange type of problem to come across.
>> >> >>
>> >> >> Nick
>> >> >
>> >> >
>> >> >>
>> >> >
>>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Koert Kuipers
i think the arguments are convincing, but it also makes me wonder if i live
in some kind of alternate universe... we deploy on customers clusters,
where the OS, python version, java version and hadoop distro are not chosen
by us. so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply
have access to a single proxy machine and launch through yarn. asking them
to upgrade java is pretty much out of the question or a 6+ month ordeal. of
the 10 client clusters i can think of on the top of my head all of them are
on java 7, none are on java 8. so by doing this you would make spark 2
basically unusable for us (unless most of them have plans of upgrading in
near term to java 8, i will ask around and report back...).

on a side note, its particularly interesting to me that spark 2 chose to
continue support for scala 2.10, because even for us in our very
constricted client environments the scala version is something we can
easily upgrade (we just deploy a custom build of spark for the relevant
scala version and hadoop distro). and because scala is not a dependency of
any hadoop distro (so not on classpath, which i am very happy about) we can
use whatever scala version we like. also i found the upgrade path from
scala 2.10 to 2.11 to be very easy, so i have a hard time understanding why
anyone would stay on scala 2.10. and finally with scala 2.12 around the
corner you really don't want to be supporting 3 versions. so clearly i am
missing something here.





Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Sean Owen
(PS CDH5 runs fine with Java 8, but I understand your more general point.)

This is a familiar context indeed, but in that context, would a group
not wanting to update to Java 8 want to manually put Spark 2.0 into
the mix? That is, if this is a context where the cluster is
purposefully some stable mix of components, would you be updating just
one?

You make a good point about Scala being more library than
infrastructure component. So it can be updated on a per-app basis. On
the one hand it's harder to handle different Scala versions from the
framework side, it's less hard on the deployment side.


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Al Pivonka
As an end user (developer) and Cluster Admin.
I would have to agree with Koert.

To me the real question is timing: the current version is 1.6.1, and the
question I have is how many more releases there will be till 2.0, and what
is the time frame?

If you give people six to twelve months to plan and make sure they know
(paste it all over the web site), most can plan ahead.


Just my two pennies






Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Koert Kuipers
The group will not upgrade to spark 2.0 themselves, but they are mostly
fine with vendors like us deploying our application via yarn with whatever
spark version we choose (and bundle, so they do not install it separately,
they might not even be aware of what spark version we use). This all works
because spark does not need to be on the cluster nodes, just on the one
machine where our application gets launched. Having yarn is pretty awesome
in this respect.


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Jean-Baptiste Onofré

Hi Al,

Spark 2.0 doesn't mean Spark 1.x will stop. Clearly, new features will 
go on Spark 2.0, but maintenance releases can be performed on the 1.x branch.


Regards
JB

On 03/24/2016 05:38 PM, Al Pivonka wrote:

As an end user (developer) and Cluster Admin.
I would have to agree with Koert.

To me the real question is timing: the current version is 1.6.1, so the
question I have is how many more releases there will be till 2.0, and what is
the time frame?

If you give people six to twelve months to plan and make sure they know
(paste it all over the web site) most can plan ahead.


Just my two pennies





On Thu, Mar 24, 2016 at 12:25 PM, Sean Owen <so...@cloudera.com> wrote:

(PS CDH5 runs fine with Java 8, but I understand your more general
point.)

This is a familiar context indeed, but in that context, would a group
not wanting to update to Java 8 want to manually put Spark 2.0 into
the mix? That is, if this is a context where the cluster is
purposefully some stable mix of components, would you be updating just
one?

You make a good point about Scala being more library than
infrastructure component. So it can be updated on a per-app basis. On
the one hand it's harder to handle different Scala versions from the
framework side, it's less hard on the deployment side.

On Thu, Mar 24, 2016 at 4:27 PM, Koert Kuipers <ko...@tresata.com> wrote:
 > i think the arguments are convincing, but it also makes me wonder
if i live
 > in some kind of alternate universe... we deploy on customers
clusters, where
 > the OS, python version, java version and hadoop distro are not
chosen by us.
 > so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we
simply have
 > access to a single proxy machine and launch through yarn. asking
them to
 > upgrade java is pretty much out of the question or a 6+ month
ordeal. of the
 > 10 client clusters i can think of on the top of my head all of
them are on
 > java 7, none are on java 8. so by doing this you would make spark 2
 > basically unusable for us (unless most of them have plans of
upgrading in
 > near term to java 8, i will ask around and report back...).
 >
 > on a side note, its particularly interesting to me that spark 2
chose to
 > continue support for scala 2.10, because even for us in our very
constricted
 > client environments the scala version is something we can easily
upgrade (we
 > just deploy a custom build of spark for the relevant scala
version and
 > hadoop distro). and because scala is not a dependency of any
hadoop distro
 > (so not on classpath, which i am very happy about) we can use
whatever scala
 > version we like. also i found the upgrade path from scala 2.10 to
2.11 to be
 > very easy, so i have a hard time understanding why anyone would
stay on
 > scala 2.10. and finally with scala 2.12 around the corner you
really dont
 > want to be supporting 3 versions. so clearly i am missing
something here.
 >
 >
 >
 > On Thu, Mar 24, 2016 at 8:52 AM, Jean-Baptiste Onofré
<j...@nanthrax.net>
 > wrote:
 >>
 >> +1 to support Java 8 (and future) *only* in Spark 2.0, and end
support of
 >> Java 7. It makes sense.
 >>
 >> Regards
 >> JB
 >>
 >>
 >> On 03/24/2016 08:27 AM, Reynold Xin wrote:
 >>>
 >>> About a year ago we decided to drop Java 6 support in Spark
1.5. I am
 >>> wondering if we should also just drop Java 7 support in Spark
2.0 (i.e.
 >>> Spark 2.0 would require Java 8 to run).
 >>>
 >>> Oracle ended public updates for JDK 7 in one year ago (Apr
2015), and
 >>> removed public downloads for JDK 7 in July 2015. In the past I've
 >>> actually been against dropping Java 8, but today I ran into an
issue
 >>> with the new Dataset API not working well with Java 8 lambdas,
and that
 >>> changed my opinion on this.
 >>>
 >>> I've been thinking more about this issue today and also talked
with a
 >>> lot people offline to gather feedback, and I actually think the
pros
 >>> outweighs the cons, for the following reasons (in some rough
order of
 >>> importance):
 >>>
 >>> 1. It is complicated to test how well Spark APIs work for Java
lambdas
 >>> if we support Java 7. Jenkins machines need to have both Java 7
and Java
 >>> 8 installed and we must run through a set of test suites in 7,
and then
 >>> the lambda tests in Java 8. This complicates build
environments/scripts,
 >>> and makes them less robust. Without good testing
infrastructure, I have
 >>> no confidence in building good APIs for Java 8.
 >>>
 >>> 2. Dataset/DataFrame performance will be between 1x to 10x
slower in
 >>> Java 7. The primary APIs we want users to use in Spark 2.x are
 >>> Data

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Al Pivonka
Thank you for the context Jean...
I appreciate it...


On Thu, Mar 24, 2016 at 12:40 PM, Jean-Baptiste Onofré 
wrote:

> Hi Al,
>
> Spark 2.0 doesn't mean Spark 1.x will stop. Clearly, new features will go
> on Spark 2.0, but maintenance releases can be performed on the 1.x branch.
>
> Regards
> JB
>
> On 03/24/2016 05:38 PM, Al Pivonka wrote:
>
>> As an end user (developer) and Cluster Admin.
>> I would have to agree with Koert.
>>
>> To me the real question is timing,  current version is 1.6.1, the
>> question I have is how many more releases till 2.0 and what is the time
>> frame?
>>
>> If you give people six to twelve months to plan and make sure they know
>> (paste it all over the web site) most can plan ahead.
>>
>>
>> Just my two pennies
>>
>>
>>
>>
>>
>> On Thu, Mar 24, 2016 at 12:25 PM, Sean Owen > > wrote:
>>
>> (PS CDH5 runs fine with Java 8, but I understand your more general
>> point.)
>>
>> This is a familiar context indeed, but in that context, would a group
>> not wanting to update to Java 8 want to manually put Spark 2.0 into
>> the mix? That is, if this is a context where the cluster is
>> purposefully some stable mix of components, would you be updating just
>> one?
>>
>> You make a good point about Scala being more library than
>> infrastructure component. So it can be updated on a per-app basis. On
>> the one hand it's harder to handle different Scala versions from the
>> framework side, it's less hard on the deployment side.
>>
>> On Thu, Mar 24, 2016 at 4:27 PM, Koert Kuipers > > wrote:
>>  > i think the arguments are convincing, but it also makes me wonder
>> if i live
>>  > in some kind of alternate universe... we deploy on customers
>> clusters, where
>>  > the OS, python version, java version and hadoop distro are not
>> chosen by us.
>>  > so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we
>> simply have
>>  > access to a single proxy machine and launch through yarn. asking
>> them to
>>  > upgrade java is pretty much out of the question or a 6+ month
>> ordeal. of the
>>  > 10 client clusters i can think of on the top of my head all of
>> them are on
>>  > java 7, none are on java 8. so by doing this you would make spark 2
>>  > basically unusable for us (unless most of them have plans of
>> upgrading in
>>  > near term to java 8, i will ask around and report back...).
>>  >
>>  > on a side note, its particularly interesting to me that spark 2
>> chose to
>>  > continue support for scala 2.10, because even for us in our very
>> constricted
>>  > client environments the scala version is something we can easily
>> upgrade (we
>>  > just deploy a custom build of spark for the relevant scala
>> version and
>>  > hadoop distro). and because scala is not a dependency of any
>> hadoop distro
>>  > (so not on classpath, which i am very happy about) we can use
>> whatever scala
>>  > version we like. also i found the upgrade path from scala 2.10 to
>> 2.11 to be
>>  > very easy, so i have a hard time understanding why anyone would
>> stay on
>>  > scala 2.10. and finally with scala 2.12 around the corner you
>> really dont
>>  > want to be supporting 3 versions. so clearly i am missing
>> something here.
>>  >
>>  >
>>  >
>>  > On Thu, Mar 24, 2016 at 8:52 AM, Jean-Baptiste Onofré
>> <j...@nanthrax.net>
>>
>>  > wrote:
>>  >>
>>  >> +1 to support Java 8 (and future) *only* in Spark 2.0, and end
>> support of
>>  >> Java 7. It makes sense.
>>  >>
>>  >> Regards
>>  >> JB
>>  >>
>>  >>
>>  >> On 03/24/2016 08:27 AM, Reynold Xin wrote:
>>  >>>
>>  >>> About a year ago we decided to drop Java 6 support in Spark
>> 1.5. I am
>>  >>> wondering if we should also just drop Java 7 support in Spark
>> 2.0 (i.e.
>>  >>> Spark 2.0 would require Java 8 to run).
>>  >>>
>>  >>> Oracle ended public updates for JDK 7 in one year ago (Apr
>> 2015), and
>>  >>> removed public downloads for JDK 7 in July 2015. In the past I've
>>  >>> actually been against dropping Java 8, but today I ran into an
>> issue
>>  >>> with the new Dataset API not working well with Java 8 lambdas,
>> and that
>>  >>> changed my opinion on this.
>>  >>>
>>  >>> I've been thinking more about this issue today and also talked
>> with a
>>  >>> lot people offline to gather feedback, and I actually think the
>> pros
>>  >>> outweighs the cons, for the following reasons (in some rough
>> order of
>>  >>> importance):
>>  >>>
>>  >>> 1. It is complicated to test how well Spark APIs work for Java
>> lambdas
>>  >>> if we support Java 7. Jenkins machines need to have both Java 7
>> and Java
>>

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Koert Kuipers
i guess what i am saying is that in a yarn world the only hard restrictions
left are the containers you run in, which means the hadoop version,
java version and python version (if you use python).


On Thu, Mar 24, 2016 at 12:39 PM, Koert Kuipers  wrote:

> The group will not upgrade to spark 2.0 themselves, but they are mostly
> fine with vendors like us deploying our application via yarn with whatever
> spark version we choose (and bundle, so they do not install it separately,
> they might not even be aware of what spark version we use). This all works
> because spark does not need to be on the cluster nodes, just on the one
> machine where our application gets launched. Having yarn is pretty awesome
> in this respect.
>
> On Thu, Mar 24, 2016 at 12:25 PM, Sean Owen  wrote:
>
>> (PS CDH5 runs fine with Java 8, but I understand your more general point.)
>>
>> This is a familiar context indeed, but in that context, would a group
>> not wanting to update to Java 8 want to manually put Spark 2.0 into
>> the mix? That is, if this is a context where the cluster is
>> purposefully some stable mix of components, would you be updating just
>> one?
>>
>> You make a good point about Scala being more library than
>> infrastructure component. So it can be updated on a per-app basis. On
>> the one hand it's harder to handle different Scala versions from the
>> framework side, it's less hard on the deployment side.
>>
>> On Thu, Mar 24, 2016 at 4:27 PM, Koert Kuipers  wrote:
>> > i think the arguments are convincing, but it also makes me wonder if i
>> live
>> > in some kind of alternate universe... we deploy on customers clusters,
>> where
>> > the OS, python version, java version and hadoop distro are not chosen
>> by us.
>> > so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply
>> have
>> > access to a single proxy machine and launch through yarn. asking them to
>> > upgrade java is pretty much out of the question or a 6+ month ordeal.
>> of the
>> > 10 client clusters i can think of on the top of my head all of them are
>> on
>> > java 7, none are on java 8. so by doing this you would make spark 2
>> > basically unusable for us (unless most of them have plans of upgrading
>> in
>> > near term to java 8, i will ask around and report back...).
>> >
>> > on a side note, its particularly interesting to me that spark 2 chose to
>> > continue support for scala 2.10, because even for us in our very
>> constricted
>> > client environments the scala version is something we can easily
>> upgrade (we
>> > just deploy a custom build of spark for the relevant scala version and
>> > hadoop distro). and because scala is not a dependency of any hadoop
>> distro
>> > (so not on classpath, which i am very happy about) we can use whatever
>> scala
>> > version we like. also i found the upgrade path from scala 2.10 to 2.11
>> to be
>> > very easy, so i have a hard time understanding why anyone would stay on
>> > scala 2.10. and finally with scala 2.12 around the corner you really
>> dont
>> > want to be supporting 3 versions. so clearly i am missing something
>> here.
>> >
>> >
>> >
>> > On Thu, Mar 24, 2016 at 8:52 AM, Jean-Baptiste Onofré 
>> > wrote:
>> >>
>> >> +1 to support Java 8 (and future) *only* in Spark 2.0, and end support
>> of
>> >> Java 7. It makes sense.
>> >>
>> >> Regards
>> >> JB
>> >>
>> >>
>> >> On 03/24/2016 08:27 AM, Reynold Xin wrote:
>> >>>
>> >>> About a year ago we decided to drop Java 6 support in Spark 1.5. I am
>> >>> wondering if we should also just drop Java 7 support in Spark 2.0
>> (i.e.
>> >>> Spark 2.0 would require Java 8 to run).
>> >>>
>> >>> Oracle ended public updates for JDK 7 in one year ago (Apr 2015), and
>> >>> removed public downloads for JDK 7 in July 2015. In the past I've
>> >>> actually been against dropping Java 8, but today I ran into an issue
>> >>> with the new Dataset API not working well with Java 8 lambdas, and
>> that
>> >>> changed my opinion on this.
>> >>>
>> >>> I've been thinking more about this issue today and also talked with a
>> >>> lot people offline to gather feedback, and I actually think the pros
>> >>> outweighs the cons, for the following reasons (in some rough order of
>> >>> importance):
>> >>>
>> >>> 1. It is complicated to test how well Spark APIs work for Java lambdas
>> >>> if we support Java 7. Jenkins machines need to have both Java 7 and
>> Java
>> >>> 8 installed and we must run through a set of test suites in 7, and
>> then
>> >>> the lambda tests in Java 8. This complicates build
>> environments/scripts,
>> >>> and makes them less robust. Without good testing infrastructure, I
>> have
>> >>> no confidence in building good APIs for Java 8.
>> >>>
>> >>> 2. Dataset/DataFrame performance will be between 1x to 10x slower in
>> >>> Java 7. The primary APIs we want users to use in Spark 2.x are
>> >>> Dataset/DataFrame, and this impacts pretty much everything from
>> machine
>> >>> learning to structured streaming. We have made great pr

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Marcelo Vanzin
On Thu, Mar 24, 2016 at 1:04 AM, Reynold Xin  wrote:
> I actually talked quite a bit today with an engineer on the scala compiler
> team tonight and the scala 2.10 + java 8 combo should be ok. The latest
> Scala 2.10 release should have all the important fixes that are needed for
> Java 8.

So, do you actually get the benefits you're looking for without
compiling explicitly to the 1.8 jvm? Because:

$ scala -version
Scala code runner version 2.10.6 -- Copyright 2002-2013, LAMP/EPFL
$ scalac -target jvm-1.8
scalac error: Usage: -target:<target>
 where <target> choices are jvm-1.5, jvm-1.5-fjbg, jvm-1.5-asm,
jvm-1.6, jvm-1.7, msil (default: jvm-1.6)

So even if you use jdk 8 to compile with scala 2.10, you can't target
jvm 1.8 as far as I can tell.

-- 
Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Reynold Xin
Yes

On Thursday, March 24, 2016, Marcelo Vanzin  wrote:

> On Thu, Mar 24, 2016 at 1:04 AM, Reynold Xin  > wrote:
> > I actually talked quite a bit today with an engineer on the scala
> compiler
> > team tonight and the scala 2.10 + java 8 combo should be ok. The latest
> > Scala 2.10 release should have all the important fixes that are needed
> for
> > Java 8.
>
> So, do you actually get the benefits you're looking for without
> compiling explicitly to the 1.8 jvm? Because:
>
> $ scala -version
> Scala code runner version 2.10.6 -- Copyright 2002-2013, LAMP/EPFL
> $ scalac -target jvm-1.8
> scalac error: Usage: -target:<target>
>  where <target> choices are jvm-1.5, jvm-1.5-fjbg, jvm-1.5-asm,
> jvm-1.6, jvm-1.7, msil (default: jvm-1.6)
>
> So even if you use jdk 8 to compile with scala 2.10, you can't target
> jvm 1.8 as far as I can tell.
>
> --
> Marcelo
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Marcelo Vanzin
On Thu, Mar 24, 2016 at 9:54 AM, Koert Kuipers  wrote:
> i guess what i am saying is that in a yarn world the only hard restrictions
> left are the containers you run in, which means the hadoop version, java
> version and python version (if you use python).

It is theoretically possible to run containers with a different JDK
than the NM (I've done it for testing), although I'm not sure whether
that's recommended from YARN's perspective.

But I understand your concern is that you're not allowed to modify the
machines where the NMs are hosted. You could hack things and
distribute the JVM with your Spark application, but that would be
incredibly ugly.

-- 
Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Marcelo Vanzin
On Thu, Mar 24, 2016 at 10:13 AM, Reynold Xin  wrote:
> Yes

So is it safe to say the only hard requirement for Java 8 in your list is (4)?

(1) and (3) are infrastructure issues. Yes, it sucks to maintain more
testing infrastructure and potentially more complicated build scripts,
but does that really outweigh maintaining support for Java 7?

A cheap hack would also be to require jdk 1.8 for the build, but still
target java 7. You could then isolate java 8 tests in a separate
module that will get run in all builds because of that requirement.
There are downsides, of course: it's basically the same situation we
were in when we still supported Java 6 but were using jdk 1.7 to build
things. Setting the proper bootclasspath to use jdk 7's rt.jar during
compilation could solve a lot of those. (We already have both JDKs in
jenkins machines as far as I can tell.)
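
To sketch what that cheap hack looks like with plain javac (the Maven compiler
plugin accepts the same -source/-target/-bootclasspath settings; the JDK
install paths below are placeholders for wherever the two JDKs actually live,
and Foo.java is any trivial source file):

$ /opt/jdk1.8.0/bin/javac \
    -source 1.7 -target 1.7 \
    -bootclasspath /opt/jdk1.7.0/jre/lib/rt.jar \
    Foo.java                                 # compiled by JDK 8, targets Java 7
$ javap -verbose Foo | grep 'major version'
  major version: 51

With jdk 7's rt.jar on the bootclasspath, accidental use of Java 8-only APIs
fails at compile time instead of at runtime on a Java 7 cluster.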

For Scala 2.12, an option might be dropping Java 7 when we decide to
add support for that (unless you're also suggesting Scala 2.12 as part
of 2.0?).

For (2) it seems the jvm used to compile things doesn't really make a
difference. It could be as simple as "we strongly recommend running
Spark 2.0 on Java 8".

Note I'm not for or against the change per se; I'd like to see more
data about what users are really using out there before making that
decision. But there was an explicit desire to maintain java 7
compatibility when we talked about going for Spark 2.0. And with those
kinds of decisions there's always a cost, including spending more
resources on infrastructure and testing.

-- 
Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Steve Loughran

On 24 Mar 2016, at 15:27, Koert Kuipers <ko...@tresata.com> wrote:

i think the arguments are convincing, but it also makes me wonder if i live in 
some kind of alternate universe... we deploy on customers clusters, where the 
OS, python version, java version and hadoop distro are not chosen by us. so 
think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply have access 
to a single proxy machine and launch through yarn. asking them to upgrade java 
is pretty much out of the question or a 6+ month ordeal. of the 10 client 
clusters i can think of on the top of my head all of them are on java 7, none 
are on java 8. so by doing this you would make spark 2 basically unusable for 
us (unless most of them have plans of upgrading in near term to java 8, i will 
ask around and report back...).


It's not actually mandatory for the process executing in the Yarn cluster to 
run with the same JVM as the rest of the Hadoop stack; all that is needed is 
for the environment variables to set up JAVA_HOME and PATH. Switching JVMs is 
not something which YARN makes easy to do, but it may be possible, especially 
if Spark itself provides some hooks, so you don't have to manually play around 
with setting things up. That may be something which could significantly ease 
adoption of Spark 2 in YARN clusters. Same for Python.

This is something I could probably help others to address
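
As a sketch of what that could look like per application today, assuming JDK 8
is already unpacked at /opt/jdk1.8.0 on every NodeManager host (the path, class
and jar names are placeholders), the per-application environment overrides
(spark.yarn.appMasterEnv.* and spark.executorEnv.*) can point the containers at
a different JVM:

$ spark-submit \
    --master yarn --deploy-mode cluster \
    --conf spark.yarn.appMasterEnv.JAVA_HOME=/opt/jdk1.8.0 \
    --conf spark.executorEnv.JAVA_HOME=/opt/jdk1.8.0 \
    --class com.example.MyApp my-app.jar

The rest of the cluster keeps running its stock JDK; only this application's
containers pick up the newer one.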



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Stephen Boesch
+1 for java8 only.  +1 for 2.11+ only.  At this point scala libraries
supporting only 2.10 are typically less active and/or poorly maintained.
That trend will only continue when considering the lifespan of spark 2.X.

2016-03-24 11:32 GMT-07:00 Steve Loughran :

>
> On 24 Mar 2016, at 15:27, Koert Kuipers  wrote:
>
> i think the arguments are convincing, but it also makes me wonder if i
> live in some kind of alternate universe... we deploy on customers clusters,
> where the OS, python version, java version and hadoop distro are not chosen
> by us. so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply
> have access to a single proxy machine and launch through yarn. asking them
> to upgrade java is pretty much out of the question or a 6+ month ordeal. of
> the 10 client clusters i can think of on the top of my head all of them are
> on java 7, none are on java 8. so by doing this you would make spark 2
> basically unusable for us (unless most of them have plans of upgrading in
> near term to java 8, i will ask around and report back...).
>
>
>
> It's not actually mandatory for the process executing in the Yarn cluster
> to run with the same JVM as the rest of the Hadoop stack; all that is
> needed is for the environment variables to set up the JAVA_HOME and PATH.
> Switching JVMs not something which YARN makes it easy to do, but it may be
> possible, especially if Spark itself provides some hooks, so you don't have
> to manually lay with setting things up. That may be something which could
> significantly ease adoption of Spark 2 in YARN clusters. Same for Python.
>
> This is something I could probably help others to address
>
>


Re: Spark 1.6.1 Hadoop 2.6 package on S3 corrupt?

2016-03-24 Thread Michael Armbrust
Patrick is investigating.
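
In the meantime, one way to tell a flaky download apart from a genuinely bad
upload is to check the archive against a published checksum before untarring.
A sketch, assuming the usual .md5 sidecar is published next to the release
artifact (the archive.apache.org URL is illustrative):

$ wget https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.6.tgz
$ wget https://archive.apache.org/dist/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz.md5
$ md5sum spark-1.6.1-bin-hadoop2.6.tgz       # compare against the value in the .md5 file
$ gzip -t spark-1.6.1-bin-hadoop2.6.tgz && echo OK   # fails fast on a truncated archive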

On Thu, Mar 24, 2016 at 7:25 AM, Nicholas Chammas <
nicholas.cham...@gmail.com> wrote:

> Just checking in on this again as the builds on S3 are still broken. :/
>
> Could it have something to do with us moving release-build.sh
> 
> ?
> ​
>
> On Mon, Mar 21, 2016 at 1:43 PM Nicholas Chammas <
> nicholas.cham...@gmail.com> wrote:
>
>> Is someone going to retry fixing these packages? It's still a problem.
>>
>> Also, it would be good to understand why this is happening.
>>
>> On Fri, Mar 18, 2016 at 6:49 PM Jakob Odersky  wrote:
>>
>>> I just realized you're using a different download site. Sorry for the
>>> confusion, the link I get for a direct download of Spark 1.6.1 /
>>> Hadoop 2.6 is
>>> http://d3kbcqa49mib13.cloudfront.net/spark-1.6.1-bin-hadoop2.6.tgz
>>>
>>> On Fri, Mar 18, 2016 at 3:20 PM, Nicholas Chammas
>>>  wrote:
>>> > I just retried the Spark 1.6.1 / Hadoop 2.6 download and got a corrupt
>>> ZIP
>>> > file.
>>> >
>>> > Jakob, are you sure the ZIP unpacks correctly for you? Is it the same
>>> Spark
>>> > 1.6.1/Hadoop 2.6 package you had a success with?
>>> >
>>> > On Fri, Mar 18, 2016 at 6:11 PM Jakob Odersky 
>>> wrote:
>>> >>
>>> >> I just experienced the issue, however retrying the download a second
>>> >> time worked. Could it be that there is some load balancer/cache in
>>> >> front of the archive and some nodes still serve the corrupt packages?
>>> >>
>>> >> On Fri, Mar 18, 2016 at 8:00 AM, Nicholas Chammas
>>> >>  wrote:
>>> >> > I'm seeing the same. :(
>>> >> >
>>> >> > On Fri, Mar 18, 2016 at 10:57 AM Ted Yu 
>>> wrote:
>>> >> >>
>>> >> >> I tried again this morning :
>>> >> >>
>>> >> >> $ wget
>>> >> >>
>>> >> >>
>>> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.6.tgz
>>> >> >> --2016-03-18 07:55:30--
>>> >> >>
>>> >> >>
>>> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.6.tgz
>>> >> >> Resolving s3.amazonaws.com... 54.231.19.163
>>> >> >> ...
>>> >> >> $ tar zxf spark-1.6.1-bin-hadoop2.6.tgz
>>> >> >>
>>> >> >> gzip: stdin: unexpected end of file
>>> >> >> tar: Unexpected EOF in archive
>>> >> >> tar: Unexpected EOF in archive
>>> >> >> tar: Error is not recoverable: exiting now
>>> >> >>
>>> >> >> On Thu, Mar 17, 2016 at 8:57 AM, Michael Armbrust
>>> >> >> 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> Patrick reuploaded the artifacts, so it should be fixed now.
>>> >> >>>
>>> >> >>> On Mar 16, 2016 5:48 PM, "Nicholas Chammas"
>>> >> >>> 
>>> >> >>> wrote:
>>> >> 
>>> >>  Looks like the other packages may also be corrupt. I’m getting
>>> the
>>> >>  same
>>> >>  error for the Spark 1.6.1 / Hadoop 2.4 package.
>>> >> 
>>> >> 
>>> >> 
>>> >> 
>>> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.4.tgz
>>> >> 
>>> >>  Nick
>>> >> 
>>> >> 
>>> >>  On Wed, Mar 16, 2016 at 8:28 PM Ted Yu 
>>> wrote:
>>> >> >
>>> >> > On Linux, I got:
>>> >> >
>>> >> > $ tar zxf spark-1.6.1-bin-hadoop2.6.tgz
>>> >> >
>>> >> > gzip: stdin: unexpected end of file
>>> >> > tar: Unexpected EOF in archive
>>> >> > tar: Unexpected EOF in archive
>>> >> > tar: Error is not recoverable: exiting now
>>> >> >
>>> >> > On Wed, Mar 16, 2016 at 5:15 PM, Nicholas Chammas
>>> >> >  wrote:
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.6.tgz
>>> >> >>
>>> >> >> Does anyone else have trouble unzipping this? How did this
>>> happen?
>>> >> >>
>>> >> >> What I get is:
>>> >> >>
>>> >> >> $ gzip -t spark-1.6.1-bin-hadoop2.6.tgz
>>> >> >> gzip: spark-1.6.1-bin-hadoop2.6.tgz: unexpected end of file
>>> >> >> gzip: spark-1.6.1-bin-hadoop2.6.tgz: uncompress failed
>>> >> >>
>>> >> >> Seems like a strange type of problem to come across.
>>> >> >>
>>> >> >> Nick
>>> >> >
>>> >> >
>>> >> >>
>>> >> >
>>>
>>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Andrew Ash
Spark 2.x has to be the time for Java 8.

I'd rather increase JVM major version on a Spark major version than on a
Spark minor version, and I'd rather Spark do that upgrade for the 2.x
series than the 3.x series (~2yr from now based on the lifetime of Spark
1.x).  If we wait until the next opportunity for a breaking change to Spark
(3.x) we might be upgrading to Java 9 at that point rather than Java 8.

If Spark users need Java 7 they are free to continue using the 1.x series,
the same way that folks who need Java 6 are free to continue using 1.4

On Thu, Mar 24, 2016 at 11:46 AM, Stephen Boesch  wrote:

> +1 for java8 only   +1 for 2.11+ only .At this point scala libraries
> supporting only 2.10 are typically less active and/or poorly maintained.
> That trend will only continue when considering the lifespan of spark 2.X.
>
> 2016-03-24 11:32 GMT-07:00 Steve Loughran :
>
>>
>> On 24 Mar 2016, at 15:27, Koert Kuipers  wrote:
>>
>> i think the arguments are convincing, but it also makes me wonder if i
>> live in some kind of alternate universe... we deploy on customers clusters,
>> where the OS, python version, java version and hadoop distro are not chosen
>> by us. so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply
>> have access to a single proxy machine and launch through yarn. asking them
>> to upgrade java is pretty much out of the question or a 6+ month ordeal. of
>> the 10 client clusters i can think of on the top of my head all of them are
>> on java 7, none are on java 8. so by doing this you would make spark 2
>> basically unusable for us (unless most of them have plans of upgrading in
>> near term to java 8, i will ask around and report back...).
>>
>>
>>
>> It's not actually mandatory for the process executing in the Yarn cluster
>> to run with the same JVM as the rest of the Hadoop stack; all that is
>> needed is for the environment variables to set up the JAVA_HOME and PATH.
>> Switching JVMs not something which YARN makes it easy to do, but it may be
>> possible, especially if Spark itself provides some hooks, so you don't have
>> to manually lay with setting things up. That may be something which could
>> significantly ease adoption of Spark 2 in YARN clusters. Same for Python.
>>
>> This is something I could probably help others to address
>>
>>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Jakob Odersky
Reynold's 3rd point is particularly strong in my opinion. Supporting
Scala 2.12 will require Java 8 anyway, and introducing such a change
is probably best done in a major release.
Consider what would happen if Spark 2.0 doesn't require Java 8 and
hence doesn't support Scala 2.12. Will it be stuck on an older version
until 3.0 is out? Will it be introduced in a minor release?
I think 2.0 is the best time for such a change.

On Thu, Mar 24, 2016 at 11:46 AM, Stephen Boesch  wrote:
> +1 for java8 only   +1 for 2.11+ only .At this point scala libraries
> supporting only 2.10 are typically less active and/or poorly maintained.
> That trend will only continue when considering the lifespan of spark 2.X.
>
> 2016-03-24 11:32 GMT-07:00 Steve Loughran :
>>
>>
>> On 24 Mar 2016, at 15:27, Koert Kuipers  wrote:
>>
>> i think the arguments are convincing, but it also makes me wonder if i
>> live in some kind of alternate universe... we deploy on customers clusters,
>> where the OS, python version, java version and hadoop distro are not chosen
>> by us. so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply
>> have access to a single proxy machine and launch through yarn. asking them
>> to upgrade java is pretty much out of the question or a 6+ month ordeal. of
>> the 10 client clusters i can think of on the top of my head all of them are
>> on java 7, none are on java 8. so by doing this you would make spark 2
>> basically unusable for us (unless most of them have plans of upgrading in
>> near term to java 8, i will ask around and report back...).
>>
>>
>>
>> It's not actually mandatory for the process executing in the Yarn cluster
>> to run with the same JVM as the rest of the Hadoop stack; all that is needed
>> is for the environment variables to set up the JAVA_HOME and PATH. Switching
>> JVMs not something which YARN makes it easy to do, but it may be possible,
>> especially if Spark itself provides some hooks, so you don't have to
>> manually lay with setting things up. That may be something which could
>> significantly ease adoption of Spark 2 in YARN clusters. Same for Python.
>>
>> This is something I could probably help others to address
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Romi Kuntsman
+1 for Java 8 only

I think it will make it easier to make a unified API for Java and Scala,
instead of the wrappers of Java over Scala.
On Mar 24, 2016 11:46 AM, "Stephen Boesch"  wrote:

> +1 for java8 only   +1 for 2.11+ only .At this point scala libraries
> supporting only 2.10 are typically less active and/or poorly maintained.
> That trend will only continue when considering the lifespan of spark 2.X.
>
> 2016-03-24 11:32 GMT-07:00 Steve Loughran :
>
>>
>> On 24 Mar 2016, at 15:27, Koert Kuipers  wrote:
>>
>> i think the arguments are convincing, but it also makes me wonder if i
>> live in some kind of alternate universe... we deploy on customers clusters,
>> where the OS, python version, java version and hadoop distro are not chosen
>> by us. so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply
>> have access to a single proxy machine and launch through yarn. asking them
>> to upgrade java is pretty much out of the question or a 6+ month ordeal. of
>> the 10 client clusters i can think of on the top of my head all of them are
>> on java 7, none are on java 8. so by doing this you would make spark 2
>> basically unusable for us (unless most of them have plans of upgrading in
>> near term to java 8, i will ask around and report back...).
>>
>>
>>
>> It's not actually mandatory for the process executing in the Yarn cluster
>> to run with the same JVM as the rest of the Hadoop stack; all that is
>> needed is for the environment variables to set up the JAVA_HOME and PATH.
>> Switching JVMs not something which YARN makes it easy to do, but it may be
>> possible, especially if Spark itself provides some hooks, so you don't have
>> to manually lay with setting things up. That may be something which could
>> significantly ease adoption of Spark 2 in YARN clusters. Same for Python.
>>
>> This is something I could probably help others to address
>>
>>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Marcelo Vanzin
Hi Jakob,

On Thu, Mar 24, 2016 at 2:29 PM, Jakob Odersky  wrote:
> Reynold's 3rd point is particularly strong in my opinion. Supporting
> Consider what would happen if Spark 2.0 doesn't require Java 8 and
> hence not support Scala 2.12. Will it be stuck on an older version
> until 3.0 is out?

That's a false choice. You can support 2.10 (or 2.11) on Java 7 and
2.12 on Java 8.

I'm not saying it's a great idea, just that what you're suggesting
isn't really a problem.

-- 
Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Jakob Odersky
You can, but since it's going to be a maintainability issue I would
argue it is in fact a problem.

On Thu, Mar 24, 2016 at 2:34 PM, Marcelo Vanzin  wrote:
> Hi Jakob,
>
> On Thu, Mar 24, 2016 at 2:29 PM, Jakob Odersky  wrote:
>> Reynold's 3rd point is particularly strong in my opinion. Supporting
>> Consider what would happen if Spark 2.0 doesn't require Java 8 and
>> hence not support Scala 2.12. Will it be stuck on an older version
>> until 3.0 is out?
>
> That's a false choice. You can support 2.10 (or 2.11) on Java 7 and
> 2.12 on Java 8.
>
> I'm not saying it's a great idea, just that what you're suggesting
> isn't really a problem.
>
> --
> Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Jakob Odersky
I mean from the perspective of someone developing Spark, it makes
things more complicated. It's just my point of view, people that
actually support Spark deployments may have a different opinion ;)

On Thu, Mar 24, 2016 at 2:41 PM, Jakob Odersky  wrote:
> You can, but since it's going to be a maintainability issue I would
> argue it is in fact a problem.
>
> On Thu, Mar 24, 2016 at 2:34 PM, Marcelo Vanzin  wrote:
>> Hi Jakob,
>>
>> On Thu, Mar 24, 2016 at 2:29 PM, Jakob Odersky  wrote:
>>> Reynold's 3rd point is particularly strong in my opinion. Supporting
>>> Consider what would happen if Spark 2.0 doesn't require Java 8 and
>>> hence not support Scala 2.12. Will it be stuck on an older version
>>> until 3.0 is out?
>>
>> That's a false choice. You can support 2.10 (or 2.11) on Java 7 and
>> 2.12 on Java 8.
>>
>> I'm not saying it's a great idea, just that what you're suggesting
>> isn't really a problem.
>>
>> --
>> Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Marcelo Vanzin
On Thu, Mar 24, 2016 at 2:41 PM, Jakob Odersky  wrote:
> You can, but since it's going to be a maintainability issue I would
> argue it is in fact a problem.

Everything you choose to support generates a maintenance burden.
Supporting 3 versions of Scala would be a huge maintenance burden, for
example, as is supporting 2 versions of the JDK. Just note that,
technically, we do support 2 versions of the jdk today; we just don't
do a lot of automated testing on jdk 8 (PRs are all built with jdk 7
AFAIK).

So in the end it's a compromise. How many users will be affected by
your choices? That's the question that I think is the most important.
If switching to java 8-only means a bunch of users won't be able to
upgrade, it means that Spark 2.0 will get less use than 1.x and will
take longer to gain traction. That has other ramifications - such as
less use means fewer issues might be found and the overall quality may
suffer in the beginning of this transition.

-- 
Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Kostas Sakellis
If an argument here is the ongoing build/maintenance burden, I think we
should seriously consider dropping scala 2.10 in Spark 2.0. Supporting
scala 2.10 is a bigger build/infrastructure burden than supporting jdk7, since
you actually have to build different artifacts and test them, whereas you
can target Spark at 1.7 and just test it on JDK8.
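
Concretely, each additional Scala version means repeating something like the
following for every Hadoop profile you ship (roughly the cross-build steps the
1.6 build docs describe; flags shown for illustration only):

$ ./dev/change-scala-version.sh 2.11
$ build/mvn -Pyarn -Phadoop-2.6 -Dscala-2.11 -DskipTests clean package

whereas the JDK question only changes which JVM the one set of artifacts is
built and tested on.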

In addition, as others pointed out, it seems like a bigger pain to drop
support for a JDK than scala version. So if we are considering dropping
java 7, which is a breaking change on the infra side, now is also a good
time to drop Scala 2.10 support.

Kostas

P.S. I haven't heard anyone on this thread fight for Scala 2.10 support.

On Thu, Mar 24, 2016 at 2:46 PM, Marcelo Vanzin  wrote:

> On Thu, Mar 24, 2016 at 2:41 PM, Jakob Odersky  wrote:
> > You can, but since it's going to be a maintainability issue I would
> > argue it is in fact a problem.
>
> Every thing you choose to support generates a maintenance burden.
> Support 3 versions of Scala would be a huge maintenance burden, for
> example, as is supporting 2 versions of the JDK. Just note that,
> technically, we do support 2 versions of the jdk today; we just don't
> do a lot of automated testing on jdk 8 (PRs are all built with jdk 7
> AFAIK).
>
> So at the end it's a compromise. How many users will be affected by
> your choices? That's the question that I think is the most important.
> If switching to java 8-only means a bunch of users won't be able to
> upgrade, it means that Spark 2.0 will get less use than 1.x and will
> take longer to gain traction. That has other ramifications - such as
> less use means less issues might be found and the overall quality may
> suffer in the beginning of this transition.
>
> --
> Marcelo
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Reynold Xin
Actually it's *way* harder to upgrade Scala from 2.10 to 2.11 than to
upgrade the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are not
binary compatible, whereas JVM 7 and 8 are binary compatible except in certain
esoteric cases.


On Thu, Mar 24, 2016 at 4:44 PM, Kostas Sakellis 
wrote:

> If an argument here is the ongoing build/maintenance burden I think we
> should seriously consider dropping scala 2.10 in Spark 2.0. Supporting
> scala 2.10 is bigger build/infrastructure burden than supporting jdk7 since
> you actually have to build different artifacts and test them whereas you
> can target Spark onto 1.7 and just test it on JDK8.
>
> In addition, as others pointed out, it seems like a bigger pain to drop
> support for a JDK than scala version. So if we are considering dropping
> java 7, which is a breaking change on the infra side, now is also a good
> time to drop Scala 2.10 support.
>
> Kostas
>
> P.S. I haven't heard anyone on this thread fight for Scala 2.10 support.
>
> On Thu, Mar 24, 2016 at 2:46 PM, Marcelo Vanzin 
> wrote:
>
>> On Thu, Mar 24, 2016 at 2:41 PM, Jakob Odersky  wrote:
>> > You can, but since it's going to be a maintainability issue I would
>> > argue it is in fact a problem.
>>
>> Every thing you choose to support generates a maintenance burden.
>> Support 3 versions of Scala would be a huge maintenance burden, for
>> example, as is supporting 2 versions of the JDK. Just note that,
>> technically, we do support 2 versions of the jdk today; we just don't
>> do a lot of automated testing on jdk 8 (PRs are all built with jdk 7
>> AFAIK).
>>
>> So at the end it's a compromise. How many users will be affected by
>> your choices? That's the question that I think is the most important.
>> If switching to java 8-only means a bunch of users won't be able to
>> upgrade, it means that Spark 2.0 will get less use than 1.x and will
>> take longer to gain traction. That has other ramifications - such as
>> less use means less issues might be found and the overall quality may
>> suffer in the beginning of this transition.
>>
>> --
>> Marcelo
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>> For additional commands, e-mail: dev-h...@spark.apache.org
>>
>>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Marcelo Vanzin
On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin  wrote:
> Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
> upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are not
> binary compatible, whereas JVM 7 and 8 are binary compatible except certain
> esoteric cases.

True, but ask anyone who manages a large cluster how long it would
take them to upgrade the jdk across their cluster and validate all
their applications and everything... binary compatibility is a tiny
drop in that bucket.

-- 
Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Reynold Xin
If you want to go down that route, you should also ask somebody who has had
experience managing a large organization's applications and has tried to
update the Scala version.


On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin  wrote:

> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin  wrote:
> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are
> not
> > binary compatible, whereas JVM 7 and 8 are binary compatible except
> certain
> > esoteric cases.
>
> True, but ask anyone who manages a large cluster how long it would
> take them to upgrade the jdk across their cluster and validate all
> their applications and everything... binary compatibility is a tiny
> drop in that bucket.
>
> --
> Marcelo
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Kostas Sakellis
In addition, with Spark 2.0, we are throwing away binary compatibility
anyway, so user applications will have to be recompiled.

The only argument I can see is for libraries that have already been built
against Scala 2.10 and are no longer being maintained. How big of an issue do
we think that is?

Kostas

On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin  wrote:

> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin  wrote:
> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are
> not
> > binary compatible, whereas JVM 7 and 8 are binary compatible except
> certain
> > esoteric cases.
>
> True, but ask anyone who manages a large cluster how long it would
> take them to upgrade the jdk across their cluster and validate all
> their applications and everything... binary compatibility is a tiny
> drop in that bucket.
>
> --
> Marcelo
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Mark Hamstra
It's a pain in the ass.  Especially if some of your transitive dependencies
never upgraded from 2.10 to 2.11.

On Thu, Mar 24, 2016 at 4:50 PM, Reynold Xin  wrote:

> If you want to go down that route, you should also ask somebody who has
> had experience managing a large organization's applications and try to
> update Scala version.
>
>
> On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin 
> wrote:
>
>> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin  wrote:
>> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
>> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are
>> not
>> > binary compatible, whereas JVM 7 and 8 are binary compatible except
>> certain
>> > esoteric cases.
>>
>> True, but ask anyone who manages a large cluster how long it would
>> take them to upgrade the jdk across their cluster and validate all
>> their applications and everything... binary compatibility is a tiny
>> drop in that bucket.
>>
>> --
>> Marcelo
>>
>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Marcelo Vanzin
On Thu, Mar 24, 2016 at 4:50 PM, Reynold Xin  wrote:
> If you want to go down that route, you should also ask somebody who has had
> experience managing a large organization's applications and try to update
> Scala version.

I understand both sides. But if you look at what I've been asking
since the beginning, it's all about the costs and benefits of dropping
support for java 1.7.

The biggest argument in your original e-mail is about testing. And the
testing cost is much bigger for supporting scala 2.10 than it is for
supporting java 1.7. If you read one of my earlier replies, it should
even be possible to just do everything in a single job - compile for
java 7 and still be able to test things in 1.8, including lambdas,
which seems to be the main thing you were worried about.


> On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin  wrote:
>>
>> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin  wrote:
>> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
>> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are
>> > not
>> > binary compatible, whereas JVM 7 and 8 are binary compatible except
>> > certain
>> > esoteric cases.
>>
>> True, but ask anyone who manages a large cluster how long it would
>> take them to upgrade the jdk across their cluster and validate all
>> their applications and everything... binary compatibility is a tiny
>> drop in that bucket.
>>
>> --
>> Marcelo
>
>



-- 
Marcelo

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Mark Hamstra
There aren't many such libraries, but there are a few.  When faced with one
of those dependencies that still doesn't go beyond 2.10, you essentially
have the choice of taking on the maintenance burden to bring the library up
to date, or doing what is potentially a fairly large refactoring to use
an alternative, well-maintained library.

On Thu, Mar 24, 2016 at 4:53 PM, Kostas Sakellis 
wrote:

> In addition, with Spark 2.0, we are throwing away binary compatibility
> anyways so user applications will have to be recompiled.
>
> The only argument I can see is for libraries that have already been built
> on Scala 2.10 that are no longer being maintained. How big of an issue do
> we think that is?
>
> Kostas
>
> On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin 
> wrote:
>
>> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin  wrote:
>> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
>> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are
>> not
>> > binary compatible, whereas JVM 7 and 8 are binary compatible except
>> certain
>> > esoteric cases.
>>
>> True, but ask anyone who manages a large cluster how long it would
>> take them to upgrade the jdk across their cluster and validate all
>> their applications and everything... binary compatibility is a tiny
>> drop in that bucket.
>>
>> --
>> Marcelo
>>
>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Michael Armbrust
On Thu, Mar 24, 2016 at 4:54 PM, Mark Hamstra 
 wrote:

> It's a pain in the ass.  Especially if some of your transitive
> dependencies never upgraded from 2.10 to 2.11.
>

Yeah, I'm going to have to agree here.  It is not as bad as it was in the
2.9 days, but it's still non-trivial due to the ecosystem part of it.  For
this reason I think that it is premature to drop support for 2.10.x.


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Koert Kuipers
the good news is that, from a shared infrastructure perspective, most
places have zero scala, so the upgrade is actually very easy. i can see how
it would be different for, say, twitter

On Thu, Mar 24, 2016 at 7:50 PM, Reynold Xin  wrote:

> If you want to go down that route, you should also ask somebody who has
> had experience managing a large organization's applications and try to
> update Scala version.
>
>
> On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin 
> wrote:
>
>> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin  wrote:
>> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
>> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are
>> not
>> > binary compatible, whereas JVM 7 and 8 are binary compatible except
>> certain
>> > esoteric cases.
>>
>> True, but ask anyone who manages a large cluster how long it would
>> take them to upgrade the jdk across their cluster and validate all
>> their applications and everything... binary compatibility is a tiny
>> drop in that bucket.
>>
>> --
>> Marcelo
>>
>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Mridul Muralidharan
Container Java version can be different from the yarn Java version: we run
jobs with jdk8 on a jdk7 cluster without issues.

Regards
Mridul

On Thursday, March 24, 2016, Koert Kuipers  wrote:

> i guess what i am saying is that in a yarn world the only hard
> restrictions left are the containers you run in, which means the hadoop
> version, java version and python version (if you use python).
>
>
> On Thu, Mar 24, 2016 at 12:39 PM, Koert Kuipers  > wrote:
>
>> The group will not upgrade to spark 2.0 themselves, but they are mostly
>> fine with vendors like us deploying our application via yarn with whatever
>> spark version we choose (and bundle, so they do not install it separately,
>> they might not even be aware of what spark version we use). This all works
>> because spark does not need to be on the cluster nodes, just on the one
>> machine where our application gets launched. Having yarn is pretty awesome
>> in this respect.
>>
>> On Thu, Mar 24, 2016 at 12:25 PM, Sean Owen > > wrote:
>>
>>> (PS CDH5 runs fine with Java 8, but I understand your more general
>>> point.)
>>>
>>> This is a familiar context indeed, but in that context, would a group
>>> not wanting to update to Java 8 want to manually put Spark 2.0 into
>>> the mix? That is, if this is a context where the cluster is
>>> purposefully some stable mix of components, would you be updating just
>>> one?
>>>
>>> You make a good point about Scala being more library than
>>> infrastructure component. So it can be updated on a per-app basis. On
>>> the one hand it's harder to handle different Scala versions from the
>>> framework side, it's less hard on the deployment side.
>>>
>>> On Thu, Mar 24, 2016 at 4:27 PM, Koert Kuipers >> > wrote:
>>> > i think the arguments are convincing, but it also makes me wonder if i
>>> live
>>> > in some kind of alternate universe... we deploy on customers clusters,
>>> where
>>> > the OS, python version, java version and hadoop distro are not chosen
>>> by us.
>>> > so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply
>>> have
>>> > access to a single proxy machine and launch through yarn. asking them
>>> to
>>> > upgrade java is pretty much out of the question or a 6+ month ordeal.
>>> of the
>>> > 10 client clusters i can think of on the top of my head all of them
>>> are on
>>> > java 7, none are on java 8. so by doing this you would make spark 2
>>> > basically unusable for us (unless most of them have plans of upgrading
>>> in
>>> > near term to java 8, i will ask around and report back...).
>>> >
>>> > on a side note, its particularly interesting to me that spark 2 chose
>>> to
>>> > continue support for scala 2.10, because even for us in our very
>>> constricted
>>> > client environments the scala version is something we can easily
>>> upgrade (we
>>> > just deploy a custom build of spark for the relevant scala version and
>>> > hadoop distro). and because scala is not a dependency of any hadoop
>>> distro
>>> > (so not on classpath, which i am very happy about) we can use whatever
>>> scala
>>> > version we like. also i found the upgrade path from scala 2.10 to 2.11
>>> to be
>>> > very easy, so i have a hard time understanding why anyone would stay on
>>> > scala 2.10. and finally with scala 2.12 around the corner you really
>>> dont
>>> > want to be supporting 3 versions. so clearly i am missing something
>>> here.
>>> >
>>> >
>>> >
>>> > On Thu, Mar 24, 2016 at 8:52 AM, Jean-Baptiste Onofré <j...@nanthrax.net>
>>> > wrote:
>>> >>
>>> >> +1 to support Java 8 (and future) *only* in Spark 2.0, and end
>>> support of
>>> >> Java 7. It makes sense.
>>> >>
>>> >> Regards
>>> >> JB
>>> >>
>>> >>
>>> >> On 03/24/2016 08:27 AM, Reynold Xin wrote:
>>> >>>
>>> >>> About a year ago we decided to drop Java 6 support in Spark 1.5. I am
>>> >>> wondering if we should also just drop Java 7 support in Spark 2.0
>>> (i.e.
>>> >>> Spark 2.0 would require Java 8 to run).
>>> >>>
>>> >>> Oracle ended public updates for JDK 7 in one year ago (Apr 2015), and
>>> >>> removed public downloads for JDK 7 in July 2015. In the past I've
>>> >>> actually been against dropping Java 8, but today I ran into an issue
>>> >>> with the new Dataset API not working well with Java 8 lambdas, and
>>> that
>>> >>> changed my opinion on this.
>>> >>>
>>> >>> I've been thinking more about this issue today and also talked with a
>>> >>> lot people offline to gather feedback, and I actually think the pros
>>> >>> outweighs the cons, for the following reasons (in some rough order of
>>> >>> importance):
>>> >>>
>>> >>> 1. It is complicated to test how well Spark APIs work for Java
>>> lambdas
>>> >>> if we support Java 7. Jenkins machines need to have both Java 7 and
>>> Java
>>> >>> 8 installed and we must run through a set of test suites in 7, and
>>> then
>>> >>> the lambda tests in Java 8. This complicates build
>>> environments/scripts,
>>> >>> and makes them less robust. Without good testing infrastructure, I
>>> have
>>> >>> no confidence i

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Koert Kuipers
i think marcelo also pointed this out before. it's very interesting to hear,
i was not aware of that until today. it would mean we would only have to
convince a group/client with a cluster to install jdk8 on the nodes,
without actually transitioning to it, if i understand it correctly. that
would definitely lower the hurdle by a lot.

On Thu, Mar 24, 2016 at 9:36 PM, Mridul Muralidharan 
wrote:

>
> Container Java version can be different from yarn Java version : we run
> jobs with jdk8 on jdk7 cluster without issues.
>
> Regards
> Mridul
>
>
> On Thursday, March 24, 2016, Koert Kuipers  wrote:
>
>> i guess what i am saying is that in a yarn world the only hard
>> restrictions left are the containers you run in, which means the hadoop
>> version, java version and python version (if you use python).
>>
>>
>> On Thu, Mar 24, 2016 at 12:39 PM, Koert Kuipers 
>> wrote:
>>
>>> The group will not upgrade to spark 2.0 themselves, but they are mostly
>>> fine with vendors like us deploying our application via yarn with whatever
>>> spark version we choose (and bundle, so they do not install it separately,
>>> they might not even be aware of what spark version we use). This all works
>>> because spark does not need to be on the cluster nodes, just on the one
>>> machine where our application gets launched. Having yarn is pretty awesome
>>> in this respect.
>>>
>>> On Thu, Mar 24, 2016 at 12:25 PM, Sean Owen  wrote:
>>>
 (PS CDH5 runs fine with Java 8, but I understand your more general
 point.)

 This is a familiar context indeed, but in that context, would a group
 not wanting to update to Java 8 want to manually put Spark 2.0 into
 the mix? That is, if this is a context where the cluster is
 purposefully some stable mix of components, would you be updating just
 one?

 You make a good point about Scala being more library than
 infrastructure component. So it can be updated on a per-app basis. On
 the one hand it's harder to handle different Scala versions from the
 framework side, it's less hard on the deployment side.

 On Thu, Mar 24, 2016 at 4:27 PM, Koert Kuipers 
 wrote:
 > i think the arguments are convincing, but it also makes me wonder if
 i live
 > in some kind of alternate universe... we deploy on customers
 clusters, where
 > the OS, python version, java version and hadoop distro are not chosen
 by us.
 > so think centos 6, cdh5 or hdp 2.3, java 7 and python 2.6. we simply
 have
 > access to a single proxy machine and launch through yarn. asking them
 to
 > upgrade java is pretty much out of the question or a 6+ month ordeal.
 of the
 > 10 client clusters i can think of on the top of my head all of them
 are on
 > java 7, none are on java 8. so by doing this you would make spark 2
 > basically unusable for us (unless most of them have plans of
 upgrading in
 > near term to java 8, i will ask around and report back...).
 >
 > on a side note, its particularly interesting to me that spark 2 chose
 to
 > continue support for scala 2.10, because even for us in our very
 constricted
 > client environments the scala version is something we can easily
 upgrade (we
 > just deploy a custom build of spark for the relevant scala version and
 > hadoop distro). and because scala is not a dependency of any hadoop
 distro
 > (so not on classpath, which i am very happy about) we can use
 whatever scala
 > version we like. also i found the upgrade path from scala 2.10 to
 2.11 to be
 > very easy, so i have a hard time understanding why anyone would stay
 on
 > scala 2.10. and finally with scala 2.12 around the corner you really
 dont
 > want to be supporting 3 versions. so clearly i am missing something
 here.
 >
 >
 >
 > On Thu, Mar 24, 2016 at 8:52 AM, Jean-Baptiste Onofré <
 j...@nanthrax.net>
 > wrote:
 >>
 >> +1 to support Java 8 (and future) *only* in Spark 2.0, and end
 support of
 >> Java 7. It makes sense.
 >>
 >> Regards
 >> JB
 >>
 >>
 >> On 03/24/2016 08:27 AM, Reynold Xin wrote:
 >>>
 >>> About a year ago we decided to drop Java 6 support in Spark 1.5. I
 am
 >>> wondering if we should also just drop Java 7 support in Spark 2.0
 (i.e.
 >>> Spark 2.0 would require Java 8 to run).
 >>>
 >>> Oracle ended public updates for JDK 7 in one year ago (Apr 2015),
 and
 >>> removed public downloads for JDK 7 in July 2015. In the past I've
 >>> actually been against dropping Java 8, but today I ran into an issue
 >>> with the new Dataset API not working well with Java 8 lambdas, and
 that
 >>> changed my opinion on this.
 >>>
 >>> I've been thinking more about this issue today and also talked with
 a
 >>> lot people offline to gather feedback, and I actually think the pr

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Mridul Muralidharan
Removing compatibility (with the JDK, etc.) can be done with a major release.
Given that Java 7 was EOLed a while back and is now unsupported, we have to
decide whether we drop support for it in 2.0 or in 3.0 (2+ years from now).

Given the functionality and performance benefits of going to JDK 8, the future
enhancements relevant in the 2.x timeframe (Scala, dependencies) which require
it, and the simplicity w.r.t. code, test and support, 2.0 looks like a good
checkpoint to drop JDK 7 support.

As already mentioned in the thread, existing YARN clusters are unaffected if
they want to continue running JDK 7 and yet use Spark 2 (install JDK 8 on all
nodes and use it via JAVA_HOME, or worst case distribute JDK 8 as an archive -
suboptimal).
I am unsure about Mesos (standalone might be an easier upgrade, I guess?).
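
The archive route would look roughly like this, assuming a JDK 8 tarball has
been uploaded to HDFS (the location and directory names are made up); YARN
localizes the archive into each container's working directory, so the
cluster-wide JDK 7 is left untouched:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  // Ship the JDK with the app; the '#jdk8' suffix is the alias it is unpacked under.
  .set("spark.yarn.dist.archives", "hdfs:///tools/jdk-8u77-linux-x64.tgz#jdk8") // hypothetical location
  // Point the AM and executors at the unpacked JDK instead of the node's JDK 7.
  .set("spark.yarn.appMasterEnv.JAVA_HOME", "./jdk8/jdk1.8.0_77")
  .set("spark.executorEnv.JAVA_HOME", "./jdk8/jdk1.8.0_77")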


The proposal is for the 1.6.x line to continue to be supported with critical
fixes; newer features will require 2.x and hence JDK 8.

Regards
Mridul


On Thursday, March 24, 2016, Marcelo Vanzin  wrote:

> On Thu, Mar 24, 2016 at 4:50 PM, Reynold Xin  > wrote:
> > If you want to go down that route, you should also ask somebody who has
> had
> > experience managing a large organization's applications and try to update
> > Scala version.
>
> I understand both sides. But if you look at what I've been asking
> since the beginning, it's all about the cost and benefits of dropping
> support for java 1.7.
>
> The biggest argument in your original e-mail is about testing. And the
> testing cost is much bigger for supporting scala 2.10 than it is for
> supporting java 1.7. If you read one of my earlier replies, it should
> be even possible to just do everything in a single job - compile for
> java 7 and still be able to test things in 1.8, including lambdas,
> which seems to be the main thing you were worried about.
>
>
> > On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin  > wrote:
> >>
> >> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin  > wrote:
> >> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
> >> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11 are
> >> > not
> >> > binary compatible, whereas JVM 7 and 8 are binary compatible except
> >> > certain
> >> > esoteric cases.
> >>
> >> True, but ask anyone who manages a large cluster how long it would
> >> take them to upgrade the jdk across their cluster and validate all
> >> their applications and everything... binary compatibility is a tiny
> >> drop in that bucket.
> >>
> >> --
> >> Marcelo
> >
> >
>
>
>
> --
> Marcelo
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org 
> For additional commands, e-mail: dev-h...@spark.apache.org 
>
>
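
A rough sketch of the single-job setup Marcelo describes above, expressed as
an sbt fragment (the JDK 7 rt.jar path is an assumption; the bootclasspath is
what catches accidental use of Java-8-only APIs while the build and the tests
themselves run on a JDK 8 JVM):

// build.sbt fragment: emit Java 7 bytecode from a JDK 8 toolchain.
javacOptions ++= Seq(
  "-source", "1.7",
  "-target", "1.7",
  "-bootclasspath", "/opt/jdk1.7.0/jre/lib/rt.jar") // hypothetical JDK 7 install
scalacOptions += "-target:jvm-1.7"
// Java 8 lambda test sources would sit in a separate module compiled with -source 1.8.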


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Koert Kuipers
I think that logic is reasonable, but then the same should also apply to
Scala 2.10, which is also unmaintained/unsupported at this point (basically
has been since March 2015, except for one hotfix due to a license
incompatibility).

Who wants to be supporting Scala 2.10 three years after its last
maintenance release?


On Thu, Mar 24, 2016 at 9:59 PM, Mridul Muralidharan 
wrote:

> Removing compatibility (with jdk, etc) can be done with a major release-
> given that 7 has been EOLed a while back and is now unsupported, we have to
> decide if we drop support for it in 2.0 or 3.0 (2+ years from now).
>
> Given the functionality & performance benefits of going to jdk8, future
> enhancements relevant in 2.x timeframe ( scala, dependencies) which
> requires it, and simplicity wrt code, test & support it looks like a good
> checkpoint to drop jdk7 support.
>
> As already mentioned in the thread, existing yarn clusters are unaffected
> if they want to continue running jdk7 and yet use spark2 (install jdk8 on
> all nodes and use it via JAVA_HOME, or worst case distribute jdk8 as
> archive - suboptimal).
> I am unsure about mesos (standalone might be easier upgrade I guess ?).
>
>
> Proposal is for 1.6x line to continue to be supported with critical fixes; 
> newer
> features will require 2.x and so jdk8
>
> Regards
> Mridul
>
>
> On Thursday, March 24, 2016, Marcelo Vanzin  wrote:
>
>> On Thu, Mar 24, 2016 at 4:50 PM, Reynold Xin  wrote:
>> > If you want to go down that route, you should also ask somebody who has
>> had
>> > experience managing a large organization's applications and try to
>> update
>> > Scala version.
>>
>> I understand both sides. But if you look at what I've been asking
>> since the beginning, it's all about the cost and benefits of dropping
>> support for java 1.7.
>>
>> The biggest argument in your original e-mail is about testing. And the
>> testing cost is much bigger for supporting scala 2.10 than it is for
>> supporting java 1.7. If you read one of my earlier replies, it should
>> be even possible to just do everything in a single job - compile for
>> java 7 and still be able to test things in 1.8, including lambdas,
>> which seems to be the main thing you were worried about.
>>
>>
>> > On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin 
>> wrote:
>> >>
>> >> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin 
>> wrote:
>> >> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
>> >> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11
>> are
>> >> > not
>> >> > binary compatible, whereas JVM 7 and 8 are binary compatible except
>> >> > certain
>> >> > esoteric cases.
>> >>
>> >> True, but ask anyone who manages a large cluster how long it would
>> >> take them to upgrade the jdk across their cluster and validate all
>> >> their applications and everything... binary compatibility is a tiny
>> >> drop in that bucket.
>> >>
>> >> --
>> >> Marcelo
>> >
>> >
>>
>>
>>
>> --
>> Marcelo
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>> For additional commands, e-mail: dev-h...@spark.apache.org
>>
>>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Liwei Lin
The arguments are really convincing; the new Dataset API as well as the
performance improvements are exciting, so I'm personally +1 on moving onto
Java 8.

However, I'm afraid Tencent is one of "the organizations stuck with Java 7"
-- our IT Infra division wouldn't upgrade to Java 7 until Java 8 is out, and
wouldn't upgrade to Java 8 until Java 9 is out.

So:

(non-binding) +1 on dropping Scala 2.10 support
(non-binding) -1 on dropping Java 7 support
  * as long as we figure out a practical way to run Spark with JDK 8 on
    JDK 7 clusters, this -1 would then definitely be +1

Thanks!

On Fri, Mar 25, 2016 at 10:28 AM, Koert Kuipers  wrote:

> i think that logic is reasonable, but then the same should also apply to
> scala 2.10, which is also unmaintained/unsupported at this point (basically
> has been since march 2015 except for one hotfix due to a license
> incompatibility)
>
> who wants to support scala 2.10 three years after they did the last
> maintenance release?
>
>
> On Thu, Mar 24, 2016 at 9:59 PM, Mridul Muralidharan 
> wrote:
>
>> Removing compatibility (with jdk, etc) can be done with a major release-
>> given that 7 has been EOLed a while back and is now unsupported, we have to
>> decide if we drop support for it in 2.0 or 3.0 (2+ years from now).
>>
>> Given the functionality & performance benefits of going to jdk8, future
>> enhancements relevant in 2.x timeframe ( scala, dependencies) which
>> requires it, and simplicity wrt code, test & support it looks like a good
>> checkpoint to drop jdk7 support.
>>
>> As already mentioned in the thread, existing yarn clusters are unaffected
>> if they want to continue running jdk7 and yet use spark2 (install jdk8 on
>> all nodes and use it via JAVA_HOME, or worst case distribute jdk8 as
>> archive - suboptimal).
>> I am unsure about mesos (standalone might be easier upgrade I guess ?).
>>
>>
>> Proposal is for 1.6x line to continue to be supported with critical
>> fixes; newer features will require 2.x and so jdk8
>>
>> Regards
>> Mridul
>>
>>
>> On Thursday, March 24, 2016, Marcelo Vanzin  wrote:
>>
>>> On Thu, Mar 24, 2016 at 4:50 PM, Reynold Xin 
>>> wrote:
>>> > If you want to go down that route, you should also ask somebody who
>>> has had
>>> > experience managing a large organization's applications and try to
>>> update
>>> > Scala version.
>>>
>>> I understand both sides. But if you look at what I've been asking
>>> since the beginning, it's all about the cost and benefits of dropping
>>> support for java 1.7.
>>>
>>> The biggest argument in your original e-mail is about testing. And the
>>> testing cost is much bigger for supporting scala 2.10 than it is for
>>> supporting java 1.7. If you read one of my earlier replies, it should
>>> be even possible to just do everything in a single job - compile for
>>> java 7 and still be able to test things in 1.8, including lambdas,
>>> which seems to be the main thing you were worried about.
>>>
>>>
>>> > On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin 
>>> wrote:
>>> >>
>>> >> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin 
>>> wrote:
>>> >> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
>>> >> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11
>>> are
>>> >> > not
>>> >> > binary compatible, whereas JVM 7 and 8 are binary compatible except
>>> >> > certain
>>> >> > esoteric cases.
>>> >>
>>> >> True, but ask anyone who manages a large cluster how long it would
>>> >> take them to upgrade the jdk across their cluster and validate all
>>> >> their applications and everything... binary compatibility is a tiny
>>> >> drop in that bucket.
>>> >>
>>> >> --
>>> >> Marcelo
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Marcelo
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: dev-h...@spark.apache.org
>>>
>>>
>


Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-24 Thread Mridul Muralidharan
I do agree w.r.t. Scala 2.10 as well; similar arguments apply (though there
is a nuanced difference: source compatibility for Scala vs. binary
compatibility w.r.t. Java).
Was there a proposal which did not go through? Not sure if I missed it.

Regards
Mridul

On Thursday, March 24, 2016, Koert Kuipers  wrote:

> i think that logic is reasonable, but then the same should also apply to
> scala 2.10, which is also unmaintained/unsupported at this point (basically
> has been since march 2015 except for one hotfix due to a license
> incompatibility)
>
> who wants to support scala 2.10 three years after they did the last
> maintenance release?
>
>
> On Thu, Mar 24, 2016 at 9:59 PM, Mridul Muralidharan  > wrote:
>
>> Removing compatibility (with jdk, etc) can be done with a major release-
>> given that 7 has been EOLed a while back and is now unsupported, we have to
>> decide if we drop support for it in 2.0 or 3.0 (2+ years from now).
>>
>> Given the functionality & performance benefits of going to jdk8, future
>> enhancements relevant in 2.x timeframe ( scala, dependencies) which
>> requires it, and simplicity wrt code, test & support it looks like a good
>> checkpoint to drop jdk7 support.
>>
>> As already mentioned in the thread, existing yarn clusters are unaffected
>> if they want to continue running jdk7 and yet use spark2 (install jdk8 on
>> all nodes and use it via JAVA_HOME, or worst case distribute jdk8 as
>> archive - suboptimal).
>> I am unsure about mesos (standalone might be easier upgrade I guess ?).
>>
>>
>> Proposal is for 1.6x line to continue to be supported with critical
>> fixes; newer features will require 2.x and so jdk8
>>
>> Regards
>> Mridul
>>
>>
>> On Thursday, March 24, 2016, Marcelo Vanzin > > wrote:
>>
>>> On Thu, Mar 24, 2016 at 4:50 PM, Reynold Xin 
>>> wrote:
>>> > If you want to go down that route, you should also ask somebody who
>>> has had
>>> > experience managing a large organization's applications and try to
>>> update
>>> > Scala version.
>>>
>>> I understand both sides. But if you look at what I've been asking
>>> since the beginning, it's all about the cost and benefits of dropping
>>> support for java 1.7.
>>>
>>> The biggest argument in your original e-mail is about testing. And the
>>> testing cost is much bigger for supporting scala 2.10 than it is for
>>> supporting java 1.7. If you read one of my earlier replies, it should
>>> be even possible to just do everything in a single job - compile for
>>> java 7 and still be able to test things in 1.8, including lambdas,
>>> which seems to be the main thing you were worried about.
>>>
>>>
>>> > On Thu, Mar 24, 2016 at 4:48 PM, Marcelo Vanzin 
>>> wrote:
>>> >>
>>> >> On Thu, Mar 24, 2016 at 4:46 PM, Reynold Xin 
>>> wrote:
>>> >> > Actually it's *way* harder to upgrade Scala from 2.10 to 2.11, than
>>> >> > upgrading the JVM runtime from 7 to 8, because Scala 2.10 and 2.11
>>> are
>>> >> > not
>>> >> > binary compatible, whereas JVM 7 and 8 are binary compatible except
>>> >> > certain
>>> >> > esoteric cases.
>>> >>
>>> >> True, but ask anyone who manages a large cluster how long it would
>>> >> take them to upgrade the jdk across their cluster and validate all
>>> >> their applications and everything... binary compatibility is a tiny
>>> >> drop in that bucket.
>>> >>
>>> >> --
>>> >> Marcelo
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Marcelo
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: dev-h...@spark.apache.org
>>>
>>>
>


Re: Can we remove private[spark] from Metrics Source and SInk traits?

2016-03-24 Thread Saisai Shao
+1 on exposing the source/sink interfaces. MetricsSystem supports pluggable
sources and sinks by nature, so supporting this doesn't require a big change
to the current code. Also, there are lots of requests to add custom sinks and
sources to the MetricsSystem, and it is not suitable to maintain all of that
code in the Spark codebase.

Previously we used workarounds like SparkEnv.metricsSystem to register sources
and sinks; I think that is quite hacky, so exposing these interfaces will make
it easier for external developers to use the MetricsSystem.
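
For reference, the kind of source involved is tiny; a rough sketch (the names
are made up, and today this only compiles if the class sits somewhere under
the org.apache.spark package tree, precisely because Source is private[spark]):

import com.codahale.metrics.{Counter, MetricRegistry}
import org.apache.spark.SparkEnv
import org.apache.spark.metrics.source.Source

class MyAppSource extends Source {
  override val sourceName: String = "myapp"
  override val metricRegistry: MetricRegistry = new MetricRegistry
  // Application-level counter exposed through whatever sinks are configured.
  val recordsProcessed: Counter =
    metricRegistry.counter(MetricRegistry.name("recordsProcessed"))
}

// The hacky registration path mentioned above.
SparkEnv.get.metricsSystem.registerSource(new MyAppSource)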

I'm going to open a JIRA for this.

Thanks
Jerry

On Tue, Mar 22, 2016 at 6:29 PM, Steve Loughran 
wrote:

>
> On 19 Mar 2016, at 16:16, Pete Robbins  wrote:
>
>
> There are several open Jiras to add new Sinks
>
> OpenTSDB https://issues.apache.org/jira/browse/SPARK-12194
> StatsD https://issues.apache.org/jira/browse/SPARK-11574
>
>
>
> statsd is nicely easy to test: either listen in on a (localhost, port) or
> simply create a socket and force it into the sink for the test run
>
>
> Kafka https://issues.apache.org/jira/browse/SPARK-13392
>
> Some have PRs from 2015 so I'm assuming there is not the desire to
> integrate these into core Spark. Opening up the Sink/Source interfaces
> would at least allow these to exist somewhere such as spark-packages
> without having to pollute the o.a.s namespace
>
>
> On Sat, 19 Mar 2016 at 13:05 Gerard Maas  wrote:
>
>> +1
>> On Mar 19, 2016 08:33, "Pete Robbins"  wrote:
>>
>>> This seems to me to be unnecessarily restrictive. These are very useful
>>> extension points for adding 3rd party sources and sinks.
>>>
>>> I intend to make an Elasticsearch sink available on spark-packages but
>>> this will require a single class, the sink, to be in the org.apache.spark
>>> package tree. I could submit the package as a PR to the Spark codebase, and
>>> I'd be happy to do that but it could be a completely separate add-on.
>>>
>>> There are similar issues with writing a 3rd party metrics source which
>>> may not be of interest to the community at large so would probably not
>>> warrant inclusion in the Spark codebase.
>>>
>>> Any thoughts?
>>>
>>
>


Does SparkSql has official jdbc/odbc driver ?

2016-03-24 Thread sage
Hi all,
   Does Spark SQL have an official JDBC/ODBC driver?
   I only found third-party ODBC/JDBC drivers, like Simba's, and most of the
third-party ODBC/JDBC drivers are not free to use.
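
For what it's worth, the Thrift JDBC/ODBC server bundled with Spark speaks the
HiveServer2 protocol, so the plain Hive JDBC driver (the same one beeline uses)
can be pointed at it at no cost; a minimal sketch, with the host, port,
database, user and table all made up:

import java.sql.DriverManager

// Requires the hive-jdbc jar and its dependencies on the classpath.
Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "spark", "")
val rs = conn.createStatement().executeQuery("SELECT count(*) FROM some_table")
while (rs.next()) println(rs.getLong(1))
conn.close()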




--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/Does-SparkSql-has-official-jdbc-odbc-driver-tp16857.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: Does SparkSql has official jdbc/odbc driver ?

2016-03-24 Thread Reynold Xin
No - it is too painful to develop a jdbc/odbc driver.


On Thu, Mar 24, 2016 at 11:56 PM, sage  wrote:

> Hi all,
>Does SparkSql has official jdbc/odbc driver?
>I only found third-party's odbc/jdbc driver, like simba, and most of
> third-party's odbc/jdbc driver are not free to use.
>
>
>
>
> --
> View this message in context:
> http://apache-spark-developers-list.1001551.n3.nabble.com/Does-SparkSql-has-official-jdbc-odbc-driver-tp16857.html
> Sent from the Apache Spark Developers List mailing list archive at
> Nabble.com.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
>
>

