[Spark SS] Spark-23541 Backward Compatibility on 2.3.2

2019-09-26 Thread Ahn, Daniel
Has anyone tested whether this fix (https://issues.apache.org/jira/browse/SPARK-23541) is backward compatible with 2.3.2? I see that the fix version in JIRA is 2.4.0, but from a quick review of the pull request (https://github.com/apache/spark/pull/20698) it looks like all of the code changes are limited to spark-sql
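Not part of the original message: a minimal Scala sketch, assuming the change under discussion is the Kafka source's minPartitions option for Structured Streaming (my reading of SPARK-23541); the broker address and topic name are hypothetical, and the spark-sql-kafka package must be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object KafkaMinPartitionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-min-partitions-sketch")
      .getOrCreate()

    // Ask Spark to split the Kafka input into more partitions than the topic has,
    // which is the behaviour the referenced change enables.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // hypothetical broker
      .option("subscribe", "events")                     // hypothetical topic
      .option("minPartitions", "64")
      .load()

    stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("console")
      .start()
      .awaitTermination()
  }
}
```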

Re: backward compatibility

2017-01-10 Thread Marco Mistroni
> …Dataset APIs. SQLContext and HiveContext are kept for backward compatibility. A new, streamlined configuration API for SparkSession; a simpler, more performant accumulator API; and a new, improved Aggregator API for typed aggregation in Datasets. Thanks, Pradeep

backward compatibility

2017-01-10 Thread pradeepbill
…SQLContext and HiveContext for DataFrame and Dataset APIs; SQLContext and HiveContext are kept for backward compatibility. A new, streamlined configuration API for SparkSession; a simpler, more performant accumulator API; and a new, improved Aggregator API for typed aggregation in Datasets. Thanks, Pradeep
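Not from the thread: a minimal Scala sketch of the 2.x changes quoted above, i.e. SparkSession as the entry point with the old SQLContext still reachable for backward compatibility, the streamlined configuration API, and a typed Aggregator; the data and names are illustrative.

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
import org.apache.spark.sql.expressions.Aggregator

object SparkSessionCompatSketch {
  // Typed Aggregator that sums Long values (the Aggregator API mentioned above).
  object LongSum extends Aggregator[Long, Long, Long] {
    def zero: Long = 0L
    def reduce(buf: Long, a: Long): Long = buf + a
    def merge(b1: Long, b2: Long): Long = b1 + b2
    def finish(reduction: Long): Long = reduction
    def bufferEncoder: Encoder[Long] = Encoders.scalaLong
    def outputEncoder: Encoder[Long] = Encoders.scalaLong
  }

  def main(args: Array[String]): Unit = {
    // SparkSession is the single entry point in 2.x.
    val spark = SparkSession.builder()
      .appName("compat-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Streamlined configuration API on the session.
    spark.conf.set("spark.sql.shuffle.partitions", "8")

    // The old entry point is kept for backward compatibility.
    val sqlContext = spark.sqlContext
    println(sqlContext.getClass.getName)

    // Typed aggregation in Datasets via the Aggregator API.
    val ds = Seq(1L, 2L, 3L).toDS()
    ds.select(LongSum.toColumn.name("sum")).show()

    spark.stop()
  }
}
```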

Re: Backward compatibility with org.apache.spark.sql.api.java.Row class

2015-05-13 Thread Michael Armbrust
Sorry for missing that in the upgrade guide. As part of unifying the Java and Scala interfaces, we got rid of the Java-specific Row. You are correct in assuming that you want to use Row in org.apache.spark.sql from both Scala and Java now. On Wed, May 13, 2015 at 2:48 AM, Emerson Castañeda wrote
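Not from the thread: a minimal Scala sketch of the migration Michael describes, using the Spark 1.3-era SQLContext entry point and the unified org.apache.spark.sql.Row (the same class Java callers now import); the schema and data are illustrative.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object UnifiedRowSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("row-sketch").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    val schema = StructType(Seq(
      StructField("name", StringType, nullable = false),
      StructField("age", IntegerType, nullable = false)))

    // org.apache.spark.sql.Row replaces the removed org.apache.spark.sql.api.java.Row.
    val rows = sc.parallelize(Seq(Row("emerson", 30), Row("michael", 35)))
    val df = sqlContext.createDataFrame(rows, schema)

    // Positional accessors on the unified Row work from Scala and Java alike.
    df.collect().foreach(r => println(s"${r.getString(0)} -> ${r.getInt(1)}"))

    sc.stop()
  }
}
```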

Backward compatibility with org.apache.spark.sql.api.java.Row class

2015-05-13 Thread Emerson Castañeda
Hello everyone. I'm adopting the latest version of Apache Spark in my project, moving from *1.2.x* to *1.3.x*, and the only significant incompatibility so far is related to the *Row* class. Any idea what happened to the *org.apache.spark.sql.api.java.Row* class in Apache Spark 1.3? Migra