Has it been tested whether this fix
(https://issues.apache.org/jira/browse/SPARK-23541) is backward compatible
with 2.3.2? I see that the fix version in Jira is 2.4.0, but from a quick
review of the pull request (https://github.com/apache/spark/pull/20698), it
looks like all of the code change is limited to spark-sql.
*SparkSession: a new entry point that replaces the old SQLContext and
HiveContext for DataFrame and Dataset APIs. SQLContext and HiveContext are
kept for backward compatibility.*
A new, streamlined configuration API for SparkSession
Simpler, more performant accumulator API
A new, improved Aggregator API for typed aggregation in Datasets
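For reference, a minimal sketch of the first two items above, assuming Spark 2.x
(spark-sql) on the classpath; the app name, config value, and accumulator name
here are illustrative, not from this thread:

```java
import java.util.Arrays;

import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.util.LongAccumulator;

public class Spark2ApiSketch {
    public static void main(String[] args) {
        // SparkSession is the single entry point that subsumes SQLContext/HiveContext.
        SparkSession spark = SparkSession.builder()
                .appName("spark2-api-sketch") // illustrative name
                .master("local[*]")
                .getOrCreate();

        // Streamlined configuration API: runtime SQL config lives on spark.conf().
        spark.conf().set("spark.sql.shuffle.partitions", "8");

        // Simpler accumulator API: named accumulators created on the SparkContext.
        LongAccumulator seen = spark.sparkContext().longAccumulator("rows-seen");
        spark.createDataset(Arrays.asList(1, 2, 3), Encoders.INT())
             .foreach(x -> seen.add(1));
        System.out.println(seen.value()); // 3

        spark.stop();
    }
}
```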
thanks
Pradeep
Sorry for missing that in the upgrade guide. As part of unifying the Java
and Scala interfaces, we got rid of the Java-specific Row. You are correct
in assuming that you should now use Row in org.apache.spark.sql from both
Scala and Java.
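For anyone hitting the same migration, a minimal before/after sketch from the
Java side, assuming Spark 1.3+ (spark-sql) on the classpath; the field values
are illustrative:

```java
// Spark 1.2.x Java API:            import org.apache.spark.sql.api.java.Row;
// Spark 1.3+ (Scala and Java):     the single shared Row class below.
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;

public class RowMigrationSketch {
    public static void main(String[] args) {
        // RowFactory.create builds a generic Row; access is positional and typed.
        Row r = RowFactory.create("alice", 42);
        System.out.println(r.getString(0)); // alice
        System.out.println(r.getInt(1));    // 42
    }
}
```

From Scala the same class is constructed as `Row("alice", 42)`, so both
languages now share one Row type.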
On Wed, May 13, 2015 at 2:48 AM, Emerson Castañeda wrote:
Hello everyone,
I'm adopting the latest version of Apache Spark in my project, moving from
*1.2.x* to *1.3.x*, and the only significant incompatibility so far is
related to the *Row* class.
Any idea what happened to the *org.apache.spark.sql.api.java.Row* class in
Apache Spark 1.3?
Migra