I have a DataFrame. I register it as a temp table and run a Spark SQL query on it to get another DataFrame. When I then run groupBy on the result, it gives me this exception:

e: Lost task 1.3 in stage 21.0 (TID 579, 172.28.0.162):
java.lang.ClassCastException: java.lang.String cannot be cast to
org.apache.s

Spark version: 1.4
import com.datastax.spark.connector._
import org.apache.spark._
import org.apache.spark.sql.cassandra.CassandraSQLContext
import org.apache.spark.SparkConf
//import com.microsoft.sqlserver.jdbc.SQLServerDriver
import java.sql.Connection
import java.sql.DriverManager
import java.
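For context, here is a minimal sketch of the flow that hits the error, using the imports above. The CassandraSQLContext name cc and the keyspace/table/column names are placeholders I made up, not taken from the original post:

// Sketch only: Spark 1.4 DataFrame API, placeholder names throughout.
val conf = new SparkConf().setAppName("GroupByRepro")
val sc = new SparkContext(conf)
val cc = new CassandraSQLContext(sc)

// Load from Cassandra via the connector's SQL context (hypothetical keyspace/table).
val df = cc.sql("SELECT * FROM my_keyspace.my_table")
df.registerTempTable("events")

// Derive a second DataFrame from the temp table.
val filtered = cc.sql("SELECT user_id, amount FROM events WHERE amount > 0")

// The groupBy on the derived DataFrame is where the ClassCastException shows up.
val grouped = filtered.groupBy("user_id").count()
grouped.show()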
You are right, there are some odd things about this connector. Earlier I got an OOM exception with it just because of a bug in the connector that transferred only 64 bytes before closing the connection, and now this one.

Strangely, when I copied the data into another DataFrame, it worked.
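The "copy into another DataFrame" step was presumably something along these lines; this is my assumption, since the post does not show the exact code, and it reuses the cc and filtered placeholders from the sketch above:

// Pull the rows out of the connector-backed plan and rebuild a plain DataFrame.
val rows = filtered.rdd
val copied = cc.createDataFrame(rows, filtered.schema)

// The same groupBy then runs without the ClassCastException.
copied.groupBy("user_id").count().show()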
I'd like to integrate these changes into Spark. Can anyone please let me know the process for submitting patches/new features to Spark? Also, I understand that the current version of Spark is 2.1; however, our changes have been made and tested on Spark 1.6.2. Will this be a problem?
Thanks,
Nipun
// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
List<String> names = results.javaRDD().map(new Function<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}).collect();
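For anyone following along in Scala, the equivalent by-ordinal access looks roughly like this (a sketch, assuming results is a DataFrame as in the Java snippet above):

// Map each Row to a string using the column at ordinal 0, then collect to the driver.
val names = results.rdd.map(row => "Name: " + row.getString(0)).collect()
names.foreach(println)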
Thanks
Nipun