Hi all, following the
import com.datastax.spark.connector.SelectableColumnRef;
import com.datastax.spark.connector.japi.CassandraJavaUtil;
import org.apache.spark.sql.SchemaRDD;
import static com.datastax.spark.connector.util.JavaApiHelper.toScalaSeq;
import scala.collection.Seq;
SchemaRDD schema
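For context, a minimal sketch of reading a Cassandra table through the connector's Java API with imports like those above; the keyspace, table, connection host, and class name below are placeholder assumptions, not details from this post:

import com.datastax.spark.connector.japi.CassandraJavaUtil;
import com.datastax.spark.connector.japi.CassandraRow;
import com.datastax.spark.connector.japi.rdd.CassandraJavaRDD;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class CassandraReadSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local[*]")                                 // placeholder master
                .setAppName("cassandra-read-sketch")
                .set("spark.cassandra.connection.host", "127.0.0.1"); // placeholder host

        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read the (hypothetical) ks.user table and project only the columns we need.
        CassandraJavaRDD<CassandraRow> rows = CassandraJavaUtil
                .javaFunctions(sc)
                .cassandraTable("ks", "user")
                .select("id", "name");

        System.out.println("rows: " + rows.count());
        sc.stop();
    }
}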
Hi all,
On https://spark.apache.org/docs/latest/programming-guide.html
under the "RDD Persistence > Removing Data", it states
"Spark automatically monitors cache usage on each node and drops out old
> data partitions in a least-recently-used (LRU) fashion."
Can it be understood that the cache
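For reference, a minimal sketch of how persisting and explicitly removing an RDD look in the Java API, so a cached RDD is either evicted by that LRU policy or dropped by hand with unpersist(); the data and names here are just placeholders:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

import java.util.Arrays;

public class CacheRemovalSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "cache-removal-sketch");

        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));

        // Mark the RDD for in-memory caching; partitions are materialized by the first
        // action and may later be evicted by Spark's LRU policy under memory pressure.
        numbers.persist(StorageLevel.MEMORY_ONLY());

        long count = numbers.count();      // first action fills the cache
        System.out.println("count = " + count);

        // Drop the cached partitions explicitly instead of waiting for LRU eviction.
        numbers.unpersist();

        sc.stop();
    }
}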
Hi all,
Spark 1.2.1.
I have a Cassandra column family and am doing the following:
SchemaRDD s = cassandraSQLContext.sql("select user.id as user_id from user");
// user.id is UUID in table definition
s.registerTempTable("my_user");
s.cache(); // throws the following exception
// tried the cassandraSQLC
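One workaround sometimes used when a column type such as UUID trips up SchemaRDD caching is to read the rows through the connector's Java API and convert the UUID to a String before caching the plain RDD. A rough sketch under that assumption (the keyspace and table names are placeholders, and this is not necessarily the fix discussed in this thread):

import com.datastax.spark.connector.japi.CassandraJavaUtil;
import com.datastax.spark.connector.japi.CassandraRow;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class UuidCacheSketch {
    // Assumes an already configured JavaSparkContext pointing at the Cassandra cluster.
    static JavaRDD<String> cachedUserIds(JavaSparkContext sc) {
        return CassandraJavaUtil
                .javaFunctions(sc)
                .cassandraTable("ks", "user")                             // placeholder keyspace/table
                .select("id")
                .map((CassandraRow row) -> row.getUUID("id").toString())  // UUID -> String before caching
                .cache();                          // caching a plain RDD<String> avoids the SchemaRDD columnar cache
    }
}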