Hi Rohit,

Thank you for sharing this good news.

I have a related issue that I would like your help with.
I am using Spark 1.1.0, and my Spark application depends on
"com.tuplejump" % "calliope-core_2.10" % "1.1.0-CTP-U2".

At runtime I get the following error, which seems to indicate that the
Calliope package was compiled against Hadoop 1.x while Spark is running on Hadoop 2.x.
Could you release a new version of Calliope that is compatible with
Spark 1.1.0?
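
In case it helps others hitting the same error in the meantime, here is a possible interim workaround, assuming an sbt build. The exclusion and the Hadoop version shown are illustrative assumptions, not confirmed against Calliope's actual transitive dependency tree; the idea is simply to keep Hadoop 1.x artifacts off the classpath and pin the client to the 2.x line the cluster runs:

```scala
// build.sbt -- hypothetical sketch: drop any Hadoop 1.x artifact pulled in
// transitively by calliope-core, then pin hadoop-client to the cluster's
// Hadoop 2.x version (adjust "2.4.0" to match your deployment).
libraryDependencies ++= Seq(
  ("com.tuplejump" % "calliope-core_2.10" % "1.1.0-CTP-U2")
    .exclude("org.apache.hadoop", "hadoop-core"),
  "org.apache.hadoop" % "hadoop-client" % "2.4.0"
)
```

Whether this works depends on how the published Calliope jar itself was compiled; if its own class files were built against the Hadoop 1.x API, only a rebuilt release will fix it.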

Thanks. Here are the error details:

java.lang.IncompatibleClassChangeError:
Found interface (Hadoop 2.x) org.apache.hadoop.mapreduce.TaskAttemptContext, but class (Hadoop 1.x) was expected
    at com.tuplejump.calliope.hadoop.cql3.CqlRecordReader.initialize(CqlRecordReader.java:82)
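
This error arises because TaskAttemptContext was a class in Hadoop 1.x but became an interface in Hadoop 2.x, so bytecode compiled against one cannot link against the other. A small diagnostic sketch to check which API style is actually on your runtime classpath (the helper name is mine, and it assumes the class name is resolvable at runtime):

```scala
// Reports whether a named type is an interface (Hadoop 2.x mapreduce style)
// or a concrete class (Hadoop 1.x style) on the current classpath.
def apiStyle(className: String): String =
  if (Class.forName(className).isInterface) "interface (Hadoop 2.x style)"
  else "class (Hadoop 1.x style)"

// On a Spark driver you would run:
//   apiStyle("org.apache.hadoop.mapreduce.TaskAttemptContext")
```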


Tian


On Friday, October 3, 2014 11:15 AM, Rohit Rai <ro...@tuplejump.com> wrote:
Hi All,

A year ago we started this journey and laid the path for the Spark + Cassandra
stack. We established the groundwork and direction for Spark-Cassandra
connectors, and we have been happy to see the results.

With the release of Spark 1.1.0 and SparkSQL, it's time to take Calliope to the
logical next level, paving the way for much more advanced functionality to
come.

Yesterday we released the Calliope 1.1.0 Community Tech Preview, which brings
native SparkSQL support for Cassandra. Further details are available here.

This release showcases support for core spark-sql, HiveQL, and the HiveThriftServer.

I describe it as "native" spark-sql integration because it doesn't rely on
Cassandra's Hive connectors (like Cash or DSE), saving a level of indirection
through Hive.

It also allows us, in the future, to harness Spark's analyzer and optimizer to
work out the best execution plan, striking a balance between Cassandra's
querying restrictions and Spark's in-memory processing.

As far as we know, this is the first and only third-party datastore connector
for SparkSQL. This is a CTP release because it relies on Spark internals that
do not yet have a stabilized developer API; we will work with the Spark
community on documenting the requirements and moving towards a standard,
stable API for third-party data store integration.

On another note, we no longer require you to sign up to access the early access
code repository.

We invite all of you to try it and give us your valuable feedback.

Regards,

Rohit
Founder & CEO, Tuplejump, Inc.
____________________________
www.tuplejump.com
The Data Engineering Platform
