Hi All,
In DSE they claim to have the Cassandra File System (CFS) in place of Hadoop's
HDFS, which makes it really fault tolerant.
Is there a way to use the Cassandra File System (CFS) in place of HDFS if I
don't have DSE?
Regards,
Tarun Tiwari | Workforce Analytics-ETL | Kronos India
M: +91 9540 28 27 77 | Tel: +91 120
Hi Experts,
I am getting java.lang.NoClassDefFoundError:
com/datastax/spark/connector/mapper/ColumnMapper while running an app to load
data into a Cassandra table using the DataStax Spark connector.
Is there something else I need to import in the program, or add to the dependencies?
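A common cause of this error is that the connector jar is on the compile-time classpath but not on the runtime classpath of the driver/executors. A minimal build.sbt sketch (the version numbers below are assumptions; match them to the Spark and Cassandra versions actually in use):

```scala
// build.sbt -- minimal sketch; version numbers are assumptions
name := "load-cassandra-table"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-core"                % "1.2.1" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.2.0"
)
```

Packaging the application as a fat jar (e.g. with sbt-assembly), or passing the connector jar to spark-submit explicitly, ensures the ColumnMapper class is visible at runtime and not only at compile time.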
RUNTIME ERROR: Exception in [...]. If a class couldn't initialize its static
initializer, it could be because some other class couldn't be found, or it
could be some other non-classloader-related error.
On 2015-03-31 10:42, Tiwari, Tarun wrote:
Hi Experts,
I am getting java.lang.NoClassDefFoundError:
com/datastax/spark/connector/mapper/ColumnMapper
On 04/02/2015 11:16 PM, Tiwari, Tarun wrote:
Sorry, I was unable to reply for a couple of days.
I checked the error again and can't see any other initial cause. Here is the
full error that is coming:
Exception in thread "main" java.lang.NoClassDefFoundError:
com/datastax/spark/
Hi,
I am wondering whether the CQLSH COPY command can be run from a Spark Scala
program, and whether it would benefit from the parallelism achieved by Spark.
I am doing something like below:
val conf = new SparkConf(true).setMaster("spark://Master-Host:7077")
  .setAppName("Load Cs Table using COPY TO")
lazy val sc = new SparkContext(conf)
If you access the session object of the Java driver directly (using
withSessionDo {...}), you bypass the data-locality optimisation made by the
connector.
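By contrast, the connector's native write path does benefit from Spark's parallelism and data locality. A sketch of loading a CSV with saveToCassandra instead of CQLSH COPY (the keyspace, table, column names, input path, and hosts below are all hypothetical):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._ // adds saveToCassandra to RDDs

object LoadCsTable {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf(true)
      .setMaster("spark://Master-Host:7077")
      .setAppName("Load Cs Table via connector")
      // assumption: one of the Cassandra nodes is reachable at this host
      .set("spark.cassandra.connection.host", "Master-Host")
    val sc = new SparkContext(conf)

    // Each RDD partition is written in parallel by the connector, with
    // writes batched and routed toward the replicas that own the rows.
    sc.textFile("hdfs:///data/cs_table.csv") // hypothetical input path
      .map(_.split(","))
      .map(cols => (cols(0), cols(1), cols(2)))
      .saveToCassandra("my_keyspace", "cs_table", // hypothetical keyspace/table
                       SomeColumns("id", "name", "value"))
  }
}
```

Unlike CQLSH COPY, which is a single client-side process, this spreads the load across all Spark executors.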
On Sun, Apr 5, 2015 at 9:53 AM, Tiwari, Tarun
<mailto:tarun.tiw...@kronos.com> wrote:
Hi,
I am looking for, if the CQLSH COPY command be run using the spark scala
program.
Hi,
While setting up a cluster for our POC, we set num_tokens: 256 when we
installed Cassandra on the 1st node, while on the next 2 nodes, which were
added later, we left it blank in cassandra.yaml.
This made our cluster unbalanced, with nodetool status showing 99% load on one
server. Now ev
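One way to recover is to re-bootstrap the later nodes with vnodes enabled. A sketch, assuming the 2nd and 3rd nodes own almost no data and use the default package data directories (paths may differ in your install):

```shell
# On each unbalanced node (one at a time):
nodetool decommission   # stream its data back to the ring and leave

# Then, with Cassandra stopped on that node:
#   1. edit cassandra.yaml and set:  num_tokens: 256
#   2. clear old state so the node bootstraps fresh:
rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches

# Restart Cassandra; the node re-joins the ring with 256 vnodes.
# Verify that ownership is now balanced:
nodetool status
```

With num_tokens left blank, a node gets only a single token, which is why one server ended up owning almost the entire ring.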
We have encountered issues with very long-running nodetool repair when we ran
it node by node on a really large dataset. It even kept running for a week in
some cases.
IMO the strategy you are choosing, repairing nodes by -st and -et, is a good one
and does the same work in small increments logs o
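A sketch of subrange repair using the -st/-et flags (the token values and keyspace name below are hypothetical; in practice the subranges are computed from the ring's token ownership, e.g. from nodetool ring output):

```shell
# Repair one small slice of the token range owned by this node.
# -st/--start-token and -et/--end-token bound the repaired range,
# so each invocation does a limited, predictable amount of work.
nodetool repair -st -9223372036854775808 -et -9000000000000000000 my_keyspace

# Iterating over consecutive (start, end] subranges covers the full
# ring in small increments instead of one week-long repair run.
```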