Hi,

I have a table on Cassandra 1.2.13 with the following schema:

CREATE TABLE TEST_TABLE (
  keyCol bigint,
  col1 bigint,
  col2 bigint,
  col3 text,
  PRIMARY KEY (keyCol, col1, col2)
) WITH CLUSTERING ORDER BY (col1 DESC, col2 DESC);

I used SSTableSimpleUnsortedWriter and sstableloader to load some data,
and loaded data for a key column
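For reference, this is roughly the shape of the writer code -- a simplified
sketch rather than the exact program I ran. The keyspace name "test_ks", the
output directory and the sample values are placeholders, and the composite
comparator reflects my understanding of how a CQL3 table with clustering
columns is laid out on disk in 1.2:

import java.io.File;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.cassandra.db.marshal.AbstractType;
import org.apache.cassandra.db.marshal.CompositeType;
import org.apache.cassandra.db.marshal.LongType;
import org.apache.cassandra.db.marshal.ReversedType;
import org.apache.cassandra.db.marshal.UTF8Type;
import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;
import org.apache.cassandra.utils.ByteBufferUtil;

public class TestTableWriter
{
    public static void main(String[] args) throws Exception
    {
        // sstableloader expects the sstables under <output>/<keyspace>/<table>/
        File dir = new File("/tmp/bulk/test_ks/TEST_TABLE");
        dir.mkdirs();

        // CQL3 layout: the on-disk column names are composites of the clustering
        // values (col1, col2 -- reversed because of the DESC clustering order)
        // plus a UTF8 component holding the CQL column name ("" for the row
        // marker, "col3" for the value column).
        List<AbstractType<?>> components = Arrays.<AbstractType<?>>asList(
                ReversedType.getInstance(LongType.instance),
                ReversedType.getInstance(LongType.instance),
                UTF8Type.instance);
        CompositeType comparator = CompositeType.getInstance(components);

        SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
                dir, new Murmur3Partitioner(), "test_ks", "TEST_TABLE",
                comparator, null, 64);

        long timestamp = System.currentTimeMillis() * 1000;

        // One partition (keyCol = 1) with one clustered row (col1 = 10, col2 = 20).
        writer.newRow(LongType.instance.decompose(1L));

        // CQL row marker: composite name (col1, col2, "") with an empty value.
        writer.addColumn(name(comparator, 10L, 20L, ""),
                         ByteBufferUtil.EMPTY_BYTE_BUFFER, timestamp);
        // The col3 value itself: composite name (col1, col2, "col3").
        writer.addColumn(name(comparator, 10L, 20L, "col3"),
                         ByteBufferUtil.bytes("some text"), timestamp);

        writer.close();
    }

    // Build a composite column name from the clustering values and the CQL column name.
    private static ByteBuffer name(CompositeType type, long col1, long col2, String cqlName)
    {
        CompositeType.Builder builder = new CompositeType.Builder(type);
        builder.add(LongType.instance.decompose(col1));
        builder.add(LongType.instance.decompose(col2));
        builder.add(ByteBufferUtil.bytes(cqlName));
        return builder.build();
    }
}

The /tmp/bulk/test_ks/TEST_TABLE directory is then handed to sstableloader.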
Hi,

I am trying to load about a million records using sstableloader with
Cassandra 1.2. It streams very fast, but towards the end the streaming gets
stuck on two or three machines in the cluster, while the rest are 100% done.

Has anybody seen such a problem, and is there any tool I can use to
diagnose this?
Thanks Rahul, the article was insightful.
On Fri, Dec 13, 2013 at 12:25 AM, Rahul Menon wrote:
> Here you go
>
> http://thelastpickle.com/blog/2013/01/11/primary-keys-in-cql.html
>
>
> On Fri, Dec 13, 2013 at 7:19 AM, varun allampalli <
> vshoori.off...@gmail.com> wrote:
varun allampalli wrote:
> Thanks Aaron, I was able to generate sstables and load them using
> sstableloader. But after loading the tables, when I do a select query I get
> this; the table has only one record. Is there anything I am missing, or any
> logs I can look at?
>
> Reques
> ...from hadoop
> http://www.datastax.com/docs/1.1/cluster_architecture/hadoop_integration
>
> Cheers
>
> -
> Aaron Morton
> New Zealand
> @aaronmorton
>
> Co-Founder & Principal Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
>
Hi All,

I want to bulk insert data into Cassandra, and I was wondering about using
BulkOutputFormat in Hadoop. Is that the best way, or is using a driver and
doing batch inserts the better way? Are there any disadvantages to using
BulkOutputFormat?

Thanks for helping,
Varun
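To make the question concrete, this is roughly how I picture the job being
wired up, going by the Hadoop word-count examples that ship with Cassandra.
It is only a sketch: the class name MyBulkLoadJob, the keyspace/column
family names, the seed address and the omitted mapper/reducer are
placeholders.

import java.nio.ByteBuffer;
import java.util.List;

import org.apache.cassandra.hadoop.BulkOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MyBulkLoadJob
{
    public static void main(String[] args) throws Exception
    {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "bulk-load-into-cassandra");
        job.setJarByClass(MyBulkLoadJob.class);

        // With BulkOutputFormat the reduce (or map, in a map-only job) output
        // is <ByteBuffer rowKey, List<Mutation> mutations>, the same contract
        // as ColumnFamilyOutputFormat, but sstables are built locally and then
        // streamed to the cluster instead of going through individual writes.
        job.setOutputKeyClass(ByteBuffer.class);
        job.setOutputValueClass(List.class);
        job.setOutputFormatClass(BulkOutputFormat.class);

        // Where the generated sstables get streamed to.
        ConfigHelper.setOutputColumnFamily(job.getConfiguration(), "test_ks", "TEST_TABLE");
        ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "10.0.0.1");
        ConfigHelper.setOutputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setOutputPartitioner(job.getConfiguration(), "Murmur3Partitioner");

        // job.setMapperClass(...) / job.setReducerClass(...) omitted; they
        // would build the thrift Mutation objects for each row.

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}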