Question on TTLs and Tombstones

2012-12-28 Thread Michal Michalski

Hi,

I have a question regarding TTLs and Tombstones, with a pretty long 
scenario and a proposed solution. My first, general question is: when 
does Cassandra check whether a TTL has expired and create the Tombstone 
if needed? I know it happens during compaction, but is this the only 
situation? How about checking it on reads? How about the 
"nodetool-based" actions - scrub? repair?


The reason for my question is this scenario: I add the same amount of 
rows to a CF every month. All of them have a TTL of 6 months, so when I 
add data from July, the data from January should expire. I do NOT modify 
these data later. However, because of SizeTiered compaction and large 
SSTables, my old data do not expire in terms of disk usage - they're in 
the biggest/oldest SSTable, which is not going to be compacted any time 
soon. I want to get rid of the data I don't need, so my solution is to 
perform a user-defined compaction on the single file that contains the 
oldest data (I assume that in my use case it's the biggest/oldest 
SSTable). It works (at least the first compaction - see below), but I 
want to make sure that I'm right and that I understand why it happens ;-)


Here's how I understand it (it's December, my oldest data are from 
November, so I want to have nothing older than June):


I have a large SSTable which was compacted in August for the last time 
and it's the oldest SSTable, much larger than the rest, so I can assume 
that it contains:
(a) some Tombstones for the January data (when it was last compacted, 
January was the month being expired, so those Tombstones were created) 
which haven't been removed so far
(b) some data from February-May which are NOT marked for deletion so 
far, because when compaction last occurred they were "fresh" enough to 
stay
(c) some newer data (June+)

So I compact it. Tombstones (a) are removed. Expired data (b) are marked 
for deletion by creating Tombstones for them. The rest of the data is 
untouched. This reduces the file size by ~10-20%. This is what I checked, 
and it worked.

Then I wait 10 days (gc_grace) and compact it once again. This should 
remove all the Tombstones created during the previous compaction, so the 
file size should be reduced significantly (say, to ~20% of the initial 
size). This is what I'm waiting for.
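To state my assumption precisely, here is a toy sketch of the two-pass 
cleanup as I understand it (plain Python, just a simulation of my mental 
model - not Cassandra's actual code, and the timings are made up):

```python
# Toy model of the two compactions described above (NOT Cassandra's
# actual code): pass 1 turns expired TTL cells into Tombstones, and a
# later pass purges Tombstones once they are older than gc_grace.

DAY = 86400  # seconds

def compact(cells, now, gc_grace=10 * DAY):
    """One compaction pass over a single SSTable's cells."""
    out = []
    for cell in cells:
        if cell["type"] == "tombstone":
            # Keep only tombstones still within gc_grace.
            if now - cell["deleted_at"] < gc_grace:
                out.append(cell)
        elif cell.get("ttl") and now >= cell["written_at"] + cell["ttl"]:
            # An expired TTL cell becomes a tombstone at this compaction.
            out.append({"type": "tombstone", "deleted_at": now})
        else:
            out.append(cell)
    return out

# "January" data with a ~6 month TTL, first compacted in "December".
cells = [{"type": "data", "written_at": 0, "ttl": 180 * DAY}]

pass1 = compact(cells, now=330 * DAY)         # data -> tombstone
pass2 = compact(pass1, now=(330 + 11) * DAY)  # tombstone purged
print(len(pass1), len(pass2))                 # 1 0
```

If this model is right, the second compaction is the one that actually 
frees the disk space, which is exactly what I'm waiting for.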

Am I right?

How about repair? As compaction is a "per-node" task, I guess I should 
run repair between these two compactions to make sure that the 
Tombstones have been transferred to the other replicas?


Or maybe - returning to my first question - Cassandra checks TTLs much 
more often (like on every single read?), so the Tombstones are "spread" 
among many SSTables and won't be removed efficiently by compacting only 
the oldest SSTable? Or maybe jobs like scrub check TTLs and create 
Tombstones too? Or repair?


I know that I could check some of these things with newer nodetool 
features (like checking the % of Tombstones in an SSTable), but I run 
1.1.1 and they're unavailable here. I know that 1.2 (or 1.1.7?) handles 
Tombstones in a better way, but - still - that doesn't help my case 
unless I upgrade.


Kind regards,
MichaƂ


Re: Cassandra read throughput with little/no caching.

2012-12-28 Thread Yiming Sun
James, sorry I was out for a few days.  Yes, if the row cache doesn't give
a good hit rate then it should be disabled.

Is there any chance to increase the VM specs?  I couldn't pinpoint 
exactly which message mentioned it, but the VMs are 2GB mem and 2 
cores, which is a bit meager.  Also, is it possible to batch the writes
together?

-- Y.


On Mon, Dec 24, 2012 at 7:28 AM, James Masson wrote:

>
>
> On 21/12/12 17:56, Yiming Sun wrote:
>
>> James, you could experiment with Row cache, with off-heap JNA cache, and
>> see if it helps.  My own experience with row cache was not good, and the
>> OS cache seemed to be most useful, but in my case, our data space was
>> big, over 10TB.  Your sequential access pattern certainly doesn't play
>> well with LRU, but given the small data space you have, you may be able
>> to fit the data from one column family entirely into the row cache.
>>
>>
>>
> I've done some experimenting today with JNA/row cache. Extra 500Mb of
> heap, 300Mb row cache, latest JNA, set caching=ALL in the schema for all
> column families in this keyspace.
>
> Getting average 5% row cache hit rate - no increase in cassandra
> throughput, and increased disk read I/O, basically because I've sacrificed
> Linux disk cache for the cassandra row-cache.
>
> Load average was 4 (2-CPU boxes) for the duration of the cycle, where it
> was about 2 before - basically because of the disk I/O, I think.
>
> So, I think I'll disable row caching again...
>
> James M
>
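An aside on the LRU point raised above: the effect is easy to reproduce 
with a toy cache simulation (plain Python, no Cassandra involved, sizes 
made up). A sequential scan over a data set larger than the cache evicts 
every row before it is ever read again, so the hit rate stays at zero:

```python
from collections import OrderedDict

# Toy LRU row cache: repeatedly scanning more rows than the cache holds,
# in sequential order, never hits the cache.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.data) >= self.capacity:
                self.data.popitem(last=False)  # evict least recently used
            self.data[key] = "row-%d" % key    # simulate loading the row

cache = LRUCache(capacity=100)
for _ in range(2):            # two full sequential scans
    for key in range(1000):   # data set 10x larger than the cache
        cache.get(key)

print(cache.hits)  # 0
```

Which is why, with only a ~5% hit rate in practice, trading OS page 
cache for row cache ends up a net loss, as James observed.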


cassandra 1.2.0-rc2 on osx

2012-12-28 Thread cscetbon.ext
Hi,

FYI, I've added the devel version to the cassandra formula of the homebrew 
package installer, and updated the release version to 1.1.8.
You can now use brew install cassandra to install version 1.1.8 and brew 
install --devel cassandra to install version 1.2.0-rc2

enjoy !
--
Cyril SCETBON

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
France Telecom - Orange decline toute responsabilite si ce message a ete 
altere, deforme ou falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, France Telecom - Orange is not liable for messages 
that have been modified, changed or falsified.
Thank you.



tpstats ReadStage when using secondary index

2012-12-28 Thread cscetbon.ext
Hi,

Is it normal that the ReadStage completed counter is incremented by 2 
when a CQL request uses a secondary index?

thanks
--
Cyril SCETBON





Re: CQL3 Compound Primary Keys - Do I have the right idea?

2012-12-28 Thread Pierre-Yves Ritschard
OK, so great news: it is now possible to do this in CQL with the following
syntax, as per CASSANDRA-4179

CREATE TABLE foo (
  host text,
  service text,
  metric int,
  PRIMARY KEY ((host,service)));

(note the double parentheses).

This will effectively create a CF whose row key is a composite type.
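For contrast, here is a toy illustration of what the extra parentheses 
change (plain Python with made-up host/service values, only modeling the 
row grouping): with PRIMARY KEY (host, service), host alone is the 
partition key and service is a clustering column, while with 
PRIMARY KEY ((host, service)) the pair is the partition key.

```python
from collections import defaultdict

# Made-up rows for the foo table above.
rows = [
    {"host": "web1", "service": "http", "metric": 1},
    {"host": "web1", "service": "smtp", "metric": 2},
    {"host": "web2", "service": "http", "metric": 3},
]

def group_by_partition_key(rows, key_cols):
    """Group rows by the tuple of their partition-key columns."""
    partitions = defaultdict(list)
    for row in rows:
        partitions[tuple(row[c] for c in key_cols)].append(row)
    return dict(partitions)

# PRIMARY KEY (host, service): partition key is host only -> 2 partitions.
print(len(group_by_partition_key(rows, ["host"])))
# PRIMARY KEY ((host, service)): composite partition key -> 3 partitions.
print(len(group_by_partition_key(rows, ["host", "service"])))
```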

Thanks for getting this in 1.2 !

  - pyr


On Mon, Dec 24, 2012 at 2:17 PM, Manu Zhang  wrote:

> CREATE TABLE seen_ships (
>>day text,
>>time_seen timestamp,
>>shipname text,
>>PRIMARY KEY (day, time_seen)
>>);
>
>
> In CQL3, we could select all the columns with the same 'day' and same
> 'time_seen'.
>
> Is it possible with cassandra-cli?
>
>
> On Mon, Dec 24, 2012 at 6:54 AM, Tristan Seligmann <
> mithra...@mithrandi.net> wrote:
>
>> On Sun, Dec 23, 2012 at 9:25 PM, aaron morton 
>> wrote:
>> > In this example:
>> >
>> >  CREATE TABLE seen_ships (
>> >day text,
>> >time_seen timestamp,
>> >shipname text,
>> >PRIMARY KEY (day, time_seen)
>> >);
>> > http://www.datastax.com/dev/blog/whats-new-in-cql-3-0
>> >
>> > * day is the internal row key
>> > * there is only ONE internal column / cell, the shipname
>> > * the internal column / cell "shipname" is a composite of the *value* of
>> > time_seen. e.g. 
>>
>> Alternatively, if you want a composite partition key eg.
>> , this functionality is implemented in
>> https://issues.apache.org/jira/browse/CASSANDRA-4179 and I believe is
>> available in Cassandra 1.2 as well[1].
>>
>> [1] I recently asked about this on SO:
>>
>> http://stackoverflow.com/questions/13938288/can-a-cassandra-cql3-column-family-have-a-composite-partition-key
>> --
>> mithrandi, i Ainil en-Balandor, a faer Ambar
>>
>
>


Re: Changing rpc_port in cassandra.yaml has no effect

2012-12-28 Thread Andras Szerdahelyi

It's not clear to me how you are starting the daemon. You mention Eclipse - do 
you have Cassandra embedded in an Eclipse project? cassandra.yaml might not be 
read correctly. Make sure it's on the class path; I have it in 
src/main/resources and it gets picked up from there just fine (although I only 
ever use it in my unit tests):

CassandraDaemon cassandra = new CassandraDaemon();
cassandra.init(null);
cassandra.start();

It might also be worthwhile to inspect log output from org.apache.cassandra. 
Put this in your log4j.properties

log4j.logger.org.apache.cassandra = TRACE, cassandra
# Cassandra log
log4j.appender.cassandra = org.apache.log4j.RollingFileAppender
log4j.appender.cassandra.File = target/log/cassandra.log
log4j.appender.cassandra.MaxFileSize = 10MB
log4j.appender.cassandra.MaxBackupIndex = 5
log4j.appender.cassandra.layout = org.apache.log4j.PatternLayout
log4j.appender.cassandra.layout.ConversionPattern = %d %p %c - %m%n


Andras Szerdahelyi
Solutions Architect, IgnitionOne | 1831 Diegem E.Mommaertslaan 20A
M: +32 493 05 50 88 | Skype: sandrew84






On 25 Dec 2012, at 22:56, Bob Futrelle <bob.futre...@gmail.com> wrote:

I have been using cqlsh (and --cql3) successfully for a few weeks.
But yesterday it stopped working, with the all too familiar,

 "Connection error: Could not connect to localhost:9160"

Still got the same message, 9160, after I changed the rpc_port in the yaml file 
to 9161 or 9159.

I recently installed the Maven plugin in Eclipse, so I thought that might be 
the problem. (My prototyping/research code is so simple that I've used 
Eclipse's builds with no problems for years.  Maven would be overkill.)

I reread the various Cassandra docs, reinstalled Cassandra 
(apache-cassandra-1.1.7), recreated the var/lib directories, rebooted my 
machine, and restarted Eclipse - that's about all I could think of. Eclipse 
was going to be the next stage for me after various cqlsh experiments, so it 
shouldn't have any effect on the CLI. I'm no expert on ports and the like; I 
don't do web servers/clients. My expertise is in the Java/Swing systems I 
build for NLP research. Now that I'm retired and working from home, I can't 
just walk down the hall and talk with our systems experts.

  - Bob Futrelle



keyspace not copied to new node

2012-12-28 Thread Cory Mintz
I am trying to add a second node to a cluster that is currently a single
node, with a single key space on it.

* Cluster names are in sync
* They both have the same seed node (the first one)
* Both have the same snitch (Ec2Snitch)
* I am not filling in an initial_token on either

When I start the new node, it joins the cluster and gets 50% of the 
ring. Everything looks good, except that the existing keyspace is never 
copied over and the new node starts recording exceptions. nodetool also 
shows that the Load has not been split.

Any help would be appreciated.

nodetool ring after join:
10.125.0.6  us-east 1a  Up Normal  6.7 KB
50.02%  58390160951331053490088942307836377318
10.125.0.17 us-east 1a  Up Normal  8.73 MB
49.98%  143434328038307359414303841008091364408

Log file for the existing node:
 INFO [GossipStage:1] 2012-12-28 18:40:23,945 Gossiper.java (line 848) Node
/10.125.0.6 has restarted, now UP
 INFO [GossipStage:1] 2012-12-28 18:40:23,946 Gossiper.java (line 816)
InetAddress /10.125.0.6 is now UP
 INFO [GossipStage:1] 2012-12-28 18:40:40,593 Gossiper.java (line 830)
InetAddress /10.125.0.6 is now dead.
 INFO [GossipTasks:1] 2012-12-28 18:41:09,292 Gossiper.java (line 644)
FatClient /10.125.0.6 has been silent for 3ms, removing from gossip
 INFO [GossipStage:1] 2012-12-28 18:45:18,219 Gossiper.java (line 850) Node
/10.125.0.6 is now part of the cluster
 INFO [GossipStage:1] 2012-12-28 18:45:18,220 Gossiper.java (line 816)
InetAddress /10.125.0.6 is now UP
 INFO [GossipStage:1] 2012-12-28 18:46:17,471 ColumnFamilyStore.java (line
659) Enqueuing flush of Memtable-LocationInfo@782297898(35/43
serialized/live bytes, 1 ops)
 INFO [FlushWriter:6] 2012-12-28 18:46:17,471 Memtable.java (line 264)
Writing Memtable-LocationInfo@782297898(35/43 serialized/live bytes, 1 ops)
 INFO [FlushWriter:6] 2012-12-28 18:46:17,477 Memtable.java (line 305)
Completed flushing
/var/lib/cassandra/data/system/LocationInfo/system-LocationInfo-hf-40-Data.db
(89 bytes) for commitlog position ReplayPosition(segmentId=1356717215209,
position=81403)
 INFO [CompactionExecutor:7] 2012-12-28 18:46:17,478 CompactionTask.java
(line 109) Compacting
[SSTableReader(path='/var/lib/cassandra/data/system/LocationInfo/system-LocationInfo-hf-39-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/LocationInfo/system-LocationInfo-hf-40-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/LocationInfo/system-LocationInfo-hf-38-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/LocationInfo/system-LocationInfo-hf-37-Data.db')]
 INFO [CompactionExecutor:7] 2012-12-28 18:46:17,554 CompactionTask.java
(line 221) Compacted to
[/var/lib/cassandra/data/system/LocationInfo/system-LocationInfo-hf-41-Data.db,].
702 to 435 (~61% of original) bytes for 4 keys at 0.005531MB/s.  Time: 75ms.
 INFO [MemoryMeter:1] 2012-12-28 19:00:28,396 Memtable.java (line 213)
CFS(Keyspace='system', ColumnFamily='HintsColumnFamily') liveRatio is
3.167859982557678 (just-counted was 3.167505874454515).  calculation took
24ms for 54 columns
 INFO [GossipStage:1] 2012-12-28 19:01:20,035 StorageService.java (line
1287) Removing token 58390160951331053490088942307836377318 for /10.125.0.6
 INFO [OptionalTasks:1] 2012-12-28 19:01:20,035 HintedHandOffManager.java
(line 180) Deleting any stored hints for /10.125.0.6
 INFO [OptionalTasks:1] 2012-12-28 19:01:20,037 ColumnFamilyStore.java
(line 659) Enqueuing flush of Memtable-HintsColumnFamily@614495718(23832/94370
serialized/live bytes, 73 ops)
 INFO [GossipStage:1] 2012-12-28 19:01:20,037 ColumnFamilyStore.java (line
659) Enqueuing flush of Memtable-LocationInfo@2119262016(35/43
serialized/live bytes, 1 ops)
 INFO [FlushWriter:7] 2012-12-28 19:01:20,038 Memtable.java (line 264)
Writing Memtable-HintsColumnFamily@614495718(23832/94370 serialized/live
bytes, 73 ops)
 INFO [FlushWriter:7] 2012-12-28 19:01:20,047 Memtable.java (line 305)
Completed flushing
/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hf-5-Data.db
(66 bytes) for commitlog position ReplayPosition(segmentId=1356717215209,
position=114728)
 INFO [FlushWriter:7] 2012-12-28 19:01:20,048 Memtable.java (line 264)
Writing Memtable-LocationInfo@2119262016(35/43 serialized/live bytes, 1 ops)
 INFO [CompactionExecutor:8] 2012-12-28 19:01:20,051 CompactionTask.java
(line 109) Compacting
[SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hf-5-Data.db'),
SSTableReader(path='/var/lib/cassandra/data/system/HintsColumnFamily/system-HintsColumnFamily-hf-4-Data.db')]
 INFO [FlushWriter:7] 2012-12-28 19:01:20,052 Memtable.java (line 305)
Completed flushing
/var/lib/cassandra/data/system/LocationInfo/system-LocationInfo-hf-42-Data.db
(89 bytes) for commitlog position ReplayPosition(segmentId=1356717215209,
position=114728)
 INFO [CompactionExecutor:8] 2012-12-28 19:01:20,059 CompactionTask.java
(line 221) Compacted to
[/var

Re: Changing rpc_port in cassandra.yaml has no effect

2012-12-28 Thread Bob Futrelle
Thanks for the detailed help.
I'll follow up.

   - Bob Futrelle


On Fri, Dec 28, 2012 at 1:51 PM, Andras Szerdahelyi 
<andras.szerdahe...@ignitionone.com> wrote:

>
>  It's not clear to me how you are starting the daemon. You mention Eclipse
> - do you have Cassandra embedded in an Eclipse project? cassandra.yaml
> might not be read correctly. Make sure it's on the class path; I have it in
> src/main/resources and it gets picked up from there just fine (although I
> only ever use it in my unit tests):
>
>  CassandraDaemon cassandra = new CassandraDaemon();
> cassandra.init(null);
> cassandra.start();
>
>  It might also be worthwhile to inspect log output from
> org.apache.cassandra. Put this in your log4j.properties
>
>  log4j.logger.org.apache.cassandra = TRACE, cassandra
>  # Cassandra log
>  log4j.appender.cassandra = org.apache.log4j.RollingFileAppender
>  log4j.appender.cassandra.File = target/log/cassandra.log
> log4j.appender.cassandra.MaxFileSize = 10MB
> log4j.appender.cassandra.MaxBackupIndex = 5
> log4j.appender.cassandra.layout = org.apache.log4j.PatternLayout
> log4j.appender.cassandra.layout.ConversionPattern = %d %p %c - %m%n
>
>
> Andras Szerdahelyi
> Solutions Architect, IgnitionOne | 1831 Diegem E.Mommaertslaan 20A
> M: +32 493 05 50 88 | Skype: sandrew84
>
>
>
>
>
>  On 25 Dec 2012, at 22:56, Bob Futrelle  wrote:
>
>  I have been using cqlsh (and --cql3) successfully for a few weeks.
> But yesterday it stopped working, with the all too familiar,
>
>   "Connection error: Could not connect to localhost:9160"
>
>  Still got the same message, 9160, after I changed the rpc_port in the
> yaml file to 9161 or 9159.
>
>  I recently installed the Maven plugin in Eclipse, so I thought that
> might be the problem. (My prototyping/research code is so simple that I've
> used Eclipse's builds with no problems for years.  Maven would be overkill.)
>
>  I reread the various Cassandra docs, reinstalled Cassandra
> (apache-cassandra-1.1.7) recreated the var/lib directories, rebooted my
> machine, restarted Eclipse, that's about all I could think of.  Eclipse was
> going to be the next stage for me after various cqlsh experiments, so it
> shouldn't have any effect on the CLI. I'm no expert on ports and the
> like. I don't do web servers/clients.  My expertise is in the Java/Swing
> systems I build for NLP research.  Now that I'm retired and working from
> home, I can't just walk down the hall and talk with our systems experts.
>
>- Bob Futrelle
>
>
>
>

Fixing the schema for a Column Family

2012-12-28 Thread Charles Lamanna
Hello folks --

I just ran into this nasty Cassandra issue:
https://issues.apache.org/jira/browse/CASSANDRA-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel

As a result, one of my column families had its schema reset. For example,
when I created the column family, this was its schema:

CREATE TABLE values (   rk text,   ck text,   cnt counter,   sum counter,
PRIMARY KEY (rk, ck) );

And now, its schema has become:

cqlsh:metrics> describe COLUMNFAMILy values;
CREATE TABLE fifteenminutes (  rk text PRIMARY KEY )


*Is there a way to restore the schema? (all my client code expects the
original schema?)*

I can add back the cnt / sum columns by updating the column_metadata in the
Cassandra-Cli -- however, I cannot find a way to fix the compound primary
key. This is the command that restored everything but the compound primary
key:

alter column family values
  with column_type = 'Standard'
  and comparator =
'CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)'
  and default_validation_class = 'CounterColumnType'
  and key_validation_class = 'UTF8Type'
  and read_repair_chance = 0.0
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and compaction_strategy =
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and compression_options = {'sstable_compression' :
'org.apache.cassandra.io.compress.SnappyCompressor'}
  and column_metadata=[
{ column_name:'cnt', validation_class:CounterColumnType },
{ column_name:'sum', validation_class:CounterColumnType }
];
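(For context on why the compound key is hard to restore from the CLI: 
with that CompositeType comparator, CQL3 stores each non-key column 
under a composite internal column name built from the clustering value 
and the CQL column name. A toy sketch, plain Python with made-up values:)

```python
# Toy sketch of how CQL3 lays out one row of
#   PRIMARY KEY (rk, ck)  with counters cnt and sum
# on the storage engine. Values are made up for illustration.
row = {"rk": "host1", "ck": "2012-12-28", "cnt": 5, "sum": 42}

internal_row_key = row["rk"]  # rk is the internal row key
internal_columns = {
    # CompositeType(UTF8Type, UTF8Type): (clustering value, CQL column name)
    (row["ck"], name): row[name]
    for name in ("cnt", "sum")
}

print(internal_row_key)          # host1
print(sorted(internal_columns))  # [('2012-12-28', 'cnt'), ('2012-12-28', 'sum')]
```

The column_metadata above restores the validators, but the mapping from 
the first comparator component back to a named key column ("ck") lives 
in the CQL3 column aliases, which is what the describe errors below are 
complaining about.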


Also, FWIW, if I describe the schema of my busted CF, I see the following
errors:

Unexpected table structure; may not translate correctly to CQL. expected
composite key CF to have column aliases, but found none
Unexpected table structure; may not translate correctly to CQL. expected
[u'rk'] length to be 2, but it's 1.


Thanks!
Charles


Re: Cassandra API for Java

2012-12-28 Thread Michael Kjellman
Hector is an abstraction over raw Thrift. I prefer 
https://github.com/Netflix/astyanax

If you are just starting and can wait for the official 1.2 release (obviously 
in production you can use trunk or the rc versions) then take a look at 
https://github.com/datastax/java-driver

Best,
mike

From: Baskar Sikkayan <techba...@gmail.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Friday, December 28, 2012 7:24 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Cassandra API for Java

Hi,
  I am new to Apache Cassandra.
Could you please suggest a good Java API (Hector, Thrift, or ...) for 
Cassandra?

Thanks,
Baskar.S
+91 97394 76008



Join Barracuda Networks in the fight against hunger.
To learn how you can help in your community, please visit: 
http://on.fb.me/UAdL4f





Re: Fixing the schema for a Column Family

2012-12-28 Thread Michael Kjellman
I've found that if you drop a column family, the data is still 
there/snapshotted. If you recreate the column family with the expected 
schema, the data will repopulate the CF.

From: Charles Lamanna <char...@metricshub.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Friday, December 28, 2012 5:22 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Fixing the schema for a Column Family

Hello folks --

I just ran into this nasty Cassandra issue: 
https://issues.apache.org/jira/browse/CASSANDRA-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel

As a result, one of my column families had its schema reset. For example, when 
I created the column family, this was its schema:
CREATE TABLE values (   rk text,   ck text,   cnt counter,   sum counter,   
PRIMARY KEY (rk, ck) );

And now, its schema has become:
cqlsh:metrics> describe COLUMNFAMILy values;
CREATE TABLE fifteenminutes (  rk text PRIMARY KEY )

Is there a way to restore the schema? (all my client code expects the original 
schema?)

I can add back the cnt / sum columns by updating the column_metadata in the 
Cassandra-Cli -- however, I cannot find a way to fix the compound primary key. 
This is the command that restored everything but the compound primary key:

alter column family values
  with column_type = 'Standard'
  and comparator = 
'CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)'
  and default_validation_class = 'CounterColumnType'
  and key_validation_class = 'UTF8Type'
  and read_repair_chance = 0.0
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'}
  and column_metadata=[
{ column_name:'cnt', validation_class:CounterColumnType },
{ column_name:'sum', validation_class:CounterColumnType }
];

Also, FWIW, if I describe the schema of my busted CF, I see the following 
errors:
Unexpected table structure; may not translate correctly to CQL. expected 
composite key CF to have column aliases, but found none
Unexpected table structure; may not translate correctly to CQL. expected 
[u'rk'] length to be 2, but it's 1.

Thanks!
Charles






Re: Cassandra API for Java

2012-12-28 Thread Michael Kjellman
This was asked as recently as a month and a day ago, btw:

http://grokbase.com/t/cassandra/user/12bve4d8e8/java-high-level-client - in 
case you weren't subscribed to the group to see those messages, that thread 
has a longer discussion.

From: Baskar Sikkayan <techba...@gmail.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Friday, December 28, 2012 7:24 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Cassandra API for Java

Hi,
  I am new to Apache Cassandra.
Could you please suggest a good Java API (Hector, Thrift, or ...) for 
Cassandra?

Thanks,
Baskar.S
+91 97394 76008


