Re: how many rows can one partition key hold?

2015-02-27 Thread Jason Wee
You might want to read here: http://wiki.apache.org/cassandra/CassandraLimitations jason On Fri, Feb 27, 2015 at 2:44 PM, wateray wrote: > Hi all, > My team is using Cassandra as our database. We have one question as below. > As we know, the rows with the same partition key will be stored in the

Re: how many rows can one partition key hold?

2015-02-27 Thread Jens Rantil
Also, note that repairs will be slower for larger rows and AFAIK also require slightly more memory. Also, to avoid many tombstones it could be worth considering bucketing your partitions by time. Cheers, Jens On Fri, Feb 27, 2015 at 7:44 AM, wateray wrote: > Hi all, > My team is using Cassandra
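
Jens's time-bucketing suggestion can be sketched as a CQL schema in which a time bucket becomes part of the composite partition key, bounding how large any one partition can grow. The table and column names below are illustrative, not from the thread:

```sql
-- Hypothetical time-series table bucketed by day: all readings for one
-- sensor on one day share a partition, so no partition grows unbounded.
CREATE TABLE sensor_readings (
    sensor_id    text,
    day          text,       -- bucket component, e.g. '2015-02-27'
    reading_time timestamp,
    value        double,
    PRIMARY KEY ((sensor_id, day), reading_time)
);

-- Reading a time range then means querying one bucket per day involved:
SELECT * FROM sensor_readings
 WHERE sensor_id = 's-1' AND day = '2015-02-27';
```

Deleting a whole expired bucket is then a single-partition delete, which is what helps keep tombstones manageable.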

Re: how many rows can one partition key hold?

2015-02-27 Thread Marcelo Valle (BLOOMBERG/ LONDON)
> When one partition's data is extremely large, will writes/reads slow down? This is actually a good question. If a partition has nearly 2 billion rows, will writes or reads get too slow? My understanding is it shouldn't, as data is indexed inside a partition and when you read or write you are doing a

how to make unique columns in Cassandra

2015-02-27 Thread ROBIN SRIVASTAVA
I want to make a unique constraint in Cassandra, i.e. I want all the values in my column family to be unique. Ex: name-rahul phone-123 address-abc. Now I want that no values equal to rahul, 123 and abc get inserted again. On searching on DataStax I found that I can achieve it b
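
One way to approximate a unique constraint in Cassandra 2.0+ is a lightweight transaction: `INSERT ... IF NOT EXISTS` enforces uniqueness, but only on the primary key, and at the cost of a Paxos round trip per insert. A hedged sketch using the values from the question (the table name is an assumption):

```sql
-- Uniqueness is only enforceable on the primary key, so 'name' must be it.
CREATE TABLE users (
    name    text PRIMARY KEY,
    phone   text,
    address text
);

-- A second insert of the same name is rejected: the result row comes back
-- with [applied] = false alongside the existing values.
INSERT INTO users (name, phone, address)
VALUES ('rahul', '123', 'abc')
IF NOT EXISTS;
```

Note this does not make phone or address unique; that would require one extra lookup table per uniquely-constrained column, each written with its own `IF NOT EXISTS`.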

Re: how many rows can one partition key hold?

2015-02-27 Thread Jack Krupansky
As a general, rough guideline, I would suggest that a partition be kept down to thousands or tens of thousands of rows, probably not more than 100K rows per partition, and physical size kept to tens of thousands to hundreds of thousands or maybe a few megabytes or ten megabytes maximum per partitio

Cassandra rack awareness

2015-02-27 Thread Amlan Roy
Hi, I am new to Cassandra and trying to set up a Cassandra 2.0 cluster using 4 nodes, 2 each in 2 different racks. All are in the same data centre. This is what I see in the documentation: To use racks correctly: Use the same number of nodes in each rack. Use one rack and place the nodes in differ
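
For the two-racks-in-one-DC layout described above, a common setup (an assumption here, not something stated in the thread) is GossipingPropertyFileSnitch, where each node declares its own data centre and rack in a properties file:

```
# conf/cassandra-rackdc.properties on each node, paired with
# endpoint_snitch: GossipingPropertyFileSnitch in cassandra.yaml.
# DC1/RACK1 are placeholder names; use rack=RACK2 on the nodes
# in the other rack.
dc=DC1
rack=RACK1
```

With NetworkTopologyStrategy and equal rack sizes, replicas are then spread across both racks.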

Delete columns

2015-02-27 Thread Benyi Wang
In C* 2.1.2, is there a way you can delete without specifying the row key? create table a_table ( guid text, key1 text, key2 text, data int, primary key (guid, key1, key2) ); delete from a_table where key1='' and key2=''; I'm trying to avoid doing it like this: * query the table to get guids (32 b
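
In CQL a DELETE must be restricted by the full partition key, so the statement in the question cannot work as written: `key1`/`key2` alone only identify rows once a `guid` is known. A sketch of the forced pattern (the guid value is a placeholder):

```sql
-- Works: the partition key (guid) is specified.
DELETE FROM a_table
 WHERE guid = 'some-guid' AND key1 = 'k1' AND key2 = 'k2';

-- Does not work: no partition key restriction.
-- DELETE FROM a_table WHERE key1 = 'k1' AND key2 = 'k2';
```

Hence the select-the-guids-first workaround the question is trying to avoid; short of maintaining a second table keyed by (key1, key2), there is no single-statement alternative.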

Caching the PreparedStatement (Java driver)

2015-02-27 Thread Ajay
Hi, We are building REST APIs for Cassandra using the Cassandra Java Driver. As per the guidelines below from the documentation, we are caching the Cluster instance (per cluster) and the Session instance (per keyspace), as they are thread-safe. http://www.datastax.com/documentation/develop

Re: Cassandra rack awareness

2015-02-27 Thread Robert Coli
On Fri, Feb 27, 2015 at 7:30 AM, Amlan Roy wrote: > I am new to Cassandra and trying to setup a Cassandra 2.0 cluster using 4 > nodes, 2 each in 2 different racks. All are in same data centre. This is > what I see in the documentation: > > To use racks correctly: > Use the same number of nodes in

Less frequent flushing with LCS

2015-02-27 Thread Dan Kinder
Hi all, We have a table in Cassandra where we frequently overwrite recent inserts. Compaction does a fine job with this but ultimately larger memtables would reduce compactions. The question is: can we make Cassandra use larger memtables and flush less frequently? What currently triggers the flus
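
The flush triggers Dan is asking about are mostly governed by cassandra.yaml. A hedged sketch of the relevant knobs (values are illustrative; exact option names depend on the 2.x minor version):

```yaml
# Cassandra 2.0 caps all memtables with one setting; a bigger cap means
# less frequent flushes. (2.1 split this into memtable_heap_space_in_mb
# and memtable_offheap_space_in_mb.)
memtable_total_space_in_mb: 4096

# A flush is also forced when commit log segments still referencing a
# memtable need to be recycled, so this cap matters as well.
commitlog_total_space_in_mb: 8192
```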

Re: Less frequent flushing with LCS

2015-02-27 Thread Robert Coli
On Fri, Feb 27, 2015 at 2:01 PM, Dan Kinder wrote: > Theoretically sstable_size_in_mb could be causing it to flush (it's at the > default 160MB)... though we are flushing well before we hit 160MB. I have > not tried changing this but we don't necessarily want all the sstables to > be large anyway
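
For reference, `sstable_size_in_mb` is a per-table LCS compaction option, not a flush trigger, which is consistent with seeing flushes well below 160 MB. It can be raised like this (keyspace/table names are placeholders):

```sql
ALTER TABLE my_ks.my_table
WITH compaction = { 'class': 'LeveledCompactionStrategy',
                    'sstable_size_in_mb': 256 };
```

Raising it trades fewer, larger sstables against more work per compaction of each one.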

Error on nodetool cleanup

2015-02-27 Thread Gianluca Borello
Hello, I have a cluster of four nodes running 2.0.12. I added one more node and then went on with the cleanup procedure on the other four nodes, but I get this error (the same error on each node): INFO [CompactionExecutor:10] 2015-02-28 01:55:15,097 CompactionManager.java (line 619) Cleaned up t

How to extract all the user id from a single table in Cassandra?

2015-02-27 Thread Check Peck
I have a Cassandra table like this - create table user_record (user_id text, record_name text, record_value blob, primary key (user_id, record_name)); What is the best way to extract all the user_id from this table? As of now, I cannot change my data model to do this exercise so I need to fin
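
Since `user_id` is the partition key of the table above, CQL (Cassandra 2.0+) can list it directly without reading every clustering row:

```sql
-- Returns each partition key once, regardless of how many
-- (record_name, record_value) rows it contains.
SELECT DISTINCT user_id FROM user_record;
```

On large tables, the driver's automatic paging (or manual token-range scans) keeps this from timing out.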

how to make unique constraints in Cassandra

2015-02-27 Thread ROBIN SRIVASTAVA
I want to make a unique constraint in Cassandra, i.e. I want all the values in my column family to be unique. Example: name-rahul, phone-123, address-abc. Now I want that no values equal to rahul, 123 and abc get inserted again. On searching on DataStax I found that I can ac

Re: Error on nodetool cleanup

2015-02-27 Thread Jeff Wehrwein
We had the exact same problem, and found this bug: https://issues.apache.org/jira/browse/CASSANDRA-8716. It's fixed in 2.0.13 (unreleased), but we haven't found a workaround for the interim. Please share if you find one! Thanks, Jeff On Fri, Feb 27, 2015 at 6:01 PM, Gianluca Borello wrote: >