Select distinct keys from column family; hits a timeout exception.
pk1, pk2,…pkn are 800K in total.
From: Mohammed Guller [mailto:moham...@glassbeam.com]
Sent: Friday, January 23, 2015 3:24 PM
To: user@cassandra.apache.org
Subject: RE: Retrieving all row keys of a CF
No wonder, the client is timing out.
In each partition, CQL rows average 200K; the max is 3M.
800K is the number of Cassandra partitions.
From: Mohammed Guller [mailto:moham...@glassbeam.com]
Sent: Thursday, January 22, 2015 7:43 PM
To: user@cassandra.apache.org
Subject: RE: Retrieving all row keys of a CF
What is the average and max number of CQL rows in each partition?
You will need to specify all the composite columns if you are using a composite
partition key.
Mohammed
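For illustration, a minimal sketch of that rule with the DataStax Java driver (the keyspace, table, and column names here are hypothetical; with PRIMARY KEY ((pk1, pk2), ck), DISTINCT must name every partition-key column):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class DistinctPartitionKeys {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");
            // DISTINCT must list pk1 AND pk2, i.e. all columns of the composite
            // partition key, or the query is rejected.
            Statement stmt = new SimpleStatement("SELECT DISTINCT pk1, pk2 FROM my_cf");
            // Page server-side rather than pulling 800K partitions in one response
            // (paging of DISTINCT queries depends on the exact 2.0.x version).
            stmt.setFetchSize(1000);
            for (Row row : session.execute(stmt)) {
                // assuming pk1/pk2 are text columns
                System.out.println(row.getString("pk1") + ":" + row.getString("pk2"));
            }
            cluster.close();
        }
    }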
From: Ravi Agrawal [mailto:ragra...@clearpoolgroup.com]
Sent: Thursday, January 22, 2015 1:57 PM
To: user@cassandra.apache.org
Subject: RE: Retrieving all row keys of a CF
Hi,
I increased …
From: …
Sent: Saturday, January 17, 2015 9:55 AM
To: user@cassandra.apache.org
Subject: Re: Retrieving all row keys of a CF
If you're getting partial data back, then failing eventually, try setting
.withCheckpointManager() - this will let you keep track of the token ranges
you've successfully processed.
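A rough sketch of how that plugs into the AllRowsReader builder (method names recalled from the Astyanax wiki, so verify them against your version; `keyspace`, `CF_MY_CF`, `rowHandler`, and the CheckpointManager instance are assumed to exist; a fuller keys-only example appears later in this digest):

    // Resumable full-CF scan: the checkpoint manager records each token range
    // as it completes, so a restarted job skips ranges it already finished
    // instead of re-reading 800K partitions from scratch.
    Boolean finished = new AllRowsReader.Builder<String, String>(keyspace, CF_MY_CF)
        .withPageSize(100)                        // partitions fetched per request
        .withCheckpointManager(checkpointManager) // persists completed token ranges
        .forEachRow(rowHandler)                   // Function<Row<String, String>, Boolean>
        .build()
        .call();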
From: Ravi Agrawal [mailto:ragra...@clearpoolgroup.com]
Sent: Friday, January 16, 2015 5:11 PM
To: user@cassandra.apache.org
Subject: RE: Retrieving all row keys of a CF
1) What is the heap size and total memory on each node? 8GB, 8GB
2) How big is th…
Restart the nodes after increasing the timeout and try again.
Mohammed
Sent: Friday, January 16, 2015 7:30 PM
To: user@cassandra.apache.org
Subject: RE: Retrieving all row keys of a CF
A few questions:
1) What is the heap size and total memory on each node?
2) How big is the cluster?
3) What are the read and range timeouts (in cassandra.yaml) on the C* nodes?
4) …
5) … How long does GC for new gen and old gen take?
6) Does any node crash with OOM error when you try AllRowsReader?
Mohammed
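For reference, question 3 refers to these two cassandra.yaml settings (values shown are the 2.0-era defaults; raising them is a stopgap, not a fix):

    read_request_timeout_in_ms: 5000      # single-partition reads
    range_request_timeout_in_ms: 10000    # range scans, which a full key scan hits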
From: Ravi Agrawal [mailto:ragra...@clearpoolgroup.com]
Sent: Friday, January 16, 2015 4:14 PM
To: user@cassandra.apache.org
Subject: Re: Retrieving all row keys of a CF
Hi,
Ruchir and I tried the query using the AllRowsReader recipe but had no luck. We are
seeing PoolTimeoutException.
SEVERE: [Thread_1] Error reading RowKeys
com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException:
PoolTimeoutException: [host=servername, latency=2003(2003), attempts=4]Timed …
Ruchir,
I am curious if you had better luck with the AllRowsReader recipe.
Mohammed
From: Eric Stevens [mailto:migh...@gmail.com]
Sent: Friday, January 16, 2015 12:33 PM
To: user@cassandra.apache.org
Subject: Re: Retrieving all row keys of a CF
Note that getAllRows() is deprecated in Astyanax
Note the section titled Reading only the row keys
<https://github.com/Netflix/astyanax/wiki/AllRowsReader-All-rows-query#reading-only-the-row-keys>,
which seems to match your use case exactly. You should start getting row
keys back very, very quickly.
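For convenience, a sketch of that keys-only variant (reconstructed from memory of the wiki page, so treat the exact builder calls as unverified; the CF name and serializers are placeholders):

    import com.google.common.base.Function;
    import com.netflix.astyanax.Keyspace;
    import com.netflix.astyanax.model.ColumnFamily;
    import com.netflix.astyanax.model.Row;
    import com.netflix.astyanax.recipes.reader.AllRowsReader;
    import com.netflix.astyanax.serializers.StringSerializer;

    public class ReadAllRowKeys {
        private static final ColumnFamily<String, String> CF_MY_CF =
                new ColumnFamily<String, String>("my_cf",
                        StringSerializer.get(), StringSerializer.get());

        public static void printAllKeys(Keyspace keyspace) throws Exception {
            new AllRowsReader.Builder<String, String>(keyspace, CF_MY_CF)
                    .withPageSize(100)                     // rows per page, per token range
                    .withColumnRange(null, null, false, 0) // fetch 0 columns: keys only
                    .forEachRow(new Function<Row<String, String>, Boolean>() {
                        @Override
                        public Boolean apply(Row<String, String> row) {
                            System.out.println(row.getKey());
                            return true; // returning false stops the scan
                        }
                    })
                    .build()
                    .call();
        }
    }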
On Fri, Jan 16, 2015 at 11:32 AM, Ruchir … wrote:
We have a column family that has about 800K rows and on average about a
million columns each. I am interested in getting all the row keys in this column
family and I am using the following Astyanax code snippet to do this.
This query never finishes (ran it for 2 days but did not finish).
…queries, but that makes it clearer that at least batching in smaller sizes is a good idea.
On Wed, Jun 11, 2014 at 9:17 PM, Jack Krupansky wrote:
> Hmmm... that multi-gets section is not present in the 2.0 doc:
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAntiPatterns_c.html
> Was that intentional – is that anti-pattern no longer …
…“batches” as an anti-pattern:
http://www.slideshare.net/mattdennis
-- Jack Krupansky
From: Peter Sanford
Sent: Wednesday, June 11, 2014 7:34 PM
To: user@cassandra.apache.org
Subject: Re: Large number of row keys in query kills cluster
On Wed, Jun 11, 2014 at 10:12 AM, Jeremy Jongsma wrote:
> Is there any documentation on this? Obviously these limits will vary by
> cluster capacity, but for new users it would be great to know that you can
> run into problems with large queries, and how they present themselves when
> you hit the…
The big problem seems to have been requesting a large number of row keys
combined with a large number of named columns in a query. 20K rows with 20K
columns destroyed my cluster. Splitting it into slices of 100 sequential
queries fixed the performance issue.
When updating 20K rows at a time, I …
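The fix Jeremy describes is plain client-side chunking; a minimal sketch (the fetchRows helper, standing in for whatever multiget your client exposes, is hypothetical):

    import java.util.List;

    public class SlicedMultiget {
        static final int SLICE_SIZE = 100;

        // Issue many small multigets instead of one 20K-key query.
        static <K> void fetchInSlices(List<K> allKeys) {
            for (int i = 0; i < allKeys.size(); i += SLICE_SIZE) {
                List<K> slice =
                        allKeys.subList(i, Math.min(i + SLICE_SIZE, allKeys.size()));
                fetchRows(slice); // hypothetical: one multiget per 100-key slice
            }
        }

        static <K> void fetchRows(List<K> keys) {
            // placeholder for the actual client call (Astyanax, Hector, ...)
        }
    }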
…nodes are somehow stressed.
How did you make the query? Are you using Thrift or CQL3 API?
Please note that there is another way to get all partition keys: SELECT
DISTINCT FROM ..., more details here:
www.datastax.com/dev/blog/cassandra-2-0-1-2-0-2-and-a-quick-peek-at-2-0-3
I ran an application today that attempted to fetch 20,000+ unique row keys
in one query against a set of completely empty column families. On a 4-node
cluster (EC2 m1.large instances) with the recommended memory settings (2 GB
heap), every single node immediately ran out of memory and became …
Thanks,
worked a treat!
Andy
From: DuyHai Doan
Sent: 15 February 2014 18:51
To: user@cassandra.apache.org
Subject: Re: CQL get unique row keys ?
Hello Andy
Since C* 2.0.1 it is possible to list all distinct partition keys (not
clustering keys) with SELECT DISTINCT … time.
Regards
Duy Hai DOAN
On Sat, Feb 15, 2014 at 6:05 PM, Andrew Cobley wrote:
I may be missing something here, but is there a way in CQL to get all unique
row keys in a column family (table)?
I've created a table like this:
CREATE TABLE totp (
  artist varchar,
  track varchar,
  appearance_type varchar,
  PRIMARY KEY ((artist), track)
) WITH CLUSTERING ORDER BY (track ASC);
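Given that schema, the query DuyHai points at is SELECT DISTINCT on the partition key; for example, via the DataStax Java driver (a sketch; `session` is assumed to be a connected Session):

    // Unique partition keys ("row keys") of totp; requires C* 2.0.1+.
    for (com.datastax.driver.core.Row row :
            session.execute("SELECT DISTINCT artist FROM totp")) {
        System.out.println(row.getString("artist"));
    }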
…Ask_Price, validation_class:AsciiType}
];
how do i get from this usersWriter.newRow(String.valueOf(lineNumber)); ?
thanks.
On Fri, Oct 11, 2013 at 4:30 PM, Vivek Mishra wrote:
I am not able to get your meaning for "string as row keys"?
Row key values will be of type "key_validation_class" only.
On Fri, Oct 11, 2013 at 4:25 PM, ashish sanadhya wrote:
> Hi vivek, key_validation_class=UTF8Type will do, but i certainly want
> string as row keys, so will it work??
Also, please use ByteBufferUtils for byte conversions.
On Fri, Oct 11, 2013 at 4:17 PM, Vivek Mishra wrote:
> but i have changed my **key_validation_class=AsciiType** in order to make
> **string as row keys**
> why not key_validation_class=UTF8Type ?
> -Vivek
but i have changed my **key_validation_class=AsciiType** in order to make
**string as row keys**
why not key_validation_class=UTF8Type ?
-Vivek
On Fri, Oct 11, 2013 at 3:55 PM, ashish sanadhya wrote:
I have done with bulk loader with key_validation_class=LexicalUUIDType for
new row with the help of this [code][1] but i have changed my
**key_validation_class=AsciiType** in order to make **string as row keys**
create column family Users1
  with key_validation_class=AsciiType …
Hi Jimmy,
Check out the token function:
http://www.datastax.com/docs/1.1/dml/using_cql#paging-through-non-ordered-partitioner-results
You can use it to page through your rows.
Blake
On Jul 23, 2013, at 10:18 PM, Jimmy Lin wrote:
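A sketch of that token() paging pattern with the DataStax Java driver (assumptions: Murmur3Partitioner, so tokens are bigints; a hypothetical table mytable with text partition key id; a key hashing exactly to the minimum token would be skipped, acceptable for a sketch):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class TokenPaging {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");
            long lastToken = Long.MIN_VALUE; // start of the Murmur3 token range
            while (true) {
                ResultSet rs = session.execute(
                        "SELECT id, token(id) AS t FROM mytable"
                        + " WHERE token(id) > ? LIMIT 1000", lastToken);
                int fetched = 0;
                for (Row row : rs) {
                    System.out.println(row.getString("id"));
                    lastToken = row.getLong("t"); // resume after the last token seen
                    fetched++;
                }
                if (fetched < 1000) break; // short page: the whole ring is walked
            }
            cluster.close();
        }
    }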
hi,
I want to fetch all the row keys of a table using CQL3, e.g.:
select id from mytable limit 999
#1
For this query, does the node need to wait for all rows to return from all
other nodes before returning the data to the client (I am using Astyanax)?
In other words, will this operation create a …
--
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 10/01/2013, at 11:55 AM, Snehal Nagmote wrote:
Hello All,
I am using Kundera 2.0.7 and Cassandra 1.0.8. I need to implement
batching/pagination over row keys.
For instance: scan the column family, get 100 records in a batch each time,
till all keys are exhausted.
I am using the random partitioner for the keyspace. I explored the limit
option in CQL and …
Suppose two cases:
1. I have a Cassandra column family with non-composite row keys =
incremental id
2. I have a Cassandra column family with composite row keys =
incremental id 1 : group id
Which one will be faster to insert? And which one will be faster to
read by incremental id?
> "…d in row (i.e. key) major order."
>
> Does this mean that new row keys should be ascending? If they are not
> ascending, does that mean all of the data after the new key needs to be
> shifted down?
>
> Thanks,
> Cory
--
Tyler Hobbs
DataStax <http://datastax.com/>
…SSTableSimpleUnsortedWriter(
    cassandraOutputDir,
    "IngenuityContent",
    "Articles",
    compositeUtf8Utf8Type,
    null,
    64)
I then figured I could use compositeUtf8Utf8Type when creating composite row
keys and column names of the kind I require. Cassandra 1.1.x introduces the
CompositeType.Builder class for creating actual composite values, but that…
Only if you reuse a row key.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 27/03/2012, at 6:38 AM, Ertio Lew wrote:
> I need to use the range beyond the integer32 type range, so I am using Long
> to write those keys. I am afraid this might lead to collisions with the
> previously stored integer keys in the same CF even if I leave out the
> int32 type range.
On Mon, Mar 26, 2012 at 10:51 PM, aaron morton wrote:
>> without them overlapping/disturbing each other (assuming that keys lie in
>> the above domains)?
> Not sure what you mean by overlapping.
> 42 as an int and 42 as a long are the same key.
> Cheers
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
> On 25/03/201…
I have been writing rows to a CF all with integer (4 byte) keys. So my CF
contains rows with keys in the entire range from Integer.MIN_VALUE to
Integer.MAX_VALUE.
Now I want to store Long type keys as well in this CF, without disturbing
the integer keys. The range of Long type keys would be exclud…
…only filenames (2nd component of the Composite) starting with "FT".
If the algorithm is correct, I should get one single result "FT2".
I have used a SliceQuery following the DataStax algo above: just OK.
I have also used a RangeSlicesQuery following the DataStax algo above: …
…columns and a standard CF? Row key is the UUID, column name is
(timestamp : dir_entry); you can then slice all columns with a particular
timestamp.
Even if you have a random key, I would use the RP unless …
…reme use case.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 21/12/2011, at 3:06 AM, Bryce Allen wrote:
I think it comes down to how much you benefit from row range scans, and
how confident you are that going forward all data will continue to use
random row keys.
I'm considering using BOP as a way of working around the non-indexed
super column limitation. In my current schema, row keys are random UUIDs,
super column names are timestamps, and columns contain a snapshot in time
of directory contents, …
…ByteOrderedPartitioner and OrderPreservingPartitioning will lead to
hotspots and unbalanced rings.
2011/12/20 Drew Kutcharian:
Hey Guys,
I just came across http://wiki.apache.org/cassandra/ByteOrderedPartitioner and
it got me thinking. If the row keys are java.util.UUID which are generated
randomly (and securely), then what type of partitioner would be the best? Since
the key values are already random, would it make a …
> Many articles suggest modeling TimeUUID in columns instead of rows, but since
> only one node can serve a single row, won't this lead to hot spot problems?
It won't cause hotspots as long as you are sharding by a small enough
time period, like hour, day, or week.
I.e. the key is the hour, day, or …
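A sketch of that bucketing idea in Java (names and the bucket width are illustrative only):

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class TimeBuckets {
        // One row per (series, hour): TimeUUID columns go inside the bucket row,
        // so no single row, and therefore no single node, owns the whole series.
        static String rowKey(String series, Date eventTime) {
            SimpleDateFormat hour = new SimpleDateFormat("yyyyMMddHH");
            hour.setTimeZone(TimeZone.getTimeZone("UTC"));
            return series + ":" + hour.format(eventTime); // e.g. "weblogs:2011110407"
        }
    }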
Thank you.
I agree that asking "lots of" machines to process a single query could be
slow, if there are hundreds of them instead of dozens. Will a cluster of
e.g. 4-20 nodes behave well if we spread the query to all nodes?
Many articles suggest modeling TimeUUID in columns instead of rows, but since …
On Fri, Nov 4, 2011 at 7:49 AM, Gary Shi wrote:
> I want to save time series event logs into Cassandra, and I need to load
> them by key range (row key is time-based). But we can't use
> RandomPartitioner in this way, while OrderPreservingPartitioner leads to the
> hot spot problem.
You should read t…
I want to save time series event logs into Cassandra, and I need to load
them by key range (row key is time-based). But we can't use
RandomPartitioner in this way, while OrderPreservingPartitioner leads to
the hot spot problem.
So I wonder why Cassandra saves SSTables by sorted row tokens instead of
key…
On Tue, Oct 18, 2011 at 4:10 PM, Tyler Hobbs wrote:
> * You'll get range ghosts
> (http://wiki.apache.org/cassandra/FAQ#range_ghosts) with column_count=0.
> You can avoid them if you set column_count=1.
What heuristic do you use for skipping empty rows?
--
Jonathan Ellis
Project Chair, Apache Cassandra
…you set column_count=1.
- Tyler
On Tue, Oct 18, 2011 at 3:58 PM, aaron morton wrote:
There is no first-class support for just getting row keys; you will always
want to get a column.
You can fake it by requesting zero columns.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 19/10/2011, at 3:53 AM, David Fischer (Gtalk) wrote:
Hello all,
New to Cassandra, and I am using pycassa to access data. I was
wondering if someone knows how to just pull row keys instead of get_range?
This question may be a bit more on the pycassa side, but I'm not sure. If
someone has a Java snippet to do it, that would be OK also.
Thanks
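Since David asked for a Java snippet: with Hector, Aaron's zero-columns trick is spelled setReturnKeysOnly() (a sketch; the CF name and serializers are placeholders, and per Tyler's range-ghost warning above, deleted rows can come back as empty keys):

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.OrderedRows;
    import me.prettyprint.hector.api.beans.Row;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.RangeSlicesQuery;

    public class KeysOnlyQuery {
        // One page of row keys; deleted rows may appear as "range ghosts".
        static void printOnePageOfKeys(Keyspace keyspace) {
            StringSerializer ss = StringSerializer.get();
            RangeSlicesQuery<String, String, String> query =
                    HFactory.createRangeSlicesQuery(keyspace, ss, ss, ss)
                            .setColumnFamily("MyCF")
                            .setKeys("", "")     // whole ring under RandomPartitioner
                            .setReturnKeysOnly() // ask for zero columns
                            .setRowCount(100);
            OrderedRows<String, String, String> rows = query.execute().get();
            for (Row<String, String, String> row : rows) {
                System.out.println(row.getKey());
            }
        }
    }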
Why do you need another CF? Is there something wrong with repeating the key
as a column and indexing it?
On Fri, Jul 22, 2011 at 7:40 PM, Patrick Julien wrote:
Exactly. In any case, I just answered my own question. If I need
range, I can just make another column family where the column names are
these keys.
On Fri, Jul 22, 2011 at 12:37 PM, Nate McCall wrote:
> yes, but why would you use CompositeType if you don't need range query?
If you were doing composite keys anyway (a common approach with time
series data, for example), you would not have to write parsing and
concatenation code. Particularly useful if you had mixed types in the
key.
I can still use it for keys if I don't need ranges then? Because for
what we are doing we can always re-assemble keys.
On Fri, Jul 22, 2011 at 11:38 AM, Donal Zang wrote:
If you are using OPP, then you can use CompositeType on both key and
column name; otherwise (Random Partitioner), just use it for columns.
On 22/07/2011 17:10, Patrick Julien wrote:
With the current implementation of CompositeType in Cassandra 0.8.1,
is it recommended practice to use a CompositeType as the key?
Or are both column and key equally well supported?
The documentation on CompositeType is light, well, non-existent really, with
key_validation_class set to Co…
If you are using 0.10 or 0.11 of the cassandra gem, you will only get rows back
that have values (columns). This is due to the way Cassandra handles deleted
rows, by adding a tombstone. So if you delete a row (or delete all the columns
in a row) the gem will remove that particular row from the hash …
Hard to say exactly what the issue is. Are they connected to the same node and
using the same Consistency Level?
Try turning the logging up to DEBUG to see if they are issuing the same query.
Hope that helps.
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
Thanks, that definitely helped. Any idea why my client is showing far fewer
existing rows than cassandra-cli, though?
I'm using the Ruby Cassandra client, and when I get all the rows for the
"Sessions" CF, I get 8 rows returned. However, when I do "list Sessions" in
the cassandra-cli I get 40 rows returned.
The key printed in the DEBUG message is the byte array the server was given
as the key, converted to hex. Your client API may have converted the string
to ASCII bytes before sending it to the server. E.g. here is me writing a
'foo' key to the server:
DEBUG 15:52:15,818 insert writing local RowMutation(ke…
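To make the mapping concrete, a tiny standalone sketch: 'foo' in ASCII is 0x66 0x6f 0x6f, which is the hex form that shows up in the server's DEBUG line.

    import java.nio.charset.StandardCharsets;

    public class KeyHex {
        public static void main(String[] args) {
            byte[] key = "foo".getBytes(StandardCharsets.US_ASCII);
            StringBuilder hex = new StringBuilder();
            for (byte b : key) {
                hex.append(String.format("%02x", b)); // two hex digits per byte
            }
            System.out.println(hex); // prints 666f6f
        }
    }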
We're using Cassandra to store our sessions, all in a single column family
"Sessions" with the format:
Sessions['session_key'] = {'val': }
(session_key is a randomly generated hash)
The "raw" keys I'm talking about are, for example, the 'key' value as seen
in Cassandra DEBUG output:
insert writing …
On Wed, Mar 2, 2011 at 4:53 AM, Eric Charles wrote:
> Hi,
> I'm also facing the need to retrieve all row keys.
> What do you mean with "stable" order?
> From this thread, I understand the paging method with RandomPartitioner …
with setKeys("keydsg","") again, you will get following
result.
keydsg
key8jkg
keyag87
key45s
...
Regards,
Chen
www.evidentsoftware.com
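A sketch of that restart-from-last-key loop in Hector (builder calls as recalled, so verify them against your Hector version; requesting pageSize + 1 rows and skipping the boundary key is the usual way to avoid repeats):

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.OrderedRows;
    import me.prettyprint.hector.api.beans.Row;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.RangeSlicesQuery;

    public class PageAllKeys {
        static final int PAGE = 100;

        static void pageAllKeys(Keyspace keyspace, String cfName) {
            StringSerializer ss = StringSerializer.get();
            String start = "";
            while (true) {
                RangeSlicesQuery<String, String, String> q =
                        HFactory.createRangeSlicesQuery(keyspace, ss, ss, ss)
                                .setColumnFamily(cfName)
                                .setKeys(start, "")     // resume from the last key seen
                                .setReturnKeysOnly()
                                .setRowCount(PAGE + 1); // +1: first row repeats `start`
                OrderedRows<String, String, String> rows = q.execute().get();
                for (Row<String, String, String> row : rows) {
                    if (!row.getKey().equals(start)) { // skip the boundary duplicate
                        System.out.println(row.getKey());
                    }
                }
                if (rows.getCount() <= PAGE) break;     // short page: done
                start = rows.peekLast().getKey();
            }
        }
    }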
Hi,
I'm also facing the need to retrieve all row keys.
What do you mean with "stable" order?
From this thread, I understand the paging method with RandomPartitioner will
return all keys (shuffled, but no missing key, no double key).
This seems to have already been told, but I prefer …
…row. In order to control the ordering of rows you'll need to use the
OrderPreservingPartitioner
(http://www.datastax.com/docs/0.7/operations/clustering#tokens-partitioners-ring).
…your time, I will take a look at this.
As for getColumnsFromRows: it should be returning you a map of lists.
The map is insertion-order-preserving and populated based on the provided
list of row keys (so if you iterate over the entries in the map they should
be in the same order as the list of row keys).
Thanks Roshan,
I think I understand now. The setRowCount() is in the Java Cassandra
driver. I'll try to find the similar method in the Ruby API.
Kind regards,
Joshua
On Thu, Feb 24, 2011 at 1:04 PM, Roshan Dawrani wrote:
On Thu, Feb 24, 2011 at 6:54 AM, Joshua Partogi wrote:
> I am sorry for not making it clear in my original post that what I am
> looking for is the list of keys in the database, assuming that the client
> application does not know the keys. From what I understand, RangeSliceQuery
> requires you t…
--
http://twitter.com/jpartogi
Is your data updated, or are large chunks read-only?
Hi everyone,
Thank you to everyone that has responded to my email. I really
appreciate it. I am sorry for not making it clear in my original
post that what I am looking for is the list of keys in the database,
assuming that the client application does not know the keys. From what
I understand, RangeSliceQuery requires you t…
They are, however, in *stable* order, which is important.
Yes. But I don't think retrieving keys in the "right order" was part of
the original question. :-)
On Wed, Feb 23, 2011 at 7:50 PM, Norman Maurer wrote:
yes, but be aware that the keys will not be in the "right order".
Bye,
Norman
2011/2/23 Roshan Dawrani:
> On Wed, Feb 23, 2011 at 7:17 PM, Ching-Cheng Chen wrote:
>> Actually, if you want to get ALL keys, I believe you can still use
>> RangeSliceQuery with RP.
>> Just use setKeys("","") as fir…