I'm dealing with a similar issue, with an additional complication. We are
collecting time series data, and the amount of data per time period varies
greatly. We will collect and query event data by account, but the biggest
account will accumulate about 10,000 times as much data per time period as
the median account. So for the median account I could put multiple years of
data in one row, while for the largest accounts I don't want to put more
than one day's worth in a row. If I use a uniform bucket size of one day (to
accommodate the largest accounts) it will make for rows that are too short
for the vast majority of accounts--we would have to read thirty rows to get
a month's worth of data. One obvious approach is to set a maximum row size,
that is, write data in a row until it reaches a maximum length, then start
a new one. There are two things that make that harder than it sounds (see
the rough sketch after this list):

   1. There's no efficient way to count columns in a Cassandra row in order
   to find out when to start a new one.
   2. Row keys aren't searchable. So I need to be able to construct or look
   up the key to each row that contains an account's data. (Our data will be in
   reverse date order.)
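
To make the maximum-row-size idea concrete, here is a rough, self-contained
sketch of the write path I have in mind. It's plain Python with no Cassandra
driver; the dicts, MAX_COLUMNS_PER_ROW, and the bucket index are made-up
stand-ins for illustration, not a real API:

```python
# Sketch of the "maximum row size" write path. Plain dicts stand in for
# column families; all names here are invented for illustration.

MAX_COLUMNS_PER_ROW = 100000

event_cf = {}    # row_key -> {column_name: value}   (the data CF)
counter_cf = {}  # row_key -> column count            (what a counter column would track)

def row_key(account_id, bucket_index):
    # Row keys have to be constructible rather than searched: account id
    # plus a bucket index that increments whenever the previous row fills up.
    return f"{account_id}:{bucket_index}"

def write_event(account_id, current_bucket, column_name, value):
    key = row_key(account_id, current_bucket)
    # Complication 1: Cassandra can't cheaply count columns in a row, so the
    # count has to be tracked out-of-band (e.g. in a counter column).
    if counter_cf.get(key, 0) >= MAX_COLUMNS_PER_ROW:
        current_bucket += 1                  # roll over to a new row
        key = row_key(account_id, current_bucket)
    event_cf.setdefault(key, {})[column_name] = value
    counter_cf[key] = counter_cf.get(key, 0) + 1
    return current_bucket                    # caller remembers the live bucket
```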

Possible solutions:

   1. Cassandra counter columns are an efficient way to keep counts
   2. I could have a "directory" row that contains pointers to the rows
   that contain an account's data

(I could probably combine the row directory and the column counter into a
single counter column family, where the column name is the row key and the
value is the counter.) A naive solution would require reading the directory
before every read and the counter before every write--caching could
probably help with that. So this approach would probably lead to a
reasonable solution, but it's liable to be somewhat complex. Before I go
much further down this path, I thought I'd run it by this group in case
someone can point out a more clever solution.
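
Here is a similar toy sketch of that combined directory/counter idea: one
directory row per account, whose column names are the account's data-row keys
and whose values are the column counts. Again this is plain Python with
invented names, just to show the bookkeeping, not actual Cassandra calls:

```python
# One "directory" row per account: column name = data-row key,
# column value = number of columns written to that data row.
# Readers walk the directory newest-first; writers bump the count and
# signal when to roll over to a new data row. Names are illustrative.

directory_cf = {}  # account_id -> {data_row_key: column_count}

def rows_for_read(account_id):
    # Since data is written in reverse date order, a reader walks the
    # account's data rows newest-first (assuming the row key sorts by date)
    # until it has collected enough columns.
    return sorted(directory_cf.get(account_id, {}), reverse=True)

def note_write(account_id, data_row_key, max_columns=100000):
    counts = directory_cf.setdefault(account_id, {})
    counts[data_row_key] = counts.get(data_row_key, 0) + 1
    # True means the caller should start (and register) a new data row.
    return counts[data_row_key] >= max_columns
```

In a real implementation note_write would be a counter increment and
rows_for_read would be the directory lookup I'd try to cache.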

Thanks,

Jim
On Thu, Mar 22, 2012 at 5:36 PM, Alexandru Sicoe <adsi...@gmail.com> wrote:

> Thanks Aaron, I'll lower the time bucket, see how it goes.
>
> Cheers,
> Alex
>
>
> On Thu, Mar 22, 2012 at 10:07 PM, aaron morton <aa...@thelastpickle.com> wrote:
>
>> Will adding a few tens of wide rows like this every day cause me problems
>> on the long term? Should I consider lowering the time bucket?
>>
>> IMHO yeah, yup, ya and yes.
>>
>>
>> From experience I am a bit reluctant to create too many rows because I
>> see that reading across multiple rows seriously affects performance. Of
>> course I will use map-reduce as well ...will it be significantly affected
>> by many rows?
>>
>> Don't think it would make too much difference.
>> The range slice used by map-reduce will find the first row in the batch and
>> then step through them.
>>
>> Cheers
>>
>>
>>   -----------------
>> Aaron Morton
>> Freelance Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 22/03/2012, at 11:43 PM, Alexandru Sicoe wrote:
>>
>> Hi guys,
>>
>> Based on what you are saying there seems to be a tradeoff that developers
>> have to handle between:
>>
>>                                "keep your rows under a certain size" vs
>> "keep data that's queried together, on disk together"
>>
>> How would you handle this tradeoff in my case:
>>
>> I monitor about 40,000 independent time series streams of data. The
>> streams have highly variable rates. Each stream has its own row and I go to
>> a new row every 28 hrs. With this scheme, I see several tens of rows
>> reaching sizes in the millions of columns within this time bucket (largest
>> I saw was 6.4 million). The sizes of these wide rows are around 400MBytes
>> (considerably larger than 60MB).
>>
>> Will adding a few tens of wide rows like this every day cause me problems
>> on the long term? Should I consider lowering the time bucket?
>>
>> From experience I am a bit reluctant to create too many rows because I
>> see that reading across multiple rows seriously affects performance. Of
>> course I will use map-reduce as well ...will it be significantly affected
>> by many rows?
>>
>> Cheers,
>> Alex
>>
>> On Tue, Mar 20, 2012 at 6:37 PM, aaron morton <aa...@thelastpickle.com> wrote:
>>
>>> The reads are only fetching slices of 20 to 100 columns max at a time
>>> from the row but if the key is planted on one node in the cluster I am
>>> concerned about that node getting the brunt of traffic.
>>>
>>> What RF are you using, how many nodes are in the cluster, what CL do you
>>> read at ?
>>>
>>> If you have lots of nodes that are in different racks the
>>> NetworkTopologyStrategy will do a better job of distributing read load than
>>> the SimpleStrategy. The DynamicSnitch can also help distribute load; see
>>> cassandra.yaml for its configuration.
>>>
>>> I thought about breaking the column data into multiple different row
>>> keys to help distribute throughout the cluster but it's so darn handy having
>>> all the columns in one key!!
>>>
>>> If you have a row that will continually grow it is a good idea to
>>> partition it in some way. Large rows can slow things like compaction and
>>> repair down. If you have something above 60MB it's starting to slow things
>>> down. Can you partition by a date range such as month ?
>>>
>>> Large rows are also a little slower to query from
>>> http://thelastpickle.com/2011/07/04/Cassandra-Query-Plans/
>>>
>>> If most reads are only pulling 20 to 100 columns at a time, are there two
>>> workloads? Is it possible to store just these columns in a separate row? If
>>> you understand how big a row may get, you may be able to use the row cache to
>>> improve performance.
>>>
>>> Cheers
>>>
>>>
>>>   -----------------
>>> Aaron Morton
>>> Freelance Developer
>>> @aaronmorton
>>> http://www.thelastpickle.com
>>>
>>> On 20/03/2012, at 2:05 PM, Blake Starkenburg wrote:
>>>
>>> I have a row key which is now up to 125,000 columns (and anticipated to
>>> grow), I know this is a far-cry from the 2-billion columns a single row key
>>> can store in Cassandra but my concern is the amount of reads that this
>>> specific row key may get compared to other row keys. This particular row
>>> key houses column data associated with one of the more popular areas of the
>>> site. The reads are only fetching slices of 20 to 100 columns max at a time
>>> from the row but if the key is planted on one node in the cluster I am
>>> concerned about that node getting the brunt of traffic.
>>>
>>> I thought about breaking the column data into multiple different row
>>> keys to help distribute throughout the cluster but it's so darn handy having
>>> all the columns in one key!!
>>>
>>> key_cache is enabled but row cache is disabled on the column family.
>>>
>>> Should I be concerned going forward? Any particular advice on large wide
>>> rows?
>>>
>>> Thanks!
>>>
>>>
>>>
>>
>>
>
