Re: How to confirm TWCS is fully in-place

2016-11-09 Thread kurt Greaves
What compaction strategy are you migrating from? If you're migrating from
STCS it's likely that when switching to TWCS no extra compactions are
necessary, as the SSTables will be put into their respective windows but
there won't be enough candidates for compaction within a window.

Kurt Greaves
k...@instaclustr.com
www.instaclustr.com

On 8 November 2016 at 21:11, Oskar Kjellin  wrote:

> Hi,
>
> You could manually trigger it with nodetool compact.
>
> /Oskar
>
> > On 8 nov. 2016, at 21:47, Lahiru Gamathige  wrote:
> >
> > Hi Users,
> >
> > I am thinking of migrating our timeseries tables to use TWCS. I am using
> > JMX to set the new compaction strategy, one node at a time, and I am not
> > sure how to confirm that after the flush all the compaction is done on each
> > node. I tried this in a small cluster, but after setting the compaction I
> > didn't see any compaction triggering, and after running nodetool flush I
> > still didn't see a compaction triggering.
> >
> > Now I am about to do the same thing in our staging cluster, so I'm curious
> > how I can confirm compaction ran on each node before I change the table
> > schema, because I am worried it will start the compaction on all the nodes
> > at the same time.
> >
> > Lahiru
>


Re: Re: A difficult data model with C*

2016-11-09 Thread Vladimir Yudovin
You are welcome! )



>recent ten movies watched by the user within 30 days.

In this case you can't use PRIMARY KEY (user_name, video_id), as video_id
would be required to fetch the row, so all of this could be:

CREATE TYPE play (video_id text, position int, last_time timestamp);

CREATE TABLE recent (user_name text PRIMARY KEY, play_list LIST<FROZEN<play>>);


You can easily retrieve the play list for a specific user by his ID. Instead of LIST
you can use MAP; I don't think that for ten entries it matters.
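As an illustration only (a hypothetical Python sketch, not driver code from the thread), this is the read-modify-write a client would do around play_list: drop any older entry for the same video, discard entries outside the 30-day window, and keep the ten most recent:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Play:                      # mirrors the CQL UDT (video_id, position, last_time)
    video_id: str
    position: int
    last_time: datetime

def record_play(play_list, play, limit=10, window_days=30):
    """Return the updated play list: one entry per video, only entries
    from the last `window_days` days, newest first, at most `limit` items."""
    cutoff = play.last_time - timedelta(days=window_days)
    kept = [p for p in play_list
            if p.video_id != play.video_id and p.last_time >= cutoff]
    kept.append(play)
    kept.sort(key=lambda p: p.last_time, reverse=True)
    return kept[:limit]

now = datetime(2016, 11, 9)
plays = []
for i in range(12):
    plays = record_play(plays, Play(f"video-{i}", i * 60, now + timedelta(hours=i)))

print(len(plays), plays[0].video_id)  # 10 video-11
```

Rewatching a video replaces its old entry, which matches the requirement that only the last position per movie is kept.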




Best regards, Vladimir Yudovin, 

Winguzone - Hosted Cloud Cassandra
Launch your cluster in minutes.





On Tue, 08 Nov 2016 22:29:48 -0500, ben ben wrote:




Hi Vladimir Yudovin,



Thank you very much for your detailed explaining. Maybe I didn't describe 
the requirement clearly. The use cases should be:

1. a user login our app.

2. show the recent ten movies watched by the user within 30 days.

3. the user can click any one of the ten movies and continue to watch from the
last position she/he stopped at. BTW, a movie can be watched several times by a
user, and the last position is indeed needed.



BRs,

BEN




From: Vladimir Yudovin
Sent: 8 November 2016 22:35:48
To: user
Subject: Re: A difficult data model with C*
 


Hi Ben,



if you need a very limited number of positions (as you said, ten), maybe you can
store them in a LIST of a UDT? Or just as a JSON string?

So you'll have one row for each user-video pair.



It can be something like this:



CREATE TYPE play (position int, last_time timestamp);

CREATE TABLE recent (user_name text, video_id text, review
LIST<FROZEN<play>>, PRIMARY KEY (user_name, video_id));



UPDATE recent set review = review + [(1234,12345)] where user_name='some user' 
AND video_id='great video';

UPDATE recent set review = review + [(1234,123456)] where user_name='some user' 
AND video_id='great video';

UPDATE recent set review = review + [(1234,1234567)] where user_name='some 
user' AND video_id='great video';



You can delete the oldest entry by index:

DELETE review[0] FROM recent WHERE user_name='some user' AND video_id='great 
video';



or by value, if you know the oldest entry:



UPDATE recent SET review = review - [(1234,12345)]  WHERE user_name='some user' 
AND video_id='great video';
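As an aside, the list semantics above can be sanity-checked with a throwaway Python sketch (tuples stand in for the play UDT; this is illustrative only, not driver code):

```python
# Mimic the CQL operations on `review LIST<FROZEN<play>>` with a plain list.
review = []

# review = review + [(position, last_time)]  -- append, as in the UPDATEs above
review = review + [(1234, 12345)]
review = review + [(1234, 123456)]
review = review + [(1234, 1234567)]

# DELETE review[0]  -- delete the oldest entry by index
del review[0]

# review = review - [(1234, 123456)]  -- delete by value; CQL list
# subtraction removes every occurrence, which the filter mirrors
review = [entry for entry in review if entry != (1234, 123456)]

print(review)  # [(1234, 1234567)]
```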



Best regards, Vladimir Yudovin, 

Winguzone - Hosted Cloud Cassandra
 Launch your cluster in minutes.





On Mon, 07 Nov 2016 21:54:08 -0500, ben ben wrote:






Hi guys,



We are maintaining a system for an on-line video service. ALL users' viewing 
records of every movie are stored in C*. So she/he can continue to enjoy the 
movie from the last point next time. The table is designed as below:

CREATE TABLE recent (

user_name text,

vedio_id text,

position int,

last_time timestamp,

PRIMARY KEY (user_name, vedio_id)

)



It worked well before. However, the records grow every day, and the last ten
items would be adequate for the business. The current model uses vedio_id as the
clustering key to keep one row per movie, but as you know, the business prefers
ordering by last_time desc. If we used last_time as the clustering key, there would
be many records for a single movie when only the most recent one is actually
desired. So how to model that? Do you have any suggestions?

Thanks!





BRs,

BEN



Having Counters in a Collection, like a map?

2016-11-09 Thread Ali Akhtar
I have a use-case where I need to have a dynamic number of counters.

The easiest way to do this would be to have a map<int, counter> where the
int is the key, and the counter is the value which is incremented /
decremented. E.g. if something related to 5 happened, then I'd get the
counter for 5 and increment / decrement it.

I also need to have multiple maps of this type, where each
int is a key referring to something different.

Is there a way to do this in c* which doesn't require creating 1 table per
type of map that i need?


Re: Having Counters in a Collection, like a map?

2016-11-09 Thread Vladimir Yudovin
Unfortunately it's impossible either to use counters inside collections or to
mix them with other non-counter columns:



CREATE TABLE cnt (id int PRIMARY KEY, cntmap MAP<int, counter>);

InvalidRequest: Error from server: code=2200 [Invalid query] message="Counters
are not allowed inside collections: map<int, counter>"



CREATE TABLE cnt (id int PRIMARY KEY , cnt1 counter, txt text);

InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot 
mix counter and non counter columns in the same table"





>Is there a way to do this in c* which doesn't require creating 1 table per
type of map that i need?

But you don't need to create a separate table for each counter; just use one
row per counter:



CREATE TABLE cnt (id int PRIMARY KEY , value counter);



Best regards, Vladimir Yudovin, 

Winguzone - Hosted Cloud Cassandra
Launch your cluster in minutes.






Re: Having Counters in a Collection, like a map?

2016-11-09 Thread DuyHai Doan
"Is there a way to do this in c* which doesn't require creating 1 table per
type of map that i need?"

You're lucky, it's possible with some tricks


CREATE TABLE my_counters_map (
 id uuid,
 map_name text,
 map_key int,
 count counter,
 PRIMARY KEY ((id), map_name, map_key)
);

This table can be seen as:

Map<id, SortedMap<map_name, SortedMap<map_key, counter>>>

The pair (map_key, counter) simulates your map<int, counter>.

The clustering column map_name allows you to have multiple maps of counters
for a single partition key.
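To make the layout concrete, here is a small in-memory Python model (hypothetical, for illustration only) of how one physical row per (id, map_name, map_key) behaves like several maps of counters:

```python
from collections import defaultdict

class CounterMaps:
    """Toy model of the my_counters_map table: one entry per
    (id, map_name, map_key) clustering, each holding a counter."""
    def __init__(self):
        self._rows = defaultdict(int)   # (id, map_name, map_key) -> counter

    def incr(self, pid, map_name, map_key, delta=1):
        # models: UPDATE my_counters_map SET count = count + ? WHERE ...
        self._rows[(pid, map_name, map_key)] += delta

    def get_map(self, pid, map_name):
        # models: SELECT map_key, count FROM my_counters_map
        #         WHERE id = ? AND map_name = ?
        return {k: v for (p, m, k), v in self._rows.items()
                if p == pid and m == map_name}

cm = CounterMaps()
cm.incr("user-1", "first_map", 5)
cm.incr("user-1", "first_map", 5)
cm.incr("user-1", "second_map", 7, -1)
print(cm.get_map("user-1", "first_map"))  # {5: 2}
```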





Re: Having Counters in a Collection, like a map?

2016-11-09 Thread Ali Akhtar
The only issue with the last two solutions is that they require knowing the key
in advance in order to look up the counters.

The keys, however, are dynamic in my case.



Re: Having Counters in a Collection, like a map?

2016-11-09 Thread Vladimir Yudovin
>The keys however are dynamic in my case.

Why is that a problem for you? As you said, "if something related to 5 happened,
then i'd get the counter for 5 and increment / decrement it."


So do "UPDATE cnt SET value = value + SOMETHING WHERE id = 5;"

If the counter for event 5 exists it will be changed; if not, it is created and
set to the initial value.
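That upsert behaviour can be pictured with a tiny Python analogue (illustrative only, not Cassandra code): a missing counter behaves as if it were 0 before the first increment, so dynamic keys need no prior setup:

```python
from collections import defaultdict

# A missing Cassandra counter increments as if it were 0;
# defaultdict(int) gives the same upsert-style semantics in miniature.
counters = defaultdict(int)

def bump(event_id, delta):
    # models: UPDATE cnt SET value = value + <delta> WHERE id = <event_id>;
    counters[event_id] += delta

bump(5, 1)    # counter for id=5 did not exist: created, then incremented
bump(5, 3)
bump(9, -2)   # a brand-new, dynamic key works the same way
print(counters[5], counters[9])  # 4 -2
```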




Best regards, Vladimir Yudovin, 

Winguzone - Hosted Cloud Cassandra
Launch your cluster in minutes.






Re: Having Counters in a Collection, like a map?

2016-11-09 Thread DuyHai Doan
"they require knowing the key in advance in order to look up the counters"

--> Wrong

Imagine your table

partition_key uuid,
first_map map<int, counter>,
second_map map<int, counter>

With my proposed data model:

SELECT first_map FROM table would translate to

SELECT map_key, count FROM my_counters_map WHERE partition_key = xxx AND
map_name = 'first_map';





Cassandra Triggers

2016-11-09 Thread Nethi, Manoj
Hi,
Are Triggers in  Cassandra production ready ?
Version: Cassandra 3.3.0

Thanks
Manoj



Re: Cassandra Triggers

2016-11-09 Thread DuyHai Doan
They are production-ready in the sense that they are fully functional. But
using them requires a *deep* knowledge of Cassandra's internal write path and
is dangerous, because the write path is critical.

Alternatively, if you need a notification system for new mutations, there is the
CDC feature, available since 3.9 only (maybe not production-ready yet).



Log traces of debug logs

2016-11-09 Thread Benjamin Roth
Hi!

Is there a way to tell logback to log the stack trace of a debug log entry? The
background is that I'd like to know where a table flush is triggered from.

Thanks guys!


Re: Log traces of debug logs

2016-11-09 Thread Vladimir Yudovin
Hi,



you can change the log level with the nodetool setlogginglevel command.



Best regards, Vladimir Yudovin, 

Winguzone - Hosted Cloud Cassandra
Launch your cluster in minutes.






Re: Log traces of debug logs

2016-11-09 Thread Benjamin Roth
I don't want to change the log level. I want to add a stack trace to the log entry.
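For reference, logback can print the origin of a log call via the %caller conversion word in the encoder pattern. As a language-neutral sketch of the idea (Python's stdlib logging rather than logback, purely for illustration), stack_info=True appends the stack of the logging call to the record:

```python
import io
import logging

# Capture log output in a buffer so we can inspect it.
buf = io.StringIO()
logger = logging.getLogger("flush-demo")
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.DEBUG)

def trigger_flush():
    # stack_info=True appends the calling stack to the log record,
    # revealing where this debug line was triggered from.
    logger.debug("flushing memtable", stack_info=True)

trigger_flush()
out = buf.getvalue()
assert "flushing memtable" in out
assert "trigger_flush" in out   # the appended stack names the calling function
```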



Re: Cassandra Triggers

2016-11-09 Thread sat
Hi,

We are doing a POC on Cassandra for our business needs. We also need some
kind of notification when a column/attribute of a table is modified
(insert/update/delete).

Thanks for sharing the information about CDC. Could you please point us to some
examples of how to implement this in Cassandra 3.9?


Thanks and Regards
A.SathishKumar



-- 
A.SathishKumar
044-24735023


Re: Cassandra Triggers

2016-11-09 Thread DuyHai Doan
https://issues.apache.org/jira/browse/CASSANDRA-8844



Re: How to confirm TWCS is fully in-place

2016-11-09 Thread Lahiru Gamathige
Hi Kurt,

Thanks !

Lahiru



repair -pr in crontab

2016-11-09 Thread Artur Siekielski

Hi,
the docs give me the impression that repairs should be run manually rather
than put in crontab by default. Should each repair run be monitored manually?


If I put "repair -pr" in crontab for each node, with a few hours' difference
between the runs, are there any risks with such a setup? Specifically:
- if two or more "repair -pr" runs on different nodes happen at the same time,
can it cause any problems besides high load?

- can "repair -pr" be run simultaneously on all nodes at the same time?
- I'm using the default gc_grace_period of 10 days. Are there any reasons to
run repair more often than once per 10 days, in case a previous repair fails?
- how can I monitor the start and finish times of repairs, and whether the runs
were successful? Is the "nodetool repair" command guaranteed to exit only after
the repair has finished, and does it return a status code to the shell?


Re: Re: A difficult data model with C*

2016-11-09 Thread Diamond ben
That solution may work. However, the play list will grow over time, and some
users may have tens of thousands of entries, which will slow down the query and
the sort. Do you mean the oldest entry should be removed when a new play is
added?

BTW, the version is 2.1.16 in our live system.


BRs,

BEN

