Re: 4k keyspaces... Maybe we're doing it wrong?

2010-09-06 Thread Janne Jalkanen
#2. Performance: Will Cassandra work better with a single keyspace + lots of
keys, or thousands of keyspaces?



Thousands is a non-starter.  There is an active memtable for every CF
defined and caches (row and key) are per CF.  Assuming even 2 CFs per
keyspace,  with 4000 keyspaces you will have 8000 active memtables,
8000 _sets_ of sstables to be compacted, etc.  Put everyone in the
same keyspace, use a prefix on keys to distinguish different clients.
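
A minimal sketch of that shared-keyspace approach (the client id, delimiter
and helper name below are illustrative, not from the original mail):

// All clients share one keyspace and CF; each client's rows are
// namespaced by prefixing its id onto the row key.
public final class ClientKeys {
    private ClientKeys() {}

    public static String rowKey(String clientId, String key) {
        return clientId + ":" + key;   // e.g. "client42:user-1001"
    }
}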


So if I read this right, using lots of CF's is also a Bad Idea(tm)?

/Janne


Re: skip + limit support in GetSlice

2010-09-06 Thread Michal Augustýn
Hi Mike,

yes, I read the PDF from start to finish. Twice. As I wrote, my application is not
accessed by users, it's accessed by other applications that can access pages
randomly.

So when some application wants to get page 51235 (so skip is 5123500, limit
is 100) then I have to:

1) GetSlice(from: "", to: "", limit: 5123500)
2) Read only the last column name.
3) GetSlice(from: point2value, to: "", limit: 100)

The problem is in 1) - Cassandra has to read 5123500 columns, serialize
them, send them using the Thrift protocol, and deserialize them. Finally, I
throw 5,123,499 columns away. It doesn't seem to be very efficient.
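
Assuming the 0.6 Thrift API, the workaround looks roughly like the sketch
below (keyspace, CF and row names are illustrative, and error handling is
omitted):

import java.util.List;
import org.apache.cassandra.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class DeepSkipSketch {
    public static void main(String[] args) throws Exception {
        TTransport transport = new TSocket("localhost", 9160);
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));

        int skip = 5123500, pageSize = 100;
        ColumnParent parent = new ColumnParent("Items");

        // 1) read (and mostly discard) 'skip' columns just to learn the cursor
        SlicePredicate skipPred = new SlicePredicate();
        skipPred.setSlice_range(new SliceRange(new byte[0], new byte[0], false, skip));
        List<ColumnOrSuperColumn> skipped =
            client.get_slice("Keyspace1", "row1", parent, skipPred, ConsistencyLevel.ONE);

        // 2) the last column name returned becomes the start of the real page
        byte[] start = skipped.get(skipped.size() - 1).getColumn().getName();

        // 3) fetch the page itself, starting from that column name
        SlicePredicate pagePred = new SlicePredicate();
        pagePred.setSlice_range(new SliceRange(start, new byte[0], false, pageSize));
        List<ColumnOrSuperColumn> page =
            client.get_slice("Keyspace1", "row1", parent, pagePred, ConsistencyLevel.ONE);

        System.out.println("page of " + page.size() + " columns");
        transport.close();
    }
}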

So I'm looking for another solution for this scenario. I know the right way
to do pagination in Cassandra and I'm using it when I can...

So if this kind of pagination cannot be added to standard Cassandra Thrift
API then I should create some separate Thrift API that will handle my
scenario (and avoid high network traffic). Am I right?

Thanks!

Augi


2010/9/5 Mike Peters 

>  Hi Michal,
>
> Did you read the PDF Stu sent over, start to finish?  There are several
> different approaches described there.
>
> With Cassandra, what we found works best for pagination:
> * Keep a separate 'total_records' count and increment/decrement it on every
> insert/delete
> * When getting slices, pass 'last seen' as the 'from' and keep the 'to'
> empty.  Pass the number of records you want to show per page in the 'count'.
> * Avoid letting user skip to page X, using Next/Prev/First/Last only (same
> way GMail does it)
>
>
>
> Michal Augustýn wrote:
>
> I know that "Prev/Next" is a good solution for web applications. But when I
> want to access data from another application or when I want to access pages
> randomly...
>
>  I don't know the internal structure of memtables etc., so I don't know if
> columns in a row are indexable. If not, then I just want to transfer my
> workaround to server (to avoid huge network traffic)...
>
> 2010/9/5 Stu Hood 
>
>> Cassandra supports the recommended approach from:
>> http://www.percona.com/ppc2009/PPC2009_mysql_pagination.pdf
>>
>> For large numbers of items, skip + limit is extremely inefficient.
>>
>> -Original Message-
>> From: "Michal Augustýn" 
>> Sent: Sunday, September 5, 2010 5:39am
>> To: user@cassandra.apache.org
>> Subject: skip + limit support in GetSlice
>>
>> Hello,
>>
>> probably this is feature request. Simply, I would like to have support for
>> standard pagination (skip + limit) in GetSlice Thrift method. Is this
>> feature on the road map?
>>
>> Now, I have to perform GetSlice call, that starts on "" and "limit" is set
>> to "skip" value. Then I read the last column name returned and
>> subsequently
>> perform the final GetSlice call - I use the last column name as "start"
>> and
>> set "limit" to "limit" value.
>>
>> This workaround is not very efficient when I need to skip a lot of columns
>> (so "skip" is high) - then a lot of data must be transferred via network.
>> So
>> I think that support for Skip in GetSlice would be very useful (to avoid
>> high network traffic).
>>
>> The implementation could be very straightforward (same as the workaround)
>> or
>> maybe it could be more efficient - I think that whole row (so all columns)
>> must fit into memory so if we have all columns in memory...
>>
>> Thank you!
>>
>> Augi
>>
>>
>>
>
>


RE: skip + limit support in GetSlice

2010-09-06 Thread Dr. Martin Grabmüller
Have you considered creating a second column family which acts as an index for
the original column family?  Have the record number as the column name, and the
value as the identifier (primary key) of the original data, and do a 
 
 1.  get_slice(<index row key>, start='00051235', finish='', limit=100)
 2.  get_slice(<original row key>, columns=<the 100 values from step 1>)
 
This way, only 100 columns are returned on the first call, and 100 columns (or 
super columns)
on the second.  You have two calls instead of one, but it should be faster 
because
much less data is transferred (and the latency can be hidden by concurrency).
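
Assuming the 0.6 Thrift API, the two calls could look roughly like this
(CF, row and key names are illustrative, and 'client' is a connected
Cassandra.Client):

import java.util.ArrayList;
import java.util.List;
import org.apache.cassandra.thrift.*;

public class IndexedPageSketch {
    // Fetch one page of 'pageSize' columns via a separate index CF whose
    // column names are record numbers and whose values name the matching
    // columns in the data row (one illustrative reading of the scheme).
    public static List<ColumnOrSuperColumn> fetchPage(
            Cassandra.Client client, String firstRecordNo, int pageSize) throws Exception {

        // 1) slice the index row starting at the wanted record number
        SlicePredicate byRecordNo = new SlicePredicate();
        byRecordNo.setSlice_range(new SliceRange(
                firstRecordNo.getBytes("UTF-8"), new byte[0], false, pageSize));
        List<ColumnOrSuperColumn> indexCols = client.get_slice(
                "Keyspace1", "page-index", new ColumnParent("ItemIndex"),
                byRecordNo, ConsistencyLevel.ONE);

        // 2) ask the data row for exactly those columns by name
        List<byte[]> names = new ArrayList<byte[]>();
        for (ColumnOrSuperColumn c : indexCols) {
            names.add(c.getColumn().getValue());
        }
        SlicePredicate byName = new SlicePredicate();
        byName.setColumn_names(names);
        return client.get_slice(
                "Keyspace1", "items", new ColumnParent("Items"),
                byName, ConsistencyLevel.ONE);
    }
}

Called as fetchPage(client, "00051235", 100), this returns the page in two
round trips of at most 100 columns each.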
 
Martin




From: Michal Augustýn [mailto:augustyn.mic...@gmail.com] 
Sent: Monday, September 06, 2010 10:26 AM
To: user@cassandra.apache.org
Subject: Re: skip + limit support in GetSlice


Hi Mike, 

yes, I read the PDF to the finish. Twice. As I wrote, my application is 
not accessed by users, it's accessed by other applications that can access 
pages randomly.

So when some application wants to get page 51235 (so skip is 5123500, 
limit is 100) then I have to:

1) GetSlice(from: "", to: "", limit: 5123500)
2) Read only the last column name.
3) GetSlice(from: point2value, to: "", limit: 100)

The problem is in 1) - Cassandra has to read 5123500 columns, serialize 
them, send them using  Thrift protocol and deserialize them. Finally, I throw 
5123499 of columns away. It doesn't seem to be very efficient.

So I'm looking for another solution for this scenario. I know the right 
way for pagination in Cassandra and I'm using them if I can...

So if this kind of pagination cannot be added to standard Cassandra 
Thrift API then I should create some separate Thrift API that will handle my 
scenario (and avoid high network traffic). Am I right?

Thanks!

Augi


2010/9/5 Mike Peters 


Hi Michal,

Did you read the PDF Stu sent over, start to finish?  There are 
several different approaches described there.

With Cassandra, what we found works best for pagination:
* Keep a separate 'total_records' count and increment/decrement 
it on every insert/delete
* When getting slices, pass 'last seen' as the 'from' and keep 
the 'to' empty.  Pass the number of records you want to show per page in the 
'count'.
* Avoid letting user skip to page X, using Next/Prev/First/Last 
only (same way GMail does it) 



Michal Augustýn wrote: 

I know that "Prev/Next" is good solution for web 
applications. But when I want to access data from another application or when I 
want to access pages randomly... 

I don't know the internal structure of memtables etc., 
so I don't know if columns in a row are indexable. If not, then I just want to 
transfer my workaround to server (to avoid huge network traffic)...


2010/9/5 Stu Hood 


Cassandra supports the recommended approach 
from: http://www.percona.com/ppc2009/PPC2009_mysql_pagination.pdf

For large numbers of items, skip + limit is 
extremely inefficient.


-Original Message-
From: "Michal Augustýn" 

Sent: Sunday, September 5, 2010 5:39am
To: user@cassandra.apache.org
Subject: skip + limit support in GetSlice

Hello,

probably this is feature request. Simply, I 
would like to have support for
standard pagination (skip + limit) in GetSlice 
Thrift method. Is this
feature on the road map?

Now, I have to perform GetSlice call, that 
starts on "" and "limit" is set
to "skip" value. Then I read the last column 
name returned and subsequently
perform the final GetSlice call - I use the 
last column name as "start" and
set "limit" to "limit" value.

This workaround is not very efficient when I 
need to skip a lot of columns
(so "skip" is high) - then a lot of data must 
be transferred via network. So
I think that support for Skip in GetSlice would 
be very useful (to avoid
high network traffic).

Re: How to implement (generic) ACID on application level

2010-09-06 Thread Michal Augustýn
Thank you for the great link!
The mentioned solution uses locking, but I would prefer some optimistic
strategy (because conflicts are rare in my situation). Still, I'm afraid
that this is really the best solution...

So the solution is probably to use some kind of

2010/9/6 Reza Lesmana 

> I read an article about using CAGES with Cassandra to achieve locking
> and transaction...
>
> Here is the link :
>
>
> http://ria101.wordpress.com/2010/05/12/locking-and-transactions-over-cassandra-using-cages/
>
> On 9/5/10, Michal Augustýn  wrote:
> > Hello,
> >
> > we can read everywhere that Cassandra (and similar NoSQL solutions)
> doesn't
> > support full ACID and (when we want to have ACID) we have to implement
> ACID
> > in higher layers of our application. Are there some good resources on how
> to
> > implement ACID on higher layers? I.e. how to implement repository
> > pattern/DAO with ACID support when Cassandra is the database.
> >
> > I'm sure that some pessimistic solution (locks) is absolutely unsuitable
> for
> > Cassandra so the solution probably would deal with optimistic
> concurrency...
> >
> > Thank you!
> >
> > Augi
> >
>


Re: skip + limit support in GetSlice

2010-09-06 Thread Michal Augustýn
Thank you! This solves my issue.

But what about recomputing the index (after new columns are inserted)?

Should I use asynchronous triggers?
https://issues.apache.org/jira/browse/CASSANDRA-1311
Or will 0.7's
secondary indexes handle this?
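
Until triggers or built-in indexes cover it, one option is for the writer to
maintain the index itself, writing the data column and its index entry in one
batch_mutate. A rough sketch against the 0.6 API (CF, key and record-number
scheme are illustrative; batch_mutate is not atomic across rows, and producing
dense, ordered record numbers under concurrent writers is the genuinely hard
part):

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.cassandra.thrift.*;

public class IndexedInsertSketch {
    public static void insertWithIndex(Cassandra.Client client, String itemKey,
                                       String recordNo, byte[] value) throws Exception {
        long ts = System.currentTimeMillis() * 1000;

        // data column for the item itself
        ColumnOrSuperColumn dataCosc = new ColumnOrSuperColumn();
        dataCosc.setColumn(new Column("payload".getBytes("UTF-8"), value, ts));
        Mutation dataMut = new Mutation();
        dataMut.setColumn_or_supercolumn(dataCosc);

        // index column: record number -> item key
        ColumnOrSuperColumn idxCosc = new ColumnOrSuperColumn();
        idxCosc.setColumn(new Column(recordNo.getBytes("UTF-8"),
                                     itemKey.getBytes("UTF-8"), ts));
        Mutation idxMut = new Mutation();
        idxMut.setColumn_or_supercolumn(idxCosc);

        Map<String, Map<String, List<Mutation>>> mutations =
                new HashMap<String, Map<String, List<Mutation>>>();
        mutations.put(itemKey, Collections.singletonMap("Items",
                Collections.singletonList(dataMut)));
        mutations.put("page-index", Collections.singletonMap("ItemIndex",
                Collections.singletonList(idxMut)));

        client.batch_mutate("Keyspace1", mutations, ConsistencyLevel.QUORUM);
    }
}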

Augi

2010/9/6 Dr. Martin Grabmüller 

>  Have you considered creating a second column family which acts as an
> index for
> the original column family?  Have the record number as the column name, and
> the
> value as the identifier (primary key) of the original data, and do a
>
> 1.  get_slice(<index row key>, start='00051235', finish='', limit=100)
> 2.  get_slice(<original row key>, columns=<the 100 values from step 1>)
>
> This way, only 100 columns are returned on the first call, and 100 columns
> (or super columns)
> on the second.  You have two calls instead of one, but it should be faster
> because
> much less data is transferred (and the latency can be hidden by
> concurrency).
>
> Martin
>
>  --
> *From:* Michal Augustýn [mailto:augustyn.mic...@gmail.com]
> *Sent:* Monday, September 06, 2010 10:26 AM
>
> *To:* user@cassandra.apache.org
> *Subject:* Re: skip + limit support in GetSlice
>
> Hi Mike,
>
> yes, I read the PDF to the finish. Twice. As I wrote, my application is not
> accessed by users, it's accessed by other applications that can access pages
> randomly.
>
> So when some application wants to get page 51235 (so skip is 5123500, limit
> is 100) then I have to:
>
> 1) GetSlice(from: "", to: "", limit: 5123500)
> 2) Read only the last column name.
> 3) GetSlice(from: point2value, to: "", limit: 100)
>
> The problem is in 1) - Cassandra has to read 5123500 columns, serialize
> them, send them using  Thrift protocol and deserialize them. Finally, I
> throw 5123499 of columns away. It doesn't seem to be very efficient.
>
> So I'm looking for another solution for this scenario. I know the right way
> for pagination in Cassandra and I'm using them if I can...
>
> So if this kind of pagination cannot be added to standard Cassandra Thrift
> API then I should create some separate Thrift API that will handle my
> scenario (and avoid high network traffic). Am I right?
>
> Thanks!
>
> Augi
>
>
> 2010/9/5 Mike Peters 
>
>> Hi Michal,
>>
>> Did you read the PDF Stu sent over, start to finish?  There are several
>> different approaches described there.
>>
>> With Cassandra, what we found works best for pagination:
>> * Keep a separate 'total_records' count and increment/decrement it on
>> every insert/delete
>> * When getting slices, pass 'last seen' as the 'from' and keep the 'to'
>> empty.  Pass the number of records you want to show per page in the 'count'.
>> * Avoid letting user skip to page X, using Next/Prev/First/Last only (same
>> way GMail does it)
>>
>>
>>
>> Michal Augustýn wrote:
>>
>> I know that "Prev/Next" is good solution for web applications. But when I
>> want to access data from another application or when I want to access pages
>> randomly...
>>
>> I don't know the internal structure of memtables etc., so I don't know if
>> columns in a row are indexable. If not, then I just want to transfer my
>> workaround to server (to avoid huge network traffic)...
>>
>> 2010/9/5 Stu Hood 
>>
>>> Cassandra supports the recommended approach from:
>>> http://www.percona.com/ppc2009/PPC2009_mysql_pagination.pdf
>>>
>>> For large numbers of items, skip + limit is extremely inefficient.
>>>
>>> -Original Message-
>>> From: "Michal Augustýn" 
>>> Sent: Sunday, September 5, 2010 5:39am
>>> To: user@cassandra.apache.org
>>> Subject: skip + limit support in GetSlice
>>>
>>> Hello,
>>>
>>> probably this is feature request. Simply, I would like to have support
>>> for
>>> standard pagination (skip + limit) in GetSlice Thrift method. Is this
>>> feature on the road map?
>>>
>>> Now, I have to perform GetSlice call, that starts on "" and "limit" is
>>> set
>>> to "skip" value. Then I read the last column name returned and
>>> subsequently
>>> perform the final GetSlice call - I use the last column name as "start"
>>> and
>>> set "limit" to "limit" value.
>>>
>>> This workaround is not very efficient when I need to skip a lot of
>>> columns
>>> (so "skip" is high) - then a lot of data must be transferred via network.
>>> So
>>> I think that support for Skip in GetSlice would be very useful (to avoid
>>> high network traffic).
>>>
>>> The implementation could be very straightforward (same as the workaround)
>>> or
>>> maybe it could be more efficient - I think that whole row (so all
>>> columns)
>>> must fit into memory so if we have all columns in memory...
>>>
>>> Thank you!
>>>
>>> Augi
>>>
>>>
>>>
>>
>>
>


Re: Migration from 6.X to 7.X

2010-09-06 Thread Edward Capriolo
On Mon, Sep 6, 2010 at 3:33 AM, Ran Tavory  wrote:
> we don't have one version that supports both versions.
> you can hack it if you download the source code (create two java package
> trees for 0.6.0 and 0.7.0) but it's not on the shelf, sorry...
>
> On Mon, Sep 6, 2010 at 12:39 AM, Edward Capriolo 
> wrote:
>>
>> I am looking to move from 6.0 to 7.0 soon. Will one version of hector
>> support both 6.0 and 7.0? This would be great as performing a
>> cassandra upgrade and an app server upgrade at the same time is always
>> tricky?
>>
>> Thank you,
>> Edward
>
>
>
> --
> /Ran
>

I am going to cross post this to get the vibe on what people are
thinking. Does it make sense that the Thrift API for 7.X should also
have deprecated methods that match the signature of 6.X? In this way,
code that was linked to the old signatures would not have to be
recoded.

As I said above, timing an upgrade and deploying new code across two
clusters with minimal downtime is tricky.

Edward


Re: Migration from 6.X to 7.X

2010-09-06 Thread Jonathan Ellis
Thrift does not support method overloading (methods with the same name
but different parameters).

On Mon, Sep 6, 2010 at 9:09 AM, Edward Capriolo  wrote:
> On Mon, Sep 6, 2010 at 3:33 AM, Ran Tavory  wrote:
>> we don't have one version that supports both versions.
>> you can hack it if you download the source code (create two java package
>> trees for 0.6.0 and 0.7.0) but it's not on the shelf, sorry...
>>
>> On Mon, Sep 6, 2010 at 12:39 AM, Edward Capriolo 
>> wrote:
>>>
>>> I am looking to move from 6.0 to 7.0 soon. Will one version of hector
>>> support both 6.0 and 7.0? This would be great as performing a
>>> cassandra upgrade and an app server upgrade at the same time is always
>>> tricky?
>>>
>>> Thank you,
>>> Edward
>>
>>
>>
>> --
>> /Ran
>>
>
> I am going to cross post this to get the vibe on what people are
> thinking. Does it make sense that the thrift api for 7.X should also
> have deprecated methods that match the signature of 6.X? In this way,
> code that was linked to the old signatures would not have to be
> recoded.
>
> As I said above, timing an upgrade and deploying new code across two
> clusters with minimal downtime is tricky.
>
> Edward
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com


Re: Migration from 6.X to 7.X

2010-09-06 Thread Edward Capriolo
On Monday, September 6, 2010, Jonathan Ellis  wrote:
> Thrift does not support method overloading (methods with the same name
> but different parameters).
>
> On Mon, Sep 6, 2010 at 9:09 AM, Edward Capriolo  wrote:
>> On Mon, Sep 6, 2010 at 3:33 AM, Ran Tavory  wrote:
>>> we don't have one version that supports both versions.
>>> you can hack it if you download the source code (create two java package
>>> trees for 0.6.0 and 0.7.0) but it's not on the shelf, sorry...
>>>
>>> On Mon, Sep 6, 2010 at 12:39 AM, Edward Capriolo 
>>> wrote:

 I am looking to move from 6.0 to 7.0 soon. Will one version of hector
 support both 6.0 and 7.0? This would be great as performing a
 cassandra upgrade and an app server upgrade at the same time is always
 tricky?

 Thank you,
 Edward
>>>
>>>
>>>
>>> --
>>> /Ran
>>>
>>
>> I am going to cross post this to get the vibe on what people are
>> thinking. Does it make sense that the thrift api for 7.X should also
>> have deprecated methods that match the signature of 6.X? In this way,
>> code that was linked to the old signatures would not have to be
>> recoded.
>>
>> As I said above, timing an upgrade and deploying new code across two
>> clusters with minimal downtime is tricky.
>>
>> Edward
>>
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of Riptano, the source for professional Cassandra support
> http://riptano.com
>


I was not aware of that. Also, is the default for 6.0 non-framed and
7.0 framed? I was thinking we could possibly replace Cassandra.Client, detect the
server version, and use reflection. This way Hector sees the same
interface to Thrift.
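
For what it's worth, the framed/unframed difference by itself is easy to
bridge once the server's version is known somehow (left open here, and
assuming, as confirmed later in the thread, that 0.6 defaults to unframed and
0.7 to framed). This sketch only covers the transport; the generated
Cassandra.Client classes still differ between the two versions:

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class TransportSketch {
    // 0.6 listens unframed by default, 0.7 framed, so wrap the socket accordingly.
    public static Cassandra.Client connect(String host, boolean serverIsZeroSeven)
            throws Exception {
        TSocket socket = new TSocket(host, 9160);
        TTransport transport = serverIsZeroSeven ? new TFramedTransport(socket) : socket;
        transport.open();
        return new Cassandra.Client(new TBinaryProtocol(transport));
    }
}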


Re: Migration from 6.X to 7.X

2010-09-06 Thread Benjamin Black
Welcome to Thrift.

On Mon, Sep 6, 2010 at 4:04 PM, Edward Capriolo  wrote:
>
> I was not aware of that. Also, is the default for 6.0 non-framed and
> 7.0 framed? I was thinking we could possibly replace Cassandra.Client, detect the
> server version, and use reflection. This way Hector sees the same
> interface to Thrift.
>


Re: Migration from 6.X to 7.X

2010-09-06 Thread Jonathan Ellis
On Mon, Sep 6, 2010 at 4:04 PM, Edward Capriolo  wrote:
> I was not aware of that. Also, is the default for 6.0 non-framed and
> 7.0 framed?

Yes.

> I was thinking we could possibly replace Cassandra.Client, detect the
> server version, and use reflection.

This would have to be tested; the speed penalty for using reflection
in Java is fairly high.
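
If the reflection route is taken, resolving the Method once and reusing it
avoids the repeated lookup cost, though Method.invoke is still slower than a
direct call; a small sketch with illustrative names:

import java.lang.reflect.Method;

public final class ReflectiveCall {
    private final Object client;   // the version-specific Cassandra.Client instance
    private final Method method;   // resolved once, reused for every invocation

    public ReflectiveCall(Object client, String name, Class<?>... signature)
            throws NoSuchMethodException {
        this.client = client;
        this.method = client.getClass().getMethod(name, signature);
    }

    public Object invoke(Object... args) throws Exception {
        return method.invoke(client, args);
    }
}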

-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com


Re: 4k keyspaces... Maybe we're doing it wrong?

2010-09-06 Thread Benjamin Black
On Mon, Sep 6, 2010 at 12:41 AM, Janne Jalkanen
 wrote:
>
> So if I read this right, using lots of CF's is also a Bad Idea(tm)?
>

Yes. The problem with lots of keyspaces is that it means lots of CFs, so lots of CFs is also bad.


Flush and compaction happens frequently

2010-09-06 Thread Mubarak Seyed
I have an 8-node cluster; MemtableThreshold is 2 GB per CF, MemtableObjectsCount
is 1.2, heap (min/max) is 30 GB, and there are only 4 ColumnFamilies.

It appears from system.log that a flush happens every < 50 operations
(read or write), compaction is happening very frequently, and I can see
lots of sstables getting created (with smaller sizes). For just 1000
inserts, I see around 20 sstables.

When I change the MemtableThreshold to 1 GB per CF, everything works as desired.

Any idea what could be the problem when I set the MemtableThreshold to 2
GB per CF even though I specified the large heap?

-- 
Thanks,
Mubarak Seyed.


Re: busy thread on IncomingStreamReader ?

2010-09-06 Thread Anty
I encountered the same problem. Has anyone solved it?

On Tue, Apr 20, 2010 at 11:03 AM, Ingram Chen  wrote:

> I check system.log both, but there is no exception logged.
>
> On Tue, Apr 20, 2010 at 10:40, Jonathan Ellis  wrote:
>
>> I don't see csArena-tmp-6-Index.db in the incoming files list.  If
>> it's not there, that means that it did break out of that while loop.
>>
>> Did you check both logs for exceptions?
>>
>> On Mon, Apr 19, 2010 at 9:36 PM, Ingram Chen 
>> wrote:
>> > Ouch ! I talk too early !
>> >
>> > We still suffer same problems after upgrade to 1.6.0_20.
>> >
>> > In JMX StreamingService, I see several weird incoming/outgoing transfers:
>> >
>> > In Host A, 192.168.2.87
>> >
>> > StreamingService Status:
>> > Done with transfer to /192.168.2.88
>> >
>> > StreamingService StreamSources:
>> > [/192.168.2.88]
>> >
>> > StreamingService StreamDestinations:
>> > [/192.168.2.88]
>> >
>> > StreamingService getIncomingFiles=192.168.2.88
>> > [
>> > UserState: /var/lib/cassandra/data/UserState/multiMine-tmp-11-Index.db
>> > 0/5718,
>> > UserState: /var/lib/cassandra/data/UserState/multiMine-tmp-11-Filter.db
>> > 0/325,
>> > UserState: /var/lib/cassandra/data/UserState/multiMine-tmp-11-Data.db
>> > 0/29831,
>> > UserState: /var/lib/cassandra/data/UserState/csArena-tmp-13-Index.db
>> > 0/47623,
>> >
>> > ... omit several 0 received pending files.
>> >
>> > UserState: /var/lib/cassandra/data/UserState/battleCity2-tmp-19-Data.db
>> > 0/355041,
>> >
>> > UserState: /var/lib/cassandra/data/UserState/mahjong-tmp-12-Data.db
>> > 27711/2173906,
>> > UserState: /var/lib/cassandra/data/UserState/darkChess-tmp-12-Data.db
>> > 27711/18821998,
>> > UserState: /var/lib/cassandra/data/UserState/battleCity2-tmp-6-Data.db
>> > 27711/743037,
>> > UserState: /var/lib/cassandra/data/UserState/big2-tmp-12-Index.db
>> > 27711/189214,
>> > UserState:
>> /var/lib/cassandra/data/UserState/facebookPoker99-tmp-6-Data.db
>> > 27711/1892375,
>> > UserState:
>> /var/lib/cassandra/data/UserState/facebookPoker99-tmp-6-Index.db
>> > 27711/143216,
>> > UserState: /var/lib/cassandra/data/UserState/csArena-tmp-6-Data.db
>> > 27711/201188,
>> > UserState: /var/lib/cassandra/data/UserState/darkChess-tmp-12-Index.db
>> > 27711/354923,
>> > UserState: /var/lib/cassandra/data/UserState/big2-tmp-12-Data.db
>> > 27711/1260768,
>> > UserState: /var/lib/cassandra/data/UserState/mahjong-tmp-12-Index.db
>> > 27711/332649,
>> > UserState: /var/lib/cassandra/data/UserState/battleCity2-tmp-6-Index.db
>> > 27711/39739
>> > ]
>> >
>> > Lots of files stalled after receiving 27711 bytes. This strange number is
>> > the length of the first file to come in; see Host B.
>> >
>> > Host B, 192.168.2.88
>> >
>> > StreamingService Status:
>> > Receiving stream
>> >
>> > StreamingService StreamSources:
>> > StreamSources: [/192.168.2.87]
>> >
>> > StreamingService StreamDestinations:
>> >  [/192.168.2.87]
>> >
>> > StreamingService getOutgoingFiles=192.168.2.87
>> > [
>> > /var/lib/cassandra/data/UserState/stream/csArena-6-Index.db 27711/27711,
>> > /var/lib/cassandra/data/UserState/stream/csArena-6-Filter.db 0/1165,
>> > /var/lib/cassandra/data/UserState/stream/csArena-6-Data.db 0/201188,
>> >
>> > ... omit pending outgoing files 
>> > ]
>> >
>> > It seems that the outgoing files do not terminate properly, which causes the
>> > receiver to go into an infinite loop and produce a busy thread. From the thread dump,
>> > it looks like fc.transferFrom() in IncomingStreamReader never returns:
>> >
>> > while (bytesRead < pendingFile.getExpectedBytes()) {
>> >     bytesRead += fc.transferFrom(socketChannel, bytesRead,
>> >                                  FileStreamTask.CHUNK_SIZE);
>> >     pendingFile.update(bytesRead);
>> > }
>> >
>> >
>> > On Tue, Apr 20, 2010 at 05:48, Rob Coli  wrote:
>> >>
>> >> On 4/17/10 6:47 PM, Ingram Chen wrote:
>> >>>
>> >>> after upgrading jdk from  1.6.0_16 to  1.6.0_20, the problem solved.
>> >>
>> >> FYI, this sounds like it might be :
>> >>
>> >> https://issues.apache.org/jira/browse/CASSANDRA-896
>> >>
>> >>
>> http://bugs.sun.com/view_bug.do;jsessionid=60c39aa55d3666c0c84dd70eb826?bug_id=6805775
>> >>
>> >> Where garbage collection issues in JVM/JDKs before 7.b70 lead to GC
>> >> storming, which hoses performance.
>> >>
>> >> =Rob
>> >
>> >
>> >
>> >
>> >
>>
>
>
>
> --
> Ingram Chen
> online share order: http://dinbendon.net
> blog: http://www.javaworld.com.tw/roller/page/ingramchen
>



-- 
Best Regards
Anty Rao


Re: How to implement (generic) ACID on application level

2010-09-06 Thread Jonathan Shook
... some kind of what?

On Mon, Sep 6, 2010 at 3:38 AM, Michal Augustýn
 wrote:
> Thank you for the great link!
> The mentioned solution uses locking, but I would prefer some optimistic
> strategy (because conflicts are rare in my situation). Still, I'm afraid
> that this is really the best solution...
> So the solution is probably to use some kind of
> 2010/9/6 Reza Lesmana 
>>
>> I read an article about using CAGES with Cassandra to achieve locking
>> and transaction...
>>
>> Here is the link :
>>
>>
>> http://ria101.wordpress.com/2010/05/12/locking-and-transactions-over-cassandra-using-cages/
>>
>> On 9/5/10, Michal Augustýn  wrote:
>> > Hello,
>> >
>> > we can read everywhere that Cassandra (and similar NoSQL solutions)
>> > doesn't
>> > support full ACID and (when we want to have ACID) we have to implement
>> > ACID
>> > in higher layers of our application. Are there some good resources on
>> > how to
>> > implement ACID on higher layers? I.e. how to implement repository
>> > pattern/DAO with ACID support when Cassandra is the database.
>> >
>> > I'm sure that some pessimistic solution (locks) is absolutely unsuitable
>> > for
>> > Cassandra so the solution probably would deal with optimistic
>> > concurrency...
>> >
>> > Thank you!
>> >
>> > Augi
>> >
>
>