Re: Quorum read after quorum write guarantee

2013-03-10 Thread Chuan-Heng Hsiao
Hi André, I am just a user of Cassandra and have not looked into the code deeply. However, my guess is that Cassandra only guarantees that if you successfully write and you successfully read, then quorum will give you the latest data. Not finding the just-inserted data may be due to the failure of s
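The overlap argument behind that guarantee can be checked with simple arithmetic. A sketch (the replication factor of 3 is an assumed example, not taken from the thread):

```shell
# Quorum overlap: with replication factor RF, a quorum is floor(RF/2)+1.
# A quorum write plus a quorum read must touch at least one common replica
# whenever W + R > RF, so a successful QUORUM read is guaranteed to see
# the latest successful QUORUM write.
RF=3
QUORUM=$(( RF / 2 + 1 ))
W=$QUORUM
R=$QUORUM
echo "quorum=$QUORUM"
if [ $(( W + R )) -gt "$RF" ]; then
  echo "overlap guaranteed"
else
  echo "stale reads possible"
fi
```

Note this says nothing about writes that *failed* at QUORUM: such a write may still have landed on some replicas, which is exactly the caveat Hsiao raises.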

Re: about take snapshots

2012-11-26 Thread Chuan-Heng Hsiao
Hi Francisco, I think it's the normal behavior of nodetool with Cassandra 1.1.6. Sincerely, Hsiao On Mon, Nov 26, 2012 at 10:12 PM, Francisco Trujillo Hacha wrote: > Hi > > I have a one-node Cassandra installation (1.1.6) with only one column > family. When I tried to execute: > > ./nodetool -
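For reference, taking and clearing a snapshot with nodetool looks roughly like this. A sketch: the keyspace name MyKeyspace and the tag are hypothetical, and each command is echoed and only executed if nodetool is installed, so the sketch runs without a live node:

```shell
# Print each command; execute it only when nodetool is actually available.
run() { echo "+ $*"; command -v nodetool >/dev/null 2>&1 && "$@"; }

# Take a snapshot of one keyspace under an explicit tag (names hypothetical).
run nodetool -h localhost snapshot -t pre_change MyKeyspace

# Snapshots land under each column family's data directory, e.g.
#   <data_dir>/MyKeyspace/<cf>/snapshots/pre_change/
# Clear old snapshots when done, since they pin disk space via hard links.
run nodetool -h localhost clearsnapshot MyKeyspace
```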

Re: continue seeing "Finished hinted handoff of 0 rows to endpoint"

2012-11-25 Thread Chuan-Heng Hsiao
"scrub system HintsColumnFamily" trick worked this time. I'll try to reproduce the situation and check the phantom TCP connections. Sincerely, Hsiao On Mon, Nov 26, 2012 at 8:50 AM, Mina Naguib wrote: > > > On 2012-11-24, at 10:37 AM, Chuan-Heng Hsiao > wrote: > >

Re: huge commitlog

2012-11-25 Thread Chuan-Heng Hsiao
e ERROR about "Keys must not be empty". > > Do you have the full error stack ? > > Cheers > > - > Aaron Morton > Freelance Cassandra Developer > New Zealand > > @aaronmorton > http://www.thelastpickle.com > > On 25/11/2012, at 4:

Re: continue seeing "Finished hinted handoff of 0 rows to endpoint"

2012-11-24 Thread Chuan-Heng Hsiao
e: > >> Some people (myself included) have seen issues when upgrading from 1.1.2 >> to 1.1.6 with tombstoned rows in the HintsColumnFamily >> >> Some (myself included) have fixed this by doing a >> >> nodetool scrub system HintsColumnFamily >> -mike
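The workaround quoted in the thread, sketched as a runnable snippet (the command is echoed and only executed when nodetool is installed, so it is safe to run without a node):

```shell
# Print each command; execute it only when nodetool is actually available.
run() { echo "+ $*"; command -v nodetool >/dev/null 2>&1 && "$@"; }

# Scrub the hints column family in the system keyspace: this rewrites its
# SSTables, which clears out the problematic tombstoned hint rows left
# behind by the 1.1.2 -> 1.1.6 upgrade.
run nodetool -h localhost scrub system HintsColumnFamily
```

Afterwards the repeated "Finished hinted handoff of 0 rows to endpoint" log lines should stop.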

continue seeing "Finished hinted handoff of 0 rows to endpoint"

2012-11-24 Thread Chuan-Heng Hsiao
Hi Cassandra Devs, I intended to reduce the size of the db by the following steps: 1. removing all keys from one cf (somehow I can get all keys from the cf). 2. run nodetool cleanup on that cf, one node at a time. The size of the cf on one node is about 150 G; I've made another cf with the same
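The node-by-node cleanup described above can be sketched as a loop. Hostnames and keyspace/cf names are hypothetical, and commands are echoed so the sketch runs without a cluster:

```shell
# Print each command; execute it only when nodetool is actually available.
run() { echo "+ $*"; command -v nodetool >/dev/null 2>&1 && "$@"; }

# Run cleanup one node at a time so only one node is compacting at once.
for host in node1 node2 node3 node4; do
  run nodetool -h "$host" cleanup MyKeyspace MyColumnFamily
done
```

One caveat worth noting: cleanup only drops data a node no longer owns (e.g. after ring changes); space from deleted keys is reclaimed by compaction, and only after gc_grace_seconds has passed.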

Re: huge commitlog

2012-11-24 Thread Chuan-Heng Hsiao
in. Sincerely, Hsiao On Mon, Nov 19, 2012 at 11:21 AM, Chuan-Heng Hsiao wrote: > I have RF = 3. Read/Write consistency has already been set to TWO. > > It did seem that the data were not consistent yet. > (There are some CFs that I expected to be empty after the operations, but stil

Re: huge commitlog

2012-11-18 Thread Chuan-Heng Hsiao
) Sincerely, Hsiao On Mon, Nov 19, 2012 at 11:14 AM, Tupshin Harper wrote: > What consistency level are you writing with? If you were writing with ANY, > try writing with a higher consistency level. > > -Tupshin > > On Nov 18, 2012 9:05 PM, "Chuan-Heng Hsiao" > wrote:

Re: huge commitlog

2012-11-18 Thread Chuan-Heng Hsiao
> > As a work around nodetool flush should checkpoint the log. > > Cheers > > - > Aaron Morton > Freelance Cassandra Developer > New Zealand > > @aaronmorton > http://www.thelastpickle.com > > On 17/11/2012, at 2:30 PM, Chuan-Heng Hsiao >
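The workaround quoted above (checkpointing the commitlog via a flush) as a runnable sketch; the command is echoed and only executed when nodetool is installed:

```shell
# Print each command; execute it only when nodetool is actually available.
run() { echo "+ $*"; command -v nodetool >/dev/null 2>&1 && "$@"; }

# Flush all memtables to SSTables on disk. Once every dirty memtable
# referenced by a commitlog segment has been flushed, that segment can
# be recycled, which lets the commitlog shrink back down.
run nodetool -h localhost flush
```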

huge commitlog

2012-11-16 Thread Chuan-Heng Hsiao
Hi Cassandra Developers, I am experiencing a huge commitlog size (200+G) after inserting a huge amount of data. It is a 4-node cluster with RF = 3, and currently each node has 200+G of commit log (so there is around 1T of commit log in total). The setting of commitlog_total_space_in_mb is the default. I am using 1.
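The scale of the problem can be checked with shell arithmetic. The 4096 MB figure below is an assumption (the default commitlog_total_space_in_mb shipped in 1.x-era cassandra.yaml for 64-bit JVMs); the node count and per-node size come from the report:

```shell
# Reported: 4 nodes, each carrying 200+ GB of commitlog.
NODES=4
PER_NODE_GB=200
TOTAL_GB=$(( NODES * PER_NODE_GB ))
echo "cluster-wide commitlog: ~${TOTAL_GB} GB"

# Assumed default cap for 1.x 64-bit installs: 4096 MB per node.
# The observed size is ~50x the cap, which suggests segments were not
# being recycled -- e.g. a memtable that never flushed kept them pinned.
CAP_MB=4096
echo "configured cap: $(( CAP_MB / 1024 )) GB per node"
```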

Re: Strange delay in query

2012-11-06 Thread Chuan-Heng Hsiao
Hi Andre, I am just a Cassandra user, so the following suggestions may not be valid. I assume you are using cassandra-cli and connecting to some specific node. You can check the following steps: 1. Can you still reproduce this issue? (If not, it may be a system/node issue.) 2. What's the result when q

Re: Cassandra with large number of columns per row

2012-08-21 Thread Chuan-Heng Hsiao
orton > Freelance Developer > @aaronmorton > http://www.thelastpickle.com > > On 20/08/2012, at 8:15 PM, Chuan-Heng Hsiao > wrote: > > I think the limit of the size per row in cassandra is 2G? > > 10k x 1M = 10G. > > Hsiao > > On Mon, Aug 20, 2012 at 1:07 PM, ou

Re: Cassandra with large number of columns per row

2012-08-20 Thread Chuan-Heng Hsiao
I think the limit of the size per row in cassandra is 2G? 10k x 1M = 10G. Hsiao On Mon, Aug 20, 2012 at 1:07 PM, oupfevph wrote: > I setup cassandra with default configuration in a clean AWS instance, and I > insert 10k columns into a row, each column has 1MB of data. I use this > ruby(versio
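The arithmetic behind Hsiao's point, as a sketch (the ~2 GB figure is the practical per-row limit commonly cited for Cassandra of that era, since an entire row lives on a single replica set and must be handled by a single node during compaction):

```shell
# 10k columns of 1 MB each in a single row:
COLUMNS=10000
COLUMN_MB=1
ROW_MB=$(( COLUMNS * COLUMN_MB ))
echo "row size: ${ROW_MB} MB (~10 GB)"

# Compare against the ~2 GB per-row figure raised in the thread.
LIMIT_MB=2048
if [ "$ROW_MB" -gt "$LIMIT_MB" ]; then
  echo "row exceeds the discussed limit"
fi
```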