Hi André,
I am just a user of Cassandra and have not looked into the code deeply.
However, my guess is that Cassandra only guarantees that
if you successfully write and you successfully read, then QUORUM will
give you the latest data (with RF = 3, a quorum write and a quorum read
each touch 2 replicas, and 2 + 2 > 3, so they overlap on at least one
up-to-date replica).
Not finding the just-inserted data may be due to the failure of
s
Hi Francisco,
I think it's the normal behavior of nodetool with cassandra 1.1.6.
Sincerely,
Hsiao
On Mon, Nov 26, 2012 at 10:12 PM, Francisco Trujillo Hacha
wrote:
> Hi
>
> I have a one-node Cassandra installation (1.1.6) with only one column
> family. When I tried to execute:
>
> ./nodetool -
The "scrub system HintsColumnFamily" trick worked this time.
I'll try to reproduce the situation and check the phantom TCP connections.
Sincerely,
Hsiao
On Mon, Nov 26, 2012 at 8:50 AM, Mina Naguib
wrote:
>
>
> On 2012-11-24, at 10:37 AM, Chuan-Heng Hsiao
> wrote:
>
>
> the ERROR about "Keys must not be empty".
>
> Do you have the full error stack?
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 25/11/2012, at 4:
... wrote:
>
>> Some people (myself included) have seen issues when upgrading from 1.1.2
>> to 1.1.6 with tombstoned rows in the HintsColumnFamily
>>
>> Some (myself included) have fixed this by doing a
>>
>> nodetool scrub system HintsColumnFamily
>> -mike
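Spelled out against a specific node, that might look like the following (the host and JMX port shown are just the usual defaults; adjust for your cluster):

# run the scrub on one node via its JMX interface
nodetool -h 127.0.0.1 -p 7199 scrub system HintsColumnFamily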
Hi Cassandra Devs,
I intended to reduce the size of the db by the following steps:
1. remove all keys from one cf (I can enumerate all the keys in the cf), and
2. run nodetool cleanup on that cf, one node at a time (a rough sketch follows below).
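Roughly like this, assuming a keyspace MyKeyspace, a column family MyCF, and hosts node1/node2 (all of these names are made up):

# step 1: remove the keys via cassandra-cli
# (delete_keys.txt holds one line per key, e.g.: del MyCF['some_key'];)
cassandra-cli -h node1 -p 9160 -k MyKeyspace -f delete_keys.txt

# step 2: run cleanup on that cf, one node at a time
nodetool -h node1 cleanup MyKeyspace MyCF
nodetool -h node2 cleanup MyKeyspace MyCF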
The size of the cf on one node is about 150 GB,
I've made another cf with the same
in.
Sincerely,
Hsiao
On Mon, Nov 19, 2012 at 11:21 AM, Chuan-Heng Hsiao
wrote:
> I have RF = 3. Read/Write consistency has already been set as TWO.
>
> It did seem that the data were not consistent yet.
> (There are some CFs that I expected to be empty after the operations, but still
> contain data.)
Sincerely,
Hsiao
On Mon, Nov 19, 2012 at 11:14 AM, Tupshin Harper wrote:
> What consistency level are you writing with? If you were writing with ANY,
> try writing with a higher consistency level.
>
> -Tupshin
>
> On Nov 18, 2012 9:05 PM, "Chuan-Heng Hsiao"
> wrote:
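If it helps, the consistency level can be raised per-session from cassandra-cli before writing (the column family, key, and value below are made up):

consistencylevel as QUORUM;
set MyCF['row1']['col1'] = 'value1';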
>
> As a workaround, nodetool flush should checkpoint the log.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 17/11/2012, at 2:30 PM, Chuan-Heng Hsiao
>
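As a concrete command, that workaround is just the following, run against each node in turn (host name hypothetical):

# flushing memtables lets Cassandra mark old commit log segments
# as clean so they can be recycled
nodetool -h node1 flush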
Hi Cassandra Developers,
I am experiencing huge commitlog size (200+ GB) after inserting a huge
amount of data.
It is a 4-node cluster with RF = 3, and currently each node has 200+ GB of
commit log (so there is around 1 TB of commit log in total).
The setting of commitlog_total_space_in_mb is the default.
I am using 1.
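For reference, that cap lives in cassandra.yaml; a minimal sketch, assuming the 1.1-era default of 4096 MB on a 64-bit JVM:

# cassandra.yaml -- total commit log size cap
# 200+ GB of segments with this default suggests the cap is not being
# honored, not that it is set too high
commitlog_total_space_in_mb: 4096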
Hi Andre,
I am just a Cassandra user, so the following suggestions may not be valid.
I assume you are using cassandra-cli and connecting to some specific node.
You can check the following steps:
1. Can you still reproduce this issue? (If not, it may have been a transient system/node issue.)
2. What's the result when q
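For step 1, a quick check against one node directly might look like this (host, keyspace, and key are all made up):

# connect straight to one node's Thrift port and read the key back
cassandra-cli -h node1 -p 9160
use MyKeyspace;
get MyCF['the_missing_key'];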
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 20/08/2012, at 8:15 PM, Chuan-Heng Hsiao
> wrote:
>
> I think the limit of the size per row in Cassandra is 2 GB?
>
> 10k x 1 MB = 10 GB.
>
> Hsiao
>
> On Mon, Aug 20, 2012 at 1:07 PM, oupfevph wrote:
I think the limit of the size per row in Cassandra is 2 GB?
10k x 1 MB = 10 GB.
Hsiao
On Mon, Aug 20, 2012 at 1:07 PM, oupfevph wrote:
> I set up Cassandra with the default configuration on a clean AWS instance, and I
> insert 10k columns into a row; each column has 1 MB of data. I use this
> ruby(versio