Thanks a lot. It seems that a fix is committed now and the fix will appear in
the next release, so I won't need my own patched cassandra :)
Best regards, Vitalii Tymchyshyn.
2012/5/3 Andrey Kolyadenko
> Hi Vitalii,
>
> I sent the patch.
>
>
> 2012/4/24 Віталій Тимчишин
>
>> Glad you've got it working p…
Glad you've got it working properly. I've tried to make the changes as
"local" as possible, so I changed only the single value calculation. But it's
possible your way is better and will be accepted by the cassandra maintainer.
Could you attach your patch to the ticket? I'd like for any fix to be applied
to the t…
I agree with your observations.
On the other hand, I found that ColumnFamily.size() doesn't calculate the
object size correctly. It doesn't count the sizes of two Object fields and
returns 0 if there are no objects in the columns container. I increased the
initial value of the size variable to 24, which is the size of two
objects.
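To illustrate the idea (only a rough sketch, not the actual Cassandra
ColumnFamily code; the names and the 24-byte constant here are assumptions
based on the description above):

    import java.util.Collection;

    // Rough sketch only, NOT the real ColumnFamily.size(). The point is to
    // start the running total above zero so that a container with no columns
    // still accounts for the overhead of its two Object fields.
    class ColumnContainerSizeSketch {
        // assumed overhead of the two Object fields mentioned above
        private static final int OBJECT_FIELDS_OVERHEAD = 24;

        interface SizedColumn { int size(); }

        int size(Collection<? extends SizedColumn> columns) {
            int size = OBJECT_FIELDS_OVERHEAD; // previously the total started at 0
            for (SizedColumn column : columns)
                size += column.size();
            return size;
        }
    }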
I've not looked into the CASSANDRA-3721 ticket but…
If you reduce the yaml config setting commitlog_total_space_in_mb, you can
get behaviour similar to the old memtable_flush_* settings that flushed every
CF after X minutes.
Not pretty, but it may work in this case.
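For reference, the cassandra.yaml change would be something like the
following (the value is only an illustration, pick one that fits your disk
and write load):

    # Cap on the total size of the commit log. When the cap is exceeded,
    # Cassandra flushes the memtables that keep the oldest segments dirty,
    # which indirectly forces flushes for rarely-updated CFs as well.
    commitlog_total_space_in_mb: 1024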
Cheers
-
Aar
Hello.
For me " there are no dirty column families" in your message tells it's
possibly the same problem.
The issue is that column families that gets full row deletes only do not
get ANY SINGLE dirty byte accounted and so can't be picked by flusher.
Any ratio can't help simply because it is mu
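A simplified sketch of why no ratio helps here (assumed names, not the
actual Memtable code):

    // Simplified sketch: if a CF only ever sees full-row deletes,
    // dirtyBytesAccounted stays 0, so the estimated live size is 0 no matter
    // how large the ratio is, and the flusher never picks the CF.
    class DirtyAccountingSketch {
        static long estimatedLiveSize(double liveRatio, long dirtyBytesAccounted) {
            return (long) (liveRatio * dirtyBytesAccounted);
        }
    }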
Thank you Vitalii.
Looking at Jonathan's answer to your patch, I think it's probably not my
case. I see that the live ratio is calculated in my case, but the
calculations look strange:
WARN [MemoryMeter:1] 2012-04-23 23:29:48,430 Memtable.java (line 181)
setting live ratio to maximum of 64 instead of I…
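For context, the calculation behind that warning is roughly the following
(a simplified sketch with assumed names, not the exact Memtable code):

    // Ratio of the measured in-memory size to the bytes written into the
    // memtable. With almost nothing accounted as written, the ratio explodes
    // and is clamped to the 64 maximum, producing the warning quoted above.
    class LiveRatioSketch {
        static double computeLiveRatio(long measuredDeepSize, long throughputBytes) {
            double ratio = (double) measuredDeepSize / throughputBytes;
            return Math.min(64.0, Math.max(1.0, ratio));
        }
    }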
See https://issues.apache.org/jira/browse/CASSANDRA-3741
I did post a fix there that helped me.
2012/4/24 crypto five
> Hi,
>
> I have 50 million rows in a column family on a 4 GB RAM box. I allocated
> 2 GB to cassandra.
> I have a program which is traversing this CF and cleaning some data there,
>
Hi,
I have 50 million rows in a column family on a 4 GB RAM box. I allocated
2 GB to cassandra.
I have a program which is traversing this CF and cleaning some data there;
it generates about 20k delete statements per second.
After about 3 million deletions, cassandra stops responding to queries: it…