Thank you very much. This was very helpful. I'll post an update here once
I've finished my data structure design.
What do you mean by "performance loss"? For example, are you seeing it on
the read side or the write side? During compactions? Deletions themselves
shouldn't be expensive, but if you have a lot of tombstones that haven't
been compacted away, reads will be slower, since there is more data to scan.
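To make the tombstone point concrete, here is a toy sketch (plain Java,
nothing Cassandra-specific; the numbers are made up) of why a read gets
slower as deletion markers pile up:

import java.util.ArrayList;
import java.util.List;

public class TombstoneSketch {
    static class Column {
        final String name;
        final boolean tombstone; // deletion marker kept until compaction removes it
        Column(String name, boolean tombstone) {
            this.name = name;
            this.tombstone = tombstone;
        }
    }

    public static void main(String[] args) {
        List<Column> row = new ArrayList<Column>();
        // One live column buried behind 100000 not-yet-compacted tombstones.
        for (int i = 0; i < 100000; i++) {
            row.add(new Column("deleted-" + i, true));
        }
        row.add(new Column("live", false));

        // A read still has to walk past every marker to find live data.
        int scanned = 0;
        String found = null;
        for (Column c : row) {
            scanned++;
            if (!c.tombstone) {
                found = c.name;
                break;
            }
        }
        System.out.println("scanned " + scanned + " columns to find '" + found + "'");
    }
}

Once compaction drops the markers (after gc_grace_seconds has passed), the
same read only touches the live column.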
I recently changed the default_validation_class on a bunch of CFs from
BytesType to UTF8Type, and I observed two things: first, I saw a number of
compactions during the migration that reported "~200% to ~400% of original"
in the log entry. Second, compaction speed seems to have halved since then.
I'm using
I'm guessing something else is responsible for the compaction
difference you're seeing -- Bytes, UTF8, and Ascii types all use the
same lexical byte comparison code. The only place you should expect
to lose a small amount of performance with the latter two is on
insert, when the value is sanity-checked for validity.
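For illustration, a sketch of the two pieces involved (my own stand-ins,
not the actual org.apache.cassandra.db.marshal code): the lexical
unsigned-byte compare that all three types share, and the kind of validity
check that only costs you at insert time:

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;

public class ComparatorSketch {
    // Lexical comparison on unsigned byte values -- the same logic for
    // BytesType, UTF8Type and AsciiType, so sort/compaction cost matches.
    static int compareLexically(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int x = a[i] & 0xFF;
            int y = b[i] & 0xFF;
            if (x != y) {
                return x - y;
            }
        }
        return a.length - b.length;
    }

    // Stand-in for the insert-time sanity check: reject bytes that are
    // not valid UTF-8. This is paid once per insert, not on reads or
    // compactions.
    static boolean isValidUtf8(byte[] bytes) {
        try {
            Charset.forName("UTF-8").newDecoder().decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(compareLexically(new byte[] { 1, 2 }, new byte[] { 1, 3 })); // negative
        System.out.println(isValidUtf8(new byte[] { (byte) 0xC0, (byte) 0x80 }));       // false (overlong NUL)
    }
}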
Hi,
We got "Added column does not sort as the last column" error in the logs
after upgrading to cass 1.0.3 from 0.6.13. After running scrub, we still
getting the error.
Here is stack trace:
java.lang.AssertionError: Added column does not sort as the last column
at
org.apache.cassandr
Hi All,
From what I've read in the source, a Memtable's "live ratio" is the ratio of
Memtable memory usage to the current write throughput. If this is too high, I
imagine the system could be in a possibly unsafe state, as the comment in
Memtable.java indicates.
Today, while bulk loading some data
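For reference, a back-of-the-envelope version of the ratio as I've described
it above (my own numbers, not from the actual Memtable.java code):

public class LiveRatioSketch {
    public static void main(String[] args) {
        long liveSizeBytes = 640L * 1024 * 1024;  // deep in-memory size of the memtable
        long throughputBytes = 64L * 1024 * 1024; // serialized bytes written into it
        double liveRatio = (double) liveSizeBytes / (double) throughputBytes;
        // Prints 10.0: each serialized byte costs ~10 bytes of heap,
        // which is the kind of value that should raise a red flag.
        System.out.println("liveRatio = " + liveRatio);
    }
}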
On 17.11.2011 17:42, Dan Hendry wrote:
What do you mean by 'better file offset caching'? Presumably you mean
'better page cache hit rate'?
The fs metadata used to find blocks in smaller files is cached better.
Large files use indirect blocks, so you need more reads to find the
correct block.
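As a rough worked example, assuming an ext2/ext3-style inode with 4 KB
blocks, 4-byte block pointers, and 12 direct pointers:

public class IndirectBlockSketch {
    public static void main(String[] args) {
        long blockSize = 4096L;
        long ptrsPerBlock = blockSize / 4;              // 1024 pointers per indirect block

        long directBytes = 12 * blockSize;              // 48 KB: no extra metadata read
        long singleIndirect = ptrsPerBlock * blockSize; // +4 MB: one extra block read
        long doubleIndirect = ptrsPerBlock * ptrsPerBlock * blockSize; // +4 GB: two extra reads

        System.out.println("direct pointers cover: " + directBytes + " bytes");
        System.out.println("single indirect adds:  " + singleIndirect + " bytes");
        System.out.println("double indirect adds:  " + doubleIndirect + " bytes");
    }
}

So an offset deep inside a multi-gigabyte SSTable can cost one or two extra
metadata reads unless those indirect blocks happen to be cached.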
Hi,
On my computer with 2 GB of RAM and a Core 2 Duo CPU E4600 @ 2.40GHz, I am
testing the performance of Cassandra. The write performance is good: it can
write a million records in 10 minutes. However, the query performance is
poor: it takes 10 minutes to read 10K records with sequential keys (roughly
1,700 writes/s versus about 17 reads/s).
Try to see if there is a lot of paging going on,
and run some benchmarks on the disk itself.
Are you running Windows or Linux? Do you think
the disk may be fragmented?
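Something quick along these lines can give you a baseline for random reads
(a rough sketch; the file path is made up -- point it at any large existing
file on the data disk):

import java.io.RandomAccessFile;
import java.util.Random;

public class DiskBenchSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical path -- use any large file on the disk under test.
        RandomAccessFile f = new RandomAccessFile("/var/lib/cassandra/test.bin", "r");
        byte[] buf = new byte[4096];
        Random rnd = new Random();
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            long pos = (long) (rnd.nextDouble() * (f.length() - buf.length));
            f.seek(pos);
            f.readFully(buf);
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("10000 random 4 KB reads in " + elapsed + " ms");
        f.close();
    }
}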
Maxim