On Thu, Dec 16, 2010 at 7:15 PM, Robert Coli <rc...@digg.com> wrote:
> On Thu, Dec 16, 2010 at 2:35 PM, Wayne <wav...@gmail.com> wrote:
>
>> I have read that read latency goes up with the total data size, but to what
>> degree should we expect a degradation in performance?
>
> I'm not sure this is generally answerable because of data modelling
> and workload variability, but there are some known
> performance-impacting issues with very large data files.
>
> For one example, this warning:
>
> "
> WARN [COMPACTION-POOL:1] 2010-09-28 12:17:11,932 BloomFilter.java
> (line 82) Cannot provide an optimal BloomFilter for 245256960 elements
> (8/15 buckets per element).
> "
>
> which I saw on an SSTable of about 90 GB, around the size of one of your files.
>
> https://issues.apache.org/jira/browse/CASSANDRA-1555
>
> is open, with some great work from the Twitter guys to deal with this
> particular problem.
>
> Generally, I'm sure that there are other similar issues, because the
> simple fact is that the set of people running very large datasets with
> Apache Cassandra in production is still relatively small, and
> non-squeaking wheels usually get less grease.. ;D
>
> =Rob
>

What you are seeing is expected: latency and overall performance degrade
as your data grows. You mention the size of your column families, but not
how much physical RAM is in the machine. Regardless of your key cache or
row cache settings, if you have more RAM than data, all of your SSTables,
bloom filters, and index files live in the VFS (page) cache, and
everything is fast. Once your data grows larger than main memory, things
take a step down; it may be a small step, depending on the dynamics of
your traffic. As the data grows further relative to main memory, less of
it can be key-, row-, or VFS-cached, and your disks start becoming more
active. If you reach the point where even your bloom filters and index
files are much larger than main memory, expect another big step down.
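
To put rough numbers on those tipping points, here is a back-of-envelope
sketch in Python. Every figure in it (2 KB average rows, 15 bloom filter
bits per key, 64 bytes of index data per key) is an assumption for
illustration, not a measurement or a Cassandra default:

# Rough estimate of the memory that bloom filters and index files want
# for a given on-disk data size. All constants below are assumptions.
def overhead_gb(data_gb, avg_row_kb=2.0, bloom_bits_per_key=15.0,
                index_bytes_per_key=64.0):
    keys = (data_gb * 1024 * 1024) / avg_row_kb     # approximate row count
    bloom_gb = keys * bloom_bits_per_key / 8 / 1024 ** 3
    index_gb = keys * index_bytes_per_key / 1024 ** 3  # -Index files kept warm
    return bloom_gb + index_gb

for data_gb in (100, 300, 1000):
    print("%d GB of data -> ~%.1f GB of bloom filter + index overhead"
          % (data_gb, overhead_gb(data_gb)))

With numbers like these, 1 TB of smallish rows wants roughly 30 GB of
memory just to keep bloom filters and index files warm, before any
key/row/VFS caching of the data itself.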

In a nutshell: if you cannot keep a reasonable proportion of RAM to
on-disk data, expect latency to go up. Faster disks let you tolerate a
larger data-to-RAM ratio, and so does a smaller active set.

Commonly the Cassandra model calls for denormalization: "put everything
in one big column family," some might say. That is true in some cases and
not in others. For example, you may run into a situation where some
columns of a key need to be read 100 times a minute, while other columns
only need to be read 10 times a minute. If you store all of these columns
together in a 300 GB column family, every read searches that 300 GB. But
if the two sets of columns differ drastically in size, and you can point
the 100-reads-per-minute path at a 30 GB column family while the other 10
reads per minute sort through the remaining 270 GB one, everything will
perform better. In a situation like this you can also tune the caching on
the two column families independently: mix and match key and row cache,
or size them differently. Much of this depends on how you store your
data. Is it big, wide rows with small indexes, or tiny rows with large
indexes? There are many variables to take into account.
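
As a concrete illustration of that 300 GB example, here is a tiny sketch
of the arithmetic. The 64 GB of RAM is an assumed page-cache budget, and
the percentages only describe how much of the hot read path could stay
cached:

ram_gb = 64.0          # assumed page-cache budget on the node
one_big_cf_gb = 300.0  # all columns together in a single CF
hot_cf_gb = 30.0       # the columns read 100 times/minute, split out

def cacheable_fraction(cf_gb, ram=ram_gb):
    # fraction of the CF serving the hot reads that can stay in cache
    return min(1.0, ram / cf_gb)

print("one 300 GB CF: %.0f%% of the hot read path cacheable"
      % (100 * cacheable_fraction(one_big_cf_gb)))
print("split 30 GB CF: %.0f%% of the hot read path cacheable"
      % (100 * cacheable_fraction(hot_cf_gb)))

The split layout lets the frequently read 30 GB live almost entirely in
memory (and get its own key/row cache settings), while the rarely read
270 GB can be cached much less aggressively.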

To get to 1 TB, you need very fast disks and/or lots of RAM. Then again,
you might be able to accomplish this with four smaller nodes instead of
one beefy one.
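
The same back-of-envelope arithmetic applies to the node-count question.
The RAM sizes below are assumptions, and replication factor is ignored
for simplicity:

total_data_gb = 1000.0
for nodes, ram_gb in ((1, 64.0), (4, 32.0)):
    per_node = total_data_gb / nodes
    print("%d node(s): %.0f GB of data vs %.0f GB RAM -> ratio %.1f:1"
          % (nodes, per_node, ram_gb, per_node / ram_gb))

Each smaller node ends up with a friendlier data-to-RAM ratio, which is
the whole game here.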
