> compaction={'class': 'LeveledCompactionStrategy'} AND
> compression={'chunk_length_kb': '8', 'crc_check_chance': '0.1',
> 'sstable_compression': 'LZ4Compressor'};
From: Igor <i...@4friends.od.ua>
Reply-To: "user@cassandra.apache.org"
Date: Thursday, May 16, 2013 4:27 PM
To: "user@cassandra.apache.org"
Subject: Re: SSTable size versus read performance
My 5 cents: I'd check blockdev --getra for data drives - too
high values for readahead (default to 256 for debian) can hurt
read performance.
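To make the readahead numbers concrete: a minimal sketch of checking the value Igor mentions (the device path /dev/sdb is a placeholder, and the blockdev calls need root, so they are shown commented out):

```shell
# blockdev reports readahead in 512-byte sectors, so the Debian default of
# 256 sectors corresponds to 128 KB read ahead per request:
echo $((256 * 512 / 1024))   # prints 128 (KB)

# Inspect the current readahead for a data drive (hypothetical device path):
# blockdev --getra /dev/sdb
# A lower value may suit small random reads on SSDs (needs root):
# blockdev --setra 64 /dev/sdb
```

With 8 KB compression chunks, a 128 KB readahead would pull in far more data per read than Cassandra actually needs, which is why a high value can hurt.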
set to 512. I have tried decreasing my SSTable size
> to 5 MB and changing the chunk size to 8 kb
>
> From: Igor
> Reply-To: "user@cassandra.apache.org"
> Date: Thursday, May 16, 2013 1:55 PM
>
> To: "user@cassandra.apache.org"
> Subject: Re: SSTable size versus read performance
On 05/16/2013 05:14 PM, Keith Wright wrote:
Hi all,
I currently have 2 clusters, one running on 1.1.10 using CQL2 and
one running on 1.2.4 using CQL3 and Vnodes.
Subject: Re: SSTable size versus read performance
When you use compression you should play with your block size. I believe the
default may be 32K, but I had more success with 8K: nearly the same compression
ratio, less young-gen memory pressure.
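The chunk size being discussed is the `chunk_length_kb` table property. A minimal sketch of changing it, assuming the 1.2-era CQL3 syntax and a hypothetical table name:

```sql
-- Hypothetical table; sets an 8 KB compression chunk as suggested above
-- (1.2-era CQL3 syntax; existing SSTables keep the old chunk size until
-- they are rewritten, e.g. by compaction or upgradesstables):
ALTER TABLE users WITH
  compression = {'sstable_compression': 'LZ4Compressor', 'chunk_length_kb': '8'};
```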
Reply-To: "user@cassandra.apache.org"
Date: Thursday, May 16, 2013 10:23 AM
To: "user@cassandra.apache.org"
Subject: Re: SSTable size versus read performance
I am not sure if the new default is to use compression, but I do not
believe compression is a good default. I find compression is better for
larger column families that are sparsely read. For high-throughput CFs I
feel that decompressing larger blocks hurts performance more than
compression adds.
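Following that reasoning, compression could be turned off entirely for a hot, frequently-read column family. A sketch, assuming the 1.2-era convention where an empty compressor class name disables compression (table name hypothetical):

```sql
-- Hypothetical table; 1.2-era syntax for disabling SSTable compression
-- on a high-throughput CF where decompression cost outweighs the savings:
ALTER TABLE events WITH compression = {'sstable_compression': ''};
```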
Hi all,
I currently have 2 clusters, one running on 1.1.10 using CQL2 and one
running on 1.2.4 using CQL3 and Vnodes. The machines in the 1.2.4 cluster are
expected to have better IO performance as we are going from 1 SSD data disk per
node in the 1.1 cluster to 3 SSD data disks per node