cold_reads_to_omit defaults to 0.0, which disables the feature, so it may not 
have been responsible in this case. 
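
If you do want to experiment with it, it is set as a per-table compaction 
sub-option. A rough sketch, assuming Cassandra 2.0.x with 
SizeTieredCompactionStrategy (the keyspace, table name and 0.05 threshold 
below are only placeholders):

    ALTER TABLE my_keyspace.my_table
    WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                       'cold_reads_to_omit': '0.05'};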

There are a couple of things that could explain the difference:

* After running nodetool compact there was one SSTable, so one -Filter.db file 
rather than 8 that each had 700 entries. However 700 entries is not very many, 
so this would have been a small size on disk (you can check the component file 
sizes directly, see the ls sketch after this list).
 
* Same story with the -Index.db files: they would all have had the same values, 
but that would not have been very big with 700 entries. However, with the wide 
rows, column indexes would also have been present in the -Index.db file.

* Compression may have been better. When you have one SSTable all the columns 
for the row are stored sequentially, and it may simply have compressed better. 
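
A quick way to narrow down where the space went is to compare the sizes of the 
SSTable component files (recorded before and after the compact). A rough 
sketch, assuming the default data directory layout and placeholder keyspace / 
table names:

    cd /var/lib/cassandra/data/my_keyspace/my_table
    ls -lh *-Data.db *-Index.db *-Filter.db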

If most of the difference was in the -Data.db files I would guess compression; 
nodetool cfstats will tell you the compression ratio.
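
For example (the keyspace.table argument is just a placeholder, and depending 
on your version you may need to run cfstats with no arguments and find the 
table in the output):

    nodetool cfstats my_keyspace.my_table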

Hope that helps. 
Aaron


-----------------
Aaron Morton
New Zealand
@aaronmorton

Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com

On 23/05/2014, at 9:46 am, Phil Luckhurst <phil.luckhu...@powerassure.com> 
wrote:

> Hi Andreas,
> 
> So does that mean it can compact the 'hottest' partitions into a new sstable
> but the old sstables may not immediately be removed so the same data could
> be in more than one sstable? That would certainly explain the difference we
> see when we manually run nodetool compact.
> 
> Thanks
> Phil
> 
> 
> Andreas Finke wrote
>> Hi Phil,
>> 
>> I found an interesting blog entry that may address your problem.
>> 
>> http://www.datastax.com/dev/blog/optimizations-around-cold-sstables
>> 
>> It seems that compaction is skipped for sstables which do not satisfy a
>> certain read rate. Please check.
>> 
>> 
>> Kind regards
>> 
>> Andreas Finke
>> Java Developer
>> Solvians IT-Solutions GmbH
>> 
>> 
>> ---- Phil Luckhurst wrote ----
>> 
>> Definitely no TTL and records are only written once with no deletions.
>> 
>> Phil
>> 
>> 
>> DuyHai Doan wrote
>>> Are you sure there is no TTL set on your data? It might explain the
>>> shrink
>>> in sstable size after compaction.
>> 
>> 
>> 
>> 
>> 
> 
> 
> 
> 
> 
