Thanks for the response. It makes sense to periodically truncate it, as it is only 
for debugging purposes.

Naidu Saladi
 

    On Wednesday, October 5, 2016 8:03 PM, Chris Lohfink <clohfin...@gmail.com> wrote:
 

 The only current solution is to truncate it periodically. I opened 
https://issues.apache.org/jira/browse/CASSANDRA-12701 about it, if you are 
interested in following it.
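A minimal sketch of the periodic truncate (assuming cqlsh access from the node; whether you also truncate parent_repair_history, and how you schedule it, is up to you — both tables only record repair history for diagnostic purposes):

```
-- Safe to discard: these tables only track repair history for debugging.
TRUNCATE system_distributed.repair_history;
TRUNCATE system_distributed.parent_repair_history;
```

These can be run on a schedule, e.g. from cron via `cqlsh -e "TRUNCATE ..."`.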
On Wed, Oct 5, 2016 at 4:23 PM, Saladi Naidu <naidusp2...@yahoo.com> wrote:

We are seeing the following warnings in system.log. Since 
compaction_large_partition_warning_threshold_mb in cassandra.yaml is at its 
default value of 100, these warnings are emitted:
WARN  [CompactionExecutor:91798] 2016-10-05 00:54:05,554 
BigTableWriter.java:184 - Writing large partition 
system_distributed/repair_history:gccatmer:mer_admin_job (115943239 bytes)
WARN  [CompactionExecutor:91798] 2016-10-05 00:54:13,303 
BigTableWriter.java:184 - Writing large partition 
system_distributed/repair_history:gcconfigsrvcks:user_activation (163926097 bytes)
When I looked at the table definition, it is partitioned by keyspace and 
columnfamily, and the repair history is maintained under that partition. When I 
counted the rows, most partitions have >200,000 rows, and these will keep 
growing because of the partitioning strategy, right? There is no TTL on this 
table, so any idea what the solution is for reducing the partition size?

I also looked at the system.size_estimates table for this column family and 
found that the mean partition size for each range is 50,610,179 bytes, which is 
very large compared to any other table.
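For reference, those per-range estimates can be queried directly; a sketch (column names per the system.size_estimates schema, where mean_partition_size is in bytes):

```
-- Per-token-range size estimates for the repair_history table
SELECT range_start, range_end, mean_partition_size, partitions_count
FROM system.size_estimates
WHERE keyspace_name = 'system_distributed'
  AND table_name = 'repair_history';
```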
