What compaction strategy are you using? Leveled compaction or size-tiered
compaction?
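
(For reference, one quick way to check this is to read the table's compaction
options from the system schema. The sketch below is illustrative only: it
assumes the DataStax Python driver, a locally reachable node, and Cassandra
3.x system_schema tables, and it uses the keyspace/table names from the
compactionstats output further down.)

    # Hypothetical check of the configured compaction strategy, assuming the
    # DataStax Python driver and Cassandra 3.x system_schema tables.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])  # assumption: a locally reachable node
    session = cluster.connect()

    row = session.execute(
        "SELECT compaction FROM system_schema.tables "
        "WHERE keyspace_name = %s AND table_name = %s",
        ("perfectsearch", "cxml"),
    ).one()

    # The 'class' entry names the strategy, e.g. SizeTieredCompactionStrategy
    # or LeveledCompactionStrategy.
    print(row.compaction["class"])
    cluster.shutdown()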

On Fri, Oct 13, 2017 at 4:31 PM, Bruce Tietjen <bruce.tiet...@imatsolutions.com> wrote:

> I hadn't noticed that it is now attempting two impossible compactions:
>
>
> id                                   compaction type keyspace      table completed total    unit  progress
> a7d1b130-b04c-11e7-bfc8-79870a3c4039 Compaction      perfectsearch cxml  1.73 TiB  5.04 TiB bytes 34.36%
> b7b98890-b063-11e7-bfc8-79870a3c4039 Compaction      perfectsearch cxml  867.4 GiB 6.83 TiB bytes 12.40%
> Active compaction remaining time :        n/a
>
>
> On Fri, Oct 13, 2017 at 5:27 PM, Jon Haddad <j...@jonhaddad.com> wrote:
>
>> Can you paste the output of nodetool compactionstats?
>>
>> What you’re describing should not happen.  There’s a check that drops
>> sstables out of a compaction task if there isn’t enough available disk
>> space; see https://issues.apache.org/jira/browse/CASSANDRA-12979 for
>> some details.
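
(Illustrative only: the snippet below is a minimal Python sketch of that
"reduce scope" idea, not Cassandra's actual Java implementation, and the
sizes are hypothetical. The point is that the largest sstables are dropped
from the task until the estimated output fits in the available space.)

    # Minimal sketch of the "reduce scope for limited space" idea referenced
    # in CASSANDRA-12979 (conceptual only; not the real implementation).

    def reduce_scope(sstable_sizes_bytes, available_bytes):
        """Drop the largest inputs until the worst-case output fits on disk."""
        remaining = sorted(sstable_sizes_bytes, reverse=True)  # largest first
        # Worst case for size-tiered compaction: output as large as the inputs.
        while remaining and sum(remaining) > available_bytes:
            remaining.pop(0)  # remove the biggest sstable from the task
        return remaining

    TIB = 1024 ** 4
    # Hypothetical sstable sizes summing to ~6.83 TiB vs ~2.0 TiB free,
    # roughly the situation described in this thread.
    print(reduce_scope([3.0 * TIB, 2.5 * TIB, 1.33 * TIB], 2.0 * TIB))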
>>
>>
>> On Oct 13, 2017, at 4:24 PM, Bruce Tietjen <bruce.tietjen@imatsolutions.com> wrote:
>>
>>
>> We are new to Cassandra and have built a test cluster and loaded some
>> data into the cluster.
>>
>> We are seeing compaction behavior that seems to contradict what we have
>> read about how it should behave.
>>
>> Our cluster is configured as JBOD with three 3.6 TB disks. Those disks
>> currently have the following used/available space:
>>
>> Disk    Used    Available
>> sdb1    1.8T    1.7T
>> sdc1    1.8T    1.6T
>> sdd1    1.5T    2.0T
>>
>> nodetool compactionstats -H reports that the compaction system is
>> attempting a compaction with a total size of 6.83 TiB.
>>
>> The system hasn't had that much free space since sometime after we
>> started loading data, and there has never been that much free space on a
>> single disk, so why would it ever attempt such a compaction?
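
(For concreteness, a quick back-of-the-envelope check with the numbers quoted
above, assuming the compactionstats "total" column is the combined input size,
which is an upper bound on the output size for a compaction.)

    # Back-of-the-envelope check using the df figures and compaction total
    # quoted above (values roughly in TiB as reported).
    free_per_disk = {"sdb1": 1.7, "sdc1": 1.6, "sdd1": 2.0}
    compaction_total = 6.83

    # True: bigger than any single disk's free space (max 2.0 TiB)
    print(compaction_total > max(free_per_disk.values()))
    # True: bigger than all free space combined (about 5.3 TiB)
    print(compaction_total > sum(free_per_disk.values()))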
>>
>> What have we done wrong, or am I misreading this?
>>
>> We have seen the same behavior on most of our 8 nodes.
>>
>> Can anyone tell us what is happening or what we have done wrong?
>>
>> Thanks
>>
>>
>>
>


-- 
Dikang
