Thanks, Tyler, for your comments. I have created the following ticket:
https://issues.apache.org/jira/browse/CASSANDRA-11920
Adarsh
On Thu, May 26, 2016 at 9:37 PM, Tyler Hobbs wrote:
>
> On Thu, May 26, 2016 at 4:36 AM, Adarsh Kumar
> wrote:
>
>>
>> 1) Is there any other way to configure the number of buckets al
We are using Cassandra for our social network, and while data modeling the
tables we need, we are confused about how to design some of them and have
run into a few small problems.
*As we understand it, we need a different table for each query*; for
example, user A is follo
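A common way to apply that rule (sketched here with hypothetical table and column names) is to keep one denormalized table per query path, e.g. one table answering "whom does user X follow?" and another answering "who follows user X?":

```sql
-- Hypothetical schema: one table per query.

-- Query: whom does user X follow?
CREATE TABLE following_by_user (
    user_id     text,
    followed_id text,
    PRIMARY KEY (user_id, followed_id)
);

-- Query: who follows user X?  Same data, partitioned the other way.
CREATE TABLE followers_by_user (
    user_id     text,
    follower_id text,
    PRIMARY KEY (user_id, follower_id)
);
```

On a "follow" event the application writes one row into each table, so every read is a single-partition query.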
We took a backup of the commitlogs and restarted the node, and it started
fine. Since the node was down for more than a day, we can say for sure that
it was stuck and not making progress.
We are wondering how we can tune our settings to avoid a similar scenario
in the future, preferably without resorting to a hacky workaround.
Those are rough guidelines; the actual effective node size will depend on
your read/write workload and the compaction strategy you choose. The
biggest reason data density per node usually needs to be limited is the
data-grooming overhead introduced by compaction. Data at rest essentially
be
Let us assume there is a table that receives only inserts and, under normal
circumstances, no reads. If we assume a TTL of 7 days, what event will
trigger a compaction/purge of the old data if that data is no longer in
the memtable/cache and no session needs it?
Thanks.
Your compaction strategy is triggered whenever memtables are flushed to disk.
Most compaction strategies, especially those designed for write-only
time-series workloads, check for fully expired sstables
(getFullyExpiredSStables()) “often” (DTCS does it every 10 minutes, because
it’s fairly expensive).
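For a pure insert-only table like the one described, the expiry path is usually set up at table creation. A minimal sketch (hypothetical table and column names; DTCS options beyond the class name omitted):

```sql
-- Hypothetical time-series table: every row expires after 7 days.
CREATE TABLE events (
    sensor_id text,
    ts        timestamp,
    value     double,
    PRIMARY KEY (sensor_id, ts)
) WITH default_time_to_live = 604800   -- 7 days, in seconds
  AND compaction = {'class': 'DateTieredCompactionStrategy'};
```

With a table-level TTL, all data in a given sstable expires within the same window, so the periodic fully-expired check can drop whole sstables outright instead of rewriting them.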