Thanks for the update. Good to know that TWCS gives you more stability.
On Wed, Feb 8, 2017 at 6:20 PM, John Sanda wrote:
> I wanted to provide a quick update. I was able to patch one of the
> environments that is hitting the tombstone problem. It has been running
> TWCS for five days now, and things are stable so far.
I wanted to provide a quick update. I was able to patch one of the
environments that is hitting the tombstone problem. It has been running
TWCS for five days now, and things are stable so far. I also had a patch to
the application code to implement date partitioning ready to go, but I
wanted to see how things go with TWCS on its own first.
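(For context, date partitioning here means adding a time bucket to the
partition key so that each partition only covers a bounded window; a minimal
sketch with placeholder names, not the actual patch:

CREATE TABLE metrics_by_day (
    id    text,
    day   text,          -- e.g. '2017-02-08'
    time  timestamp,
    value double,
    PRIMARY KEY ((id, day), time)
) WITH CLUSTERING ORDER BY (time DESC);

Reads then only touch the buckets that overlap the queried time range, so
expired data in old buckets never has to be scanned.)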
In theory, you're right and Cassandra should possibly skip reading cells
having time < 50. But that's all theory; in practice Cassandra reads chunks of
xxx kilobytes' worth of data (I don't remember the exact value of xxx, maybe
64k or far less), so you may end up reading tombstones.
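(The ~64 KB figure most likely corresponds to the compression chunk size
and/or the column index granularity (column_index_size_in_kb), both of which
default to 64 KB. The former is a per-table setting, e.g. on 2.x:

ALTER TABLE metrics
  WITH compression = {'sstable_compression': 'LZ4Compressor',
                      'chunk_length_kb': '64'};
-- on 3.x the option names are 'class' and 'chunk_length_in_kb'

Table name is a placeholder.)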
On Sun, Jan 29, 2017, John Sanda wrote:
Check out our post on how to use TWCS before 3.0.
http://thelastpickle.com/blog/2017/01/10/twcs-part2.html
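(For reference, with the backported TWCS jar from that post the switch is a
table-level change; the class string below is what the backport uses, so
adjust it to match your build. On 3.0.8+/3.8+ the built-in
'TimeWindowCompactionStrategy' can be used instead:

ALTER TABLE metrics
  WITH compaction = {
    'class': 'com.jeffjirsa.cassandra.db.compaction.TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
  };

Table name is a placeholder.)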
On Sun, Jan 29, 2017 at 11:20 AM John Sanda wrote:
> It was with STCS. It was on a 2.x version before TWCS was available.
>
> On Sun, Jan 29, 2017 at 10:58 AM DuyHai Doan wrote:
>
> Did you get this overwhelming tombstone behavior with STCS or with TWCS?
Thanks for the clarification. Let's say I have a partition in an SSTable
where the values of time range from 100 to 10 and everything < 50 is
expired. If I do a query with time < 100 and time >= 50, are there
scenarios in which Cassandra will have to read cells where time < 50? In
particular I am wondering about expired cells that have not yet been compacted away.
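(In CQL terms the scenario above is a slice query like the following, with
placeholder table/column names and treating time as a plain number as in the
example:

SELECT * FROM metrics
WHERE id = 'some-id'
  AND time >= 50 AND time < 100;

The question is whether the cells with time < 50 get skipped or scanned.)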
"Should the data be sorted by my time column regardless of the compaction
strategy" --> It does
What I mean is that an old "chunk" of expired data in SSTABLE-12 may be
compacted together with a new chunk of SSTABLE-2 containing fresh data so
in the new resulting SSTable will contain tombstones AND
>
> Since STCS does not sort data based on timestamp, your wide partition may
> span over multiple SSTables and inside each SSTable, old data (+
> tombstones) may sit on the same partition as newer data.
Should the data be sorted by my time column regardless of the compaction
strategy? I didn't think the compaction strategy would change that.
Ok so give it a try with TWCS. Since STCS does not sort data based on
timestamp, your wide partition may span over multiple SSTables and inside
each SSTable, old data (+ tombstones) may sit on the same partition as
newer data.
When reading by slice, even if you request fresh data, Cassandra may still
have to scan over that older data and its tombstones.
It was with STCS. It was on a 2.x version before TWCS was available.
On Sun, Jan 29, 2017 at 10:58 AM DuyHai Doan wrote:
> Did you get this overwhelming tombstone behavior with STCS or with TWCS?
>
> If you're using DTCS, beware of its weird behavior and tricky
> configuration.
>
> On Sun, Jan 29, 2017 at 3:52 PM, John Sanda wrote:
Did you get this overwhelming tombstone behavior with STCS or with TWCS?
If you're using DTCS, beware of its weird behavior and tricky configuration.
On Sun, Jan 29, 2017 at 3:52 PM, John Sanda wrote:
>> Your partitioning key is text. If you have multiple entries per id you are
>> likely hitting older cells that have expired.
>
> Your partitioning key is text. If you have multiple entries per id you are
> likely hitting older cells that have expired. Descending only affects how
> the data is stored on disk; if you have to read the whole partition to find
> whichever time you are querying for, you could potentially hit tombstones.
Your partitioning key is text. If you have multiple entries per id you are
likely hitting older cells that have expired. Descending only affects how
the data is stored on disk; if you have to read the whole partition to find
whichever time you are querying for, you could potentially hit tombstones.
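(A minimal sketch of the kind of table being described, with placeholder
names; CLUSTERING ORDER BY only changes the on-disk/clustering order, not
what has to be scanned:

CREATE TABLE metrics (
    id    text,           -- text partitioning key, as described above
    time  timestamp,
    value double,
    PRIMARY KEY (id, time)
) WITH CLUSTERING ORDER BY (time DESC)
  AND default_time_to_live = 604800;   -- the 7-day TTL mentioned in this thread
)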
Maybe trace your queries to see what's happening in detail.
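(In cqlsh that is just:

cqlsh> TRACING ON;
cqlsh> SELECT value FROM metrics WHERE id = 'some-id' LIMIT 100;   -- placeholder query

The trace output includes lines of the form "Read x live and y tombstone
cells", which shows directly how many tombstones each query touches.)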
On 28.01.2017 at 21:32, "John Sanda" wrote:
Thanks for the response. This version of the code is using STCS.
gc_grace_seconds was set to one day and then I changed it to zero since RF
= 1. I understand that expired data will still generate tombstones.
Thanks for the response. This version of the code is using STCS.
gc_grace_seconds was set to one day and then I changed it to zero since RF
= 1. I understand that expired data will still generate tombstones and that
STCS is not the best. More recent versions of the code use DTCS, and we'll
be switching to TWCS.
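(For reference, the gc_grace_seconds change is a per-table setting; table
name below is a placeholder:

ALTER TABLE metrics WITH gc_grace_seconds = 0;

With RF = 1 there are no replicas to repair, which is why 0 is safe here;
with RF > 1 it needs to stay longer than the repair interval or deleted data
can reappear.)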
Since you didn't specify a compaction strategy I'm guessing you're using
STCS. Your TTL'ed data is becoming tombstones. TWCS is a better strategy
for this type of workload.
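(For illustration, the TTL'ed writes look something like this, with
placeholder names; once the TTL passes, each expired cell is treated as a
tombstone until compaction can drop it:

INSERT INTO metrics (id, time, value)
VALUES ('some-id', '2017-01-28 00:00:00+0000', 42.0)
USING TTL 604800;   -- 7 days
)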
On Sat, Jan 28, 2017 at 8:30 AM John Sanda wrote:
> I have a time series data model that is basically:
>
> CREATE TABLE met
When the data expires (after the TTL of 7 days), at the next compaction it
is transformed into tombstones, which will still stay there during
gc_grace_seconds. After that, they (the tombstones) will be completely
removed at the next compaction, if there is one ...
So doing some maths, supposing that
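(For illustration, with the 7-day TTL and the gc_grace_seconds values
mentioned in this thread, the timeline for a single cell is roughly:

written at        t0
expires at        t0 + TTL            = t0 + 7 days
purgeable at      expiry + gc_grace   = t0 + 8 days  (gc_grace_seconds = 1 day; t0 + 7 days if it is 0)
actually removed  at the first compaction after that which includes the SSTable(s) holding the cell
)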