Yeah, it means they're effectively invalid files, and they would not be loaded
at startup.



On Mon, Jul 31, 2017 at 9:07 PM, Sotirios Delimanolis <
sotodel...@yahoo.com.invalid> wrote:

> I don't want to go down the TTL path because this behaviour is also
> occurring for tables without a TTL. I don't have hard numbers about the
> amount of writes, but there's definitely been enough to trigger compaction
> in the ~year since.
>
> We've never changed the topology of this cluster. Ranges have always been
> the same.
>
> I can't remember about repairs, but running sstablemetadata shows
>
> Repaired at: 0
>
> across all files. The Cassandra process has been restarted multiple times
> in the last year.
>
> I've noticed there are only -Data.db and -Index.db files in some rare
> cases. The compression info, filter, summary, and statistics files are
> missing. Does that hint at anything?
>
>
>
>
> On Monday, July 31, 2017, 3:39:11 PM PDT, Jeff Jirsa <jji...@apache.org>
> wrote:
>
>
>
>
> On 2017-07-31 15:00 (-0700), kurt greaves <k...@instaclustr.com> wrote:
> > How long is your ttl and how much data do you write per day (ie, what is
> > the difference in disk usage over a day)? Did you always TTL?
> > I'd say it's likely there is live data in those older sstables but you're
> > not generating enough data to push new data to the highest level before
> > it expires.
>
> This is a pretty good option. Other options:
>
> 1) You changed topology on Nov 28, and the ranges covered by those
> sstables are no longer intersecting with the ranges on the node, so they're
> not being selected as LCS compaction candidates (and if you run nodetool
> cleanup, they probably get deleted)
>
> 2) You ran incremental repairs once, and stopped on the 28th, and now
> those sstables have a repairedAt set, so they won't be compacted with other
> (unrepaired) sstables
>
> 3) There's some horrible bug where the sstables got lost from the running
> daemon, and if you restart it'll magically get sucked in and start working
> again (this is really unlikely, and it would be a very bad bug).
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>
>
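For anyone hitting the same symptom, a minimal sketch of how one might spot the incomplete sstables Sotirios describes (a -Data.db with no companion components). The data directory path and component list here are assumptions; adjust for your install and sstable format version:

```shell
#!/bin/sh
# Sketch: flag -Data.db files that are missing their usual companion
# components. DATA_DIR is an assumption (default Cassandra package layout).
DATA_DIR="${DATA_DIR:-/var/lib/cassandra/data}"

find "$DATA_DIR" -name '*-Data.db' 2>/dev/null | while read -r data; do
  base="${data%-Data.db}"
  for part in Statistics Summary Filter CompressionInfo; do
    [ -f "${base}-${part}.db" ] || echo "missing ${part}.db for: $data"
  done
done

# To check Jeff's option 2 (incremental-repair state), something like:
#   sstablemetadata /path/to/*-Data.db | grep 'Repaired at'
# should show 0 for sstables that were never incrementally repaired.
```

Any sstable this prints would likely fail to load on restart, consistent with the "effectively invalid files" point above.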