I looked at the compaction history on the affected node, comparing the period
when it was affected with a period when it was not.

The number of compactions is fairly similar in both periods, and so is the amount of work.

*Not affected time*
[root@cassandra7 ~]# nodetool compactionhistory | grep 02T22
fda43ca0-9696-11e8-8efb-25b020ed0402 demodb        topic_message 2018-08-02T22:59:47.946 433124864  339496194  {1:3200576, 2:2025936, 3:262919}
8a83e2c0-9696-11e8-8efb-25b020ed0402 demodb        topic_message 2018-08-02T22:56:34.796 133610579  109321990  {1:1574352, 2:434814}
01811e20-9696-11e8-8efb-25b020ed0402 demodb        topic_message 2018-08-02T22:52:44.930 132847372  108175388  {1:1577164, 2:432508}

*Affected time (more read IOPS)*
[root@cassandra7 ~]# nodetool compactionhistory | grep 03T12
389aa220-970c-11e8-8efb-25b020ed0402 demodb        topic_message 2018-08-03T12:58:57.986 470326446  349948622  {1:2590960, 2:2600102, 3:298369}
81fe6f10-970b-11e8-8efb-25b020ed0402 demodb        topic_message 2018-08-03T12:53:51.617 143850880  115555226  {1:1686260, 2:453627}
ce418e30-970a-11e8-8efb-25b020ed0402 demodb        topic_message 2018-08-03T12:48:50.067 147035600  119201638  {1:1742318, 2:452226}
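
If the extra reads were coming from compaction, I would expect them to line up
with active or pending compaction tasks during the affected window. Something
like this (just a suggestion for what to run, not output I have captured)
should show it:

nodetool compactionstats -H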

During a read, the row should mostly be in a single SSTable, since it was only
inserted and then read once, so this is strange.
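
One way to verify that assumption (I have not run this yet) is the per-table
read histogram, which reports how many SSTables each read touches:

nodetool tablehistograms demodb topic_message

(older releases call this nodetool cfhistograms)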

We have a partition key and then a clustering key.
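
Roughly of this shape, just to illustrate the key layout (the column names
below are made up; only the partition key / clustering key split is real):

cqlsh -e "CREATE TABLE demodb.topic_message (topic_id text, message_id timeuuid, payload blob, PRIMARY KEY (topic_id, message_id))"   # hypothetical column names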

Recently written rows should still be in the kernel page cache, and the rows
removed by the delete are never read again, so the page cache should hold only
the most recent data.
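
To sanity check that from the affected node itself (standard OS tools, nothing
Cassandra specific; I would expect a low read rate if the page cache is doing
its job):

iostat -x 5 3   # read rate and utilisation on the data volume
free -h         # memory left over for the page cache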

I remain puzzled.



On Fri, Aug 3, 2018 at 3:58 PM, Jeff Jirsa <jji...@gmail.com> wrote:

> Probably Compaction
>
> Cassandra data files are immutable
>
> The write path first appends to a commitlog, then puts data into the
> memtable. When the memtable hits a threshold, it’s flushed to data files on
> disk (let’s call the first one “1”, second “2” and so on)
>
> Over time we build up multiple data files on disk - when Cassandra reads,
> it will merge data in those files to give you the result you expect,
> choosing the latest value for each column
>
> But it’s usually wasteful to keep lots of files around, and that merging is
> expensive, so compaction combines those data files behind the scenes in a
> background thread.
>
> By default they’re combined when 4 or more files are approximately the
> same size, so if your write rate is such that you fill and flush the
> memtable every 5 minutes, compaction will likely happen at least every 20
> minutes (sometimes more). This is called size tiered compaction; there are
> 4 strategies but size tiered is default and easiest to understand.
>
> You’re seeing mostly writes because the reads are likely in page cache
> (the kernel doesn’t need to go to disk to read the files, it’s got them in
> memory for serving normal reads).
>
> --
> Jeff Jirsa
>
>
> > On Aug 3, 2018, at 12:30 AM, Mihai Stanescu <mihai.stane...@gmail.com>
> wrote:
> >
> > Hi all,
> >
> > I am perf-testing Cassandra over a long run in a cluster of 8 nodes, and I
> > noticed that the rate of service drops.
> > Most of the nodes have CPU between 40-65%; however, one of the nodes has
> > higher CPU and also started performing a lot of read IOPS, as seen in the
> > image (green is read IOPS).
> >
> > My test has a mixed rw scenario.
> > 1. insert row
> > 2. after 60 seconds read row
> > 3. delete row.
> >
> > The rate of inserts is higher than the rate of deletes, so some deletes
> > will not happen.
> >
> > I have checked the client and it does not accumulate RAM; GC is a
> > straight line, so I don't understand what's going on.
> >
> > Any hints?
> >
> > Regards,
> > Mihai
> >
> > <image.png>
> >
> >
>
