Using a single data disk.
Also, it is performing mostly heavy read operations according to the
metrics collected.
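To check Jeff's question below (whether sstables are being moved on disk during startup), one simple approach is to diff two listings of the data directory taken a few seconds apart. A minimal sketch, assuming a default data directory of /var/lib/cassandra/data (adjust DATA_DIR to match data_file_directories in your cassandra.yaml):

```shell
#!/bin/sh
# Hypothetical path; set DATA_DIR to your actual data_file_directories entry.
DATA_DIR=${DATA_DIR:-/var/lib/cassandra/data}

# List all sstable data files, twice, a few seconds apart.
find "$DATA_DIR" -name '*-Data.db' 2>/dev/null | sort > /tmp/sstables_before.txt
sleep 5
find "$DATA_DIR" -name '*-Data.db' 2>/dev/null | sort > /tmp/sstables_after.txt

# Any output here means sstables appeared, disappeared, or moved
# between the two snapshots (e.g. being relocated during startup).
diff /tmp/sstables_before.txt /tmp/sstables_after.txt
```

Running this while the node is starting up should show file paths churning if data is being relocated; no diff output suggests the startup delay lies elsewhere.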

On Wed, 1 May 2019, 20:14 Jeff Jirsa <[email protected]> wrote:

> Do you have multiple data disks?
> CASSANDRA-6696 changed behavior with multiple data disks to make it safer
> in the situation that one disk fails. It may be copying data to the right
> places on startup, can you see if sstables are being moved on disk?
>
> --
> Jeff Jirsa
>
>
> On May 1, 2019, at 6:04 AM, Evgeny Inberg <[email protected]> wrote:
>
> I have upgraded a Cassandra cluster from version 2.0.x to 3.11.4, going
> through 2.1.14.
> After the upgrade, I noticed that each node is taking about 10-15 minutes
> to start, and the server is under a very heavy load.
> Did some digging around and got a few leads from the debug log.
> Messages like:
> *Keyspace.java:351 - New replication settings for keyspace system_auth -
> invalidating disk boundary caches *
> *CompactionStrategyManager.java:380 - Recreating compaction strategy -
> disk boundaries are out of date for system_auth.roles.*
>
> This is repeating for all keyspaces.
>
> Any suggestions on what to check, and what might cause this to happen on
> every start?
>
> Thanks!
>
>
