Hi Paulo,
No, we are not using JBOD. Just a bunch of disks.
Thanks

On Thu, 25 Jan 2018 at 5:44 PM, Paulo Motta <pauloricard...@gmail.com>
wrote:

> Are you using JBOD? A thread dump (jstack <pid>) on the affected nodes
> would probably help troubleshoot this.
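>
> For example (a sketch, assuming <pid> is the Cassandra JVM's process id,
> e.g. from "pgrep -f CassandraDaemon", and that jstack comes from the same
> JDK as the running JVM):
>
>     pid=$(pgrep -f CassandraDaemon)
>     jstack -l "$pid" > /tmp/cassandra-threads.$(date +%s).txt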
>
> 2018-01-25 6:45 GMT-02:00 shini gupta <gupta.sh...@gmail.com>:
> > Hi,
> >
> >
> > We have upgraded the system from Cassandra 2.1.16 to 3.11.1. After
> > loading about 335M of data, a repair with the -pr and -full options was
> > triggered for all 7 keyspaces. The data size of main_k1 is ~170G.
> >
> >
> >
> > Please find the command executed for repair:
> >
> >
> >
> > …./cassandra/bin/nodetool -u <username> -pw <password> -p 7199 repair -full -tr
> >
> >
> >
> > It was observed that the repair for the first keyspace, main_k1,
> > finished in about 2 hours, at 13:39. Even after about another 1.5
> > hours, there were just warning messages about orphaned sstables. The
> > repair command for the first keyspace completed successfully, but then
> > nodetool repair got stuck and produced no output for the remaining 6
> > keyspaces.
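> >
> > (Is there a recommended way to tell a slow repair from a hung one here?
> > We assume that checking for still-running validation compactions and
> > streaming sessions would help, e.g. with the same credentials and JMX
> > port as above:
> >
> >     nodetool -u <username> -pw <password> -p 7199 compactionstats
> >     nodetool -u <username> -pw <password> -p 7199 netstats
> >
> > If nothing is still running there, we assume the repair is genuinely
> > stuck.)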
> >
> >
> >
> > Please find the system logs below:
> >
> >
> >
> > INFO  [Repair-Task-2] 2018-01-18 11:35:01,161 RepairRunnable.java:139 -
> > Starting repair command #1 (84dbf590-fc15-11e7-85c6-1594c4c73c8e),
> > repairing keyspace main_k1 with repair options (parallelism: parallel,
> > primary range: true, incremental: false, job threads: 1,
> > ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 256,
> > pull repair: false)
> >
> >
> >
> > INFO  [CompactionExecutor:39] 2018-01-18 13:39:58,174
> > RepairRunnable.java:343 - Repair command #1 finished in 2 hours
> > 4 minutes 57 seconds
> >
> >
> >
> > WARN  [ValidationExecutor:12] 2018-01-18 14:30:49,356
> > LeveledCompactionStrategy.java:273 - Live sstable
> > …../data/main_k1/table1-4b9c1fd0f4f411e7889bd9124bc6a6eb/mc-22184-big-Data.db
> > from level 3 is not on corresponding level in the leveled manifest.
> > This is not a problem per se, but may indicate an orphaned sstable due
> > to a failed compaction not cleaned up properly.
> >
> > … (similar warnings repeated) …
> >
> >
> >
> > WARN  [ValidationExecutor:26] 2018-01-18 15:03:53,598
> > LeveledCompactionStrategy.java:273 - Live sstable
> > …../data/main_k1/table1-4b9c1fd0f4f411e7889bd9124bc6a6eb/mc-22291-big-Data.db
> > from level 2 is not on corresponding level in the leveled manifest.
> > This is not a problem per se, but may indicate an orphaned sstable due
> > to a failed compaction not cleaned up properly.
> >
> > WARN  [ValidationExecutor:26] 2018-01-18 15:03:53,598
> > LeveledCompactionStrategy.java:273 - Live sstable
> > …../data/main_k1/table1-4b9c1fd0f4f411e7889bd9124bc6a6eb/mc-22216-big-Data.db
> > from level 3 is not on corresponding level in the leveled manifest.
> > This is not a problem per se, but may indicate an orphaned sstable due
> > to a failed compaction not cleaned up properly.
> >
> >
> >
> > Is anybody facing such issues with repair -pr on Cassandra 3.11.1? Is
> > this behavior related to the warning messages reported in the logs?
> >
> >
> >
> > Thanks.
>
--
-Shini Gupta

""Trusting in God won't make the mountain smaller,
But will make climbing easier.
Do not ask God for a lighter load
But ask Him for a stronger back... ""
