Hi Bryan,
Changing disk_failure_policy to best_effort and running nodetool scrub
did not work; it generated another error:
java.nio.file.AccessDeniedException
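That exception usually means the Cassandra process cannot read or write some of its own files, so I will first rule out a plain permissions problem on the data directories (for example, offline tools run as root leaving root-owned files behind). A quick check might look like this; the paths and the cassandra user are assumptions based on a default package install, so adjust for your layout:

    # Look for files not owned by the user Cassandra runs as
    ls -l /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches

    # If ownership is wrong, fix it recursively and restart the node
    sudo chown -R cassandra:cassandra /var/lib/cassandra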
I also tried removing all files (data, commitlog, saved_caches) and restarting
the node fresh, and I am still getting corruption.
Should also add that if the scope of corruption is _very_ large, and you
have a good, aggressive repair policy (read: you are confident in the
consistency of the data elsewhere in the cluster), you may just want to
decommission and rebuild that node.
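If you do go that route, the flow is roughly the following. This is only a sketch: it assumes the node is still healthy enough to stream its ranges away, that auto_bootstrap is left at its default of true when it rejoins, and that the paths match a default package install:

    # On the bad node: stream its ranges to the rest of the cluster and leave the ring
    nodetool decommission

    # Once it has left, wipe its local state
    sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*

    # Restart Cassandra so the node bootstraps back in with freshly streamed data
    sudo service cassandra start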
On Fri, Aug 12, 2016 at 11:55 AM, Bryan Cheng
Looks like you're doing the offline scrub; have you tried the online one?
Here's my typical process for corrupt SSTables.
With disk_failure_policy set to stop, examine the failing SSTables. If they
are very small (in the range of KBs), it is unlikely that there is any
salvageable data there. Just delete them.
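For example, the delete-and-repair path might look like this. Treat it as a sketch: the keyspace/table names are placeholders, the la-1104-big pattern is just taken from the log snippet later in this thread, and the paths assume a default install:

    # Stop the node so nothing is holding the files open
    sudo service cassandra stop

    # Move every component of the corrupt SSTable generation out of the data directory
    mkdir -p /tmp/corrupt-sstables
    mv /var/lib/cassandra/data/<keyspace>/<table>/*la-1104-big-* /tmp/corrupt-sstables/

    # Start the node again and repair the affected table from the other replicas
    sudo service cassandra start
    nodetool repair <keyspace> <table>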
Hi Jason,
Thanks for your input...
That's what I am afraid of.
Did you find any HW error in the VMware or HW logs? Any indication that
the HW is the reason? I need to make sure of this before asking the
customer to spend more money.
Thanks,
Alaa
On Thu, Aug 11, 2016 at 11:02 PM,
Hi,
Try this and check the yaml file path:

    strace -f -e open nodetool upgradesstables 2>&1 | grep cassandra.yaml
How is C* installed (package, tarball)? Do other nodetool commands run fine? Also,
did you try an offline SSTable upgrade with the sstableupgrade tool?
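If you haven't, the rough flow per table would be something like this (a sketch only; the keyspace/table names are placeholders, and the usual advice is to run the tool with the node stopped):

    sudo service cassandra stop
    # Rewrite the SSTables of one table into the current format
    sstableupgrade <keyspace> <table>
    sudo service cassandra start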
Best,
Romain
On Friday, August 12
Hi All,
We are in the process of migrating from 2.0.14 to 2.1.13. We are able to
install the binaries successfully and get Cassandra 2.1.13 up and running fine.
But an issue comes up when we try to run nodetool upgradesstables: it
finishes in a few seconds, which means it does not find any old SSTables to upgrade.
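Should we be checking the on-disk file names to confirm whether any old-format SSTables remain? As far as I understand, the format version is part of each file name (I believe 2.0.x writes 'jb'-format files and 2.1.x writes 'ka'), so something like this should show it, with a default data directory and placeholder names:

    # The version string (e.g. jb, ka) appears in each SSTable file name
    ls /var/lib/cassandra/data/<keyspace>/<table>/ | head

If every Data.db file already carries the new version string, upgradesstables legitimately has nothing to do.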
It's a bit more involved than that. C* uses a "Phi accrual failure detector":
https://docs.datastax.com/en/cassandra/3.x/cassandra/architecture/archDataDistributeFailDetect.html
https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L878
See also https://dspace.jaist.ac.jp/dspace/bitstr
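I assume the cassandra.yaml line above is pointing at the phi_convict_threshold setting, which is the knob on that failure detector. A quick way to see what a node is currently using (the config path is an assumption for a package install; the default is 8, and raising it towards 10-12 is the usual advice for noisier environments such as VMs or cloud instances):

    grep -n 'phi_convict_threshold' /etc/cassandra/cassandra.yaml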
One more thing I noticed...
The corrupted SSTable is mentioned twice in the log file
[CompactionExecutor:10253] 2016-08-11 08:59:01,952 - Compacting (.)
[...la-1104-big-Data.db, ]
[CompactionExecutor:10253] 2016-08-11 09:32:04,814 - Compacting (.)
[...la-1104-big-Data.db]
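If it is being picked up by more than just those two compactions, grepping the full log for that generation should show every task that touched it (the log path is assumed for a package install; adjust the pattern to the full file name on your node):

    grep -n 'la-1104-big-Data.db' /var/log/cassandra/system.log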
Is Cassandra run on a virtual server (VMware)?
> I tried sstablescrub but it crashed with hs-err-pid-...
Maybe try with a larger heap allocated to sstablescrub.
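How the heap gets set depends on the version: some releases of the bin/sstablescrub script honor a MAX_HEAP_SIZE environment variable, while others hard-code -Xmx in the script itself, so treat this as a sketch and check your copy of the script (keyspace/table names are placeholders):

    # Try the environment variable first
    MAX_HEAP_SIZE=4G sstablescrub <keyspace> <table>

    # If the crash persists with the same small heap, look for a hard-coded -Xmx in the tool script
    grep -nE 'Xmx|MAX_HEAP_SIZE' "$(which sstablescrub)"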
I ran into this SSTable corruption as well (on Cassandra 1.2). First I
tried nodetool scrub, and the corruption persisted; then offline sstablescrub,
and it still persisted.