If you are not deleting or updating data, then it should be safe to use the
2nd approach.
Regards,
Nitan
Cell: 510 449 9629
> On Aug 13, 2020, at 11:48 AM, Pushpendra Rajpoot
> wrote:
Hi,
I have a cluster of 2 DCs, each DC has 5 nodes in production. This cluster
is based on an active-passive model, i.e. the application writes data to one
DC (Active) and it is replicated to the other DC (Passive).
My Passive DC has corrupt sstables (3 nodes out of 5 nodes), whereas there
are no corrupt sstables on the Active DC.
Thanks all for your support.
I executed the process discussed above (barring repair, as the table was read
for reporting only) and it worked fine in production.
Regards
Manish
The risk is that you violate consistency while you run repair.
Assume you have three replicas for that range: a, b, c.
At some point b misses a write, but it's committed on a and c for quorum.
Now c has a corrupt sstable.
You empty c and bring it back with no data and start repair.
Then the app reads at QUORUM, happens to hit b and c, and misses the
acknowledged write.
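To make the failure mode concrete, here is a toy Python sketch (illustrative only; the replica model and quorum logic are simplified assumptions, not Cassandra internals):

```python
# Sketch: why wiping a corrupt replica and reading at QUORUM before repair
# finishes can make an acknowledged write invisible.

REPLICAS = {"a": {}, "b": {}, "c": {}}

def write(key, value, targets):
    # A write that lands only on some replicas (b misses it).
    for node in targets:
        REPLICAS[node][key] = value

def quorum_read(key, nodes):
    # Toy QUORUM read over 2 of 3 replicas: return any value seen.
    hits = [REPLICAS[n][key] for n in nodes if key in REPLICAS[n]]
    return hits[0] if hits else None

# Write committed on a and c (quorum of 3); b missed it.
write("k", "v1", ["a", "c"])

# c's sstable is corrupt; the node is emptied and brought back with no data.
REPLICAS["c"].clear()

# A QUORUM read that happens to pick b and c now misses the write entirely.
print(quorum_read("k", ["b", "c"]))  # prints None
```

Until repair completes (or c is rebuilt as a replacement node), any quorum that excludes a can return stale results.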
> On Thu, Feb 13, 2020 at 10:55 PM manish khandelwal <
> manishkhandelwa...@gmail.com> wrote:
>
>> Thanks Erick
Agree this is both strictly possible and more common with LCS. The only
thing that's strictly correct to do is treat every corrupt sstable
exception as a failed host, and replace it just like you would a failed
host.
3. Now the SSTable containing the purgeable tombstone on one node is corrupted.
4. The node with the corrupt SSTable cannot compact away the data and the
purgeable tombstone.
6. On the other two nodes, Data A is removed after compaction.
7. Remove the corrupt SSTable from the impacted node.
8. When you run repair, Data A is streamed back and resurrected on the other
nodes.
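The resurrection sequence above can be sketched as a toy Python model (illustrative only; each "node" is just a set of live rows and a set of tombstones, not Cassandra's storage engine):

```python
# Toy model of tombstone resurrection after deleting a corrupt sstable.
# A tombstone for row "A" was written to all three replicas.

nodes = {
    1: {"data": {"A"}, "tombstones": {"A"}},
    2: {"data": {"A"}, "tombstones": {"A"}},
    3: {"data": {"A"}, "tombstones": {"A"}},  # tombstone sstable corrupts here
}

def compact(node):
    # After gc_grace_seconds, compaction drops the shadowed row and the
    # now-purgeable tombstone together.
    node["data"] -= node["tombstones"]
    node["tombstones"].clear()

compact(nodes[1])
compact(nodes[2])

# Node 3 cannot compact the corrupt sstable; instead the file (which held
# the tombstone) is deleted, leaving the old row live with no tombstone.
nodes[3]["tombstones"].clear()

# Repair sees "A" on node 3 but not on 1 and 2, and streams it back.
for n in (1, 2):
    nodes[n]["data"] |= nodes[3]["data"]

print(nodes[1]["data"])  # prints {'A'}: the deleted row is back (zombie data)
```

This is why deleting a corrupt sstable without reasoning about what it contained is risky on tables that receive deletes.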
The log shows that the problem occurs when decompressing the SSTable,
but there's not much actionable info in it.
Hi Erick
Thanks for your quick response. I have attached the full stacktrace, which
shows the exception during the validation phase of the table repair.
I would like to know what the "ordinary hammer" would be in this case. Do you
want to suggest deleting only the corrupt sstable file (in this case mc-12
It will achieve the outcome you are after, but I doubt anyone would
recommend that approach. It's like using a sledgehammer when an ordinary
hammer would suffice. And if you were hitting some bug, then you'd run into
the same problem anyway.
Can you post the full stack trace? It might provide us some clues.
On Fri, 14 Feb 2020 at 04:39, manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
Hi
I see a corrupt SSTable in one of my keyspace table on one node. Cluster is
3 nodes with replication 3. Cassandra version is 3.11.2.
I am thinking along the following lines to resolve the corrupt SSTable issue:
1. Run nodetool scrub.
2. If step 1 fails, run offline sstablescrub.
3. If step 2 fails, delete the corrupt SSTable and run repair on the node.
Hi,
It seems like dropping a column can cause a "java.io.IOException:
Corrupt empty row found in unfiltered partition" exception when existing
SSTables are later compacted.
This seems to happen with all Cassandra 3.x versions and is very easy to
replicate. I've created a jira with all the details.
This might not be good news to you. But my experience is that C*
2.X/Windows is not ready for production yet. I've seen various file system
related errors. And in one of the JIRAs I was told major work (or rework)
is done in 3.X to improve C* stability on Windows.
On Tue, Aug 16, 2016 at 3:44 AM,
Hi Alaa,
Sounds like you have problems that go beyond Cassandra, likely filesystem
corruption or bad disks. I don't know enough about Windows to give you any
specific advice, but I'd try a run of chkdsk to start.
--Bryan
On Fri, Aug 12, 2016 at 5:19 PM, Alaa Zubaidi (PDF)
wrote:
Hi Bryan,
Changing disk_failure_policy to best_effort and running nodetool scrub
did not work; it generated another error:
java.nio.file.AccessDeniedException
I also tried removing all files (data, commitlog, savedcaches) and restarting
the node fresh, and I am still getting corruption.
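For context, disk_failure_policy is set in cassandra.yaml. A minimal fragment, with the behaviors as I understand them (check the reference for your version; other values such as die and stop_paranoid also exist):

```yaml
# cassandra.yaml (fragment): how the node reacts to disk failures.
# stop:        shut down gossip and client transports, leaving the node
#              effectively down until the disk is fixed (the usual default).
# best_effort: blacklist the failed data directory and keep serving requests
#              from the remaining directories and replicas.
# ignore:      respond to requests as if nothing failed (legacy behavior).
disk_failure_policy: best_effort
```

best_effort is what lets a node with one corrupt sstable stay up long enough to run an online scrub, as discussed in this thread.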
Should also add that if the scope of corruption is _very_ large, and you
have a good, aggressive repair policy (read: you are confident in the
consistency of the data elsewhere in the cluster), you may just want to
decommission and rebuild that node.
On Fri, Aug 12, 2016 at 11:55 AM, Bryan Cheng
Looks like you're doing the offline scrub. Have you tried online?
Here's my typical process for corrupt SSTables.
With disk_failure_policy set to stop, examine the failing sstables. If they
are very small (in the range of KBs), it is unlikely that there is any
salvageable data there. Just delete them.
Hi Jason,
Thanks for your input...
That's what I am afraid of.
Did you find any HW errors in the VMware and HW logs? Any indication that
the HW is the reason? I need to make sure that this is the reason before
asking the customer to spend more money.
Thanks,
Alaa
On Thu, Aug 11, 2016 at 11:02 PM,
One more thing I noticed..
The corrupted SSTable is mentioned twice in the log file
[CompactionExecutor:10253] 2016-08-11 08:59:01,952 - Compacting (.)
[...la-1104-big-Data.db, ]
[CompactionExecutor:10253] 2016-08-11 09:32:04,814 - Compacting (.)
[...la-1104-big-Data.db]
Is
Is cassandra run on a virtual server (vmware)?
> I tried sstablescrub but it crashed with hs-err-pid-...
Maybe try with a larger heap allocated to sstablescrub.
This sstable corruption I ran into as well (on cassandra 1.2): first I
tried nodetool scrub, it still persisted, then offline sstablescrub, it still
persisted.
Hi,
I have a 16 Node cluster, Cassandra 2.2.1 on Windows, local installation
(NOT on the cloud)
and I am getting
ERROR [CompactionExecutor:2] 2016-08-12 06:51:52,983
CassandraDaemon.java:183 - Exception in thread Thread[CompactionExecutor:2,1,main]
org.apache.cassandra.io.FSReadError:
org.apac
I'm running 2.1.9
On Fri, Oct 2, 2015 at 8:40 AM, Tyler Hobbs wrote:
What version of Cassandra are you running? It sounds like the disk failure
policy is incorrectly being applied to scrub, which kind of defeats the
purpose of it. I'd recommend opening a ticket (
https://issues.apache.org/jira/browse/CASSANDRA) with the information you
posted here.
I'm also facing problems with corrupt sstables and also couldn't run
sstablescrub successfully.
I restarted my nodes with disk failure policy "best_effort", then I ran
"nodetool scrub".
Once done, I removed the corrupt tables manually and started repair.
On Thu, Oct 1, 2015 at 7:27 PM, J
I have a 25 node cluster and we lost power on one of the racks last night
and now 6 of our nodes will not start up and we are getting the following
error:
INFO [main] 2015-10-01 10:19:22,111 CassandraDaemon.java:122 - JMX is
enabled to receive remote connections on port: 7199
INFO [main] 2015-10
On Fri, May 24, 2013 at 7:07 AM, Hiller, Dean wrote:
We have a corrupt sstable databus5-nreldata-ib-36763-Data.db. How do we safely
blow this away? (and then we would run repair to make sure all data is still
there)…
Can we just move the file out from under cassandra? (or would cassandra freak
out?)
Thanks,
Dean
>>> ...and since data is critical
>>> afterall :)
>>>
>>> --
>>> View this message in context:
>>> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Strange-corrupt-sstable-tp6314052p6314218.html
>>> Sent from the cassandra-u...@incubator.apache.org mailing list archive at
>>> Nabble.com.
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
When I have seen this in the past it has been bad memory on the server.
On Thu, Apr 28, 2011 at 11:58 AM, Daniel Doubleday
wrote:
Hi all
on one of our dev machines we ran into this:
INFO [CompactionExecutor:1] 2011-04-28 15:07:35,174 SSTableWriter.java (line
108) Last written key : DecoratedKey(12707736894140473154801792860916528374,
74657374)
INFO [CompactionExecutor:1] 2011-04-28 15:07:35,174 SSTableWriter.java (line
comes back
as the JSON key ("USc5494dfa-678a-4175-a1d5-65730065c69d:UH"). These match
on a small sample of other nodes and other keys (as I would expect).
If this is a corrupt sstable index, how can I repair this? We're running
0.6.11 and have RF=3 with the 5 nodes and have been us