Hi All,
Is there any possibility of restoring Cassandra snapshots to a point in time
without using OpsCenter?
Thanks and Regards
Rahul Bhardwaj
I remember a very similar question on the list some months ago.
The short answer is that there is no short answer. I'd recommend you search
the mailing list archive for "backup" or "recover".
2017-03-08 10:17 GMT+01:00 Bhardwaj, Rahul :
> Hi All,
>
>
>
> Is there any possibility of restoring Cassandra snapshots to a point in
> time without using OpsCenter?
Use nodetool getsstables to discover which sstables contain the data, and
then dump them with sstable2json -k to explore the contents of the
data/mutations for those keys.
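A rough sketch of what that looks like (the keyspace, table, and partition
key names below are made up; use the paths that getsstables actually prints):

    # find the sstables holding a given partition key
    nodetool getsstables my_keyspace my_table some_partition_key

    # dump only that key from one of the returned sstable files
    sstable2json /path/printed/by/getsstables/my_keyspace-my_table-ka-42-Data.db -k some_partition_key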
Arvydas
On Tue, Mar 7, 2017 at 4:13 AM, Michael Fong <
michael.f...@ruckuswireless.com> wrote:
> Hi, all,
>
>
>
>
>
> We recent
Yes, it's possible. I haven't seen good instructions online, though; the
Cassandra docs are quite bad on this as well.
I think I asked about it on this list, so I suggest you check the
mailing list archive as Mr. Roth suggested.
Hannu
On Wed, 8 Mar 2017 at 10.50, benjamin roth wrote:
> I remember a very similar question on the list some months ago.
DISCLAIMER: This is only my personal opinion. Evaluate the situation carefully,
and if you find the suggestions below useful, follow them at your own risk.
If I have understood the problem correctly, malicious deletes would actually
lead to deletion of data. I am not sure how everything is normal aft
That's a good point - a snapshot is certainly in order ASAP, if not already
done (a sample command follows after the list below).
One more thing I'd add about "data has to be consolidated from all the
nodes" (from #3 below):
- EITHER run the sstable2json ops on each node
- OR if size permits, copy the relevant sstables (containing the deleted
data) from each node to one place and inspect them there
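If the snapshot hasn't been taken yet, a minimal sketch (the tag and keyspace
names here are placeholders):

    # flush memtables and snapshot the keyspace on every node
    nodetool snapshot -t before_recovery my_keyspace
    # hard-linked copies end up under each table's data directory:
    #   .../my_keyspace/<table>-<id>/snapshots/before_recovery/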
I’m running C* 2.1.13 and I have two rings that are replicating data from our
data center to one in AWS.
We would like to keep both of them for a while, but we need to disconnect
them. How can this be done?
It's a bit tricky and I don't advise it, but the typical pattern is (say
you have DC1 and DC2):
1. Partition the data centers from one another - kill the routing however
you can (firewall, etc.).
2. While partitioned, log onto DC1 and alter the schema so that DC2 is no
longer replicated to (see the sketch after this list); repeat for the other side.
2a. If
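For the schema change in step 2, something along these lines (keyspace name,
DC names and replication factor are placeholders; run it against each affected
keyspace):

    # on a DC1 node: keep only DC1 in the keyspace's replication map
    cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = \
      {'class': 'NetworkTopologyStrategy', 'DC1': 3};"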
I was hoping I could do the following:
· Change seeds
· Change the topology back to simple
· Stop nodes in datacenter 2
· Remove nodes in datacenter 2
· Restart nodes in datacenter 2
Somehow Cassandra holds on to the information about who was in the cluster.
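For reference, a rough sketch of the commands usually behind the stop/remove
steps (the host ID is a placeholder; seed addresses live in cassandra.yaml on
each node):

    nodetool status                 # note the Host IDs of the datacenter 2 nodes
    nodetool removenode <host-id>   # run from a surviving node once the target node is down
    # seeds: edit the "- seeds:" list under seed_provider in cassandra.yaml, then restart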
Those future tombstones are going to continue to cause problems on those
partitions. If you're still writing to those partitions, you might be
losing data in the meantime. It's going to be hard to get the tombstone
out of the way so that new writes can begin to happen there (newly written
data with a timestamp lower than the tombstone's will simply be dropped).
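To illustrate why (the table, key and value here are invented; the timestamp
is just a far-future value in microseconds):

    # a delete written with a future timestamp...
    cqlsh -e "DELETE FROM my_keyspace.my_table USING TIMESTAMP 9999999999999999 WHERE id = 1;"
    # ...shadows any normal insert that comes later, because the insert's
    # write timestamp is lower than the tombstone's
    cqlsh -e "INSERT INTO my_keyspace.my_table (id, val) VALUES (1, 'new');"
    cqlsh -e "SELECT * FROM my_keyspace.my_table WHERE id = 1;"   # returns nothing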
I guess it depends on the experience one has. This is a common process to
bring up, move, build full prod copies, etc.
What is outlined is pretty much exactly what I have done 20-50 times (too
many to remember).
FYI, some of this should be done with nodes DOWN.
Daemeon C.M. Reiyd
Do not change the cluster name - the Cassandra service will not start on
the same sstables if the cluster name is changed.
Arvydas
On Wed, Mar 8, 2017 at 4:57 PM, Chuck Reynolds
wrote:
> I was hoping I could do the following
>
> · Change seeds
>
> · Change the topology back to simple
On 2017-03-08 07:57 (-0800), Chuck Reynolds wrote:
> I was hoping I could do the following
>
> · Change seeds
Definitely.
>
> · Change the topology back to simple
>
Not necessary, can just remove the "other" datacenter from the replication
strategy.
> · Stop nodes in datacenter 2