Hi Thomas,
in 2.1.18 the default repair mode was full repair, while since 2.2 it has been
incremental repair.
So running "nodetool repair -pr" since your upgrade to 3.0.14 no longer
triggers the same operation.
Incremental repair cannot run on more than one node at a time in a cluster,
because you risk overlapping anticompactions.
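For reference, getting the pre-2.2 behaviour back would look roughly like this
(the keyspace name is just a placeholder):

    nodetool repair -pr --full my_keyspace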
Hi Alex,
thanks a lot. Somehow missed that incremental repairs are the default now.
We have been happy with full repair so far, because the data we currently
repair manually is small (~1 GB or even less).
So I guess with full repairs across all nodes, we can still stick with our
current approach.
Right, you should indeed add the "--full" flag to perform full repairs, and
you can then keep the "-pr" flag.
I'd advise monitoring the status of your SSTables, as you'll probably end up
with a pool of SSTables marked as repaired, and another pool marked as
unrepaired, which won't be compacted together.
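A quick way to check which pool a given SSTable is in is sstablemetadata (the
data file path below is only an example):

    sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/mc-*-big-Data.db | grep "Repaired at"
    # "Repaired at: 0" means the SSTable is still in the unrepaired pool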
Hello,
we have a test (regression) environment hosted in AWS, which is used to
auto-deploy our software on a daily basis and apply constant load across all
deployments, basically to allow us to detect any regressions in our software on
a daily basis.
On the Cassandra side, this is single-
Alex,
thanks again! We will switch back to the 2.1 behavior for now.
Thomas
From: Alexander Dejanovski [mailto:a...@thelastpickle.com]
Sent: Friday, 15 September 2017 11:30
To: user@cassandra.apache.org
Subject: Re: Multi-node repair fails after upgrading to 3.0.14
A few notes:
- in 3.0 the default changed to incremental repair, which will have to
anticompact SSTables to allow you to repair the primary ranges you've specified
- since you're starting the repair on all nodes at the same time, you end up
with overlapping anticompactions
Generally you should stagger repairs so that only one node is repairing at a
time, as sketched below.
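A rough sketch of what staggering could look like (the host names, keyspace and
ssh access are assumptions on my side):

    for host in node1 node2 node3; do
      ssh "$host" nodetool repair -pr --full my_keyspace   # one node at a time
    done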
Most people find 3.0 slightly slower than 2.1. The only thing that really
stands out in your email is the huge change in 95% latency - that's atypical.
Are you using Thrift or native (9042)? The decrease in compression metadata
offheap usage is likely due to the increased storage efficiency of the 3.0
storage engine.
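If needed, a quick way to check on a node is to look at established client
connections on the default ports (9042 = native, 9160 = Thrift):

    netstat -tn | grep -E ':(9042|9160) '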
Hi Jeff,
we are using the native protocol (CQL3) via the DataStax Java driver (3.1). We
also have OpsCenter running (to be removed soon) via Thrift, if I remember
correctly.
As said, the write request latency for our keyspace hasn't really changed, so
perhaps another one (system related, OpsCenter …?) is affected.
I've finished the migration to NetworkTopologyStrategy using
GossipingPropertyFileSnitch.
Now I have 4 nodes in zone a (rack1) and another 4 nodes in zone b (rack2), in
a single DC; there's no zone c in Frankfurt.
Can I get QUORUM consistency for reads (for writes I'm using ANY) by adding a
tiny node?
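For context, the zone-to-rack mapping with GossipingPropertyFileSnitch lives in
cassandra-rackdc.properties on each node; mine look roughly like this (the
names are just placeholders):

    # conf/cassandra-rackdc.properties on a zone-a node
    dc=eu-central
    rack=rack1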
Hi,
we have a Cassandra cluster with 7 nodes in each of 3 datacenters. We are
using C* version 2.1.15.4.
Network bandwidth between DC1 and DC2 is very good (10 Gbit/s) and dedicated.
However, the network pipe between DC1 and DC3 and between DC2 and DC3 is very
poor and has only 100 Mbit/s an
Hi,
usually automatic minor compactions are fine, but you may need much more free
disk space to reclaim space via automatic minor compactions, especially in a
time-series use case with the size-tiered compaction strategy (possibly with
leveled as well; I'm not familiar with that strategy type).
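If you really need to reclaim space sooner with size-tiered compaction, a major
compaction can be forced, though it merges everything into one large SSTable,
so use it with care (keyspace and table names below are placeholders):

    nodetool compact my_keyspace my_table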
Hi Kishore,
Just to make sure we're all on the same page, I presume you're doing full
repairs using something like 'nodetool repair -pr', which repairs all data
for a given token range across all of your hosts in all of your DCs. Is
that a correct assumption to start?
In addition to throttling in
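(In case it's useful, the stream/compaction throughput caps look like this;
the numbers are only examples, and setinterdcstreamthroughput may not exist in
every version:

    nodetool setstreamthroughput 200         # cap streaming throughput (Mbit/s)
    nodetool setinterdcstreamthroughput 50   # cap cross-DC streaming (Mbit/s) for the slow links
    nodetool setcompactionthroughput 16      # cap compaction throughput (MB/s)
)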
Hi Jeff,
Thanks for your reply.
In fact, I have tried all the options.
1. We use Cassandra Reaper for our repairs, which does subrange repair.
2. I have also developed a shell script which does exactly the same as what
Reaper does. But this can control how ma
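The subrange calls it issues look roughly like this (the token values below are
just an illustration; Reaper computes the real ones from the ring):

    nodetool repair -st -9223372036854775808 -et -4611686018427387904 my_keyspace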
You can add a tiny node with 3 tokens. It will own a very small amount of
data and be responsible for replicas of that data, and thus be included in
quorum queries for that data. What is the use case? This won't give you any
real improvement in meeting consistency.
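For completeness, the "3 tokens" part would be set in that node's
cassandra.yaml before bootstrapping it, e.g.:

    # cassandra.yaml on the tiny node
    num_tokens: 3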