Hello Ajay,
Have a look at *max_hint_window_in_ms*:
http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html
My understanding is that if a node remains down for more than
*max_hint_window_in_ms*, then you will need to repair that node.
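For reference, the setting lives in cassandra.yaml; a sketch of the relevant entry (the 3-hour value is the stock default, not something stated in this thread):

```yaml
# cassandra.yaml
# Hints for a dead node are kept for at most this long; a node that is
# down longer than this window must be repaired to regain consistency.
max_hint_window_in_ms: 10800000  # 3 hours (default)
```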
Thanks,
Vasilis
Thanks Vasileios for the reply !!!
That makes sense !!!
I will be grateful if you could point me to the node-repair command for
Cassandra-2.1.10.
I don't want to get stuck in documentation for the wrong version (I was
already bitten hard once when setting up replication).
Thanks again...
Thanks and Regards,
Hi All.
I have been doing extensive testing, and replication works fine, even if
CAS11, CAS12, CAS21, CAS22 are downed and brought up in any permutation.
Syncing always takes place (obviously, as long as continuous-downtime-value
does not exceed *max_hint_window_in_ms*).
However, things behave
Hello Ajay,
Here is a good link:
http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesManualRepair.html
Generally, I find the DataStax docs to be OK. You could consult them for
all the usual operations. Of course there are occasions where a given concept is
not as clear, but you
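For 2.1 the basic invocation is along these lines; a sketch only, with a placeholder keyspace name (options vary by version, so check the page above):

```shell
# Repair only the ranges this node owns ("primary range") for one
# keyspace; run it on every node in turn to cover the whole ring.
# "my_keyspace" is a placeholder, not a name from this thread.
nodetool repair -pr my_keyspace
```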
I saw an average 10% cpu usage on each node when the cassandra cluster has no
load at all.
I checked which thread was using the cpu, and I got the following 2 metric
threads each occupying 5% cpu.
jstack output:
"metrics-meter-tick-thread-2" daemon prio=10 tid=...
java.lang.Thread.State
The cassandra version is 2.0.12. We have 1500 tables in the cluster of 6
nodes, with a total 2.5 billion rows.
On October 24, 2015 at 20:52, "Xu Zhongxing" wrote:
> I saw an average 10% cpu usage on each node when the cassandra cluster has no
> load at all.
> I checked which thread was using the cpu, and I go
Thanks a ton Vasileios !!
Just one last question:
Does running "nodetool repair" affect the functionality of cluster for
current-live data?
It's ok if the insertions/deletions of current-live data become a little
slow during the process, but data consistency must be maintained. If that
is the c
Ideas please, on what I may be doing wrong?
On Sat, Oct 24, 2015 at 5:48 PM, Ajay Garg wrote:
> Hi All.
>
> I have been doing extensive testing, and replication works fine, even if
> CAS11, CAS12, CAS21, CAS22 are downed and brought up in any permutation.
> Syncing always takes place (obviously
I am not sure I fully understand the question, because nodetool repair is
one of the three ways for Cassandra to ensure consistency. If by "affect"
you mean "make your data consistent and ensure all replicas are
up-to-date", then yes, that's what I think it does.
And yes, I would expect nodetool r
Never mind Vasileios, you have been a great help !!
Thanks a ton again !!!
Thanks and Regards,
Ajay
On Sat, Oct 24, 2015 at 10:17 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> I am not sure I fully understand the question, because nodetool repair is
> one of the three ways for Ca
Max hint window is only part of the equation. If a node is down longer than
the max hint window, a repair will still fix it up for you.
The maximum time a node can be down before it must be rebuilt is determined by
the lowest gc_grace_seconds setting on your various tables. By default gc_grace_seconds is
10 days, but
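The per-table setting he is referring to can be inspected and changed via CQL; a sketch with a placeholder table name (864000 seconds is the 10-day default mentioned above):

```sql
-- A node down longer than the lowest gc_grace_seconds on any table
-- risks resurrecting deleted data on repair, and should be rebuilt
-- instead. "my_keyspace.my_table" is a placeholder, not from this thread.
ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 864000;
```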
On Sat, Oct 24, 2015 at 9:47 AM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> I am not sure I fully understand the question, because nodetool repair is
> one of the three ways for Cassandra to ensure consistency. If by "affect"
> you mean "make your data consistent and ensure all repli
>
>
> All other means of repair are optimizations which require a certain amount
> of luck to happen to result in consistency.
>
Is that true regardless of the CL one uses? So, for example, if writing at
QUORUM and reading at QUORUM, wouldn't an increased read_repair_chance
probability be sufficient? If
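The quorum reasoning in this exchange can be sketched numerically. This is the generic R + W > N overlap argument, not code from the thread; the function names are mine:

```python
# With replication factor n, a write acknowledged by w replicas and a
# read consulting r replicas must share at least one replica whenever
# r + w > n, so the read sees the latest acknowledged write.

def quorum(n: int) -> int:
    """Replicas in a quorum for replication factor n (majority)."""
    return n // 2 + 1

def read_sees_latest_write(n: int, w: int, r: int) -> bool:
    """True if every read set must intersect every write set."""
    return r + w > n

# QUORUM writes + QUORUM reads always intersect:
for n in (3, 5, 7):
    assert read_sees_latest_write(n, quorum(n), quorum(n))

# ONE + ONE does not guarantee overlap for n >= 3, which is why
# read repair and anti-entropy repair matter at weaker consistency levels:
print(read_sees_latest_write(3, 1, 1))  # False
```

At QUORUM/QUORUM the overlap alone gives consistent reads; read_repair_chance and nodetool repair then serve to converge the remaining stale replicas in the background.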
I would imagine you are running on fairly slow machines (given the CPU usage),
but 2.0.12 and 2.1 use a fairly old version of the yammer/codahale metrics
library.
It is waking up every 5 seconds, and updating Meters… there are a bunch of
these Meters per table (embedded in Timers), so your larg
Thank you very much. I figured out the number of tables is the cause yesterday.
Your analysis confirmed that.
On October 25, 2015 at 05:23, "Graham Sanderson" wrote:
> I would imagine you are running on fairly slow machines (given the CPU usage),
> but 2.0.12 and 2.1 use a fairly old version of the yammer/c