OK thanks. So if we want to use the -pr option (which I suppose we should,
to prevent duplicate checks) in 2.0, and we run the repair on all nodes in
a single DC, should that be sufficient, without needing to run it on all
nodes across DCs?
On Wed, Aug 10, 2016 at 5:01 PM, Paulo Motta
Thank you for your response. We have updated the DataStax driver to 3.1.0
using the V3 protocol; I think there are still some webapps using the 2.1.6
Java driver, and we will upgrade them. But we noticed strange things: on
webapps upgraded to 3.1.0, some queries return zero results even if data
exists
Thanks Romain, this had been a doubt for quite a while.
thanks
On Wed, Aug 10, 2016 at 4:59 PM, Romain Hardouin wrote:
Yes. You can even see that some caution is taken in the code
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/config/Config.java#L131
(But if I were you I would not rely on this. It's always better to be
explicit.)
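If you want to be explicit, a quick way to see which attributes you have
actually set (everything else falls back to the defaults in Config.java) is
to strip comments and blank lines; the config path here is the stock
package location, adjust for your install:

  grep -Ev '^[[:space:]]*(#|$)' /etc/cassandra/cassandra.yaml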
Best,
Romain
On Wednesday, 10 August 2016 at 17:50, sai
Another thing to note: according to NEWS.txt, upgrading from 2.1.x is only
supported from version 2.1.9, so if this is not an effect of that, I'm
actually surprised the upgrade from 2.1.2 worked without any issues.
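If in doubt, a quick check on each node before upgrading (assuming nodetool
is on the PATH):

  nodetool version
  # ReleaseVersion should be >= 2.1.9 before moving to 3.0.x, per NEWS.txt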
2016-08-10 15:48 GMT-03:00 Tyler Hobbs:
That just means that a client/driver disconnected. Those log messages are
supposed to be suppressed, but perhaps that stopped working in 3.x due to
another change.
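If you want to confirm it is just disconnect noise, counting the occurrences
is a quick sanity check (log path is an assumption, adjust for your install):

  grep -c 'Connection reset by peer' /var/log/cassandra/system.log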
On Wed, Aug 10, 2016 at 10:33 AM, Adil wrote:
> Hi guys,
> We have migrated our cluster (5 nodes in DC1 and 5 nodes in DC2) from
Hi,
if there are any missing attributes in the YAML file, will Cassandra pick
up the default values for those attributes?
Thanks
Hi Yuji,
OK, perhaps you are seeing a different issue than I am.
Have you tried with durable_writes=False? If the issue is caused by the
commitlog, then it should work if you disable durable_writes.
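A minimal sketch of how to toggle it, assuming a test keyspace named my_ks:

  cqlsh -e "ALTER KEYSPACE my_ks WITH durable_writes = false;"
  # rerun your test, then restore:
  cqlsh -e "ALTER KEYSPACE my_ks WITH durable_writes = true;"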
Cheers,
Christian
On Tue, Aug 9, 2016 at 3:04 PM, Yuji Ito wrote:
> Thanks Christian
>
> can
Hi guys,
We have migrated our cluster (5 nodes in DC1 and 5 nodes in DC2) from
Cassandra 2.1.2 to 3.0.8. All seems fine: running nodetool status shows all
nodes UN, but this error appears continuously in each node's log:
java.io.IOException: Error while read(...): Connection reset by peer
at
That's what I was thinking. Maybe GC pressure?
Some more details: during anticompaction I have some CFs exploding to 1K
SSTables (which drop back to ~200 upon completion).
HW specs should be quite good (12 cores/32 GB RAM) but, I admit, still
relying on spinning disks, with ~150GB per node.
Current vers
That's pretty low already, but perhaps you should lower it further to see if
it improves the dropped mutations during anticompaction (even if it increases
repair time); otherwise the problem might be somewhere else. Generally,
dropped mutations are a signal of cluster overload, so if there's nothing
else w
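As a sketch, after lowering the throttle you can watch whether dropped
MUTATION counts keep climbing (a hypothetical check, not from the thread):

  nodetool tpstats | grep MUTATION
  # the Dropped MUTATION counter should stop growing if overload was the cause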
On 2.0 the repair -pr option is not supported together with -local, -hosts,
or -dc, since it assumes you need to repair all nodes in all DCs, and it will
throw an error if you try to run it with nodetool, so perhaps there's
something wrong with range_repair's option parsing.
On 2.1 support was added to
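In other words, with -pr every node in every DC must run the repair; a
sketch with a hypothetical keyspace my_ks:

  # must be run on *every* node across all DCs:
  nodetool repair -pr my_ks
  # whereas without -pr you can confine repair to the local DC:
  nodetool repair -local my_ks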
Not yet. Right now I have it set at 16.
Would halving it more or less double the repair time?
On Tue, Aug 9, 2016 at 7:58 PM, Paulo Motta wrote:
> Anticompaction throttling can be done by setting the usual
> compaction_throughput_mb_per_sec knob on cassandra.yaml or via nodetool
> setcompactionthroughput
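For reference, the quoted knob can be set two ways (8 is just an
illustrative value):

  # in cassandra.yaml (takes effect on restart):
  #   compaction_throughput_mb_per_sec: 8
  # or at runtime, without a restart:
  nodetool setcompactionthroughput 8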
Hello,
We have a 2.0.17 Cassandra cluster (*DC1*) with a cross-DC setup with a
smaller cluster (*DC2*). After reading various blogs about
scheduling/running repairs, it looks like it's good to run them with the
following options (see the sketch after this list):
-pr for primary range only
-st -et for sub ranges
-par for parallel
-dc to make sure
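A sketch of what those invocations might look like (keyspace name and token
values are illustrative only):

  # primary-range, parallel repair -- run on every node in both DCs:
  nodetool repair -pr -par my_ks
  # sub-range repair of one slice of the ring:
  nodetool repair -par -st -9223372036854775808 -et -4611686018427387904 my_ks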