I did not realize that so far.
Thank you for the hint. I will definitely give it a try.
On Fri, 2017-03-17 at 22:32 +0100, benjamin roth wrote:
The fork from thelastpickle is. I'd recommend giving it a try over plain
nodetool.
2017-03-17 22:30 GMT+01:00 Roland Otta <roland.o...@willhaben.>:
All,
I've been experimenting with Cassandra 3.10 now, with the hope that SASI has
improved. Much to my disappointment, it seems it still doesn't support a simple
operation like IN. Have others tried the same? Also, with a small test data set
(160K records), the performance is also not better than jus
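To make the IN limitation concrete, a minimal sketch of the kind of statement I mean (the keyspace, table, and column names here are made up):

    # hypothetical table with a SASI index on a regular column
    cqlsh -e "CREATE CUSTOM INDEX ON ks.tbl (val) USING 'org.apache.cassandra.index.sasi.SASIIndex';"
    cqlsh -e "SELECT * FROM ks.tbl WHERE val = 'a';"          # equality through the SASI index works
    cqlsh -e "SELECT * FROM ks.tbl WHERE val IN ('a', 'b');"  # this is the kind of query that still gets rejected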
The fork from thelastpickle is. I'd recommend giving it a try over plain
nodetool.
2017-03-17 22:30 GMT+01:00 Roland Otta :
> forgot to mention the version we are using:
>
> we are using 3.0.7 - so I guess we should have incremental repairs by
> default.
> it also prints out incremental:true when
... maybe I should just try increasing the job threads with --job-threads
shame on me
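i.e. something along these lines (the keyspace name is just a placeholder):

    nodetool repair --job-threads 4 my_keyspace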
On Fri, 2017-03-17 at 21:30 +, Roland Otta wrote:
forgot to mention the version we are using:
we are using 3.0.7 - so I guess we should have incremental repairs by default.
it also prints out incremental:tr
Forgot to mention the version we are using:
we are using 3.0.7 - so I guess we should have incremental repairs by default.
It also prints out incremental:true when starting a repair
INFO [Thread-7281] 2017-03-17 09:40:32,059 RepairRunnable.java:125 - Starting
repair command #7, repairing keyspac
It depends a lot ...
- Repairs can be very slow, yes! (And unreliable, due to timeouts, outages,
whatever)
- You can use incremental repairs to speed things up for regular repairs
- You can use "reaper" to schedule repairs and run them sliced, automated,
failsafe
The time repairs actually may var
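For reference, a rough sketch of how the two repair modes are kicked off on 2.2+/3.x (my_keyspace is a placeholder):

    nodetool repair my_keyspace          # incremental repair (the default since 2.2)
    nodetool repair --full my_keyspace   # explicit full repair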
Hello,
we are quite inexperienced with Cassandra at the moment and are playing
around with a new cluster we built up to get familiar with
Cassandra and its possibilities.
While getting familiar with that topic we noticed that repairs in
our cluster take a long time. To get an idea of our c
Check for level 2 (stop-the-world) garbage collections.
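For example, something along these lines (the log path assumes a default package install):

    grep GCInspector /var/log/cassandra/system.log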
...
Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872
On Fri, Mar 17, 2017 at 11:51 AM, Chuck Reynolds
wrote:
> I have a large Cassandra 2.1.13 ring (60 nodes) in AWS that has
> consistently random high r
A wrinkle further confounds the issue: running a repair on the node which
was servicing the queries has cleared things up and all the queries now
work.
That doesn't make a whole lot of sense to me - my assumption was that a
repair shouldn't have fixed it.
On Fri, Mar 17, 2017 at 12:03 PM, Voytek
Probably JVM pauses. Check your logs for long GC times.
On Fri, Mar 17, 2017 at 11:51 AM Chuck Reynolds
wrote:
> I have a large Cassandra 2.1.13 ring (60 nodes) in AWS that has
> consistently random high read times. In general most reads are under 10
> milliseconds, but within every 30 requests the
I have a large Cassandra 2.1.13 ring (60 nodes) in AWS that has consistently
random high read times. In general most reads are under 10 milliseconds, but
within every 30 requests there is usually a read that takes a couple of seconds.
Instance type: r4.8xlarge
EBS GP2 volumes, 3 TB with 9000 IOPS
Oh, thanks! :)
On Fri, 17 Mar 2017, 14:22 Paulo Motta wrote:
> It's safe to truncate this table since it's just used to inspect repairs
> for troubleshooting. You may also set a default TTL to keep it from
> growing unbounded (this is going to be done by default in CASSANDRA-12701).
>
> 2017-0
Cassandra 3.9, 4 nodes, rf=3
Hi folks, we're seeing 0 results returned from queries that (a) should return
results, and (b) will return results with minor tweaks.
I've attached the sanitized trace outputs for the following 3 queries (pk1
and pk2 are partition keys, ck1 is clustering key, val1 is SAS
It's safe to truncate this table since it's just used to inspect repairs
for troubleshooting. You may also set a default TTL to keep it from
growing unbounded (this is going to be done by default in CASSANDRA-12701).
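A sketch of both steps (the 30-day TTL is only an example value):

    cqlsh -e "TRUNCATE system_distributed.repair_history;"
    cqlsh -e "ALTER TABLE system_distributed.repair_history WITH default_time_to_live = 2592000;"  # 30 days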
2017-03-17 8:36 GMT-03:00 Gábor Auth :
> Hi,
>
> I've discovered a relative hug
Hi,
On Wed, Mar 15, 2017 at 11:35 AM Ben Slater
wrote:
> When you say you’re running repair to “rebalance” do you mean to populate
> the new DC? If so, the normal/correct procedure is to use nodetool rebuild
> rather than repair.
>
Oh, thank you! :)
Bye,
Gábor Auth
>
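A sketch of the rebuild step Ben describes (the datacenter name is a placeholder; it is run on each node of the new DC, streaming data from the existing one):

    nodetool rebuild existing_dc_name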
Hi,
I've discovered a relatively huge amount of data in the system_distributed
keyspace's repair_history table:
Table: repair_history
Space used (live): 389409804
Space used (total): 389409804
What is the purpose of this data? Is there any safe method to p