But if you use ALL, and RF=3, it will expect 3 replicas. After you
switch to NetworkTopologyStrategy, you potentially only have 1 replica until
the repair is done. Won't the query fail until the repair is done, because the
other two replicas won't be ready yet (the ALL condition can't be satisfied)?
Paulo,
as requested: https://issues.apache.org/jira/browse/CASSANDRA-13885
Feel free to adjust any properties of the ticket. Hopefully it gets proper
attention. Thanks.
Thomas
-----Original Message-----
From: Paulo Motta [mailto:pauloricard...@gmail.com]
Sent: Tuesday, 19 September 2017 08:5
The replicas will exist; they just may not yet have the data we expect them to
have. But read queries will still return the right data through read
repair, so it'll be fine.
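For example, a minimal sketch of the switch plus the follow-up repair (keyspace
and DC names are placeholders, not taken from the thread):

  # switch the keyspace to NetworkTopologyStrategy
  cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'DC1': 3};"
  # then run a full repair on each node so every replica really holds
  # the data it now owns
  nodetool repair -full my_ks

The repair just makes sure each replica physically has its data; in the
meantime, reads still come back correct via read repair as described above.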
On Tue, Sep 19, 2017 at 5:29 AM, Myron A. Semack
wrote:
> But if you use ALL, and RF=3, it will be expecting 3 replicas. Af
Hi Techies,
I need to configure Apache Cassandra for my upcoming project on 2 DCs.
Both DCs should have 3 nodes each.
Details are:
DC1 nodes --
Node 1 -> 10.0.0.1
Node 2 -> 10.0.0.2
Node 3 -> 10.0.0.3
DC2 nodes --
Node 1 -> 10.0.0.4
Node 2 -> 10.0.0.5
Node 3 -> 10.0.0.6
On all nodes, I wa
Nandan,
Use one node from each DC in the seeds parameter on all nodes.
Use the right DC names and DC suffix so that nodes are identified correctly.
Use NetworkTopologyStrategy, with an RF for both DCs, on all keyspaces (see the sketch below).
Please post specific questions if you have any.
Regards,
Nitan K.
Cassandra and Oracle Architect/SME
Datas
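To make the advice above concrete, a rough sketch using the IPs and a
3-replicas-per-DC layout from the question (DC names, keyspace name and snitch
are assumptions, adjust to your environment):

  # cassandra.yaml on every node: one seed per DC, e.g.
  #   seeds: "10.0.0.1,10.0.0.4"
  # cassandra-rackdc.properties (GossipingPropertyFileSnitch):
  #   dc=DC1     # dc=DC2 on the nodes in the second data centre
  #   rack=RAC1
  # keyspaces then replicate to both DCs:
  cqlsh -e "CREATE KEYSPACE my_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"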
Nandan,
you may find the following useful.
Slideshare:
https://www.slideshare.net/DataStax/apache-cassandra-multidatacenter-essentials-julien-anguenot-iland-internet-solutions-c-summit-2016
Youtube:
https://www.youtube.com/watch?v=G6od16YKSsA
From a client perspective, if you are targeting quor
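The snippet is cut off, but presumably the point is about quorum-level
consistency: with RF=3 in each DC, LOCAL_QUORUM only needs 2 replicas in the
local data centre, so a remote-DC outage does not fail local reads or writes.
A small sketch (keyspace/table names are made up):

  cqlsh 10.0.0.1
  cqlsh> CONSISTENCY LOCAL_QUORUM
  cqlsh> SELECT * FROM my_ks.my_table LIMIT 1;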
You're right, of course. Part of the reason it's changing so frequently is
to try to improve repairs so that they at least actually work reliably. C*
3 hasn't been the smoothest ride for repairs. Incremental repair wasn't
really ready for 3.0, so it was a mistake to make it the default.
Unfortunately
Hello,
In our production cluster, we have seen multiple times that after an *unclean*
shutdown, the Cassandra server cannot start due to commit log exceptions:
2017-09-17_06:06:32.49830 ERROR 06:06:32 [main]: Exiting due to error while
processing commit log during initialization.
2017-09-17_06:06:32.49831
o
Dear All,
The default is row_cache_save_period=0; does that mean the Row Cache does not
work in this situation?
But we can still see row cache hits:
Row Cache : entries 202787, size 100 MB, capacity 100 MB, 3095293
hits, 6796801 requests, 0.455 recent hit rate, 0 save period in seconds
Could
And we are using C* 2.1.18.
Hi Peng,
C* periodically saves its caches to disk to solve the cold start problem. If
row_cache_save_period=0, C* does not save the cache to disk. But the
cache still works if it's enabled in the table schema; it will just be empty
after a restart.
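A sketch to tie it together (table name is made up): the row cache needs both
a per-table setting and a non-zero row_cache_size_in_mb in cassandra.yaml;
row_cache_save_period only controls whether the in-memory cache is persisted
across restarts.

  # cassandra.yaml:
  #   row_cache_size_in_mb: 100    # 0 disables the row cache entirely
  #   row_cache_save_period: 0     # never saved to disk, rebuilt after restart
  cqlsh -e "ALTER TABLE my_ks.my_table
    WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};"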
--Dikang.
On Tue, Sep 19, 2017 at 8:27 PM,
Hi,
additionally, with saved (key) caches, we once had some sort of corruption
(for whatever reason). So, if you see something like this upon Cassandra
startup:
INFO [main] 2017-01-04 15:38:58,772 AutoSavingCache.java (line 114) reading
saved cache /var/opt/xxx/cassandra/saved_caches/
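The snippet is cut off here; a common remedy (an assumption on my part, not
something stated above) is that saved caches are only a warm-up optimisation,
so a corrupted one can simply be deleted while the node is down and will be
rebuilt over time:

  # service name and paths depend on your install
  sudo service cassandra stop
  sudo rm -f /var/opt/xxx/cassandra/saved_caches/*
  sudo service cassandra start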
Thanks All.
------ Original Message ------
From: "Steinmaurer, Thomas"
Date: Wednesday, 20 September 2017, 1:38
To: "user@cassandra.apache.org"
Subject: RE: Row Cache hit issue
It certainly violates the principle of least astonishment.
Generally, people with large clusters do it the same way they did in 2.1: with
ring-aware scheduling (which people running large clusters can probably do
because they're less likely to be using vnodes).
The conversation beyond this bel
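"Ring aware" here roughly means repairing one token slice at a time instead of
whole nodes at once. A hedged sketch (keyspace name and token values are
placeholders):

  # repair a single sub-range of the ring
  nodetool repair -full -st -9223372036854775808 -et -4611686018427387904 my_ks
  # or restrict each node to its primary ranges and walk the ring node by node
  nodetool repair -full -pr my_ks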