Hi all
I'm a bit confused with how read repair works in my case, which is:
- multiple DCs with RF 1 (NetworkTopologyStrategy)
- reads with consistency ONE
The article (#1) says that read repair in fact runs RF reads for some
percentage of the requests. Let's say I have read_repair_chance = 0.1. Does
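For context on what that setting does: with read_repair_chance = 0.1, roughly 10% of reads are randomly chosen to also query the remaining replicas and compare results. A minimal sketch of the chance check (an illustration of the idea, not Cassandra's actual code; `should_read_repair` is a hypothetical name):

```python
import random

def should_read_repair(read_repair_chance: float) -> bool:
    # A read triggers a cross-replica read repair when a uniform
    # random draw falls below the configured chance
    return random.random() < read_repair_chance

random.seed(42)  # deterministic for the demo
triggered = sum(should_read_repair(0.1) for _ in range(10_000))
print(triggered / 10_000)  # close to 0.1
```

Note that with RF=1 per DC the local datacenter has only a single replica, so a DC-local read repair has nothing to compare; if I read the docs right, only the global chance can pull in replicas from other DCs.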
You mean that the correct table UUID should be specified as a suffix in the
directory name?
For example:
Table:

cqlsh> select id from system_schema.tables where keyspace_name='test' and
table_name='usr';

 id
--------------------------------------
 ea2f6da0-f931-11e7-8224-43ca70555242

Directory name:
That’s correct.
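To make the naming concrete: the data directory name is the table name joined to the table id with its hyphens removed. A throwaway sketch of that derivation (`table_dir` is just an illustrative helper, not a Cassandra API):

```python
def table_dir(table_name: str, table_id: str) -> str:
    # Cassandra 3.x names table data directories <table_name>-<id-without-hyphens>
    return f"{table_name}-{table_id.replace('-', '')}"

print(table_dir("usr", "ea2f6da0-f931-11e7-8224-43ca70555242"))
# usr-ea2f6da0f93111e7822443ca70555242
```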
On Apr 21, 2018, 5:05 AM -0400, Kyrylo Lebediev wrote:
> You mean that the correct table UUID should be specified as a suffix in the
> directory name?
> For example:
>
> Table:
>
> cqlsh> select id from system_schema.tables where keyspace_name='test' and
> table_name='usr';
>
> id
> --
Read repairs are one anti-entropy measure. Continuous repairs are another. If
you do repairs via Reaper or your own method, they will resolve your discrepancies.
On Apr 21, 2018, 3:16 AM -0400, Grzegorz Pietrusza wrote:
> Hi all
>
> I'm a bit confused with how read repair works in my case, which is:
I haven't asked about "regular" repairs. I just wanted to know how read
repair behaves in my configuration (or is it doing anything at all).
2018-04-21 14:04 GMT+02:00 Rahul Singh :
> Read repairs are one anti-entropy measure. Continuous repairs are another.
> If you do repairs via Reaper or your
3.0.9
On Fri, Apr 20, 2018 at 10:26 PM, Michael Shuler wrote:
> On 04/20/2018 08:46 AM, Lou DeGenaro wrote:
> > Could you be more specific? What does one specify exactly to assure
> > SSLv2 is not used for both client-server and server-server
> > communications? Example yaml statements would be helpful.
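For what it's worth, the usual place to pin this down is the encryption sections of cassandra.yaml. A sketch using the common option names from the 3.x default file (verify against your own cassandra.yaml for your exact version; the JVM's enabled TLS protocols also constrain what can actually be negotiated):

```yaml
server_encryption_options:        # node-to-node
    internode_encryption: all
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
    protocol: TLS                 # do not set an SSLv2/SSLv3 value here

client_encryption_options:        # client-to-node
    enabled: true
    keystore: conf/.keystore
    keystore_password: cassandra
    protocol: TLS
```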
I consume data from Kafka and insert it into a Cassandra cluster using the
Java API. The table has 4 key columns, including a millisecond-based
timestamp. But when executing the code, it inserts only 120 to 190 rows and
ignores the other incoming data!
What could be the cause of the problem? Bad insert code?
Impossible to guess with that info, but maybe one of:
- “Wrong” consistency level for reads or writes
- Incorrect primary key definition (you’re overwriting data you don’t realize
you’re overwriting)
Less likely:
- Broken cluster where hosts are flapping and you’re missing data on read
- using a
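To illustrate the overwrite point: Cassandra writes are upserts, so two rows with the same full primary key silently replace each other, e.g. when a millisecond timestamp collides under load. A sketch of the effect, using a plain dict as a stand-in for a table keyed by (id, event_time) (hypothetical schema, not the poster's actual one):

```python
from datetime import datetime, timezone

# Stand-in for a Cassandra table with primary key (row_id, event_time):
# writes are upserts, so key collisions overwrite rather than append.
table = {}

def insert(row_id, event_time, payload):
    table[(row_id, event_time)] = payload  # last write wins

ts = datetime(2018, 4, 21, 12, 0, 0, 123000, tzinfo=timezone.utc)
insert("sensor-1", ts, "first")
insert("sensor-1", ts, "second")  # same millisecond: silently replaces "first"
print(len(table))  # 1 row, not 2
```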
Soheil,
As Jeff mentioned, you need to provide more information. There are no known
issues that I can think of that would cause such behavior. It would be great if
you could provide us with a reduced test case so we can try to reproduce this
behavior, or at least help you debug the issue be
I haven't checked the code to make sure this is still the case, but last
time I checked:
- For any read, if an inconsistency between replicas is detected, then this
inconsistency will be repaired. This obviously wouldn't apply with CL=ONE,
because you're not reading multiple replicas to find inconsistencies
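That mechanism can be sketched as: compare digests of the replica responses that the read actually touched, and on a mismatch write the winning value back. A simplified model of the idea (MD5 and `max()` stand in for Cassandra's digests and timestamp resolution; not the real code):

```python
import hashlib

def digest(value: bytes) -> str:
    # Replica responses are compared via digests (MD5 as a stand-in)
    return hashlib.md5(value).hexdigest()

def read_with_repair(replicas: dict, cl: int) -> bytes:
    queried = list(replicas)[:cl]  # consult only `cl` replicas
    if len({digest(replicas[r]) for r in queried}) > 1:
        # Mismatch detected: pick a winner (max() stands in for
        # "newest write timestamp") and write it back to the replicas
        winner = max(replicas[r] for r in queried)
        for r in queried:
            replicas[r] = winner
    return replicas[queried[0]]

replicas = {"dc1": b"old", "dc2": b"new"}
read_with_repair(replicas, cl=1)           # one replica read: nothing to compare
print(replicas["dc1"] == replicas["dc2"])  # False: divergence went unnoticed
read_with_repair(replicas, cl=2)           # two digests differ: repair fires
print(replicas["dc1"] == replicas["dc2"])  # True
```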