Hi Eunsu,
After going through the documentation, I think you are right: you shouldn't
use withUsedHostsPerRemoteDc, because it will contact nodes in other
datacenters. No, I don't use withUsedHostsPerRemoteDc; instead I use the
withLocalDc option.
On Tue, Sep 18, 2018 at 11:02 AM, Eunsu Kim wrote:
After adding new nodes to the cluster, should I rebuild SASI indexes on all nodes?
Yes, I altered the system_auth keyspace before adding the data center.
However, I suspect that the new data center did not receive the system_auth
data and therefore could not authenticate the client, because the new data
center was not given a replica count when the keyspace was altered.
Do your client
Hello Folks,
I need advice in deciding the compaction strategy for my C* cluster. There are
multiple jobs that will load the data with fewer inserts and more updates, but
no deletes. Currently I am using Size-Tiered compaction, but I am seeing auto
compactions kick in after the data load, and also read ti
Hello Eunsu,
I am also using PasswordAuthenticator in my Cassandra cluster. I didn't
come across this issue while doing this exercise on preprod.
Are you sure that you changed the configuration of the system_auth keyspace
before adding the new datacenter, using this:
ALTER KEYSPACE system_auth WITH RE
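For reference, a full form of that statement, assuming NetworkTopologyStrategy
and hypothetical datacenter names dc1 (existing) and dc2 (new) with a
replication factor of 3 each (adjust to your topology):

```sql
-- Hypothetical DC names and replication factors; adjust to your cluster.
-- Run this BEFORE the new datacenter joins so system_auth replicas are
-- placed there, then repair system_auth on the new nodes.
ALTER KEYSPACE system_auth WITH REPLICATION = {
  'class': 'NetworkTopologyStrategy',
  'dc1': 3,
  'dc2': 3
};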
Any clues on this topic?
Naidu Saladi
On Thursday, September 6, 2018 9:41 AM, Saladi Naidu
wrote:
We are receiving the following error:
9140- at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
[apache-cassandra-3.0.10.jar:3.0.10]
9141- at java.lang.Thread.run(Threa
Hi,
I have one table with 2 TB of data saved on each C* node.
If using LCS, the data will span 5 levels:
L1: 160M * 10 = 1.6G
L2: 1.6G * 10 = 16G
L3: 16G * 10 = 160G
L4: 160G * 10 = 1.6T
L5: 1.6T * 10 = 16T
When looking into the source code, I found an option: fanout_size.
The default value i
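For what it's worth, the ten-fold growth per level above corresponds to a
fanout of 10. In versions that expose fanout_size as a table-level compaction
subproperty (verify against your release before relying on it), a larger
fanout could be set roughly like this, with made-up keyspace/table names:

```sql
-- Hypothetical keyspace/table; 'fanout_size' availability depends on the
-- Cassandra version -- check your release notes before using it.
ALTER TABLE my_ks.big_table WITH compaction = {
  'class': 'LeveledCompactionStrategy',
  'sstable_size_in_mb': '160',
  'fanout_size': '20'
};
```

A larger fanout means fewer levels for the same data volume, at the cost of
more SSTables compacted together per level transition.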
In my case, there were authentication issues when adding data centers.
I was using a PasswordAuthenticator.
As soon as the datacenter was added, the following authentication error was
recorded in the client log file.
com.datastax.driver.core.exceptions.AuthenticationException: Authenticati
Also, for the record, I remember DataStax having something called Tiered
Storage that moves data around (folders/disk volumes) based on data age.
To be checked.
On Mon, Sep 17, 2018 at 10:23 PM, DuyHai Doan wrote:
> Sean
>
> Without transactions à la SQL, how can you guarantee atomicity between
Sean
Without transactions à la SQL, how can you guarantee atomicity between both
tables for upserts? I mean, one write could succeed for the hot table and
fail for the cold table.
The only solution I see is using a logged batch, with a huge overhead and
a perf hit on the writes.
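For illustration, the logged-batch approach under discussion would look
roughly like this (table names and values are made up):

```sql
-- Logged batch: Cassandra's batch log guarantees that both mutations are
-- eventually applied (atomicity, not isolation), at the cost of extra
-- write latency. Table names are hypothetical.
BEGIN BATCH
  INSERT INTO my_ks.events_hot  (id, payload) VALUES (123, 'x') USING TTL 86400;
  INSERT INTO my_ks.events_cold (id, payload) VALUES (123, 'x');
APPLY BATCH;
```

Note that a logged batch guarantees both writes eventually apply, but readers
may still observe one table updated before the other.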
On Mon, Sep 17, 2018 at
An idea:
On initial insert, insert into 2 tables:
Hot with short TTL
Cold/archive with a longer (or no) TTL
Then your hot data is always in the same table, but being expired, and you can
access the archive table only in the rarer circumstances. Then you could
have the HOT table on a differe
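A minimal sketch of that two-table layout, with assumed names and an assumed
7-day table-level default TTL on the hot table:

```sql
-- Hot table: rows expire automatically after 7 days (hypothetical value).
CREATE TABLE my_ks.events_hot (
  id uuid PRIMARY KEY,
  payload text
) WITH default_time_to_live = 604800;

-- Cold/archive table: no TTL, queried only in the rarer cases.
CREATE TABLE my_ks.events_cold (
  id uuid PRIMARY KEY,
  payload text
);
```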
Hello Alain,
Thank you very much for reviewing it. Your answer on seed nodes cleared my
doubts. I will update it as per your suggestion.
I have a few follow-up questions on decommissioning of a datacenter:
- Do I need to run nodetool repair -full on each of the nodes (old + new DC
nodes) before starti
Hello Pradeep,
It looks good to me and it's a cool runbook for you to follow and for
others to reuse.
> To make sure that cassandra nodes in one datacenter can see the nodes of
> the other datacenter, add the seed node of the new datacenter in any of the
> old datacenter’s nodes and restart that no
> On Sep 17, 2018, at 7:29 AM, Oleksandr Shulgin
> wrote:
>
> On Mon, Sep 17, 2018 at 4:04 PM Jeff Jirsa wrote:
>>> Again, given that the tables are not updated anymore from the application
>>> and we have repaired them successfully multiple times already, how can it
>>> be that any incons
On Mon, Sep 17, 2018 at 4:04 PM Jeff Jirsa wrote:
> Again, given that the tables are not updated anymore from the application
> and we have repaired them successfully multiple times already, how can it
> be that any inconsistency would be found by read-repair or normal repair?
>
> We have seen th
It could also be https://issues.apache.org/jira/browse/CASSANDRA-2503
On Mon, Sep 17, 2018 at 4:04 PM Jeff Jirsa wrote:
>
>
> On Sep 17, 2018, at 2:34 AM, Oleksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
>
> On Tue, Sep 11, 2018 at 8:10 PM Oleksandr Shulgin <
> oleksandr.shul...@zaland
> On Sep 17, 2018, at 2:34 AM, Oleksandr Shulgin
> wrote:
>
>> On Tue, Sep 11, 2018 at 8:10 PM Oleksandr Shulgin
>> wrote:
>>> On Tue, 11 Sep 2018, 19:26 Jeff Jirsa, wrote:
>>> Repair or read-repair
>>
>>
>> Could you be more specific please?
>>
>> Why any data would be streamed in if th
From
https://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html
> Cassandra allows you to set a default_time_to_live property for an entire
> table. Columns and rows marked with regular TTLs are processed as described
> above; but when a record exceeds the table-level TTL, **Cassand
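For context, such a table-level default can also be applied to an existing
table (hypothetical table name; the value is in seconds, and 0 disables it):

```sql
-- Sets a 30-day default TTL for rows written from now on.
-- Rows already on disk keep whatever TTL they were written with.
ALTER TABLE my_ks.events WITH default_time_to_live = 2592000;
```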
Hello everyone,
Can someone please help me validate the steps I am following to
migrate the Cassandra snitch?
Regards,
Pradeep
On Wed, Sep 12, 2018 at 1:38 PM, Pradeep Chhetri
wrote:
> Hello
>
> I am running cassandra 3.11.3 5-node cluster on AWS with SimpleSnitch. I
> was testing the process
On Tue, Sep 11, 2018 at 8:10 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Tue, 11 Sep 2018, 19:26 Jeff Jirsa, wrote:
>
>> Repair or read-repair
>>
>
> Could you be more specific please?
>
> Why any data would be streamed in if there is no (as far as I can see)
> possibilities