>
> How can we prevent a disconnected DC from coming back automatically?
You could use firewall rules to prevent the disconnected DC from contacting
your live DCs when it becomes live again.
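For example, a rough sketch with iptables on each live node; 10.30.0.0/16 is a
hypothetical subnet standing in for the disconnected DC, and 7000/7001 are the
default internode ports:

    # drop gossip/internode traffic coming from the disconnected DC's subnet
    iptables -A INPUT -s 10.30.0.0/16 -p tcp --dport 7000 -j DROP
    # same for the SSL internode port, if encryption is enabled
    iptables -A INPUT -s 10.30.0.0/16 -p tcp --dport 7001 -j DROP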
Mark
On Thu, Aug 14, 2014 at 6:48 AM, Lu, Boying wrote:
> Hi, All,
>
>
>
> We are using Cassandra 2.0
Hey,
Not sure if that's what you're looking for, but you can use
auto_bootstrap: false in your yaml file to prevent nodes from
bootstrapping themselves on startup. The option has been removed from the
default cassandra.yaml and now defaults to true, but you can still add it
to your configuration.
There's a bit of documentation h
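A minimal sketch of that, assuming a package install with cassandra.yaml under
/etc/cassandra/ (the path may differ on your system):

    # append the option before starting the node; it is absent from the shipped file
    echo 'auto_bootstrap: false' | sudo tee -a /etc/cassandra/cassandra.yaml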
Thanks a lot.
But we want to block all communication to/from the disconnected VDC
without rebooting it.
From: Artur Kronenberg [mailto:artur.kronenb...@openmarket.com]
Sent: August 14, 2014 17:00
To: user@cassandra.apache.org
Subject: Re: How to prevent the removed DC from coming back automatically?
Hey,
Hello, I created https://issues.apache.org/jira/browse/CASSANDRA-7766 about
that
Fabrice LARCHER
2014-08-13 14:58 GMT+02:00 DuyHai Doan :
> Hello Fabrice.
>
> A quick hint, try to create your secondary index WITHOUT the "IF NOT
> EXISTS" clause to see if you still have the bug.
>
> Another ide
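For reference, the two forms being compared, sketched with hypothetical
keyspace/table/column names:

    # with the clause that may be triggering the bug
    cqlsh -e "CREATE INDEX IF NOT EXISTS my_idx ON my_ks.my_table (my_col);"
    # without it, as suggested above
    cqlsh -e "CREATE INDEX my_idx ON my_ks.my_table (my_col);"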
Hi, All,
We have a Cassandra 2.0.7 cluster running in three connected DCs, say DC1, DC2 and DC3.
DC3 is powered off, so we run the 'nodetool removenode' command from a node in DC1
to remove all nodes of DC3.
Do we need to run the same command on DC2?
Thanks
Boying
Hi,
Gossip will propagate to all nodes in a cluster. So if you have a cluster
spanning DC1, DC2 and DC3 and you then remove all nodes in DC3 via nodetool
removenode from a node in DC1, all nodes in both DC1 and DC2 will be
informed of the node removals, so there is no need to run it from a node in DC2.
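A minimal sketch, run from any live node (the host IDs are whatever nodetool
status reports for the dead DC3 nodes):

    nodetool status                 # note the Host ID of each down DC3 node
    nodetool removenode <host-id>   # repeat for every DC3 node
    nodetool removenode status      # optional: check progress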
Mark
Thanks a lot ☺
From: Mark Reddy [mailto:mark.re...@boxever.com]
Sent: August 14, 2014 18:02
To: user@cassandra.apache.org
Subject: Re: A question to nodetool removenode command
Hi,
Gossip will propagate to all nodes in a cluster. So if you have a cluster
spanning DC1, DC2 and DC3 and you then remove
Hi all,
I have a question about communication between two data-centers, both with
replication-factor three.
If I read data using local_quorum from datacenter1, I see that digest
requests are sent to datacenter2. This is for read-repair I guess. How can
I prevent this from happening? Setting read_
The dclocal_read_repair_chance option on the table is your friend:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/reference/referenceTableAttributes.html?scroll=reference_ds_zyq_zmz_1k__dclocal_read_repair_chance
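A hedged sketch against a CQL 3 table, with hypothetical keyspace/table names:

    # keep some DC-local read repair, disable the cross-DC variety
    cqlsh -e "ALTER TABLE my_ks.my_table WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.1;"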
On Thu, Aug 14, 2014 at 4:53 PM, Rene Kochen
wrote:
> Hi all,
>
> I hav
I am using 1.0.11, so I only have read_repair_chance.
However, after testing I see that read_repair_chance does work for
local_quorum.
Based on this site:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architectureClientRequestsRead_c.html
I got the impression that rea
On Thu, Aug 14, 2014 at 9:24 AM, Rene Kochen
wrote:
> I am using 1.0.11, so I only have read_repair_chance.
>
I'm sure this goes without saying, but you should upgrade to the head of
1.2.x (probably via 1.1.x) ASAP. I would not want to operate 1.0.11 in
production in 2014.
=Rob
On Thu, Aug 14, 2014 at 1:59 AM, Artur Kronenberg <
artur.kronenb...@openmarket.com> wrote:
> Not sure if that's what you're looking for, but you can use
> auto_bootstrap: false in your yaml file to prevent nodes from bootstrapping
> themselves on startup. The option has been removed from the defau
Hi all,
I just installed DataStax Enterprise 4.5. I installed OpsCenter
Server on one of my four machines. The port that OpsCenter usually
uses (8888) was already in use by something else, so I modified
/usr/share/opscenter/conf/opscenterd.conf to set the port to 8889.
When I log into OpsCenter, it says
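For reference, a hedged sketch of that change, assuming the usual [webserver]
section layout in opscenterd.conf:

    # in /usr/share/opscenter/conf/opscenterd.conf, under [webserver]:
    #   port = 8889
    sudo service opscenterd restart   # restart OpsCenter to pick up the new port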
We use log structured tables to hold logs for analysis.
It's basically append-only and immutable, and every record inserted has a
timestamp.
Having this in ONE big monolithic table can be problematic:
1. Compactions have to compact old data that might not even be used often.
2.
When adding nodes via bootstrap to a 27-node 2.0.9 cluster with a
cluster-wide phi_convict_threshold of 12, the nodes fail to bootstrap.
This worked a half dozen times in the past few weeks as we've scaled
this cluster from 21 to 24 and then to 27 nodes. There have been no
configuration or Cassandra
Hi Clint,
You need to configure the datastax-agents so they know what machine
OpsCenter is running on. To do this you will need to edit the address.yaml
of the datastax-agent, located in /var/lib/datastax-agent/conf/. In this
file you need to add the following line:
stomp_interface:
This will
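A minimal sketch per node; 192.0.2.10 is only a placeholder for the machine
OpsCenter runs on:

    # point the agent at the OpsCenter host and restart it
    echo 'stomp_interface: 192.0.2.10' | sudo tee -a /var/lib/datastax-agent/conf/address.yaml
    sudo service datastax-agent restart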