Hi!
I have a 2.0.13 cluster which I have just extended, and I'm now looking
into upgrading it to 2.1.
* The cleanup after the extension is partially done.
* I'm also looking into switching a few tables to Leveled Compaction
Strategy (LCS).
In the interest of speeding things up by avoiding unnecessary…
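For reference, the switch itself is a single ALTER statement per table; something like this (keyspace and table names are placeholders):

$ cqlsh -e "ALTER TABLE my_keyspace.my_table
    WITH compaction = {'class': 'LeveledCompactionStrategy'};"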
Hi,
my understanding after reading the upgrade doc and looking through the
mailing lists was the following:
1. Before the upgrade, always make sure your sstables are on the latest
format for the Cassandra version you are running (by running
nodetool upgradesstables on all nodes - if they are…
I'm sorry, I've just noticed I left out a few words:
1. Before the upgrade, always make sure your sstables are on the latest
format for the Cassandra version you are running (by running
nodetool upgradesstables on all nodes - if they are already current, the
command should return almost immediately);
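Concretely, that check is just the following, run on each node (without flags it skips sstables that are already on the current format):

$ nodetool upgradesstables    # near-instant no-op when everything is already current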
I remembered that Titan treats edges (and vertices?) as immutable and deletes
and re-creates the entity on every change.
So I set gc_grace_seconds to 0 for every table in the Titan keyspace and
ran a major compaction. However, this made the situation worse: instead of
roughly 2’700 TCP packets…
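For illustration, the per-table change would look roughly like this, using the edgestore table as the example (the full table list in the Titan keyspace may differ):

$ cqlsh -e 'ALTER TABLE "Titan".edgestore WITH gc_grace_seconds = 0;'
$ nodetool compact Titan edgestore    # major compaction of just this table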
The error I am getting is this:
$ ./sstableloader -d 10.211.55.8 -f ../conf/cassandra.yaml -v ~/Downloads/
ams0002-cassandra-20160523-1035/var/lib/cassandra/data/Titan/edgestore-8bcd2300d0d011e5a3ab233f92747e94/
objc[18941]: Class JavaLaunchHelper is implemented in both
/Library/Java/JavaVirtualMachines/jdk1.8.0_77…
13:41:18 Binding thrift service to /10.211.55.8:9160
> INFO 13:41:18 Listening for thrift clients...
>
>
> The error I am getting is this:
>
> $ ./sstableloader -d 10.211.55.8 -f ../conf/cassandra.yaml -v ~/Downloads/
>
> ams0002-cassandra-20160523-1035/var/lib/ca…
I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
gce-us-east1. I increased the replication factor of gce-us-central1 from 1
to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for
the node switched to 100% as it should, but the Load showed that it didn't
actually sync…
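For context, the moving parts would look something like this (the keyspace name and the gce-us-east1 RF are placeholders):

$ cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'gce-us-central1': 2, 'gce-us-east1': 1};"
$ nodetool repair -dc gce-us-central1    # repair only the DC whose RF was raised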
Hi Cassandra users,
Is there a way to find out whether auto_bootstrap is set to false on a Cassandra
node if we don't know the location of cassandra.yaml or the Cassandra
installation directory (e.g., through means like JMX)?
Thank you !
Regards,
Rajath
find / -name 'cassandra.yaml' -exec grep -nH auto_bootstrap {} \;
On Mon, May 23, 2016 at 3:44 PM Rajath Subramanyam
wrote:
> Hi Cassandra users,
>
> Is there a way to find if auto_bootstrap is set to false on a Cassandra
> node if we didn't know the location of the cassandra.yaml or the Cassandra installation directory…
You may also check the system.log; loaded properties are logged on node
startup.
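For example, assuming the default package-install log location:

$ grep auto_bootstrap /var/log/cassandra/system.log    # startup logging includes the loaded node configuration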
2016-05-23 19:55 GMT-03:00 Jonathan Haddad :
>
> find / -name 'cassandra.yaml' -exec grep -nH auto_bootstrap {} \;
>
> On Mon, May 23, 2016 at 3:44 PM Rajath Subramanyam
> wrote:
>
>> Hi Cassandra users,
>>
>> Is there a way to find if auto_bootstrap is set to false on a Cassandra…
Do you have 1 node in each DC or 2? If you're saying you have 1 node in
each DC, then an RF of 2 doesn't make sense. Can you clarify what your
setup is?
On 23 May 2016 at 19:31, Luke Jolly wrote:
> I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
> gce-us-east1. I increased the…
Hello,
Suppose we have 2 DCs and we know that the data is correctly replicated in
both. In such a situation, is it safe to "remove" one of the DCs by simply doing
a "nodetool removenode" followed by a "nodetool removenode force" for each node
in that DC (instead of doing a "nodetool decommission" on each node)?
If you remove one node at a time, you'll eventually end up with a single node in
the DC you're decommissioning which will own all of the data, and you'll likely
overwhelm that node.
It's typically recommended that you ALTER the keyspace to remove the replication
settings for that DC, and then you can safely decommission the nodes in that DC.
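In concrete terms, something like this (keyspace and DC names are placeholders; the DC being removed is simply dropped from the replication map):

$ cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'dc_keep': 3};"

and then, on each node in the removed DC:

$ nodetool decommission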
Thanks for the hint! Indeed I could not telnet to the host. It was the
listen_address that was not properly configured.
Thanks again!
Ralf
> On 23.05.2016, at 21:01, Paulo Motta wrote:
>
> Can you telnet 10.211.55.8 7000? This is the port used for streaming
> communication with the destination…
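For anyone hitting the same symptom, the quick checks look something like this (the cassandra.yaml path assumes a package install):

$ nc -vz 10.211.55.8 7000                                  # is the streaming port (storage_port) reachable?
$ grep -E '^(listen_address|storage_port)' /etc/cassandra/cassandra.yaml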
I used to think it was a firewall/network issue too, so I made ufw
inactive. I really don't know what the reason is.
2016-05-09 19:01 GMT+08:00 kurt Greaves :
> Don't be fooled, despite saying tcp6 and :::*, it still listens on IPv4.
> As far as I'm aware this happens on all 2.1 Cassandra nodes, and…
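A quick way to see what a node is actually bound to (a tcp6 socket shown as :::* still accepts IPv4 connections unless net.ipv6.bindv6only is set):

$ netstat -tln | grep -E ':(7000|9042) '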