Hello,
After switching from JDK8u152 to JDK8u162, Cassandra fails with the following
stack trace upon startup.
ERROR [main] 2018-01-18 07:33:18,804 CassandraDaemon.java:706 - Exception
encountered during startup
java.lang.AbstractMethodError:
org.apache.cassandra.utils.JMXServerUtils$Exporter.
Didn't know that about auto_bootstrap and the algorithm. We should probably
fix that. Can you create a JIRA for that issue? Workaround for #2 would be
to truncate system.available_ranges after "bootstrap".
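The workaround could look like this from cqlsh (a sketch only; run it against the affected node once the bootstrap has finished, and note that touching system tables should be done with care):

```cql
-- Workaround sketch for #2: clear the stale range-availability bookkeeping.
-- Assumption: cqlsh can reach the affected node directly.
TRUNCATE system.available_ranges;
```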
On 17 January 2018 at 17:26, Oleksandr Shulgin wrote:
> On Wed, Jan 17, 2018 at 4:21 AM, k
For what it’s worth, it’s going to be a lot faster to rsync the data to a new
node and replace the old one than to decommission and bootstrap.
> On Jan 17, 2018, at 3:20 PM, Jerome Basa wrote:
>
>> What C* version are you working with?
> 3.0.14
>
>> What is the reason you're decommissioning th
> What C* version are you working with?
3.0.14
> What is the reason you're decommissioning the node? Any issues with it?
upgrading instances.
> Pending tasks --- you mean output of 'nodetool tpstats'?
pending tasks when I run `nodetool compactionstats`
eventually it started streaming data to ot
I *strongly* recommend disabling dynamic snitch. I’ve seen it make latency
jump 10x.
dynamic_snitch: false is your friend.
> On Jan 17, 2018, at 2:00 PM, Kyrylo Lebediev wrote:
>
> Avi,
> If we prefer to have better balancing [like absence of hotspots during a node
> down event etc], la
Avi,
If we prefer to have better balancing [like the absence of hotspots during a
node-down event, etc.], a large number of vnodes is a good solution.
Personally, I wouldn't prefer any balancing over overall resiliency (and in
case of a non-optimal setup, a larger number of nodes in a cluster decreases
o
Kurt, thanks for your recommendations.
Makes sense.
Yes, we're planning to migrate the cluster and change endpoint_snitch to
an "AZ-aware" one.
Unfortunately, I'm 'not good enough' at math and have to think about how to
calculate probabilities for the case of vnodes (whereas the case "without vnodes" s
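The "without vnodes" case mentioned above has a closed form. A small sketch (assuming SimpleStrategy-style placement, where the RF replicas of a range sit on consecutive ring nodes, and exactly RF simultaneous failures):

```python
# Probability that some token range loses all replicas, given exactly rf
# simultaneous node failures in an n-node single-token-per-node cluster.
# A range is lost only when rf ring-adjacent nodes all fail; a ring of n
# nodes has exactly n such runs, out of comb(n, rf) equally likely sets.
from math import comb

def p_range_unavailable(n: int, rf: int = 3) -> float:
    return n / comb(n, rf)

print(p_range_unavailable(20))  # 20/1140, about 1.75%
```

With a large num_tokens, replica sets are spread so widely that almost any set of rf failed nodes shares some range, so the corresponding probability approaches 1.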
>
> We use Oracle jdk1.8.0_152 on all nodes, and as I understand it, Oracle uses
> a dot in the protocol name (TLSv1.2). I use the same protocol and cipher
> names on the 3.0.14 nodes and the one I'm trying to upgrade to 3.11.1.
>
I agree with Stefan's assessment and share his confusion. Would you be
Hi Jerome,
I don't know the reason for this, but compactions do run during 'nodetool
decommission'.
What C* version are you working with?
What is the reason you're decommissioning the node? Any issues with it?
Can you see any errors/warnings in system.log on the node being decommissioned?
Pending
Well, that's a shame...
That part of the code has been changed in trunk, and now it uses
BootStrapper.getBootstrapTokens() instead of getRandomToken() when auto
bootstrap is disabled:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L938
I wa
Hello!
I am using Cassandra 3.10, on Ubuntu 14.04 and I have a counter
table(RF=1), with the following schema:
CREATE TABLE edges (
    src_id text,
    src_type text,
    source text,
    weight counter,
    PRIMARY KEY ((src_id, src_type), source)
) WITH
compaction = {'class':
'org.apache.cas
On Wed, Jan 17, 2018 at 4:21 AM, kurt greaves wrote:
> I believe you are able to get away with just altering the keyspace to
> include both DC's even before the DC exists, and then adding your nodes to
> that new DC using the algorithm. Note you'll probably want to take the
> opportunity to reduc
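The keyspace alteration described above could look roughly like this (keyspace and DC names are placeholders; adjust replication factors to your setup):

```cql
-- Sketch: include the new DC in the replication map before its nodes join,
-- then bootstrap the new nodes into 'dc2' using the algorithm.
ALTER KEYSPACE my_keyspace WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'dc1': 3,
  'dc2': 3
};
```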
hi,
am currently decommissioning a node and monitoring it using `nodetool
netstats`. i’ve noticed that it hasn’t started streaming any data and
it’s doing compaction (like 600+ pending tasks). the node is marked as
“UL” when i run `nodetool status`.
has anyone seen anything like this before? am thinking o
We have found it very useful to set up an infrastructure where we can execute a
nodetool command (or any other arbitrary command) from a single (non-Cassandra)
host that will get executed on each node across the cluster (or a list of
nodes).
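A minimal sketch of such a fan-out, assuming passwordless SSH from the admin host to every node (hostnames here are placeholders; the dry-run mode just shows what would be executed):

```python
# Fan a nodetool command (or any command) out to a list of cluster nodes.
import subprocess

HOSTS = ["node1", "node2", "node3"]  # placeholder hostnames

def run_everywhere(command: str, dry_run: bool = True) -> dict:
    results = {}
    for host in HOSTS:
        argv = ["ssh", host, command]
        if dry_run:
            # Just record the command line that would be sent to the node.
            results[host] = " ".join(argv)
        else:
            results[host] = subprocess.run(
                argv, capture_output=True, text=True).stdout
    return results

for host, line in run_everywhere("nodetool status").items():
    print(f"[{host}] {line}")
```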
Sean Durity
From: Alain RODRIGUEZ [mailto:arodr...@
Hello!
I am using Cassandra 3.10.
I have a counter table, with the following schema and RF=1
CREATE TABLE edges (
    src_id text,
    src_type text,
    source text,
    weight counter,
    PRIMARY KEY ((src_id, src_type), source)
);
The SELECT vs UPDATE request ratio for this table is 0.1
READ vs W
We use Oracle jdk1.8.0_152 on all nodes, and as I understand it, Oracle uses
a dot in the protocol name (TLSv1.2). I use the same protocol and cipher
names on the 3.0.14 nodes and the one I'm trying to upgrade to 3.11.1.
On 2018-01-17 15:02, Georg Brandemann wrote:
If i remember correctly the pro
If I remember correctly, the protocol names differ between some JRE vendors.
With IBM Java, for instance, the protocol name would be TLSv12 (without the dot).
Are you using the same JRE on all nodes, and are the protocol and cipher
names exactly the same on all nodes?
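For reference, these settings live under server_encryption_options in cassandra.yaml; a hypothetical fragment (all paths, passwords, and the cipher list are placeholders -- the names must match what the node's JRE reports):

```yaml
# cassandra.yaml (fragment) -- illustrative values only.
# Oracle/OpenJDK name the protocol "TLSv1.2"; some IBM JDKs used "TLSv12".
server_encryption_options:
    internode_encryption: all
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
    protocol: TLSv1.2
    cipher_suites: [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
```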
2018-01-17 14:51 GMT+01:00 Tomm
Thanks for your response.
I got it working by removing my protocol setting from the configuration
on the 3.11.1 node so it uses the default protocol setting. I'm not sure
exactly how that changes things, so I need to investigate. We don't
have any custom ssl settings that should affect this
Thanks for your response.
I removed the protocol setting from the server_encryption_options on the
3.11.1 node so it uses the default value instead, and now it works. I have
to analyse whether this has any impact on my security requirements, but at
least it's working now.
/Tommy
On 2018-01-16 17:26
On the flip side, a large number of vnodes is also beneficial. For
example, if you add a node to a 20-node cluster with many vnodes, each
existing node will contribute 5% of the data towards the new node, and
all nodes will participate in streaming (meaning the impact on any
single node will be
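The arithmetic behind that 5% figure can be sketched as follows (assuming perfectly even token distribution across many vnodes):

```python
# When one node joins an n-node cluster with many vnodes, it ends up owning
# 1/(n+1) of the data, drawn evenly from all n existing nodes -- so each
# existing node sheds 1/(n+1) of its own data.
def share_streamed_per_node(n: int) -> float:
    return 1.0 / (n + 1)

print(f"{share_streamed_per_node(20):.1%}")  # about 4.8%, i.e. roughly 5%
```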
I think what this error indicates is that a client is trying to connect
using a SSLv2Hello handshake, while this protocol has been disabled on
the server side. Starting with the mentioned ticket, we use the JVM
default list of enabled protocols. What makes this issue a bit
confusing is that starti
Justin,
NVMe drives have their own I/O queueing mechanism, and there is a huge
performance difference vs the Linux queue.
In addition to a properly configured file system and scheduler, try setting
"scsi_mod.use_blk_mq=1"
in the grub cmdline.
If you are looking for the BFQ scheduler, it's probably a module, so you wi
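On a GRUB2-based distro, that kernel parameter would typically go here (a sketch; the exact file and existing flags vary by distribution):

```shell
# /etc/default/grub (fragment) -- append the parameter to the existing flags.
GRUB_CMDLINE_LINUX_DEFAULT="quiet scsi_mod.use_blk_mq=1"
```

Then regenerate the config (e.g. `sudo update-grub` on Ubuntu) and reboot for it to take effect.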