The speed at which compactions operate is also physically restricted by the
speed of the disk. If the disks used on the new node are HDDs, then
increasing the compaction throughput will be of little help. However, if
the disks on the new node are SSDs, then increasing the compaction
throughput to at
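For reference, the compaction cap can be checked and raised at runtime with standard nodetool commands; the 128 MB/s figure below is only an example, not a recommendation for any particular hardware:

    # Check the current compaction throughput cap (in MB/s)
    nodetool getcompactionthroughput

    # Raise it on the joining node, e.g. to 128 MB/s (0 removes the throttle)
    nodetool setcompactionthroughput 128

    # To make the change survive a restart, set the equivalent cassandra.yaml option:
    #   compaction_throughput_mb_per_sec: 128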
The Cassandra version is 3.0.9.
I have changed the heap size (to about 32 GB). Also, the streaming throughput is
set to 800 MB/sec, and streaming_socket_timeout_in_ms is left at its default of 8640.
I suspect the compaction throughput has an influence on how quickly the new node joins.
The command nodetool getco
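For what it's worth, the streaming settings mentioned above can be inspected and changed without a restart; the values below are only examples, and note that both nodetool and the yaml option are in megabits per second:

    # Current outbound stream throughput cap (megabits/s)
    nodetool getstreamthroughput

    # Raise it at runtime, e.g. to 800 megabits/s (0 disables throttling)
    nodetool setstreamthroughput 800

    # Equivalent cassandra.yaml settings (3.0.x names):
    #   stream_throughput_outbound_megabits_per_sec: 800
    #   streaming_socket_timeout_in_ms: <streaming socket timeout, in ms>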
Adding a new node is really slow when you have a large load (for me, slow means
several hours). So I'm interested: is there any way to speed up the process of
adding a new node?
Best Regards,
-Simon
From: qf zhou
Sent: 2018-01-03 11:30
To: user@cassandra.apache.org
Subject: Cassandra cluster add new
> I use zipkin (https://github.com/openzipkin/zipkin) to trace my system.
>
> When I upgraded to the latest version, 3.23 to be specific, I ran into a problem
> where our monitor keeps alerting that there is not enough disk space for
> Cassandra.
You're right. CONTAINS SASI indexes do indeed use a lot of disk
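For anyone hitting the same thing, this is the kind of index in question; the keyspace, table, column and index names below are made up, and dropping the index (or recreating it in PREFIX mode) is one way to win the space back:

    # Purely illustrative: a CONTAINS-mode SASI index created through cqlsh
    cqlsh -e "
      CREATE CUSTOM INDEX IF NOT EXISTS trace_name_contains_idx
        ON my_ks.traces (name)
        USING 'org.apache.cassandra.index.sasi.SASIIndex'
        WITH OPTIONS = {'mode': 'CONTAINS'};
    "

    # Drop the index if the disk cost is not worth it
    cqlsh -e "DROP INDEX IF EXISTS my_ks.trace_name_contains_idx;"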
>
> I want to upgrade from 2.x to 3.x.
>
> I can definitely use the features in 3.11.1 but it's not a must.
> So my question is: is 3.11.1 stable and suitable for production compared
> to 3.0.15?
>
Use 3.11.1 and don't use any 3.0.x or 3.x features.
3.11.1 is effectively three sequential patch re
I can certainly try that. No problem there.
However, wouldn’t we then get this kind of error if that were the case:
java.lang.RuntimeException: Cannot start multiple repair sessions over the same
sstables
?
Hannu
> On 3 Jan 2018, at 20:50, Nandakishore Tokala wrote:
>
> hi Hannu,
>
> I thi
You don't mention the version, but here are some general suggestions:
- 2 GB heap is very small for a node, especially with 1 TB+ of data.
What is the physical RAM on the host? In general, you want ½ of physical RAM
for the JVM. (Look in jvm.options or cassandra-env.sh; see the sketch below.)
- You
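For the heap point above, a minimal sketch of where the setting lives; the 16 GB figures are only an example and should be derived from the actual RAM on the host:

    # cassandra-env.sh (2.x, still honoured in 3.x):
    #   MAX_HEAP_SIZE="16G"
    #   HEAP_NEWSIZE="1600M"
    #
    # jvm.options (3.x), uncomment/set:
    #   -Xms16G
    #   -Xmx16G

    # Check what the running node actually got:
    nodetool info | grep -i heap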
Hi Hannu,
I think some of the repairs are hanging there. Please restart all the nodes
in the cluster and then start the repair again.
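Before restarting everything, it may be worth confirming that repairs really are stuck; a few standard commands usually show it (nothing here is specific to this cluster):

    # Validation compactions that never finish show up here
    nodetool compactionstats

    # Lingering repair streams
    nodetool netstats | grep -i repair

    # Pending or blocked AntiEntropy tasks also point at stuck repairs
    nodetool tpstats | grep -i antientropy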
Thanks
Nanda
On Wed, Jan 3, 2018 at 9:35 AM, Hannu Kröger wrote:
> Additional notes:
>
> 1) If I run the repair just on those tables, it works fine
> 2) Those tables are
Additional notes:
1) If I run the repair just on those tables, it works fine
2) Those tables are empty
Hannu
> On 3 Jan 2018, at 18:23, Hannu Kröger wrote:
>
> Hello,
>
> Situation is as follows:
>
> Repair was started on node X on this keyspace with --full --pr. Repair fails on
> node Y.
>
Hello,
Situation is as follows:
Repair was started on node X on this keyspace with --full --pr. Repair fails on
node Y.
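For concreteness, the invocation described above looks roughly like this (the keyspace name is a placeholder):

    # Full (non-incremental) repair of the node's primary token ranges only
    nodetool repair --full -pr my_keyspace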
Node Y has debug logging on (DEBUG on org.apache.cassandra) and I’m looking at
the debug.log. I see the following messages related to this repair request:
---
DEBUG [AntiE
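In case it helps later readers: DEBUG on org.apache.cassandra can be toggled at runtime, without a restart or a logback edit, using standard nodetool commands:

    # Enable DEBUG for the whole org.apache.cassandra tree on node Y
    nodetool setlogginglevel org.apache.cassandra DEBUG

    # Confirm the active levels
    nodetool getlogginglevels

    # Put it back when finished
    nodetool setlogginglevel org.apache.cassandra INFO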
A little update.
I've managed to compute the token, and I can indeed SELECT the row from
CQLSH.
Interestingly enough, if I use CQLSH I do not get the exception (even if
the string is printed out).
I am now wondering whether, instead of data corruption, the error is
related to the reading path use
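For anyone who wants to reproduce the lookup, the whole thing can be done from cqlsh; the keyspace, table, column and token value below are placeholders:

    # Show the token computed for each partition key
    cqlsh -e "SELECT token(id), id FROM my_ks.my_table LIMIT 10;"

    # Read a single partition back by its token (value is a placeholder)
    cqlsh -e "SELECT * FROM my_ks.my_table WHERE token(id) = 1234567890;"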