I just configured a 3-node cluster in this way and was able to reproduce
the warning message:
cqlsh> select peer, rpc_address from system.peers;
 peer      | rpc_address
-----------+-------------
 127.0.0.3 | 127.0.0.1
 127.0.0.2 | 127.0.0.1

(2 rows)
cqlsh> select rpc_address from system.local;
Thank you for the information!
On Thu, Jun 20, 2019 at 9:50 AM Alexander Dejanovski wrote:
> Léo,
>
> if a major compaction isn't a viable option, you can give the Instaclustr
> SSTable tools a go to target the partitions with the most
> tombstones:
> https://github.com/instaclustr/cassandra-sstable-tools/tree/cassandra-2.2#ic-purge
One thing that strikes me is that the endpoint reported is '127.0.0.1'. Is
it possible that you have rpc_address set to 127.0.0.1 on each of your
three nodes in cassandra.yaml? The driver uses the system.peers table to
identify nodes in the cluster and associates them by rpc_address. Can you
verify?
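As an illustration of that lookup, here is a minimal sketch (DataStax java-driver
4.x API; contact point left at the localhost default, class name is just a
placeholder) that prints each node the driver knows about together with the
broadcast RPC address it was associated with:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.metadata.Node;

public class ListNodes {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // The driver builds this node map from system.local and system.peers;
            // if every node advertises rpc_address 127.0.0.1 the addresses collide.
            for (Node node : session.getMetadata().getNodes().values()) {
                System.out.printf("%s -> broadcast_rpc_address=%s%n",
                        node.getEndPoint(),
                        node.getBroadcastRpcAddress().orElse(null));
            }
        }
    }
}

If all three nodes show up with the same broadcast RPC address, that would be
consistent with the duplicate-token warning.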
There’s a reasonable chance this is a bug in the DataStax driver - you may want to
start there when debugging.
It’s also just a warning, and the two entries with the same token are the same
endpoint, which doesn’t seem concerning to me, but I don’t know the DataStax
driver that well.
> On Jun 20, 2019
It appears that no such warning is issued if I connect to Cassandra from a
remote server rather than locally.
From: Котельников Александр
Reply-To: "user@cassandra.apache.org"
Date: Thursday, 20 June 2019 at 10:46
To: "user@cassandra.apache.org"
Subject: Unexpected error while refreshing token map,
Hello,
Assuming your nodes have been out for a while and you don't need the data after 60
days (or cannot get it anyway), the way to fix this is to force the node
out. I would try, in this order:
- nodetool removenode HOSTID
- nodetool removenode force
These 2 might really not work at this stage, but i
Hello,
This looks more like a JanusGraph question, so I would rather ask in the
support channels for that tool instead.
I have never seen anyone here or elsewhere using JanusGraph; after searching I
only found 4 threads about it here. Thus I think even people
knowing Cassandra monitoring/metrics very w
Hello Aneesh,
Reading your message and the answers given, I really think this post I wrote
about 3 years ago now (how quickly time goes by...) about tombstones
might be of interest to you:
https://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html.
Your problem is not related to t
Also, about your traces, and according to Jeff in another thread:
> the incomplete sstable will be deleted during startup (in 3.0 and newer
> there’s a transaction log of each compaction in progress - that gets
> cleaned during the startup process)
>
maybe that's what you are seeing? Again, I'm not
Hello Asad,
> I’m on environment with apache Cassandra 3.11.1 with java 1.8.0_144.
> One Node went OOM and crashed.
If I remember correctly, the first minor versions of C* 3.11 had memory leaks. It
seems this was fixed in your version though.
3.11.1
[...]
* BTree.Builder memory leak (CASSANDRA-1375
Hello Maxim.
I think you won't be able to do what you want this way. Collections are
supposed to be (ideally small) sets of data that you'll always read
entirely, at once. At least that is how it seems to work. Not sure about
the latest versions, but I did not hear about a new design for collecti
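To make that concrete, a minimal sketch (DataStax java-driver 4.x, with a purely
hypothetical table my_ks.users holding a set<text> column named tags) showing that
a collection column always comes back in its entirety:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;
import java.util.Set;

public class ReadWholeCollection {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // The whole set<text> value is materialized at once; you cannot page
            // through or slice a single collection the way you can a partition.
            Row row = session.execute(
                    "SELECT tags FROM my_ks.users WHERE user_id = 42").one();
            if (row != null) {
                Set<String> tags = row.getSet("tags", String.class);
                System.out.println("all tags read at once: " + tags);
            }
        }
    }
}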
Léo,
if a major compaction isn't a viable option, you can give the Instaclustr
SSTable tools a go to target the partitions with the most
tombstones:
https://github.com/instaclustr/cassandra-sstable-tools/tree/cassandra-2.2#ic-purge
It generates a report like this:
Summary:
+-+-
Hey!
I’ve just configured a test 3-node Cassandra cluster and run a very trivial Java
test against it.
I see the following warning from java-driver on each CqlSession initialization:
13:54:13.913 [loader-admin-0] WARN c.d.o.d.i.c.metadata.DefaultMetadata -
[loader] Unexpected error while refre
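For reference, a minimal sketch (assumptions: contact point 127.0.0.1:9042 and the
default datacenter name "datacenter1") of the kind of trivial test that triggers the
metadata refresh on session initialization:

import com.datastax.oss.driver.api.core.CqlSession;
import java.net.InetSocketAddress;

public class TrivialTest {
    public static void main(String[] args) {
        // Building the session performs the initial token map refresh,
        // which is where the warning shown above gets logged.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                .withLocalDatacenter("datacenter1")
                .build()) {
            System.out.println(session
                    .execute("select release_version from system.local")
                    .one().getString("release_version"));
        }
    }
}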
My bad on the date formatting, it should have been: %Y/%m/%d
Otherwise the SSTables aren't ordered properly.
You have 2 SSTables that claim to cover timestamps from 1940 to 2262, which
is weird.
Aside from that, you have big overlaps all over the SSTables, so that's
probably why your tombstones are s
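Coming back to the date format point above, a small standalone sketch (plain Java,
dates invented for illustration) of why a year-first pattern is needed if the
SSTable listing is to sort chronologically:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SortableDates {
    public static void main(String[] args) {
        DateTimeFormatter yearFirst = DateTimeFormatter.ofPattern("yyyy/MM/dd");
        DateTimeFormatter dayFirst = DateTimeFormatter.ofPattern("dd/MM/yyyy");
        List<LocalDate> dates = Arrays.asList(
                LocalDate.of(2019, 6, 5),
                LocalDate.of(2018, 12, 20),
                LocalDate.of(2019, 1, 15));
        // %Y/%m/%d-style strings sort lexically into chronological order:
        System.out.println(dates.stream().map(yearFirst::format).sorted()
                .collect(Collectors.toList())); // [2018/12/20, 2019/01/15, 2019/06/05]
        // A day-first format does not:
        System.out.println(dates.stream().map(dayFirst::format).sorted()
                .collect(Collectors.toList())); // [05/06/2019, 15/01/2019, 20/12/2018]
    }
}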