if it is around)
--
Lapo Luchini
l...@lapo.it
Hi,
I'm not part of the team, I reply as a fellow user.
Columns which are part of the PRIMARY KEY are always indexed and used to
optimize the query, but it also depends on how the partition key is defined.
Details here in the docs:
https://cassandra.apache.org/doc/latest/cassandra/cql/ddl.
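As a quick illustration (keyspace and table names are invented for the example):
only queries that restrict the whole partition key hit a single partition, e.g.

  # Hypothetical table: sensor_id is the partition key, ts a clustering column.
  cqlsh -e "
    CREATE TABLE demo.readings (
      sensor_id uuid,
      ts        timeuuid,
      value     double,
      PRIMARY KEY ((sensor_id), ts)
    );"

  # Efficient: the partition key is fully restricted, so only one partition is read.
  cqlsh -e "SELECT * FROM demo.readings
            WHERE sensor_id = 123e4567-e89b-12d3-a456-426614174000;"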
clean-up on the existing nodes).
What will happen when adding new nodes, as you say, though?
If I have a 1TB sstable with 250GB of data that will no longer be useful
(as a new node will be the new owner), will that sstable be reduced to
750GB by "cleanup", or will it retain the old data?
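(What I'd run on each pre-existing node once the new node has joined is simply
this, keyspace name invented for the example:)

  # Rewrites the sstables, keeping only the token ranges this node still owns.
  nodetool cleanup my_keyspace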
T
is strategy still be the best?
PS: Googling around, a strategy called "incremental compaction" (ICS)
keeps coming up in the results, but that's only available in ScyllaDB, right?
--
Lapo Luchini
l...@lapo.it
On 2022-12-06 14:21, Gábor Auth wrote:
No! Just start it and the other nodes in the cluster will acknowledge
the new IP; they recognize the node by its ID, which is stored in the node's
data folder.
Thanks Gábor and Erick!
It worked flawlessly.
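(For anyone finding this later: before and after the restart I just checked that
the node kept its host ID, roughly like this:)

  # The host ID, not the IP, is what the other nodes use to recognize a node.
  nodetool info | grep '^ID'
  # Afterwards the node should show up as UN again, now under the new address.
  nodetool status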
--
Lapo Luchini
l...@lapo.it
I delete all the DB on disk and bootstrap from scratch?
thanks,
--
Lapo Luchini
l...@lapo.it
nodes are replaced with nodes either IPv6-only or dual-stack but
broadcasting IPv6 instead, the NAT64 can be removed.
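For example, which addresses a dual-stack node announces is governed by a handful
of cassandra.yaml settings; a quick way to check them (default package path assumed):

  grep -E '^(listen_address|broadcast_address|rpc_address|broadcast_rpc_address):' \
      /etc/cassandra/cassandra.yaml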
On 09/11/2022 17:27, Lapo Luchini wrote:
I have a (3.11) cluster running on IPv4 addresses on a set of
dual-stack servers; I'd like to add a new IPv6-only server to the
c
ossip does not expect that)
thanks in advance for any suggestion,
--
Lapo Luchini
l...@lapo.it
aCenter, and another single-node
DataCenter (to act as disaster recovery)?
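(What I have in mind, with keyspace and DC names invented for the example, is a
replication layout along these lines:)

  # 3 replicas in the main DC, plus 1 full copy in the single-node DR DC.
  cqlsh -e "
    ALTER KEYSPACE my_keyspace
    WITH replication = {
      'class': 'NetworkTopologyStrategy',
      'dc_main': 3,
      'dc_dr': 1
    };"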
Thanks in advance for any suggestion or comment!
--
Lapo Luchini
l...@lapo.it
Thanks for all your suggestions!
I'm looking into it and so far it seems to be mainly a problem of disk
I/O, as the host is running on spindle disks and, being the DR of an entire
cluster, has a lot of changes to keep up with.
First (easy) try will be to add an SSD as ZFS cache (ZIL + L2ARC).
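(Roughly this, with pool and device names invented for the example:)

  # A small partition as SLOG (the ZIL) and a larger one as L2ARC read cache.
  zpool add tank log   /dev/ada2p1
  zpool add tank cache /dev/ada2p2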
Should m
lidation?
Is the DB in a "strange" state?
Would it be useful to issue a "rebuild" on that node, in order to send
all the missing data anyway, thus skipping the lengthy validations?
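(i.e. something along these lines on that node, with the source DC name invented
for the example:)

  # Re-stream the node's ranges from the given source DC,
  # without the per-range validation that repair performs.
  nodetool rebuild -- dc_main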
thanks!
--
Lapo Luchini
l...@lapo.it
looking into a migration as I'm already using Cassandra Reaper and
on the biggest keyspace it is often taking more than 7 days to complete (I
set the segment timeout up to 2h, and most of the 138 segments take more
than 1h, sometimes even hitting the 2h limit due to, AFAICT, lengthy
compactions).
--
Hi, thanks for suggestions!
I'll definitely migrate to 4.0 after all this is done, then.
I fear the old prod DC can't afford to lose a node right now (a few nodes
have their disks 70% full), but I can maybe find a third node for the new
DC right away.
BTW the new nodes have 3× the disk space, but
se to do it during
the mentioned "num_tokens migration" (at step 1, or 5) or does it make
more sense to do it at step 8, as an in-place rolling upgrade of each of
the 6 (or 8) nodes?
Did I understand it correctly?
Can it be done "better"
Thanks for the explanation, Kane!
In case anyone is curious, I decommissioned node7 and things re-balanced
themselves automatically: https://i.imgur.com/EOxzJu9.png
(node8 received 422 GiB, while the others received 82-153 GiB each,
as reported by "nodetool netstats -H")
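(The operation itself was just this on node7, watching the streams from one of
the other nodes:)

  # On node7: hand its token ranges and data over to the remaining nodes.
  nodetool decommission
  # On any other node: follow the incoming streams while it runs.
  nodetool netstats -H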
Lapo
On 2021-03-03 23:59
Hi! The nodes are all in different racks… except for node7 and node8!
That is one more thing that makes them similar (which I didn't notice
at first), besides the timeline of when they were added to the cluster.
About the token ring calculation… I'll retry that in NodeJS instead of
awk as a double chec
in the graph above) have millions of records and have a timeuuid
inside the partition key, so they should distribute perfectly well among
all tokens.
Is there any other reason for the load imbalance that I didn't think of?
Is there a way to force things back to normal?
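(For reference, this is how I'm measuring the imbalance, keyspace name invented
for the example; "Owns (effective)" is only meaningful when a keyspace is given:)

  nodetool status my_keyspace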
--
Lapo Luchini
l...@lapo.it
ed data in tables already
3. in an upcoming ZFS version I will be able to use Zstd compression
(probably before Cassandra 4.0 is gold)
4. (I can inspect compression directly at the filesystem level)
But on the other hand application-level compression could have its
advantages.
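(If I do go the filesystem-compression route, turning off the table-level one
would look roughly like this, keyspace and table names invented for the example;
existing sstables still need a rewrite afterwards:)

  # Rely on ZFS compression instead of Cassandra's own sstable compression...
  cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compression = {'enabled': 'false'};"
  # ...or, once on 4.0, switch the table itself to Zstd:
  cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compression = {'class': 'ZstdCompressor'};"
  # Rewrite existing sstables so the change takes effect on old data too.
  nodetool upgradesstables -a my_keyspace my_table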
cheers,
--
Lapo Luchini
l...@lapo.it
soon (and that 4.0 goes out
of beta soon too) so that I'll be able to upgrade that cluster.
--
Lapo Luchini - http://lapo.it/
;$USER" -pw "$PASS" repair -pr | \
egrep -B1 '(Starting repair|Repair command)'
done
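(For completeness, the whole loop is along these lines; the host list and the
credential variables are placeholders:)

  # Sequentially run a primary-range repair on every node, keeping only the
  # start/end lines of each repair in the output.
  for host in $HOSTS; do
    nodetool -h "$host" -u "$USER" -pw "$PASS" repair -pr | \
      egrep -B1 '(Starting repair|Repair command)'
  done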
--
Lapo Luchini - http://lapo.it/