Dear Community,
I took a snapshot from a node that was part of a 2-node cluster. There were 2
keyspaces in that cluster, K1 and K2, and I took a snapshot of K1 only. I then
created both keyspaces in another cluster that has only one node. When I tried
to restore the snapshot (of keyspace K1) in that cluster
If this is data that expires after a certain amount of time, you probably
want to look into using TWCS and TTLs to minimize the number of tombstones.
Decreasing gc_grace_seconds and then compacting will reduce the number of
tombstones, but at the cost of potentially resurrecting deleted data if the
tombstones are removed before the deletions have been repaired to all replicas.
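The TWCS-plus-TTL approach above can be sketched as table options. This is an illustrative fragment, not from the thread: the table name, columns, window settings, and TTL value are all assumptions. With a table-level default TTL under TWCS, whole SSTables expire together and can be dropped as a unit rather than generating per-cell tombstones.

```sql
-- Hypothetical time-series table using TWCS with a table-level TTL.
CREATE TABLE events (
    sensor_id text,
    ts timestamp,
    value double,
    PRIMARY KEY (sensor_id, ts)
) WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'DAYS',
      'compaction_window_size': 1
  }
  AND default_time_to_live = 604800;  -- 7 days, in seconds
```

The window size is usually chosen so that the full TTL period spans a few dozen windows; one-day windows with a 7-day TTL is just an example.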
JBOD before roughly 3.6 mixed data between disks in such a way that if one
disk failed, you needed to treat all of them as failed and replace the host.
--
Jeff Jirsa
> On Jun 12, 2018, at 1:53 AM, Kyrylo Lebediev wrote:
>
> Also, it's worth noting that usage of JBOD isn't recommended for older
> Cassandra versions, as there are known issues with data imbalance on JBOD.
Thank you all.
I don't know if my case is the JBOD situation mentioned above. My cluster is
on Aliyun Cloud and the Cassandra version is 2.2.8. Data imbalance is not a
problem for me as long as, whenever a memtable is flushed to an SSTable,
Cassandra can choose a disk with sufficient free space.
Thanks,
Hi,
I need to save a distinct value for each key in each hour; the problem with
saving everything and computing distincts in memory is that there is too much
repeated data.
Table schema:
CREATE TABLE "distinct" (      -- quoted: DISTINCT is a reserved word in CQL
    hourNumber int,
    key text,
    distinctValue bigint,      -- CQL has no "long" type; use bigint
    PRIMARY KEY (hourNumber)
);
I want t
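If the goal is one distinct value per (hour, key) pair, then key likely belongs in the primary key as well; with hourNumber alone as the primary key, each new key would overwrite the previous row for that hour. A hedged sketch assuming that intent (the table name distinct_per_hour and the sample values are hypothetical):

```sql
-- Sketch assuming one distinct value per key per hour.
-- Making key a clustering column keeps rows for the same hour together
-- while allowing one row per key; re-inserting the same (hourNumber, key)
-- pair is an idempotent upsert, which deduplicates repeated writes.
CREATE TABLE distinct_per_hour (
    hourNumber int,
    key text,
    distinctValue bigint,
    PRIMARY KEY (hourNumber, key)
);

-- Repeated inserts for the same hour and key simply overwrite:
INSERT INTO distinct_per_hour (hourNumber, key, distinctValue)
VALUES (42, 'user-1', 7);
```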
Also, it's worth noting that usage of JBOD isn't recommended for older
Cassandra versions, as there are known issues with data imbalance on JBOD.
IIRC, the JBOD data imbalance was fixed in some 3.x version (3.2?).
For older versions, creating one large filesystem on top of an md or LVM
device seems to be a better option.