Also have a look at `nodetool netstats` to check if streaming is
progressing or is halted.
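For example (the interval is arbitrary; run it on the node doing the streaming):

    # Re-run netstats every 10 seconds and watch whether the byte counters
    # move; if they stay flat for a long time the stream is probably stuck.
    watch -n 10 nodetool netstats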
Cheers,
Jens
On Fri, Sep 16, 2016 at 3:18 AM Mark Rose wrote:
> I've done that several times. Kill the process, restart it, let it
> sync, decommission.
>
> You'll need enough space on the receiving nodes
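A rough sketch of that sequence (assuming a package install with a `cassandra` service; adjust the start/stop commands to your setup):

    # 1. Stop the node, then start it again
    sudo service cassandra stop
    sudo service cassandra start

    # 2. Wait until every node reports it as UN (Up/Normal)
    nodetool status

    # 3. Decommission: this streams the node's data to the remaining nodes,
    #    which need enough free disk to absorb it
    nodetool decommission

    # 4. Follow the streaming progress
    nodetool netstats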
I listened to a talk about the new Cassandra 3 file (sstable) format. One
takeaway was that the new format supports sparse data better. That is, if
you have 2000 columns but only set some of them, the disk usage will be
much lower.
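As a toy illustration of what "sparse" means here (keyspace and table names are made up):

    cqlsh -e "
      CREATE KEYSPACE IF NOT EXISTS demo
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
      CREATE TABLE IF NOT EXISTS demo.wide (
        id int PRIMARY KEY,
        c1 text, c2 text, c3 text   -- imagine ~2000 of these
      );
      INSERT INTO demo.wide (id, c1) VALUES (1, 'only one column set');"

    # Flush and look at the on-disk size of the table
    # (tablestats is the 3.x name; older versions call it cfstats)
    nodetool flush demo wide
    nodetool tablestats demo.wide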
Cheers,
Jens
On Thu, Sep 15, 2016 at 10:24 PM Dorian H
> Is a minute a reasonable upper bound for most clusters?
I have no numbers, and I'm sure this differs depending on how large your
cluster is. We have a small cluster of around 12 nodes, and statuses
generally propagate in under 5 seconds for sure. So it will definitely be
less than 1 minute.
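If you want a rough measurement on your own cluster (the address below is a placeholder), watch from another node how quickly a stopped or restarted node flips between UN and DN:

    # Run on a node other than the one you are stopping/starting
    watch -n 1 'nodetool status | grep 10.0.0.5'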
Cheers,
Thanks, Tyler, for identifying that this can be fixed now.
Here is the JIRA ticket: CASSANDRA-12654
If this is just removing the now obsolete check, then I hope it makes it
into the 3.10 release.
Regards,
Samba
On Fri, Sep 16, 2016 at 1:33 AM, Tyler Hobbs wrote:
> That ticket was just to improve t
Hello All,
What's the best way to do data purging in C* column families based on the
size of the table on disk, apart from TTL?
Thanks and regards,
-- IB
From: Samba
To: user@cassandra.apache.org
Sent: Friday, 16 September 2016 2:34 PM
Subject: Re: CASSANDRA-5376: CQL IN clause on la
Hi,
The only "safe way" to remove data from Cassandra is through tombstones
(TTL / Deletes). I am not sure about the problem you are trying to solve
here. Maybe could you let us know a bit more about what you are trying to
achieve?
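Concretely, those are the two options (keyspace, table, and column names below are made up):

    # Expire data automatically: per-write TTL, or a table-level default
    cqlsh -e "INSERT INTO myks.events (id, payload) VALUES (42, 'x') USING TTL 86400;"
    cqlsh -e "ALTER TABLE myks.events WITH default_time_to_live = 2592000;"

    # Or delete explicitly; either way the data becomes tombstones that
    # compaction can only drop after gc_grace_seconds has passed
    cqlsh -e "DELETE FROM myks.events WHERE id = 42;"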
If you are trying to expire old data using a stable % of the dis
Hi,
I have a 3-node cluster, each node with less than 200 GB of data. Currently
all nodes have the default num_tokens value of 256. My colleague told me that
with the data size I have (less than 200 GB on each node), I should change
num_tokens to something like 32 to get better performance, especially s
On Fri, Sep 16, 2016 at 11:29 AM, Li, Guangxing wrote:
> Hi,
>
> I have a 3-node cluster, each node with less than 200 GB of data. Currently
> all nodes have the default num_tokens value of 256. My colleague told me that
> with the data size I have (less than 200 GB on each node), I should change
> num
Hi,
We have a three-node cluster (N1, N2, N3) with RF=3, and the data in the
SSTables is as follows:
N1:
SSTable: Partition key K1 is marked with a tombstone at time T2
N2:
SSTable: Partition key K1 is marked with a tombstone at time T2
N3:
SSTable: Partition key K1 is valid and has data D1 with an older timestamp T1
(T
What would be the likely causes of large system hint partitions? Normally,
large partition warnings are for user-defined tables to which the application
is writing large partitions. In this case, it appears C* is writing large
partitions to the system.hints table. Gossip is not backed up.
version: C* 2.2.7
Hi Ezra,
Do you have a dead node in your cluster?
The coordinator stores a hint in its local system.hints table when a replica
node is down or does not respond to a write request.
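A couple of quick things to check (these are 2.2-era commands; adjust to your environment):

    # Any nodes reported as DN (Down/Normal)?
    nodetool status

    # Pending or blocked hinted-handoff work on the coordinator
    nodetool tpstats | grep -i hint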
--
Nicolas
On Sat, Sep 17, 2016 at 00:12, Ezra Stuetzel wrote:
> What would be the likely causes of
Hi Jaydeep,
Yes, dealing with tombstones in Cassandra is very tricky.
Cassandra keeps tombstones to mark deleted columns and distributes them (via
hinted handoff, full repair, read repair, ...) to the other nodes that missed
the initial remove request. But Cassandra can't afford to keep those
tombstones lif
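(For reference, the per-table gc_grace_seconds option controls how long tombstones are kept before compaction is allowed to drop them; the table name below is made up:)

    # Default is 864000 (10 days); keep it longer than your repair interval,
    # otherwise deleted data can come back from an unrepaired replica.
    cqlsh -e "ALTER TABLE myks.mytable WITH gc_grace_seconds = 864000;"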
Gossip propagation is generally best modelled by epidemic algorithms.
Luckily for us, Cassandra's gossip protocol is fairly simple.
Cassandra will perform one gossip task every second. Within each gossip
task it will randomly gossip with another available node in the cluster;
it will also possibly
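As a back-of-the-envelope illustration of why propagation is quick, here is a toy epidemic-style simulation (not Cassandra code; it ignores seed nodes and the details of what state is exchanged): every "second", each node that already knows about a change gossips it to one random peer.

    #!/usr/bin/env bash
    # Toy model: how many one-second gossip rounds until all N nodes have
    # heard about a state change that starts on node 0.
    N=12                                  # cluster size (example value)
    declare -a knows
    for ((i = 0; i < N; i++)); do knows[i]=0; done
    knows[0]=1
    round=0
    informed=1
    while (( informed < N )); do
      for ((i = 0; i < N; i++)); do
        if (( knows[i] )); then
          knows[RANDOM % N]=1             # gossip with one random node
        fi
      done
      informed=0
      for ((i = 0; i < N; i++)); do (( informed += knows[i] )); done
      round=$(( round + 1 ))
    done
    echo "All $N nodes informed after $round rounds (~$round seconds)"

The number of rounds grows roughly with log(N), which is why even much larger clusters tend to converge within a handful of seconds.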