X==5. I was meant to fill that in...
On 16 Dec. 2017 07:46, "kurt greaves" wrote:
Yep, if you don't run cleanup on all nodes (except the new node) after step X,
then when you decommission nodes 4 and 5 later on, their tokens will be
reclaimed by the previous owner. Suddenly the data in those SSTables is now
live again because the token ownership has changed, and any data in those
SSTable
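
As a rough sketch of that cleanup step (the keyspace and host names below are
made up, so adjust them to your cluster):

# Run on every node except the one that was just added.
for host in node1 node2 node3; do
    ssh "$host" nodetool cleanup my_keyspace
done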
This is just a basic question, but it's worth asking.
We changed the replication factor from 2 to 3 in our production cluster. We
have 2 data centers.
Is nodetool repair -dcpar from a single node in one data center sufficient for
the whole replication change to take effect? Please confirm.
Do I
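
For reference, a sketch of the commands in question (the keyspace, data center
names and replica counts are placeholders, not the actual production settings):

# Bump the replication factor in both data centers.
cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication =
  {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};"

# Full repair of the keyspace, repairing the data centers in parallel.
nodetool repair -dcpar my_keyspace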
Hi Max,
I don't know if it's related to your issue, but on a side note: if you
decide to use Reaper (and use full repairs, not incremental ones) but mix
that with "nodetool repair", you'll end up with 2 pools of SSTables that
cannot get compacted together.
Reaper uses subrange repair, which doesn't
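
If it helps to check, the two pools can be told apart with sstablemetadata
(the file name below is just an example):

# "Repaired at: 0" means unrepaired; a non-zero timestamp means the SSTable
# has been marked repaired (e.g. by incremental repair's anticompaction).
# Run from the table's data directory.
sstablemetadata mc-1234-big-Data.db | grep "Repaired at"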
The generation (the integer id in file names) doesn’t matter for ordering like
this.
It matters in schema tables for the addition of new columns/types, but it’s
irrelevant for normal tables - you could do a user defined compaction on 31384
right now and it’d be rewritten as-is (minus purgeable data) with
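
For reference, a sketch of how such a user defined compaction can be kicked off
(the data path is just an example; the --user-defined flag needs nodetool from
3.4 or later, and older versions expose the same thing through the
CompactionManager MBean's forceUserDefinedCompaction operation):

nodetool compact --user-defined /var/lib/cassandra/data/my_ks/my_table-*/mc-31384-big-Data.db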
I’m typing this on a phone during my commute, so please excuse the inevitable
typos in what I expect will be a long email, because there’s nothing else for
me to do right now.
There are a few reasons people don’t typically recommend huge nodes, the
biggest being expansion and replacement. This qu
Hi, Kurt.
Thank you for the response.
Repairs are marked as 'done' without errors in the Reaper history.
Example of 'wrong order':
* file mc-31384-big-Data.db contains tombstone:
{
"type" : "row",
"position" : 7782,
"clustering" : [ "9adab970-b46d-11e7-a5cd-a1ba8cfc1426"
Hello, Jeff.
Using your hint I was able to reproduce my situation on 5 VMs.
The simplified steps are:
1) set up a 3-node cluster
2) create a keyspace with RF=3 and a table with gc_grace_seconds=60,
tombstone_compaction_interval=10 and unchecked_tombstone_compaction=true (to
force compaction later; see the sketch after this list)
3) insert 10..2
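
A sketch of step 2 in CQL, piped through cqlsh (the keyspace, table and column
names are made up, and the compaction class is just an example):

cqlsh <<'CQL'
CREATE KEYSPACE test
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
CREATE TABLE test.t (
  pk int,
  ck uuid,
  val text,
  PRIMARY KEY (pk, ck)
) WITH gc_grace_seconds = 60
  AND compaction = {'class': 'SizeTieredCompactionStrategy',
                    'tombstone_compaction_interval': '10',
                    'unchecked_tombstone_compaction': 'true'};
CQL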
Thanks Nicolas. I am aware of the official recommendations. However, on the
last project we tried 5 TB per node and it worked fine, so I'm asking around
for real-world experiences.
Does anybody know of someone who provides consultancy on open source
Cassandra? DataStax only does it for the enterprise version!
On Fri, De
Hi Amit,
This is way too much data per node; the official recommendation is to try to
stay below 2 TB per node. I have seen nodes with up to 4 TB, but then
maintenance gets really complicated (backup, bootstrap, streaming for repair,
etc.).
Nicolas
On 15 December 2017 at 15:01, Amit Agrawal wrote:
> Hi,
Hi,
We are trying to set up a 3-node cluster with 20 TB of HDD on each node.
It's a bare-metal setup with 44 cores on each node.
So in total: a 3-node cluster with 60 TB and 66 cores.
The data velocity is very low, with low access rates.
Has anyone tried a configuration like this?
It's a bit urgent.
Regards,
-A
Yes, we try to rely on conditional batches when possible, but in this case
they could not be used:
we did some tests with conditional batches and they could not be applied when
several tables are involved in the batch, even if the tables use the same
partition key; we got the following error: "ba
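
A minimal sketch of the kind of batch that gets rejected (keyspace, table and
column names are made up; both tables share the partition key pk). Cassandra
refuses a conditional batch that spans more than one table:

cqlsh <<'CQL'
BEGIN BATCH
  UPDATE my_ks.t1 SET v = 1 WHERE pk = 42 IF v = 0;
  UPDATE my_ks.t2 SET w = 1 WHERE pk = 42;
APPLY BATCH;
CQL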
Thanks Jon.
On Fri, Dec 15, 2017 at 12:05 AM, Jon Haddad wrote:
> Heh, hit send accidentally.
>
> You generally can’t run rebuild to upgrade, because it’s a streaming
> operation. Streaming isn’t supported between versions, although on 3.x it
> might work.
>
>
> On Dec 14, 2017, at 11:01 AM, Jo
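
For context, the rebuild being discussed is the streaming operation invoked as,
for example (dc1 is a placeholder for the source data center name):

nodetool rebuild -- dc1

It streams the node's ranges from the named data center, which is why
mismatched versions are a problem.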