Hi, all
After adding a new node, all the data for its newly allocated tokens was streamed to it.
Since nodetool cleanup has not yet been performed on existing nodes, the total
size has increased.
All data has a short TTL. In this case, will the data remaining on the existing
nodes be deleted after the TTL expires?
Hi Eunsu,
Are you using DateTieredCompactionStrategy? It optimises the deletion of
expired data from disks.
If minor compactions are not solving the problem, I suggest running nodetool
compact.
Federico
On Thu, 5 Sep 2019 at 09:51, Eunsu Kim wrote:
>
> Hi, all
>
> After adding a new
Thank you for your response.
I’m using TimeWindowCompactionStrategy.
So if I don't run nodetool compact, will the remaining data not be deleted?
From: Federico Razzoli
Reply-To: "user@cassandra.apache.org"
Date: Thursday, 5 September 2019 at 6:19 PM
To: "user@cassandra.apache.org"
Subject: Re
Hi,
I advise not to run nodetool compact on a TWCS table.
If you do not want to run cleanup and are fine with the extra load on disk
for now, you can wait for data to expire naturally.
It will delete both the data that is still owned by the nodes and the data
that they don't own anymore, once the SSTables fully expire.
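For illustration, here is roughly what the two options look like, assuming a
hypothetical TWCS table ks.events with a short default TTL (the names are made up):

    # check the compaction strategy and default_time_to_live on the table
    cqlsh -e "DESCRIBE TABLE ks.events;"

    # or, if you want the space back sooner, reclaim the ranges the old nodes
    # no longer own (run on each pre-existing node; I/O heavy but safe)
    nodetool cleanup ks events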
On Thu, Sep 5, 2019 at 11:19 AM Federico Razzoli <
federico.razzoli@gmail.com> wrote:
>
> Are you using DateTieredCompactionStrategy? It optimises the deletion of
> expired data from disks.
> If minor compactions are not solving the problem, I suggest to run
> nodetool compact.
>
Sorry, but b
His data has a TTL, so he can just wait if he does not want to do cleanup.
On Thu, 5 Sep 2019 at 17:48, Oleksandr Shulgin wrote:
> On Thu, Sep 5, 2019 at 11:19 AM Federico Razzoli <
> federico.razzoli@gmail.com> wrote:
>
>>
>> Are you using DateTieredCompactionStrategy? It optimises the deletion of
>> expired data from d
Hi all,
We are pleased to announce the release of a new backup tool for
Cassandra called Cassy.
https://github.com/scalar-labs/cassy/
It is licensed under the Apache 2.0 License, so please give it a try.
Best regards,
Hiroyuki Yamada
--
Hi,
sorry to bring up such a question, but I want to ask what the best JVM
options for a Cassandra node are. In the solution I'm implementing, Cassandra
serves as read-only storage (populated at the beginning, of course); the
records do not change over time. Currently each Cassandra node's VM has
this c
Every use case is unique, and the JVM config should follow it. 8G may or may not
be sufficient depending on the live data you keep in, or fetch into, memory. You can
opt for G1GC, which is easy to start with.
Some good suggestions have been made, whether you want to try G1GC or stick
with CMS. Take a look at [
Lots of variables:
- how many reads per second per machine?
- how much data per machine?
- are the reads random or is there a hot working set?
Some of the suggestions online are old.
CASSANDRA-8150 has some old’ish suggestions if you’re running the CMS collector.
Anyone running a > 16G heap should consider using G1.
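To make that concrete, a rough illustrative G1 starting point for conf/jvm.options
on a 3.x node might look like the following (the heap size and pause target are
assumptions to measure against, not a recommendation):

    # conf/jvm.options -- illustrative values only, measure and tune for your workload
    -Xms8G
    -Xmx8G
    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=300
    -XX:G1RSetUpdatingPauseTimePercent=5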
ApacheCon NA 2019 is next week in Las Vegas. There’s a Cassandra track, with 3
days of just-about-cassandra talks. If you haven’t signed up, it’s not too late
(but travel / hotels get harder as time gets short):
Register here: https://www.apachecon.com/acna19/register.html
Schedule here: https:
thank you very much!
On Thu, 5 Sep 2019 at 23:16, Jeff Jirsa wrote:
>
> ApacheCon NA 2019 is next week in Las Vegas. There’s a Cassandra track,
> with 3 days of just-about-cassandra talks. If you haven’t signed up, it’s
> not too late (but travel / hotels get harder as time gets short):
>
> Register here: ht
Hello,
Is it wise and advisable to build a multi-cloud environment for Cassandra for
high availability?
AWS as one datacenter and Azure as another datacenter.
If yes, are there any challenges involved?
Thanks and regards,
Goutham.
You can build Cassandra in a multi-cloud environment, but the networks must be
able to connect with each other. ☺
On Fri, 6 Sep 2019 at 00:36, Goutham reddy wrote:
> Hello,
> Is it wise and advisable to build multi cloud environment for Cassandra
> for High Availability.
> AWS as one datacenter and Azure as another
Technically, not a problem. Use GossipingPropertyFileSnitch to keep things
simple and you can go across whatever cloud providers you want without
issue.
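As a sketch, with that snitch each node simply declares its own datacenter and
rack in conf/cassandra-rackdc.properties (the DC and rack names below are made up):

    # conf/cassandra.yaml on every node
    endpoint_snitch: GossipingPropertyFileSnitch

    # conf/cassandra-rackdc.properties on the AWS nodes
    dc=aws-us-east
    rack=rack1

    # conf/cassandra-rackdc.properties on the Azure nodes
    dc=azure-west-europe
    rack=rack1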
The biggest issue you're going to have isn't going to be Cassandra, it's
having the expertise in the different cloud providers to understand the
Thanks Jon, that explained everything.
On Thu, Sep 5, 2019 at 10:00 AM Jon Haddad wrote:
> Technically, not a problem. Use GossipingPropertyFileSnitch to keep
> things simple and you can go across whatever cloud providers you want
> without issue.
>
> The biggest issue you're going to have is
Hello all,
I've given some thought to this multi-cloud marketing buzz with Cassandra.
Theoretically feasible (with GossipingPropertyFileSnitch), but practically a
headache if you want a minimum of performance and security.
The problem comes from the network "devils in the details".
Suppose DC1 in AWS i
Hello,
I am trying to enable debug logs on a Cassandra cluster running 3.11.3.
I do see some debug logs, but I don't see the same level of DEBUG
logs that I used to see with version 2.1.16 of Cassandra.
I am using the below command to set debug logging:
$ nodetool setlogginglevel org.apache.cas
Hello Jai,
If you want to fetch queries from the logs, kindly run the below command:
nodetool setlogginglevel org.apache.cassandra.transport TRACE
This will help you see all your queries in the logs.
Just beware that this fills up the logs really fast.
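For completeness, a small sketch of toggling it at runtime; INFO is assumed to be
the level you want back afterwards:

    # turn per-query TRACE logging on
    nodetool setlogginglevel org.apache.cassandra.transport TRACE
    # ...and turn it back down once you have what you need
    nodetool setlogginglevel org.apache.cassandra.transport INFO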
Thanks,
Allen
On Fri, 6 Sep, 2019, 10:57 AM Jai