Hi Team,
What is the best way to patch the OS of a 1000-node, multi-DC Cassandra cluster
where we cannot suspend application traffic (we can redirect traffic to one
DC)?
Please suggest if anyone has best practices around this.
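For reference, the per-node sequence we have in mind is a rolling patch, one node
(or one rack) at a time per DC, roughly like the sketch below - a systemd-managed
package install is an assumption, so adjust the service name and patch command to
your environment:

    # On the node being patched:
    nodetool disablebinary            # stop serving CQL clients
    nodetool disablegossip            # mark the node down for the rest of the cluster
    nodetool drain                    # flush memtables, stop accepting writes
    sudo systemctl stop cassandra
    sudo yum update -y && sudo reboot # or apt-get upgrade, per your OS
    # After reboot:
    sudo systemctl start cassandra
    nodetool status                   # wait for UN before moving to the next node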
--
Cheers,
Anshu V
Team,
I want to validate and run a POC on production data. The data volume in
production is huge. What would be the optimal method to move the data from the
Prod to the Dev environment? I know there are a few solutions, but which is the
most efficient method to refresh the dev environment?
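The approach I am currently leaning towards is snapshot + sstableloader, roughly
as below (keyspace/table names and paths are placeholders, and the schema must
already exist on the dev cluster):

    # On each production node:
    nodetool snapshot -t dev_refresh my_keyspace

    # Copy the snapshot SSTables somewhere that can reach the dev cluster,
    # keeping the keyspace/table directory layout, then stream them in:
    sstableloader -d dev_node1,dev_node2 /staging/my_keyspace/my_table/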
--
Cheers,
Anshu V
, although on 3.x it
> might work.
>
> On Dec 14, 2017, at 11:01 AM, Jon Haddad wrote:
>
> no
>
> On Dec 14, 2017, at 10:59 AM, Anshu Vajpayee
> wrote:
>
> Thanks! I am aware of these steps.
>
> I am just wondering: is it possible to do the upgrade using nodetool
> rebuild, the way we rebuild a new DC?
>
> Ha
1
>
> Cheers,
> Hannu
>
> On 14 December 2017 at 12:33:49, Anshu Vajpayee (anshu.vajpa...@gmail.com)
> wrote:
>
> Hi -
>
> Is it possible to upgrade a cluster (DC-wise) using nodetool rebuild?
>
>
>
> --
> Cheers,
> Anshu V
>
--
Cheers,
Anshu V
Hi -
Is it possible to upgrade a cluster (DC-wise) using nodetool rebuild?
--
Cheers,
Anshu V
You will need to rebuild each node in the new DC with the nodetool rebuild
command. Since DC2 also has RF=3, it ends up holding a full three-replica copy of
the data, so roughly 60 TB will stream over, not 20 TB.
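For reference, the rebuild step that runs on every node of the new DC looks
roughly like this (DC names are examples; run it a few nodes at a time, after the
keyspaces have been altered to replicate to DC2):

    nodetool rebuild -- DC1    # stream this node's replica ranges from DC1
    nodetool netstats          # monitor streaming progress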
On Thu, Dec 14, 2017 at 11:35 AM, Peng Xiao <2535...@qq.com> wrote:
> Hi there,
>
> if we have a Cassandra DC1 with data size 60 TB, RF=3, and then we rebuild a new
> DC2 (RF=3), how much data will stream to DC2? 20 TB or
> here if interested: https://www.instaclustr.com/instaclustr-
> dynamic-resizing-for-apache-cassandra/
>
> Cheers
> Ben
>
> On Sat, 18 Nov 2017 at 05:02 Anshu Vajpayee
> wrote:
>
>> Cassandra supports elastic scalability - meaning on demand we can
>> increase
Cassandra supports elastic scalability - meaning we can increase or decrease
the number of nodes on demand, as per scaling demand from the application.
Let's say we have a 5-node cluster and each node has a data pressure of
about 3 TB.
Now, due to a sudden load spike, we want to add 1 node to the cluster as quickly
as possible.
Sure, I will update this thread.
On Sat, Nov 18, 2017 at 12:26 AM, Jonathan Haddad wrote:
> It should work with DSE, but we don’t explicitly test it.
>
> Mind testing it and posting your results? If you could include the DSE
> version it would be great.
> On Thu, Nov 16, 2017 at
Thanks Jon for your efforts and for nicely putting it on the website & YouTube.
Just a quick question - is it compatible with DSE versions? I know under
the hood they have Cassandra only, but I just wanted to hear your
thoughts.
On Thu, Nov 16, 2017 at 1:23 AM, Jon Haddad wrote:
> Apache 2 License
Thank you Jonathan and all.
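For the archives, a minimal sketch of how the replace_address_first_boot approach
mentioned below ends up being configured (the file name depends on packaging, and
the IP is a placeholder):

    # In cassandra-env.sh of the replacement node, before its very first start
    # with empty data directories:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.12"

Unlike plain replace_address, this flag is only honoured on the node's first boot,
so it does not have to be removed afterwards.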
On Tue, Nov 14, 2017 at 10:53 PM, Jonathan Haddad wrote:
> Anthony’s suggestion of using replace_address_first_boot lets you avoid that
> requirement, and it’s specifically why it was added in 2.2.
> On Tue, Nov 14, 2017 at 1:02 AM Anshu Vajpayee
> wrote:
ntents of the following directories:
>> - data/
>> - commitlog/
>> - saved_caches/
>>
>> Forget rejoining with repair -- it will just cause more problems. Cheers!
>>
>> On Mon, Nov 13, 2017 at 2:54 PM, Anshu Vajpayee wrote:
>>
>>> Hi
Hi All,
There was a node failure in one of our production clusters due to a disk failure.
After h/w recovery that node is now ready to be part of the cluster, but it
doesn't have any data due to the disk crash.
I can think of the following options:
1. Replace the node with itself, using replace_address
2. Set boo
in too? or we should invoke
>> >> drain and then stopdaemon?
>> >>
>> >> On Mon, Oct 16, 2017 at 4:54 AM, Simon Fontana Oscarsson
>> >> <simon.fontana.oscars...@ericsson.com> wrote:
>> >>
>
Why are you killing the process when we have nodetool stopdaemon?
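For the record, the graceful sequence I had in mind (on 3.x or DSE, where
stopdaemon is available):

    nodetool drain        # flush memtables and stop accepting writes
    nodetool stopdaemon   # then stop the daemon itself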
On Fri, Oct 13, 2017 at 1:49 AM, Javier Canillas
wrote:
> That's what I thought.
>
> Thanks!
>
> 2017-10-12 14:26 GMT-03:00 Hannu Kröger :
>
>> Hi,
>>
>> Drain should be enough. It stops accepting writes and after that
>> Cassandra can be stopped.
Hello -
I have a very generic question regarding compaction. How does Cassandra
internally generate the number of compaction tasks? How is it
affected by the compaction throughput setting?
If we increase the compaction throughput, will the number of compaction
tasks per second increase for the same ki
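For context, these are the knobs I have been looking at so far (the 64 MB/s value
is just an example):

    nodetool getcompactionthroughput      # current throttle in MB/s
    nodetool setcompactionthroughput 64   # change the throttle at runtime (0 = unthrottled)
    nodetool compactionstats              # pending and active compaction tasks
    # concurrent_compactors in cassandra.yaml caps how many compactions run in parallel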
Hi All -
Is it possible to restrict Cassandra to a limited number of
cpus/cores on a given box?
There is one JVM parameter for that, but not all thread pools respect
that setting:
-Dcassandra.available_processors
Is there any other way to achieve this?
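One OS-level option I am considering, in combination with the JVM flag, is pinning
the process to specific cores (the core counts and paths below are just examples):

    # Start Cassandra pinned to cores 0-7 (Linux):
    taskset -c 0-7 /usr/sbin/cassandra -f

    # Or in the systemd unit's [Service] section (range syntax needs a recent systemd):
    CPUAffinity=0-7

    # Plus the JVM flag so internal pools are sized to match:
    JVM_OPTS="$JVM_OPTS -Dcassandra.available_processors=8"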
The setup is not in the cloud. We have a few nodes in one DC (DC1) and the same
number of nodes in the other DC (DC2). We have a dedicated firewall in front of
the nodes.
Reads and writes happen with LOCAL_QUORUM, so those don't get affected, but
hints get accumulated from one DC to the other DC for replication. Hints are
also gett
Gossip shows all nodes are up.
But when we perform writes, the coordinator stores hints. It means the
coordinator was not able to deliver the writes to a few nodes, even after meeting
the consistency requirements.
The nodes for which writes were failing are in a different DC. Those nodes
do not have any l
Cassandra version 2.1.13
On Jan 4, 2017 12:34 AM, wrote:
> Version number may help.
>
>
>
> Sean Durity
>
>
>
> *From:* Anshu Vajpayee [mailto:anshu.vajpa...@gmail.com]
> *Sent:* Tuesday, January 03, 2017 10:09 AM
> *To:* user@cassandra.apache.org
> *Subjec
Please let me know if you have any questions. Thanks.
On Dec 29, 2016 10:06 AM, "Anshu Vajpayee" wrote:
> Hello All
> We have one unusual issue on our cluster. We are seeing growing hints
> table on node although all the nodes are up and coming online with
> notet
Hello All,
We have one unusual issue on our cluster. We are seeing a growing hints table
on a node, although all the nodes are up and show as online in nodetool
status.
I know Cassandra writes hints when there is a write timeout for
other nodes. In our case all nodes are up and functional.
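So far we are checking the backlog like this (a sketch, assuming 2.1.x where
hints live in the system.hints table):

    nodetool tpstats | grep -i hint                           # hinted handoff pool activity
    cqlsh -e "SELECT target_id FROM system.hints LIMIT 20;"   # which replicas the hints are destined for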
12 Sep 2016 9:50 p.m., "Jeff Jirsa" wrote:
> On 2016-09-08 18:53 (-0700), Anshu Vajpayee
> wrote:
> > Is there any way to get partition size for a partition key ?
> >
>
> Anshu,
>
> The simple answer to your question is that it is not currently possible to
Is there any way to get the partition size for a given partition key?
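The closest thing seems to be table-level partition size statistics, not per-key
(2.x command names; keyspace/table below are placeholders):

    nodetool cfhistograms my_keyspace my_table   # partition size percentiles for the table
    nodetool cfstats my_keyspace.my_table        # includes "Compacted partition maximum bytes"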
We have seen read timeout issues in Cassandra due to a high droppable
tombstone ratio on the repository.
Please check for a high droppable tombstone ratio on your repo.
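A quick way to check the ratio per SSTable is sstablemetadata (the path and file
pattern below are examples and vary by Cassandra version):

    # Prints "Estimated droppable tombstones" for each SSTable:
    sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db | grep -i droppable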
On Mon, Sep 5, 2016 at 8:11 PM, Romain Hardouin wrote:
> Yes dclocal_read_repair_chance will reduce the cross-DC traffic and
> laten
un 13, 2016 at 2:19 PM, Anshu Vajpayee
> wrote:
>
>> I just tested. It doesn't flush memtables like nodetool drain/flush
>> command. Means it only does crash for the node, no graceful shutdown.
>>
>>
>>
>> On Mon, Jun 13, 2016 at 10:51 PM, Jake Luciani
I just tested. It doesn't flush memtables the way the nodetool drain/flush
commands do. That means it just crashes the node; there is no graceful shutdown.
On Mon, Jun 13, 2016 at 10:51 PM, Jake Luciani wrote:
> Yeah same as drain. Just exits at the end.
>
> On Mon, Jun 13, 2016 at 1:11 PM,
> Because C* was designed to be
> crash-only.
>
> https://www.usenix.org/conference/hotos-ix/crash-only-software
>
> Since this is great for the project but bad for the operator experience, we
> later added this stopdaemon command.
>
> On Mon, Jun 13, 2016 at 12:37 PM, Ansh
Doan wrote:
> In Cassandra 3.x, I think there is a "nodetool stopdaemon" command
>
> On Mon, Jun 13, 2016 at 6:28 PM, Anshu Vajpayee
> wrote:
>
>> Hi All
>>
>> Why don't we have a native shutdown command in Cassandra?
>>
>> Every softw
Were all rows the same? If not, what was different?
What was the droppable tombstone compaction ratio for that table/CF?
On Mon, Jun 13, 2016 at 6:11 PM, Siddharth Verma <
verma.siddha...@snapdeal.com> wrote:
> Running nodetool compact fixed the issue.
>
> Could someone help out as why it occurred.
>
nodetool flush.
On Mon, Jun 13, 2016 at 10:00 PM, Jeff Jirsa
wrote:
> `nodetool drain`
>
>
>
>
>
> *From: *Anshu Vajpayee
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Monday, June 13, 2016 at 9:28 AM
> *To: *"user@cassandra.apache.org"
Hi All
Why don't we have a native shutdown command in Cassandra?
Every piece of software provides a graceful shutdown command.
Regards,
Anshu
Hi All,
I have a question regarding the maximum disk space per node.
As per DataStax, we can have 1 TB max disk space for rotational disks and
up to 5 TB for SSDs on a node.
Could you please suggest, from your experience, what would be the limit for space
on a single node without causing too much stress o