Dear All,
We noticed that when bootstrapping a new node, the source nodes are also quite
busy doing compactions, which impacts the response time (RT) severely. Is it
reasonable to disable compaction on all the source nodes?
Thanks,
Peng Xiao
Dear All,
when one node fails with hardware errors, it will be in DN status in the
cluster. Then if we are not able to handle this error within three hours (the
max hint window), we will lose data, right? We have to run repair to keep
consistency.
And as per
https://blog.alteroot.org/articles/2014-03
Hi Peng,
Depending on the hardware failure, you can do one of two things:
1. If the disks are intact and uncorrupted, you could just use the disks
with the current data on them in the new node. Even if the IP address
changes for the new node, that is fine. In that case all you need to do is
run repa
Hello,
> Is it reasonable to disable compaction on all the source nodes?
I would say no, as a short answer.
You can; I did it for some operations in the past. Technically there is no
problem, you can do that. It will most likely improve the response time of the
queries immediately, as it seems that in you
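For reference, a minimal sketch of pausing and resuming compaction on a source
node, assuming nodetool is available on each host (with no keyspace argument
the commands apply to all keyspaces):

    # Pause automatic compaction on a source node before the bootstrap starts
    nodetool disableautocompaction

    # ... let the new node bootstrap ...

    # Resume automatic compaction once the bootstrap completes
    nodetool enableautocompaction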
Hi All,
Is there a way to protect C* on the server side from tracing commands that
are executed from clients?
Thanks!
Execute ‘nodetool settraceprobability 0’ on all nodes. It sets the tracing
probability to zero percent.
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
On Mar 22, 2018, 11:10 AM -0500, shalom sagges, wrote:
> Hi All,
>
> Is there a way to protect C* on the server side from tracing commands that
>
Hi,
Wanted to know the community’s experiences and feedback on using Apache
Spark to delete data from a C* transactional cluster.
We have Spark installed in our analytical C* cluster, and so far we have been
using Spark only for analytics purposes.
However, now with advanced features of Spark 2.
We have a cluster that is subject to the one-year gossip bug.
We'd like to update the seed node list via JMX without a restart, since our
foolishly chosen single seed node in this forsaken cluster is being
auto-culled in AWS.
Is this possible? It is not marked volatile in the Config of the source
code, so
Hi,
Can we listen to Cassandra on IPv4 and IPv6 at the same time? When I refer
to some documents on the internet, they say I can only bind to one address
at a time.
In our application, we are talking to Cassandra by FQDN, and the application
gets either an IPv4 or an IPv6 address when connecting to Cassandra.
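For reference, the relevant cassandra.yaml settings each bind a single address
or pick a single stack per interface; a sketch with placeholder addresses:

    # cassandra.yaml
    listen_address: 10.0.0.5      # inter-node traffic, one address only
    rpc_address: 10.0.0.5         # client traffic, one address only
    # or select by interface and choose which stack to prefer:
    # listen_interface: eth0
    # listen_interface_prefer_ipv6: false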
This capability was *just* added in CASSANDRA-14190 and only in trunk.
Previously (as described in the ticket above), the seed node list was only
updated when doing a shadow round, removing an endpoint, or restarting (look
for callers of o.a.c.gms.Gossiper#buildSeedsList() if you're curious).
A rol
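For what it's worth, on trunk the CASSANDRA-14190 change is also exposed
through nodetool; a sketch, assuming a build that includes the patch:

    # 1. Update the seed list in cassandra.yaml on the node
    # 2. Ask the running node to re-read it, no restart needed:
    nodetool reloadseeds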
Thanks a lot Rahul! :-)
On Thu, Mar 22, 2018 at 8:03 PM, Rahul Singh
wrote:
> Execute ‘nodetool settraceprobability 0’ on all nodes. It sets the tracing
> probability to zero percent.
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
> On Mar 22, 2018, 11:10 AM -0500, shalom sagges,
> w
Thanks. The rolling restart triggers the gossip bug, so that's a no-go.
We're going to migrate off the cluster. Thanks!
On Thu, Mar 22, 2018 at 5:04 PM, Nate McCall wrote:
> This capability was *just* added in CASSANDRA-14190 and only in trunk.
>
> Previously (as described in the ticket above)
What's the "one-year gossip bug" in this context?
On Thu, Mar 22, 2018 at 3:26 PM, Carl Mueller
wrote:
> Thanks. The rolling restart triggers the gossip bug, so that's a no-go.
> We're going to migrate off the cluster. Thanks!
>
>
>
> On Thu, Mar 22, 2018 at 5:04 PM, Nate McCall
> wrote:
>
>>
Short answer: it works. You can even run “delete” statements from within Spark
once you know which keys to delete. Not elegant, but it works.
It will create a bunch of tombstones, and you may need to spread your deletes
over days. Another thing to consider is, instead of deleting, setting a TTL whi
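A minimal sketch of that delete pattern with the spark-cassandra-connector, in
Scala from the Spark shell (where sc is the SparkContext); the keyspace,
table, and key column (ks.events, id) are placeholders:

    import com.datastax.spark.connector._
    import com.datastax.spark.connector.cql.CassandraConnector

    // Compute the keys to delete with Spark (placeholder logic: read all ids).
    val keys = sc.cassandraTable("ks", "events").select("id")
      .map(_.getLong("id"))

    // Issue CQL DELETEs from the executors, reusing the connector's sessions.
    val connector = CassandraConnector(sc.getConf)
    keys.foreachPartition { part =>
      connector.withSessionDo { session =>
        val stmt = session.prepare("DELETE FROM ks.events WHERE id = ?")
        part.foreach(id => session.execute(stmt.bind(Long.box(id))))
      }
    }

As noted above, this still writes a tombstone per key, so pacing the deletes
(or using TTLs up front) matters just as much with Spark as without it.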
Thanks, Alain. We are using C* 2.1.18 (7 cores / 30 GB / 1.5 TB SSD). As the
cluster is growing too fast, bootstrap/rebuild/remove-node operations are
painful for us.
Thanks,
Peng Xiao
------ Original Message ------
From: "Alain RODRIGUEZ";
Date: 2018-03-22 (Thu) 7:31
To: "user cassandr
Sorry Alain, maybe there is some misunderstanding here. I meant disabling
compaction during the bootstrapping process, then enabling it again after the
bootstrap finishes.
------ Original Message ------
From: ""<2535...@qq.com>;
Date: 2018-03-23 (Fri) 10:54
To: "user";
Subject:
Yeah, I also had to grab a new version of the cassandra-driver, which was fixed
in 2.1.16 (https://issues.apache.org/jira/browse/CASSANDRA-11850); otherwise
cqlsh would not work with Python 2.7.12.
I’m surprised dh-python is not a requirement on the Cassandra package in your
debian/control 😮
I als
Hi Anthony,
there is a problem with replacing a dead node as per the blog: if the
replacement process takes longer than max_hint_window_in_ms, we must run
repair to make the replaced node consistent again, since it missed ongoing
writes during bootstrapping. But for a large cluster, repair is a pain
Subrange repair of only the neighbors is sufficient.
Break the range covering the dead node into ~100 splits and repair those splits
individually in sequence. You don’t have to repair the whole range all at once.
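A sketch of one such split, with placeholder tokens and keyspace (repeat with
consecutive token ranges until the full range is covered):

    # one split; substitute each split's token bounds in turn
    nodetool repair -st <start_token> -et <end_token> my_keyspace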
--
Jeff Jirsa
> On Mar 22, 2018, at 8:08 PM, Peng Xiao <2535...@qq.com> wrote:
>
`nodetool settraceprobability` controls the *automated* tracing within a
single node based on the value set. It may be some or none, but it doesn't
affect queries which are explicitly marked for tracing by the driver within
your application. You can test this by running cqlsh and enabling TRACING
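For example (placeholder table name), even with the probability set to 0 on
every node, a client-requested trace still runs:

    $ nodetool settraceprobability 0        # on all nodes
    cqlsh> TRACING ON;
    cqlsh> SELECT * FROM ks.events LIMIT 1; -- trace output still appears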
Under normal circumstances this is not true.
Take a look at org.apache.cassandra.service.StorageProxy#performWrite, it
grabs both the natural endpoints and the pending endpoints (new nodes).
They're eventually passed through to
org.apache.cassandra.locator.AbstractReplicationStrategy#getWriteResp
I have been doing some work on a cluster that is impacted by
https://issues.apache.org/jira/browse/CASSANDRA-11363. Reading through the
ticket prompted me to take a closer look at
org.apache.cassandra.concurrent.SEPExecutor. I am looking at the 3.0.14
code. I am a little confused about the Blocked
Ah sorry - I misread the original post - for some reason I had it in my
head the question was about bootstrap.
Carry on.
On Thu, Mar 22, 2018 at 8:35 PM Jonathan Haddad wrote:
> Under normal circumstances this is not true.
>
> Take a look at org.apache.cassandra.service.StorageProxy#performWrit
>
> Is there a way to protect C* on the server side from tracing commands that
> are executed from clients?
>
If you really needed a way to completely disable any and all possibility of
tracing, you could start each C* node with tracing switched to a noop
implementation.
E.g., add to jvm.options
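Something along these lines, assuming the cassandra.custom_tracing_class hook;
com.example.NoOpTracing is a hypothetical no-op Tracing subclass you would
have to provide on the classpath:

    # jvm.options (class name is hypothetical; supply your own implementation)
    -Dcassandra.custom_tracing_class=com.example.NoOpTracing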
dh_python is a build dependency. The runtime dependency is python.
https://github.com/apache/cassandra/blob/cassandra-2.1/debian/control#L6
https://github.com/apache/cassandra/blob/cassandra-2.1/debian/control#L14
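Roughly the shape of the two stanzas (illustrative only; see the links above
for the actual file):

    Source: cassandra
    Build-Depends: ..., dh-python, ...

    Package: cassandra
    Depends: ..., python (>= 2.7), ...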
Just upgrading to the latest 2.1.x should fix all these issues you're
having. :)
Mic
Looked at your error again. The cassandra-stress invocation appears as if
thrift is disabled, and thrift is the default stress mode. Try
`cassandra-stress write -mode native cql3 ...` for native CQL stress runs.
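A fuller invocation, as a sketch with a placeholder node address and count:

    cassandra-stress write n=100000 -mode native cql3 -node 10.0.0.1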
Michael
On 03/22/2018 11:36 PM, Michael Shuler wrote:
> dh_python is a build dependency. T
Hi Peng,
Correct, you would want to repair in either case.
Regards,
Anthony
On Fri, 23 Mar 2018 at 14:09, Peng Xiao <2535...@qq.com> wrote:
> Hi Anthony,
>
> there is a problem with replacing a dead node as per the blog: if the
> replacement process takes longer than max_hint_window_in_ms, we must