Hi All,
I accidentally created a test table on the system_traces keyspace.
When I tried to drop the table with the Cassandra user, I got the following
error:
*Unauthorized: Error from server: code=2100 [Unauthorized] message="Cannot
DROP "*
Is there a way to drop this table permanently?
Thanks!
> Anant Corporation
>
> On Mar 19, 2018, 9:08 AM -0500, Chris Lohfink , wrote:
>
> No.
>
> Why do you want to? If you don't use tracing they will be empty, and if
> you were able to drop them you would no longer be able to use tracing when
> debugging.
>
> Chris
>
ee.
>
> With https://issues.apache.org/jira/browse/CASSANDRA-13813 you won't be
> able to drop the table, but it would be worth a ticket to prevent creation in
> those keyspaces, or to allow some sort of override if creation is allowed.
>
> Chris
>
>
> On Mar 19, 2018, at 9:15 AM, shalom sa
themselves, but it doesn't hurt anything to have the
> table there. Just ignore it and its existence will not cause any issues.
>
> Chris
>
>
> On Mar 19, 2018, at 10:27 AM, shalom sagges
> wrote:
>
> That's weird... I'm using 3.0.12, so I should've still
If the problem is recurring, then you might have a corrupted SSTable.
Check the system log. If a certain file is corrupted, you'll find it.
grep -i corrupt /system.log*
On Wed, Mar 21, 2018 at 2:18 PM, Jerome Basa wrote:
> hi,
>
> when i run `nodetool compactionstats` there’s this one compacti
Hi All,
Is there a way to protect C* on the server side from tracing commands that
are executed from clients?
Thanks!
Thanks a lot Rahul! :-)
On Thu, Mar 22, 2018 at 8:03 PM, Rahul Singh
wrote:
> Execute ‘nodetool settraceprobability 0’ on all nodes. It sets the trace
> probability to zero, so nothing gets traced.
>
> --
> Rahul Singh
> rahul.si...@anant.us
>
> Anant Corporation
>
> On Mar 22, 2018,
Thanks Guys!
This really helps!
On Fri, Mar 23, 2018 at 7:10 AM, Mick Semb Wever
wrote:
> Is there a way to protect C* on the server side from tracing commands that
>> are executed from clients?
>>
>
>
> If you really needed a way to completely disable all and any possibility
> of tracing you
Hi All,
I ran nodetool cfstats (v2.0.14) on a keyspace and found that there are a
few large partitions. I assume that since "Compacted partition maximum
bytes" is 802187438 (~800 MB) and "Compacted partition mean bytes" is
100465 (~100 KB), most partitions are of a reasonable size and only
Hi All,
A certain application is writing ~55,000 characters for a single row. Most
of these characters are entered into one column with the "text" data type.
This looks insanely large for one row.
Would you suggest changing the data type from "text" to BLOB, or is there
any other option that might fit this scenario?
at 3:28 PM, DuyHai Doan wrote:
> Compress it and store it as a blob.
> Unless you ever need to index it, but I guess even with SASI indexing such
> a huge text block is not a good idea
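A minimal sketch of the table side, assuming a hypothetical keyspace and
table name; the compression itself (gzip, LZ4, etc.) would be done by the
application before binding the bytes:
create table if not exists my_ks.documents (
doc_id uuid PRIMARY KEY,
body_gz blob, //client-side compressed text
body_len int //optional: original length, handy for sanity checks
);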
>
> On Wed, Apr 4, 2018 at 2:25 PM, shalom sagges
> wrote:
>
>> Hi All,
>>
Hi All,
I have a 44 node cluster (22 nodes on each DC).
Each node has 24 cores and 130 GB RAM, 3 TB HDDs.
Version 2.0.14 (soon to be upgraded)
~10K writes per second per node.
Heap size: 8 GB max, 2.4 GB newgen
I deployed Reaper and GC started to increase rapidly. I'm not sure if it's
because the
Thanks a lot Hitesh!
I'll try to re-tune the heap to a lower level
It's advisable to set the RF to 3 regardless of the consistency level.
If you use RF=1 with read CL=LOCAL_ONE and a node goes down in the local DC,
you will not be able to read the data owned by that node until it comes back
up. For writes with CL=LOCAL_ONE, the write will fail (if it falls on the token
ra
1. How do you shard the partition key so that partitions end up on
different nodes?
You could, for example, create a table with a bucket column added to the
partition key (a hypothetical usage sketch follows the definition):
create table distinct_test (
hourNumber int,
bucket int, //could be a 5 minute bucket for example
key text,
distinctValue bigint,
PRIMARY KEY ((hourNumber, bucket), key)
);
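For example (the values and the 12-bucket fan-out are made up for
illustration):
//write side: the client derives the bucket, e.g. (minute of hour) / 5 gives 12 buckets per hour
insert into distinct_test (hourNumber, bucket, key, distinctValue)
values (450123, 7, 'some-key', 42);
//read side: each (hourNumber, bucket) pair is a separate partition on its own replicas,
//so reading a whole hour means fanning out over all the buckets
select key, distinctValue from distinct_test
where hourNumber = 450123 and bucket in (0,1,2,3,4,5,6,7,8,9,10,11);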
The clustering column is ordered per partition key.
So if for example I create the following table:
create table desc_test (
id text,
name text,
PRIMARY KEY (id,name)
) WITH CLUSTERING ORDER BY (name DESC );
I insert a few rows:
insert into desc_test (id , name ) VALUES ( '
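To illustrate the resulting order, a hypothetical completion of that example
(the values are made up):
insert into desc_test (id , name ) values ( '1', 'alice');
insert into desc_test (id , name ) values ( '1', 'bob');
insert into desc_test (id , name ) values ( '1', 'carol');
select * from desc_test where id = '1';
//within partition '1' the rows come back ordered by name descending: carol, bob, alice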
Hi Gareth,
If you're using batches for multiple partitions, this may be the root cause
you've been looking for.
https://inoio.de/blog/2016/01/13/cassandra-to-batch-or-not-to-batch/
If batches are optimally used and only one node is misbehaving, check if
NTP on the node is properly synced.
Hope
Hi All,
Are there any known caveats for User Defined Types in Cassandra (version
3.0)?
One of our teams wants to start using them. I wish to assess it and see if
it'd be wise (or not) to refrain from using UDTs.
Thanks!
are on 3.0,
> So you are affected by UDT behaviour (stored as BLOB) mentioned in the
> JIRA.
>
> Cheers,
> Anup
>
> On 5 August 2018 at 23:29, shalom sagges wrote:
>
>> Hi All,
>>
>> Are there any known caveats for User Defined Types in Cassandra (version
If there are a lot of droppable tombstones, you could also run User Defined
Compaction on that (and on other) SSTable(s).
This blog post explains it well:
http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
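For reference, a sketch of how that is usually triggered over JMX, assuming
jmxterm is available locally and using a made-up SSTable file name (the
argument format can vary between versions, so check the blog post above for
your version):
echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-1234-big-Data.db" \
| java -jar jmxterm.jar -l localhost:7199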
On Fri, Aug 31, 2018 at 12:04 AM Mohamadreza Rostami <
mohamadrezarosta.
Hi Riccardo,
Does this issue occur when performing a single restart or after several
restarts during a rolling restart (as mentioned in your original post)?
We have a cluster where, when performing a rolling restart, we prefer to wait
~10-15 minutes between restarts because we see an increase of
What takes the most CPU? System or User?
Did you try removing a problematic node and installing a brand new one
(instead of re-adding)?
When you decommissioned these nodes, did the high CPU "move" to other nodes
(probably data model/query issues) or was it completely gone? (server
issues)
On Sun,
I guess the code experts could shed more light on
org.apache.cassandra.util.coalesceInternal and SepWorker.run.
I'll just add anything I can think of:
Any cron or other scheduler running on those nodes?
Lots of Java processes running simultaneously?
Heavy repair continuously running?
Lots of pe
Hi All,
If I run for example:
select * from myTable limit 3;
Does Cassandra do a full table scan regardless of the limit?
Thanks!
rency Factor)
>
>
>
> On Tue, Nov 6, 2018 at 8:21 AM shalom sagges
> wrote:
>
>> Hi All,
>>
>> If I run for example:
>> select * from myTable limit 3;
>>
>> Does Cassandra do a full table scan regardless of the limit?
>>
>> Thanks!
>>
>
Hi All,
I'm about to start a rolling upgrade process from version 2.0.14 to version
3.11.3.
I have a few small questions:
1. The upgrade process that I know of is from 2.0.14 to 2.1.x (higher
than 2.1.9 I think) and then from 2.1.x to 3.x. Do I need to upgrade first
to 3.0.x or can I upg
Disclaimer: The information provided in above response is my personal
> opinion based on the best of my knowledge and experience. We do
> not take any responsibility and we are not liable for any damage caused by
> actions taken based on above information.
> Thanks
> Anuj
>
>
>
Hi All,
I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrade to
3.11 (hopefully 3.11.4 if it'd be released very soon).
I have 2 small questions:
1. Currently the Datastax clients are enforcing Protocol Version 2 to
prevent mixed cluster issues. Do I need now to enforce Pro
t's possible or useful.
Thanks a lot Jeff for clarifying this.
I really hoped the answer would be different. Now I need to nag our R&D
teams again :-)
Thanks!
On Mon, Feb 11, 2019 at 8:21 PM Michael Shuler
wrote:
> On 2/11/19 9:24 AM, shalom sagges wrote:
> > I've successfull
Cleanup is a great way to free up disk space.
Just note you might run into
https://issues.apache.org/jira/browse/CASSANDRA-9036 if you use a version
older than 2.0.15.
On Thu, Feb 14, 2019 at 10:20 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Wed, Feb 13, 2019 at 6:47 PM Je
If you're using the PropertyFileSnitch, well... you shouldn't as it's a
rather dangerous and tedious snitch to use
I inherited Cassandra clusters that use the PropertyFileSnitch. It's been
working fine, but you've kinda scared me :-)
Why is it dangerous to use?
If I decide to change the snitch, is
Thanks for the info Alex!
I read
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsSwitchSnitch.html
but still have a few questions:
Our clusters are comprised of 2 DCs with no rack configuration, RF=3 on
each DC.
In this scenario, if I wish to seamlessly change the snitch with 0
Hi All,
Does anyone know the optimal hints configuration (multiple DCs) in terms of
max_hints_delivery_threads and hinted_handoff_throttle_in_kb?
If it's different for various use cases, is there a rule of thumb I can
work with?
I found this post but it's quite old:
http://www.uberob
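For reference, both settings live in cassandra.yaml, and the stock defaults
look like this (whether and how far to raise them is exactly the open
question above):
# cassandra.yaml (stock defaults)
hinted_handoff_throttle_in_kb: 1024   # max KB/s per delivery thread, reduced proportionally to cluster size
max_hints_delivery_threads: 2         # commonly raised for multi-DC clusters, since cross-DC delivery is slower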
d them?
>
>
>
> *From:* shalom sagges [mailto:shalomsag...@gmail.com]
> *Sent:* Monday, March 04, 2019 7:22 AM
> *To:* user@cassandra.apache.org
> *Subject:* A Question About Hints
>
>
>
> Hi All,
>
>
>
> Does anyone know what is the most optimal hints configu
e cluster?
>
> Are both settings definitely on the default values currently?
>
>
>
> I’d try making a single conservative change to one or the other, measure
> and reassess. Then do the same to the other setting.
>
>
>
> Then of course share your results with us.
>
le if you go too fast or too slow?
>
> BTW, I thought the comments at the end of the article you mentioned were
> really good.
>
>
>
>
>
>
>
> *From:* shalom sagges [mailto:shalomsag...@gmail.com]
> *Sent:* Monday, March 04, 2019 11:04 AM
> *To:* user@cassandra.ap
>
>
> Everyone really should move off of the 2.x versions just like you are
> doing.
>
>
>
> *From:* shalom sagges [mailto:shalomsag...@gmail.com]
> *Sent:* Monday, March 04, 2019 12:34 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: A Question About Hints
>
>
I would just stop the service of the joining node and then delete the data,
commit logs and saved caches.
After stopping the node while joining, the cluster will remove it from the
list (i.e. nodetool status) without the need to decommission.
On Tue, Apr 30, 2019 at 2:44 PM Akshay Bhardwaj <
aks
Hi Simon,
If you haven't done that already, try to drain and restart the node you
deleted the data from.
Then run the repair again.
Regards,
On Thu, May 2, 2019 at 5:53 PM Simon ELBAZ wrote:
> Hi,
>
> I am running Cassandra v2.1 on a 3 node cluster.
>
> *# yum list installed | grep cassa*
> *ca
Hi Rhys,
I encountered this error after adding new SSTables to a cluster and running
nodetool refresh (v3.0.12).
The refresh worked, but after starting repairs on the cluster, I got the
"Validation failed in /X.X.X.X" error on the remote DC.
A rolling restart solved the issue for me.
Hope this he
In a lot of cases, the issue is with the data model.
Can you describe the table?
Can you provide the query you use to retrieve the data?
What's the load on your cluster?
Are there lots of tombstones?
You can set the consistency level to ONE, just to check if you get
responses. Although normally I
Hi Vsevolod,
1) Why such behavior? I thought any given SELECT request is handled by a
limited subset of C* nodes and not by all of them, as per connection
consistency/table replication settings, in case.
When you run a query with allow filtering, Cassandra doesn't know where the
data is located, s
. Even if servers are busy
> with the request seriously becoming non-responsive...?
>
> cheers
> Attila Wind
>
> http://www.linkedin.com/in/attilaw
> Mobile: +36 31 7811355
>
>
> On 2019. 05. 23. 0:37, shalom sagges wrote:
>
> Hi Vsevolod,
>
> 1) Why such behavi
s queries by hand exactly like that over the
> cluster...
>
> thanks!
> Attila Wind
>
> http://www.linkedin.com/in/attilaw
> Mobile: +36 31 7811355
>
>
> On 2019. 05. 23. 11:42, shalom sagges wrote:
>
> a) Interesting... But only in case you do not provide partiti
Hi All,
I'm creating a dashboard that should collect read/write latency metrics on
C* 3.x.
In older versions (e.g. 2.0) I used to divide the total read latency in
microseconds by the read count.
Is there a metric attribute that shows read/write latency without the need
to do the math, such as i
If I only send ReadTotalLatency to Graphite/Grafana, can I run an average
on it and use "scale to seconds=1" ?
Will that do the trick?
Thanks!
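For what it's worth, a sketch of the kind of Graphite expression that does
this math (the metric path prefix here is a placeholder; adjust it to
whatever your reporter actually emits):
divideSeries(
nonNegativeDerivative($prefix.$ks.$cf.ReadTotalLatency.Count),
nonNegativeDerivative($prefix.$ks.$cf.ReadLatency.Count))
The derivative of the total-latency counter (microseconds) divided by the
derivative of the read count gives the average read latency in microseconds
for each interval, i.e. the same math as before done per data point.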
On Wed, May 29, 2019 at 5:31 PM shalom sagges
wrote:
> Hi All,
>
> I'm creating a dashboard that should collect read/write lat
ead these measure the
> latency in milliseconds
>
> Thanks
>
> Paul
> www.redshots.com
>
> > On 29 May 2019, at 15:31, shalom sagges wrote:
> >
> > Hi All,
> >
> > I'm creating a dashboard that should collect read/write latency metrics
> on
e.$ks.$cf.ReadTotalLatency.Count),7,8,9),1),'test')
WDYT?
On Thu, May 30, 2019 at 2:29 PM shalom sagges
wrote:
> Thanks for your replies guys. I really appreciate it.
>
> @Alain, I use Graphite for backend on top of Grafana. But the goal is to
> move from Graphite to Prometheus even
>> finding issues on the larger scale), especially with high volume clusters
>> so the loss in accuracy is kinda moot. Your average for local reads/writes
>> will almost always be sub-millisecond, but you might end up having 500
>> millisecond requests or worse that the me
Hi All,
I'm having a bad situation where after upgrading 2 nodes (binaries only)
from 2.1.21 to 3.11.4 I'm getting a lot of warnings as follows:
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread
Thread[ReadStage-5,5,main]: {}
java.lang.ArrayIndexOutOfBoundsException: null
mpactions, Reaper is turned off, and I see repair running only in the
logs.
Thanks!
On Wed, Jun 5, 2019 at 2:32 PM shalom sagges wrote:
> Hi All,
>
> I'm having a bad situation where after upgrading 2 nodes (binaries only)
> from 2.1.21 to 3.11.4 I
don't do it :) these are kind of
> special circumstances where other things have gone wrong.
>
> Thanks
>
> On Wed, Jun 5, 2019, 5:23 PM shalom sagges wrote:
>
>> If anyone has any idea on what might cause this issue, it'd be great.
>>
>> I don't underst
Hi All,
I've been trying to find which queries are run on a Cassandra node.
I've enabled DEBUG and ran *nodetool setlogginglevel
org.apache.cassandra.transport TRACE*
I did get some queries, but it's definitely not all the queries that are
run on this database.
I've also found a lot of DEBUG [Sha
ECUTE *d67e6a07c24b675f492686078b46c9**97*
Thanks!
On Thu, Sep 26, 2019 at 11:14 AM Jeff Jirsa wrote:
> The EXECUTE lines are a prepared statement with the specified number of
> parameters.
>
>
> On Wed, Sep 25, 2019 at 11:38 PM shalom sagges
> wrote:
>
>> Hi All,
>>
>>
ut what queries have run is to use audit
> logging plugin supported in 3.x, 2.2
> https://github.com/Ericsson/ecaudit
>
> On Thu, Sep 26, 2019 at 2:19 PM shalom sagges
> wrote:
>
>> Thanks for the quick response Jeff!
>>
>> The EXECUTE lines are a prepared s
that particular table.
Can anyone explain this behavior? Why would a Select query significantly
increase write count in Cassandra?
Thanks!
Thanks for the quick reply Vladimir.
Is it really possible that ~12,500 writes per second (per node in a 12-node
DC) are caused by memory flushes?
Yes, I know it's obsolete, but unfortunately this takes time.
We're in the process of upgrading to 2.2.8 and 3.0.9 in our clusters.
Thanks!
t wrote the query in the code,
thought it would be nice to limit the query results to 560,000,000. Perhaps
the ridiculously high limit might have caused this?
Thanks!
Hi Fabrice,
Just a small off-topic question I couldn't find an answer to. What is a
slice in Cassandra? (e.g. "Maximum tombstones per slice")
Thanks!
onses$Error$1.decode(Responses.java:58) at
com.datastax.driver.core.Responses$Error$1.decode(Responses.java:38) at
com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:168)
at
org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
.
Hi Guys,
A simple question for which I couldn't find an answer in the docs.
Is Centos 7 supported on DataStax Community Edition v3.0.9?
Thanks!
Thanks Vladimir!
I believe the logs should show you what the issue is.
Also, can the node "talk" with the others? (i.e. telnet to the other nodes
on port 7000).
.
https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgrdCassandra.html
Hope this helps.
Hi Everyone,
I was wondering how to choose the proper, most stable Cassandra version for
a Production environment.
Should I follow the version that's used in Datastax Enterprise (in this
case 3.0.10) or is there a better way of figuring this out?
Thanks!
Thanks Vladimir!
Do you get this error on specific column families or across the entire
environment?
Hi Everyone,
I have a 24 node cluster (12 in each DC) with a capacity of 3.3 TB per node
for the data directory.
I'd like to increase the capacity per node.
Can anyone tell me the maximum recommended capacity a node can use?
The disks we use are HDD, not SSD.
Thanks!
Shalom Sagges
y one.
Is this possible?
Thanks!
ade online, why not
install everything anew?
By the way, if I do take the longer way and add a new 2.2.8 node to the
cluster, do I still need to perform upgradesstables on the new node?
Thanks for the info Kurt,
I guess I'd go with the normal upgrade procedure then.
Thanks again for the help everyone.
handle the
storage modifications (please correct me if I'm wrong).
So my question is, if I need a 2.x version (can't upgrade to 3 due to
client considerations), which one should I choose, 2.1.x or 2.2.x? (I
don't require any new features available in 2.2.)
Thanks!
Shalom Sagges
Hey Kai,
Thanks for the info. Can you please elaborate on the reasons you'd pick
2.2.6 over 3.0?
Thanks a lot Kai!
CASSANDRA-7317 but saw it was fixed on 2.0.9. The version I'm using
is 2.0.14.
Any ideas?
Thanks!
need
gc_grace_seconds to be bigger than 0?
Sorry for all those questions, I'm just really confused by the whole
TTL/tombstones subject (still a newbie).
Thanks a lot!
TL for deletion, but use updates as well, do I need
gc_grace_seconds to be bigger than 0?
Thanks!
Thanks a lot Alain!!
This really cleared a lot of things for me.
Thanks again!
On Mon,
machine that divide their
capacity among various instances (not only Cassandra), will this affect
performance, especially when the commitlog directory will probably reside
on the same disk as the data directory?
I'm at a loss here and don't have any answers for that matter.
Can anyone assist please?
Thanks!
Thanks Vladimir!
I guess I'll just have to deploy and continue from there.
Thanks for the info Aaron!
I will test it in the hope there will be no issues. If none occur,
this could actually be a good idea and would save a lot of resources.
Have a great day!
host ;-) )
Regarding Trove, I doubt we'll use it in Production any time soon.
Thanks again!
Thanks for the info Romain,
Can you please tell me what the implications of not using CPU pinning are?
te query at consistency TWO (2 replica were required but only 1
acknowledged the write)"*
Any ideas? I'm quite at a loss here.
Thanks!
ue? Or is there something I'm missing...
Thanks!
slowly enabled autocompaction (a few
minutes between each enable) on the nodes one by one.
This helped with the CPU increase you've mentioned.
Hi Cassandra Users,
I hope someone could help me understand the following scenario:
Version: 3.0.9
3 nodes per DC
3 DCs in the cluster.
Consistency Local_Quorum.
I did a small resiliency test and dropped a node to check the availability
of the data.
What I assumed would happen is nothing at all.
orks on the available replicas in the DC. So if your
> replication factor is 2 and you have 10 nodes, you can still only lose 1.
> With a replication factor of 3 you can lose one node and still satisfy the
> query.
> Ryan Svihla schrieb am Do. 9. März 2017 um 18:09:
>
>> wh
Hi daniel,
I don't think that's a network issue, because ~10 seconds after the node
stopped, the queries were successful again without any timeout issues.
Thanks!
Hi Michael,
If a node suddenly fails, and there are other replicas that can still
satisfy the consistency level, shouldn't the request succeed regardless of
the failed node?
Thanks!
is retrieved.
Could this be a bug in 3.0.9? Or some sort of misconfiguration I missed?
Thanks!
Thanks a lot Joel!
I'll go ahead and upgrade.
Thanks again!
Upgrading to 3.0.12 solved the issue.
Thanks a lot for the help Joel!
g the server with the new kernel, can
I first install the upgraded Cassandra version and then bootstrap it to the
cluster?
Since there's already no data on the node, I wish to skip the agonizing
sstable upgrade process.
Does anyone know if this is doable?
Thanks!
that's not so user friendly) or perform the backup recommendations shown on
the Centos page (which sounds extremely agonizing as well).
What do you think?
Thanks!
Data directories are indeed separated from the root filesystem.
Our System team will look into this and hopefully they will be able to
install the new version seamlessly.
Thanks a lot everyone for your points and guidance. Much appreciated!
aller size.
Also, you can always play with the compaction threshold to suit your needs.
touching the data
directory which is on a different vg?