Hi everyone,
Many people have been asking similar questions about CASSANDRA-13004. The
issue itself and the release notes may be somewhat hard to grasp or sound
ambiguous, so here's a more elaborate explanation of what 13004 means in
terms of the upgrade process, and how it manif
I have a small 3-node C* + Spark cluster. When I run any query from Spark,
it gives a connection refused error on 2 of the C* nodes, which puts all
the pressure on a single node, resulting in bad performance. Below is the
error from spark-submit:
17/07/25 12:00:22 INFO Cluster: New Cassandra host /10.128.1.1:
Hi everyone,
I've been looking but couldn't find any real-workload Cassandra traces on
the web. Do you know where I can find them, and how I can replay them?
Thanks
Hi all,
We're trying to load a snapshot back into a cluster, but are running into
memory issues.
We've got about 190GB of data across 11 sstable generations. Some of the
smaller ones load, but the larger ones don't.
We've tried increasing the max heap size to 16G, but still see this
exception:
ss
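For reference, a minimal bulk-load sketch with sstableloader, loading one
table at a time to keep the loader's footprint small. The paths, keyspace,
and table names below are made up; sstableloader expects a directory whose
last two components are keyspace/table:

    # Copy the snapshot into a keyspace/table-shaped directory first
    # (sstableloader infers the keyspace and table from the path).
    mkdir -p /tmp/load/myks/mytable
    cp /var/lib/cassandra/data/myks/mytable-*/snapshots/mysnap/* \
       /tmp/load/myks/mytable/

    # Stream the sstables into the cluster; -d takes the contact point(s).
    sstableloader -d 10.128.1.1 /tmp/load/myks/mytable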
Hi
We recently upgraded to Cassandra 3.10, and one issue we are facing is
that node restarts are taking a very long time.
On checking the debug logs, it appears Cassandra saves every prepared
statement ever executed in the *system.prepared_statements* table, and on
startup tries to load all
Hi folks,
We have a cluster with Cassandra 3.9, running on AWS in a single-region,
multi-AZ setup. We use Priam to do our backups to S3 and to auto-restore on
node bootstrap. We created an index on a simple uuid field on one of our
existing tables, after which the restores fail on that table (see l
+1 bump.
We are experiencing the same issue
> On Jul 25, 2017, at 08:02, Anshul Rathore wrote:
>
> Hi
>
> We recently upgraded to Cassandra 3.10, and one issue we are facing is
> that node restarts are taking a very long time.
>
> On checking the debug logs, it appears cassandra s
This is addressed in 3.11.1 by CASSANDRA-13641. The workaround for now is
to truncate system.prepared_statements before restarting the node.
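For anyone else hitting this, the workaround is a one-liner from cqlsh
(the node address below is just an example; the cache is simply
repopulated as clients re-prepare their statements):

    # Clear the persisted prepared-statement cache before restarting the
    # node; clients re-prepare transparently on reconnect.
    cqlsh 10.128.1.1 -e "TRUNCATE system.prepared_statements;"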
2017-07-25 11:12 GMT-05:00 Taylor Cressy :
> +1 bump.
>
> We are experiencing the same issue
>
> On Jul 25, 2017, at 08:02, Anshul Rathor
Hi,
We are trying to figure out our highest read/write operations per second
in our 9x9 cluster. I checked nodetool tablestats, but it shows cumulative
counts from the time the node started. I would like to see rates per
second.
Also, how can I find out the size of each write request?
--
Hi,
Has anyone seen the following exception before?
Context:
* Cassandra 3.9
* single node (20 cores / 256 GB RAM)
* doing lots of counter mutations
* Whenever this exception happens, the CPU spikes and the node becomes
unresponsive for a few minutes. Eventually, the node will "die", i.e.
become comple
Hi,
Is "alter table t add column..." an expensive operation? For example, if it's
something to be triggered at an admin level of an application (i.e. not
frequently), is it ok? It won’t trigger rewriting all the data, right?
My goal is not to have super wide tables, I know that’s not the pract
Are you aware of this ticket?
https://issues.apache.org/jira/browse/CASSANDRA-13004
On Tue, Jul 25, 2017 at 1:23 PM, Boris Iordanov
wrote:
> Hi,
>
> Is "alter table t add column..." an expensive operation? For example, if
> it's something to be triggered at an admin level of an application (i.e.
>
No, I hadn’t, thanks much for pointing it out!
The takeaway for me, then, is that such an update should be done offline.
In that case, a schema change would be safe and relatively efficient (it
wouldn't take hours). If that assumption is wrong, could somebody let me
know?
Thanks much,
Boris
> On Ju
This is a quick informational question. I know that Cassandra can detect
node failures and repair them given replication and multiple DCs.
My question is: can Cassandra tell whether data was lost after a failure,
once the node(s) are "fixed" and have resumed operation?
If so, where would it log or flag it? O
On 07/25/2017 05:13 AM, Junaid Nasir wrote:
listen_address: 10.128.1.1
rpc_address: 10.128.1.1
Are these the values on all three nodes?
If so, try with empty values:
listen_address:
rpc_address:
or make sure each node has its own IP address configured.
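A quick way to check this on each node (the path is an assumption; package
installs usually keep the config at /etc/cassandra/cassandra.yaml, tarball
installs under conf/):

    # Run on every node (e.g. via ansible) to confirm it binds its own IP.
    grep -E '^(listen_address|rpc_address):' /etc/cassandra/cassandra.yaml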
Cassandra doesn't do any automatic repairing. It can tell if your data is
inconsistent, but it's really up to you to manage consistency through
repairs and your choice of consistency level for queries. If you lose a
node, you have to manually repair the cluster after replacing the node,
but really y
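As a concrete sketch of that flow (the address and keyspace below are
examples, not from the thread):

    # Start the replacement node with the dead node's address, e.g. in
    # cassandra-env.sh:
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.128.1.2"

    # Once it has bootstrapped, run a full repair so the replicas converge
    # on a consistent copy of the data.
    nodetool repair -full myks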
You will need to use JMX to collect read/write-related metrics. I'm not
aware of anything that measures write size, but if there isn't anything,
it should be easy to measure on your client.
There are quite a few existing solutions for monitoring Cassandra out
there; you should find some easily with a quick searc
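In the meantime, a crude way to get a per-second rate is to diff the
cumulative counters from nodetool over a sampling window (keyspace and
table names below are placeholders):

    # Sample 'Local write count' twice, 10 seconds apart, and divide.
    c1=$(nodetool tablestats myks.mytable | awk '/Local write count/ {print $4}')
    sleep 10
    c2=$(nodetool tablestats myks.mytable | awk '/Local write count/ {print $4}')
    echo "writes/sec: $(( (c2 - c1) / 10 ))"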
If by "offline" you mean with no reads going to the nodes, then yes that
would be a *potentially *safe time to do it, but it's still not advised.
You should avoid doing any ALTERs on versions of 3 less than 3.0.14 or 3.11
if possible.
Adding/dropping a column does not require a re-write of the dat
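For what it's worth, the operation itself is a one-line schema change
(keyspace, table, and column names here are hypothetical); it updates
metadata only, not the sstables:

    # Metadata-only change; no sstable rewrite happens.
    cqlsh -e "ALTER TABLE myks.mytable ADD new_col text;"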
Thank you!
Boris
> On Jul 25, 2017, at 7:44 PM, kurt greaves wrote:
>
> If by "offline" you mean with no reads going to the nodes, then yes that
> would be a potentially safe time to do it, but it's still not advised. You
> should avoid doing any ALTERs on versions of 3 less than 3.0.14 or 3.1
Thank you Kurt.
My read/write requests question got answered; getting the write request
size is still my unresolved question :(
I'm sure it's a common requirement; does anybody have a solution?
> On Jul 25, 2017, at 6:23 PM, kurt greaves wrote:
>
> You will need to use JMX to collect read/write-related metrics.
Thanks all for your replies. We will begin using racks in our C* cluster.
Thanks.
-- Original Message --
From: "kurt greaves";
Date: Tue, Jul 25, 2017, 6:27
To: "User"; "anujw_2...@yahoo.co.in";
Cc: "Peng Xiao" <2535...@qq.com>;
Subject: Re: Reply: toler
Keep in mind that you shouldn't just enable multiple racks on an existing
cluster (this will lead to massive inconsistencies). The best method is to
migrate to a new DC as Brooke mentioned.
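The DC migration itself boils down to two steps; a minimal sketch, with
the keyspace, DC names, and replication factors as placeholders:

    # 1. Add the new DC to the keyspace's replication settings.
    cqlsh -e "ALTER KEYSPACE myks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"

    # 2. On each node in the new DC, stream existing data from the old DC.
    nodetool rebuild -- DC1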
Thanks for the reminder; we will set up a new DC as suggested.
-- Original Message --
From: "kurt greaves";
Date: Wed, Jul 26, 2017, 10:30 AM
To: "User";
Cc: "anujw_2...@yahoo.co.in";
Subject: Re: Reply: tolerate how many nodes down in the cluster
Keep in mind that you shouldn't just
Regarding write request size: it looks like you can also collect
MutationSizeHistogram for each write from the coordinator. See the Write
request section under
https://cassandra.apache.org/doc/latest/operating/metrics.html#client-request-metrics
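A quick sketch of reading that histogram over JMX (port 7199 is the
default; jmxterm is a third-party CLI used here only as an example, and
its jar path and flags depend on your install):

    # Read the coordinator-side mutation size histogram via JMX.
    echo 'get -b org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=MutationSizeHistogram Mean Max' \
      | java -jar jmxterm.jar -l localhost:7199 -n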
Each node has its own IP for listen_address and rpc_address. The seed node
IP is fixed to 10.128.1.1 on all nodes. The configuration was written
using Ansible, and I have also verified it.
On Wed, Jul 26, 2017 at 3:52 AM, Erik Forkalsud wrote:
> On 07/25/2017 05:13 AM, Junaid Nasir wrote:
>
> listen_addr
Thanks for the update.
On Tue, Jul 25, 2017 at 10:13 PM Paulo Motta
wrote:
> This is addressed in 3.11.1 by CASSANDRA-13641. The workaround for now
> is to truncate system.prepared_statements before restarting the node.
>
> 2017-07-25 11:12 GMT-05:00 Taylor Cressy :
> > +1