Hi, Alain,
Thanks for your reply.
Unfortunately, it is a rather old version of the system, which comes with Cassandra
v1.2.15, and a database upgrade does not seem to be a viable solution. We have
also recently observed a situation where the Cassandra instance froze for around
one minute while the other n
Hi all,
I am running a 9-node C* 2.1.12 cluster. I seek advice on data size per
node. Each of my nodes has close to 1 TB of data. I am not seeing any issues
as of now but wanted to run it by you guys, whether this data size is pushing the
limits in any manner and whether I should be working on reducing data si
Thanks Guys,
I tend to agree that it's a viable configuration (but I'm biased).
We use Datadog monitoring to view reads and writes per node.
We see all the writes are balanced (due to the replication factor), but all
reads only go to DC1.
So with that, I believe the configuration is confirmed :)
Any way to ba
Hi,
I seek advice on data size per node. Each of my nodes has close to 1 TB of
> data. I am not seeing any issues as of now but wanted to run it by you guys
> if this data size is pushing the limits in any manner and if I should be
> working on reducing data size per node.
There is no real limit
>
> 100% ownership on all nodes isn’t wrong with 3 nodes in each of 2 DCs with
> RF=3 in both of those DCs. That’s exactly what you’d expect it to be, and a
> perfectly viable production config for many workloads.
+1, no doubt about it. The only thing is all the nodes own the exact same
data, mea
Hi Alain,
If you look below (the chain is getting long, I know), I mentioned that we are
indeed using DCAwareRoundRobinPolicy:
"We use the DCAwareRoundRobinPolicy in our Java DataStax driver in each DC's
application to point to that Cassandra DC."
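For readers following along, here is a minimal sketch of how that policy is typically wired up in the Java DataStax driver (2.1-era API). The contact point and DC name ("127.0.0.1", "DC1") are placeholders, and this needs a live cluster to actually connect, so it is illustrative only:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class DcAwareExample {
    public static void main(String[] args) {
        // Route queries to the local DC only ("DC1" is a placeholder);
        // remote DCs are not used unless explicitly configured as fallback.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withLoadBalancingPolicy(
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("DC1")
                                .build())
                .build();
        Session session = cluster.connect();
        // ... run queries against the local DC ...
        cluster.close();
    }
}
```

With one such application per DC, each pointed at its own local DC name, reads stay local while writes still replicate everywhere per the keyspace's replication factor.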
Indeed, it is a trade-off having all data over all
>
> I believe this is because the primary tokens were created in DC1 - due to
> an initial misconfiguration when our applications were first started and
> only used DC1 to create the keyspaces and tables
>
What does 'nodetool describecluster' output? If all the nodes share the
same schema then
Thanks for the response, Alain. I am using STCS and would like to take some
action, as we would be hitting 50% disk space pretty soon. Would adding nodes be
the right way to start if I want to get the data per node down? Otherwise, can
you or someone on the list please suggest the right way to go ab
Does anybody here have any experience, positive or negative, with deploying
Cassandra (or DSE) clusters using Kubernetes? I don't have any immediate
need (or experience), but I am curious about the pros and cons.
There is an example here:
https://github.com/kubernetes/kubernetes/tree/master/exampl
>
> Would adding nodes be the right way to start if I want to get the data per
> node down
Yes, if everything else is fine, the last and always-available option to
reduce the disk size per node is to add new nodes. Sometimes it is the
first option considered, as it is relatively quick and quite st
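As a back-of-the-envelope sketch of why adding nodes brings per-node size down: assuming vnodes spread data roughly evenly and the replication factor is unchanged, per-node data is simply total data divided by node count. (Real clusters only approximate even distribution, and old replicas must be cleaned up with `nodetool cleanup` after expansion for the savings to materialize.)

```java
public class NodeSizing {
    // Estimated data per node after scaling out, assuming even token
    // distribution (vnodes) and an unchanged replication factor.
    static double dataPerNodeTb(double currentPerNodeTb, int currentNodes, int newNodes) {
        double totalTb = currentPerNodeTb * currentNodes; // total on-disk data, replicas included
        return totalTb / newNodes;
    }

    public static void main(String[] args) {
        // The 9-node, ~1 TB/node cluster from this thread, grown to 12 nodes:
        System.out.println(dataPerNodeTb(1.0, 9, 12)); // prints 0.75
    }
}
```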
The four criteria I would suggest for evaluating node size:
1. Query latency.
2. Query throughput/load.
3. Repair time - worst case, a full repair, which you can least afford if it
happens at the worst time.
4. Expected growth over the next six to 18 months - you don't want to be
scrambling with latenc
That would be a nice solution, but 3.4 is way too bleeding edge. I’ll just go
with the digest for now. Thanks for pointing it out. I’ll have to consider a
migration in the future when production is on 3.x.
On Apr 11, 2016, at 10:19 PM, Jack Krupansky
mailto:jack.krupan...@gmail.com>> wrote:
Ch
You can use Mesos https://github.com/elodina/datastax-enterprise-mesos
~ Joestein
On Apr 14, 2016 10:13 AM, "Jack Krupansky" wrote:
> Does anybody here have any experience, positive or negative, with
> deploying Cassandra (or DSE) clusters using Kubernetes? I don't have any
> immediate need (or
> Does anybody here have any experience, positive or negative, with
> deploying Cassandra (or DSE) clusters using Kubernetes? I don't have any
> immediate need (or experience), but I am curious about the pros and cons.
>
>
The last time I played around with kubernetes+cassandra, you could not
spec
You can do that with the Mesos scheduler
https://github.com/elodina/datastax-enterprise-mesos and layout clusters
and racks for datacenters based on attributes
http://mesos.apache.org/documentation/latest/attributes-resources/
~ Joestein
On Apr 14, 2016 12:05 PM, "Nate McCall" wrote:
>
> Does an
Right now the biggest SSTable I have is 210 GB on a 3 TB disk; total disk
consumed is around 50% on all nodes, and I am using STCS. Read and write query
latency is under 15 ms. Full repair time is long, but I am sure that when I
switch to incremental repairs this will be taken care of. I am hitting the 50%
di
It was an older upgrade plan so I went ahead and tried to upgrade to 3.0.5 and
I ran into the same error.
Do you know what would cause this error? Is it something to do with tombstones
or deleted rows?
From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Wednesday, April 13, 2016 6:33 PM
To:
Hi,
Could someone give his opinion on this?
What should be considered more stable, Cassandra 3.0.5 or Cassandra 3.5?
Thank you
Jean
> On 12 Apr,2016, at 07:00, Jean Tremblay
> wrote:
>
> Hi,
> Which version of Cassandra should be considered most stable in version 3?
> I see two main branches: t
Normally, since 3.5 just came out, it would be wise to see if people report
any problems over the next few weeks.
But... the new tick-tock release process is designed to assure that these
odd-numbered releases are only incremental bug fixes from the last
even-numbered feature release, which was 3.
On Thu, Apr 14, 2016 at 2:08 PM, Anthony Verslues <
anthony.versl...@mezocliq.com> wrote:
> It was an older upgrade plan so I went ahead and tried to upgrade to 3.0.5
> and I ran into the same error.
>
Okay, good to know. Please include that info in the ticket when you open
it.
>
>
> Do you kn
Thanks for the info, Bryan!
We are in general assessing the support level of GoCQL vs. the Java Driver. From
http://gocql.github.io/, it looks like it is a WIP (some TODO items, the API is
subject to change)? And https://github.com/gocql/gocql suggests the
performance may degrade now and then, and the supported
Just want to put in a plug for gocql and the guys who work on it. I use it
for production applications that sustain ~10,000 writes/sec on an 8-node
cluster, and in the few times I have seen problems they have been responsive
on issues and pull requests. Once or twice I have seen the API change, but
o
Hello,
Is it a correct statement that both rebuild and bootstrap resume streaming from
where they were left off (meaning they don't stream the entire data again) in
case of a node restarting during the rebuild/bootstrap process?
Thanks !
https://issues.apache.org/jira/browse/CASSANDRA-8838
Bootstrap only resumes on 2.2.0 and newer. I’m unsure of rebuild, but I suspect
it does not resume at all.
From: Anubhav Kale
Reply-To: "user@cassandra.apache.org"
Date: Thursday, April 14, 2016 at 3:07 PM
To: "user@cassandra.apache.org"
Hi,
We are running a 6-node Cassandra 2.2.4 cluster and we are seeing a spike
in the disk load, as per the ‘nodetool status’ command, that does not
correspond with the actual disk usage. The load reported by nodetool was as
high as 3 times the actual disk usage on certain nodes.
We noticed that the periodic
I confirmed that rebuild doesn’t resume at all. I couldn’t find a JIRA on this.
Should I open one, or can someone explain if there is a design rationale?
From: Jeff Jirsa [mailto:jeff.ji...@crowdstrike.com]
Sent: Thursday, April 14, 2016 4:01 PM
To: user@cassandra.apache.org
Subject: Re: Nodetool
On behalf of the development community, I am pleased to announce the
release of YCSB 0.8.0. With the help of other Cassandra community
developers we are continuing to make enhancements to this binding. Help in
testing a Cassandra 3 instance would be greatly appreciated for the next
release.
Hig