3.7 falls under the Tick-Tock release cycle, which is almost completely
untested in production by experienced operators. In the cases where it has
been tested, numerous bugs have been found which I (and I think most
people on this list) consider to be show-stoppers. Additionally, the Tick
T
Can you elaborate on why not 3.7?
On Tue, Sep 20, 2016 at 7:41 PM, Jonathan Haddad wrote:
> If you haven't yet deployed to prod I strongly recommend *not* using 3.7.
>
> What network storage are you using? Outside of a handful of highly
> experienced experts using EBS in very specific ways, it
Thanks Nate. We do not have monitoring set up yet, but I should be able to
get the deployment updated with a metrics reporter. I'll update the thread
with my findings.
On Tue, Sep 20, 2016 at 10:30 PM, Nate McCall wrote:
If you can get to them in the test env., you want to look in
o.a.c.metrics.CommitLog for:
- TotalCommitLogSize: if this hovers near commitlog_total_space_in_mb and
never goes down, you are thrashing on segment allocation
- WaitingOnCommit: this is the time spent waiting on calls to sync and will
start to c
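A minimal sketch of reading those two values over JMX from Java, assuming the default JMX port (7199) and the standard metrics MBean names; worth double-checking the exact names in jconsole against your build:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class CommitLogMetrics {
        public static void main(String[] args) throws Exception {
            // Default Cassandra JMX endpoint; adjust host/port for your env.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();

                // Gauge: current total size of all commitlog segments (bytes).
                Object total = mbs.getAttribute(new ObjectName(
                        "org.apache.cassandra.metrics:type=CommitLog,name=TotalCommitLogSize"),
                        "Value");

                // Timer: count of commits that waited on a segment sync.
                Object waiting = mbs.getAttribute(new ObjectName(
                        "org.apache.cassandra.metrics:type=CommitLog,name=WaitingOnCommit"),
                        "Count");

                System.out.println("TotalCommitLogSize: " + total);
                System.out.println("WaitingOnCommit count: " + waiting);
            }
        }
    }

The same attributes can be wired into a metrics reporter so they are collected continuously rather than polled by hand.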
I have seen in various threads on the list that 3.0.x is probably best for
prod. Just wondering, though, if there is anything in particular in 3.7 to be
wary of.
I need to check with one of our QA engineers to get specifics on the
storage. Here is what I do know. We have a blade center running lots
If you haven't yet deployed to prod I strongly recommend *not* using 3.7.
What network storage are you using? Outside of a handful of highly
experienced experts using EBS in very specific ways, it usually ends in
failure.
On Tue, Sep 20, 2016 at 3:30 PM John Sanda wrote:
I am deploying multiple Java web apps that connect to a Cassandra 3.7
instance. Each app creates its own schema at startup. One of the schema
changes involves dropping a table. I am seeing frequent client-side
timeouts reported by the DataStax driver after the DROP TABLE statement is
executed. I d
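One thing worth ruling out in a case like this is the driver's own read timeout firing while the schema change propagates. A minimal sketch with the DataStax Java driver 3.x; the contact point and keyspace/table names are placeholders:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SocketOptions;

    public class DropTableCheck {
        public static void main(String[] args) {
            // Raise the client-side read timeout (12s by default in driver 3.x)
            // so a slow DDL round trip is less likely to surface as a timeout.
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1") // placeholder contact point
                    .withSocketOptions(new SocketOptions().setReadTimeoutMillis(30_000))
                    .build();
                 Session session = cluster.connect()) {

                ResultSet rs = session.execute("DROP TABLE IF EXISTS myks.mytable");

                // DDL is only safe to follow up on once all nodes agree on the
                // schema version; the driver exposes a per-statement flag and a
                // cluster-wide check.
                if (!rs.getExecutionInfo().isSchemaInAgreement()) {
                    boolean agreed = cluster.getMetadata().checkSchemaAgreement();
                    System.out.println("Schema agreement reached: " + agreed);
                }
            }
        }
    }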
Hi,
I need to separate clients' data into multiple clusters, and because I don't
like having multiple CQL clients/connections in my app code, I'm thinking
of creating many keyspaces and storing them in many virtual datacenters
(the servers will be in 1 logical datacenter, but separated by keyspace
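For what it's worth, the usual way to pin each keyspace to its own slice of hardware is NetworkTopologyStrategy with one virtual datacenter per client. A sketch, assuming DC names like dc_client_a are what your snitch reports; all names here are placeholders:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class PerClientKeyspaces {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1") // placeholder contact point
                    .build();
                 Session session = cluster.connect()) {
                // Each client keyspace replicates only into its own virtual DC;
                // the DC names must match what the snitch (e.g.
                // GossipingPropertyFileSnitch) reports for the target nodes.
                session.execute("CREATE KEYSPACE IF NOT EXISTS client_a WITH replication = "
                        + "{'class': 'NetworkTopologyStrategy', 'dc_client_a': 3}");
                session.execute("CREATE KEYSPACE IF NOT EXISTS client_b WITH replication = "
                        + "{'class': 'NetworkTopologyStrategy', 'dc_client_b': 3}");
            }
        }
    }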
Thanks Robert; we followed the instructions mentioned in
http://thelastpickle.com/blog/2015/09/30/hardening-cassandra-step-by-step-part-1-server-to-server.html.
It worked great.
Due to the security policies in our company, we were asked to
use 3rd-party signed certs. Since we'll requ
Hi Sai,
I would recommend following the approach described in this article via The
Last Pickle:
http://thelastpickle.com/blog/2015/09/30/hardening-cassandra-step-by-step-part-1-server-to-server.html
It does a really good job of laying out a strategy for internode encryption
by rolling your own C
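For reference, the cassandra.yaml section that article walks you through populating is server_encryption_options; a sketch with placeholder paths and passwords:

    # cassandra.yaml (per node); paths and passwords are placeholders
    server_encryption_options:
        internode_encryption: all          # encrypt all node-to-node traffic
        keystore: /etc/cassandra/conf/server-keystore.jks
        keystore_password: changeit
        truststore: /etc/cassandra/conf/server-truststore.jks
        truststore_password: changeit
        require_client_auth: true          # only accept peers with trusted certs

The win with CA-signed certs is that every node can share one truststore containing the CA chain, instead of every node having to trust every other node's self-signed cert individually.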
Hi,
Has anybody enabled SSL using a generic keystore for node-to-node
encryption? We're using 3rd-party signed certificates and want to avoid
the hassle of managing hundreds of certificates.
Thanks,
Sai
Great explanation!
For the single partition read, it makes sense to read data from only one
replica.
Thank you so much Ben!
Jun
From: ben.sla...@instaclustr.com
Date: Tue, 20 Sep 2016 05:30:43 +
Subject: Re: Question about replica and replication factor
To: wuxiaomi...@hotmail.com
CC: user@c
Hi!
Have you had a chance to try your patch or solve the issue in another way?
Thanks,
Mikhail
> On 15 Sep 2016, at 16:02, DuyHai Doan wrote:
>
> Ok so I've found the source of the issue, it's pretty well hidden because it
> is NOT in the SASI source code directly.
>
> Here is the method wh
Hi,
I am using version 2.0.9. I have been looking into the logs to see if a
repair is finished. Each time a repair is started on a node, I am seeing a
log line like "INFO [Thread-112920] 2016-09-16 19:00:43,805
StorageService.java (line 2646) Starting repair command #41, repairing 2048
ranges for ke
Hello Zhiyan,
Replying to the mailing list since this could help others. I'm not sure
what that could be; it's generally related to some kind of corruption,
perhaps CASSANDRA-10791. Although the message is similar to #10971, that is
restricted to streaming, so it's a different issue here. Was this
I was able to find the hotspots causing the load, but the size of these
partitions is in KB, there are no tombstones, and the number of SSTables is
only 2. What else do I need to debug to find the reason for the high load
on some nodes?
We are also using unlogged batches; could that be the reason? How do I
find which
The Cassandra team is pleased to announce the release of Apache Cassandra
version 3.0.9.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source an
Also, for testing purposes, you can send only one replica set to the test DC.
For instance, with RF=3 and 3 C* racks, you can just rsync/sstableload one
rack. It will be faster and OK for tests.
Best,
Romain
On Tuesday, 20 September 2016 at 3:28, Michael Laws wrote:
I put together a shel
Hi,
You should run a benchmark with cassandra-stress to find the sweet spot. With
NVMe I guess you can start with a high value, 128?
Please let us know the results of your findings; it's interesting to know
whether we can go crazy with such pieces of hardware :-)
Best,
Romain
On Tuesday, 20 Septem
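For reference, the knob under discussion is set in cassandra.yaml; the value below is a placeholder to validate with cassandra-stress, not a recommendation:

    # cassandra.yaml: number of read requests processed concurrently per node
    concurrent_reads: 128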
Hi,
You can read and write the values of the following MBean via JMX:
org.apache.cassandra.db:type=CompactionManager
- CoreCompactorThreads
- MaximumCompactorThreads
If you modify CoreCompactorThreads it will be effective immediately; I mean,
assuming you have some pending compactions, you will se
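A minimal sketch of changing that attribute from Java over JMX, assuming the default JMX port and no JMX authentication:

    import javax.management.Attribute;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class SetCompactorThreads {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName cm = new ObjectName(
                        "org.apache.cassandra.db:type=CompactionManager");

                System.out.println("Before: " + mbs.getAttribute(cm, "CoreCompactorThreads"));
                // Takes effect immediately for pending compactions; reverts to
                // the cassandra.yaml value on restart.
                mbs.setAttribute(cm, new Attribute("CoreCompactorThreads", 4));
                System.out.println("After: " + mbs.getAttribute(cm, "CoreCompactorThreads"));
            }
        }
    }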
Hello,
We have commented out "concurrent_compactors" in our Cassandra 2.1.13
installation.
We would like to review this setting, as some issues indicate that the default
configuration may affect read/write performance.
https://issues.apache.org/jira/browse/CASSANDRA-8787
https://issues.
Hello,
We are using Cassandra 2.1.13, with each node having an NVMe disk with the
configuration of Total Capacity - 1.2TB, Allotted Capacity - 880GB. We would
like to increase the default value of 32 for the param concurrent_reads. But
the document says
"(Default: 32) Note: For workloads with
Apologies. The entire error stack:
ERROR [SharedPool-Worker-5] 2016-09-20 11:23:20,039 ErrorMessage.java:251 -
Unexpected exception during request
com.google.common.util.concurrent.UncheckedExecutionException:
java.lang.RuntimeException:
org.apache.cassandra.exceptions.ReadTimeoutException: Operat
This appears in the system log:
Caused by: java.lang.RuntimeException:
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out -
received only 2 responses.
at org.apache.cassandra.auth.Auth.selectUser(Auth.java:276)
~[apache-cassandra-2.1.14.jar:2.1.14]
at org.apache.cass
Hi,
> More recent (I think 2.2) don't have this problem since they write hints to
>the file system as per the commit log
Flat-file hints were implemented starting from 3.0:
https://issues.apache.org/jira/browse/CASSANDRA-6230
Best,
Romain
I am also getting the same error:
cqlsh -u cassandra -p cassandra
Connection error: ('Unable to connect to any servers', {'':
OperationTimedOut('errors=Timed out creating connection (5 seconds),
last_host=None',)})
But it is not consistent. Sometimes I manage to connect. It is random.
Using 2.1.
On Mon, Sep 19, 2016 at 3:07 PM Alain RODRIGUEZ wrote:
...
> - The size of your data
> - The number of vnodes
> - The compaction throughput
> - The streaming throughput
> - The hardware available
> - The load of the cluster
> - ...
>
I've also heard that the number of clustering keys per partit