This is ridiculously slow for that hardware setup. It sounds like you're
benchmarking with a single thread and/or synchronous queries, or with very
large writes. A setup like this should easily be able to handle tens of
thousands of writes/s.
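For reference, the usual way to reach that kind of throughput from a single client is to keep many writes in flight at once rather than waiting on each query. A minimal sketch of that windowed-async pattern, assuming an `execute_async`-style call that returns a future (as `session.execute_async()` in the DataStax Python driver does — the function name, window size, and wiring here are illustrative):

```python
import time

def windowed_write_bench(execute_async, statements, window=256):
    """Keep up to `window` writes in flight at once instead of waiting for
    each one. `execute_async` must return a future-like object exposing
    .result(), as cassandra-driver's session.execute_async() does."""
    start = time.time()
    in_flight = []
    done = 0
    for stmt in statements:
        in_flight.append(execute_async(stmt))
        if len(in_flight) >= window:
            in_flight.pop(0).result()  # block only on the oldest write
            done += 1
    for fut in in_flight:              # drain the remaining tail
        fut.result()
        done += 1
    return done, time.time() - start
```

Against a real cluster you would pass something like `lambda s: session.execute_async(s)`; a single-threaded synchronous loop is effectively this with `window=1`, which is why it benchmarks so much slower.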
2016-11-23 8:02 GMT+01:00 Jonathan Haddad :
How are you benchmarking that?
On Tue, Nov 22, 2016 at 9:16 PM Abhishek Kumar Maheshwari <
abhishek.maheshw...@timesinternet.in> wrote:
Hi,
I have 8 servers in my Cassandra cluster. Each server has 64 GB RAM, 40
cores, and 8 SSDs. Currently I have the below config in cassandra.yaml:
concurrent_reads: 32
concurrent_writes: 64
concurrent_counter_writes: 32
compaction_throughput_mb_per_sec: 32
concurrent_compactors: 8
With this confi
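For comparison, the starting points cited in the cassandra.yaml comments and common tuning guides (treat these as rules of thumb to verify against your version's docs, not gospel) are concurrent_writes ≈ 8 × cores and concurrent_reads ≈ 16 × number of data drives. For the hardware described above, that arithmetic suggests noticeably higher values than the configured 64/32:

```python
cores, data_drives = 40, 8  # hardware described in the post above

# Rule-of-thumb starting points from common Cassandra tuning guidance
concurrent_writes = 8 * cores        # 8 in-flight writes per core
concurrent_reads = 16 * data_drives  # 16 in-flight reads per data drive

print(concurrent_writes)  # 320
print(concurrent_reads)   # 128
```

Whether the higher values actually help depends on the workload, so benchmark before and after changing them.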
dclocal_read_repair_chance and read_repair_chance are only really relevant
when using a consistency level
> Hi Kurt,
>
> Thank you for the suggestion. I ran repair on all the 4 nodes, and after
> the repair, the error “Corrupt empty row found in unfiltered partition”
> disappeared, but the
Hi Kurt,
Thank you for the suggestion. I ran repair on all the 4 nodes, and after the
repair, the error “Corrupt empty row found in unfiltered partition”
disappeared, but the “Mismatch” stopped for a little while and came up again.
When we changed both the “dclocal_read_repair_chance” and the
“r
Yes it could potentially impact performance if there are lots of them. The
mismatch would occur on a read, the error occurs on a write which is why
the times wouldn't line up. As I mentioned, the messages are caused by a
digest mismatch between replicas. The cause is inconsistent
de
We’re seeing a strange issue on our Cassandra cluster wherein 3 nodes out
of 21 appear to have a significant amount of hints piling up. We’re not
seeing a lot in the system log showing that the node is having issues with
hints and nodetool status is not showing any issues with the other nodes in
Sorry, I probably didn't catch your setup fully.
Would you like to use a shared data folder for both nodes, assuming you never
run two Cassandra processes simultaneously?
Well, I guess it's possible. Running two Cassandra instances on the same data
folder together won't work, so prevent this situ
Yes, change rpc_address to node B.
Immutability aside, if Node A Cassandra and Node B Cassandra are using the
same directory on the same shared filesystem, let's call it
/cassandra/state/database,
would that not be a problem? Or said differently, does not Node A need its
own writable place /cassa
Thanks Nate and Vladimir,
I will give it a try.
On Tue, Nov 22, 2016 at 12:48 AM, Vladimir Yudovin
wrote:
Hello Shalom.
No, I really went from 3.1.1 to 3.0.9.
Cheers.
Bertrand
On Nov 22, 2016 1:57 AM, "Shalom Sagges" wrote:
>
> *I took that opportunity to upgrade from 3.1.1 to 3.0.9*
>
> If my guess is right and you meant that you upgraded from 2.1.1 to 3.0.9
> directly, then this might cause som
Ok,
I submitted to datastax my question.
Regards,
Raphaël CHAUMIER
From: Vladimir Yudovin [mailto:vla...@winguzone.com]
Sent: Tuesday, 22 November 2016 16:59
To: user
Subject: RE: cassandra documentation (Multiple datacenter write requests)
question
Can the Apache Cassandra community update this
Hi Lou,
do you mean you set rpc_address (or broadcast_rpc_address) to Node_B_IP on
the second machine?
>there would be potential database corruption, no?
Well, even though SSTables are immutable, it could lead to unpredictable
behavior, I guess. I don't believe anybody has tested such a setup before.
>Is t
Can the Apache Cassandra community update this documentation?
I don't think so; it's hosted on the DataStax website and it's not a public
wiki. Anyway, you know what the right quorum calculation formula is )))
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra
Launch your cluster
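For anyone following along, the quorum formula alluded to above is quorum = floor(sum_of_replication_factors / 2) + 1, where the sum runs over the replication factor of every datacenter. A quick sketch of the arithmetic:

```python
def quorum(*rf_per_dc):
    """Quorum across all replicas: floor(total_replicas / 2) + 1."""
    return sum(rf_per_dc) // 2 + 1

print(quorum(3))        # single DC, RF=3      -> 2
print(quorum(3, 3))     # two DCs, RF=3 each   -> 4
print(quorum(3, 3, 3))  # three DCs, RF=3 each -> 5
```

This is the plain QUORUM calculation; LOCAL_QUORUM applies the same formula to the local datacenter's replication factor only.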
We use a single instance of Cassandra on Node A that employs a shared file
system to keep its data and logs.
Let's say we want to fail over to Node B by editing the yaml file to change
Node A to Node B. If we now (mistakenly) bring up Cassandra on
Node B whilst the Cassandra on Node A is still
Thank you Hannu,
Can the Apache Cassandra community update this documentation?
From: Hannu Kröger [mailto:hkro...@gmail.com]
Sent: Tuesday, 22 November 2016 14:48
To: user@cassandra.apache.org
Subject: Re: cassandra documentation (Multiple datacenter write requests)
question
Looks like the graph is wrong.
Hannu
> On 22 Nov 2016, at 15.43, CHAUMIER, RAPHAËL wrote:
Hello everyone,
I don't know if you have access to the DataStax documentation. I don't
understand the example about multiple datacenter write requests
(http://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlClientRequestsMultiDCWrites.html).
The graph shows there are 3 nodes making up the QUORUM,
It's safe but since the replacement node will stream data from a single
replica per local range, it will potentially propagate any inconsistencies
from the replica it streams from, so it's recommended to run repair after a
replace to reduce entropy specially when replacing a node with the same IP
d
Thanks for the detailed answer Alexander.
We'll look into your suggestions, it's definitely helpful. We have plans
to reduce tombstones and remove the table with the big partitions,
hopefully after we've done that the cluster will be stable again.
Thanks again.
On Tue, Nov 22, 2016, at 09
>if I use the same certificate how does it help?
This certificate will be recognized by all existing nodes, and no restart will
be needed.
Or, as Nate suggested, you can use trusted root certificate to issue nodes'
certificates.
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud
You should be using a root certificate for signing all the node
certificates to create a trust chain. That way nodes won't have to
explicitly know about each other, only the root certificate.
This post has some details:
http://thelastpickle.com/blog/2015/09/30/hardening-cassandra-step-by-step-part
Yes, I am generating a separate certificate for each node.
Even if I use the same certificate, how does it help?
On Mon, Nov 21, 2016 at 9:02 PM, Vladimir Yudovin
wrote:
> Hi Jai,
>
> so do you generate a separate certificate for each node? Why not use one
> certificate for all nodes?
>
> Best regar
Hi Vincent,
Here are a few pointers for disabling swap :
-
https://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html
-
http://stackoverflow.com/questions/22988824/why-swap-needs-to-be-turned-off-in-datastax-cassandra
Tombstones are definitely the kind of object th