I started using the DataStax driver recently (coming from the Astyanax
driver). It is awesome! Use it :D
https://github.com/datastax/java-driver
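For anyone curious, a minimal sketch of what using it looks like (assuming
the 2.x-era API; the contact point, keyspace and query are placeholders,
not anything from this thread):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class QuickStart {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")   // placeholder node address
                .build();
        try {
            Session session = cluster.connect("my_keyspace");
            ResultSet rs = session.execute("SELECT * FROM my_table LIMIT 10");
            for (Row row : rs) {
                System.out.println(row);
            }
        } finally {
            cluster.close();                    // releases all driver resources
        }
    }
}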
Cheers,
artur
Hi,
we have had an issue with one of our nodes today:
1. Due to a wrong setup the starting node failed to bootstrap properly.
It was shown as UN in the cluster but did not contain any data, so we
shut it down to fix our configuration issue.
2. We figured we need to remove the node from t
Hey,
not sure if that's what you're looking for, but you can set
auto_bootstrap: false in your yaml file to prevent nodes from
bootstrapping themselves on startup. The option has been removed from
the default yaml file and defaults to true, but you can still add it
back to your configuration.
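Re-adding it is a one-line excerpt in cassandra.yaml, i.e. something like:

# re-add the removed option; everything else in the file stays as-is
auto_bootstrap: false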
There's a bit of documentation h
Hi,
pretty sure we started out like that and had not seen any problems doing
that. On a side note, that config may become inconsistent anyway after
adding new nodes, because I think you'll need to restart all your nodes
if you add new seeds to the yaml file. (Though that's just an assumption.)
Hi,
we are running a 7 node cluster with an RF of 5. Each node holds about
70% of the data and we are now wondering about the backup process.
1. Is there a best-practice procedure or a tool that we can use to have
one backup that holds 100% of the data, or is it necessary for us to
take mult
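(As a sanity check on the ~70% figure: with a replication factor of 5
across 7 nodes, each node owns 5/7 ≈ 71% of the data.)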
Hi,
to be honest, 2 days for 200 GB nodes doesn't sound too unreasonable to me
(depending on your hardware of course). We were running a ~20 GB cluster
with regular hard drives (no SSD) and our first repair ran for a day as
well, if I recall correctly. We since improved our hardware and got it down t
Hi,
we did something similar. We utilized some Cassandra code and wrote a
custom commitlog reader that outputs our data in a readable form.
You can look here:
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.cassandra/cassandra-all/1.1.9/org/apache/cassandra/db/commitlog
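Roughly the shape such a reader takes; this is only a minimal sketch that
assumes a simple length-prefixed entry layout for illustration. The real
on-disk format is version-specific, and an actual reader should reuse the
Cassandra classes linked above for deserialization.

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

public class CommitLogScanner {
    public static void main(String[] args) throws IOException {
        // args[0]: path to a commitlog segment file
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            long offset = 0;
            while (true) {
                int size = in.readInt();   // assumed length prefix per entry
                if (size <= 0) {
                    break;                 // assumed zero-filled segment tail
                }
                byte[] entry = new byte[size];
                in.readFully(entry);       // serialized mutation bytes
                System.out.printf("entry at offset %d: %d bytes%n", offset, size);
                offset += 4 + size;
            }
        } catch (EOFException e) {
            // end of segment reached
        }
    }
}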
About repairs,
we encountered a similar problem with our setup where repairs would take
ages to complete. Based on your setup you can try loading data into page
cache before running repairs. Depending on how much data you can hold in
cache, this will speed up your repairs massively.
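(One common way to warm the page cache on Linux, assuming you have the
room: cat the relevant sstable files to /dev/null, or pin them with a
tool like vmtouch.)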
-- artur
It's been a while since I tried that but here are some things I can
think of:
* The .log.out extension seems wrong, unless your Cassandra commitlogs
really do end in .log.out. I tried your script locally and my commitlogs
get extracted to .log files.
* I never tried the restore procedure on
If someone confirms, I am happy to raise a bug.
cheers,
artur
hi Bonnet,
that doesn't seem to be a problem with your archiving, rather with the
restoring. What is your restore command?
-- artur
On 11/12/13 13:47, Bonnet Jonathan. wrote:
> Thanks a lot,
> It works, I see commit logs being archived.
Hi,
There are some docs on the internet for this operation. It is basically
as presented in the archiving config file
(commitlog_archiving.properties). The way it works: your command is
called automatically with parameters that give you control over what you
want to do with each segment.
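For reference, a sketch along the lines of the commented-out samples that
ship in commitlog_archiving.properties (the paths and the timestamp are
placeholders):

# archive each closed segment by hard-linking it into a backup directory
archive_command=/bin/ln %path /backup/%name

# on restart, copy archived segments back so they get replayed
restore_command=cp -f %from %to
restore_directories=/backup

# optional: replay only up to this point in time
restore_point_in_time=2013:12:11 17:00:00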
Hi John,
I am trying again :)
The way I understand it, compression gives you the advantage of using
far less IO at the cost of more CPU. The bottleneck for reads is
usually the IO time you need to read the data from disk. As a figure, we
had about 25 reads/s reading from disk, while
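(In other words, assuming reads are IO-bound: compressed sstables mean
fewer bytes have to come off disk per read, so the same disk budget
serves more reads, and the decompression work lands on the CPU, which is
usually the less contended resource.)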
Hi Julien,
I hope I get this right :)
a repair will trigger a major compaction on your node, which will take up
a lot of CPU and IO. It needs to do this to build up the data structure
that is used for the repair (a Merkle tree of data hashes). After the
compaction this is streamed to the different nodes in order
> Read latency depends on many factors, don't forget "physics".
> If it meets your requirements, it is good.
From: Artur Kronenberg [mailto:artur.kronenb...@openmarket.com]
Sent: Thursday, October 17, 2013 7:40 PM
To: user@cassandra.apache.org
Subject: Sorting
Hi,
I am looking to somehow increase read performance on cassandra. We are
still playing with configurations, but I was wondering whether there are
software-side solutions that might help us speed up our read performance.
E.g. one idea (not sure how sane it is) was to sort read-batches by
row-
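Presumably the idea is something like the following minimal sketch (my
reading of "sort read-batches by row key"; readRow is a hypothetical
placeholder for the actual single-row read):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortedBatchRead {
    // Order the keys of a batch before issuing the reads, so disk access
    // within the batch is closer to sequential.
    static List<String> fetchBatch(List<String> rowKeys) {
        List<String> sorted = new ArrayList<>(rowKeys);
        Collections.sort(sorted);
        List<String> results = new ArrayList<>();
        for (String key : sorted) {
            results.add(readRow(key));   // hypothetical single-row read
        }
        return results;
    }

    static String readRow(String key) {
        return "row:" + key;             // placeholder for the real lookup
    }
}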
Hi,
I was playing around with cassandra off-heap options. I configured 3 GB
off-heap for my row cache and 2 GB Heap space for cassandra. After
running a bunch of load tests against it I saw the cache warm up. Doing
a jmap histogram I noticed a lot of offHeapkey objects. At that point my
row c
I was reading through configuration tips for cassandra and decided to
use the row cache in order to optimize the read performance on my cluster.
I have a cluster of 10 nodes, each of them operating with 3 GB off-heap,
using cassandra 2.4.1. I am doing local quorum reads, which means that I
will hit
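For context, the off-heap row cache size is set in cassandra.yaml; a
one-line excerpt matching the 3 GB mentioned above (an assumption of how
it was configured, not taken from the original mail):

row_cache_size_in_mb: 3072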