This project implements the Graphite API on top of Cassandra and can be
used from Grafana:
https://github.com/pyr/cyanite
On Wed, May 9, 2018 at 10:39 AM dba newsql wrote:
> Does anyone use Cassandra as data storage for Grafana as a timeseries DB?
>
> Thanks,
> Fay
>
wrote:
> Yeah, I thought that was suspicious too; it's mysterious and fairly
> consistent. (By the way, I had error checking but removed it for email
> brevity, but thanks for verifying :) )
>
> On Mon, Mar 2, 2015 at 4:13 PM, Peter Sanford
> wrote:
>
>> Hmm. I was able
Hmm. I was able to reproduce the behavior with your Go program on my dev
machine (C* 2.0.12). I was hoping it was going to just be an unchecked
error from the .Exec() or .Scan(), but that is not the case for me.
The fact that the issue seems to happen on loop iterations 10, 100 and 1000
is pretty suspicious.
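For reference, a minimal sketch of that kind of loop with the .Exec() and
.Scan() error checking left in (the keyspace, table, and column names are
made up; this is not the actual program from the thread):

package main

import (
    "fmt"
    "log"

    "github.com/gocql/gocql"
)

func main() {
    cluster := gocql.NewCluster("127.0.0.1")
    cluster.Keyspace = "test" // hypothetical keyspace
    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    for i := 0; i < 1000; i++ {
        // Check the error from Exec() instead of discarding it.
        if err := session.Query(`UPDATE kv SET value = ? WHERE key = ?`, i, "k").Exec(); err != nil {
            log.Fatalf("iteration %d: exec failed: %v", i, err)
        }

        var value int
        // Check the error from Scan() as well.
        if err := session.Query(`SELECT value FROM kv WHERE key = ?`, "k").Scan(&value); err != nil {
            log.Fatalf("iteration %d: scan failed: %v", i, err)
        }
        if value != i {
            fmt.Printf("iteration %d: read back %d, expected %d\n", i, value, i)
        }
    }
}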
On Mon, Oct 6, 2014 at 1:56 PM, DuyHai Doan wrote:
> Isn't there a video of Ooyala at some past Cassandra Summit demonstrating
> usage of Cassandra for text search using trigrams? AFAIK they were storing
> some kind of bitmap to perform OR & AND operations on trigrams.
>
That sounds like the talk Matt
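(Sketching the idea from DuyHai's description: one bitmap per trigram, one
bit per document, then AND the bitmaps of the query's trigrams to get
candidate documents. The data and names below are made up; this is not the
Ooyala implementation.)

package main

import (
    "fmt"
    "math/big"
)

// trigrams splits a string into overlapping 3-character grams.
func trigrams(s string) []string {
    var out []string
    for i := 0; i+3 <= len(s); i++ {
        out = append(out, s[i:i+3])
    }
    return out
}

func main() {
    // Bit i of a trigram's bitmap is set if document i contains that trigram.
    docs := []string{"cassandra", "cassette", "sandwich"}
    index := map[string]*big.Int{}
    for i, d := range docs {
        for _, g := range trigrams(d) {
            if index[g] == nil {
                index[g] = new(big.Int)
            }
            index[g].SetBit(index[g], i, 1)
        }
    }

    // Start with all bits set, then AND in the bitmap of each query trigram.
    result := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), uint(len(docs))), big.NewInt(1))
    for _, g := range trigrams("sand") {
        bm := index[g]
        if bm == nil {
            bm = new(big.Int)
        }
        result.And(result, bm)
    }
    for i := range docs {
        if result.Bit(i) == 1 {
            fmt.Println("candidate:", docs[i])
        }
    }
}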
For snapshots, yes. For incremental backups you need to delete the files
yourself.
On Wed, Jun 18, 2014 at 6:28 AM, Marcelo Elias Del Valle <
marc...@s1mbi0se.com.br> wrote:
> Wouldn't it be better to use "nodetool clearsnapshot"?
> []s
>
>
> 2014-06-14 17:38 GMT-03:00 S C :
>
> I am thinking of "
You should delete the backup files once you have copied them off. Otherwise
they will start to use disk space as the live SSTables diverge from the
snapshots/incrementals.
-psanford
On Sat, Jun 14, 2014 at 10:17 AM, S C wrote:
> Is it OK to delete files from the backups directory (hard links) once
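(To make the cleanup step concrete, a rough sketch assuming the usual
<data_dir>/<keyspace>/<table>/backups layout; the path is made up, and a
real script should verify the copy succeeded before deleting anything.)

package main

import (
    "log"
    "os"
    "path/filepath"
)

func main() {
    // Hypothetical location of one table's incremental backup hard links.
    backupsDir := "/var/lib/cassandra/data/my_keyspace/my_table/backups"

    entries, err := os.ReadDir(backupsDir)
    if err != nil {
        log.Fatal(err)
    }
    for _, e := range entries {
        p := filepath.Join(backupsDir, e.Name())
        // Remove the hard link so it stops pinning disk space once the
        // file has been copied off the node.
        if err := os.Remove(p); err != nil {
            log.Printf("could not remove %s: %v", p, err)
        }
    }
}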
On Wed, Jun 11, 2014 at 9:17 PM, Jack Krupansky
wrote:
> Hmmm... that multi-get section is not present in the 2.0 doc:
>
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAntiPatterns_c.html
>
> Was that intentional – is that anti-pattern no longer a concern?
On Wed, Jun 11, 2014 at 10:12 AM, Jeremy Jongsma
wrote:
> The big problem seems to have been requesting a large number of row keys
> combined with a large number of named columns in a query. 20K rows with 20K
> columns destroyed my cluster. Splitting it into slices of 100 sequential
> queries fixed it.
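(For illustration, one way to do that slicing; the row-key format and the
fetch call are placeholders, not the actual client code from this thread.)

package main

import "fmt"

// chunk splits keys into slices of at most n, so one huge multi-key request
// can be issued as many small sequential requests instead.
func chunk(keys []string, n int) [][]string {
    var out [][]string
    for start := 0; start < len(keys); start += n {
        end := start + n
        if end > len(keys) {
            end = len(keys)
        }
        out = append(out, keys[start:end])
    }
    return out
}

func main() {
    // 20,000 hypothetical row keys, as in the thread.
    keys := make([]string, 20000)
    for i := range keys {
        keys[i] = fmt.Sprintf("row-%05d", i)
    }

    batches := chunk(keys, 100)
    fmt.Printf("issuing %d sequential queries of up to 100 keys each\n", len(batches))
    // for _, batch := range batches { fetchRows(batch) } // fetchRows is a hypothetical stand-in
}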
f the cluster in such a
> configuration, but I don’t think that it is related to this configuration,
> though it could be.
>
> Mitchell
>
> *From:* Peter Sanford [mailto:psanf...@retailnext.net]
> *Sent:* Monday, June 09, 2014 7:19 AM
> *To:* user@cassandra.apache.
Your general assessments of the limitations of the Ec2 snitches seem to
match what we've found. We're currently using the
GossipingPropertyFileSnitch in our VPCs. This is also the snitch to use if
you ever want to have a DC in EC2 and a DC with another hosting provider.
-Peter
On Mon, Jun 9, 201
The issue you should look at is CASSANDRA-4206.
This is apparently fixed in 2.0, so upgrading is one option. If you are not
ready to upgrade to 2.0, then you can try increasing
in_memory_compaction_limit_in_mb. We were hitting this exception on one of
our nodes and increasing in_memory_compaction_limit_in_mb did fix it.
I can't tell you why that one-liner isn't working, but you can try
http://www.cassandraring.com for generating balanced tokens.
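For what it's worth, the usual formula for evenly spaced RandomPartitioner
tokens is token_i = i * 2**127 / N. A quick sketch (this is not the
one-liner from the thread):

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // Evenly spaced initial_token values for the RandomPartitioner, whose
    // token space is 0 .. 2^127 - 1. nodeCount = 2 matches the question.
    nodeCount := 2
    ringSize := new(big.Int).Lsh(big.NewInt(1), 127) // 2^127

    for i := 0; i < nodeCount; i++ {
        token := new(big.Int).Mul(big.NewInt(int64(i)), ringSize)
        token.Div(token, big.NewInt(int64(nodeCount)))
        fmt.Printf("node %d: initial_token: %s\n", i, token.String())
    }
}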
On Thu, Oct 31, 2013 at 11:59 PM, Techy Teck wrote:
> I am trying to set up a two-node Cassandra cluster on Windows machines. I have
> basically two Windows machines and I w
We're working on upgrading from 1.0.12 to 1.1.12. After upgrading a test
node I ran into CASSANDRA-4157 which restricts the max length of CF names
to <= 48 characters. It looks like CASSANDRA-4110 will allow us to upgrade
and keep our existing long CF names, but we won't be able to create new CFs
with long names.
That library requires you to serialize and deserialize the data
yourself. So to insert a Ruby Float you would do something like:
value = 28.21
# 'G' packs a double-precision float in network (big-endian) byte order.
packed = [value].pack('G')
@client.insert(:somecf, 'key', {'floatval' => packed})
and to read it back out (unpack returns an array, so take the first element):
value = @client.get(:somecf, 'key', ['floatval']).unpack('G').first
It looks like the /cassandra directory is missing from most of the
mirrors right now. The only mirror that I've found to work is
http://www.eu.apache.org
On Fri, Aug 24, 2012 at 2:53 AM, ruslan usifov wrote:
> Hm, the cassandra packages are present on the European servers, but they
> are absent from the Russian servers.
>
Just rebooting a machine with ephemeral drives is OK (it does an OS-level
reboot), and you will also keep the same IP address. If you stop and
then start a machine with ephemeral drives, you will lose the data on the
ephemeral drives.
See: http://alestic.com/2011/09/ec2-reboot-stop-start
On Wed, Dec 7, 2011 at 6:43 PM, Stephen
By default, Cassandra is configured to use up to half the RAM of your
system. That's way overkill for playing around with it on a laptop.
Edit /etc/cassandra/cassandra-env.sh and set MAX_HEAP_SIZE to
something more suited to your environment.
I have it set to 256M on my laptop (with 4G of RAM).
I use `watch` to do this:
watch -n 5 nodetool -h localhost tpstats
-psanford
On Wed, Aug 31, 2011 at 1:59 PM, David Hawthorne wrote:
> It would be very useful to be able to get refreshing statistics from tpstats,
> a la top.
>
> nodetool -h localhost tpstats [n]
>
> refresh every second, show