Check out KairosDB for a time series DB on Cassandra.
On Aug 31, 2015 7:12 AM, "Peter Lin" wrote:
>
> I didn't realize they had added max and min as stock functions.
>
> To get the sample time, you'll probably need to write a custom function.
> Google for it and you'll find people that have done it.
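As a sketch of the alternative to a custom CQL aggregate (this is a swapped-in client-side technique, not the thread's suggestion; `Sample` and its fields are hypothetical names), the max value and its sample time can be computed together after fetching the rows:

```scala
// Hypothetical row type: a (sample time, value) pair.
case class Sample(time: java.util.Date, value: Double)

// Reduce keeps whichever sample holds the larger value, so the sample
// time comes back alongside the max instead of needing a custom
// aggregate server-side.
def maxWithTime(samples: Seq[Sample]): Sample =
  samples.reduce((a, b) => if (a.value >= b.value) a else b)
```

The same reduce works unchanged on a Spark RDD of `Sample`, so it composes with the connector usage discussed elsewhere in this thread.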
Hi,
I am testing Spark and Cassandra: Spark 1.4, Cassandra 2.1.7, Cassandra Spark
connector 1.4, running in standalone mode.
I am getting 4000 rows from Cassandra (4 MB rows), where the row keys are
random.
.. sc.cassandraTable[RES](keyspace, res_name).where(res_where).cache
I am expecting that it
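For reference, the one-liner above expanded into a fuller sketch, assuming the DataStax spark-cassandra-connector 1.4 is on the classpath; the `RES` fields, connection host, table names, and predicate are all hypothetical stand-ins for the original's `keyspace`, `res_name`, and `res_where`:

```scala
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical row class matching the table's columns.
case class RES(id: String, ts: java.util.Date, value: Double)

val conf = new SparkConf()
  .setAppName("cassandra-read")
  .set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext(conf)

// where() pushes the predicate down to Cassandra as a CQL WHERE clause;
// cache() keeps the fetched rows in executor memory for reuse.
val rows = sc.cassandraTable[RES]("my_keyspace", "my_table")
  .where("id = ?", "some_key")
  .cache()
```

With random row keys, each fetch hits many partitions, which is worth keeping in mind when interpreting the read times.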
Hello,
I don't know if this is a good place to talk about that, but I think this might
help some people running into the same issue, so I will simply give some
feedback here about what I was running into over the last few months.
I've been using a secondary index (yes, this is bad), but the side effects
It's the number of cells and tombstones seen on the partitions during reads.
Just ignore the "last five minutes" part though, since that's incorrect.
It being zero probably means there have been no actual reads off of disk on
that node. You might want to check whether "Local read count" is non-zero,
which im
Hi Stan,
Basically, restarting a node that is not the one running the repair will make
repairs fail for the ranges being repaired that your node is responsible for
or a replica of. Your repair will continue but will be incomplete (you will
see a WARN in the logs, off the top of my head).
Restarting the node
The last two metrics of cfstats show zero for all the tables we have:
Average live cells per slice (last five minutes): 0.0
Average tombstones per slice (last five minutes): 0.0
What do these mean and why are they always zero?