Hi
I have a single-node Cassandra running the DataStax distribution. I reinstalled
Cassandra and updated data_file_directories in cassandra.yaml.
But now when I run nodetool cfstats I get SSTable count: 0.
This is the output of nodetool cfstats:
Keyspace: kairosdb
        Read Count: 0
        Read Latency: NaN ms.
        Write Count:
Seems like the Greek names are all used up; how about moving to Japanese
mythology? It's a brand new pool of names...
http://en.wikipedia.org/wiki/Japanese_mythology
On Fri, Oct 11, 2013 at 8:29 AM, Blair Zajac wrote:
> On 10/10/2013 10:28 PM, Blair Zajac wrote:
>
>> On 10/10/2013 08:53 PM, S
On 10/10/2013 10:28 PM, Blair Zajac wrote:
On 10/10/2013 08:53 PM, Sean McCully wrote:
On Thursday, October 10, 2013 08:30:42 PM Blair Jacuzzi wrote:
On 10/10/2013 07:54 PM, Sean McCully wrote:
Hello Cassandra Users,
I've recently created a Cassandra Agent as part of Netflix's Cloud
Prize
co
On 10/10/2013 08:53 PM, Sean McCully wrote:
On Thursday, October 10, 2013 08:30:42 PM Blair Jacuzzi wrote:
On 10/10/2013 07:54 PM, Sean McCully wrote:
Hello Cassandra Users,
I've recently created a Cassandra Agent as part of Netflix's Cloud Prize
competition, the submission which I've named H
On Thursday, October 10, 2013 08:30:42 PM Blair Jacuzzi wrote:
> On 10/10/2013 07:54 PM, Sean McCully wrote:
> > Hello Cassandra Users,
> >
> > I've recently created a Cassandra Agent as part of Netflix's Cloud Prize
> > competition, the submission which I've named Hector is largely based on
> >
On 10/10/2013 07:54 PM, Sean McCully wrote:
Hello Cassandra Users,
I've recently created a Cassandra Agent as part of Netflix's Cloud Prize
competition, the submission which I've named Hector is largely based on
Netflix's Priam. I would be very interested in getting feedback, from anyone
willing
Hello Cassandra Users,
I've recently created a Cassandra Agent as part of Netflix's Cloud Prize
competition. The submission, which I've named Hector, is largely based on
Netflix's Priam. I would be very interested in getting feedback from anyone
willing to give Hector (https://github.com/seanmcc
Have you done any migration? Can you correlate these errors with any
activity?
On Thu, Oct 10, 2013 at 8:00 AM, Ravikumar Govindarajan <
ravikumar.govindara...@gmail.com> wrote:
> We have suddenly started receiving RangeSliceCommand serializer errors.
>
> We are running 1.2.4 version
>
> This do
Am I posting this to the wrong place?
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Proper-Use-of-PreparedStatements-in-DataStax-driver-tp7590793p7590845.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at
Nabble.co
I started C* 2 in a test environment yesterday; you need JDK 7.
On Fri, Oct 11, 2013 at 9:20 AM, Brian Tarbox wrote:
> We're currently running our pre-production system on a 4 node EC2 cluster
> with C* 1.1.6.
>
> We have the luxury of a fresh install..rebuilding all our data so we can
> skip upg
We're currently running our pre-production system on a 4 node EC2 cluster
with C* 1.1.6.
We have the luxury of a fresh install, rebuilding all our data, so we can
skip upgrades and just install a clean system. We obviously won't do
this very often, so we'd like to do it right...take advantage of
Reads still need to satisfy quorum when you've specified quorum --
otherwise you have no consistency control.
Each read goes out to each node that has a replica of the key (in your case
all), and then independently each node consults its row cache and either
returns cached data or has to go through the
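For concreteness, a minimal sketch of what specifying the read consistency level
per query looks like with the DataStax Java driver; the contact point, keyspace,
table and driver version here are assumptions, not details from this thread:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class ReadConsistencySketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("mykeyspace"); // hypothetical keyspace

        // QUORUM needs acks from a majority of all replicas of the key;
        // LOCAL_QUORUM only needs a majority of the replicas in the local DC.
        SimpleStatement read = new SimpleStatement("SELECT * FROM users WHERE id = 42");
        read.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
        session.execute(read);

        cluster.shutdown();
    }
}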
Thanks, double checked; reads are CL.ONE.
On 10/10/2013 11:15 AM, J. Ryan Earl wrote:
Are you doing QUORUM reads instead of LOCAL_QUORUM reads?
On Wed, Oct 9, 2013 at 7:41 PM, Chris Burroughs
wrote:
I have not been able to do the test with the 2nd cluster, but have been
given a disturbing da
Are you doing QUORUM reads instead of LOCAL_QUORUM reads?
On Wed, Oct 9, 2013 at 7:41 PM, Chris Burroughs
wrote:
> I have not been able to do the test with the 2nd cluster, but have been
> given a disturbing data point. We had a disk slowly fail causing a
> significant performance degradation t
We have suddenly started receiving RangeSliceCommand serializer errors.
We are running version 1.2.4.
This does not happen for Names-based commands; only for Slice-based
commands do we get this error.
Any help is greatly appreciated.
ERROR [Thread-405] 2013-10-10 07:58:13,453 CassandraDaemon.java (l
SSTableSimpleUnsortedWriter is an SSTable writer, not Cassandra itself, so it just
writes to the file whatever you give it, as-is; you need to ensure consistency yourself.
You can check the file before running sstableloader: all the data is within the
SSTable, but instead of 1 row it will have 10 rows with the same key.
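As a rough illustration of what the writer does (and does not do), here is a minimal
sketch against the 1.2-era bulk-load API; the output directory, keyspace and column
family names are made up, and the point is only that the writer records exactly what
the caller feeds it:

import java.io.File;
import org.apache.cassandra.db.marshal.AsciiType;
import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;
import org.apache.cassandra.utils.ByteBufferUtil;

public class BulkWriteSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical output directory and keyspace/column family names;
        // the last argument is the buffer size in MB before a flush to disk.
        SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
                new File("/tmp/bulk/ks/cf"), new Murmur3Partitioner(),
                "ks", "cf", AsciiType.instance, null, 64);

        long ts = System.currentTimeMillis() * 1000; // microsecond timestamps

        // The writer performs no cluster-side reconciliation: it simply records
        // the rows and columns it is given, so the caller is responsible for
        // making sure the data is consistent before running sstableloader.
        writer.newRow(ByteBufferUtil.bytes("rowkey1"));
        writer.addColumn(ByteBufferUtil.bytes("col1"), ByteBufferUtil.bytes("val1"), ts);

        writer.newRow(ByteBufferUtil.bytes("rowkey1")); // same key fed in again, as supplied
        writer.addColumn(ByteBufferUtil.bytes("col2"), ByteBufferUtil.bytes("val2"), ts);

        writer.close();
    }
}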
Hi.
That is basically our setup. We'll be holding all data on all nodes.
My problem was more about how the cache would behave. I thought it might go
this way:
1. No cache hit
Read from 3 nodes to verify results are correct and then return. Write
result into RowCache.
2. Cache hit
Read from
Hi, I thought the bulk API could handle this, merging all columns
for the same super column. I did something like this in the Java client
(Hector), where it is able to resolve this conflict by only appending the columns.
Regarding the column value, if the code is overwriting the
colum
> From: johnlu...@hotmail.com
> To: user@cassandra.apache.org
> Subject: RE: cassandra hadoop reducer writing to CQL3 - primary key - must it
> be text type?
> Date: Wed, 9 Oct 2013 18:33:13 -0400
>
> reduce method :
>
> public void reduce(LongWrita
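Not from this thread, but for orientation: a reducer that writes through
CqlOutputFormat, in the shape of the stock hadoop_cql3_word_count example, looks
roughly like the sketch below; the column names and writable types are placeholders.
The output key is a map of primary key column name to serialized value, so the key
column does not have to be text as long as it is serialized with the matching type:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch of a reducer whose output goes to CqlOutputFormat: the key is a map of
// primary key column name -> serialized value, the value is the list of variables
// bound to the UPDATE statement configured via CqlConfigHelper.setOutputCql(...).
public class ReducerToCql3
        extends Reducer<LongWritable, IntWritable, Map<String, ByteBuffer>, List<ByteBuffer>> {

    @Override
    public void reduce(LongWritable key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values)
            sum += v.get();

        // Primary key column "id" is a bigint here, serialized to a ByteBuffer,
        // so the key does not have to be a text column.
        Map<String, ByteBuffer> keys = new HashMap<String, ByteBuffer>();
        keys.put("id", ByteBufferUtil.bytes(key.get()));

        List<ByteBuffer> variables = Collections.singletonList(ByteBufferUtil.bytes(sum));
        context.write(keys, variables);
    }
}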
If you're hitting 3/5 nodes, it sounds like you've set your replication
factor to 5. Is that what you're doing so you can have a 2-node outage?
For a 5-node cluster with RF=5, each node will have 100% of your data (a second
DC is just a clone), so with a 3GB off-heap cache it means that 3GB / total would
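To put a rough number on that last point (the figures here are invented for
illustration): with RF equal to the cluster size, every node holds the full data
set, so if each node stores about 300 GB, a 3 GB off-heap row cache can hold on
the order of 1% of it, and the actual hit rate then depends on how heavily the
reads are skewed toward a small hot set.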
I was reading through configuration tips for Cassandra and decided to
use the row cache in order to optimize the read performance on my cluster.
I have a cluster of 10 nodes, each of them operating with 3 GB off-heap,
using Cassandra 2.4.1. I am doing local quorum reads, which means that I
will hit
Hi,
thank you very much!
ju wenguang
From: Hannu Kröger
Date: 2013-10-10 17:01
To: user; juwg
Subject: Re: Re: Add a new node
Hi,
You don't need to restart for that either. Check this out:
http://www.datastax.com/docs/1.1/cluster_management#replication-factor
Cheers,
Hannu
2013/10/10
He
Hi,
You don't need to restart for that either. Check this out:
http://www.datastax.com/docs/1.1/cluster_management#replication-factor
Cheers,
Hannu
2013/10/10
> Hello,
>
> thank you very much for your reply.
>
> I want to ask another question:
> for an already existed keyspace, can I cha
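For an existing keyspace, the change is a single schema statement followed by a
repair so the new replicas actually receive the existing data. A minimal sketch
with the DataStax Java driver, assuming the Cassandra 1.2+ map syntax and a
placeholder keyspace name and factor:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ChangeReplicationSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Raise the replication factor of an existing keyspace; no restart is needed.
        session.execute("ALTER KEYSPACE mykeyspace WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 3}");

        // Existing data is not copied to the new replicas automatically:
        // run `nodetool repair mykeyspace` on each node afterwards.
        cluster.shutdown();
    }
}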
Hello,
thank you very much for your reply.
I want to ask another question:
for an already existing keyspace, can I change the number of replicas in its
REPLICATION option?
If so, do I need to restart the whole cluster?
Thanks in advance.
ju wenguang
From: Hannu Kröger
Date: 2013-10-10 16
Hello,
No, you don't need to. Check this out:
http://www.datastax.com/documentation/cassandra/2.0/webhelp/index.html#cassandra/operations/ops_add_node_to_cluster_t.html
Cheers,
Hannu
2013/10/10 juwg
> Hi all,
>
> I want to ask a basic question: To add a new node to Cassandra system, do
>
Hi all,
I want to ask a basic question: to add a new node to the Cassandra system, do I
need to restart the Cassandra system?
Thanks
juwenguang
jn shangjie