Hi,
I'm thinking about reducing the number of vnodes per server.
We have a 3-DC setup - one with 9 nodes, two with 3 nodes each.
Each node has 256 vnodes. We've found that repair operations are beginning
to take too long.
Is reducing the number of vnodes to 64/32 likely to help our situation?
With just 3 nodes per datacenter, reduce vnodes to 1.
What options do I have for achieving this in a live cluster?
You need to remove a node, move its data to the other two, and add it back with
a different vnode count.
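For reference, the knob involved is num_tokens in cassandra.yaml; a sketch of what the re-added node's config might look like (the value here is illustrative, not a recommendation):

```yaml
# cassandra.yaml on the node being re-added (illustrative value):
# num_tokens controls how many vnodes this node claims on the ring.
num_tokens: 32        # was 256; fewer vnodes means fewer, larger ranges to repair
# leave initial_token unset when num_tokens is in use
```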
Chris,
Which C* version are you running?
You might want to upgrade to the latest version before reducing the vnode
count; a lot of fixes and improvements went in lately, which might make your
repairs faster.
Haithem
On 5 August 2013 12:30, Christopher Wirt wrote:
> I'm thinking about reducing the number of vnodes per server.
>
> We have 3 DC setup - one with 9 nodes, two with 3 nodes each.
>
> Each node has 256 vnodes. We've found that repair operations are beginning
> to take too long.
We run a cluster in EC2 and it's working very well for us. The standard
seems to be m2.2xlarge instances with data living on the ephemeral drives
(which means it's local and fast) and backups either to EBS, S3, or just
relying on cluster size and replication (we avoid that last idea).
Brian
1.2.4. Really hesitant to upgrade versions due to the inevitable issues it
will cause.
Guess I could upgrade a single node and let that run for a while before
upgrading all nodes.
You can try nodetool scrub. If it does not work, try repair then cleanup.
We had this issue a few weeks back, but our version is 1.0.x.
On Mon, Aug 5, 2013 at 8:12 AM, Keith Wright wrote:
> Re-sending hoping to get some help. Any ideas would be much appreciated!
>
> From: Keith Wright
> Date: Frid
Thanks for the feedback. This node actually shut down halfway through when it
was bootstrapping the first time, which likely led to this data corruption. We
restarted the JVM and it appeared stable until this issue. We decided to stop
Cassandra, wipe the node, and restart so that it can bootstrap again.
Also check your system log for IO Errors. Scrub may eliminate the error,
but even if it does work you should still run repair. This type of
corruption usually happens because of a failed or failing disk/memory.
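For anyone following along, here is the sequence being suggested as a dry-run shell sketch (keyspace/table names are placeholders; the run wrapper only prints each command so you can review before swapping in real execution):

```shell
run() { echo "+ $*"; }            # dry-run helper: prints instead of executing
KS=my_keyspace CF=my_table        # placeholder names -- substitute your own

run nodetool scrub "$KS" "$CF"    # rewrite sstables, discarding corrupt rows
run nodetool repair "$KS" "$CF"   # re-fetch from replicas anything scrub dropped
run nodetool cleanup "$KS"        # remove data this node no longer owns
```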
On Mon, Aug 5, 2013 at 8:44 AM, Jason Wee wrote:
> you can try nodetool scrub. if it
On Wed, Jul 31, 2013 at 3:10 PM, Jonathan Haddad wrote:
> It's advised you do not use compact storage, as it's primarily for
> backwards compatibility.
>
Many Apache Cassandra experts do not advise against using COMPACT STORAGE.
[1] Use CQL3 non-COMPACT STORAGE if you want to, but there are also
The CQL docs recommend not using it - I didn't just make that up. :)
COMPACT STORAGE imposes the limit that you can't add columns to your
tables. For those of us that are heavy CQL users, this limitation is a
total deal breaker.
On Mon, Aug 5, 2013 at 10:27 AM, Robert Coli wrote:
> On Wed, J
Hello,
Question about counters, replication and the ReplicateOnWriteStage
I've recently turned on a new CF which uses a counter column.
We have a three DC setup running Cassandra 1.2.4 with vNodes, hex core
processors, 32Gb memory.
DC 1 - 9 nodes with RF 3
DC 2 - 3 nodes with RF 2
On 5 August 2013 20:04, Christopher Wirt wrote:
> Hello,
>
> ** **
>
> Question about counters, replication and the ReplicateOnWriteStage
>
> ** **
>
> I’ve recently turned on a new CF which uses a counter column.
>
> ** **
>
> We have a three DC setup running Cassandra 1.2.4 with vN
"COMPACT STORAGE imposes the limit that you can't add columns to your
tables."
is absolutely false. If anything, CQL is imposing the limits!
Simple to prove. Try something like this:
create table abc (x int PRIMARY KEY);
insert into abc (y) values (5);
and watch CQL reject the insert, saying something to the effect that column y
is not defined.
If you expected your CQL3 query to work, then I think you've missed the
point of CQL completely. For many of us, adding in a query layer which
gives us predictable column names, but continues to allow us to utilize
wide rows on disk is a huge benefit. Why would I want to reinvent a system
for str
From the Cassandra 1.2 manual:

"Using the compact storage directive prevents you from adding more than one
column that is not part of the PRIMARY KEY. At this time, updates to data in
a table created with compact storage are not allowed. The table with compact
storage that uses a compound primary key..."
"Feel free to continue to use thrift's wide row structure, with ad hoc
columns. No one is stopping you."
Thanks. I was not trying to stop you from doing it your way either.
You said this:
"COMPACT STORAGE imposes the limit that you can't add columns to your
tables."
I was demonstrating you are incorrect.
CQL maps a series of logical rows into a single physical row by transposing
multiple rows based on partition and clustering keys into slices of a row.
The point is to add a loose schema on top of a wide row which allows you
to stop reimplementing common patterns.
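As a toy illustration of that transposition (plain Python, not Cassandra internals; the table and column names are invented):

```python
# Toy model of CQL3's layout: logical rows sharing a partition key
# ("user") become cells of one wide physical row, with composite
# cell names built from the clustering key ("ts") plus column name.
logical_rows = [
    {"user": "u1", "ts": 1, "msg": "hi"},
    {"user": "u1", "ts": 2, "msg": "bye"},
]

wide_rows = {}
for row in logical_rows:
    cells = wide_rows.setdefault(row["user"], {})  # one physical row per partition
    for col in ("msg",):
        cells[(row["ts"], col)] = row[col]         # composite cell name

# wide_rows == {"u1": {(1, "msg"): "hi", (2, "msg"): "bye"}}
```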
Yes, you can go in and mess with
We've seen high CPU in stress tests with counters. With our workload, we had
some hot counters (e.g. ones with 100s of increments/sec) with RF = 3, which
caused the load to spike and replicate-on-write tasks to back up on those
three nodes. Richard already gave a good overview of why this happens.
For those of you using pycassa or the cassandra dbapi2 driver, I wanted to
let you know that a beta version of the DataStax python driver is available
on GitHub here: https://github.com/datastax/python-driver
This driver works exclusively with CQL3 and uses the new native protocol
for Cassandra 1.2.
Hi all,
I have been trying to bootstrap a new node into my 7-node 1.2.4 C* cluster
with vnodes and RF 3, with no luck. It gets close to completing and then the
streaming just stalls at 99% from 1 or 2 nodes. Nodetool netstats shows the
items that have yet to stream, but the logs o
On Fri, May 10, 2013 at 11:24 AM, Robert Coli wrote:
> I have been wondering how Repair is actually used by operators. If
> people operating Cassandra in production could answer the following
> questions, I would greatly appreciate it.
>
https://issues.apache.org/jira/browse/CASSANDRA-5850
Hi,
The problem is that the node sending the stream is hitting this
FileNotFound exception. You need to restart this node and it should fix the
problem.
Are you seeing a lot of FileNotFoundExceptions? Did you make any schema
changes recently?
Sankalp
On Mon, Aug 5, 2013 at 5:39 PM, Keith Wright wrote:
Yes we likely dropped and recreated tables. If we stop the sending node, what
will happen to the bootstrapping node?
sankalp kohli wrote:
> Hi,
> The problem is that the node sending the stream is hitting this
> FileNotFound exception. You need to restart this node and it should fix the
> problem.
So the problem is that when you dropped and recreated the table with the
same name, somehow the old CFStore object was not purged. So now there
were two objects, which caused the same sstable to have two SSTableReader
objects. The fix is to find all nodes which are emitting this FileNotFound
exception and restart them.
Let me know if this fixes the problem?
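A sketch of how one might locate those nodes (the log path here is an assumption; adjust for your install):

```shell
# Count FileNotFoundException hits in the Cassandra system log; a
# nonzero count marks a node that needs the restart described above.
count_fnf() { grep -c "FileNotFoundException" "$1" 2>/dev/null || true; }

LOG="${LOG:-/var/log/cassandra/system.log}"   # assumed default path
count_fnf "$LOG"
```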
On Mon, Aug 5, 2013 at 6:24 PM, sankalp kohli wrote:
> So the problem is that when you dropped and recreated the table with the
> same name, some how the old CFStore object was not purged. So now there
> were two objects which caused same sstable to have 2
Nice job man!
On Mon, Aug 5, 2013 at 6:41 PM, Tyler Hobbs wrote:
> For those of you using pycassa or the cassandra dbapi2 driver, I wanted to
> let you know that a beta version of the DataStax python driver is available
> on GitHub here: https://github.com/datastax/python-driver
>
> This driver
I've been thinking through some cases that I can see happening at some
point and thought I'd ask on the list to see if my understanding is correct.
Say a bunch of columns were loaded 'a long time ago', i.e. long enough
in the past that they have been compacted. My understanding is that if some