More specifically, I declared a column family:
create column family Column_Family
with key_validation_class = UTF8Type
and comparator = 'CompositeType(LongType,UTF8Type)'
and default_validation_class = UTF8Type;
The number of columns will depend only on the first column name in
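[For reference, and not from the original message: a column family like the one above, created through cassandra-cli with a composite comparator, is roughly equivalent to the following CQL3 table (the key/column1/column2/value names are the generic ones Cassandra uses for Thrift-defined column families; treat this as a sketch):

CREATE TABLE "Column_Family" (
    key text,        -- key_validation_class UTF8Type
    column1 bigint,  -- LongType component of the composite column name
    column2 text,    -- UTF8Type component of the composite column name
    value text,      -- default_validation_class UTF8Type
    PRIMARY KEY (key, column1, column2)
) WITH COMPACT STORAGE;

Each distinct (column1, column2) pair under a given key is one cell in that key's storage row, so the cell count per row grows with the number of composite name pairs, not with the number of statically declared columns.]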
Will do the same!
Thanks,
Or.
On Tue, Aug 12, 2014 at 6:47 PM, Clint Kelly wrote:
> Hi Or,
>
> For now I removed the test that was failing like this from our suite
> and made a note to revisit it in a couple of weeks. Unfortunately I
> still don't know what the issue is. I'll post here if I f
On Tue, Aug 12, 2014 at 4:46 PM, Viswanathan Ramachandran <
vish.ramachand...@gmail.com> wrote:
> Andrey, QUORUM consistency and no deletes makes perfect sense.
> I believe we could modify that to EACH_QUORUM or QUORUM consistency and no
> deletes - isn't that right?
>
yes.
Hi -
I am currently running a single Cassandra node on my local dev machine.
Here is my (test) schema (which is meaningless, I created it just to
demonstrate the issue I am running into):
CREATE TABLE foo (
    foo_name ascii,
    foo_shard bigint,
    int_val bigint,
    PRIMARY KEY ((foo_name, foo_sha
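[The statement above is cut off in the archive. A plausible completion, given purely for readability, would be the following; everything after "foo_sha" is a guess, including whether there are any clustering columns:

CREATE TABLE foo (
    foo_name ascii,
    foo_shard bigint,
    int_val bigint,
    PRIMARY KEY ((foo_name, foo_shard))
);]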
Andrey, QUORUM consistency and no deletes makes perfect sense.
I believe we could modify that to EACH_QUORUM or QUORUM consistency and no
deletes - isn't that right?
Thanks
On Tue, Aug 12, 2014 at 3:10 PM, Andrey Ilinykh wrote:
> 1. You don't have to repair if you use QUORUM consistency and yo
Thanks Mark,
Since we have replicas in each data center, addition of a new data center
(and new replicas) has a performance implication on nodetool repair.
I do understand that adding nodes without increasing the number of replicas may
improve repair performance, but in this case we are adding new data
1. You don't have to repair if you use QUORUM consistency and you don't
delete data.
2. Performance depends on the size of the data each node has. It's very difficult to
predict. It may take days.
Thank you,
Andrey
On Tue, Aug 12, 2014 at 2:06 PM, Viswanathan Ramachandran <
vish.ramachand...@gmail.com>
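[As an aside, a minimal cqlsh sketch of what point 1 describes; the keyspace, table, and values here are invented for illustration:

CONSISTENCY QUORUM
INSERT INTO my_ks.my_table (id, val) VALUES ('a', 1);
SELECT val FROM my_ks.my_table WHERE id = 'a';
-- A QUORUM read always overlaps the replicas that acknowledged a QUORUM
-- write, so the read sees the latest value without ever running repair,
-- provided nothing is deleted (so no tombstones need to be propagated
-- before gc_grace_seconds expires).]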
Agreed, we need more details; and just start by increasing the heap, because
that may well solve the problem.
I have just observed (which makes sense when you think about it), while testing
a fix for https://issues.apache.org/jira/browse/CASSANDRA-7546, that if you are
replaying a commit log which has a h
Hi Vish,
> 1. This tool repairs inconsistencies across replicas of the row. Since the
> latest update always wins, I don't see inconsistencies other than ones
> resulting from the combination of deletes, tombstones, and crashed nodes.
> Technically, if data is never deleted from cassandra, then nodetool
Some questions on nodetool repair.
1. This tool repairs inconsistencies across replicas of the row. Since the
latest update always wins, I don't see inconsistencies other than ones
resulting from the combination of deletes, tombstones, and crashed nodes.
Technically, if data is never deleted from cassa
Your question is a little too tangled for me... Are you asking about rows in a
partition (some people call that a “storage row”) or columns per row? The
latter is simply the number of columns that you have declared in your table.
The total number of columns – or more properly, “cells” – in a par
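[An illustrative sketch of the distinction; none of these names come from the thread:

CREATE TABLE events (
    row_key text,   -- partition key, i.e. the "storage row"
    ts bigint,      -- first component of the composite column name
    userid text,    -- second component
    payload text,
    PRIMARY KEY (row_key, ts, userid)
);
-- Each distinct (ts, userid) pair within one row_key is one CQL row.
-- Cells per partition is roughly rows-per-partition times the number of
-- non-key columns, and the oft-quoted ~2 billion limit applies to cells
-- per partition, not to the columns declared in the table.]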
Hi Robert,
Thanks for your reply. The Cassandra version is 2.0.7. Is there a commonly
used rule for determining the commit log and memtable sizes depending on the
heap size? What would be the main disadvantage of having a smaller commit log?
On Tuesday, August 12, 2014 8:32 PM, Robert Coli wr
On Mon, Aug 11, 2014 at 4:17 PM, Ian Rose wrote:
>
> "You better off create a manuel reverse-index to track modification date,
> something like this" --> I had considered an approach like this but my
> concern is that for any given minute *all* of the updates will be handled
> by a single node,
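[For context, a sketch of the kind of manual reverse index being discussed; all names are invented. The concern quoted above is that every update in a given minute lands in the same partition and therefore on the same replica set:

CREATE TABLE modified_by_minute (
    minute_bucket timestamp,   -- modification time truncated to the minute
    item_id text,              -- id of the item that was updated
    PRIMARY KEY (minute_bucket, item_id)
);
-- All items modified during a given minute share one partition, so that
-- minute's writes all go to a single replica set, which is the hot spot
-- Ian is worried about.]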
On Tue, Aug 12, 2014 at 4:33 AM, tsi wrote:
> In the DataStax documentation there is a description of how to replace a dead
> node
> (
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_replace_node_t.html
> ).
> Is the replace_address option required even if the IP addr
On Tue, Aug 12, 2014 at 9:34 AM, jivko donev wrote:
> We have a node with a commit log directory of ~4G. During start-up of the
> node, while the commit log is replayed, the used heap space grows constantly,
> ending with an OOM error.
>
> The heap size and new heap size properties are 1G and 256M. We are usin
Hi everyone,
I'm confused about the number of columns in a row in Cassandra; as far as I
know, the limit is 2 billion columns per row. If I have a composite column
name in each row, for example (timestamp, userid), is the number of columns per
row the number of distinct 'timestamp' values or each distinct '
Hi all,
We have a node with a commit log directory of ~4G. During start-up of the node,
while the commit log is replayed, the used heap space grows constantly, ending
with an OOM error.
The heap size and new heap size properties are 1G and 256M. We are using the
default settings for commitlog_sync, commit
Hi Or,
For now I removed the test that was failing like this from our suite
and made a note to revisit it in a couple of weeks. Unfortunately I
still don't know what the issue is. I'll post here if I figure it out
(please do the same!). My working hypothesis now is that we had some
kind of OOM
Makes sense - thanks again!
On Tue, Aug 12, 2014 at 9:45 AM, DuyHai Doan wrote:
> Hello Ian
>
> "So that way each index entry *will* have quite a few entries and the
> index as a whole won't grow too big. Is my thinking correct here?" --> In
> this case yes. Do not forget that for each date va
Still having issues with node bootstrapping. The new node just died:
because it full GCed, the nodes it had active streams with noticed it was
down. After the full GC finished, the new node printed this log:
ERROR 02:52:36,259 Stream failed because /10.10.20.35 died or was
restarted/removed (streams
Hello Ian
"So that way each index entry *will* have quite a few entries and the index
as a whole won't grow too big. Is my thinking correct here?" --> In this
case yes. Do not forget that for each date value, there will be 1
corresponding index value + 10 updates. If you have an approximate count
Hi,
Without more information (Cassandra version, setup, topology, schema,
queries performed) this list won't be able to assist you. If you can
provide a more detailed explanation of the steps you took to reach your
current state that would be great.
Mark
On Tue, Aug 12, 2014 at 12:21 PM, Batra
After a lot of investigation, it seems that the clocks were desynchronized
across the cluster (although we did not check that resyncing them resolves the
problem; instead, we modified the schema with one node up and restarted all
other nodes afterwards).
From: Demeyer Jonathan
In the DataStax documentation there is a description of how to replace a dead
node
(http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_replace_node_t.html).
Is the replace_address option required even if the IP address of the new
node is the same as the original one (I read
Hello all,
I have altered a table in Cassandra and on one node it somehow got corrupted;
the changes did not propagate OK. I ran repair keyspace columnfamily... nothing
changed...
Is there a way to repair this?
Hello,
I have a cluster running and I'm trying to change the schema on it. Although it
succeeds on one cluster (a test one), on another it keeps creating two separate
schema versions (both are 2-DC configurations; the cluster where it goes wrong
ends up with a schema version in each DC).
I use ap
Clint, did you find anything?
I just noticed it happens to us too on only one node in our CI cluster.
I don't think there is any special usage before it happens... The last line
in the log before the shutdown lines is at least an hour before...
We're using C* 2.0.9.
On Thu, Aug 7, 2014 at 12:49 AM,