Total 100G data per node.
On Fri, Jun 28, 2013 at 2:14 PM, sulong wrote:
> aaron, thanks for your reply. Yes, I do use the Leveled Compaction
> strategy, and the SSTable size is 10M. If it happens again, I will try to
> enlarge the SSTable size.
>
> I just wonder why Cassandra doesn't limit the SSTableReader's total memory
> usage when compacting. Lots of memory is consumed by t
aaron, thanks for your reply. Yes, I do use the Leveled Compaction
strategy, and the SSTable size is 10M. If it happens again, I will try to
enlarge the SSTable size.
I just wonder why Cassandra doesn't limit the SSTableReader's total memory
usage when compacting. Lots of memory is consumed by t
The thing I was doing was definitely triggering the range tombstone issue.
This is what I was doing:
UPDATE clocks SET clock = ? WHERE shard = ?
in this table:
CREATE TABLE clocks (shard INT PRIMARY KEY, clock MAP)
However, from the Stack Overflow posts it sounds like they aren't
necess
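For context: in CQL3, assigning a whole collection, as in the UPDATE above,
is implemented as a range tombstone that deletes the previous map, followed
by an insert of the new entries. A sketch of the alternative, assuming int
keys and bigint values (the map's type parameters are missing from the
schema above): setting individual entries writes only those cells and
generates no tombstone.

-- update one map entry instead of replacing the whole map
-- (key and value here are illustrative)
UPDATE clocks SET clock[1] = 42 WHERE shard = 0;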
> create table test_table ( k1 bigint, k2 bigint, created timestamp, PRIMARY
> KEY (k1, k2) ) with compaction = { 'class' : 'LeveledCompactionStrategy' };
When CQL creates a non-COMPACT STORAGE
(http://www.datastax.com/docs/1.2/cql_cli/cql/CREATE_TABLE#using-compact-storage)
table it uses composite column names internally
> http://svn.apache.org/repos/asf/cassandra/tags/cassandra-0.7.0-beta2/src/java/org/apache/cassandra/db/ColumnFamily.java
What was the name of the function you were looking at?
> It seems that Cassandra supported application-specific reconcilers before
Can you provide some more information
Are you running the Leveled Compaction strategy?
If so, what is the max SSTable size, and what is the total data per node?
If you are, try using a larger SSTable size, like 32MB.
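For example, a sketch in CQL3 (keyspace and table names are placeholders);
existing SSTables are only rewritten to the new size as compaction gets to
them:

ALTER TABLE my_keyspace.my_table
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'sstable_size_in_mb' : 32 };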
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
Are you moving the node to a new machine, or re-installing on the same machine?
If it's the former then:
* shut it down cleanly
* copy all the data and config
* update the yaml with the new IP for listen_address and rpc_address, and
update the seed list (see the sketch below)
* restart the node
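A minimal sketch of the cassandra.yaml fields involved (the IPs here are
hypothetical):

listen_address: 10.0.1.2
rpc_address: 10.0.1.2
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.1.2,10.0.1.3"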
The error you got says that the schema wa
Can you provide details of the mutation statements you are running ? The Stack
Overflow posts don't seem to include them.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 27/06/2013, at 5:58 AM, Theo Hultberg wrote:
> I do not see the out of heap errors but I am taking a bit of a performance
> hit.
Take a look at nodetool cfhistograms to see how many SSTables are being touched
per read and the local read latency.
In general, if a read touches more than 4 SSTables, it's not great.
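For example (keyspace and column family names are placeholders):

nodetool cfhistograms my_keyspace my_cf

The SSTables column shows how many SSTables recent reads touched; in 1.2 the
read latency column is reported in microseconds.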
> BloomFilterFalseRatio is 0.8367
> If our application tries to read 80,000 columns each from 10 or more rows at
> the same time, some of the nodes run out of heap space and terminate with OOM
> error.
Is this in one read request?
Reading 80K columns at once is too many; try reading a few hundred at most.
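One way to do that, sketched against the test_table schema that appears
elsewhere in this digest (partition key, page size, and resume point are
illustrative), is to page on the clustering key:

-- read at most 500 columns from partition k1 = 42, resuming
-- after the last k2 value seen on the previous page
SELECT k2, created FROM test_table WHERE k1 = 42 AND k2 > 100 LIMIT 500;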
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
> As far as I know, in 1.2 the coordinator logs the request before it updates
> replicas.
You may be thinking of atomic batches, which are enabled by default for 1.2
via CQL but must be explicitly supported by Thrift clients. I would guess
Hector is not using them.
These logs are stored on other machines, which th
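For reference, a minimal sketch of the 1.2 CQL3 atomic (logged) batch syntax,
reusing the clocks table from earlier in this digest (values are
illustrative); BEGIN UNLOGGED BATCH opts out of the batchlog:

BEGIN BATCH
  UPDATE clocks SET clock[1] = 42 WHERE shard = 0;
  UPDATE clocks SET clock[1] = 42 WHERE shard = 1;
APPLY BATCH;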
Hello Everybody,
We were performing an upgrade of our cluster from version 1.1.10 to 1.2.4. We
tested the upgrade process in a QA environment and found no issues. However,
in the production environment we faced loads of errors and had to abort the
upgrade process.
I was wondering how we ran into suc
Well I guess I'm glad to know I didn't just miss something obvious. I
voted for the issue on Jira; I don't know if that's used to prioritize
features or not, but it can't hurt, right? =)
On Wed, Jun 26, 2013 at 3:03 PM, Robert Coli wrote:
> On Fri, Jun 21, 2013 at 5:25 AM, Eric Stevens wrote:
A non-zero pending-task count is too transient a signal. Try monitoring
tpstats at a (much) higher frequency and look for a sustained threshold
over a duration.
Then, using a percentage of the configuration value as the max - 75%
of memtable_flush_queue_size in this case - alert when it has been
higher than
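For example, assuming the 1.2 default memtable_flush_queue_size of 4: 75% of
that rounds to 3, so the alert would fire only when FlushWriter pending tasks
sit at 3 or more across several consecutive samples, not on a single non-zero
snapshot.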
Try doing request tracing.
http://www.datastax.com/dev/blog/tracing-in-cassandra-1-2
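A quick sketch of using it from cqlsh (the query itself is a placeholder);
the trace lists each step with its elapsed time and source node:

TRACING ON;
SELECT * FROM test_keyspace.test_table WHERE k1 = 1;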
On Thu, Jun 27, 2013 at 2:40 PM, Bao Le wrote:
> Hi,
>
> We are using Leveled Compaction with Cassandra 1.2.5. Our sstable size
> is 100M. On each node,
> we have anywhere from 700+ to 800+ sstables (for all levels).
Briefly, counters read before write, unlike other writes in Cassandra.
http://www.datastax.com/dev/blog/whats-new-in-cassandra-0-8-part-2-counters
"
Counters have been designed to allow for very fast writes. However, increment
does involve a read on one of the replica as part of replication. As a
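To make that concrete, a minimal counter sketch (table and column names are
hypothetical). The client sends only a delta; during replication a replica
must read its current value of the counter before writing:

CREATE TABLE page_hits ( page bigint PRIMARY KEY, hits counter );
UPDATE page_hits SET hits = hits + 1 WHERE page = 42;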
Hi,
We are using Leveled Compaction with Cassandra 1.2.5. Our sstable size is
100M. On each node, we have anywhere from 700+ to 800+ sstables (for all
levels). The
bloom_filter_fp_chance is set at 0.000744.
For read requests that ask for existing row records, the latency is great,
mostly
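If the slow path turns out to be reads for rows that do not exist, one hedged
thing to experiment with is the bloom filter false-positive chance: a value as
low as 0.000744 buys accuracy at the cost of heap. A sketch of loosening it
(table name is a placeholder):

ALTER TABLE my_keyspace.my_table WITH bloom_filter_fp_chance = 0.01;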
On Wed, Jun 26, 2013 at 9:51 PM, aaron morton wrote:
> WARNING: disabling durable_writes means that writes are only in memory and
> will not be committed to disk until the CF's are flushed. You should
> *always* use nodetool drain before shutting down a node in this case.
While a rational and inf
No: just two 15,000 RPM SCSI disks per machine. Each disk can handle more
than 100MB/sec of streaming data (tested). iostat reports service times of
2 or 3 milliseconds.
Ubuntu 12.04 LTS, 48 GB memory, 24 CPUs (Xeon X5670).
Cassandra is started with an 8GB heap.
Are you on SSDs?
On 27 Jun 2013, at 14:24, "Desimpel, Ignace" wrote:
> On a test with 3 Cassandra servers, version 1.2.5, with replication factor 1
> and leveled compaction, I did a store last night and I did not see any
> problem with Cassandra. On all 3 machines, compaction has already been
> stopped for several hours.
On a test with 3 Cassandra servers, version 1.2.5, with replication factor 1
and leveled compaction, I did a store last night and I did not see any problem
with Cassandra. On all 3 machines, compaction has already been stopped for
several hours.
However, one machine reports 650 pending compaction tasks (
Thanks for the info. There are other reasons, and the size you mention is
small compared to other data I have worked with. The speed and size of the
data and the cost of licenses have to be taken into consideration, which I
am looking at. Dynamic columns are also of interest to me.
I am just really s
Hi,
I am using Cassandra 1.2.5. I built a cluster of 2 data centers with 3 nodes in
each data center.
I created a keyspace and a table with a composite key:
create keyspace test_keyspace WITH replication = {'class':
'NetworkTopologyStrategy', 'DC1' : 1, 'DC2' : 1};
create table test_table ( k1 bigint, k2 bigint, created timestamp, PRIMARY
KEY (k1, k2) ) with compaction = { 'class' : 'LeveledCompactionStrategy' };
I am a bit confused. It seems that Cassandra supported application-specific
reconcilers before. Please check the link below.
http://svn.apache.org/repos/asf/cassandra/tags/cassandra-0.7.0-beta2/src/java/org/apache/cassandra/db/ColumnFamily.java
However, I could not find these changes in 1.2
In our performance tests, we are seeing similar FlushWriter, MutationStage,
MemtablePostFlusher pending tasks become non-zero. We collect snapshots every 5
minutes, and they seem to clear after ~10-15 minutes though. (The flush writer
has an 'All time blocked' count of 540 in the below example).
Hi All,
I want to replace Cassandra's default reconciler with my application-specific
reconciler.
Can someone tell me how I can do this in Cassandra 1.2 ? Currently I am using
CQL to interact with Cassandra.
Thank you
Emalayan