I am using C*1.1.6.
"Did you restart the node after changing the row_cache_size_in_mb ?"
No, I didn't. I used nodetool setcachecapacity and didn't restart the
node.
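For reference, the runtime change can be made like this (a sketch against a live node; host and sizes are example values — in 1.1, setcachecapacity takes the key cache and row cache capacities in MB, and the change is not persisted, so cassandra.yaml must be updated as well to survive a restart):

```shell
# Set key cache to 100 MB and row cache to 512 MB on a running node
# (example values; not persisted across restarts)
nodetool -h localhost setcachecapacity 100 512

# Check the resulting capacities and hit rates
nodetool -h localhost info
```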
"The changes in GC activity are not huge and may not be due to cache
activity"
I find them huge, and they just happened on the nod
On Mar 12, 2013, at 6:04 AM, aaron morton wrote:
>> by a multiget will not find the just inserted data.
> Can you explain how the data is not found.
> Does it not find new columns or does it return stale columns ?
It does not find new columns; I don't overwrite data.
> If the read is run agai
I'm already using Cassandra 1.2.2, with only one line to test Cassandra
access:
rows = LOAD 'cassandra://twissandra/users' USING
org.apache.cassandra.hadoop.pig.CassandraStorage();
extracted from the sample script provided in the sources
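For anyone trying to reproduce this, a minimal sketch of the setup (host, port, and partitioner are assumptions — CassandraStorage reads its connection settings from these environment variables, and the partitioner must match the cluster's):

```shell
# Environment expected by CassandraStorage (example values)
export PIG_INITIAL_ADDRESS=localhost
export PIG_RPC_PORT=9160
export PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner

pig -x local <<'EOF'
rows = LOAD 'cassandra://twissandra/users' USING
       org.apache.cassandra.hadoop.pig.CassandraStorage();
dump rows;
EOF
```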
--
Cyril SCETBON
On Mar 12, 2013, at 6:57 AM, aaron
We tested it in QA, but in production it brought our cluster to a halt. Even
though we set compaction throughput to 1 with nodetool
setcompactionthroughput, we were severely limited. nodetool stop COMPACTION
did not seem to have any impact either. We ended up increasing memory on
one node to help alleviate the issue (cranked it up to
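For the archives, these are the knobs in question (command sketches run against a live node; the throughput value is in MB/s per node):

```shell
# Throttle compaction to 1 MB/s on this node
nodetool setcompactionthroughput 1
# (0 disables throttling entirely)

# Ask the node to abort currently running compactions
nodetool stop COMPACTION
```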
Here is our cluster which has 10 billion rows on 6 nodes and about 1.2TB
[root@sdi-ci ~]# clush -g datanodes du -sh /opt/datastore/commitlog
a5: 1.1G /opt/datastore/commitlog
a3: 1.1G /opt/datastore/commitlog
a1: 1.1G /opt/datastore/commitlog
a2: 1006M /opt/datastore/commitlog
a4: 1.1G /opt/datasto
Yes, LCS has its own compaction behavior. It does not honor
min_compaction_threshold or max_compaction_threshold, and it turns major
compaction into a no-op. The issue is that the moment you make the change,
your system moves all your size-tiered SSTables to L0 and then starts a
huge grid of compactions to level them.
It would be great to just make this change
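For concreteness, the switch itself is a one-liner in cassandra-cli (a sketch; the column family name and the sstable_size_in_mb value are assumptions — 1.1's default of 5 MB is often considered too small). The moment it runs, existing SSTables drop to L0 and the leveling compactions described above begin:

```
update column family data
  with compaction_strategy = 'LeveledCompactionStrategy'
  and compaction_strategy_options = {sstable_size_in_mb: 160};
```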
Thanks Dean. I will try the node drain next. However, do you know if this
is a known issue/bug with 1.1? I scanned through some 200-odd JIRA entries
that have "commit log" in the text for some clues, but no luck.
Amit
On Tue, Mar 12, 2013 at 12:17 PM, Hiller, Dean wrote:
> Here is our cluster whi
Can someone refer me to a C* tutorial on how to define a dynamic schema
and populate data?
I am trying to map an inheritance hierarchy of objects into C*.
I want to handle all Base/Derived class objects as a dynamic schema, each
with its own set of attributes...
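One common pattern for this in C* 1.x is a column family with no predefined columns, where each row stores whatever attributes its class happens to have (a cassandra-cli sketch; the column family, keys, and column names here are all hypothetical):

```
create column family objects
  with comparator = UTF8Type
  and key_validation_class = UTF8Type
  and default_validation_class = UTF8Type;

set objects['car:1']['class'] = 'Car';
set objects['car:1']['wheels'] = '4';
set objects['boat:1']['class'] = 'Boat';
set objects['boat:1']['displacement'] = '12t';
```

Each row key carries one object; the 'class' column records which derived type it is, and every other column is one of that type's attributes.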
Thanks
Raman
Thanks for your reply. We will try both of your recommendations. The OS
memory is 8G, and the JVM heap is 2G; DeletedColumn objects used 1.4G,
rooted from ReadStage threads. Do you think we need to increase the size
of the JVM heap?
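If you do raise the heap, the change goes in conf/cassandra-env.sh (a config fragment; the values below are illustrative for an 8G box — roughly half of RAM is a common ceiling, leaving the rest for the OS page cache):

```shell
# conf/cassandra-env.sh
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="400M"
```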
Configuration for the index columnFamily is
create column family purge