op two)
org.apache.cassandra.db.ColumnFamilyStore.isKeyInRemainingSSTables()
org.apache.cassandra.utils.BloomFilter.getHashBuckets()
org.apache.cassandra.io.sstable.SSTableIdentityIterator.echoData()
netstats does not show anything streaming to/from any of the nodes.
-Adi Pandit
not recommend running a live system with that much data per node.
>
>
Thanks for the advice, and this can be a separate discussion, but that would
make a Cassandra cluster way too costly; we would have to buy 16 systems
for the same amount of data as opposed to the 4 we have now, and my IT
director will strangle me.
-Adi
There is a typo in >> memtable_troughput - it should be memtable_throughput.
Instead of
"update column family columnfamily2 memtable_troughput=155;"
try
"update column family columnfamily2 memtable_throughput=155;"
On Wed, Jul 27, 2011 at 9:59 AM, lebron james wrote:
> Hi!
> Need set memtable_troughput for cassandra
> I try do this with help cassandra-cl
evented swapping for now.
"auto" will try to use mmap for all disk access ,
"mmap" will use mmap
"standard" will not use mmap
Search for swapping on the users list and go through the email discussions
and jira issues related to swapping and that will give you an idea what can
work for you.
-Adi
setting it to 0.0.
Running nodetool repair will reduce the chance of inconsistent data; it does
not mean that read repair will not get triggered.
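Assuming the setting in question is the per-column-family read_repair_chance, a sketch of
how it could be set from cassandra-cli (the column family name is illustrative):

    update column family MyCF with read_repair_chance=0.0;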
-Adi
You are reading with ONE and the
On Sat, Jul 30, 2011 at 5:04 AM, Philippe wrote:
> Hello,
> I have a 3-node ring at RF=3 that is doing
?
-Adi
>
> The seedlist of A is localhost.
Seedlist of B is localhost, A_ipaddr and
seedlist of C is localhost,B_ipaddr,A_ipaddr.
>
Using localhost (or a node's own IP address for non-seed nodes) as a seed is not a good
practice.
Try:
Seedlist of A: A_ipaddr
Seedlist of B: A_ipaddr
Seedlist of C: A_ipaddr
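Depending on the version, the seed list is either a flat seeds: list or a seed_provider block
in cassandra.yaml; a sketch of the latter, with A_ipaddr as a placeholder, used on every node
whether or not it is itself a seed:

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "A_ipaddr"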
ordered more memory and are running it with a 24 GB heap, and the cluster has
been stable without complaints.
Other things you can do for reducing memory usage if they are
appropriate for your read/write profile:
a) reduce memtable throughput (most reduction in memory footprint)
b) disable row caching
c) reduce/disable key caching (least reduction)
Ultimately you will have to tune based on your
1) row sizes
2) read/write load
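For (b) and (c), a sketch of the per-column-family cassandra-cli commands; the column family
name and the numbers are illustrative:

    update column family MyCF with rows_cached=0;
    update column family MyCF with keys_cached=10000;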
-Adi
saw an OOM on one node after 2 weeks. The heap used was close to the
GC threshold and a full GC took around 80 seconds.
-Adi
2011/8/24 Ernst D Schoen-René :
> So, we're on 8, so I don't think there's a key cache setting. Am I wrong?
>
> here's my newest crash log:
&
ure it is not a client issue - Is your client hitting all nodes in
round-robin or some other fashion?
-Adi
On Wed, Sep 7, 2011 at 1:09 PM, Hefeng Yuan wrote:
> Adi,
>
> The reason we're attempting to add more nodes is trying to solve the
> long/simultaneous compactions, i.e. the performance issue, not the storage
> issue yet.
> We have RF 5 and CL QUORUM for read and write, we
y 10 or 20 and see if your CF
flushes at a higher size. Keep adjusting it until the frequency/size of
flushing is satisfactory and, hopefully, the compaction overhead goes down.
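As an illustration only (the column family name and value are made up), the adjustment in
cassandra-cli would look something like:

    update column family MyCF with memtable_throughput=128;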
-Adi
> On Sep 7, 2011, at 10:51 AM, Adi wrote:
>
>
> On Wed, Sep 7, 2011 at 1:09 PM, Hefeng Yuan w
/hadoop users and sharing my
experience.
Do get in touch with me if any of you would like to host a meetup/user
group meeting.
-Adi
On Mon, Mar 21, 2011 at 9:02 AM, Geek Talks wrote:
> Hi,
>
> Anyone interested joining in Apache Cassandra hangout/meetup nearby
> mumbai-pune area
/browse/THRIFT-591
-Adi
On Wed, Dec 1, 2010 at 1:01 PM, Chris Trimble wrote:
> Are there any that compile on Windows without the need for linking in
> cygwin?
>
> C
>
>
> On Tue, Nov 30, 2010 at 10:16 PM, sharanabasava raddi wrote:
>
>> Thrift is there..
d, W = consistency
level of write
On a single machine with one node (N = 1), a consistency level of ONE for both R and W
makes reads consistent, since again R + W = 1 + 1 > 1 = N.
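A worked example, taking 3 nodes with RF = 3:

    N = 3 (replicas)
    QUORUM: R = W = floor(N/2) + 1 = 2, so R + W = 4 > 3  -> reads overlap the latest write
    ONE:    R = W = 1,                so R + W = 2, not > 3 -> stale reads are possible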
Cassandra: The Definitive Guide is a good resource for most questions,
besides the wiki and mailing list.
http://oreilly.com/catalog/0636920010852
-Adi
at
was inserted.
Any suggestions on what I might be doing incorrectly, either in the schema
definition or in the way I am sending the values, are welcome.
-Adi
That was it. Thanks thobbs :-) The queries work as expected now.
-Adi
On Thu, Mar 10, 2011 at 1:01 PM, Tyler Hobbs wrote:
> I looked again at the original
> email<http://mail-archives.apache.org/mod_mbox//cassandra-user/201101.mbox/raw/%3CAANLkTik4Z_6OfvT4ByQ8_kpX_=thxyl3
I have been going through the mailing list and compiling suggestions to
address the swapping due to mmap issue.
1) Use JNA (done but)
Are these steps also required:
- Start Cassandra with CAP_IPC_LOCK (or as "root"). (not done)
- grep Unevictable /proc/meminfo
- set /proc/sys/vm/swappiness to 0
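Spelled out, the commands in question; the setcap line and the JVM path are assumptions about
one way to grant CAP_IPC_LOCK without running Cassandra as root:

    # verify that JNA's mlockall pinned memory (locked pages show up as Unevictable)
    grep Unevictable /proc/meminfo
    # grant the JVM the capability instead of running as root (path is a placeholder)
    sudo setcap cap_ipc_lock+ep /usr/lib/jvm/java-6-sun/bin/java
    # discourage the kernel from swapping
    sudo sysctl -w vm.swappiness=0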
On Tue, Mar 22, 2011 at 3:44 PM, ruslan usifov wrote:
>
>
> 2011/3/22 Adi
>
>> I have been going through the mailing list and compiling suggestions to
>> address the swapping due to mmap issue.
>>
>> 1) Use JNA (done but)
>> Are these steps also requir
tens, hundreds, thousands, millions?
I am not looking for any tested numbers; a general suggestion/best-practice
recommendation will suffice.
Thanks.
-Adi
amazon paper has some good tips on solving the transactional
gotcha :-)
-Adi
On Fri, Apr 8, 2011 at 3:49 PM, Ed Anuff wrote:
> If you're just indexing on a single column value and the values have
> low cardinality in, say, the 10's - I'd have a wide row for each
> cardi
ction of Operations wiki page. That actually led to a more unbalanced load
distribution (which the doc warned can happen if the key distribution is not
even).
Any suggestions/pointers are welcome. Thanks.
-Adi
the files a node should be having (say the ones
that show up in the stream command) and just scp them to the new node.
Thank you for your time.
-Adi
ootstrap process or some other
> recommended way of replacing a dead node.
> 2) Is there a way to find the files a node should be having (say the ones
> that show up in the stream command) and just scp them to the new node.
>
> Thank you for your time.
>
> -Adi
>
FYI
>>1) "So if your node tokens are set as "vertexid_" all keys with the same
prefix will be in the same range."
Adding to Aaron's comment -
This will be the case if you use OrderPreservingPartitioner.
RandomPartitioner (the default) will distribute keys randomly across the
nodes.
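For reference, the partitioner is a cluster-wide setting in cassandra.yaml and should not be
changed once data has been loaded; a sketch of the two choices:

    # default: keys are hashed, so load spreads evenly but key order is lost
    partitioner: org.apache.cassandra.dht.RandomPartitioner
    # keeps keys sorted, enabling prefix/range scans, at the cost of manual balancing
    # partitioner: org.apache.cassandra.dht.OrderPreservingPartitioner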
On Mon, Nov 15,