I *think* people lean towards more JVM than file cache. Often people email about the JVM running Out Of Memory, so give it more and see how much it's using in your case. Your nodes will have a minimum requirement for memory based on the Memtable Thresholds, cache settings and the usage patterns. It
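For illustration only, here is a back-of-the-envelope sketch of that idea in Python. The factor of 3 for in-flight memtables and the 1GB base overhead are assumptions, not an official formula; plug in your own memtable threshold and cache settings.

```python
# Rough, unofficial heap estimate for a Cassandra 0.6 node: the minimum heap
# grows with memtable thresholds, cache sizes and usage, as described above.

def rough_min_heap_mb(memtable_threshold_mb,   # memtable threshold in MB
                      hot_column_families,     # CFs that are actively written
                      key_cache_mb,            # estimated in-heap key cache size
                      row_cache_mb,            # estimated in-heap row cache size
                      base_overhead_mb=1024):  # JVM + internals (assumed)
    # Memtables can exist in several copies while flushing/compacting,
    # so pad the configured threshold (the factor of 3 is an assumption).
    memtables_mb = memtable_threshold_mb * hot_column_families * 3
    return memtables_mb + key_cache_mb + row_cache_mb + base_overhead_mb

# Example: four hot CFs with 128MB memtables, 200MB key cache, 1GB row cache.
print(rough_min_heap_mb(128, 4, 200, 1024), "MB minimum heap (rough)")
```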
Thanks for the tip.
Aaron
On 03 Aug, 2010, at 11:51 AM, Benjamin Black wrote:
On Mon, Aug 2, 2010 at 2:24 PM, Aaron Morton wrote:
>
> 3.5) Yes load balance restores things, I suggest you run it on one node at a
> time. Start with the node with the lowest load. Watch the progress by
> watching the streams via JMX or nodetool.
1.) 16 to 24GB out of how much total system memory? Is this 50% of
available system RAM or 90%?
Thanks for the reply!
-Aaron
On Mon, Aug 2, 2010 at 2:24 PM, Aaron Morton wrote:
> Will answer as best I can, others will know more.
>
> 1) Most people seem to lean towards more memory for the JVM,
Correct, it is on its own branch.
On Mon, Aug 2, 2010 at 9:08 AM, Sal Fuentes wrote:
> I'm guessing it's the cassandra branch from that repo. You can find it here:
> http://github.com/b/cookbooks/tree/cassandra
>
> On Mon, Aug 2, 2010 at 1:49 AM, Boris Shulman wrote:
>>
>> I can't find this cookbook anymore at the specified URL. Where can I find it?
Yes, python or ruby from the command line. The CLI is not useful.
On Mon, Aug 2, 2010 at 10:44 AM, Mark wrote:
> Is there any way to limit the number of results returned from the CLI?
>
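For anyone taking the python route, here is a minimal sketch against the Thrift-generated 0.6 bindings; it caps the number of columns returned with SliceRange.count. The keyspace, column family and key names are made up, and the exact call signature may differ slightly between 0.6.x releases.

```python
# Fetch at most 10 columns from one row via the Thrift API (Cassandra 0.6).
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from cassandra import Cassandra
from cassandra.ttypes import (ColumnParent, SlicePredicate, SliceRange,
                              ConsistencyLevel)

socket = TSocket.TSocket('localhost', 9160)
transport = TTransport.TBufferedTransport(socket)
transport.open()
client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))

# SliceRange.count is what limits the number of columns returned.
predicate = SlicePredicate(slice_range=SliceRange(start='', finish='',
                                                  reversed=False, count=10))
columns = client.get_slice('Keyspace1', 'somekey',
                           ColumnParent(column_family='Standard1'),
                           predicate, ConsistencyLevel.ONE)
print(len(columns), 'columns returned')
transport.close()
```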
On Mon, Aug 2, 2010 at 2:24 PM, Aaron Morton wrote:
>
> 3.5) Yes load balance restores things, I suggest you run it on one node at a
> time. Start with the node with the lowest load. Watch the progress by
> watching the streams via JMX or nodetool.
>
I recommend you _never_ use nodetool loadbalance
sent last msg before reading this. it confirms what i said, i/o is
your problem:
On Mon, Aug 2, 2010 at 4:05 PM, Artie Copeland wrote:
> sdb   335.50   0.00   70.50   0.00   3248.00   0.00   46.07   0.78   11.01   7.40   52.20
> sdc   330.00   0.00   70.00   0.50   3180.00
you have insufficient i/o bandwidth and are seeing reads suffer due to
competition from memtable flushes and compaction. adding additional
nodes will help some, but i recommend increasing the disk i/o
bandwidth, regardless.
b
On Mon, Aug 2, 2010 at 11:47 AM, Artie Copeland wrote:
> i have a question on what are the signs from cassandra that new nodes should be added to the cluster.
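For reference, the numbers quoted above line up with `iostat -x` columns (rrqm/s, wrqm/s, r/s, w/s, rsec/s, wsec/s, avgrq-sz, avgqu-sz, await, svctm, %util), which puts sdb at roughly 11ms await and 52% utilisation. A small sketch that reads rows in that shape and flags busy devices; the thresholds are rules of thumb, not hard limits.

```python
# Parse iostat -x style rows and flag devices that look I/O bound.
COLUMNS = ["device", "rrqm/s", "wrqm/s", "r/s", "w/s", "rsec/s", "wsec/s",
           "avgrq-sz", "avgqu-sz", "await", "svctm", "%util"]

SAMPLE = "sdb 335.50 0.00 70.50 0.00 3248.00 0.00 46.07 0.78 11.01 7.40 52.20"

for line in SAMPLE.splitlines():
    fields = line.split()
    row = dict(zip(COLUMNS, [fields[0]] + [float(v) for v in fields[1:]]))
    busy = row["%util"] > 50 or row["await"] > 10   # rough thresholds
    print("%s: await=%.2fms util=%.1f%%%s" %
          (row["device"], row["await"], row["%util"],
           " -> likely saturated" if busy else ""))
```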
On Mon, Aug 2, 2010 at 2:39 PM, Aaron Morton wrote:
> You may need to provide some more information on how many reads you're
> sending to the cluster. Also...
>
> How many nodes do you have in the cluster ?
>
We have a cluster of 4 nodes.
> When you are seeing high response times on one node, what's the load like on the others ?
You may need to provide some more information on how many reads you're sending to the cluster. Also...
How many nodes do you have in the cluster ?
When you are seeing high response times on one node, what's the load like on the others ?
Is the data load evenly distributed around the cluster ? Are your c
Will answer as best I can, others will know more.
1) Most people seem to lean towards more memory for the JVM, around 16 to 24GB. Memory is also used by the MemTables and I assume during the compaction processes.
2) Cannot say for sure, but I assume so. Think I've seen the cache with data in it whe
The error I'm seeing seems to be random: if I try to get the data
again I usually get the correct data. Although maybe compaction
happened between when the error occurred and when I checked again and
the bad key was fixed? I'll try upgrading to 0.6.4 anyway and see if
it helps. Thanks for the help.
Sounds a lot like it's running out of memory. Check your logs for
GCInspector lines.
Easiest solution: increase the argument to Xmx in cassandra.in.sh.
On Mon, Aug 2, 2010 at 8:05 AM, Jean-Yves LEBLEU wrote:
> Hi all,
>
> We have a cassandra installation with two nodes in a ring, replication
>
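A minimal sketch of the log check being suggested: scan the Cassandra system.log for GCInspector lines. The default log path below is an assumption; point it at your own log.

```python
# Print GCInspector entries from a Cassandra system.log. Long or frequent
# collections usually mean the heap (-Xmx in cassandra.in.sh) is too small
# for the configured memtables and caches.
import sys

log_path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/cassandra/system.log"

with open(log_path) as log:
    for line in log:
        if "GCInspector" in line:
            print(line.rstrip())
```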
No.
On Mon, Aug 2, 2010 at 12:44 PM, Mark wrote:
> Is there any way to limit the number of results returned from the CLI?
>
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
Yes, it is deterministic (but compaction could change which precise
keys are affected)
On Mon, Aug 2, 2010 at 1:15 PM, Jianing Hu wrote:
> Does that bug cause *random* data read errors? Looks like it may fail
> in a deterministic way, but I'm not familiar with the code base so
> please correct me if I'm wrong.
Hi All,
I've got a couple questions that have come up about how Cassandra works and
what others are seeing in their environments. Here goes:
1.) What have you found to be the best ratio of Cassandra row cache to
memory free on the system for filesystem cache? Are you tuning it like an
RDBMS so C
Don't forget about the tombstones (delete markers).
They are still present on the other two nodes, so they will
replicate to the 3rd node and finish off your deleted data.
On Mon, Aug 2, 2010 at 9:30 AM, Edward Capriolo wrote:
> On Mon, Aug 2, 2010 at 9:11 AM, john xie wrote:
>> ReplicationFactor = 3
i have a question on what are the signs from cassandra that new nodes should
be added to the cluster. We are currently seeing long read times from the
one node that has about 70GB of data with 60GB in one column family. we are
using a replication factor of 3. I have tracked down the slow to occu
Does that bug cause *random* data read errors? Looks like it may fail
in a deterministic way, but I'm not familiar with the code base so
please correct me if I'm wrong.
On Fri, Jul 30, 2010 at 8:49 PM, Jonathan Ellis wrote:
> This is probably a bug fixed in 0.6.2:
>
> * fix size of row in spanne
Is there any way to limit the number of results returned from the CLI?
I'm guessing it's the cassandra branch from that repo. You can find it here:
http://github.com/b/cookbooks/tree/cassandra
On Mon, Aug 2, 2010 at 1:49 AM, Boris Shulman wrote:
> I can't find this cookbook anymore at the specified URL. Where can I find
> it?
>
> On Tue, Mar 16, 2010 at 6:40 AM, Benjamin Black wrote:
Registration for Surge Scalability Conference 2010 is open for all
attendees! We have an awesome lineup of leaders from across the various
communities that support highly scalable architectures, as well as the
companies that implement them. Here's a small sampling from our list of
speakers:
John
On Mon, Aug 2, 2010 at 9:11 AM, john xie wrote:
> ReplicationFactor = 3
> one day I stopped 192.168.1.147 and removed its cassandra data by mistake. Can I
> recover 192.168.1.147's cassandra data by restarting cassandra?
>
>
>
> /data1/cassandra/
> /data2/cassandra/
> /data3/cassandra/
ReplicationFactor = 3
one day I stopped 192.168.1.147 and removed its cassandra data by mistake. Can I
recover 192.168.1.147's cassandra data by restarting cassandra?
/data1/cassandra/
/data2/cassandra/
/data3/cassandra/
/data3 is the mount point for /dev/sdd.
I removed /data3 and form
Hi all,
We have a cassandra installation with two nodes in a ring, replication
factor = 2. Sometimes cassandra becomes non-responsive; it takes
about three minutes before answering a get.
Do you have any idea what we should check when it happens, or what
could cause the problem?
We are usi
I can't find this cookbook anymore at the specified URL. Where can I find it?
On Tue, Mar 16, 2010 at 6:40 AM, Benjamin Black wrote:
> I've just pushed a rough but useful chef cookbook for Cassandra:
> http://github.com/b/cookbooks/tree/master/cassandra
>
> It is lacking in documentation and assu
> First, Cassandra suddenly dies during compaction. Java core dump says that
> the last thread run was "COMPACTION-POOL:1".
> I suspect that my business logic could cause the total size of columns in a column
> family for a single row to be greater than two gigabytes (but I couldn't confirm
> it yet).
Are you runn
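One way to test that suspicion without waiting for compaction to fail: count the columns in the suspect rows over Thrift. A sketch assuming the 0.6 Python bindings, with made-up keyspace/CF/key names; get_count reports the column count only, not bytes, but a runaway count is usually the first visible symptom of an oversized row, and the exact signature may differ between releases.

```python
# Count the columns in one suspect row via the Thrift API (Cassandra 0.6).
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from cassandra import Cassandra
from cassandra.ttypes import ColumnParent, ConsistencyLevel

socket = TSocket.TSocket('localhost', 9160)
transport = TTransport.TBufferedTransport(socket)
transport.open()
client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))

count = client.get_count('Keyspace1', 'suspect_row_key',
                         ColumnParent(column_family='BigCF'),
                         ConsistencyLevel.ONE)
print('columns in suspect row:', count)
transport.close()
```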