> Ok, problem solved – I’ve removed everything from the logs and data directories,
> run it with the Sun JDK, and for now everything works fine.
Is it possible the commit log directory contained old information from
e.g. previous experiments with different column families? Suppose for
example that you wiped the da
>
> 1.) What have you found to be the best ratio of Cassandra row cache to memory
free on the system for filesystem cache? Are you tuning it like an RDBMS so
Cassandra has the vast majority of the RAM in the system or are you letting the
filesystem cache do some of the work?
This depends on your
Hi All,
I'm planning to use the current 0.6.4 stable for creating an image that
would be the base for nodes in our Cassandra cluster.
However, the 0.6.5 release is on the way. Once 0.6.5 has been released,
is it possible to have some of the nodes stay on 0.6.4 while new nodes
run 0.6.5?
This comment from Ben Black may help...
"I recommend you _never_ use nodetool loadbalance in production because
it will _not_ result in balanced load. The correct process is manual
calculation of tokens (the algorithm for RP is on the Operations wiki
page) and nodetool move.
"
http://www.mail-arc
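For reference, the manual token calculation Ben mentions (evenly spaced tokens across the RandomPartitioner's `[0, 2**127)` range, per the Operations wiki) can be sketched as follows; the function name is illustrative:

```python
# Evenly spaced RandomPartitioner tokens for an N-node ring.
# RP tokens live in [0, 2**127); node i gets i * 2**127 / N.
def initial_tokens(n):
    return [i * (2 ** 127) // n for i in range(n)]

for t in initial_tokens(3):
    print(t)
```

Assign each computed token to a node with `nodetool move <token>` (or as its InitialToken before first start).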
Have you had a chance to try this technique out in Java ?
I've not been able to get back to my original experiments for the last week.
If it works you should be able to put together a non-blocking client that still
uses Thrift.
Aaron
On 30 Jul 2010, at 16:57, Ryan Daum wrote:
> An asynchrono
You can upgrade them one at a time but I wouldn't recommend leaving
the cluster mixed permanently.
On Thu, Aug 5, 2010 at 5:10 AM, Utku Can Topçu wrote:
> Hi All,
>
> I'm planning to use the current 0.6.4 stable for creating an image that
> would be the base for nodes in our Cassandra cluster.
>
Hi,
I'm on 0.6.4. Previous JIRA tickets found while searching the web indicated that
iterating over the keys in a keyspace is possible, even with the random
partitioner. This is desirable in my case mostly for testing purposes.
I get the following error:
[junit] Internal error processing get_
I've managed to find the problem. Not an IBM vs. Sun JRE issue.
It had to do with my RHEL distribution's JRE definitions, which caused
conflicts (from /etc/java/java.conf...). When invoking with the absolute
path to the java executable, instead of simply java, the errors were gone.
So no more errors in munin-nod
Hi all,
first of all, I have read the Cassandra Hardware requirements page on
Cassandra wiki: http://wiki.apache.org/cassandra/CassandraHardware .
I am currently on a simple project that fetches data from a message
broker. That data can be thought of as logging data about a system user's
usage. I ne
> I have a ring of three nodes. When I try to decommission one of the machines
> I get the following exception:
From reading the code I believe this is expected (though I'm not sure
whether to call it a bug?). Decommission ends up calling
MessagingService's shutdown(), which closes the socket and then
On 8/5/10 8:01 AM, Rui Silva wrote:
Hi all,
first of all, I have read the Cassandra Hardware requirements page on
Cassandra wiki: http://wiki.apache.org/cassandra/CassandraHardware .
I am currently on a simple project that fetches data from a message
broker. That data can be thought of as logging
Yes, you should be able to use get_range_slices with RP.
This stack trace looks like you changed your partitioner after the
node already had data in it.
On Thu, Aug 5, 2010 at 10:06 AM, Adam Crain
wrote:
> Hi,
>
> I'm on 0.6.4. Previous tickets in the JIRA in searching the web indicated
> that i
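For anyone wanting to try this, the paging pattern for walking all keys with get_range_slices can be sketched as below. The real call needs a live cluster and Thrift bindings, so `fetch_range` here is a hypothetical stand-in that makes the loop runnable on its own; note that under the RandomPartitioner pages actually come back in token (md5) order, not key order – the sorted stub keys are only for readability.

```python
DATA = ["a", "b", "c", "d", "e"]   # pretend these are row keys in ring order

def fetch_range(start_key, count):
    """Stand-in for a real get_range_slices(start_key, "", count) call."""
    keys = [k for k in DATA if k >= start_key]
    return keys[:count]

def iter_all_keys(page_size=2):
    """Page through all keys; page_size must be >= 2 so each page makes progress."""
    start = ""
    while True:
        page = fetch_range(start, page_size)
        if start:
            page = page[1:]          # first key repeats the previous page's last key
        if not page:
            return
        for key in page:
            yield key
        start = page[-1]             # resume the next page from the last key seen

print(list(iter_all_keys()))         # ['a', 'b', 'c', 'd', 'e']
```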
It sounds like you would be fine doing what you propose.
On Thu, Aug 5, 2010 at 11:01 AM, Rui Silva wrote:
> Hi all,
>
> first of all, I have read the Cassandra Hardware requirements page on
> Cassandra wiki: http://wiki.apache.org/cassandra/CassandraHardware .
>
> I am currently in a simple proj
I've never changed the partitioner from the default random. Other ideas?
I can insert and do column queries using a single key, but not range queries on the CF.
-Adam
-Original Message-
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Thursday, August 05, 2010 11:33 AM
To: user@cassandra.apache
can you reproduce starting with a fresh install, no existing data?
On Thu, Aug 5, 2010 at 12:09 PM, Adam Crain
wrote:
> I've never changed the partitioner from the default random. Other ideas?
>
> I can insert and do column queries using a single key but not range on CF.
>
> -Adam
>
> -Origin
I can. I'm using the Debian distro. I assume that all that is required is
wiping the data/commitlog directories.
If I do that, I still get the same result.
Here's my CF:
I'm using this to store time series measurement data, where the keys are measurement
names and the columns are Long unix epoch t
That's puzzling, because we have a bunch of system tests that do range
scans with randompartitioner. If you can open a ticket with the code
to reproduce, I'll have a look. Thanks!
On Thu, Aug 5, 2010 at 1:24 PM, Adam Crain
wrote:
> I can. I'm using the debian distro. I assume that all that is
> So the manual compaction did help somewhat but did not get the nodes down to
> the
> size of their raw data. There are still multiple SSTables on most nodes.
>
> At 4:02pm, ran nodetool cleanup on every node.
>
> At 4:12pm, nodes are taking up the expected amount of space and all nodes are
> us
Oh and,
> Nodetool cleanup works so beautifully that I am wondering if there is any harm
> in using "nodetool cleanup" in a cron job on a live system that is actively
> processing reads and writes to the database?
since a cleanup/compact is supposed to trigger a full compaction,
that's genera
This URL has simple steps for creating a cluster, and a stress-testing setup as well:
http://www.coreyhulen.org/category/cassandra/
From: SSam
To: user@cassandra.apache.org
Sent: Wed, August 4, 2010 7:02:24 PM
Subject: Re: stress.py
Thanks for the reply,
Issue re
Finally I was able to configure and run this program on my 3-node cluster.
#python stress.py -n 20 -t 200 -d 172.16.7.76,172.16.7.77,172.16.7.78 -o read
total,interval_op_rate,avg_latency,elapsed_time
83664,8366,0.0278478825376,10
145478,6181,0.0295496694395,20
177409,3193,0.027055770029,30
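For what it's worth, the interval_op_rate column above is just the per-interval delta of the cumulative total; a quick sketch reproducing the printed rates:

```python
# Recompute stress.py's interval_op_rate from the cumulative op totals.
totals = [83664, 145478, 177409]     # cumulative ops at t = 10, 20, 30 s
interval = 10                        # seconds between report lines
rates, prev = [], 0
for t in totals:
    rates.append((t - prev) // interval)
    prev = t
print(rates)                         # [8366, 6181, 3193]
```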
All,
Thanks for the Apache Cassandra Project, it is a great project.
This is my first time using it. We installed it on 10 nodes and it runs
great. The 10 nodes span all 5 datacenters around the world.
The big thing that bothers me is the initial ring token. We have some Column
Families. It is very hard to c
On 8/4/10 1:27 AM, David Boxenhorn wrote:
When I change the schema I need to delete the commit logs, otherwise I get
a null pointer exception.
In versions prior to 0.6.4, there is a bug which leads to an infinite
loop when:
a) you stop the node without doing "nodetool drain" first
b) then you
On Thu, Aug 5, 2010 at 14:59, Zhong Li wrote:
> All,
>
> Thanks for the Apache Cassandra Project, it is a great project.
>
> This is my first time using it. We installed it on 10 nodes and it runs great.
> The 10 nodes span all 5 datacenters around the world.
>
> The big thing that bothers me is the initial ring tok
On Thu, Aug 5, 2010 at 12:59 PM, Zhong Li wrote:
>
> The big thing that bothers me is the initial ring token. We have some Column
> Families. It is very hard to choose one token suitable for all CFs. Also some
> Column Families need a higher Consistency Level and some don't. If we set
Consistency Level is set
You are running multiple threads/processes (-t 200), so things are happening in parallel. The latency is how long the entire request took to complete from the client's point of view, so a very busy client with lots of threads will also have an effect on this number. Try adjusting the
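As a rough sanity check of those numbers, Little's law says sustained throughput is approximately concurrency divided by average latency; a sketch using the first reported interval:

```python
# Rough sanity check (Little's law): throughput ~= concurrent requests / latency.
# Numbers taken from the first stress.py interval reported earlier.
threads = 200                      # -t 200
avg_latency = 0.0278478825376      # seconds per request
est_rate = threads / avg_latency
print(round(est_rate))             # ~7182 ops/s, same ballpark as the observed 8366
```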
Would you care to elaborate?
On Thu, Aug 5, 2010 at 8:27 AM, Mark wrote:
>
> MongoDB may be a better choice for this?
>
--
Salvador Fuentes Jr.
10 writes/second could be handled by any SQL/NoSQL solution, even by
plain files.
So I think the storage chosen should be the one optimized for the queries you
want to run.
Sent from my iPhone
On 06.08.2010, at 05:54, Sal Fuentes wrote:
> Would you care to elaborate?
>
> On Thu, A