You might be confusing the RackAware strategy (which puts 1 replica in
a remote DC) with the DatacenterShard strategy (which puts M of N
replicas in remote DCs). Both are in 0.6.5.
https://svn.apache.org/repos/asf/cassandra/tags/cassandra-0.6.5/src/java/org/apache/cassandra/locator/DatacenterShard
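As a rough illustration of the difference between the two strategies, here is a sketch of how many replicas each one places outside the local datacenter. The function name and the simplified placement model are assumptions for illustration only, not Cassandra's actual placement code:

```python
def replicas_per_dc(strategy, total_rf, remote_rf=None):
    """Simplified model of replica placement per datacenter.

    RackAware: all but one replica stay local; exactly one goes remote.
    DatacenterShard: the operator chooses M of the N total replicas
    to place in the remote DC(s).
    """
    if strategy == "RackAware":
        return {"local": total_rf - 1, "remote": 1}
    if strategy == "DatacenterShard":
        return {"local": total_rf - remote_rf, "remote": remote_rf}
    raise ValueError("unknown strategy: %s" % strategy)

print(replicas_per_dc("RackAware", 3))           # {'local': 2, 'remote': 1}
print(replicas_per_dc("DatacenterShard", 5, 2))  # {'local': 3, 'remote': 2}
```

The point of the sketch: with RackAware the remote count is fixed at 1, while DatacenterShard makes it a tunable parameter.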
I installed thrift successfully on Snow Leopard. However, when I run
*thrift -gen java interface/cassandra.thrift* with Cassandra 0.6.5, I get an
error which reads as follows:
apache-cassandra-0.6.5/interface/cassandra.thrift:303] error: identifier
ONE is unqualified!
Line 303 of cassandra.thr
We had similar problems. It may help to read this:
http://blog.mikiobraun.de/ (Tuning GC for Cassandra)
Regards,
Leo.
On 22.09.2010, at 09:27, Chris Jansen wrote:
> Hi all,
>
> I have written a test application that does a write, read and delete on one
> of the sample column families that s
http://wiki.apache.org/cassandra/FAQ#i_deleted_what_gives
That help?
On Wed, Sep 22, 2010 at 5:27 PM, Chris Jansen <
chris.jan...@cognitomobile.com> wrote:
> Hi all,
>
>
>
> I have written a test application that does a write, read and delete on one
> of the sample column families that ship with
Thanks Leo, I'll have a read.
Regards,
Chris
From: Matthias L. Jugel [mailto:l...@thinkberg.com]
Sent: 22 September 2010 08:39
To: user@cassandra.apache.org
Subject: Re: Running out of heap
We had similar problems. It may help to read this:
http://blog.mikiobraun.de/ (Tuning GC for Cassandra)
Hi Dan,
I do see compaction happen, I keep a close eye on the disk usage and
what I see is the usage grow and then shrink, but despite the periodic
compaction the overall result is a slow but steady growth.
Regards,
Chris
From: Dan Washusen [mailto:d...@reactive.org]
Sent: 22 September 2010
A key point in that FAQ entry is that the deletes don't occur until after
the configured GCGraceSeconds (10 days is the default I believe).
This (http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts)
FAQ entry mentions your scenario and suggests either increasing the memory
allocation
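In other words, a delete only becomes eligible for physical removal once GCGraceSeconds have elapsed since the delete was issued. A minimal sketch of that rule (the function and constant here are illustrative, not Cassandra's actual code):

```python
GC_GRACE_SECONDS = 10 * 24 * 60 * 60  # the 10-day default mentioned above

def tombstone_purgeable(deleted_at, now, gc_grace=GC_GRACE_SECONDS):
    """A tombstone may be dropped during compaction only after
    gc_grace seconds have passed since the delete was issued."""
    return now - deleted_at >= gc_grace

day = 24 * 60 * 60
print(tombstone_purgeable(0, 9 * day))   # False - still within the grace period
print(tombstone_purgeable(0, 11 * day))  # True - compaction may now remove it
```

Until that window has passed, the tombstone (and the disk space it protects) stays put no matter how many compactions run.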
I'm running a test cluster for development and evaluation of Cassandra. In
order to use the latest build of Hector (the 0.7.0 branch) I've needed to move
from 0.6.5 first to 0.7.0-beta1 of Cassandra and then to the latest nightly
build (actually 2010-09-20_14-20-20). In order to move over I've
Hi all,
just wanted to make sure that I get this right:
What this means is that I have to schedule repairs only on every RF-th node?
So with 4 nodes and RF=2 I would repair nodes 1 and 3,
and with 6 nodes and RF=3 I would repair nodes 1 and 4,
and that would lead to a synced cluster?
> On Thu, Jul 15
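The arithmetic behind that rule can be sketched as follows, assuming nodes are numbered 1..N around the ring and each node replicates the RF-1 preceding ranges. This is an illustration of the counting argument only, not a substitute for checking your actual token assignments:

```python
def repair_nodes(num_nodes, rf):
    """Pick every RF-th node so that, together, the chosen nodes'
    replica sets cover every token range at least once."""
    return [i + 1 for i in range(0, num_nodes, rf)]  # 1-based node numbers

print(repair_nodes(4, 2))  # [1, 3]
print(repair_nodes(6, 3))  # [1, 4]
```

Note the examples match the ones in the question above; when N is not evenly divisible by RF the last chosen node's ranges overlap with the first, which is harmless but means some ranges get repaired twice.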
Did you build thrift with the specific subversion revision that Cassandra uses?
http://wiki.apache.org/cassandra/InstallThrift
On Sep 22, 2010, at 2:35 AM, Shashank Tiwari wrote:
> I installed thrift successfully on Snow Leopard. However, when I run
> thrift -gen java interface/cassandra.thrift
Thanks Dan, I've reduced the GCGraceSeconds to a number of hours for my
testing, and Cassandra is now removing the old records.
The link provided by Leo helped a lot also, I've been able to tune the
garbage collector to better suit the rapid creation and removal of data.
Thanks again,
C
DSS is broken in 0.6 and was removed from 0.6.6 to make it (even more)
clear that you shouldn't be using it.
On Wed, Sep 22, 2010 at 12:23 AM, rbukshin rbukshin wrote:
> The one in 0.6 doesn't allow controlling the number of replicas to place in
> the other DC. At most 1 copy of the data can be placed in the other
The strategy is saved as part of your schema. Install the new build
before loading it.
On Wed, Sep 22, 2010 at 3:35 AM, andy C wrote:
> I'm running a test cluster for development and evaluation of Cassandra. In
> order to use the latest build of Hector (the 0.7.0 branch) I've needed to move
> fro
If you're using RackUnawareStrategy, that should work.
On Wed, Sep 22, 2010 at 5:27 AM, Daniel Doubleday
wrote:
> Hi all,
>
> just wanted to make sure that I get this right:
>
> What this means is that I have to schedule repairs only on every RF-th node?
>
> So with 4 nodes and RF=2 I would repair n
Sorry Jonathan, not quite getting you.
I'd gathered the strategy is stored in the schema. However I'm not quite
getting what you mean by "install the new build before loading it". Are you
saying I should load the new build on a clean machine and use schematool to
import the schema from the beta1 cluster
Riptano is going to be in Seattle on Oct 8 for a full-day Cassandra
training, taught by Ben Black, who many of you know from IRC and this
list. The training is broken into two parts: the first covers
application design and modeling in Cassandra, with exercises using the
Pycassa library; the second
Oops, correct link for Seattle: http://www.eventbrite.com/event/763062340
On Wed, Sep 22, 2010 at 11:26 AM, Jonathan Ellis wrote:
> Riptano is going to be in Seattle on Oct 8 for a full-day Cassandra
> training, taught by Ben Black, who many of you know from IRC and this
> list. The training is b
Any training on the East Coast?
-Original Message-
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Wednesday, September 22, 2010 12:41 PM
To: user
Subject: Re: Riptano Cassandra training in Seattle
Oops, correct link for Seattle: http://www.eventbrite.com/event/763062340
On Wed
Is this also true for RackAware with alternating nodes from 2 datacenters on
the ring?
On Wed, Sep 22, 2010 at 7:28 AM, Jonathan Ellis wrote:
> if you're using RackUnawareStrategy that should work.
>
> On Wed, Sep 22, 2010 at 5:27 AM, Daniel Doubleday
> wrote:
> > Hi all,
> >
> > just wanted to
We had one in NYC in August. We're lining up another East Coast location soon.
On Wed, Sep 22, 2010 at 11:51 AM, Parsacala Jr, Nazario R. [Tech]
wrote:
> Any training in the East Coast ..?
>
> -Original Message-
> From: Jonathan Ellis [mailto:jbel...@gmail.com]
> Sent: Wednesday, Septembe
I normally get the source download, then run ant gen-thrift-py. Is there a reason you want to build it manually?
Aaron
On 22 Sep 2010, at 07:35 PM, Shashank Tiwari wrote:
I installed thrift successfully on Snow Leopard. However, when I run thrift -gen java interface/cassandra.thrift with Cassandra 0
Hi,
I am expecting my data size to be around nGB. However, it keeps growing
and growing.
I am setting the gc_grace_seconds for the CF to 5 hours, and I am also
setting "ttl" for all columns on a row and expecting that these columns
will be "deleted" after the ttl time, and will be "removed"
The data will only be physically deleted when a major compaction runs and the
GCGraceSeconds period has passed. You need to trigger the compaction using nodetool.
http://wiki.apache.org/cassandra/DistributedDeletes
Aaron
On 23 Sep 2010, at 12:14, Alaa Zubaidi wrote:
> Hi,
> I am expecting my data s
Jeremy and Aaron,
Thanks for your help.
I had already installed Thrift on my Snow Leopard machine, so I thought running *thrift
-gen cassandra.thrift* would work. However, as the wiki suggests, it
appears only a specific version of Thrift works with a particular Cassandra
version. So I checked out the m
Minor compactions will often be able to perform this garbage collection as well
in 0.6.6 and 0.7.0 due to a great optimization implemented by Sylvain:
https://issues.apache.org/jira/browse/CASSANDRA-1074
-Original Message-
From: "Aaron Morton"
Sent: Wednesday, September 22, 2010 7:47pm
Hello,
I tried to use the Column.Ttl property but was not successful. My simple test:
1) insert column with ttl = 3
2) get column - all is ok
3) wait for 2 seconds
4) get column - all is ok
5) wait again for 2 seconds (so the column should have disappeared)
6) get column - I got "Thrift.TApplicationException"
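The behaviour that test expects can be sketched as a toy in-process model of an expiring column. The class and exception below are invented for illustration, they are not part of the Thrift API; the point is that once the TTL elapses the column should simply read as not found (one would expect NotFoundException from the server rather than a TApplicationException):

```python
import time

class ExpiringColumn:
    """Toy model of a column with a TTL (illustration only)."""

    def __init__(self, value, ttl_seconds):
        self.value = value
        self.expires_at = time.time() + ttl_seconds

    def get(self):
        # After the TTL elapses, the column behaves as if it were deleted.
        if time.time() >= self.expires_at:
            raise KeyError("column expired")
        return self.value

col = ExpiringColumn("hello", ttl_seconds=0.2)
print(col.get())        # value is readable while the TTL has not elapsed
time.sleep(0.3)
try:
    col.get()
except KeyError:
    print("expired")    # past the TTL the read reports the column as gone
```

A TApplicationException at step 6 suggests a client/server protocol mismatch rather than normal expiry behaviour, which is worth checking before blaming the TTL itself.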