My team prefers Pelops. https://github.com/s7/scale7-pelops
It's had failover since 0.7.
http://groups.google.com/group/scale7/browse_thread/thread/19d441b7cd000de0/624257fe4f94a037
With respect to avoiding writing marshaling code yourself, I agree with the
OP that it's rather lacking with the
That's pretty awesome! Apologies for my misleading statement about
marshaling support. I clearly haven't been keeping up. :)
On Fri, Jun 17, 2011 at 8:13 PM, Dan Washusen wrote:
>
> Also, a quick look at the Hector wiki suggests that they have some form of
> annotation support (
> https://github
All of the download URLs for 0.7.6-2 appear to be broken. The issue appears
to be a lack of "-2" in the path.
http://cassandra.apache.org/download/
Dan
Hi all,
I am nursing an overloaded 0.6 cluster through compaction to get its disk
usage under 50%. Many rows' content has been replaced, so after
compaction there will be plenty of room, but a couple of nodes are
currently at 95%.
One strategy I considered is temporarily moving a couple of t
ct.
>
> 3) Once you have compacted I would recommend stopping the node, moving the
> SSTables back to the local node and removing the additional data file
> directory.
>
> Hope that helps.
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
>
Chunking is a good idea, but you'll have to do it yourself. A few of the
columns in our application got quite large (maybe ~150MB) and the failure
mode was RPC timeout exceptions. Nodes couldn't always move that much data
across our data center interconnect in the default 10 seconds. With enough
he
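A minimal sketch of the do-it-yourself chunking described above: split a large value into fixed-size pieces so each individual write (and read) stays well under the RPC timeout, then reassemble on read. The class, method names, and the 1 MB chunk size are illustrative assumptions, not part of any client library's API; in practice each chunk would be stored under its own column name carrying its index.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for client-side chunking of large column values.
public class Chunker {
    // Assumed per-chunk limit; tune to what your cluster moves comfortably
    // within the RPC timeout.
    static final int CHUNK_SIZE = 1 << 20; // 1 MB

    // Split a value into ordered chunks; each chunk would be written as its
    // own column (e.g. named "<key>:<index>") so readers can reassemble.
    static List<byte[]> split(byte[] value) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < value.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, value.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(value, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    // Reassemble chunks, in order, into the original value.
    static byte[] join(List<byte[]> chunks) {
        int total = 0;
        for (byte[] c : chunks) total += c.length;
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, off, c.length);
            off += c.length;
        }
        return out;
    }
}
```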
The stack looks like this one:
https://issues.apache.org/jira/browse/CASSANDRA-2863
On Fri, Feb 24, 2012 at 9:12 AM, Jahangir Mohammed
wrote:
> Hi All,
>
>
> Minor compaction throws NPE. Any ideas? Bug?
>
>
> Cassandra version: 0.8.7
>
>
> Stack Trace:
>
> ERROR [Thread-220] 2012-02-24 16:35:34,4
Hi Stefan. Can you share the output of nodetool cfstats?
On Tue, Feb 28, 2012 at 1:50 AM, Stefan Reek wrote:
> Hi All,
>
> We are running a 3-node cluster with Cassandra 0.6.13.
> We are in the process of upgrading to 1.x, but can't do so for a while
> because we can't take the cluster offline.
> Pending Tasks: 0
> Key cache capacity: 20
> Key cache size: 0
> Key cache hit rate: NaN
> Row cache: disabled
> Compacted row minimum size: 0
> Compacted row maximum size: 0
> Compacted row mean size: 0
>
> timeouts on requests
> and also see Dropped Messages in my logs.
>
> Cheers,
>
> Stefan
>
>
>
> On 02/29/2012 07:48 PM, Dan Retzlaff wrote:
>
> First, to be clear: I'm not an expert, but I suggested "cfstats" because
> it can surface unhealthy signs.
My team switched our production stack from Hector to Pelops a while back,
based largely on this admittedly subjective "programmer experience" bit.
I've found Pelops' code and abstractions significantly easier to follow and
integrate with, plus Pelops has had feature-parity with Hector for all of
ou
Dear experts, :)
Our application triggered an OOM error in Cassandra 0.6.5 by reading the
same 1.7MB column repeatedly (~80k reads). I analyzed the heap dump, and it
looks like the column value was queued 5400 times in an
OutboundTcpConnection destined for the Cassandra instance that received the
Beautiful, thanks.
On Sun, Mar 20, 2011 at 4:36 PM, Jonathan Ellis wrote:
> 0.7.1+ uses zero-copy reads in mmap'd mode so having 80k references to
> the same column is essentially just the reference overhead.
>
> On Fri, Mar 18, 2011 at 7:11 PM, Dan Retzlaff wrote:
> > D
If you go the home-grown route, check out these musings on adapting
Lamport's Bakery algorithm to a similar problem:
http://wiki.apache.org/cassandra/Locking
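For the in-process flavor of the idea, here is a minimal sketch of Lamport's Bakery algorithm for N local threads. It only illustrates the ticket-taking mechanism behind the wiki page's distributed adaptation; a real Cassandra-backed lock would keep the `choosing` flags and tickets in a column family rather than in memory, and class and method names here are illustrative assumptions.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Sketch of Lamport's Bakery lock for n threads, ids 0..n-1.
public class BakeryLock {
    private final int n;
    private final AtomicIntegerArray choosing; // 1 while picking a ticket
    private final AtomicIntegerArray ticket;   // 0 means "not contending"

    public BakeryLock(int n) {
        this.n = n;
        choosing = new AtomicIntegerArray(n);
        ticket = new AtomicIntegerArray(n);
    }

    public void lock(int id) {
        choosing.set(id, 1);
        int max = 0;
        for (int i = 0; i < n; i++) max = Math.max(max, ticket.get(i));
        ticket.set(id, max + 1); // take a ticket above any we observed
        choosing.set(id, 0);
        for (int i = 0; i < n; i++) {
            if (i == id) continue;
            // wait until thread i has finished picking its ticket
            while (choosing.get(i) == 1) Thread.yield();
            // defer to smaller tickets; ties broken by lower thread id
            while (ticket.get(i) != 0 &&
                   (ticket.get(i) < ticket.get(id) ||
                    (ticket.get(i) == ticket.get(id) && i < id))) {
                Thread.yield();
            }
        }
    }

    public void unlock(int id) {
        ticket.set(id, 0);
    }
}
```

Note that two threads can take the same ticket number (the max-scan isn't atomic); the id tiebreak plus the `choosing` flag is exactly what makes the algorithm safe despite that.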
On Sun, Nov 7, 2010 at 5:05 PM, Mubarak Seyed wrote:
> Hi All,
> Can someone please validate and recommend a solution for the given design