Hello all,
I am currently working on testing various HA scenarios on a small Cassandra
cluster of 8 nodes, RF=3. I have a test environment with Thrift clients doing
double writes of all operations to the Cassandra cluster and to reliable
storage, and cross-checking read results. Reads are performed with CL=
Hi.
What is the best method for making a large extract of data from Cassandra?
Does extracting directly from the SSTable files sound good?
If yes:
Is there an API for working directly with the SSTable files?
Is there a specification of the SSTable file format?
Thx.
On Fri, Jul 16, 2010 at 3:53 PM, xavier manach wrote:
> Hi.
>
> What is the best method for making a large extract of data from Cassandra?
> Does extracting directly from the SSTable files sound good?
> If yes:
> Is there an API for working directly with the SSTable files?
> Is there a specification of t
Arya,
That is not currently possible in trunk. It would be a good feature
though. Care to file a ticket?
Gary.
On Thu, Jul 15, 2010 at 22:13, Arya Goudarzi wrote:
> I recall jbellis in his training showing us how to increase the replication
> factor and repair data on a cluster in 0.6. How
https://issues.apache.org/jira/browse/CASSANDRA-1285
- Original Message -
From: "Gary Dusbabek"
To: user@cassandra.apache.org
Sent: Friday, July 16, 2010 7:17:48 AM
Subject: Re: Increasing Replication Factor in 0.7
Arya,
That is not currently possible in trunk. It would be a good featu
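
For context, the 0.6-era procedure the training presumably covered (in
0.7/trunk, keyspace definitions live in the schema itself, hence the ticket
above) was roughly: raise ReplicationFactor in storage-conf.xml, restart the
nodes, then run anti-entropy repair on every node so the new replicas receive
data. A hedged sketch of the repair step; the host names and the nodetool flag
syntax are assumptions that vary by version:

import subprocess

NODES = ["node1", "node2", "node3"]  # hypothetical host names

for host in NODES:
    # flag syntax varies by version (e.g. '-h' in later releases)
    subprocess.check_call(["nodetool", "--host", host, "repair"])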
We integrate ganglia
On Mon, Jun 28, 2010 at 1:53 AM, Jonathan Ellis wrote:
> short version:
>
> if o.a.c.concurrent.{ROW-READ-STAGE,ROW-MUTATION-STAGE} and
> o.a.c.db.CompactionManager have
>
> - completed task count increasing
> - pending tasks stable (for RRS and RMS, stable in low hundreds
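
A hedged sketch of automating that check: sample nodetool tpstats twice and
assert that completed counts are increasing while pending stays in the low
hundreds. The column layout of the tpstats output is an assumption and varies
by version:

import subprocess, time

STAGES = ("ROW-READ-STAGE", "ROW-MUTATION-STAGE")

def sample():
    out = subprocess.check_output(["nodetool", "tpstats"]).decode()
    stats = {}
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[0] in STAGES:
            # assumes columns: name, active, pending, completed
            stats[parts[0]] = (int(parts[2]), int(parts[3]))
    return stats

before = sample()
time.sleep(10)
after = sample()
for stage in STAGES:
    pending, completed = after[stage]
    assert completed > before[stage][1], "%s: completed not increasing" % stage
    assert pending < 300, "%s: pending not in low hundreds (%d)" % (stage, pending)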
If you can't accept out-of-date data, you shouldn't be reading at
CL.ONE. Making HH more complex is not the answer.
On Fri, Jul 16, 2010 at 7:52 AM, Oleg Anastasjev wrote:
> Hello all,
>
> I am currently working on testing various HA scenarios on a small Cassandra
> cluster of 8 nodes, RF=3. I have
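
The underlying rule: a read is only guaranteed to see the latest write when
the read and write consistency levels overlap, i.e. R + W > N. With RF=3,
QUORUM reads and writes (R = W = 2) always intersect, while CL.ONE reads give
no such guarantee. A small sketch verifying the intersection property:

from itertools import combinations

replicas = ("A", "B", "C")  # N = 3, matching the RF=3 cluster above
R = W = 2                   # QUORUM for N = 3

assert R + W > len(replicas)
for write_set in combinations(replicas, W):
    for read_set in combinations(replicas, R):
        # every read quorum shares at least one replica with every write
        # quorum, so at least one replica read has the latest write
        assert set(write_set) & set(read_set)
print("every read quorum overlaps every write quorum")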
On Thu, Jul 15, 2010 at 10:45:08PM -0700, Anthony Molinaro wrote:
> Is there something else I should try? The only thing I can think of
> is deleting the system directory on the new node, and restarting, so
> I'll try that and see if it does anything.
So I tried this; it didn't do anything. The
I've been doing quite a bit of benchmarking of Cassandra in the cloud using
stress.py. I'm working on a comprehensive spreadsheet of results, with a
template that others can add to, but for now I thought I'd post some of the
basic results here to get some feedback from others.
The first goal was
My friend Mikeal posted this on his blog, including a discussion of
Cassandra versus CouchDB and MongoDB:
http://www.mikealrogers.com/2010/07/mongodb-performance-durability/
I've emailed him a couple clarifications on the discussion of Cassandra,
but it's mostly spot-on and a good read on the sta
It's non-trivial, but you could try using Hadoop/Pig. Take a look at
contrib/pig in the source.
You could output flat file formats.
Aaron
On 17 Jul 2010, at 02:09, Sylvain Lebresne wrote:
> On Fri, Jul 16, 2010 at 3:53 PM, xavier manach wrote:
>> Hi.
>>
>> What is the best method for
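
Aaron's Hadoop/Pig route above is the scalable option; for modest extracts, a
plain Thrift range scan dumped to a flat file also works. A hedged sketch
against the 0.6-era Python Thrift bindings; the module names, keyspace and
column family names, and the exact get_range_slices signature are assumptions
to check against your generated interface:

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from cassandra import Cassandra
from cassandra.ttypes import (ColumnParent, SlicePredicate, SliceRange,
                              KeyRange, ConsistencyLevel)

socket = TSocket.TSocket("localhost", 9160)
transport = TTransport.TBufferedTransport(socket)
client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))
transport.open()

parent = ColumnParent(column_family="Standard1")  # hypothetical CF
predicate = SlicePredicate(
    slice_range=SliceRange(start="", finish="", count=1000))

start_key = ""
while True:
    key_range = KeyRange(start_key=start_key, end_key="", count=500)
    rows = client.get_range_slices("Keyspace1", parent, predicate,
                                   key_range, ConsistencyLevel.ONE)
    for row in rows:
        if start_key and row.key == start_key:
            continue  # ranges are inclusive; skip the repeated first row
        print(row.key, len(row.columns))  # write flat-file output here
    if len(rows) < 500:
        break
    start_key = rows[-1].key
transport.close()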
I think your read throughput is very high, and it may be unrealistic.
For random reads, disk seek will always be the bottleneck (100% util).
There will be about 3 random disk seeks for a random read, and about 10 ms
for one seek. So a random read will take about 30 ms.
If you have only one dis
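
The arithmetic, spelled out:

seeks_per_read = 3                          # ~3 random seeks per random read
seek_ms = 10.0                              # ~10 ms per seek
read_ms = seeks_per_read * seek_ms          # ~30 ms per random read
reads_per_sec_per_disk = 1000.0 / read_ms   # ~33 random reads/s per disk
print(read_ms, reads_per_sec_per_disk)

So a single spindle tops out around 33 truly random reads/s; sustained figures
far above that per disk usually mean the working set is being served from cache.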
Maybe the OrderPreservingPartitioner should let the user define a customized
comparator.
In fact, a user can implement his/her own XXXOrderPreservingPartitioner.
On Tue, Jun 22, 2010 at 8:34 PM, Sylvain Lebresne wrote:
> 2010/6/22 Maxim Kramarenko :
> > Hello!
> >
> > I use OrderPreservingPartitione
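
To illustrate what a pluggable comparator would change (plain Python for
illustration, not Cassandra's partitioner API): an order-preserving
partitioner's range-scan order is just the key sort order, so numeric keys
need zero-padding unless the comparator understands them:

keys = ["10", "2", "1", "20"]

# lexicographic order, as OrderPreservingPartitioner uses:
print(sorted(keys))           # ['1', '10', '2', '20']

# what a hypothetical numeric-aware comparator would give:
print(sorted(keys, key=int))  # ['1', '2', '10', '20']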