Maybe. We haven't really tested it without buffering and probably
won't anytime soon. 1 minute latency is good enough for what we're
doing.
On Mon, May 23, 2011 at 1:58 PM, Jeremy Hanna
wrote:
>
> On May 23, 2011, at 2:23 PM, Ryan King wrote:
>
>> On Mon, May 23, 2011 at 12:06 PM, Yang wrote:
>>
You could have removed the affected commit log file and then run a nodetool
repair after the node had started.
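Concretely, that recovery sequence might look like the following (the service name and paths are assumptions; substitute your own init script and commitlog_directory):

```shell
# Stop the node, set aside the affected commit log segments, restart,
# then stream any lost writes back from the replicas with repair.
# Only safe when RF > 1, i.e. when replicas actually hold the data.
sudo service cassandra stop
mkdir -p /var/lib/cassandra/commitlog.bad
mv /var/lib/cassandra/commitlog/CommitLog-*.log /var/lib/cassandra/commitlog.bad/
sudo service cassandra start
# Once the node is up:
nodetool -h localhost repair
```

Keeping the bad segments in a side directory instead of deleting them leaves the option of replaying them later if the corruption turns out to be recoverable.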
It would be handy to have some more context for the problem. Was this an
upgrade from 0.7 or a fresh install?
If you are running the RCs, it's handy to turn logging up to DEBUG so the
We are performing the repair on one node only. Other nodes receive reasonable
amounts of data (~500MB). It's only the repairing node itself which
'explodes'.
I must admit that I'm a noob when it comes to AES/repair. It's just strange that
a cluster that is up and running with no probs is doing
Nate,
Really appreciate your quick response.
Yes, I will sign up hector-users as well.
Thanks,
John
On Mon, May 23, 2011 at 5:55 PM, Nate McCall wrote:
> The Mutator#insert signature is a single insertion operation. To
> "batch" operations, use Mutator#addInsertion. You must then call
> Mutator#execute to send the batched operations.
The Mutator#insert signature is a single insertion operation. To
"batch" operations, use Mutator#addInsertion. You must then call
Mutator#execute to send the batched operations.
For Hector specific questions, feel free to sign up for
hector-us...@googlegroups.com as well.
On Mon, May 23, 2011 at
DataStax is hosting full-day Cassandra training this week in Seattle
and in London on the 25th and 26th, respectively.
Seattle's training will be given by Srisatish Ambati, our director of
engineering.
London's will be given by Sylvain Lebresne, Cassandra committer and a
familiar face on this list
Have you looked at graphite? It would be very cool to see graphite using
cassandra as a backend, and then to have cacti feeding data into cassandra.
On May 20, 2011, at 4:48 PM, Edward Capriolo wrote:
> The first love of my open life was cacti. I am going to discuss with
> them porting some of
Hi,
I am pretty new to Cassandra and am going to use Cassandra 0.8.0. I have two
questions (sorry if they are very basic ones):
1) I have a column family to hold many super columns, say 30. When I first
insert the data to the column family, do I need to insert each column one at
a time or can I insert them all in one batch?
On May 23, 2011, at 2:23 PM, Ryan King wrote:
> On Mon, May 23, 2011 at 12:06 PM, Yang wrote:
>> Thanks Ryan,
>>
>> could you please share more details: according to what you observed in
>> testing, why was performance worse if you do not do extra buffering?
>>
>> I was thinking (could be wrong) that without extra buffering, the
>> counter update goes to Memtable.putIfPresent() and
>> CounterColumn.resolve(),
Since this is a testing system, I deleted the commit log and it came
right up. My question now is, let's say I had a ton of data in the
commit log that this node needs now. What is the best way to get the
data back to the node? Does a nodetool repair do this? Or do I need to
decommission the node?
On Mon, May 23, 2011 at 12:06 PM, Yang wrote:
> Thanks Ryan,
>
> could you please share more details: according to what you observed in
> testing, why was performance worse if you do not do extra buffering?
>
> I was thinking (could be wrong) that without extra buffering, the
> counter update goes to Memtable.putIfPresent() and
> CounterColumn.resolve(),
> I'm a bit lost: I tried a repair yesterday with only one CF and that didn't
> really work the way I expected but I thought that would be a bug which only
> affects that special case.
>
> So I tried again for all CFs.
>
> I started with a nicely compacted machine with around 320GB of load. Total
Thanks Ryan,
could you please share more details: according to what you observed in
testing, why was performance worse if you do not do extra buffering?
I was thinking (could be wrong) that without extra buffering, the
counter update goes to Memtable.putIfPresent() and
CounterColumn.resolve(),
On Sun, May 22, 2011 at 11:00 AM, Yang wrote:
> Thanks,
>
> I did read through that pdf doc, and went through the counters code in
> 0.8-rc2, I think I understand the logic in that code.
>
> in my hypothetical implementation, I am not suggesting to overstep the
> complicated logic in counters code
Thanks Sylvain
Well, no, I don't really understand it at all. We have all
wide rows / small values to a single larger column in one row.
The problem hits every CF. RF = 3, read/write with QUORUM.
The CF that is killing me right now is one column that's never updated (it's WORM -
updates are reinserts u
I have a test node system running release 0.8rc1. I rebooted node3 and
now Cassandra is failing on startup.
Any ideas? I am not sure where to begin.
Debian 6, plenty of disk space, Cassandra 0.8rc1
INFO 13:48:58,192 Creating new commitlog segment
/home/cassandra/commitlog/CommitLog-130617293
On Mon, May 23, 2011 at 7:17 PM, Daniel Doubleday
wrote:
> Hi all
>
> I'm a bit lost: I tried a repair yesterday with only one CF and that didn't
> really work the way I expected but I thought that would be a bug which only
> affects that special case.
>
> So I tried again for all CFs.
>
> I started with a nicely compacted machine with around 320GB of load. Total
Hi all
I'm a bit lost: I tried a repair yesterday with only one CF and that didn't
really work the way I expected but I thought that would be a bug which only
affects that special case.
So I tried again for all CFs.
I started with a nicely compacted machine with around 320GB of load. Total dis
Three ways to do this:
1. Client app does a get per key for every row: lots of small network operations.
2. brisk / hive does a select(*), which is sent to each node to map; then the
hadoop network shuffle merges the results.
3. Write your own code to merge all the SSTables across the cluster.
So I think that brisk
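For the brisk/hive route, the whole scan is a single statement that Hive fans out to map tasks on each node, rather than per-key lookups from the client (the keyspace and column family names below are hypothetical):

```shell
# Hypothetical Hive table mapped onto a Cassandra column family;
# the scan runs as map tasks local to each node's data.
hive -e "SELECT * FROM my_keyspace.my_cf LIMIT 10;"
```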
thanks Sri
I am trying to make sure that Brisk underneath does a simple scraping
of the rows, instead of doing foreach key ( keys ) { lookup (key) }..
after that, I can feel comfortable using Brisk for the import/export jobs
yang
On Mon, May 23, 2011 at 8:50 AM, SriSatish Ambati
wrote:
> Adrian
Adrian,
+1
Using hive & hadoop for the export-import of data from & to Cassandra is one
of the original use cases we had in mind for Brisk. That also has the
ability to parallelize the workload and finish rapidly.
thanks,
Sri
On Sun, May 22, 2011 at 11:31 PM, Adrian Cockcroft <
adrian.cockcr...@g
On Mon, May 23, 2011 at 5:47 AM, Wojciech Pietrzok wrote:
> It was installed as 0.7.2 and upgraded with each new official release.
I bet that's the problem, then.
https://issues.apache.org/jira/browse/CASSANDRA-2244 could cause
indexes to not be updated for releases < 0.7.4. You'll want to
rebuild those indexes.
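One way to trigger a rebuild on 0.7.x (an assumption on my part, not verified against every release; the keyspace/CF/column names are hypothetical) is to clear and then re-add the index definition through cassandra-cli, which makes Cassandra rebuild the index from the base data:

```shell
# Drop and re-create the secondary index definition; Cassandra rebuilds
# the index from the base column family when it is re-added.
cassandra-cli -h localhost <<'EOF'
use MyKeyspace;
update column family Users with column_metadata = [];
update column family Users with column_metadata =
  [{column_name: email, validation_class: UTF8Type, index_type: KEYS}];
EOF
```

Watch the node's log for the index build to finish before relying on index reads again.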
\w+
On Mon, May 23, 2011 at 12:28 AM, Dikang Gu wrote:
> What's the naming convention of the column family in cassandra? I did not
> find this in the wiki yet...
> Thanks.
>
> --
> Dikang Gu
> 0086 - 18611140205
>
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
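So per the \w+ answer above, a column family name may contain only word characters (letters, digits, underscore). A quick shell sanity check of candidate names, as a sketch:

```shell
# Validate candidate column family names against the \w+ rule
# (spelled out as [A-Za-z0-9_] for portability across grep builds).
for name in Users user_events "bad-name" "has space"; do
  if printf '%s' "$name" | grep -qE '^[A-Za-z0-9_]+$'; then
    echo "$name: valid"
  else
    echo "$name: invalid"
  fi
done
```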
Added Jira item CASSANDRA-2675. I also included a test program in the ticket to
reproduce the issue.
Thanks,
Rene
-Original Message-
From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Thursday, 19 May 2011 17:13
To: user@cassandra.apache.org
Subject: Re: java.io.IOError: java.i
It was installed as 0.7.2 and upgraded with each new official release.
As I wrote in another message in this thread, now nodes are upgraded
to 0.7.6 but it still seems that one of the problematic nodes returns
inconsistent data.
By the way - is it possible to force the rebuild of the secondary
indexes?
Hi,
Is there a timeframe planned for the 0.8.1 release?
I want to use the CompositeType comparator:
https://issues.apache.org/jira/browse/CASSANDRA-2231
Thanks!
Donal