Hi guys,
I'm going to build a data warehouse with Cassandra. There will be a lot of range and
aggregate queries.
Does Cassandra support parallel query processing (both on a single box and across
the cluster)?
On 19.5.2012 0:09, Gurpreet Singh wrote:
Thanks, Radim.
Actually, 100 reads per second is achievable even with 2 disks, though it will get
worse as rows become fragmented.
But achieving them with a really low average latency per key is the issue.
I am wondering if anyone has played with in
Can I infer from this that if I have 3 replicas, then running repair
without -pr on one node will repair the other 2 replicas as well?
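For reference, the two invocations I am comparing look roughly like this (the host
and keyspace names are just placeholders):

  nodetool -h 10.0.0.1 repair -pr MyKeyspace   # primary range only; run on every node
  nodetool -h 10.0.0.1 repair MyKeyspace       # all ranges this node replicates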
-Raj
On Sat, Apr 14, 2012 at 2:54 AM, Zhu Han wrote:
>
> On Sat, Apr 14, 2012 at 1:57 PM, Igor wrote:
>
>> Hi!
>>
>> What is the difference between 'repair' and
> 2. I know I have counter columns. I can do sums. But can I do averages?
One counter column for the sum, one counter column for the count. Divide for
average :-)
/Janne
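A minimal sketch of that idea in pycassa (the keyspace and column family names are
invented, and PageTimings is assumed to be a counter column family, i.e. created with
default_validation_class = CounterColumnType):

  import pycassa

  pool = pycassa.ConnectionPool('Stats', ['localhost:9160'])
  timings = pycassa.ColumnFamily(pool, 'PageTimings')

  def record_sample(page, millis):
      # two independent counter increments: one for the running sum,
      # one for the number of samples
      timings.add(page, 'latency_sum', value=millis)
      timings.add(page, 'latency_count', value=1)

  def average(page):
      row = timings.get(page, columns=['latency_sum', 'latency_count'])
      return float(row['latency_sum']) / row['latency_count']

Since both increments are commutative counter updates, concurrent writers do not
conflict; the division only happens at read time.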
In the meantime, Sylvain just posted this:
http://www.datastax.com/dev/blog/cql3-evolutions
On Wed, May 16, 2012 at 11:45 AM, paul cannon wrote:
> Sylvain has a draft on https://issues.apache.org/jira/browse/CASSANDRA-3779,
> and that should be an official Cassandra project doc "real soon now".
When these bugs are fixed:
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jqlQuery=project+%3D+CASSANDRA+AND+fixVersion+%3D+%221.1.1%22+AND+resolution+%3D+Unresolved+ORDER+BY+due+ASC%2C+priority+DESC%2C+created+ASC&mode=hide
On Wed, May 16, 2012 at 6:35 PM, Bryan Fernandez wrote:
Looks like sstable corruption to me. Bad memory can often cause this.
You should upgrade to the latest 0.7 release and run nodetool scrub.
I don't think the 0.7.3 scrub was very robust.
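If it helps, scrub is run per node and can be scoped to a keyspace and column family
(the names below are placeholders):

  nodetool -h localhost scrub MyKeyspace MyColumnFamily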
On Thu, May 17, 2012 at 1:36 AM, Preston Cheung wrote:
> While doing compaction, Cassandra encountered an EOFException
Better: use bin/sstableloader, which will copy exactly the right
ranges of data to the new cluster.
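The exact invocation depends on the version; on 1.1 it is roughly (the host and path
are placeholders):

  bin/sstableloader -d <node-in-new-cluster> /path/to/MyKeyspace/MyColumnFamily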
On Fri, May 18, 2012 at 3:39 PM, Rob Coli wrote:
> On Thu, May 17, 2012 at 9:37 AM, Bryan Fernandez
> wrote:
>> What would be the recommended
>> approach to migrating a few column families from a
1.1 will migrate your data to the new directory structure, but it needs the
0.8 schema to do that. Then you can drop the unwanted keyspace
post-upgrade.
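The drop itself is just a cassandra-cli statement once the upgrade has finished
(the keyspace name is a placeholder):

  drop keyspace OldKeyspace;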
On Fri, May 18, 2012 at 11:58 AM, Harshvardhan Ojha <
harshvardhan.o...@makemytrip.com> wrote:
> Hi All,
>
> I am trying to migr
Sounds like you have a permissions problem. Cassandra creates a
subdirectory for each snapshot.
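A quick check is to look at who owns the data directory the snapshot would be created
under; the path below is the default data_file_directories location, and the
cassandra user/group is an assumption about how the service runs:

  ls -ld /var/lib/cassandra/data/<your_keyspace>
  sudo chown -R cassandra:cassandra /var/lib/cassandra/data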
On Thu, May 17, 2012 at 4:57 AM, ruslan usifov wrote:
> Hello
>
> I have the following situation on our test server:
>
> From cassandra-cli I try to use:
>
> truncate purchase_history;
>
> Three times I got:
>
>
So, you're doing about 20 ops/s where each op consists of "read 2
metadata columns, then read ~250 columns of ~2K each." Is that right?
Is your test client multithreaded? Is it on a separate machine from
the Cassandra server?
What is your bottleneck?
http://spyced.blogspot.com/2010/01/linux-per
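If the client is single-threaded, one thread doing "read metadata, then read the
slice" serially will bound the throughput long before the cluster does. A rough
pycassa sketch of a multithreaded version of the pattern you describe (keyspace,
column family, column names, and key scheme are all assumptions):

  import threading
  import pycassa

  pool = pycassa.ConnectionPool('MyKeyspace', ['cassandra-host:9160'], pool_size=16)
  cf = pycassa.ColumnFamily(pool, 'WideRows')

  def read_one(key):
      meta = cf.get(key, columns=['created', 'num_chunks'])  # the 2 metadata columns
      data = cf.get(key, column_count=250)                    # the ~250 x ~2K slice
      return meta, data

  def worker(keys):
      for k in keys:
          read_one(k)

  all_keys = ['row%05d' % i for i in range(1000)]
  batches = [all_keys[i::8] for i in range(8)]          # split across 8 client threads
  threads = [threading.Thread(target=worker, args=(b,)) for b in batches]
  for t in threads:
      t.start()
  for t in threads:
      t.join()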
Hi experts,
I have a 6-node cluster spread across 2 DCs.
DC    Rack    Status  State   Load       Owns     Token
                                                  113427455640312814857969558651062452225
DC1   RAC13   Up      Normal  95.98 GB   33.33%   0
DC2   RAC5    Up      Normal  50.79 GB
Dear distinguished colleagues:
I am trying to come up with a data model that lets me do aggregations, such
as sums and averages.
Here are my requirements:
1. Data may be updated concurrently
2. I want to avoid changing schema; we have a multitenant cloud solution
that is driven by configuration (a rough sketch of what I have in mind follows below)
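For concreteness, the layout I am considering is a single counter column family shared
by all tenants, keyed by something like '<tenant>:<metric>', so new tenants or metrics
only mean new row keys rather than schema changes. A minimal pycassa sketch (the
keyspace, column family, and key names are invented, and the 'Metrics' keyspace is
assumed to already exist):

  from pycassa.system_manager import (SystemManager, UTF8_TYPE,
                                      COUNTER_COLUMN_TYPE)

  sys_mgr = SystemManager('localhost:9160')
  # one CF for every tenant and metric; rows such as 'acme:page_load_ms'
  sys_mgr.create_column_family('Metrics', 'TenantAggregates',
                               comparator_type=UTF8_TYPE,
                               key_validation_class=UTF8_TYPE,
                               default_validation_class=COUNTER_COLUMN_TYPE)

The sum/count counter columns inside each row would then be updated exactly as in
Janne's reply above.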