No niftier tricks -- you will need to drain/restart because of the
commitlog headers. Other than that it should work fine.
On Fri, Nov 26, 2010 at 8:08 PM, David King wrote:
> I'd like to move a CF from one keyspace into another. I'm running 0.6.8. Is
> this just a matter of draining the nodes,
I'd like to move a CF from one keyspace into another. I'm running 0.6.8. Is
this just a matter of draining the nodes, taking down the cluster, updating the
schema in storage-conf.xml, and moving the files themselves? Is there an even
niftier trick that I can use without taking down the cluster?
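Concretely, something along these lines is what I had in mind (paths,
keyspace and CF names are illustrative, and the exact nodetool flags vary a
bit between 0.6 releases, so check nodetool's help output):

# on every node: stop accepting writes, flush memtables, empty the commitlog
nodetool -host <node> -port 8080 drain

# with the cluster down, move the CF definition to the target keyspace in
# storage-conf.xml on every node, then move the SSTables themselves (in 0.6
# the data files live in a per-keyspace directory and are prefixed with the
# CF name)
mv /var/lib/cassandra/data/OldKS/MyCF-* /var/lib/cassandra/data/NewKS/

# then restart Cassandra on every node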
On Fri, Nov 26, 2010 at 1:34 PM, Edward Capriolo wrote:
> I believe there is room for other compaction models. I am interested
> in systems that can optimize the case with multiple data directories
> for example. It seems from my experiment that a major compaction can
> not fully utilize the hardware
Prior to Cassandra 0.7, there was a limitation of 2GB on row sizes as the
entire row had to fit in memory for compaction. As far as I'm aware, in
Cassandra 0.7, the limit has changed to 2^31 (approximately 2 billion)
columns.
See http://wiki.apache.org/cassandra/CassandraLimitations for more details.
This might be an issue with selinux. You can try this quickly to
temporarily disable selinux enforcement:
/usr/sbin/setenforce 0 (as root)
and then start cassandra as your user.
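If that turns out to be the culprit, a couple of follow-ups (standard Red
Hat / CentOS tooling; adjust for your distribution):

# check the current enforcement mode
/usr/sbin/getenforce

# to keep the relaxed mode across reboots, edit /etc/selinux/config and set
#   SELINUX=permissive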
On Fri, Nov 26, 2010 at 1:00 AM, Jason Pell wrote:
> I restarted the box :-) so it's well and truly set
>
> Sent from
Hi guys,
I would like to be able to do slices, but I have been reading that Cassandra
becomes very slow when there are too many columns under one key. Is that
true, and if so, what are the suggestions?
P.S.
The value of each column is just 1, so it's quite a flat model; the reason
behind it is slicing on the column name.
On Fri, Nov 26, 2010 at 10:49 AM, Peter Schuller
wrote:
>> Making compaction parallel isn't a priority because the problem is
>> almost always the opposite: how do we spread it out over a longer
>> period of time instead of sharp spikes of activity that hurt
>> read/write latency. I'd be very sur
Hi guys,
I have a key with 3 million+ columns, but when I try to run get_count on it
I get an error if I set the limit to more than 46000 or so. Any ideas?
In the previous API there was no predicate at all, so it simply counted the
number of columns; now it's not so simple any more.
Please let me
> Making compaction parallel isn't a priority because the problem is
> almost always the opposite: how do we spread it out over a longer
> period of time instead of sharp spikes of activity that hurt
> read/write latency. I'd be very surprised if latency would be
> acceptable if you did have paral
> I keep running into the following error while running a nodetool cleanup:
Depending on version, maybe you're running into this:
https://issues.apache.org/jira/browse/CASSANDRA-1674
Note, though, that independently of the above, if your 80 GB is
mostly a single column family, you're in dan
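A quick way to check how much headroom you actually have (assuming the
default data directory layout; a compaction can need free space on the order
of the size of the column family being compacted):

df -h /var/lib/cassandra
du -sh /var/lib/cassandra/data/*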
Hello,
I keep running into the following error while running a nodetool cleanup:
ERROR [COMPACTION-POOL:1] 2010-11-26 12:36:38,383 CassandraDaemon.java
(line 87) Uncaught exception in thread
Thread[COMPACTION-POOL:1,5,main]
java.lang.UnsupportedOperationException: disk full
at
org.apache
Sorry - just realised this is now a parameter on the CFDef
From: Dr. Andrew Perella [mailto:a...@eutechnyx.com]
Sent: 26 November 2010 14:17
To: user@cassandra.apache.org
Subject: RE: Newbie question on Cassandra mem usage
How can I set these per CF when I create them dynamically?
Regards,
Andrew
How can I set these per CF when I create them dynamically?
Regards,
Andrew
From: Aaron Morton [mailto:aa...@thelastpickle.com]
Sent: 22 November 2010 21:40
To: user@cassandra.apache.org
Subject: Re: Newbie question on Cassandra mem usage
They are memtable_throughput_in_mb, memtable_flush_after_mins
Making compaction parallel isn't a priority because the problem is
almost always the opposite: how do we spread it out over a longer
period of time instead of sharp spikes of activity that hurt
read/write latency. I'd be very surprised if latency would be
acceptable if you did have parallel compaction.
It makes it far simpler to enter queries though. No more putting everything
on one line, which I assume is why the change was made. All in all, after
the initial keyboard bashing, I am happy :-)
2010/11/26 Héctor Izquierdo Seliva :
> Yes, I know what you mean. I bashed my head a few times against
Yes, I know what you mean. I bashed my head a few times against the
keyboard :)
On Fri, 26-11-2010 at 21:37 +1100, Jason Pell wrote:
> It works perfectly, thanks so much, I was finding it a little frustrating.
>
> On Fri, Nov 26, 2010 at 9:36 PM, Jason Pell wrote:
> > no way - well I certa
It works perfectly, thanks so much, I was finding it a little frustrating.
On Fri, Nov 26, 2010 at 9:36 PM, Jason Pell wrote:
> no way - well I certainly feel stupid! Is this new? It worked without
> it on beta 3?
>
> 2010/11/26 Héctor Izquierdo Seliva :
>> Try ending the lines with ;
>>
>> Rega
no way - well I certainly feel stupid! Is this new? It worked without
it on beta 3?
2010/11/26 Héctor Izquierdo Seliva :
> Try ending the lines with ;
>
> Regards
>
> On Fri, 26-11-2010 at 21:25 +1100, jasonmp...@gmail.com wrote:
>> Hi,
>>
>> So I had this working perfectly with beta 3 and
Try ending the lines with ;
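For example (host and port are just the defaults):

bin/cassandra-cli -host localhost -port 9160

and then every statement has to end with a semicolon, e.g.

show cluster name;
show keyspaces;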
Regards
On Fri, 26-11-2010 at 21:25 +1100, jasonmp...@gmail.com wrote:
> Hi,
>
> So I had this working perfectly with beta 3 and now it fails.
> Basically what I do is as follows:
>
> 1) Extract new rc1 tarball.
> 2) Prepare location based on instructions in Readm
Hi,
So I had this working perfectly with beta 3 and now it fails.
Basically what I do is as follows:
1) Extract new rc1 tarball.
2) Prepare location based on instructions in Readme.txt:
sudo rm -r /var/log/cassandra
sudo rm -r /var/lib/cassandra
sudo mkdir -p /var/log/cassandra
sudo chown -R `whoam