I have been experimenting with the latest version of Cassandra for storing huge data in
our application.
Writes are doing well, but when it comes to reads I have observed that
Cassandra is running into "too many open files" issues. When I check the logs,
it is not able to open the Cassandra data files any more before o
Hi Murthy,
Did you do a package install (.deb?) or did you download the tar?
If the latter, you have to adjust the limits.conf file
(/etc/security/limits.conf) to raise the nofile (number of files open) for the
cassandra user.
If you are using the .deb package, the limit is already raised to 100
Thanks Pieter for the quick reply.
I have downloaded the tarball, and have changed limits.conf as per
the documentation, like below.
* soft nofile 32768
* hard nofile 32768
root soft nofile 32768
root hard nofile 32768
* soft memlock unlimited
* hard memlock unlimited
root soft memlock un
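One thing worth checking (a generic sketch, not specific to this cluster): limits.conf is only read at login, so Cassandra has to be restarted from a fresh session before the new values apply. To see what is actually in force:

```shell
# Soft and hard open-file limits for the current shell/session:
ulimit -Sn
ulimit -Hn
# For an already-running Cassandra process on Linux, inspect /proc
# (replace <pid> with the real process id):
#   grep 'Max open files' /proc/<pid>/limits
```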
Tested on versions 2.0.1 and 2.0.2.
At complete idle (nothing stored or queried) I see that some random
(depending on the tests I do) column family gets compacted over and over again
(for 48h already). Total data size is only 3.5 GB. The column family was created with
SSTableSize : 10 MB
Using
Hey guys, I just started learning Cassandra recently and have a simple
(hopefully) question on querying.
There's a table with a composite primary key: mdid and bucket_id. So I
assume mdid is going to be the partition key and bucket_id a clustering
key. There are also two more columns to hold a text and a
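For what it's worth, the schema described above would look roughly like this; the table name and the two extra columns are made up for illustration:

```sql
CREATE TABLE buckets (
    mdid text,
    bucket_id int,
    label text,        -- hypothetical extra column
    payload text,      -- hypothetical extra column
    PRIMARY KEY (mdid, bucket_id)
);

-- The first component of the PRIMARY KEY (mdid) is the partition key;
-- bucket_id is the clustering key, so within one mdid partition rows
-- are stored ordered by bucket_id and can be sliced:
SELECT * FROM buckets WHERE mdid = 'some-id' AND bucket_id >= 10;
```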
Hi Murthy,
32768 is a bit low (I know the DataStax docs recommend this). But our production
env is now running with 1kk (i.e. 1,000,000), or you can even set it to unlimited.
Pieter
From: Murthy Chelankuri [mailto:kmurt...@gmail.com]
Sent: donderdag 7 november 2013 12:46
To: user@cassandra.apache.org
Subject: Re: Get
Hi all,
When I upgraded C* from 1.1.9 to 1.2.6, I noticed that the previous
HintsColumnFamily was directly truncated.
Can you tell me why?
I ask because consistency is important to my services.
Best Regards,
Boole Guo
Hi,
When I debug Cassandra in Eclipse, only a little log info appears. Once the
server finishes startup, I notice the log info is no longer printed to the console.
The OS is Windows 7.
Why is that?
Thanks.
Best Regards,
Boole Guo
On 11/06/2013 11:18 PM, Aaron Morton wrote:
The default row cache is off the JVM heap; have you changed to the
ConcurrentLinkedHashCacheProvider?
ConcurrentLinkedHashCacheProvider was removed in 2.0.x.
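For anyone following along on 1.x, the provider is selected in cassandra.yaml; this is a sketch from memory, so verify the exact keys against the yaml shipped with your version:

```yaml
# Off-heap provider (the 1.x default):
row_cache_provider: SerializingCacheProvider
# On-heap alternative (removed in 2.0):
# row_cache_provider: ConcurrentLinkedHashCacheProvider
row_cache_size_in_mb: 0   # 0 disables the row cache entirely
```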
Hi, all!
I've performed a test in my cluster regarding the sstableloader behaviour on
counter column families. The test cluster has 3 nodes running Cassandra 1.2.3
with RF=3. The machine that sstableloader ran on had Cassandra 1.2.11.
The counter column family had only one row, so I chose one nod
Yes. I am taking a Snapshot and then offloading the full data into S3. How
will Table Snap help?
On Wed, Nov 6, 2013 at 6:57 AM, Robert Coli wrote:
> On Tue, Nov 5, 2013 at 4:36 PM, Sridhar Chellappa
> wrote:
>
>>
>>1. *If not, what is the right backup strategy ?*
>>
>> You didn't specif
Sorry - I meant more that it would be worth experimenting with (much)
smaller page sizes as opposed to getting back one giant page. This would
cut down the read command overhead but you would still have the parsing to
deal with.
Aaron has a good point though. A big query is still a bad idea if it
Thanks--that's what I was wondering. So, if I understand you correctly,
it sounds like a single
SELECT ... WHERE foo in (k items);
can tie up k threads rather than 1 thread per node which can starve
other tasks on a cluster. AFAICT, there's no way to say "this query
should be limited to
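One client-side mitigation for the fan-out described above (a sketch, not advice from this thread; `fetch_one` is a stub standing in for a real per-key driver call) is to split the IN list into single-key reads and cap how many run concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_one(key):
    """Placeholder for a real single-partition read,
    e.g. SELECT ... WHERE foo = :key via your driver."""
    return ("row-for", key)

def fetch_many(keys, max_in_flight=4):
    # At most max_in_flight reads are outstanding at once, instead of one
    # big IN (...) that may tie up a thread per key on the cluster.
    with ThreadPoolExecutor(max_workers=max_in_flight) as pool:
        return list(pool.map(fetch_one, keys))

print(fetch_many(["a", "b", "c"]))
```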
On Mon, Nov 4, 2013 at 2:09 PM, Elias Ross wrote:
>
> Thanks Robert.
>
> CASSANDRA-6298
>
>
I've been attempting a scrub and see the same issue. Is there some mapping
issue? Are six data directories too much for Cassandra?
$ ./nodetool -p 7299 scrub rhq
Exception in thread "main" java.lang.Runti
Regardless of the size of 'k', each key gets turned into a ReadCommand
internally and is (eventually) delegated to StorageProxy#fetchRows:
https://github.com/apache/cassandra/blob/cassandra-1.2/src/java/org/apache/cassandra/service/StorageProxy.java#L852
You still send the same number of ReadComma
Last time to respond to myself...
On Thu, Nov 7, 2013 at 8:23 AM, Elias Ross wrote:
> I've been attempting a scrub and see the same issue. Is there some mapping
> issue? Are six data directories too much for Cassandra?
>
>
With the scrub --no-snapshots option I get:
java.util.concurrent.Execution
I see 100,000 recommended in the DataStax documentation for the nofile limit
since Cassandra 1.2:
http://www.datastax.com/documentation/cassandra/2.0/webhelp/cassandra/install/installRecommendSettings.html
-Arindam
From: Pieter Callewaert [mailto:pieter.callewa...@be-mobile.be]
Sent: Thursday,
How do I move a token to another node on 1.2.x? I have tried the move command:
[cassy@dsat103.e1a ~]$ nodetool move 168755834953206242653616795390304335559
Exception in thread "main" java.io.IOException: target token
168755834953206242653616795390304335559 is already owned by another node.
at
org.apache
On Thu, Nov 7, 2013 at 8:23 AM, Elias Ross wrote:
>
> I've been attempting a scrub and see the same issue. Is there some mapping
> issue? Are six data directories too much for Cassandra?
>
Do both filesystems support hard links?
How do you have JBOD defined?
=Rob
On Thu, Nov 7, 2013 at 6:28 AM, Sridhar Chellappa wrote:
> Yes. I am taking a Snapshot and then offloading the full data into S3.
> How will Table Snap help?
>
As I detailed in my previous mail :
1) incremental style backup, instead of snapshot + full
2) tracks meta information about backup set
Thanks for sharing tablesnap. It's just what I have been looking for.
On Thu, Nov 7, 2013 at 5:10 PM, Robert Coli wrote:
> On Thu, Nov 7, 2013 at 6:28 AM, Sridhar Chellappa
> wrote:
>
>> Yes. I am taking a Snapshot and then offloading the full data into S3.
>> How will Table Snap help?
>>
>
On Thu, Nov 7, 2013 at 5:08 PM, Robert Coli wrote:
> On Thu, Nov 7, 2013 at 8:23 AM, Elias Ross wrote:
>
>>
>> I've been attempting a scrub and see the same issue. Is there some
>> mapping issue? Are six data directories too much for Cassandra?
>>
>
> Do both filesystems support hard links?
>
>
On Thu, Nov 7, 2013 at 3:58 PM, Daning Wang wrote:
> How to move a token to another node on 1.2.x? I have tried move command,
>
...
> We don't want to use cassandra-shuffle, because it puts too much load on
> the server. We just want to move some tokens.
>
driftx on #cassandra reminded me of :
I am using the below table in our use case -
create table testing1 (
employee_id text,
employee_name text,
value text,
last_modified_date timeuuid,
primary key (employee_name,last_modified_date)
);
In my above table, employee_id will always be unique, starting from 1 till
327
> On 8 Nov 2013, at 12:54 pm, Techy Teck wrote:
>
> I am using the below table in our use case -
>
> create table testing1 (
> employee_id text,
> employee_name text,
> value text,
> last_modified_date timeuuid,
> primary key (employee_name,last_modified_date)
>);
Bef
I am using the below table in our use case -
create table test_new (
employee_id text,
employee_name text,
value text,
last_modified_date timeuuid,
primary key (employee_id, last_modified_date)
);
create index employee_name_idx on test_new (e
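Assuming the cut-off index statement completes on employee_name (an assumption; the original line is truncated), the query patterns this design supports would be roughly:

```sql
-- All versions for one employee, newest first within the partition:
SELECT * FROM test_new
WHERE employee_id = '1'
ORDER BY last_modified_date DESC;

-- Lookup via the (assumed) secondary index on employee_name:
SELECT * FROM test_new WHERE employee_name = 'alice';
```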
@Jacob - See my earlier question with the subject *CQL Tables in Cassandra
with an Index*.
I just sent it out. Sorry for the duplication; after I posted it, I realized
I had missed some important details.
On Thu, Nov 7, 2013 at 6:52 PM, Jacob Rhoden wrote:
>
>
> > On 8 Nov 2013, at 12:54 pm, Techy T
Check if it's an issue with permissions or broken links.
On Nov 6, 2013 11:17 AM, "Elias Ross" wrote:
>
> I'm seeing the following:
>
> Caused by: java.lang.RuntimeException: java.io.FileNotFoundException:
> /data05/rhq/data/rhq/six_hour_metrics/rhq-six_hour_metrics-ic-1-Data.db (No
> such file o
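A generic way to check the "broken links" part (directory names here are examples; substitute the data_file_directories from your cassandra.yaml):

```shell
# Print any broken symlinks under the data directories.
# find -xtype l matches symlinks whose target no longer exists (GNU find).
for d in /data01 /data02 /data03 /data04 /data05 /data06; do
    if [ -d "$d" ]; then
        find "$d" -xtype l
    fi
done
# Also confirm the cassandra user can read the files, e.g.:
#   sudo -u cassandra ls -l /data05/rhq/data/rhq/
```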
Hi all,
If I understood correctly, in Cassandra, authenticated users can see the
list of all keyspaces (i.e. the full schema).
Is that the default behavior in Cassandra? Can we restrict it?
Thanks,
Bhathiya
Thanks, I missed that issue, but it solved our problems.
Regards
Fabian
From: Robert Coli
Sent: Tuesday, 5 November 2013 19:12
To: user@cassandra.apache.org
On Tue, Nov 5, 2013 at 12:06 AM, Fabian Seifert
wrote:
It keeps crashing with OOM on CommitLog replay:
https:
Please take a look at https://issues.apache.org/jira/browse/CASSANDRA-6266
for details
-M
"Bhathiya Jayasekara" wrote in message
news:capt+24qd4e_+_6oc3rrmzpq2c6yoij1yqydlkcepngabxk_...@mail.gmail.com...
Hi all,
If I understood correctly, in Cassandra, authenticated users can see the
list