Hello All,
I have installed Cassandra DataStax Community Edition 2.2 and Python 2.6,
but when I run ./cqlsh I get the following error:
"No appropriate python interpreter found."
I know the error indicates that I have an incompatible Python version, but
I've already installed Python 2.6 and
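For what it's worth, the cqlsh wrapper script shipped with recent Cassandra releases looks for Python 2.7 specifically, so 2.6 being installed would not satisfy it. A quick check, as a sketch (the cqlsh.py path is an assumption about a typical package install):

    # Confirm which interpreters are actually on the PATH
    python --version
    python2.7 --version
    # If 2.7 is present but not the default, invoke cqlsh through it directly
    python2.7 /usr/share/cassandra/bin/cqlsh.py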
I'm seeing the following:
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException:
/data05/rhq/data/rhq/six_hour_metrics/rhq-six_hour_metrics-ic-1-Data.db (No such file or directory)
at org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:53)
at org.a
We've been working on tracking down the causes of nodes in the cluster
incorrectly marking other, healthy nodes down. We've identified three
scenarios. The first two deal with the Gossip thread blocking while
processing a state change, preventing subsequent heartbeats from being
processed:
1. Wr
On Tue, Nov 5, 2013 at 4:36 PM, Sridhar Chellappa wrote:
>
> 1. *If not, what is the right backup strategy?*
>
You didn't specify, but it sounds like you are doing a snapshot and then a
full off-host backup of the SSTables?
Perhaps instead of point-in-time full backups, you could continuous
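If the suggestion here is continuous/incremental backups (my reading of the cut-off sentence), a sketch with illustrative names:

    # Enable incremental backups in cassandra.yaml (a node-local setting):
    #   incremental_backups: true
    # Each flushed SSTable is then hard-linked into the table's backups/
    # directory, so new data can be shipped off-host as it appears instead
    # of bulk-copying full snapshots.
    # Keep an occasional full snapshot as a baseline (tag is illustrative):
    nodetool snapshot -t weekly_baseline my_keyspace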
I don't understand how the creation of a snapshot causes any load
whatsoever. By definition, a snapshot is a hard link to an existing
SSTable. The SSTable is not physically copied, so there is no disk
I/O; the link is just another reference to the same inode.
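A minimal demonstration of the hard-link point (paths and names are illustrative):

    # Take a snapshot, then compare inode numbers (first column of ls -li):
    # the live SSTable and its snapshot entry share an inode, so nothing
    # was physically copied at snapshot time.
    nodetool snapshot -t demo my_keyspace
    ls -li /var/lib/cassandra/data/my_keyspace/my_table/*-Data.db
    ls -li /var/lib/cassandra/data/my_keyspace/my_table/snapshots/demo/*-Data.db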
--
Ray //o-o\\
On Tue, Nov 5, 2013 at 8:09 PM, A
Why not just have a small DC/ring of nodes from which you do your
snapshots/backups? I wouldn't take nodes offline from the ring just to
back them up.
The other option is to add sufficient nodes to handle your existing
request I/O plus backups. It sounds like you might already be I/O
constrained.
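A sketch of the dedicated backup DC idea above, with hypothetical keyspace and DC names: give the backup DC a single replica and run snapshots only there, so backup I/O never competes with client traffic in the main DC.

    cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication =
      {'class': 'NetworkTopologyStrategy', 'main_dc': 3, 'backup_dc': 1};"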
We are running into problems where backup jobs are consuming a huge
amount of bandwidth on the C* nodes. While we could schedule a dedicated
time window for backups alone, the request pattern is so random that
there is no particular time at which we can periodically schedule backups.
My current thinking is
> Date: Tue, 5 Nov 2013 23:20:37 +0100
> Subject: Re: filter using timeuuid column type
> From: t...@drillster.com
> To: user@cassandra.apache.org
This is because time2 is not part of the primary key. Only the primary key
column(s) can be queried with > and <. Secondary indexes (like your
timeuuid_test2_idx) can only be queried with the = operator.
Maybe you can make time2 also part of your primary key?
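A minimal sketch of that change (only time2 and the timeuuid_test2 name come from the thread; the other columns are assumed): with time2 as a clustering column, range operators work within a partition.

    cqlsh -e "
      CREATE TABLE timeuuid_test2 (
        id int,
        time2 timeuuid,
        value text,
        PRIMARY KEY (id, time2)   -- time2 as a clustering column
      );
      SELECT * FROM timeuuid_test2
      WHERE id = 1 AND time2 > maxTimeuuid('2013-11-01 00:00+0000');"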
Good luck,
Tom
On Mon, Nov 4, 201
Hi Thomas,
I understand your concerns about ensuring the integrity of your data when
having to maintain the indexes yourself.
In some situations, using Cassandra's built-in secondary indexes is more
efficient: when many rows contain the indexed value. Maybe your
permissions fall in this categ
On Tue, Nov 5, 2013 at 12:06 AM, Fabian Seifert
<fabian.seif...@frischmann.biz> wrote:
> It keeps crashing with OOM on CommitLog replay:
>
Probably this issue, fixed in 2.0.2:
https://issues.apache.org/jira/browse/CASSANDRA-6087
=Rob
Hi,
I know it is "Not Supported and also Not Advised", but currently we have
half of our Cassandra cluster on 1.2.9 and the other half on 2.0.0, as we
got stuck during an upgrade. We already know that it was not a very wise
decision, but that's what we've got now.
I have basically two questions. First,
What happens if they are not successfully delivered? Will they
eventually TTL out?
Also, do I need to truncate hints on every node, or are they replicated?
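For reference, a hedged sketch: hints live in each node's local system keyspace, which is not replicated, so they have to be cleared per node.

    # Newer builds have a nodetool command for this:
    nodetool truncatehints
    # Otherwise, run against each node in turn (NODE_IP is a placeholder):
    cqlsh "$NODE_IP" -e "TRUNCATE system.hints;"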
Oleg
On 2013-11-04 21:34:55 +0000, Robert Coli said:
On Mon, Nov 4, 2013 at 11:34 AM, Oleg Dulin wrote:
I have a dual DC setup, 4
We have the same problem.
2013/11/5 Jiri Horky
> Hi there,
>
> we are seeing extensive memory allocation leading to quite long and
> frequent GC pauses when using the row cache. This is on a Cassandra 2.0.0
> cluster with the JNA 4.0 library, with the following settings:
>
> key_cache_size_in_mb: 300
> key_ca
We are currently evaluating Cassandra 2.0 to be used with a project.
The cluster consists of 5 identical nodes; each has 16 GB RAM, a 6-core
Xeon, and a 2 TB hard disk.
The max heap size is set to 8 GB and row_cache_size_in_mb=0.
The last test was a write test that ran for several days (with nearly