Hi,
I know it has been deprecated, but does OrderPreservingPartitioner still
work with 1.2?
I just wanted to see how it works, but I got a couple of exceptions, as below:
ERROR [GossipStage:2] 2013-08-23 07:03:57,171 CassandraDaemon.java (line
175) Exception in thread Thread[GossipStage:2,5,main]
jav
It can handle some millions of columns, but not much more than that, say
10M. A request for such a row concentrates on a particular node, so
performance degrades.
> I also had an idea for a semi-ordered partitioner: instead of a single
> MD5, have two MD5s.
Works for us with wide rows of about 40-50 M columns,
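For what it's worth, CQL3 composite primary keys already give a rough
approximation of that semi-ordered idea: the partition key is distributed
by the partitioner, while rows inside a partition stay ordered by the
clustering column. A minimal sketch, with hypothetical table and column
names:

create table events (
    bucket text,     -- partition key: hashed, spreads load across nodes
    ts timeuuid,     -- clustering column: kept ordered within the partition
    payload text,
    primary key (bucket, ts)
);

-- range scans are then efficient within a single partition:
select * from events
 where bucket = 'day-2013-08-23'
   and ts > minTimeuuid('2013-08-23 00:00')
   and ts < maxTimeuuid('2013-08-23 12:00');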
Hi,
I am currently using about 10 CF to store temporal data. Those data are
growing pretty big (hundreds of GB, when I actually only need information
from the last month, i.e. about hundreds of MB).
I am going to delete the old (and now useless) data, but I cannot always
use TTL since I have counters too. Ye
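One common approach here (a sketch only, with hypothetical table names) is
to bucket temporal data by period, so that expiring a month means deleting
a whole partition, or dropping a whole table, rather than relying on TTL,
which counter columns do not support:

-- all of June's counters live under one partition key:
delete from metrics_by_month where month = '2013-06';

-- or, with one table per period:
drop table metrics_2013_06;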
For the first exception: OPP was not working in 1.2. It has been fixed,
but the fix is not yet in the latest 1.2.8 release.
Jira issue about it: https://issues.apache.org/jira/browse/CASSANDRA-5793
On Fri, Aug 23, 2013 at 12:51 PM, Takenori Sato wrote:
> Hi,
>
> I know it has been deprecated, but Order
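For reference, the partitioner is chosen cluster-wide in cassandra.yaml; a
sketch only, since OPP is deprecated and the partitioner cannot be changed
once a cluster already holds data:

# cassandra.yaml
partitioner: org.apache.cassandra.dht.OrderPreservingPartitioner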
I need to perform range queries efficiently. I have a table like:
users
---
user_id | age | gender | salary | ...
The attribute user_id is the PRIMARY KEY.
Example query:
select * from users where user_id = 'x' and age > y and age < z and
salary > a and salary < b and gender = 'M';
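A note on modelling (a hedged sketch, with hypothetical names): CQL range
predicates are generally only served on clustering columns, and only on one
such column per query, so independent ranges on both age and salary cannot
be answered in a single query. One option is to cluster on the column you
range over most and filter the rest client-side or via a second table:

create table users_by_age (
    user_id text,
    age int,
    gender text,
    salary bigint,
    primary key (user_id, age)
);

-- the age range is now served by the clustering order:
select * from users_by_age
 where user_id = 'x' and age > 20 and age < 40;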
(I'm using Cassandra 1.2.8 and Pig 0.11.1)
I'm loading some simple data from Cassandra into Pig using CqlStorage. The
CqlStorage loader defines a Pig schema based on the Cassandra schema, but
it seems to be wrong.
If I do:
data = LOAD 'cql://bookdata/books' USING CqlStorage();
DESCRIBE data;
I
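If the inferred schema is wrong, one workaround (a sketch; the column names
below are assumptions, not the real bookdata schema) is to re-project the
relation with explicit casts so downstream operators see the intended types:

-- Pig: cast columns explicitly after loading
typed = FOREACH data GENERATE (chararray)isbn, (chararray)title;
DESCRIBE typed;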
Take a look at the following article:
http://www.datastax.com/dev/blog/when-to-use-leveled-compaction
You'll want to monitor your IOPS for a while to make sure you can spare the
overhead before you try it. Certainly switch one column family at a time,
and only where the use case makes sense given the
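For reference, switching a single column family to leveled compaction in
1.2-era CQL3 looks roughly like this (keyspace and table names are
hypothetical):

alter table ks.cf
 with compaction = { 'class' : 'LeveledCompactionStrategy' };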
On Thu, Aug 22, 2013 at 7:53 PM, Faraaz Sareshwala <
fsareshw...@quantcast.com> wrote:
> According to the datastax documentation [1], there are two types of row
> cache providers:
>
...
> The off-heap row cache provider does indeed invalidate rows. We're going
> to look into using the ConcurrentL
On Wed, 2013-08-21 at 10:42 -0700, Robert Coli wrote:
> On Wed, Aug 21, 2013 at 3:58 AM, Tim Wintle wrote:
>
> > What would the best way to achieve this? (We can tolerate a fairly short
> > period of downtime).
> >
>
> I think this would work, but may require a full cluster shutdown.
>
> 1) sto
I appear to have a problem illustrated by
https://issues.apache.org/jira/browse/CASSANDRA-1955. At low data
rates, I'm seeing mutation messages dropped because writers are
blocked as I get a storm of memtables being flushed. OpsCenter's
memtables also seem to contribute to this:
INFO [OptionalTasks:
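The knobs usually involved in flush storms live in cassandra.yaml; a sketch
with illustrative values, not recommendations:

# cassandra.yaml
memtable_total_space_in_mb: 2048   # cap total memtable memory
memtable_flush_writers: 2          # parallel flush threads
memtable_flush_queue_size: 8       # flushes that may queue before writes block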
I can't emphasise enough the value of testing row caching against your
workload for sustained periods and comparing the results to just leveraging
the filesystem cache and/or SSDs. That said, the default off-heap cache can
work for structures that don't mutate frequently, and whose rows are not
very wide such t
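For context, row caching in 1.2 is opt-in per column family on top of a
global cap; a sketch (keyspace and table names hypothetical):

# cassandra.yaml: global cap; must be > 0 for any rows to be cached
row_cache_size_in_mb: 512

-- then enable it per column family:
alter table ks.cf with caching = 'rows_only';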
Hi All,
We are evaluating our JVM heap size configuration on Cassandra 1.2.8 and
would like some feedback from the community as to what the proper JVM heap
size should be for Cassandra nodes deployed on Amazon EC2. We are running
m2.4xlarge EC2 instances (64GB RAM, 8 core, 2 x 840GB d
On Thu, Aug 22, 2013 at 10:40 AM, Jay Svc wrote:
> it's DSE 3.1, Cassandra 2.1
>
Not 2.1... 1.2.1? Web search is sorta inconclusive on this topic, you'd
think it'd be more easily referenced?
=Rob
The advice I heard at the New York C* conference, which we follow, is to
use the m2.2xlarge and give it about 8 GB of heap. The m2.4xlarge seems
like overkill (or at least overpriced).
Brian
On Fri, Aug 23, 2013 at 6:12 PM, David Laube wrote:
> Hi All,
>
> We are evaluating our JVM heap size configuration
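For reference, the heap is pinned in conf/cassandra-env.sh; a sketch
matching the ~8 GB suggestion above (values illustrative, not tuned):

# conf/cassandra-env.sh
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"   # often sized around 100 MB per physical core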