I think what he means is: do you know what day the 'oldest' day is? E.g.
if you have a rolling window of, say, 2 weeks, structure your query so
that your slice range only goes back 2 weeks, rather than to the
beginning of time. This would avoid iterating over all the tombstones
from prior to the
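(For illustration only, a rough Hector-based sketch of such a bounded slice; the
column family name "events" and the use of millisecond timestamps as LongType
column names are assumptions, not something from this thread.)

import me.prettyprint.cassandra.serializers.LongSerializer;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.ColumnSlice;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.SliceQuery;

public class RollingWindowSlice {
    // Slice only the last two weeks instead of the whole row, so the read
    // never has to walk the older, tombstone-heavy part of the row.
    public static ColumnSlice<Long, String> lastTwoWeeks(Keyspace ks, String rowKey) {
        long twoWeeksAgo = System.currentTimeMillis() - 14L * 24 * 60 * 60 * 1000;
        SliceQuery<String, Long, String> q = HFactory.createSliceQuery(
                ks, StringSerializer.get(), LongSerializer.get(), StringSerializer.get());
        q.setColumnFamily("events");   // hypothetical CF name
        q.setKey(rowKey);
        q.setRange(twoWeeksAgo, Long.MAX_VALUE, false, 10000);  // start = window start
        return q.execute().get();
    }
}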
OK great, thanks Ed, that's really helpful.
Just wanted to make sure I wasn't missing something fundamental.
On 13/11/2011 23:57, Ed Anuff wrote:
Yes, correct, it's not going to clean itself. Using your example with
a little more detail:
1 ) A(T1) reads previous location (T0,L0) from index_en
I have dug a bit more to try to find the root cause of the error, and I have
some more information.
It seems that it all started after I upgraded Cassandra from 0.8.x to 1.0.0.
When I do an incr on the CLI I also get a timeout.
row_cache_save_period_in_seconds is set to 60sec.
Could be a problem f
Check if Cassandra secondary index meets your requirement.
Thank you,
Jaydeep
From: Aklin_81
To: user
Sent: Sunday, 13 November 2011 12:32 PM
Subject: Fast lookups for userId to username and vice versa
I need to create a mapping from userId(s) to username(s) which need to
p
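(A minimal cassandra-cli sketch of the secondary-index approach suggested above;
the "users" column family and "username" column are made-up names, and this
assumes the row key is the userId.)

create column family users
  with comparator = UTF8Type
  and key_validation_class = UTF8Type
  and column_metadata = [{column_name: username,
                          validation_class: UTF8Type,
                          index_type: KEYS}];

get users where username = 'jsmith';

With the userId as the row key, the userId -> username direction is a plain key
lookup, and the indexed username column handles the reverse direction.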
Thanks for your feedback Kolar.
Well, to be honest I was thinking of using that connection in production,
not for a backup node.
My Cassandra deployment works just like expensive file caching and
replication - I mean, all I use it for is to replicate some 5 million files
of 2M each across a few nod
Well to be honest I was thinking of using that connection in
production, not for a backup node.
For production there are several problems: added network latency, which
is inconsistent and varies greatly during the day; sometimes you will face
network lags which will break the cluster for a while (abo
Broadband here is fairly stable; to be honest I don't remember the last time I
had problems such as larger-than-expected latency or downtime - ISP Bethere,
UK.
My application can cope fine with up to a 10 min lag (data
freshness); however, taking your input into consideration I agree with you,
so I don't think
From the log output it seems that during hinted handoff delivery, compaction
is kicked off too soon. There needs to be some delay for the flusher to write
the sstable.
INFO [GossipStage:1] 2011-11-14 13:16:03,933 Gossiper.java (line 745)
InetAddress /***.99.40 is now UP
INFO [HintedHandoff:1] 2011-11-14 1
I am new to Cassandra. I searched for random write examples in the wiki
(http://wiki.apache.org/cassandra/ClientOptions) and the mailing list, but
did not find a similar one. My question is: does Cassandra support
random write access? Is there any code example that explains this? Or
any doc that may provide s
On Mon, Nov 14, 2011 at 1:21 AM, Michael Vaknine wrote:
> Hi,
>
> After configuring the encryption on Cassandra.yaml I get this error when
> upgrading from 1.0.0 to 1.0.2
> Attached the log file with the errors.
https://issues.apache.org/jira/browse/CASSANDRA-3466
-Brandon
Does this mean that I have to wait for 1.0.3?
-Original Message-
From: Brandon Williams [mailto:dri...@gmail.com]
Sent: Monday, November 14, 2011 3:51 PM
To: user@cassandra.apache.org
Cc: cassandra-u...@incubator.apache.org
Subject: Re: Upgrade Cassandra Cluster to 1.0.2
On Mon, Nov 14,
Hi,
Yes, the heap size is set to 2GB on all nodes. Without any activity, the heap
usage is less than 1GB. Does this include the bloom filters?
From the logs I can see that at the beginning of the test the GC is able to
free enough memory to push the heap usage to 1GB or less. However, then a lo
On Mon, Nov 14, 2011 at 7:53 AM, Michael Vaknine wrote:
> Does this mean that I have to wait for 1.0.3?
In the meantime you can just delete the hints and rely on read repair
or anti-entropy repair if you're concerned about the consistency of
your replicas.
-Brandon
Hi,
As of Nov. 9, 2011, Amazon added the us-west-2 (US West - Oregon) region:
http://aws.typepad.com/aws/2011/11/now-open-us-west-portland-region.html
In looking at the EC2Snitch code (in the 0.8.x and 1.0.x branches), I see
it determining which data center (which I think is supposed to be
equivalent to
Well,
I tried to delete the hints on the failed cluster, but I could not start it;
I got other errors such as
ERROR [MutationStage:34] 2011-11-14 15:37:43,813
AbstractCassandraDaemon.java (line 133) Fatal exception in thread
Thread[MutationStage:34,5,main]
java.lang.StackOverflowError
at
ja
I am new to Cassandra. I searched for random write examples
You can access Cassandra data at any node, and keys can be accessed at
random.
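(To make that concrete, a small sketch with the Hector client; the cluster,
keyspace, and column family names here are just the sample-config defaults and
are assumptions, not something from this thread.)

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class RandomWriteExample {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "localhost:9160");
        Keyspace ks = HFactory.createKeyspace("Keyspace1", cluster);
        Mutator<String> m = HFactory.createMutator(ks, StringSerializer.get());
        // Row keys can be written in any order, at any time - there is no
        // append-only or sequential restriction on keys.
        for (int i = 0; i < 10; i++) {
            String key = "user-" + (int) (Math.random() * 1000000);
            m.addInsertion(key, "Standard1",
                    HFactory.createStringColumn("lastSeen",
                            Long.toString(System.currentTimeMillis())));
        }
        m.execute();
    }
}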
It may be the case that your CL is the issue. You are writing it at
ONE, which means that out of the 4 replicas of that key (two in each
data center), you are only putting it on one of them. When you read at
CL ONE, it only looks at a single replica to see if the data is there.
In other words, if y
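(As an illustration, not from this thread: with Hector you can raise the default
consistency level. With RF=4 as in the example above, QUORUM is 3 replicas, so a
QUORUM write plus a QUORUM read always overlap on at least one replica. The
keyspace name is a placeholder.)

import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class QuorumKeyspace {
    public static Keyspace quorumKeyspace(Cluster cluster) {
        ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
        // Reads and writes issued through this keyspace use QUORUM, so a
        // read is guaranteed to see at least one replica that took the write.
        ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.QUORUM);
        ccl.setDefaultWriteConsistencyLevel(HConsistencyLevel.QUORUM);
        return HFactory.createKeyspace("Keyspace1", cluster, ccl);
    }
}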
You should be able to do it as long as you shut down the whole cluster
for it:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Upgrading-to-1-0-tp6954908p6955316.html
On 11/13/2011 02:14 PM, Timothy Smith wrote:
Due to some application dependencies I've been holding off on a
> You can access Cassandra data at any node, and keys can be accessed at
> random.
Including individual columns in a row.
--
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
Hi all,
Quick note about our Cassandra London 1st birthday party!
We'll be looking at what's changed in Cassandra over the past year,
with talks on feature improvements, performance and Hadoop
integration. Please come along if you're UK-based! It's a great chance
to meet other Cassandra users.
h
Hi
While testing the proposed 1.0.3 version I got the following exception
while running repair: (StackOverflowError)
http://pastebin.com/raw.php?i=35Rt7ryB
The affected column family is defined like this:
create column family FileStore
with comparator=UTF8Type and key_validation_class = '
On Mon, Nov 14, 2011 at 8:06 AM, Michael Vaknine wrote:
> Well,
> I tried to delete the hints on the failed cluster but I could not start it
> I got other errors such as
>
> ERROR [MutationStage:34] 2011-11-14 15:37:43,813
> AbstractCassandraDaemon.java (line 133) Fatal exception in thread
> Threa
Additionally, you have the barely-documented but nasty behavior of
Hotspot forcing full GCs when allocateDirect reaches
-XX:MaxDirectMemorySize.
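(A toy demo of that behavior, not Cassandra code: run it with something like
-XX:MaxDirectMemorySize=64m -verbose:gc and full GCs appear as the direct
allocations keep bumping into the cap.)

import java.nio.ByteBuffer;

public class DirectMemoryPressure {
    public static void main(String[] args) {
        // Allocate and immediately drop 1 MB direct buffers. Once the
        // -XX:MaxDirectMemorySize limit is reached, the JVM forces a
        // System.gc() so the unreachable direct buffers can be reclaimed
        // before the next allocation is allowed to proceed.
        for (int i = 0; i < 1000; i++) {
            ByteBuffer.allocateDirect(1024 * 1024);
        }
        System.out.println("done");
    }
}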
On Sun, Nov 13, 2011 at 2:09 PM, Peter Schuller
wrote:
>> I would like to know it also - actually it should be similar, plus there are
>> no dependencie
Hi,
Sorry for the intrusion.
I was speaking to some of the LinkedIn engineers at ApacheCon last week
about how to get
Cassandra into the LinkedIn skills page [1].
They claim that if more people add Cassandra as a skill in their profile then it
will show up. So my request
is: if you use Cassand
Hello everyone,
We're using the bulk loader to load data every day into Cassandra. The
machines that use the bulk loader are different every day, so their IP
addresses change. When I do "describe cluster" I see all the unreachable
nodes that keep piling up from the past few days. Is there a way to remov
Thanks for the note. Ideally I would prefer not to keep track of what the
oldest indexed date is,
because this means that I'm already creating a bit of infrastructure on
top of my database,
with the attendant referential integrity problems.
But I suppose I'll be forced to do that. In addition, I'll h
Hello Giannis,
Can you share a little bit on how to use the bulk loader? We're considering
using the bulk loader for a use case.
Thanks,
Mike
From: Giannis Neokleous [mailto:gian...@generalsentiment.com]
Sent: Monday, November 14, 2011 2:50 PM
To: user@cassandra.apache.org
Subject: BulkLoader
H
Hi
I'm getting this error when I try to run describe cluster:
[] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
Error retrieving data: Internal error processing describe_schem
I am trying to run the word count example and, looking at the source, I
don't see where it knows which job tracker to connect to.
1. Would I be correct in guessing that I have to run hadoop jar ?
(just wondering why I don't see a port/hostname for the job tracker in the
WordCount.java file)
This t
- It would be super cool if all of that counter work made it possible
to support other atomic data types (sets? CAS? just pass an
associative/commutative function to apply).
- Again with types, pluggable type-specific compression.
- Wishy-washy wish: Simpler "elasticity". I would like to go from
6-->8-->7
Re Simpler "elasticity":
Latest opscenter will now rebalance cluster optimally
http://www.datastax.com/dev/blog/whats-new-in-opscenter-1-3
-Jake
On Mon, Nov 14, 2011 at 7:27 PM, Chris Burroughs
wrote:
> - It would be super cool if all of that counter work made it possible
> to support other
On Mon, Nov 14, 2011 at 4:44 PM, Jake Luciani wrote:
> Re Simpler "elasticity":
> Latest opscenter will now rebalance cluster optimally
> http://www.datastax.com/dev/blog/whats-new-in-opscenter-1-3
>
Does it cause any impact on reads and writes while the rebalance is in
progress? How is it handled
+1 on coprocessors
On Mon, Nov 14, 2011 at 6:51 PM, Mohit Anchlia wrote:
> On Mon, Nov 14, 2011 at 4:44 PM, Jake Luciani wrote:
> > Re Simpler "elasticity":
> > Latest opscenter will now rebalance cluster optimally
> > http://www.datastax.com/dev/blog/whats-new-in-opscenter-1-3
> >
>
> Doe
There are 4 jobs submitted by the Cassandra wordcount example; the first
one fails and the other 3 all pass and produce results.
The first job, I noticed, is looking for column name text0 due to i being 0
in the loop. The exception is not going through the wordcount code at all
though, but thi
Well, by editing
src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
in the 1.0.2 Cassandra source, just before the
totalRead++;
KeySlice ks = rows.get(i++);
SortedMap<ByteBuffer, IColumn> map = new TreeMap<ByteBuffer, IColumn>(comparator);
I added the code
if (i >= rows.size
Oh yeah, one more BIG one: in-memory writes with asynchronous write-behind to
disk, like Cassandra does for speed.
So if you have atomic locking, it writes to the primary node (memory) and
some other node (memory) and returns success to the client; it then
asynchronously writes to disk later. This prove to
+1 on co-processors.
Edward
Giannis,
From here:
http://wiki.apache.org/cassandra/Operations#Removing_nodes_entirely
Have you tried "nodetool removetoken" ?
Ernie
On Mon, Nov 14, 2011 at 4:20 PM, wrote:
> Hello Giannis,
>
>
> Can you share a little bit on how to use the bulk loader ? We’re
> considering us
Can you describe how you did the upgrade on these machines? You may
still have some old jars on the classpath.
On Mon, Nov 14, 2011 at 4:03 PM, Silviu Matei wrote:
> Hi
> I'm getting this error when I try to run describe cluster:
> [] describe cluster;
> Cluster Information:
> Snitch: org.apac
I am running java version:
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.7) (6b20-1.9.7-0ubuntu1~10.04.1)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
I have tried to run with -Xss96k or -Xss256k or -Xss320 but it still gives me
the error.
The node has 8GB memory and %GB
It may be the case that your CL is the issue. You are writing it at
ONE, which means that out of the 4 replicas of that key (two in each
data center), you are only putting it on one of them.
Cassandra will always try to replicate the key to all available replicas.
Under normal conditions, if you do