with n-depth slicing of that partitioned row given an
>>> arbitrary query syntax if range queries on clustering keys were allowed
>>> anywhere.
>>>
>>> At present, you can either duplicate the data using the other clustering
>>> key (transaction_time) a
> ...where weatherstation_id = 'foo' and
> event_time >= '2015-01-01 00:00:00' and event_time < '2015-01-02 00:00:00'
> and transaction_time = ''
>
>
>
> On Sat, Feb 14, 2015 at 3:06 AM, Raj N wrote:
>
>> Has anyone designed a bi-temporal table in Cassandra?
Has anyone designed a bi-temporal table in Cassandra? Doesn't look like I
can do this using CQL for now. Taking the time series example from
well-known modeling tutorials in Cassandra -
CREATE TABLE temperatures (
    weatherstation_id text,
    event_time timestamp,
    temperature text,
    PRIMARY KEY (weatherstation_id, event_time)
);
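For reference, a minimal sketch of the bi-temporal layout being discussed
(the table name and the choice of timestamp columns below are assumptions,
not from the original thread): putting transaction_time ahead of event_time
in the clustering order lets you pin a transaction-time version with an
equality and still range over event_time, which is the shape of the query
quoted above.

    CREATE TABLE temperatures_bitemporal (
        weatherstation_id text,
        transaction_time timestamp,
        event_time timestamp,
        temperature text,
        PRIMARY KEY (weatherstation_id, transaction_time, event_time)
    );

    -- Valid: equality on the earlier clustering column, range on the later one.
    SELECT * FROM temperatures_bitemporal
    WHERE weatherstation_id = 'foo'
      AND transaction_time = '2015-01-01 00:00:00'
      AND event_time >= '2015-01-01 00:00:00'
      AND event_time < '2015-01-02 00:00:00';

Slicing the other way (a range on transaction_time for a fixed event_time)
then needs a second table with the clustering columns swapped, which is the
duplication trade-off mentioned above.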
...is reserved in heap still. Any plans to move it off-heap?
-Raj
On Tue, Nov 25, 2014 at 3:10 PM, Robert Coli wrote:
> On Tue, Nov 25, 2014 at 9:07 AM, Raj N wrote:
>
>> What's the latest on the maximum number of keyspaces and/or tables that
>> one can have in Cassandra
What's the latest on the maximum number of keyspaces and/or tables that one
can have in Cassandra 2.1.x?
-Raj
We are planning to upgrade soon. But in the meantime, I wanted to see if we
can tweak certain things.
-Rajesh
On Wed, Nov 5, 2014 at 3:10 PM, Robert Coli wrote:
> On Tue, Nov 4, 2014 at 8:51 PM, Raj N wrote:
>
>> Is there a good formula to calculate heap utilization in Cassandra pre-1.1?
Is there a good formula to calculate heap utilization in Cassandra pre-1.1,
specifically 1.0.10? We are seeing GC pressure on our nodes, and I am
trying to estimate what could be causing it. Per nodetool info, my
steady-state heap is about 10 GB; Xmx is 12 GB.
I have 4.5 GB of bloom filters wh
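A rough back-of-envelope from those numbers, assuming the stock
cassandra-env.sh of that era (which sets -XX:CMSInitiatingOccupancyFraction=75);
these figures are illustrative, not measurements:

    Xmx                        12 GB
    CMS initiating occupancy   0.75 * 12 GB = 9 GB
    steady-state heap          ~10 GB   (already above the 9 GB trigger)
    of which bloom filters     ~4.5 GB

A steady state above the CMS trigger point means the collector runs nearly
continuously, which shows up as GC pressure. Shrinking the bloom filters,
e.g. by raising bloom_filter_fp_chance per column family where that version
supports it, is one of the few levers short of a larger heap.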
One of our nodes keeps crashing with out-of-memory errors. I see the
following error in the logs -
INFO 21:03:54,007 Creating new commitlog segment
/local3/logs/cassandra/commitlog/CommitLog-1348016634007.log
Java HotSpot(TM) 64-Bit Server VM warning: Attempt to allocate stack guard
pages failed.
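That particular warning generally points at native memory or address-space
exhaustion rather than the Java heap itself. A couple of hedged things to
check on Linux (exact limits depend on the environment):

    # address-space and memlock limits for the user running Cassandra
    ulimit -v
    ulimit -l
    # mmap count limit, which Cassandra's guidance suggests raising
    sysctl vm.max_map_count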
Hi,
I have a 2-DC setup (DC1:3, DC2:3). All reads and writes are at
LOCAL_QUORUM. The question is: if I do reads at LOCAL_QUORUM in DC1, will
read repair happen on the replicas in DC2?
Thanks
-Raj
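For context, background read repair is governed per column family by two
knobs: read_repair_chance (may repair replicas in all DCs) and
dclocal_read_repair_chance (confined to the coordinator's DC). A minimal
CQL3-era sketch, assuming a table named temperatures (hypothetical here):

    ALTER TABLE temperatures
      WITH read_repair_chance = 0.0
      AND dclocal_read_repair_chance = 0.1;

With the global chance at 0, a LOCAL_QUORUM read in DC1 should not trigger
background read repair against the DC2 replicas.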
Hi experts,
I am planning to upgrade from 0.8.4 to 1.x. What's the latest stable
version?
Thanks
-Rajesh
Great stuff!!!
On Tue, Jun 26, 2012 at 5:25 PM, Edward Capriolo wrote:
> Hello all,
>
> It has not been very long since the first book was published but
> several things have been added to Cassandra and a few things have
> changed. I am putting together a list of changed content, for example
> fe
> On Tue, Jun 19, 2012 at 11:11 PM, Raj N wrote:
> > But won't that also run a major compaction, which is not recommended
> > anymore?
> >
> > -Raj
> >
> >
> > On Sun, Jun 17, 2012 at 11:58 PM, aaron morton
> > wrote:
How did you solve your problem eventually? I am experiencing something
similar. Did you run cleanup on the node that has 80GB data?
-Raj
On Mon, Aug 15, 2011 at 10:12 PM, aaron morton wrote:
> Just checking: do you have read_repair_chance set to something? The second
> request is going to all re
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 17/06/2012, at 4:06 AM, Raj N wrote:
>
> Nick, do you think I should still run cleanup on the first node?
>
> -Rajesh
>
> On Fri, Jun 15, 2012 at 3:47 PM, Raj N wrote:
>
>> I did run nodetool move. But that
other smaller ones).
>
> The updated answer is "You probably do not want to run major
> compactions, but some use cases could see some benefits"
>
> On Tue, Jun 19, 2012 at 10:51 AM, Raj N wrote:
> > DataStax recommends not to run major compactions. Edward Capriolo
DataStax recommends not to run major compactions. Edward Capriolo's
Cassandra High Performance book suggests that major compaction is a good
thing and should be run on a regular basis. Are there any ground rules
about running major compactions? For example, if you have write-once kind
of data that
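For reference, the operation under discussion is the manually triggered
compaction of all SSTables for a column family into one (keyspace and
column family names below are placeholders):

    nodetool -h <host> compact <keyspace> <column_family>

The usual caveat behind the "not recommended" advice: under size-tiered
compaction the single huge resulting SSTable has no similarly sized peers,
so routine minor compactions can effectively stall for it until the next
major compaction.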
Nick, do you think I should still run cleanup on the first node?
-Rajesh
On Fri, Jun 15, 2012 at 3:47 PM, Raj N wrote:
> I did run nodetool move. But that was when I was setting up the cluster
> which means I didn't have any data at that time.
>
> -Raj
>
>
> On Fr
round won't delete unneeded data after
> the move is done.
>
> Try running 'nodetool cleanup' on all of your nodes.
>
> On Fri, Jun 15, 2012 at 12:24 PM, Raj N wrote:
> > Actually I am not worried about the percentage. It's the data I am
> > concerned
> >
On Fri, Jun 15, 2012 at 11:06 AM, Nick Bailey wrote:
> This is just a known problem with the nodetool output and multiple
> DCs. Your configuration is correct. The problem with nodetool is fixed
> in 1.1.1
>
> https://issues.apache.org/jira/browse/CASSANDRA-3412
>
> On Fri, Jun 15, 2
Hi experts,
I have a 6-node cluster across 2 DCs (DC1:3, DC2:3). I have assigned
tokens using the first strategy (adding 1) mentioned here -
http://wiki.apache.org/cassandra/Operations?#Token_selection
But when I run nodetool ring on my cluster, this is the result I get -
Address    DC    Rack    Status  State   Load       Owns     Token
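For reference, the offset-by-one scheme on that wiki page works out as
follows for RandomPartitioner (token range 0 to 2^127 - 1) with 3 nodes per
DC; these are the formula's values, not necessarily the tokens in the
(truncated) ring output above:

    DC1, node i:  i * 2^127 / 3
        0
        56713727820156410577229101238628035242
        113427455640312821154458202477256070485
    DC2, node i:  i * 2^127 / 3 + 1
        1
        56713727820156410577229101238628035243
        113427455640312821154458202477256070486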
> 50 GB sounds like a long time (btw, do you have compaction on?)
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 20/05/2012, at 3:14 AM, Raj N wrote:
>
> Hi experts,
>
>
Can I infer from this that if I have 3 replicas, then running repair
without -pr on 1 node will repair the other 2 replicas as well?
-Raj
On Sat, Apr 14, 2012 at 2:54 AM, Zhu Han wrote:
>
> On Sat, Apr 14, 2012 at 1:57 PM, Igor wrote:
>
>> Hi!
>>
>> What is the difference between 'repair' and 'repair -pr'?
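To make the distinction concrete (the keyspace name is a placeholder):

    # repair only the range this node is primary for; run it on every node in turn
    nodetool repair -pr <keyspace>

    # repair every range this node replicates, syncing the other replicas of
    # those ranges as a side effect
    nodetool repair <keyspace>

So yes: with RF=3, a full (non -pr) repair on one node also repairs the
corresponding ranges on the other two replicas.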
Hi experts,
I have a 6-node cluster spread across 2 DCs.
Address    DC    Rack    Status  State   Load       Owns     Token
                                                             113427455640312814857969558651062452225
           DC1   RAC13   Up      Normal  95.98 GB   33.33%   0
           DC2   RAC5    Up      Normal  50.79 GB
> You should run repair. If disk space is the problem, try to clean up
> and major-compact before repair.
> You can limit the streaming data by running repair for each column family
> separately.
>
> maki
>
> On 2012/04/28, at 23:47, Raj N wrote:
>
> > I have a
I have a 6-node Cassandra cluster (DC1=3, DC2=3) with 60 GB of data on each
node. I was bulk loading data over the weekend, but we forgot to turn off
the weekly nodetool repair job. As a result, repair was interfering while we
were bulk loading data. I canceled repair by restarting the nodes. But
unfortu
I had 3 nodes with strategy_options (DC1=3) in 1 DC. I added 1 more DC and 3
more nodes. I didn't set the initial token, but I ran nodetool move on the
new nodes (adding 1 to the tokens of the nodes in DC1). I updated the
keyspace to strategy_options (DC1=3, DC2=3). Then I started running nodetool
r
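For reference, the keyspace update described here would have looked roughly
like this in the cassandra-cli of that era (the keyspace name is a
placeholder, and the exact strategy_options syntax varied slightly across
versions):

    update keyspace MyKeyspace
      with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
      and strategy_options = {DC1:3, DC2:3};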
How do I ensure it is indeed using the SerializingCacheProvider?
Thanks
-Rajesh
On Tue, Jul 12, 2011 at 1:46 PM, Jonathan Ellis wrote:
> You need to set row_cache_provider=SerializingCacheProvider on the
> columnfamily definition (via the cli)
>
> On Tue, Jul 12, 2011 at 9:57 AM,
Do we need to do anything special to turn the off-heap cache on?
https://issues.apache.org/jira/browse/CASSANDRA-1969
-Raj
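A sketch of both halves of this in the cassandra-cli of that era (the
column family name is a placeholder, and syntax varied a little by
version):

    update column family MyCF with row_cache_provider = 'SerializingCacheProvider';
    show keyspaces;

The keyspace listing prints each column family definition, including its
row cache provider, which is one way to confirm the setting took effect;
on-heap vs off-heap cache sizes are also visible over JMX.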
I know it doesn't. But is this a valid enhancement request?
On Tue, Jul 5, 2011 at 1:32 PM, Edward Capriolo wrote:
>
>
> On Tue, Jul 5, 2011 at 1:27 PM, Raj N wrote:
>
>> Hi experts,
>> Are there any benchmarks that quantify how long nodetool repair
>>
Hi experts,
Are there any benchmarks that quantify how long nodetool repair takes?
Something that says: on this kind of hardware, with this much data,
nodetool repair takes this long. The other question that I have is, since
Cassandra recommends running nodetool repair within GCGracePeriodSe
Hi experts,
We are planning to deploy Cassandra in 2 datacenters. Let's assume there
are 3 nodes, RF=3, 2 nodes in 1 DC and 1 node in the 2nd DC. Under normal
operations, we would read and write at QUORUM. What we want to do, though,
is if we lose the datacenter which has 2 nodes, DC1 in this case, we w
Guys,
Correct me if I am wrong. The whole problem is that a node missed an
update while it was down. Shouldn't HintedHandoff take care of this case?
Thanks
-Raj
-----Original Message-----
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Wednesday, August 18, 2010 9:22 AM
To: user@cassa