Hi Fellows,
I used to be able to build cassandra 1.1 up to 1.1.1 with the same set
of procedures by running ant on the same machine, but now the stuff
associated with gen-cli-grammar breaks the build. Any advice will be
greatly appreciated.
-Arya
Source:
source tarball for 1.1.2 downloaded from
pickle.com
>
> On 8/07/2012, at 1:57 PM, Arya Goudarzi wrote:
>
> Hi Fellows,
>
> I used to be able to build cassandra 1.1 up to 1.1.1 with the same set
> of procedures by running ant on the same machine, but now the stuff
> associated with gen-cli-grammar breaks the build.
n't have a version of antlr installed on your
> system that takes
> precedence over the one distributed with C* and happens to not be compatible.
>
> Because I don't remember there having been much change to the CLI between
> 1.1.1
> and 1.1.2 and the grammar nobody has had that pr
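The check Aaron suggests can be sketched as a quick search for a stray antlr jar that could shadow the one in Cassandra's lib/. The demo directory and jar name below are hypothetical; on a real box you would search places like /usr/share/java and ~/.ant/lib:

```shell
# Demonstrate locating a stray antlr jar that could take precedence over
# the one bundled with Cassandra (demo path and jar name are illustrative).
demo=/tmp/antlr_shadow_demo
mkdir -p "$demo"
touch "$demo/antlr-2.7.7.jar"      # stand-in for a system-installed jar
find "$demo" -name 'antlr*.jar'    # any hit here can break gen-cli-grammar
```

Any jar this turns up in a directory that ant puts on the classpath is a candidate for the conflict.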
Hi All,
Correct me if I am wrong, but I know that secondary indexes are stored
in local column families on each node. Previously, where the default
key cache value was 200,000 rows and you couldn't really tune the
local index column family, that posed a limitation on low cardinality
of the possibl
Just had a good conversation with rcoli in chat. Wanted to clarify the
steps for resolving this issue and see if there are any pitfalls I am
missing.
Issue: I upgraded from 1.1.2 to 1.1.3 a while ago and today I realized I
cannot make any schema changes since the fix in
https://issues.apache.org/j
Hi All,
I have a 4 node cluster setup in 2 zones with NetworkTopology strategy and
strategy options for writing a copy to each zone, so the effective load on
each machine is 50%.
Symptom:
I have a column family that has gc grace seconds of 10 days (the default).
On 17th there was an insert done t
No. We don't use TTLs.
On Tue, Sep 25, 2012 at 11:47 PM, Roshni Rajagopal <
roshni_rajago...@hotmail.com> wrote:
> By any chance is a TTL (time to live ) set on the columns...
>
> --
> Date: Tue, 25 Sep 2012 19:56:19 -0700
> Subject: 1.1.5 Missing Insert! Strange Prob
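For anyone verifying the same thing, a column's remaining TTL can be read with the ttl() function in CQL (the keyspace, table, and column names below are made up):

```cql
-- Hypothetical schema names; a null ttl() means no TTL was set on the column,
-- so expiry cannot explain the disappearance.
SELECT key, ttl(value)
FROM my_keyspace.my_table
WHERE key = 'reported-missing-key';
```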
Any chance anyone has seen the same mysterious issue?
On Wed, Sep 26, 2012 at 12:03 AM, Arya Goudarzi wrote:
> No. We don't use TTLs.
>
>
> On Tue, Sep 25, 2012 at 11:47 PM, Roshni Rajagopal <
> roshni_rajago...@hotmail.com> wrote:
>
>> By any chance is a TTL
Thanks for your reply. I did grep on the commit logs for the offending key
and grep showed Binary file matches. I am trying to use this tool to
extract the commitlog and actually confirm if the mutation was a write:
https://github.com/carloscm/cassandra-commitlog-extract.git
On Thu, Sep 27, 2012
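The grep check can be reproduced in miniature. The key and path below are made up, but the pattern is the same: grep matches inside binary commit log segments, and -l prints just the names of the matching files:

```shell
# Fake a commit log segment containing the key surrounded by binary data,
# then locate it the way you would with real segments (key and path are
# hypothetical; real segments live under the configured commitlog directory).
mkdir -p /tmp/commitlog_demo
printf 'header\0customer-key-123\0payload' > /tmp/commitlog_demo/CommitLog-1.log
grep -l 'customer-key-123' /tmp/commitlog_demo/*.log
```

A filename in the output confirms the key reached the commit log; it says nothing yet about whether the mutation was a write or a delete, which is what the extraction tool is for.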
grep the commit log for a missing record like I did before.
We have durable writes enabled. To me, it seems like when data is in
memtables and hasn't been flushed to disk, and I restart the node, the
commit log doesn't get replayed correctly.
Please advise.
On Thu, Sep 27, 2012 at 2:4
going to change our settings to batch mode.
Thank you rcoli for your help.
On Thu, Sep 27, 2012 at 2:49 PM, Arya Goudarzi wrote:
> I was restarting Cassandra nodes again today. 1 hour later my support team
> let me know that a customer has reported some missing data. I suppose this
> is
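For reference, the batch-mode change mentioned above is a cassandra.yaml edit along these lines (the window value is illustrative; check the defaults for your version):

```yaml
# cassandra.yaml: ack writes only after the commit log segment is fsynced.
# Batch mode trades write latency for durability across process restarts,
# which addresses losing unflushed memtable data on a node restart.
commitlog_sync: batch
commitlog_sync_batch_window_in_ms: 50
```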
Hi C* users,
I just upgraded a 12 node test cluster from 1.1.6 to 1.2.1. What I noticed
from nodetool ring was that the new upgraded nodes only saw each other as
Normal and the rest of the cluster which was on 1.1.6 as Down. Vice versa
was true for the nodes running 1.1.6. They saw each other as No
of
> nodetool gossipinfo and nodetool ring by chance?
>
> On Feb 23, 2013, at 12:26 AM, "Arya Goudarzi" wrote:
>
> > Hi C* users,
> >
> > I just upgraded a 12 node test cluster from 1.1.6 to 1.2.1. What I
> noticed from nodetool ring was that the new upgraded
Sorry to jump on this late. GC is one of my favorite topics. A while ago I
wrote a blog post about C* GC tuning and documented several issues that I
had experienced. It seems it has helped some people in the past, so I am
sharing it here:
http://aryanet.com/blog/cassandra-garbage-collector-tuning
Hi,
I sympathize with your issue. I recommend adding the following to your JVM
flags:
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCAp
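For completeness, here is how those flags might sit in cassandra-env.sh with the GC log routed to its own file. This is a sketch rather than the exact list from the original post, and the log path is an example:

```shell
# Appended to cassandra-env.sh -- verbose GC logging for diagnosing pauses
# (log path is illustrative; adjust for your layout)
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
```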
It is not a good idea to change settings without identifying the root
cause. Chances are what you did masked the problem a bit for you, but the
problem is still there, isn't it?
On Wed, Jan 15, 2014 at 1:11 AM, Dimetrio wrote:
> I set G1 because GS started to work wrong(dropped messages) with s
Read the upgrade best practices
http://www.datastax.com/docs/1.1/install/upgrading#best-practices
You cannot change partitioner
http://www.datastax.com/documentation/cassandra/1.2/webhelp/cassandra/architecture/architecturePartitionerAbout_c.html
On Thu, Jan 16, 2014 at 2:04 AM, Or Sher wrote
to tune a system with fewer non default settings.
>
> Cheers
>
> -
> Aaron Morton
> New Zealand
> @aaronmorton
>
> Co-Founder & Principal Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> On 16/01/2014, at 8:22 am
Dimetrio,
Look at my last post. I showed you how to turn on all the useful GC logging
flags. From there we can get information on why GC has long pauses. From
the changes you have made it seems you are changing things without knowing
the effect. Here are a few things to consider:
- Having a 9GB NewG
Nodetool cleanup deletes rows that aren't owned by specific tokens
(shouldn't be on this node). And nodetool repair makes sure data is in sync
between all replicas. It is wrong to say either of these commands cleans up
tombstones. Tombstones are cleaned up only during compactions, and only if
they are exp
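Spelled out as commands (the host and keyspace names are placeholders):

```shell
# cleanup: drop rows this node no longer owns (e.g. after moving tokens)
nodetool -h 10.0.0.1 cleanup my_keyspace
# repair: make replicas consistent with each other
nodetool -h 10.0.0.1 repair my_keyspace
# tombstone purging happens only via compaction, once gc_grace_seconds has
# passed; forcing a major compaction is one (blunt) way to trigger it
nodetool -h 10.0.0.1 compact my_keyspace
```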
Hi,
I am exercising the rolling upgrade from 1.1.6 to 1.2.2. When I upgraded to
1.2.2 on the first node, during startup I got this exception:
ERROR [main] 2013-03-09 04:24:30,771 CassandraDaemon.java (line 213) Could
not migrate old leveled manifest. Move away the .json file in the data
directory
LOAD: 5.0710624207E10
Is this just a display bug in nodetool, or does this upgraded node really see
the other ones as dead?
-Arya
On Mon, Feb 25, 2013 at 8:10 PM, Arya Goudarzi wrote:
> No I did not look at nodetool gossipinfo but from the ring on both
> pre-upgrade and post upgrade nodes to
you sure you are using 1.2.2?
> Because LegacyLeveledManifest is from unreleased development version.
>
> On Friday, March 8, 2013 at 11:02 PM, Arya Goudarzi wrote:
>
> Hi,
>
> I am exercising the rolling upgrade from 1.1.6 to 1.2.2. When I upgraded
> to 1.2.2 on the first
Despite my other issue having to do with the wrong version of Cassandra,
this one still stands as described.
On Fri, Mar 8, 2013 at 10:24 PM, Arya Goudarzi wrote:
> OK. I upgraded one node from 1.1.6 to 1.2.2 today. Despite some new
> problems that I had and I posted them in a separate
You may have bumped to this issue:
https://github.com/Netflix/Priam/issues/161
Make sure the is_replace_token Priam API call is working for you.
On Fri, Mar 8, 2013 at 8:22 AM, aaron morton wrote:
> If it does not have the schema check the logs for errors and ensure it is
> actually part of the clust
Hi,
I have upgraded our test cluster from 1.1.6 to 1.1.10, followed by running
repairs. It appears that the repair task that I executed after the upgrade
brought back lots of deleted rows to life. Here are some logistics:
- The upgraded cluster started from 1.1.1 -> 1.1.2 -> 1.1.5 -> 1.1.6
- Old c
ing_state=false
>
> parameter, append it at the bottom of the cassandra-env.sh file. It will
> force the node to get the ring state from the others.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
-cli and confirm the timestamps on the
> columns make sense ?
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 16/03/2013, at 2:31 PM, Arya Goudarzi wrote:
>
> Hi,
used ?
>
> Can you look at the data in cassandra-cli and confirm the timestamps on the
> columns make sense ?
>
> Cheers
>
> -
>
> Aaron Morton
>
> Freelance Cassandra Consultant
>
> New Zealand
I took Brandon's suggestion in CASSANDRA-5332 and upgraded to 1.1.10 before
upgrading to 1.2.2, but the issue with nodetool ring reporting machines as
down was not resolved.
On Fri, Mar 15, 2013 at 6:35 PM, Arya Goudarzi wrote:
> Thank you very much Aaron. I recall from the logs of this
Hi,
I am experiencing this bug on our 1.1.6 cluster:
https://issues.apache.org/jira/browse/CASSANDRA-4765
The pending compactions count has been stuck at a constant value, so I suppose
something is not compacting due to this. Is there a workaround besides
upgrading? We are not ready to upgrade just yet
d beside upgrading? We are not ready to upgrade just
> yet.
>
> Cannot see one.
>
> Cheers
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 26/03/2013, at 7:42 PM, Arya Gouda
g to try.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 22/03/2013, at 10:30 AM, Arya Goudarzi wrote:
>
> I took Brandon's suggestion in CASSANDRA-5332 and upgrade
ira/browse/CASSANDRA-5379
>
> Ensuring no hints are in place during an upgrade would work around it. I tend
> to make sure hints and commit log are clear during an upgrade.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
---
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 29/03/2013, at 2:58 AM, Arya Goudarzi wrote:
>
> I am not familiar with that part of the code yet. But what if the gc_grace
> was changed t
t will fix
>> it, but it's a simple thing to try.
>>
>> Cheers
>>
>>-
>> Aaron Morton
>> Freelance Cassandra Consultant
>> New Zealand
>>
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 4/04/2013, at 6:11 AM, Arya Goudarzi wrote:
>
> Hi,
>
> I have upgraded 2 nodes out of a 12 node test cluster from 1.1.10 to
> 1.2.3. During startup while tailing C*'s system.log, I observed
een
> 0.6 and 0.7. I have run into this a couple other times before. The good
> news is the saved key cache is just an optimization; you can blow it away
> and it is not usually a big deal.
>
>
>
>
> On Fri, Apr 5, 2013 at 2:55 PM, Arya Goudarzi wrote:
>
>> Here is
TL;DR: An EC2 multi-region setup's repair/gossip works with 1.1.10, but with
1.2.4, gossip does not see the nodes after restarting all nodes at once,
and repair gets stuck.
This is a working configuration:
Cassandra 1.1.10 Cluster with 12 nodes in us-east-1 and 12 nodes in
us-west-2
Using Ec2MultiR
ter node compression. I have not checked but this
> might be accidentally getting turned on by default. Because the storage
> port is typically 7000. Not sure why you are allowing 7100. In any case try
> allowing 7000 or with internode compression off.
>
>
> On Tue, Apr 16, 2013
We haven't tried using Pig. However, we had a problem where our MapReduce
job blew up for a subset of data. It appeared that we had a bug in our code
that had generated a row as big as 3 GB. It was actually causing long GC
pauses and would cause GC thrashing. The Hadoop job of course would time
out.
Hi Fellows,
I just joined this mailing list but I've been on the IRC for a while. Pardon if
this post is a repeat, but I would like to share with you some of my experiences
with the Cassandra Thrift Interface that comes with the nightly build and probably
0.7. I came across an issue last night that
Hi Fellows,
I have the following design for a system which holds basically key->value pairs
(aka Columns) for each user (SuperColumn Key) in different namespaces
(SuperColumnFamily row key).
Like this:
Namespace->user->column_name = column_value;
keyspaces:
- name: NKVP
replica_placeme
-
From: "Jonathan Ellis"
To: user@cassandra.apache.org
Sent: Saturday, June 5, 2010 6:26:46 AM
Subject: Re: Strange Read Performance 1xN column slice or N column slice
Reading 1 column is faster than reading lots of columns. This
shouldn't be surprising.
On Fri, Jun 4, 2010 a
Hey'all,
As Jonathan pointed out in CASSANDRA-1199, this issue seems to be related to
https://issues.apache.org/jira/browse/THRIFT-788. If you experience slowness
with multiget_slice, take a look at that bug.
-Arya
- Original Message -
From: "Arya Goudarzi"
Hi,
Please confirm whether this is an issue that should be reported or I am doing
something wrong. I could not find anything relevant on JIRA:
Playing with 0.7 nightly (today's build), I setup a 3 node cluster this way:
- Added one node;
- Loaded default schema with RF 1 from YAML using JMX;
- Loa
I just built today's trunk successfully and am getting the following exception
on startup, which seems bogus to me as the method exists, but I don't know why:
ERROR 15:27:00,957 Exception encountered during startup.
java.lang.NoSuchMethodError: org.apache.cassandra.db.ColumnFamily.id()I
ubject: Re: nodetool loadbalance : Streams Continue on Non Acceptance of New
Token
On Tue, Jun 22, 2010 at 20:16, Arya Goudarzi wrote:
> Hi,
>
> Please confirm if this is an issue and should be reported or I am doing
> something wrong. I could not find anything relevant on JIRA
be a good feature
though. Care to file a ticket?
Gary.
On Thu, Jul 15, 2010 at 22:13, Arya Goudarzi wrote:
> I recall jbellis in his training showing us how to increase the replication
> factor and repair data on a cluster in 0.6. How is that possible in 0.7 when
> you cannot change sche
Just wanted to toss this out there in case this is an issue or the format
really changed and we have to start from a clean slate. I was running from
yesterday's trunk and had some Keyspaces with data. Today's trunk failed server
start giving this exception:
ERROR [main] 2010-07-29 14:05:21,489
-----
From: "Arya Goudarzi"
Sent: Thursday, July 29, 2010 4:42pm
To: user@cassandra.apache.org
Subject: Avro Runtime Exception Bad Index
Just wanted to toss this out there in case this is an issue or the format
really changed and we have to start from a clean slate. I was running fro
Just throwing this out there as it could be a concern. I had a cluster of 3
nodes running. Over the weekend I updated to trunk (Aug 9th @ 2pm). Today, I
came to run my daily tests and my client kept giving me TSocket timeouts.
Checking the error log of the Cassandra servers, all 3 nodes had this and
t;Jonathan Ellis"
To: user@cassandra.apache.org
Sent: Monday, August 9, 2010 5:18:35 PM
Subject: Re: COMMIT-LOG_WRITER Assertion Error
Sounds like you upgraded to trunk from 0.6 without draining your
commitlog first?
On Mon, Aug 9, 2010 at 3:30 PM, Arya Goudarzi wrote:
> Just throwing th
at 8:42 PM, Arya Goudarzi wrote:
> I've never run 0.6. I have been running off trunk with automatic svn update
> and build every day at 2pm. One of my nodes got this error which led to the
> same last error prior to build and restart today. Hope this helps better:
>
&
While inserting into a 3 node cluster, one of the nodes got this exception in
its log:
ERROR [MIGRATION-STAGE:1] 2010-08-16 17:46:24,090 CassandraDaemon.java (line
82) Uncaught exception in thread Thread[MIGRATION-STAGE:1,5,main]
java.util.concurrent.ExecutionException: java.lang.IllegalArgument
Forwarding to Cassandra users list.
This fix addresses the issue with the PHP Accelerated module returning "Cannot Read
XX bytes" TException due to a faulty stream sent to FramedTransport.
On Tue, Aug 31, 2010 at 12:13 PM, Bryan Duxbury wrote:
> Hey guys,
>
> I think someone has managed to figu
Upgraded code from trunk 10/7 to trunk 10/8 and nodes don't start:
ERROR 16:19:41,335 Fatal error: saved_caches_directory missing
Please advise.
Best Regards,
-Arya
Never mind, I did not pay attention to the new config change.
- Original Message -
From: "Arya Goudarzi"
To: user@cassandra.apache.org
Sent: Friday, October 8, 2010 4:22:34 PM
Subject: ERROR saved_caches_directory missing
Upgraded code from trunk 10/7 to trunk 10/8 and nodes d
Do you perform nodetool cleanup after you loadbalance?
From: "Joe Alex"
To: cassandra-u...@incubator.apache.org
Sent: Tuesday, October 26, 2010 2:06:58 PM
Subject: After loadbalance why does the size increase
Hi,
I have Cassandra 0.6.6 running on 4 nodes with RF=2. I have around 2
milli
.32.93 Up 366.47 MB 81410091240220304547331026452103785021 | |
10.210.32.74 Up 415.47 MB 127314552263552317194896535803056965704 |-->|
On Tue, Oct 26, 2010 at 5:28 PM, Arya Goudarzi wrote:
> Do you perform nodetool cleanup after you loadbalance?
>
> __
In this case should one set ulimit -l to the amount of heap size?
Thanks,
-Arya Goudarzi
- Original Message -
From: "Peter Schuller"
To: user@cassandra.apache.org
Sent: Saturday, October 9, 2010 1:18:28 AM
Subject: Re: using jna.jar "Unknown mlockall error 0"
>
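One common way to raise the lock limit for JNA's mlockall is via /etc/security/limits.conf, using "unlimited" rather than sizing ulimit -l to the heap exactly (the user name is illustrative):

```
# /etc/security/limits.conf -- let the cassandra user lock memory so
# mlockall can pin the JVM heap; "unlimited" is the usual choice
cassandra soft memlock unlimited
cassandra hard memlock unlimited
```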
unsubscribe
On Mon, Aug 28, 2023 at 12:54 PM Rich Bowen wrote:
> Hello! Registration is still open for the upcoming Community Over Code
> NA event in Halifax, NS! We invite you to register for the event
> https://communityovercode.org/registration/
>
> Apache Committers, note that you have a sp