PreparedStatement problem

2015-06-15 Thread joseph gao
hi, all. I'm using PreparedStatement. If I prepare a statement every time I use it, Cassandra gives me a warning telling me NOT to PREPARE EVERY TIME. So I cache the PreparedStatement locally. But when another client changes the table's schema, like adding a new column, if I still use the previously cached PrepareSt
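
A common way to avoid the re-prepare warning is to prepare each distinct query once and reuse it. A minimal sketch, assuming the DataStax Java driver 2.1; the table users(id int PRIMARY KEY, name text) in the usage comment is hypothetical:

    import com.datastax.driver.core.*;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class StatementCache {
        private final Session session;
        // One PreparedStatement per distinct query string, prepared only once.
        private final ConcurrentMap<String, PreparedStatement> cache = new ConcurrentHashMap<>();

        public StatementCache(Session session) {
            this.session = session;
        }

        public BoundStatement bind(String cql, Object... values) {
            PreparedStatement ps = cache.get(cql);
            if (ps == null) {
                // Racing threads may prepare the same query twice; the driver
                // tolerates that, and only the first instance is kept in the cache.
                ps = session.prepare(cql);
                PreparedStatement previous = cache.putIfAbsent(cql, ps);
                if (previous != null) ps = previous;
            }
            return ps.bind(values);
        }
    }

    // Usage (hypothetical table users(id int PRIMARY KEY, name text)):
    // Session session = Cluster.builder().addContactPoint("127.0.0.1").build().connect("ks");
    // StatementCache cache = new StatementCache(session);
    // session.execute(cache.bind("INSERT INTO users (id, name) VALUES (?, ?)", 1, "joe"));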

RE: PreparedStatement problem

2015-06-15 Thread Peer, Oded
This only applies to “select *” queries where you don’t specify the column names. There is a reported bug that was fixed in 2.1.3. See https://issues.apache.org/jira/browse/CASSANDRA-7910
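
On versions that do not have that fix, one way to sidestep stale prepared-statement metadata is to avoid "select *" and list the columns explicitly. A minimal sketch, assuming the DataStax Java driver 2.1 and a hypothetical table ks.users(id, name); in real code the statement would be prepared once and reused, as in the caching sketch above:

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    // Naming the columns pins the shape of the result set, so a later
    // ALTER TABLE ... ADD does not change what this prepared statement returns.
    String findUserName(Session session, int id) {
        PreparedStatement ps = session.prepare("SELECT id, name FROM ks.users WHERE id = ?");
        Row row = session.execute(ps.bind(id)).one();
        return row == null ? null : row.getString("name");
    }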

Missing data

2015-06-15 Thread Jean Tremblay
Hi, I have reloaded the data in my cluster of 3 nodes, RF: 2. I have loaded about 2 billion rows in one table. I use LeveledCompactionStrategy on my table. I use version 2.1.6. I use the default cassandra.yaml; only the IP address for seeds and the throughput have been changed. I loaded my data with si

Catastrophe Recovery.

2015-06-15 Thread Jean Tremblay
Hi, I have a cluster of 3 nodes, RF: 2. There are about 2 billion rows in one table. I use LeveledCompactionStrategy on my table. I use version 2.1.6. I use the default cassandra.yaml; only the IP address for seeds and the throughput have been changed. I have tested a scenario where one node crashe

Re: Missing data

2015-06-15 Thread Carlos Rolo
Hi Jean, The problem behind that warning is that you are reading too many tombstones per request. If you have tombstones without doing DELETEs, it is probably because you TTL'ed the data when inserting (by mistake? Or did you set default_time_to_live on your table?). You can use nodetool cfstats to see
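
For reference, data can pick up a TTL either per insert or from a table-level default; once it expires it is treated as tombstones. A minimal sketch (CQL issued through the DataStax Java driver 2.1; the keyspace and table names are hypothetical):

    import com.datastax.driver.core.Session;

    // Two ways rows end up TTL'ed and, after expiry, counted as tombstones.
    void ttlExamples(Session session) {
        // Table-level default: every insert expires after 7 days unless overridden.
        session.execute("CREATE TABLE IF NOT EXISTS ks.events (id int PRIMARY KEY, payload text) "
                + "WITH default_time_to_live = 604800");
        // Per-insert TTL: this row expires after one hour.
        session.execute("INSERT INTO ks.events (id, payload) VALUES (1, 'x') USING TTL 3600");
    }

Running nodetool cfstats on the keyspace/table then shows the tombstone figures Carlos mentions.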

Re: Catastrophe Recovery.

2015-06-15 Thread Alain RODRIGUEZ
Hi, it looks like you're starting to use Cassandra. Welcome. I invite you to read from here as much as you can: http://docs.datastax.com/en/cassandra/2.1/cassandra/gettingStartedCassandraIntro.html . When a node loses some data you have various anti-entropy mechanisms: Hinted Handoff --> For writes t

Re: Catastrophe Recovery.

2015-06-15 Thread Jean Tremblay
That is really wonderful. Thank you very much Alain. You gave me a lot of trails to investigate. Thanks again for your help.

RE: Lucene index plugin for Apache Cassandra

2015-06-15 Thread Matthew Johnson
Hi Andres, This looks awesome, many thanks for your work on this. Just out of curiosity, how does this compare to the DSE Cassandra with embedded Solr? Do they provide very similar functionality? Is there a list of obvious pros and cons of one versus the other? Thanks! Matthew

Re: Catastrophe Recovery.

2015-06-15 Thread Saladi Naidu
Alain, great write-up on the recovery procedure. You covered both the RF factor and consistency levels. As mentioned, the two anti-entropy mechanisms, hinted handoffs and read repair, work for temporary node outages and incremental recovery. In case of disaster/catastrophic recovery, nodetool repair i

Re: Missing data

2015-06-15 Thread Jean Tremblay
Dear all, I identified a bit more closely the root cause of my missing data. The problem is occurring when I use com.datastax.cassandra cassandra-driver-core 2.1.6 on my client against Cassandra 2.1.6. I did not have the problem when I was using the driver 2.1.4 with C* 2.1.4. Interestingly

Re: Missing data

2015-06-15 Thread Bryan Holladay
There's your problem: you're using the DataStax Java driver :) I just ran into this issue in the last week and it was incredibly frustrating. If you are doing a simple loop on a "select *" query, then the DataStax Java driver will only process 2^31 rows (i.e. the Java Integer max, 2,147,483,647) b
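
If the goal is a full scan of a very large table, one common workaround is to split the query into token ranges instead of issuing a single unbounded "select *". A minimal sketch, assuming the DataStax Java driver 2.1, the Murmur3Partitioner (tokens are longs), and a hypothetical table ks.mytable with partition key id:

    import com.datastax.driver.core.BoundStatement;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    // Scans ks.mytable in fixed-size token slices so no single result set has
    // to iterate the whole table; the running count is a long, not an int.
    long scanAll(Session session, long sliceSize) {
        PreparedStatement ps = session.prepare(
                "SELECT id FROM ks.mytable WHERE token(id) > ? AND token(id) <= ?");
        long count = 0;
        long start = Long.MIN_VALUE;
        while (true) {
            long end = (start > Long.MAX_VALUE - sliceSize) ? Long.MAX_VALUE : start + sliceSize;
            BoundStatement bs = ps.bind(start, end);
            bs.setFetchSize(5000); // page each slice instead of buffering it in memory
            for (Row ignored : session.execute(bs)) {
                count++;
            }
            if (end == Long.MAX_VALUE) break;
            start = end;
        }
        return count;
    }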

Re: Missing data

2015-06-15 Thread Robert Wille
You can get tombstones from inserting null values. Not sure if that’s the problem, but it is another way of getting tombstones in your data.
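
For completeness: with protocol v3 (the Cassandra 2.1-era drivers) every bound variable must be supplied, so the usual way to avoid null-value tombstones is to include only the columns that actually have values. A minimal sketch, hypothetical table ks.users(id, name, email), DataStax Java driver 2.1:

    import com.datastax.driver.core.Session;

    // Binding a Java null writes a tombstone for that column. Instead, include
    // the column in the statement only when there is a value to write.
    void insertUser(Session session, int id, String name, String email) {
        if (email == null) {
            session.execute("INSERT INTO ks.users (id, name) VALUES (?, ?)", id, name);
        } else {
            session.execute("INSERT INTO ks.users (id, name, email) VALUES (?, ?, ?)", id, name, email);
        }
    }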

Re: Missing data

2015-06-15 Thread Jean Tremblay
Thanks Bryan. I believe I have a different problem with the DataStax 2.1.6 driver. My problem is not that I make huge selects; my problem seems to occur on some inserts. I insert MANY rows, and with version 2.1.6 of the driver I seem to be losing some records. But thanks anyway, I will r
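
Purely as an assumption about what can make inserts disappear on the client side: if writes are issued with executeAsync and the futures are never checked, a failed or timed-out write fails silently. A minimal sketch of waiting on every write (DataStax Java driver 2.1; the list of statements is hypothetical):

    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.Statement;
    import java.util.ArrayList;
    import java.util.List;

    // Issue writes asynchronously but keep every future and wait on it, so a
    // failed insert surfaces as an exception instead of vanishing.
    void writeAll(Session session, List<Statement> statements) {
        List<ResultSetFuture> futures = new ArrayList<>();
        for (Statement s : statements) {
            futures.add(session.executeAsync(s));
            if (futures.size() >= 256) { // bound the number of in-flight writes
                for (ResultSetFuture f : futures) f.getUninterruptibly();
                futures.clear();
            }
        }
        for (ResultSetFuture f : futures) f.getUninterruptibly();
    }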

Re: Missing data

2015-06-15 Thread Jean Tremblay
Thanks Robert, but I don’t insert NULL values. Thanks anyway.

Re: Seed Node OOM

2015-06-15 Thread Robert Coli
On Sat, Jun 13, 2015 at 4:39 AM, Oleksandr Petrov < oleksandr.pet...@gmail.com> wrote: > We're using Cassandra, recently migrated to 2.1.6, and we're experiencing > constant OOMs in one of our clusters. > Maybe this memory leak? https://issues.apache.org/jira/browse/CASSANDRA-9549 =Rob

counters still inconsistent after repair

2015-06-15 Thread Dan Kinder
Currently on 2.1.6 I'm seeing behavior like the following:
cqlsh:walker> select * from counter_table where field = 'test';
 field | value
-------+-------
  test |    30
(1 rows)
cqlsh:walker> select * from counter_table where field = 'test';
 field | value
-------+-------
  test |    90
(1 rows) c
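
One way to see whether the flapping value comes from divergent replicas is to read the same counter at CONSISTENCY ONE versus ALL and compare. A minimal sketch (DataStax Java driver 2.1; keyspace, table, and column names are taken from the cqlsh session above):

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    // Reads the counter once at the given consistency level; if ONE flaps while
    // ALL is stable, the replicas hold different counter shards.
    long readCounter(Session session, ConsistencyLevel cl) {
        SimpleStatement stmt =
                new SimpleStatement("SELECT value FROM walker.counter_table WHERE field = 'test'");
        stmt.setConsistencyLevel(cl);
        Row row = session.execute(stmt).one();
        return row.getLong("value");
    }

    // Usage: compare readCounter(session, ConsistencyLevel.ONE)
    //        with readCounter(session, ConsistencyLevel.ALL)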

Re: connections remain on CLOSE_WAIT state after process is killed after upgrade to 2.0.15

2015-06-15 Thread Paulo Ricardo Motta Gomes
Just a quick update, I was able to fix the problem by reverting the patch CASSANDRA-8336 in our custom cassandra build. I don't know the root cause yet though. I will open a JIRA ticket and post here for reference later.

Nodetool ring and "Replicas" after 1.2 upgrade

2015-06-15 Thread Michael Theroux
Hello, We (finally) have just upgraded from Cassandra 1.1 to Cassandra 1.2.19. Everything appears to be up and running normally; however, we have noticed unusual output from nodetool ring. There is a new (to us) field "Replicas" in the nodetool output, and this field, seemingly at random, is c

Re: counters still inconsistent after repair

2015-06-15 Thread Robert Coli
On Mon, Jun 15, 2015 at 2:52 PM, Dan Kinder wrote:
> Potentially relevant facts:
> - Recently upgraded to 2.1.6 from 2.0.14
> - This table has ~million rows, low contention, and fairly high increment rate
Can you repro on a counter that was created after the upgrade? Mainly wondering:

Re: Nodetool ring and "Replicas" after 1.2 upgrade

2015-06-15 Thread Jason Wee
Maybe check the system.log to see if there is any exception and/or error? Check as well whether the nodes have a consistent schema for the keyspace? hth, jason
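
One quick way to check schema agreement from a client is to compare the schema_version in system.local with those in system.peers (both tables exist from Cassandra 1.2 onward); all nodes should report the same UUID. A minimal sketch, assuming an open DataStax Java driver Session to any node:

    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    // Prints the schema version each node believes in; they should all match.
    void printSchemaVersions(Session session) {
        Row local = session.execute("SELECT schema_version FROM system.local").one();
        System.out.println("local: " + local.getUUID("schema_version"));
        for (Row peer : session.execute("SELECT peer, schema_version FROM system.peers")) {
            System.out.println(peer.getInet("peer") + ": " + peer.getUUID("schema_version"));
        }
    }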