Re: How is the coordinator node in LOCAL_QUORUM chosen?

2016-03-28 Thread Eric Stevens
> Local quorum works in the same data center as the coordinator node,
> but when an app server executes the write query, how is the coordinator
> node chosen?

It typically depends on the driver, and decent drivers offer you several
options for this, usually called a load balancing policy.  You indicate
that you're using the node.js driver (presumably the DataStax version),
which is documented here:
http://docs.datastax.com/en/developer/nodejs-driver/3.0/common/drivers/reference/tuningPolicies.html

I'm not familiar with the node.js driver, but I am familiar with the Java
driver, and since they use the same terminology RE load balancing, I'll
assume they work the same.

A typical way to set that up is to use TokenAwarePolicy with
DCAwareRoundRobinPolicy as its child policy.  This will prefer to route each
query to the primary replica (or a secondary replica if the primary is
offline) in the local datacenter, provided the driver can discover the
query's partition key automatically, such as with prepared statements.

Where the replica discovery can't be accomplished, TokenAware defers to the
child policy to choose the host.  In the case of DCAwareRoundRobinPolicy
that means it iterates through the hosts of the configured local datacenter
(defaulted to the DC of the seed nodes if they're all in the same DC) for
each subsequent execution.
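With the Java driver that wiring looks roughly like the sketch below (just a
sketch: the contact point, class name and local DC name are placeholders; the
node.js driver exposes similarly named policies under its own syntax):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class ClusterSetup {
    public static void main(String[] args) {
        // Token-aware routing with DC-aware round robin as the child policy.
        // The contact point and local DC name below are placeholders.
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")
                .withLoadBalancingPolicy(new TokenAwarePolicy(
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("DC1")
                                .build()))
                .build();
        System.out.println("Configured cluster: " + cluster.getClusterName());
        cluster.close();
    }
}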

On Fri, Mar 25, 2016 at 2:04 PM X. F. Li  wrote:

> Hello,
>
> Local quorum works in the same data center as the coordinator node, but
> when an app server execute the write query, how is the coordinator node
> chosen?
>
> I use the node.js driver. How does the driver client determine which
> cassandra nodes are in the same DC as the client node? Does it use
> private network IPs [192.168.x.x etc] to auto detect, or must I manually
> provide a loadBalancing policy by `new DCAwareRoundRobinPolicy(
> localDcName )`?
>
> If a partition is not available in the local DC, i.e. if the local
> replica node fails or all replica nodes are in a remote DC, will local
> quorum fail? If it doesn't fail, there is no guarantee that all
> queries on a partition will be directed to the same data center, so does
> it mean strong consistency cannot be expected?
>
> Another question:
>
> Suppose I have replication factor 3. If one of the nodes fails, will
> queries with ALL consistency fail if the queried partition is on the
> failed node? Or would they continue to work with 2 replicas while
> cassandra is replicating the partitions of the failed node to
> re-establish 3 replicas?
>
> Thank you.
> Regards,
>
> X. F. Li
>


*** What is the best way to model this JSON *** ??

2016-03-28 Thread Lokesh Ceeba - Vendor
Hello Team,
   How to design/develop the best data model for this JSON ?


var json = [{
    "id": "9a55fdf6-eeab-4c83-9c6f-04c7df1b3225",
    "user": "ssatish",
    "event": "business",
    "occurredOn": "09 Mar 2016 17:55:15.292-0600",
    "eventObject":
    {
        "objectType": "LOAD",
        "id": "12345",
        "state": "ARRIVAL",
        "associatedAttrs":
        [
            { "type": "location_id",   "value": "100" },
            { "type": "location_type", "value": "STORE" },
            { "type": "arrival_ts",    "value": "2015-12-12T10:10:10" }
        ]
    }
}]


I've taken this approach :

create type event_object_0328
(
    Object_Type  text,
    Object_ID    int,
    Object_State text
)
;


create table Events
(
    event_id     timeuuid,
    event_type   text,
    triggered_by text,
    triggered_ts timestamp,
    Appl_ID      text,
    eventObject  frozen<event_object_0328>,
    primary key (event_id)
)
;

Now I need to build the Associated Attributes (the associatedAttrs array in the
JSON above). The Associated Attributes can be very dynamic and can come in any
(Key,Value) pair combination.
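
One possible way to hold such dynamic pairs -- a sketch only, assuming plain
text keys and values are sufficient -- is to add a map column next to the
frozen UDT:

alter table Events add associated_attrs map<text, text>;

insert into Events (event_id, event_type, triggered_by, associated_attrs)
values (now(), 'business', 'ssatish',
        {'location_id': '100', 'location_type': 'STORE',
         'arrival_ts': '2015-12-12T10:10:10'});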




--
Lokesh



Re: *** What is the best way to model this JSON *** ??

2016-03-28 Thread Jack Krupansky
As always with Cassandra data modeling, you must start with your intended
query and access patterns as well as the cardinality of your data.

So, what information is your app likely to have when it needs to perform a
query and what information is it going to want to retrieve? What is the
full range of potential queries? Which are the most common and need to be
the fastest?


-- Jack Krupansky

On Mon, Mar 28, 2016 at 12:10 PM, Lokesh Ceeba - Vendor <
lokesh.ce...@walmart.com> wrote:

> Hello Team,
>
>    How to design/develop the best data model for this JSON ?
>
> [...]


RE: *** What is the best way to model this JSON *** ??

2016-03-28 Thread Lokesh Ceeba - Vendor
Team,
   Here is the listing of data elements:
Search/Sorting criteria:

Search fields                 Order by fields
----------------------------  ------------------------
object, id                    id
object, id, triggered_ts      triggered_ts
object, id, state             triggered_ts
app_id, triggered_ts          triggered_ts
event_type                    object, id, triggered_ts
event_type, triggered_ts      object, id, triggered_ts

Model Layout:

Column Name            Column Type                        Description
---------------------  ---------------------------------  ---------------------------------------------------
event_type             String                             Type of event (BUSINESS, SYSTEM, ERROR, etc.)
uuid                   String                             Unique identifier
triggered_by           String                             User ID
triggered_ts           Date Time (down to micro-second)   Time of event
app_id                 String                             Process that triggered the event
object                 String                             Type of object (LOAD, TRIP, LOCATION, CARRIER, etc.)
id                     String                             Identifier for the object
state                  String                             Event that happened
associated_object_1    String                             What changed
associated_value_1     String                             Value of it
associated_object_2    String
associated_value_2     String
associated_object_3    String
associated_value_3     String
associated_object_4    String
associated_value_4     String
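
Given the search fields above, one possible sketch -- an assumption for
illustration, not a settled design -- is a denormalized table per access
pattern; for example, the "object, id" lookups ordered by triggered_ts could be
served by:

create table events_by_object
(
    object           text,
    id               text,
    triggered_ts     timestamp,
    event_id         timeuuid,
    event_type       text,
    state            text,
    triggered_by     text,
    app_id           text,
    associated_attrs map<text, text>,
    primary key ((object, id), triggered_ts, event_id)
) with clustering order by (triggered_ts desc, event_id desc);

-- "object, id" ordered by triggered_ts (newest first):
select * from events_by_object where object = 'LOAD' and id = '12345';

The app_id + triggered_ts and event_type + triggered_ts combinations would get
their own similarly keyed tables (or materialized views in 3.0+).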







--
Lokesh

From: Jack Krupansky [mailto:jack.krupan...@gmail.com]
Sent: Monday, March 28, 2016 12:23 PM
To: user@cassandra.apache.org
Subject: Re: *** What is the best way to model this JSON *** ??

As always with Cassandra data modeling, you must start with your intended query 
and access patterns as well as the cardinality of your data.

So, what information is your app likely to have when it needs to perform a 
query and what information is it going to want to retrieve? What is the full 
range of potential queries? Which are the most common and need to be the 
fastest?


-- Jack Krupansky

On Mon, Mar 28, 2016 at 12:10 PM, Lokesh Ceeba - Vendor
<lokesh.ce...@walmart.com> wrote:
Hello Team,
   How to design/develop the best data model for this JSON ?

[...]


Re: What is the best way to model this JSON ??

2016-03-28 Thread Ryan Svihla
Lokesh,

The modeling will change a bit depending on your queries, the rate of update 
and your tooling (Spring-data-cassandra makes a mess of updating collections 
for example).  I suggest asking the Cassandra users mailing list for help since 
this list is for development OF Cassandra.

> On Mar 28, 2016, at 11:09 AM, Lokesh Ceeba - Vendor  wrote:
> 
> Hello Team,
>   How to design/develop the best data model for this ?
> 
> [...]



Why is write failing

2016-03-28 Thread Rakesh Kumar
Cassandra: 3.0.3

I am new to Cassandra.

I am creating a test instance of four nodes, two in each data center.
The idea is to verify that Cassandra can continue with writes even if
one DC is down and we further lose one machine in the surviving DC.

This is in my cassandra-topology.properties

10.122.66.41=DC1:RAC1
10.122.98.53=DC1:RAC2
10.122.142.218=DC2:RAC1
10.122.142.219=DC2:RAC2

# default for unknown nodes
default=DC2:RAC1

Snitch property in cassandra.yaml
 endpoint_snitch: GossipingPropertyFileSnitch

Keyspace has been defined as follows

CREATE KEYSPACE mytesting
 WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 1}
AND durable_writes = true ;

yet when I insert into a table via cqlsh with no consistency level explicitly set, I get this error:

Unavailable: code=1000 [Unavailable exception] message="Cannot achieve
consistency level ONE" info={'required_replicas': 1, 'alive_replicas':
0, 'consistency': 'ONE'}

I have verified that cassandra is up on all four nodes.

What is going on?

Thanks.


Does saveToCassandra work with Cassandra Lucene plugin ?

2016-03-28 Thread Cleosson José Pirani de Souza


Hello,




I am implementing the example on GitHub
(https://github.com/Stratio/cassandra-lucene-index), and when I try to save the
data using saveToCassandra I get the exception NoSuchElementException.
If I use CassandraConnector.withSessionDo, I am able to add elements into
Cassandra and no exception is raised.


 The code :
import org.apache.spark.{SparkConf, SparkContext, Logging}
import com.datastax.spark.connector.cql.CassandraConnector
import com.datastax.spark.connector._

object App extends Logging {
  def main(args: Array[String]) {

    // Get the cassandra IP and create the spark context
    val cassandraIP = System.getenv("CASSANDRA_IP");
    val sparkConf = new SparkConf(true)
      .set("spark.cassandra.connection.host", cassandraIP)
      .set("spark.cleaner.ttl", "3600")
      .setAppName("Simple Spark Cassandra Example")

    val sc = new SparkContext(sparkConf)

    // Works
    CassandraConnector(sparkConf).withSessionDo { session =>
      session.execute("INSERT INTO demo.tweets(id, user, body, time, latitude, longitude) VALUES (19, 'Name', 'Body', '2016-03-19 09:00:00-0300', 39, 39)")
    }

    // Does not work
    val demo = sc.parallelize(Seq((9, "Name", "Body", "2016-03-29 19:00:00-0300", 29, 29)))
    // Raises the exception
    demo.saveToCassandra("demo", "tweets", SomeColumns("id", "user", "body", "time", "latitude", "longitude"))
  }
}





 The exception:
16/03/28 14:15:41 INFO CassandraConnector: Connected to Cassandra cluster: Test 
Cluster
Exception in thread "main" java.util.NoSuchElementException: Column  not found 
in demo.tweets
at 
com.datastax.spark.connector.cql.StructDef$$anonfun$columnByName$2.apply(Schema.scala:60)
at 
com.datastax.spark.connector.cql.StructDef$$anonfun$columnByName$2.apply(Schema.scala:60)
at scala.collection.Map$WithDefault.default(Map.scala:52)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:58)
at com.datastax.spark.connector.cql.TableDef$$anonfun$9.apply(Schema.scala:153)
at com.datastax.spark.connector.cql.TableDef$$anonfun$9.apply(Schema.scala:152)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
at com.datastax.spark.connector.cql.TableDef.<init>(Schema.scala:152)
at 
com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchTables$1$2.apply(Schema.scala:283)
at 
com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchTables$1$2.apply(Schema.scala:271)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
at scala.collection.immutable.Set$Set4.foreach(Set.scala:137)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
at 
com.datastax.spark.connector.cql.Schema$.com$datastax$spark$connector$cql$Schema$$fetchTables$1(Schema.scala:271)
at 
com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchKeyspaces$1$2.apply(Schema.scala:295)
at 
com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchKeyspaces$1$2.apply(Schema.scala:294)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:153)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
at 
com.datastax.spark.connector.cql.Schema$.com$datastax$spark$connector$cql$Schema$$fetchKeyspaces$1(Schema.scala:294)
at 
com.datastax.spark.connector.cql.Schema$$anonfun$fromCassandra$1.apply(Schema.scala:307)
at 
com.datastax.spark.connector.cql.Schema$$anonfun$fromCassandra$1.apply(Schema.scala:304)
at 
com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withClusterDo$1.apply(CassandraConnector.scala:121)
at 
com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withClusterDo$1.apply(CassandraConnector.scala:120)
at 
com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
at 
com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
at 
com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
at 
com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
at 
com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:120)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:304)
at com.datastax.spark.connector.writer.TableWriter$.apply(TableWriter.scala:275)
at 
com.

Re: Does saveToCassandra work with Cassandra Lucene plugin ?

2016-03-28 Thread Anuj Wadehra
With my limited experience with Spark, I can tell you that you need to make
sure that all columns mentioned in SomeColumns are part of the CQL schema of
the table.

Thanks
Anuj

On Mon, 28 Mar, 2016 at 11:38 pm, Cleosson José Pirani de Souza wrote:

> [...]

Re: Does saveToCassandra work with Cassandra Lucene plugin ?

2016-03-28 Thread Anuj Wadehra
I used it with Java, and there every field of the POJO must map to a column
name in the table. I think someone with Scala syntax knowledge can help you better.

Thanks
Anuj
On Mon, 28 Mar, 2016 at 11:47 pm, Anuj Wadehra wrote:

> [...]

Re: Does saveToCassandra work with Cassandra Lucene plugin ?

2016-03-28 Thread Cleosson José Pirani de Souza
Hi,

 One important thing, if I remove the custom index using Lucene, 
saveToCassandra works.


Thanks

Cleosson



From: Anuj Wadehra 
Sent: Monday, March 28, 2016 3:27 PM
To: user@cassandra.apache.org; Cleosson José Pirani de Souza; 
user@cassandra.apache.org
Subject: Re: Does saveToCassandra work with Cassandra Lucene plugin ?

I used it with Java, and there every field of the POJO must map to a column
name in the table. I think someone with Scala syntax knowledge can help you better.

[...]

Re: Does saveToCassandra work with Cassandra Lucene plugin ?

2016-03-28 Thread Jack Krupansky
The exception message has an empty column name. Odd. Not sure if that is a
bug in the exception code or whether you actually have an empty column name
somewhere.
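
One way to see exactly which column names the cluster reports for the table --
a sketch, assuming Cassandra 3.x and its system_schema tables -- is:

select column_name, kind, type
  from system_schema.columns
 where keyspace_name = 'demo' and table_name = 'tweets';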

Did you use the absolutely exact same commands to create the keyspace,
table, and custom index as in the Stratio readme?

-- Jack Krupansky

On Mon, Mar 28, 2016 at 4:57 PM, Cleosson José Pirani de Souza <
cso...@daitangroup.com> wrote:

> Hi,
>
>  One important thing, if I remove the custom index using Lucene,
> saveToCassandra works.
>
> [...]

Re: Why is write failing

2016-03-28 Thread Rakesh Kumar
> This is in my cassandra-topology.properties

My bad. I used the wrong file (cassandra-topology.properties) instead of the
cassandra-rackdc.properties file.
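
For reference, with GossipingPropertyFileSnitch each node reads
conf/cassandra-rackdc.properties instead; a minimal example for the first node
listed above would be:

# cassandra-rackdc.properties on 10.122.66.41
dc=DC1
rack=RAC1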


Re: Does saveToCassandra work with Cassandra Lucene plugin ?

2016-03-28 Thread Cleosson José Pirani de Souza
Hi Jack,


 Yes, I used the exact same commands in the Stratio readme.


Thanks,

Cleosson



From: Jack Krupansky 
Sent: Monday, March 28, 2016 6:06 PM
To: user@cassandra.apache.org
Subject: Re: Does saveToCassandra work with Cassandra Lucene plugin ?

The exception message has an empty column name. Odd. Not sure if that is a bug 
in the exception code or whether you actually have an empty column name 
somewhere.

Did you use the absolutely exact same commands to create the keyspace, table, 
and custom index as in the Stratio readme?

-- Jack Krupansky

On Mon, Mar 28, 2016 at 4:57 PM, Cleosson José Pirani de Souza
<cso...@daitangroup.com> wrote:

Hi,

 One important thing, if I remove the custom index using Lucene,
saveToCassandra works.

[...]

Solr and vnodes anyone?

2016-03-28 Thread Jack Krupansky
Somebody recently asked me for advice on the use of Solr (DSE Search) and
vnodes, so I was wondering... is anybody here actually using Solr/DSE
Search with vnodes enabled? If so, with what token count? The default of
256 would result in somewhat suboptimal query performance, so the question
is whether 64 or even 32 would deliver acceptable query performance.
Does anybody here have any practical experience on this issue, either in
testing or, even better, in production?

Absent any further input, my advice would be to limit DSE Search/Solr to a
token count of 64 per node.
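
(For reference, the per-node token count is the num_tokens setting in
cassandra.yaml, which has to be set before the node bootstraps; 64 below simply
mirrors the suggestion above.)

# cassandra.yaml
num_tokens: 64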

-- Jack Krupansky


Acceptable repair time

2016-03-28 Thread Jack Krupansky
Someone recently asked me for advice when their repair time was 2-3 days. I
thought that was outrageous, but not unheard of. Personally, to me, 2-3
hours would be about the limit of what I could tolerate, and my personal
goal would be that a full repair of a node should take no longer than an
hour, maybe 90 minutes tops. But... achieving those more abbreviated repair
times would strongly suggest that the amount of data on each node be kept
down to a tiny fraction of a typical spinning disk drive, or even a
fraction of a larger SSD drive.
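
(For reference, the kind of per-node full repair discussed here is typically
run as something like the following; the keyspace name is a placeholder, and on
2.2+ the --full flag forces a non-incremental repair of the node's primary
ranges.)

nodetool repair --full -pr my_keyspace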

So, my question here is what people consider acceptable full repair times
for nodes and what the resulting node data size is.

What impact vnodes has on these numbers is a bonus question.

Thanks!

-- Jack Krupansky


Re: Counter values become under-counted when running repair.

2016-03-28 Thread Dikang Gu
Hi Aleksey, do you get a chance to take a look?

Thanks
Dikang.

On Thu, Mar 24, 2016 at 10:30 PM, Dikang Gu  wrote:

> @Aleksey, sure, here is the jira:
> https://issues.apache.org/jira/browse/CASSANDRA-11432
>
> Thanks!
>
> On Thu, Mar 24, 2016 at 5:32 PM, Aleksey Yeschenko 
> wrote:
>
>> Best open a JIRA ticket and I’ll have a look at what could be the reason.
>>
>> --
>> AY
>>
>> On 24 March 2016 at 23:20:55, Dikang Gu (dikan...@gmail.com) wrote:
>>
>> @Aleksey, we are writing to the cluster with CL = 2, and reading with CL = 1.
>> And overall we have 6 copies across 3 different regions. Do you have
>> comments about our setup?
>>
>> During the repair, the counter value became inaccurate. We are still
>> playing with the repair and will keep you updated with more experiments.
>> But do you have any theory around that?
>>
>> Thanks a lot!
>> Dikang.
>>
>> On Thu, Mar 24, 2016 at 11:02 AM, Aleksey Yeschenko 
>> wrote:
>>
>> > After repair is over, does the value settle? What CLs do you write to
>> > your counters with? What CLs are you reading with?
>> >
>> > --
>> > AY
>> >
>> > On 24 March 2016 at 06:17:27, Dikang Gu (dikan...@gmail.com) wrote:
>> >
>> > Hello there,
>> >
>> > We are experimenting with Counters in Cassandra 2.2.5. Our setup is that
>> > we have 6 nodes across three different regions, and in each region the
>> > replication factor is 2. Basically, each node holds a full copy of the
>> > data.
>> >
>> > When we are doing 30k/s counter increments/decrements per node, and
>> > meanwhile we are double writing to our mysql tier, so that we can
>> > measure the accuracy of the C* counter compared to mysql.
>> >
>> > The experiment result was great at the beginning: the counter values in
>> > C* and mysql are very close. The difference is less than 0.1%.
>> >
>> > But when we start to run the repair on one node, the counter value in C*
>> > becomes much less than the value in mysql, and the difference becomes
>> > larger than 1%.
>> >
>> > My question is: is it a known problem that the counter value will
>> > become under-counted if repair is running? Should we avoid running
>> > repair for counter tables?
>> >
>> > Thanks.
>> >
>> > --
>> > Dikang
>> >
>> >
>>
>>
>> --
>> Dikang
>>
>>
>
>
> --
> Dikang
>
>


-- 
Dikang