Rüdiger, I have tried CQL only, and it was failing after 127 records were added. I have to check what is wrong. I have the keyspace and table definition exactly the same as yours.
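One possible explanation, and this is only a guess: the "String didn't validate." error in the trace below looks like Cassandra's UTF-8 validation of a column value, and 127 is exactly the last single-byte value that is valid UTF-8. So if the benchmark writes its running counter as raw bytes into a column the server validates as text, the 128th value (byte 0x80) would be the first one rejected, which would also fit your observation that the all-blob table definition avoids the problem. A small stand-alone Java check of that boundary (it does not touch Cassandra at all, it only illustrates the UTF-8 limit):

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class Utf8BoundaryCheck {
    public static void main(String[] args) {
        // Try to decode every single-byte value 0..255 as UTF-8, the same kind of
        // well-formedness check the server applies to text-validated columns.
        for (int i = 0; i < 256; i++) {
            try {
                StandardCharsets.UTF_8.newDecoder()
                        .decode(ByteBuffer.wrap(new byte[] { (byte) i }));
            } catch (CharacterCodingException e) {
                System.out.println("first single byte that is not valid UTF-8: " + i);
                return; // prints 128; if the counter were written as one raw byte,
                        // this would be the first failing value after 127 good inserts
            }
        }
    }
}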
I am new to Scala, so I do not know how to do this yet. I may try it in the evening.

Yogi

On Wed, Feb 19, 2014 at 2:50 PM, Rüdiger Klaehn <rkla...@gmail.com> wrote:

> This must be something to do with server side validation. If I define the
> table like this it does not happen:
>
> cqlsh> CREATE KEYSPACE IF NOT EXISTS test1 WITH REPLICATION = { 'class' :
> 'SimpleStrategy', 'replication_factor' : 1 };
> cqlsh> use test1;
> cqlsh:test1> create TABLE employees2 (time blob, name blob, value blob,
> PRIMARY KEY (name, value)) WITH COMPACT STORAGE;
>
> I should really update the AstClient benchmark so that it creates the
> table itself. But as I said, I am not very familiar with Astyanax (or with
> Cassandra in general, for that matter). It seems that I just happened to
> hit an issue the first time I was playing with it...
>
> cheers,
>
> Rüdiger
>
> p.s. Did you also see a large performance difference between the Thrift
> and the CQL versions?
>
> On Wed, Feb 19, 2014 at 10:57 PM, Yogi Nerella <ynerella...@gmail.com> wrote:
>
>> I have a two node cluster. Tried with both 2.0.4 and 2.0.5.
>> I have tried your code, and exactly after inserting 127 rows, the next
>> insert fails.
>>
>> 10.566482102276002 123
>> 2.7760618708015863 124
>> 8.936212688296054 125
>> 9.532923906962095 126
>> 7.5081516753554505 127
>> java.lang.RuntimeException: failed to write data to C*
>>     at demo.AstClient.insert(AstClient.java:75)
>>     at demo.AstClient.loadData(AstClient.java:112)
>>     at demo.AstClient.main(AstClient.java:127)
>> Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException:
>> BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=22(22),
>> attempts=1]InvalidRequestException(why:String didn't validate.)
>>     at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
>>     at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:61)
>>     at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
>>     at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:151)
>>     at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:69)
>>     at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:256)
>>     at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.executeOperation(ThriftKeyspaceImpl.java:478)
>>     at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.access$000(ThriftKeyspaceImpl.java:73)
>>     at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1.execute(ThriftKeyspaceImpl.java:116)
>>     at demo.AstClient.insert(AstClient.java:72)
>>     ... 2 more
>> Caused by: InvalidRequestException(why:String didn't validate.)
>>     at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833)
>>     at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>>     at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
>>     at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
>>     at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1$1.internalExecute(ThriftKeyspaceImpl.java:122)
>>     at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1$1.internalExecute(ThriftKeyspaceImpl.java:119)
>>     at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:56)
>>     ... 10 more
>>
>> On Wed, Feb 19, 2014 at 12:38 PM, Rüdiger Klaehn <rkla...@gmail.com> wrote:
>>
>>> On Wed, Feb 19, 2014 at 7:49 PM, Sylvain Lebresne <sylv...@datastax.com> wrote:
>>>
>>>> On Wed, Feb 19, 2014 at 11:27 AM, Rüdiger Klaehn <rkla...@gmail.com> wrote:
>>>>
>>>>> Am I doing something wrong, or is this a fundamental limitation of CQL?
>>>>>
>>>> Neither. I believe you are running into
>>>> https://issues.apache.org/jira/browse/CASSANDRA-6737, which is a bug,
>>>> a performance bug, which we should and will fix. So thanks for the report.
>>>> If you could give a shot to the patch on that issue and check if it helps,
>>>> that would definitely be much appreciated.
>>>>
>>> Hi Sylvain,
>>>
>>> Yes, this issue looks almost identical to the problem I have been
>>> experiencing. Great to hear that you are aware of the issue and that there
>>> is a fix.
>>>
>>> I have cloned the cassandra repo, applied the patch, and built it. But
>>> when I want to run the benchmark I get an exception. See below. I tried with
>>> a non-managed dependency on
>>> cassandra-driver-core-2.0.0-rc3-SNAPSHOT-jar-with-dependencies.jar, which I
>>> compiled from source because I read that that might help. But that did not
>>> make a difference.
>>>
>>> So currently I don't know how to give the patch a try. Any ideas?
>>>
>>> cheers,
>>>
>>> Rüdiger
>>>
>>> Exception in thread "main" java.lang.IllegalArgumentException:
>>> replicate_on_write is not a column defined in this metadata
>>>     at com.datastax.driver.core.ColumnDefinitions.getAllIdx(ColumnDefinitions.java:273)
>>>     at com.datastax.driver.core.ColumnDefinitions.getFirstIdx(ColumnDefinitions.java:279)
>>>     at com.datastax.driver.core.Row.getBool(Row.java:117)
>>>     at com.datastax.driver.core.TableMetadata$Options.<init>(TableMetadata.java:474)
>>>     at com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
>>>     at com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:128)
>>>     at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:89)
>>>     at com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:259)
>>>     at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:214)
>>>     at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:161)
>>>     at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
>>>     at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:890)
>>>     at com.datastax.driver.core.Cluster$Manager.newSession(Cluster.java:910)
>>>     at com.datastax.driver.core.Cluster$Manager.access$200(Cluster.java:806)
>>>     at com.datastax.driver.core.Cluster.connect(Cluster.java:158)
>>>     at cassandra.CassandraTestMinimized$delayedInit$body.apply(CassandraTestMinimized.scala:31)
>>>     at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
>>>     at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
>>>     at scala.App$$anonfun$main$1.apply(App.scala:71)
>>>     at scala.App$$anonfun$main$1.apply(App.scala:71)
>>>     at scala.collection.immutable.List.foreach(List.scala:318)
>>>     at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
>>>     at scala.App$class.main(App.scala:71)
>>>     at cassandra.CassandraTestMinimized$.main(CassandraTestMinimized.scala:5)
>>>     at cassandra.CassandraTestMinimized.main(CassandraTestMinimized.scala)
>>>
>>>> There is absolutely no fundamental reason why a CQL operation would
>>>> be more than 10 times slower than its Thrift equivalent; such a dramatic
>>>> difference is indicative of a bug, something obviously wrong.
>>>>
>>>> --
>>>> Sylvain
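Since running the Scala benchmark is a hurdle, a rough Java equivalent of the CQL side, against the test1.employees2 table quoted above, could look like the sketch below. This is only a guess at what CassandraTestMinimized does (its source is not in this thread): it assumes the DataStax Java driver 2.0, writes everything into a single wide partition (which is what the compact-storage table layout suggests), and uses a prepared statement so only the bound values travel over the wire for each insert.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class CqlInsertSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("test1");

        // All three columns are blobs in the table above, so they bind as ByteBuffers.
        PreparedStatement insert = session.prepare(
                "INSERT INTO employees2 (name, value, time) VALUES (?, ?, ?)");

        byte[] partitionKey = "wide-row-1".getBytes(StandardCharsets.UTF_8);
        int n = 10000;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            // One partition key ("name"), many clustering values ("value").
            ByteBuffer name = ByteBuffer.wrap(partitionKey);
            ByteBuffer value = ByteBuffer.wrap(Integer.toString(i).getBytes(StandardCharsets.UTF_8));
            ByteBuffer time = ByteBuffer.wrap(Long.toString(System.currentTimeMillis()).getBytes(StandardCharsets.UTF_8));
            BoundStatement bound = insert.bind(name, value, time);
            session.execute(bound);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.println(n / seconds + " inserts/s");

        cluster.close(); // older driver versions use cluster.shutdown() instead
    }
}

If this loop and AstClient still differ by an order of magnitude against a server built with the CASSANDRA-6737 patch, that comparison would be useful data for the JIRA ticket.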