Aaron,

It turns out we had a beta-1 node in our cluster of beta-2 nodes. We haven't
had the problem since.
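
For anyone else who hits this, a quick per-node check of the Thrift API
version, something like the rough sketch below, is one way to spot a
mismatched build (the host names are placeholders, and it assumes the
default rpc_port of 9160):

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class CheckApiVersions {
    public static void main(String[] args) throws Exception {
        // Placeholder host names; substitute your own nodes.
        String[] nodes = { "node1", "node2", "node3" };
        for (String node : nodes) {
            TTransport transport = new TFramedTransport(new TSocket(node, 9160));
            transport.open();
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            // Each node reports the Thrift API version it speaks; a node
            // running a different beta should stand out here.
            System.out.println(node + " -> " + client.describe_version());
            transport.close();
        }
    }
}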

Thanks for the help,

Frank

On Sat, Oct 16, 2010 at 1:50 PM, aaron morton <aa...@thelastpickle.com> wrote:

> Frank,
>
> Things are a bit clearer now. I think I had the wrong idea to start with.
>
> The server-side error means this Cassandra node does not know about the
> column family it was asked to read. I'd guess either the schemas are out of
> sync across the nodes or there is a bug. How did you add the Keyspace?
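>
> If you defined it in cassandra.yaml, note that in 0.7 the keyspace
> definitions in the yaml are not picked up automatically; they have to be
> imported explicitly, e.g. with the loadSchemaFromYAML operation on the
> StorageService MBean, and triggering that separately on each node can leave
> the nodes with different schemas. A rough sketch, assuming 0.7's default
> JMX port of 8080 and a placeholder host name:
>
> import javax.management.MBeanServerConnection;
> import javax.management.ObjectName;
> import javax.management.remote.JMXConnector;
> import javax.management.remote.JMXConnectorFactory;
> import javax.management.remote.JMXServiceURL;
>
> public class LoadSchema {
>     public static void main(String[] args) throws Exception {
>         // "node1" is a placeholder; 8080 is the default JMX port in 0.7.
>         JMXServiceURL url = new JMXServiceURL(
>                 "service:jmx:rmi:///jndi/rmi://node1:8080/jmxrmi");
>         JMXConnector jmxc = JMXConnectorFactory.connect(url);
>         MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
>         ObjectName ss = new ObjectName("org.apache.cassandra.db:type=StorageService");
>         // Import the keyspace/column family definitions from cassandra.yaml.
>         // Run this once, against a single node, and let the schema propagate.
>         mbs.invoke(ss, "loadSchemaFromYAML", new Object[0], new String[0]);
>         jmxc.close();
>     }
> }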
>
> Check the keyspace definition on each node using JConsole, nodetool, or
> cassandra-cli to see if they match. There is also a function called
> describe_schema_versions() in the 0.7 API; if your client supports it, it
> will tell you which schema versions are active in your cluster. My guess is
> you have more than one active schema.
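>
> Something like this rough sketch will print each active schema version and
> the nodes reporting it (the host name is a placeholder, and it assumes the
> default rpc_port of 9160); a healthy cluster shows exactly one version:
>
> import java.util.List;
> import java.util.Map;
>
> import org.apache.cassandra.thrift.Cassandra;
> import org.apache.thrift.protocol.TBinaryProtocol;
> import org.apache.thrift.transport.TFramedTransport;
> import org.apache.thrift.transport.TSocket;
> import org.apache.thrift.transport.TTransport;
>
> public class SchemaVersions {
>     public static void main(String[] args) throws Exception {
>         // Any node in the cluster will do; "node1" is a placeholder.
>         TTransport transport = new TFramedTransport(new TSocket("node1", 9160));
>         transport.open();
>         Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
>         // Maps each active schema version UUID to the endpoints reporting it.
>         Map<String, List<String>> versions = client.describe_schema_versions();
>         for (Map.Entry<String, List<String>> entry : versions.entrySet()) {
>             System.out.println(entry.getKey() + " -> " + entry.getValue());
>         }
>         transport.close();
>     }
> }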
>
> You should probably be getting a better error message. Could you raise a bug
> for that, please?
>
> Cheers
> Aaron
> On 16 Oct 2010, at 06:17, Frank LoVecchio wrote:
>
> Aaron,
>
I updated the Cassandra files but still receive the same error (on the client
side), now with a different line number, 551:
>
org.apache.thrift.TApplicationException: Internal error processing get_slice
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(Cassandra.java:551)
    at org.apache.cassandra.thrift.Cassandra$Client.get_slice(Cassandra.java:531)
    at org.scale7.cassandra.pelops.Selector$6.execute(Selector.java:538)
    at org.scale7.cassandra.pelops.Selector$6.execute(Selector.java:535)
    at org.scale7.cassandra.pelops.Operand.tryOperation(Operand.java:45)
    at org.scale7.cassandra.pelops.Selector.getSuperColumnsFromRow(Selector.java:545)
    at org.scale7.cassandra.pelops.Selector.getSuperColumnsFromRow(Selector.java:522)
    at com.isidorey.cassandra.dao.CassandraDAO.getSuperColumnsByKey(CassandraDAO.java:36)
    at com.isidorey.cassandra.dao.CassandraDAO.getSuperColumnMap(CassandraDAO.java:82)
>
> On the server side, this is what we're seeing in Cassandra's log file:
>
> ERROR [pool-1-thread-2486] 2010-10-15 17:15:39,740 Cassandra.java (line 2876) Internal error processing get_slice
> java.lang.RuntimeException: org.apache.cassandra.db.UnserializableColumnFamilyException: Couldn't find cfId=1052
>     at org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:133)
>     at org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:222)
>     at org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:300)
>     at org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:261)
>     at org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:2868)
>     at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2724)
>     at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>     at java.lang.Thread.run(Thread.java:636)
> Caused by: org.apache.cassandra.db.UnserializableColumnFamilyException: Couldn't find cfId=1052
>     at org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:113)
>     at org.apache.cassandra.db.RowSerializer.deserialize(Row.java:76)
>     at org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:114)
>     at org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:90)
>     at org.apache.cassandra.service.StorageProxy.weakRead(StorageProxy.java:289)
>     at org.apache.cassandra.service.StorageProxy.readProtocol(StorageProxy.java:220)
>     at org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:120)
>
>
> On Thu, Oct 14, 2010 at 6:29 PM, Aaron Morton <aa...@thelastpickle.com> wrote:
>
>> I'm guessing, but it looks like Cassandra returned an error and the client
>> then had trouble reading it.
>>
>> However, if I look at the beta 2 Java Thrift interface in Cassandra.java,
>> line 544 is not in recv_get_slice. It may be nothing.
>>
>> Perhaps check the server for an error, and double-check that your client is
>> coded against beta 2.
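>>
>> One way to check (a rough sketch; the host name is a placeholder, and it
>> assumes the default rpc_port of 9160) is to compare the API version your
>> client bindings were generated from with the version the server reports:
>>
>> import org.apache.cassandra.thrift.Cassandra;
>> import org.apache.cassandra.thrift.Constants;
>> import org.apache.thrift.protocol.TBinaryProtocol;
>> import org.apache.thrift.transport.TFramedTransport;
>> import org.apache.thrift.transport.TSocket;
>> import org.apache.thrift.transport.TTransport;
>>
>> public class VersionCheck {
>>     public static void main(String[] args) throws Exception {
>>         TTransport transport = new TFramedTransport(new TSocket("node1", 9160));
>>         transport.open();
>>         Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
>>         // Constants.VERSION is the API version the client bindings were
>>         // generated from; describe_version() is what the server speaks.
>>         System.out.println("client bindings: " + Constants.VERSION);
>>         System.out.println("server reports:  " + client.describe_version());
>>         transport.close();
>>     }
>> }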
>>
>> Hope that helps.
>>
>> Aaron
>>
>>
>> On 15 Oct 2010, at 12:32 PM, Frank LoVecchio <fr...@isidorey.com> wrote:
>>
>> 10:10:21,787 ERROR ~ Error getting Sensor
>> org.apache.thrift.TApplicationException: Internal error processing get_slice
>>     at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>     at org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(Cassandra.java:544)
>>     at org.apache.cassandra.thrift.Cassandra$Client.get_slice(Cassandra.java:524)
>>     at org.scale7.cassandra.pelops.Selector$6.execute(Selector.java:538)
>>     at org.scale7.cassandra.pelops.Selector$6.execute(Selector.java:535)
>>     at org.scale7.cassandra.pelops.Operand.tryOperation(Operand.java:45)
>>     at org.scale7.cassandra.pelops.Selector.getSuperColumnsFromRow(Selector.java:545)
>>     at org.scale7.cassandra.pelops.Selector.getSuperColumnsFromRow(Selector.java:522)
>>
>> Not sure if this is a Pelops thing or a Thrift thing.
>>
>> I spun up a new cluster of 3 nodes a couple of nights ago from the nightly
>> 0.7 beta 2 builds. When I include all 3 nodes in the Pelops pool and run
>> this:
>>
>> List<SuperColumn> cols = selector.getSuperColumnsFromRow(
>>         getColFamilyName(), key,
>>         Selector.newColumnsPredicateAll(true, 1000),
>>         ConsistencyLevel.ONE);
>>
>> I get the error above. When I create a new pool with only 1 node, I don't
>> get the error, but I get different data back than if I create a new pool
>> with another single node. Why is this happening? Here is my cassandra.yaml
>> file, which I haven't modified on the fly:
>>
>> # Configuration wiki:
>> http://wiki.apache.org/cassandra/StorageConfiguration
>> authority: org.apache.cassandra.auth.AllowAllAuthority
>> authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
>> auto_bootstrap: true
>> binary_memtable_throughput_in_mb: 256
>> cluster_name: DevCluster
>> column_index_size_in_kb: 64
>> commitlog_directory: /cassandra/commitlog
>> saved_caches_directory: /cassandra/saved_caches
>> commitlog_rotation_threshold_in_mb: 128
>> commitlog_sync: periodic
>> commitlog_sync_period_in_ms: 10000
>> concurrent_reads: 8
>> concurrent_writes: 50
>> data_file_directories:
>> - /cassandra/data
>> disk_access_mode: auto
>> endpoint_snitch: org.apache.cassandra.locator.SimpleSnitch
>> dynamic_snitch: true
>> hinted_handoff_enabled: true
>> in_memory_compaction_limit_in_mb: 256
>> keyspaces:
>>     - name: Keyspace
>>       replica_placement_strategy:
>> org.apache.cassandra.locator.SimpleStrategy
>>       replication_factor: 3
>>       column_families:
>>
>>         - name: Sensor
>>           column_type: Super
>>           compare_with: TimeUUIDType
>>           gc_grace_seconds: 864000
>>           keys_cached: 200000.0
>>           preload_row_cache: false
>>           read_repair_chance: 1.0
>>           rows_cached: 0.0
>>
>> listen_address: ec2-internal-ip1
>> memtable_flush_after_mins: 60
>> memtable_operations_in_millions: 0.3
>> memtable_throughput_in_mb: 64
>> partitioner: org.apache.cassandra.dht.RandomPartitioner
>> phi_convict_threshold: 8
>> rpc_port: 9160
>> rpc_timeout_in_ms: 10000
>> seeds:
>> - ec2-internal-ip1
>> - ec2-internal-ip2
>> - ec2-internal-ip3
>> sliced_buffer_size_in_kb: 64
>> snapshot_before_compaction: false
>> storage_port: 7000
>> thrift_framed_transport_size_in_mb: 15
>> thrift_max_message_length_in_mb: 16
>> initial_token:
>>
>> Frank LoVecchio
>> Software Engineer
>> Isidorey LLC
>>
>> franklovecchio.com
>> rodsandricers.com
>>
>>
>
>
