Getting error while inserting data in cassandra table using Java with JDBC

2013-04-17 Thread himanshu.joshi

Hi,


When I am trying to insert data into a table using Java with JDBC, I am getting the following error:


InvalidRequestException(why:cannot parse 'Jo' as hex bytes)

My insert query is:
insert into temp(id,name,value,url_id) VALUES(108, 'Aa','Jo',10);

This insert query runs successfully from the CQLSH command prompt but not from the code.
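
(A minimal sketch of how such an insert might be issued through the cassandra-jdbc driver is shown below; the driver class name, host, port, and keyspace name are assumptions for illustration, not details taken from the original message.)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TempInsert {
    public static void main(String[] args) throws Exception {
        // Register the cassandra-jdbc driver (class name assumed for the 1.2.x driver).
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");

        // "demo" is a hypothetical keyspace; 9160 is the Thrift/RPC port.
        Connection conn = DriverManager.getConnection("jdbc:cassandra://127.0.0.1:9160/demo");
        try {
            // Bind the values instead of concatenating them into the CQL string,
            // so the driver performs the type conversion for the text columns.
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO temp (id, name, value, url_id) VALUES (?, ?, ?, ?)");
            ps.setLong(1, 108L);
            ps.setString(2, "Aa");
            ps.setString(3, "Jo");
            ps.setLong(4, 10L);
            ps.executeUpdate();
            ps.close();
        } finally {
            conn.close();
        }
    }
}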


The query I used to create the table in CQLSH is:

CREATE TABLE temp (
  id bigint PRIMARY KEY,
  dt_stamp timestamp,
  name text,
  url_id bigint,
  value text
) WITH
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};



I guess the problem may be because of undefined key_validation_class, default_validation_class, comparator, etc.

Is there any way to define these attributes using CQLSH?
I have already tried the ASSUME command, but it has not resolved the problem.


I am a beginner in Cassandra and need your guidance.

--
Thanks & Regards,
Himanshu Joshi



Re: Getting error while inserting data in cassandra table using Java with JDBC

2013-04-17 Thread himanshu.joshi


On 04/18/2013 12:06 AM, aaron morton wrote:

What version are you using?
And what JDBC driver?

Sounds like the driver is not converting the value to bytes for you.
I guess the problem may be because of undefined key_validation_class, default_validation_class, comparator, etc.

If you are using CQL these are not relevant.

Cheers

-
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

Hi Aaron,

The problem is resolved now, as I upgraded the JDBC driver to version 1.2.2.
Earlier I was using JDBC driver version 1.1.6 with Cassandra 1.2.2.

Thanks for your guidance.

--
Thanks & Regards,
Himanshu Joshi



Error on Range queries

2013-05-03 Thread himanshu.joshi

Hi,

I have created a 2-node test cluster in Cassandra version 1.2.3 with Simple Strategy, Replication Factor 2, and ByteOrderedPartitioner (so as to get range query functionality).


When I am using a range query on a secondary index in CQLSH, I am getting the error:


"Bad Request: No indexed columns present in by-columns clause with Equal 
operator
Perhaps you meant to use CQL 2? Try using the -2 option when starting 
cqlsh."


My query is: select * from temp where min_update >10 limit 5;



My table structure is:

CREATE TABLE temp (
  id bigint PRIMARY KEY,
  archive_name text,
  country_name text,
  description text,
  dt_stamp timestamp,
  location_id bigint,
  max_update bigint,
  min_update bigint
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};

CREATE INDEX temp_min_update_idx ON temp (min_update);


Range queries are working fine on primary key.


I am getting the same error on another query, on another table, temp2:

select * from temp2 where reffering_url='Some URL';

This table also has a secondary index on this field ("reffering_url").

Any help would be appreciated.

--
Thanks & Regards,
Himanshu Joshi



Re: Error on Range queries

2013-05-06 Thread himanshu.joshi

Thanks aaron..

--
Regards
Himanshu Joshi


On 05/06/2013 02:22 PM, aaron morton wrote:

"Bad Request: No indexed columns present in by-columns clause with Equal 
operator
Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh."

My query is: select * from temp where min_update >10 limit 5;

You have to have at least one indexed column in the where clause that uses the equal operator.
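
For example (an illustrative sketch, not part of the original reply; the connection details and the extra index on country_name are assumptions), the rule plays out like this when the queries are issued through JDBC against the temp table and the min_update index shown above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SecondaryIndexQueries {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
        Connection conn = DriverManager.getConnection("jdbc:cassandra://127.0.0.1:9160/demo");
        Statement stmt = conn.createStatement();

        // Accepted: equality restriction on the indexed column min_update.
        ResultSet rs = stmt.executeQuery("SELECT * FROM temp WHERE min_update = 10");
        while (rs.next()) {
            System.out.println(rs.getLong("id"));
        }

        // Rejected with "No indexed columns present in by-columns clause with
        // Equal operator": a range restriction alone on a secondary index.
        //   SELECT * FROM temp WHERE min_update > 10 LIMIT 5;
        //
        // If another column (here country_name, hypothetically) were also indexed,
        // the range could be combined with an equality on it, typically together
        // with ALLOW FILTERING:
        //   CREATE INDEX ON temp (country_name);
        //   SELECT * FROM temp WHERE country_name = 'NZ' AND min_update > 10 ALLOW FILTERING;

        stmt.close();
        conn.close();
    }
}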

Cheers
  
-

Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com




java.lang.AssertionError on starting the node

2013-05-30 Thread himanshu.joshi

Hi,
I have created a 2-node test cluster in Cassandra version 1.2.3 with Simple Strategy and Replication Factor 2. The Java version is "1.6.0_27". The seed node is working fine, but when I start the second node it shows the following error:


ERROR 10:16:55,603 Exception in thread Thread[FlushWriter:2,5,main]
java.lang.AssertionError: 105565
        at org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:342)
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:176)
        at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:481)
        at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:440)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)
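
(For context: the assertion appears to be a length check that fires when a key or column name is serialized with a two-byte length prefix, so the reported value 105565 would exceed the 65535-byte maximum. A simplified, illustrative sketch of that kind of check, not the actual Cassandra source:)

import java.io.DataOutput;
import java.io.IOException;
import java.nio.ByteBuffer;

final class ShortLengthWriter {
    // Largest length that fits in an unsigned 16-bit prefix.
    private static final int MAX_UNSIGNED_SHORT = 0xFFFF; // 65535

    // Simplified sketch: a buffer longer than 65535 bytes (such as the 105565
    // reported in the stack trace above) trips the assertion before the length
    // prefix is written. Assumes an array-backed buffer.
    static void writeWithShortLength(ByteBuffer bytes, DataOutput out) throws IOException {
        int length = bytes.remaining();
        assert length >= 0 && length <= MAX_UNSIGNED_SHORT : length;
        out.writeShort(length);
        out.write(bytes.array(), bytes.arrayOffset() + bytes.position(), length);
    }
}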

This node was working fine earlier and also has data on it.
Any help would be appreciated.

--
Thanks & Regards,
Himanshu Joshi



Re: java.lang.AssertionError on starting the node

2013-06-04 Thread himanshu.joshi

Hi,

I deleted a column family from the keyspace and the error is gone now.


Thanks for the reply.

--
Thanks & Regards,
Himanshu Joshi


On 05/31/2013 08:16 PM, S C wrote:
What was the node doing right before the ERROR? Can you post some more of the log?


Thanks,
SC







New virtual node not starting

2013-06-05 Thread himanshu.joshi

Hi,

I have added a third (virtual) node to a previously existing two-node (virtual) test cluster. When I start the new node after making changes in the configuration file, it gets stuck for hours every time, showing the following logs:


..
.
 INFO 17:50:57,571 JOINING: Starting to bootstrap...
 INFO 17:50:57,927 Finished streaming session 
0b78bd70-cc48-11e2-a1d4-b5b04f1aa4dc from /192.168.1.110
 INFO 17:55:26,729 Compacting 
[SSTableReader(path='/hda1-1/cassandra/data/system/peers/system-peers-ib-4-Data.db'), 
SSTableReader(path='/hda1-1/cassandra/data/system/peers/system-peers-ib-1-Data.db'), 
SSTableReader(path='/hda1-1/cassandra/data/system/peers/system-peers-ib-3-Data.db'), 
SSTableReader(path='/hda1-1/cassandra/data/system/peers/system-peers-ib-2-Data.db')]
 INFO 17:55:26,877 Compacted 4 sstables to 
[/hda1-1/cassandra/data/system/peers/system-peers-ib-5,].  18,311 bytes 
to 17,815 (~97% of original) in 146ms = 0.116368MB/s.  6 total rows, 2 
unique.  Row merge counts were {1:0, 2:0, 3:2, 4:0, }

 INFO 21:50:24,926 Saved KeyCache (19 items) in 116 ms
 INFO 01:50:24,923 Saved KeyCache (19 items) in 112 ms
 INFO 05:50:24,927 Saved KeyCache (19 items) in 116 ms
 INFO 09:50:24,932 Saved KeyCache (19 items) in 120 ms


I am using the Cassandra tarball version 1.2.3 on all nodes with Simple Strategy, Replication Factor 2, and ByteOrderedPartitioner. The Java version is "1.6.0_27".


The steps I followed to add the new virtual node are:

- Unzipped the tarball in the desired path.
- Edited the cassandra.yaml file (cluster_name, listen_address, endpoint_snitch, num_tokens, seed_provider, paths for data and logs, etc.).
- Then started Cassandra on this new node, which got stuck.


Output of ./nodetool netstats on this node is:

Mode: JOINING
Not sending any streams.
 Nothing streaming from /192.168.1.103
Pool Name                    Active   Pending      Completed
Commands                        n/a         0              3
Responses                       n/a         0          11105

and for other two nodes is:

Mode: NORMAL
Not sending any streams.
Not receiving any streams.
Pool Name                    Active   Pending      Completed
Commands                        n/a         0              0
Responses                       n/a         0          11319

Mode: NORMAL
 Nothing streaming to /192.168.0.224
Not receiving any streams.
Pool Name                    Active   Pending      Completed
Commands                        n/a         0              0
Responses                       n/a         0          11283


Am I doing something wrong in creating a new node?

--
Thanks & Regards,
Himanshu Joshi