Hello All,
We have a schema which can be modelled as *(studentID int, subjectID int,
marks int, PRIMARY KEY(studentID, subjectID))*. There can be ~1M studentIDs,
and for each studentID there can be ~10K subjectIDs. Queries can be by
studentID alone or by studentID and subjectID. We have a 3 node (each hav
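For reference, a minimal sketch of that table and the two query shapes using the DataStax Java driver; the contact point, keyspace name ("ks") and example IDs are placeholders, not taken from the thread:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class MarksQueries {
    public static void main(String[] args) {
        // Placeholder contact point; the keyspace "ks" is assumed to exist.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");

        // studentID is the partition key and subjectID the clustering column,
        // so both query shapes below are served from a single partition.
        session.execute("CREATE TABLE marks ("
                + "studentID int, subjectID int, marks int, "
                + "PRIMARY KEY (studentID, subjectID))");

        // All subjects for one student (reads the whole ~10K-row partition).
        ResultSet byStudent = session.execute(
                "SELECT subjectID, marks FROM marks WHERE studentID = 42");
        for (Row r : byStudent) {
            System.out.println(r.getInt("subjectID") + " -> " + r.getInt("marks"));
        }

        // A single subject for one student (single row).
        Row one = session.execute(
                "SELECT marks FROM marks WHERE studentID = 42 AND subjectID = 7").one();
        if (one != null) {
            System.out.println("marks = " + one.getInt("marks"));
        }
        cluster.close();
    }
}

With studentID as the partition key, the ~10K subjectIDs per student live in one partition, so both query patterns are single-partition reads.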
Hello,
I am building a write client in Java to insert records into Cassandra 2.0.5.
I am using the DataStax Java driver.
Problem: the data model is dynamic. By dynamic, I mean that the number of
columns and the datatypes of the columns will be given as input by the user. It
has only 1 keyspa
Hello Varsha,
Your best bet is to go with the blob type, serializing all data into bytes.
Another alternative is to use text and serialize to JSON.
For the dynamic columns, use clustering columns in CQL3 with a blob/text type.
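Roughly, a sketch of that idea (the table and column names here are made up, and the UTF-8 encoding only stands in for whatever serialization you choose per user-declared datatype):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class DynamicColumns {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");

        // One CQL row per (record id, user-supplied column name); the value is
        // an opaque blob, so any user-declared datatype can be serialized into it.
        session.execute("CREATE TABLE dynamic_data ("
                + "record_id text, col_name text, col_value blob, "
                + "PRIMARY KEY (record_id, col_name))");

        PreparedStatement insert = session.prepare(
                "INSERT INTO dynamic_data (record_id, col_name, col_value) VALUES (?, ?, ?)");

        // UTF-8 bytes stand in for the client's per-datatype serialization.
        ByteBuffer value = ByteBuffer.wrap("42".getBytes(StandardCharsets.UTF_8));
        session.execute(insert.bind("row-1", "age", value));

        cluster.close();
    }
}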
Regards
Duy Hai DOAN
On Wed, Apr 2, 2014 at 11:21 AM, Raveendran, Va
I want to export all the data of a particular column family to a text file
from the Cassandra cluster.
I tried
copy keyspace.mycolumnfamily to '/root/ddd/xx.csv';
It gave me a timeout error.
I tried the below in cassandra.yaml:
request_timeout_in_ms: 1000
read_request_timeout_in_ms: 1000
range_req
http://mail-archives.apache.org/mod_mbox/cassandra-user/201309.mbox/%3C9AF3ADEDDFED4DDEA840B8F5C6286BBA@vig.local%3E
http://stackoverflow.com/questions/18872422/rpc-timeout-error-while-exporting-data-from-cql
Google for more.
Best regards / Pagarbiai
Viktor Jevdokimov
Senior Developer
Email: v
Hi,
Thanks for replying.
I didn't quite get what you meant by "use clustering columns in CQL3 with
blob/text type".
I have elaborated my problem statement below.
Assume the schema of the keyspace to which random records need to be inserted
is given in the following format:
KeySpace Name : KS_
Cassandra 1.2.15, using commodity hardware.
On Tue, Apr 1, 2014 at 6:37 PM, Robert Coli wrote:
> On Tue, Apr 1, 2014 at 3:24 PM, Redmumba wrote:
>
>> Is it possible to have true "drop in" node replacements? For example, I
>> have a cluster of 51 Cassandra nodes, 17 in each data center. I had
Thanks for the reply. Most of the solutions provided on the web involve
some kind of 'where' clause in the data extract, then exporting the next set
until done. I have a column family with no timestamp and no other column I
can use to filter the data. One other solution provided was to use
pagination, b
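If the cluster is on Cassandra 2.0+, one way to paginate without any filter column is the Java driver's automatic paging via a fetch size. A rough sketch, reusing the output path from the earlier COPY attempt (the keyspace/table names are placeholders and the CSV formatting is left out):

import java.io.FileWriter;
import java.io.PrintWriter;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class ExportAll {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("mykeyspace");

        // A modest fetch size keeps each page request short, so the full scan
        // no longer has to complete within a single range request timeout.
        Statement stmt = new SimpleStatement("SELECT * FROM mycolumnfamily");
        stmt.setFetchSize(1000);

        PrintWriter out = new PrintWriter(new FileWriter("/root/ddd/xx.csv"));
        try {
            ResultSet rs = session.execute(stmt);
            for (Row row : rs) {
                // The driver fetches the next page transparently while iterating;
                // Row.toString() is only a placeholder for real CSV formatting
                // of the column family's actual columns.
                out.println(row.toString());
            }
        } finally {
            out.close();
            cluster.close();
        }
    }
}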
Hello Shrikar,
We are still facing the read latency issue; here is the histogram:
http://pastebin.com/yEvMuHYh
On Sat, Mar 29, 2014 at 8:11 AM, Apoorva Gaurav
wrote:
> Hello Shrikar,
>
> Yes, the primary key is (studentID, subjectID). I had dropped the test table,
> recreating and populating it, post whic