By dynamic columns, I mean columns not defined in the schema. In the current scenario, every row has some data in columns defined in the schema, while the rest of the data is in columns that are not. We used Thrift for inserting data.
In the new schema, we want to create a collection column and move all the data that was in columns NOT defined in the schema into the collection.
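For what it's worth, a rough sketch of that transformation with the DataStax spark-cassandra-connector might look like the following. The keyspace, target table, and column names here are assumptions, and it presumes the Thrift dynamic columns are visible to the connector as regular columns:

```scala
import com.datastax.spark.connector._

// Columns defined in the schema; everything else in a row is "dynamic".
val staticCols = Set("key", "Col1", "Col2", "Col3", "Col4")

sc.cassandraTable("ks", "table1")
  .map { row =>
    // Fold every column the schema does not define into a text map.
    val dynamic: Map[String, String] = row.columnNames
      .filterNot(staticCols.contains)
      .map(name => name -> row.getString(name))
      .toMap
    (row.getString("key"), dynamic)
  }
  .saveToCassandra("ks", "table1_new", SomeColumns("key", "dynamic_cols"))
```

Only the row key and the map are carried over here to keep the sketch short; the statically defined columns would be copied the same way.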

Thanks
Anuj

Sent from Yahoo Mail on Android 
 
On Wed, 3 Feb, 2016 at 12:36 am, DuyHai Doan <doanduy...@gmail.com> wrote:
You'll need to do the transformation in Spark, although I don't understand what you mean by "dynamic columns". Given the CREATE TABLE script you gave earlier, there is no such thing as dynamic columns.
On Tue, Feb 2, 2016 at 8:01 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:

Will it be possible to read the dynamic column data from compact storage and transform it into a collection, e.g. a map, in the new table?

Thanks
Anuj

 
On Wed, 3 Feb, 2016 at 12:28 am, DuyHai Doan <doanduy...@gmail.com> wrote:
So there is no "static" (in the sense of CQL static) column in your legacy table. Just define a Scala case class to match this table and use Spark to dump the content to a new non-compact CQL table.
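That case-class approach might look roughly like this with the spark-cassandra-connector (the keyspace, case-class, and target-table names are made up for illustration; the quoted uppercase column names would likely need an explicit column mapping in practice):

```scala
import com.datastax.spark.connector._

// Mirrors the columns of the legacy compact-storage table.
case class LegacyRow(key: String, col1: Array[Byte], col2: String,
                     col3: String, col4: String)

// Read the old table and dump its content into the new non-compact table.
sc.cassandraTable[LegacyRow]("ks", "table1")
  .saveToCassandra("ks", "table1_new")
```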
On Tue, Feb 2, 2016 at 7:55 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:

Our old table looks like this from cqlsh:
CREATE TABLE table1 (
  key text,
  "Col1" blob,
  "Col2" text,
  "Col3" text,
  "Col4" text,
  PRIMARY KEY (key)
) WITH COMPACT STORAGE AND …
It will also have some dynamic text data, which we are planning to move into collections.
Please let me know if you need more details.
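For reference, a possible non-compact target schema with a collection for the dynamic data could look like this (the table and collection column names are only placeholders, not an agreed design):

```sql
CREATE TABLE table1_new (
  key text,
  "Col1" blob,
  "Col2" text,
  "Col3" text,
  "Col4" text,
  dynamic_cols map<text, text>,  -- holds the data from columns not in the old schema
  PRIMARY KEY (key)
);
```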

Thanks
Anuj
 
On Wed, 3 Feb, 2016 at 12:14 am, DuyHai Doan <doanduy...@gmail.com> wrote:
Can you give the CREATE TABLE script for your old compact storage table? Or at least the cassandra-client creation script.
On Tue, Feb 2, 2016 at 3:48 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:

Thanks DuyHai !! We were also thinking of doing it the "Spark" way, but I was not sure it would be so simple :)

We have a compact storage CF with each row having some data in statically defined columns and other data in dynamic columns. Is the approach mentioned in the link adaptable to the scenario where we want to migrate the existing data to a non-compact CF with static columns and collections?

Thanks
Anuj

--------------------------------------------
On Tue, 2/2/16, DuyHai Doan <doanduy...@gmail.com> wrote:

 Subject: Re: Moving Away from Compact Storage
 To: user@cassandra.apache.org
 Date: Tuesday, 2 February, 2016, 12:57 AM

Use Apache Spark to parallelize the data migration. Look at this piece of code:
https://github.com/doanduyhai/Cassandra-Spark-Demo/blob/master/src/main/scala/usecases/MigrateAlbumsData.scala#L58-L60
If your source and target tables have the SAME structure (except for the COMPACT STORAGE clause), migration with Spark is 2 lines of code.
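In that same-structure case, the two lines would be roughly the following (keyspace and table names are hypothetical):

```scala
import com.datastax.spark.connector._

// Read every row of the compact table and write it unchanged to the new one.
sc.cassandraTable("ks", "table1")
  .saveToCassandra("ks", "table1_new")
```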
On Mon, Feb 1, 2016 at 8:14 PM, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:
Hi,
What's the fastest and most reliable way to migrate data from a Compact Storage table to a Non-Compact Storage table?
I was not able to find any command for dropping the compact storage directive... so I think migrating data is the only way... any suggestions?
Thanks
Anuj
