…happen.
I also tried using getLongOption, but this exception still happens.
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md
At first I wanted to open an issue on the spark-cassandra-connector project, but there is no issue tracker there, so I am asking here.
Tks, qihuang.zheng
Original Message
From: Jeff Jirsa <jeff.ji...@crowdstrike.com>
To: user@cassandra.apache.org
Sent: Thursday, October 22, 2015 13:52
Subject: Re: C* Table Changed and Data Migration with new primary key
Because the data format has changed, you’ll need to read it out and write it back in again.
This means using either a driver (java, python, c++, etc), or something like spark.
Consider the new 3.0 Materialized Views feature - you keep the existing table and create three MVs, each with a different primary key. Cassandra will then populate the new MVs from the existing base table data.
See:
https://issues.apache.org/jira/browse/CASSANDRA-6477
-- Jack Krupansky
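For illustration only, here is a minimal sketch of what one of the materialized views Jack describes might look like, issued through the DataStax Java driver from Scala. The keyspace, table, and column names (my_ks.events keyed by id, re-keyed by device_id) are hypothetical, since the real schema is not shown in this thread. Every column of the view's primary key must be restricted with IS NOT NULL, and the view's key may add at most one column that is not part of the base table's primary key.

```scala
import com.datastax.driver.core.Cluster

object CreateViewByDevice {
  def main(args: Array[String]): Unit = {
    // Hypothetical contact point; adjust for the real cluster.
    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    val session = cluster.connect()

    // Hypothetical base table my_ks.events(id, device_id, ts, payload)
    // with PRIMARY KEY (id); the view re-keys the same rows by device_id.
    // Cassandra back-fills the view from existing base-table data.
    session.execute(
      """CREATE MATERIALIZED VIEW IF NOT EXISTS my_ks.events_by_device AS
        |  SELECT * FROM my_ks.events
        |  WHERE device_id IS NOT NULL AND id IS NOT NULL
        |  PRIMARY KEY (device_id, id)""".stripMargin)

    cluster.close()
  }
}
```

The other two views would follow the same pattern with their own key columns; this route requires Cassandra 3.0, as Jack notes.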
From: Jeff Jirsa <jeff.ji...@crowdstrike.com>
To: user@cassandra.apache.org
Sent: Thursday, October 22, 2015 13:52
Subject: Re: C* Table Changed and Data Migration with new primary key
Because the data format has changed, you’ll need to read it out and write it
back in again.
This means using either a driver (java, python, c++, etc), or something like
spark.
In either case, split up the token range so you can parallelize it for
significant speed improvements.
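As a rough sketch of the Spark route described above, assuming the spark-cassandra-connector and hypothetical names (my_keyspace.old_table copied into my_keyspace.new_table, where new_table has already been created with the new primary key): the connector splits the read across the cluster's token ranges, so the copy is parallelized as suggested, and columns are matched by name on write.

```scala
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

object MigrateToNewKey {
  def main(args: Array[String]): Unit = {
    // Hypothetical contact point and names; adjust for the real cluster/schema.
    val conf = new SparkConf()
      .setAppName("migrate-to-new-primary-key")
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext(conf)

    // The connector splits this scan by token range into Spark partitions,
    // so the read is parallelized across the cluster.
    val rows = sc.cassandraTable("my_keyspace", "old_table")

    // Write the same rows into the table created with the new primary key;
    // columns are matched by name.
    rows.saveToCassandra("my_keyspace", "new_table")

    sc.stop()
  }
}
```

The same copy can be done with a plain driver (Java, Python, C++) by paging over the token range yourself, as Jeff mentions, but the connector handles the splitting for you.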
From: "qihu