I don't know of any existing projects addressing this, but I'd advise you
to study LOAD DATA INFILE in the MySQL manual for your target version. It
essentially describes a delimited (CSV-like) file format, where each file
represents a subset of the data for a single table. It is far and away the
fastest method for loading huge amounts of data into MySQL
non-transactionally.
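
For illustration, here's a minimal sketch of what the load step could look
like from Python with mysql-connector-python; the table, columns, and file
path are made-up placeholders, and the server must have local_infile
enabled for the LOCAL variant to work:

    import mysql.connector  # pip install mysql-connector-python

    # allow_local_infile must be enabled client-side for LOAD DATA LOCAL
    # INFILE; the server must also be started with local_infile=ON.
    conn = mysql.connector.connect(
        host="mysql-host",
        user="loader",
        password="secret",
        database="target_db",
        allow_local_infile=True,
    )
    cur = conn.cursor()

    # Load one CSV file exported from Cassandra into one table.
    # The column order in the file must match the column list here.
    cur.execute(r"""
        LOAD DATA LOCAL INFILE '/data/export/users.csv'
        INTO TABLE users
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        (id, name, email, created_at)
    """)
    conn.commit()
    cur.close()
    conn.close()

LOCAL streams the file from the client; if you can place the files on the
MySQL host itself, plain LOAD DATA INFILE avoids that extra round trip.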

On the downside, you are likely going to have to author your own Cassandra
client tool to generate those files.
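
As a rough sketch of what such an export tool could look like with the
DataStax Python driver (keyspace, table, and columns are again
hypothetical):

    import csv
    from cassandra.cluster import Cluster  # pip install cassandra-driver
    from cassandra.query import SimpleStatement

    # Page through the table rather than pulling it all at once, to keep
    # memory use and cluster load bounded.
    cluster = Cluster(["cassandra-host"])
    session = cluster.connect("my_keyspace")

    query = SimpleStatement(
        "SELECT id, name, email, created_at FROM users",
        fetch_size=1000,  # driver-level paging
    )

    with open("/data/export/users.csv", "w", newline="") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_ALL)
        for row in session.execute(query):
            writer.writerow([row.id, row.name, row.email, row.created_at])

    cluster.shutdown()

The fetch_size paging is what keeps the export from hammering a cluster
that is already overloaded.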

On Tue, May 15, 2018, 6:59 AM Jing Meng, <self.rel...@gmail.com> wrote:

> Hi guys, for some historical reasons, our Cassandra cluster is currently
> overloaded and operating on it has become something of a nightmare. Anyway,
> (sadly) we're planning to migrate the Cassandra data back to MySQL...
>
> So we're not quite clear on how to migrate the historical data from
> Cassandra.
>
> As far as I know there is the COPY command, but I wonder if it works in a
> production environment where hundreds of gigabytes of data are present. And,
> if it does, would it impact server performance significantly?
>
> Apart from that, I know the spark-connector can be used to scan data from a
> C* cluster, but I'm not that familiar with Spark and still not sure whether
> writing data to a MySQL database can be done naturally with the
> spark-connector.
>
> Are there any suggestions/best practices/reading materials for doing this?
>
> Thanks!
>
