Hi guys, for some historical reasons our Cassandra cluster is currently
overloaded, and operating it has become something of a nightmare. Anyway,
we're (sadly) planning to migrate the Cassandra data back to MySQL...

We're not quite clear on how to migrate the historical data out of
Cassandra.

As far as I know there is the cqlsh COPY command, but I wonder whether it
works in a production environment where several hundred gigabytes of data
are present. And if it does, would it impact server performance
significantly?
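
For reference, an export with COPY would look something like the following
(my_keyspace.my_table and the output path are placeholders; PAGESIZE and
MAXREQUESTS are standard COPY TO options that can be tuned down to reduce
pressure on the cluster):

    -- run from cqlsh; keyspace, table, and path are placeholders
    COPY my_keyspace.my_table TO '/data/export/my_table.csv'
      WITH HEADER = TRUE
      AND PAGESIZE = 1000   -- rows fetched per page
      AND MAXREQUESTS = 2;  -- parallel requests per worker process

Note that COPY funnels everything through a single cqlsh client, so at
hundreds of gigabytes it tends to be slow and fragile compared to a
distributed approach.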

Apart from that, I know the spark-cassandra-connector can be used to scan
data out of the C* cluster, but I'm not that familiar with Spark and am
still not sure whether writing the data to MySQL can be done naturally
alongside the connector.
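
In case it helps anyone answer: as I understand it, the connector only
covers the Cassandra side, and the MySQL write would go through Spark's
standard JDBC data source rather than the connector itself. A minimal
Scala sketch, where the contact point, keyspace/table names, JDBC URL, and
credentials are all placeholder assumptions:

    import org.apache.spark.sql.SparkSession

    object CassandraToMysql {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("cassandra-to-mysql")
          // placeholder contact point for the C* cluster
          .config("spark.cassandra.connection.host", "10.0.0.1")
          .getOrCreate()

        // Full-table scan through the spark-cassandra-connector data source
        val df = spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table"))
          .load()

        // Write out through Spark's built-in JDBC data source
        df.write
          .format("jdbc")
          .option("url", "jdbc:mysql://mysql-host:3306/my_db") // placeholder URL
          .option("dbtable", "my_table")
          .option("user", "migrator")   // placeholder credentials
          .option("password", "secret")
          .option("driver", "com.mysql.cj.jdbc.Driver")
          .mode("append")
          .save()

        spark.stop()
      }
    }

The connector splits the scan by token range, so the read load is spread
across the cluster, and capping the job's parallelism should keep the
pressure on the already-overloaded nodes manageable.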

Are there any suggestions / best practices / reading materials for doing this?

Thanks!
