I would also push for something besides a full refresh, if at all possible. It feels like a waste of resources to me, and not predictably scalable. Suggestions: use a queue to send writes to both systems. If the downstream system doesn't handle TTL, perhaps set an expiration date and run a purge query on the downstream target, as sketched below.
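A minimal sketch of that dual-write idea, assuming the DataStax Python driver, a hypothetical events table, and a stand-in queue client (every name here is illustrative, not from this thread):

from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"])           # contact point is an assumption
session = cluster.connect("my_keyspace")  # hypothetical keyspace

# The source-side write carries a 30-day TTL so Cassandra expires the row itself.
insert = session.prepare(
    "INSERT INTO events (id, payload, expires_at) "
    "VALUES (?, ?, ?) USING TTL 2592000"
)

def dual_write(queue, event_id, payload, expires_at):
    # One copy goes to Cassandra...
    session.execute(insert, (event_id, payload, expires_at))
    # ...and the same record goes onto the queue feeding the downstream
    # system; queue.publish stands in for whatever queue client is in use.
    queue.publish({"id": str(event_id), "payload": payload,
                   "expires_at": expires_at.isoformat()})

On the downstream target, a scheduled purge along the lines of DELETE FROM events WHERE expires_at < now() substitutes for the TTL.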
If you have to do the full refresh, perhaps a Spark job would be a decent solution (a rough pyspark sketch is at the end of this thread). I would probably create a separate DC (with a lower replication factor and a smaller number of nodes) just to handle the analytical/unload kind of workload, if the other functions of the cluster might be impacted by the unload. DSBulk from DataStax is very fast and scriptable, too.

Sean Durity – Staff Systems Engineer, Cassandra

From: JOHN, BIBIN <bj9...@att.com>
Sent: Wednesday, February 19, 2020 5:25 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] RE: Mechanism to Bulk Export from Cassandra on daily Basis

Thank you for the suggestion. The full refresh is designed this way because, with deltas, we cannot identify what got deleted. So the downstream systems prefer full data every day.

Thanks
Bibin John

From: Reid Pinchback <rpinchb...@tripadvisor.com>
Sent: Wednesday, February 19, 2020 3:14 PM
To: user@cassandra.apache.org
Subject: Re: Mechanism to Bulk Export from Cassandra on daily Basis

To the question of "best approach": so far the comments have been about alternative tools. Another axis you might want to consider is the data model. For example, say you have 600M rows and you want to do a daily transfer of that data. The first question that comes to mind is: do you need all the data every day? Usually that would only be the case if all of the data is at risk of changing. Generally, the way I'd cut down the pain on something like this is to figure out whether the data model currently mutates, or could be made to mutate, only in a limited subset. Then maybe all you are transferring are the daily changes (a sketch of one such change-log pattern is also at the end of this thread). Systems that catch up on daily changes usually pull single-digit percentages of the data volume compared to the entire storage footprint. That's not only a lot less data to pull, it's also a lot less impact on the ongoing operations of the cluster while you are pulling it.

R

From: "JOHN, BIBIN" <bj9...@att.com>
Reply-To: user@cassandra.apache.org
Date: Wednesday, February 19, 2020 at 1:13 PM
To: user@cassandra.apache.org
Subject: Mechanism to Bulk Export from Cassandra on daily Basis

Team,
We have a requirement to bulk export data from Cassandra on a daily basis. The table contains close to 600M records and the cluster has 12 nodes. What is the best approach to do this?

Thanks
Bibin John
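For the Spark route above, a rough pyspark sketch, assuming the spark-cassandra-connector is on the classpath, the same hypothetical keyspace/table names as earlier, and a separate DC named "analytics" for the unload workload:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("daily-cassandra-export")
         .config("spark.cassandra.connection.host", "10.0.0.1")
         # Pin reads to the analytics DC so the transactional DC is untouched.
         .config("spark.cassandra.connection.localDC", "analytics")
         .getOrCreate())

snapshot = (spark.read
            .format("org.apache.spark.sql.cassandra")
            .options(keyspace="my_keyspace", table="events")
            .load())

# One compressed snapshot per day for the downstream consumers.
snapshot.write.mode("overwrite").parquet("hdfs:///exports/events/2020-02-19")

The DSBulk equivalent would be along the lines of dsbulk unload -k my_keyspace -t events -url /exports/events (same hypothetical names; check the DSBulk docs for the options your version supports).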
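And to make the delta idea in Reid's reply concrete, a sketch of a day-partitioned change-log table (hypothetical schema), where deletes are recorded as explicit rows; that is one answer to the "we cannot identify what got deleted" concern:

from datetime import date, timedelta
from cassandra.cluster import Cluster

session = Cluster(["10.0.0.1"]).connect("my_keyspace")

# Deletes become rows with op = 'D', so the daily delta can convey them.
# In practice you would likely add a bucket column to the partition key
# to keep any single day's partition from growing unbounded.
session.execute("""
    CREATE TABLE IF NOT EXISTS events_by_day (
        day date,
        id uuid,
        op text,       -- 'I', 'U', or 'D'
        payload text,
        PRIMARY KEY ((day), id)
    )""")

# The daily job then reads one day's changes instead of all 600M rows.
yesterday = date.today() - timedelta(days=1)
for row in session.execute(
        "SELECT id, op, payload FROM events_by_day WHERE day = %s",
        (yesterday,)):
    pass  # ship each change, including deletes, to the downstream system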