It really depends on how you manage your nodes. With an automation tool like Ansible, it's much easier to manage the rackdc file per node. The "master list" doesn't need to exist, because the file is written once and never updated afterwards. The automation tool creates nodes in the required DC/rack and writes that information to the rackdc file during node provisioning. It's also much faster to add nodes to a large cluster with the rackdc file - no rolling restart required.
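
For anyone who hasn't used it, the per-node file is tiny and the automation side is just a template. A minimal sketch - the variable names and paths below are only illustrative, not from any particular playbook:

    # templates/cassandra-rackdc.properties.j2
    # cassandra_dc / cassandra_rack come from host or group vars (names are illustrative)
    dc={{ cassandra_dc }}
    rack={{ cassandra_rack }}

    # task in the provisioning play
    - name: Write per-node snitch config
      ansible.builtin.template:
        src: cassandra-rackdc.properties.j2
        dest: /etc/cassandra/cassandra-rackdc.properties

Each node only ever knows its own DC and rack; gossip spreads that to the rest of the cluster, which is why no cluster-wide file or rolling restart is needed when nodes are added.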

On 02/06/2022 14:46, Durity, Sean R wrote:

I agree with Marc. We use the cassandra-topology.properties file (and PropertyFileSnitch) for our deployments. Having a different file on every node has never made sense to me. There would still have to be some master file somewhere from which to generate each individual node file. There is the (slight) penalty that a change in topology requires the distribution of a new file and a rolling restart.
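
For anyone not familiar with it, the topology file maps every node's address to its DC and rack in one place, roughly like this (the addresses are just placeholders):

    # ip=DC:rack
    10.0.1.11=DC1:RAC1
    10.0.1.12=DC1:RAC2
    10.0.2.11=DC2:RAC1
    # used for any node not listed above
    default=DC1:RAC1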

Long live the PropertyFileSnitch! 😉

Sean R. Durity

*From:* Paulo Motta <pauloricard...@gmail.com>
*Sent:* Thursday, June 2, 2022 8:59 AM
*To:* user@cassandra.apache.org
*Subject:* [EXTERNAL] Re: Topology vs RackDC

I think the topology file is better for static clusters, while rackdc is better for dynamic clusters where users can add/remove hosts without needing to update the topology file on every host.

On Thu, 2 Jun 2022 at 09:13 Marc Hoppins <marc.hopp...@eset.com> wrote:

    Hi all,

    Why is RACKDC preferred for production over TOPOLOGY?

    Surely one common file is far simpler to distribute than dealing
    with the mucky-muck of various configs for each host, depending on
    which rack and/or datacentre they are in?  It is also fairly
    self-documenting, with the entire cluster setup there in one file.

    From what I read in the documentation, regardless of which snitch
    one implements, cassandra-topology.properties will get read,
    either as a primary or as a backup...so why not just use topology
    for ALL cases?

    Thanks

    Marc

