Hi,
Is there any CLI or API I can use to validate that a hint file's checksum is
correct?
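I am not aware of a hint-file checksum CLI shipping with Cassandra itself. As a workaround under that assumption, a generic whole-file checksum recorded before shutdown and verified after restart at least detects corruption introduced by the snapshot/restore step (it does not parse Cassandra's internal per-chunk CRCs). A minimal sketch, using a temp directory as a stand-in for the real hints directory (often /var/lib/cassandra/hints):

```shell
# Hypothetical integrity check: record a checksum for every hint file before
# the node goes down, then verify after restore. The temp dir here stands in
# for the real hints directory so the sketch is runnable anywhere.
HINTS_DIR=$(mktemp -d)
printf 'demo-hint-bytes' > "$HINTS_DIR/example.hints"  # fake hint file for the demo

# Before shutdown: record whole-file checksums.
( cd "$HINTS_DIR" && sha256sum ./*.hints > "$HINTS_DIR/hints.sha256" )

# After restart/restore: verify nothing changed in transit.
( cd "$HINTS_DIR" && sha256sum -c "$HINTS_DIR/hints.sha256" )
```

`sha256sum -c` exits non-zero if any file differs, so the check drops straight into a pre-flight script.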
The reason for asking for such a utility/CLI: we are doing a rolling upgrade
of Cassandra in Google Cloud. Our approach is to bring nodes down, take a
snapshot backup, upgrade, and bring each node back up. During this process hint file
26, 2019, at 6:37 AM, Shishir Kumar wrote:
Hi,
We need input on a Cassandra upgrade strategy for the following:
1. We have datacenters across 4 geographies (multiple isolated deployments in
each DC).
2. The number of Cassandra nodes in each deployment is between 6 and 24.
3. The data volume on each node is between 150 and 400 GB.
4. All production environments have DR se
> …protect your database from an outage of an
> entire rack. So performing an upgrade on a few nodes at a time within a
> rack is the same as a partial rack outage, from the database's perspective.
>
> Have a nice upgrade!
>
> Josh
>
> On Fri, Nov 29, 2019 at 7:22 AM Shish
> …stack trace - anything else deserves a bug report.
>
> It’s unfortunate that people jump to just scrubbing the unreadable data -
> would appreciate an anonymized JIRA if possible. Alternatively work with
> your vendor to make sure they don’t have bugs in their readers somehow.
Options, assuming the data model and configurations are good and the data
size per node is less than 1 TB (though there is no such benchmark):
1. Scale infra for memory.
2. Try changing disk_access_mode to mmap_index_only. In this case you should
not have any in-memory DB tables.
3. Though Datastax does not recommend and
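For reference, option 2 above is a one-line change in cassandra.yaml (whether the line is present or commented out by default depends on the Cassandra version):

```yaml
# cassandra.yaml -- controls how SSTable components are read.
# Valid values: auto, mmap, mmap_index_only, standard.
# mmap_index_only memory-maps only index files, keeping data files off the
# mmap path and reducing virtual-memory pressure on memory-tight nodes.
disk_access_mode: mmap_index_only
```

The setting takes effect on node restart, so it slots naturally into a rolling restart.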
> On Tue, Dec 3, 2019 at 5:53 PM Shishir Kumar wrote:
Hi,
Is it possible to get/predict how long it will take for *nodetool repair -pr* to
complete on a node? Currently in one of my environments (~800 GB of data per
node in a 6-node cluster), it has been running for the last 3 days.
Regards,
Shishir
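As far as I know, nodetool itself offers no ETA for a running repair, but its side effects can be observed with real subcommands; a sketch, wrapped in a function so it is self-contained (the log path is an assumption for a typical package install):

```shell
# Observe a long-running repair. These show current activity, not an ETA.
repair_progress() {
  nodetool compactionstats                 # validation compactions spawned by repair
  nodetool netstats                        # active streaming sessions between nodes
  grep -i repair /var/log/cassandra/system.log | tail -n 20  # assumed log path
}
```

Running this periodically at least shows whether validation and streaming are still making progress, or whether a session has stalled.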