Hello team,
In some circumstances, my cluster was split into two schema versions
(half of the nodes on one version, the rest on another).
In the process of resolving this issue, I restarted some nodes.
Eventually, all nodes migrated to one schema, but it was not clear why they
chose exactly that version of the schema.
Hi Experts,
I have a 5-node cluster with 8-core CPUs and 32 GiB RAM per node.
If I have a write TPS of 5K/s and a read TPS of 8K/s, I want to know the
optimal heap size configuration for each Cassandra node.
Currently, the heap size is set to 8 GB. How can I tell whether Cassandra
requires more or less heap?
Hi,
while we were running 'nodetool repair -full -dcpar' on one node we got the
following error:
ERROR [AntiEntropyStage:1] 2019-05-18 16:00:04,808
RepairMessageVerbHandler.java:177 - Table with id
5fb6b730-4ec3-11e9-b426-c3afc7dfebf6 was dropped during prepare phase of
repair
It looks like the
Someone issued a drop table statement?
--
Jeff Jirsa
> On May 20, 2019, at 9:14 AM, Oliver Herrmann wrote:
>
> Hi,
>
> while we were running 'nodetool repair -full -dcpar' on one node we got the
> following error:
>
> ERROR [AntiEntropyStage:1] 2019-05-18 16:00:04,808
> RepairMessageVerbHandler.java:177 - Table with id
> 5fb6b730-4ec3-11e9-b426-c3afc7dfebf6 was dropped during prepare phase of
> repair
That's unlikely. We run the repair job from crontab every week when no
application is connected to the cluster. We had the same error for another
table for more than 3 weeks until we recreated it:
ERROR [AntiEntropyStage:1] 2019-04-13 16:00:18,397
RepairMessageVerbHandler.java:177 - Table with id
Repair doesn’t have a mechanism to drop a table.
There are some race conditions in schema creation that can cause programmatic
schema changes (especially when multiple instances of the app can race) to
put things into a bad state.
If this is the problem, you’d want to inspect the cfid in the
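The race described above can be modeled in a few lines. This is a toy sketch, not Cassandra code: each coordinator that executes a CREATE TABLE without yet seeing the table in its schema view generates a fresh table id, so two app instances racing on different coordinators can end up with two cfids for the same table name.

```python
import uuid

def create_table(schema_view, name):
    # Toy model: a coordinator that does not yet see the table in its
    # local schema view generates a fresh id for it, mimicking how a
    # CREATE TABLE mints a new cfid.
    if name not in schema_view:
        schema_view[name] = uuid.uuid4()
    return schema_view[name]

# Two coordinators with independent, not-yet-synced schema views,
# each handling a concurrent CREATE TABLE from a racing app instance.
node_a, node_b = {}, {}
id_a = create_table(node_a, "ks.users")
id_b = create_table(node_b, "ks.users")

# Same table name, two different cfids -> schema disagreement.
assert id_a != id_b
```

Comparing the id printed in the error log against the id the live schema actually holds (the `system_schema.tables` table) shows whether the node is still operating on a stale cfid.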
It's not really something that can be easily calculated based on write
rate, but more something you have to find empirically and adjust
periodically.
Generally speaking, I'd start by running "nodetool gcstats" or similar and
just see what the GC pause stats look like. If it's not pausing much or
f
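The kind of arithmetic that reasoning leads to can be sketched as follows. The numbers are illustrative, not from a real cluster; they stand in for figures you would read off something like `nodetool gcstats`.

```python
# Toy calculation with assumed numbers: estimate what fraction of
# wall-clock time the JVM spends paused for GC over a stats window.
interval_ms = 3_600_000   # stats window: one hour (assumed)
total_gc_ms = 12_500      # sum of GC pause time in that window (assumed)
max_gc_ms = 400           # longest single pause in ms (assumed)

pause_fraction = total_gc_ms / interval_ms
print(f"GC busy: {pause_fraction:.2%}, worst pause: {max_gc_ms} ms")
```

If the pause fraction stays very low and the worst-case pause stays well under a second, that is a sign the heap is not obviously undersized for the workload; sustained growth in either number after a load increase is the signal to revisit the setting.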
My guess is that the "latest" schema would be chosen, but I am
definitely interested in an in-depth explanation.
On Tue, 21 May 2019 at 00:28, Alexey Korolkov wrote:
>
> Hello team,
> In some circumstances, my cluster was split onto two schema versions
> (half on one version, and rest on another)
> I