AFAIK, Cassandra will not process schema changes in parallel. However,
by sending the requests in parallel you can minimise the time Cassandra
sits idle while the client waits for schema agreement after each
CREATE KEYSPACE statement.
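For concreteness, here is a minimal sketch of that approach using the DataStax Python driver (cassandra-driver). The keyspace names, replication settings and batch size are placeholders of mine, and it assumes the rest of the cluster is already shut down (as described later in the thread), so every statement necessarily lands on the same single node:

from cassandra.cluster import Cluster

# Placeholder keyspace names and replication settings -- substitute your own.
KEYSPACES = ["tenant_%d" % i for i in range(1000)]
BATCH_SIZE = 32  # number of CREATE KEYSPACE statements in flight at once

CREATE_TEMPLATE = (
    "CREATE KEYSPACE IF NOT EXISTS %s WITH replication = "
    "{'class': 'NetworkTopologyStrategy', 'dc1': 3}"
)

def create_keyspaces(contact_point):
    # Connect to the one node that is still up; with the other nodes down,
    # the driver only ever talks to this single node.
    cluster = Cluster(contact_points=[contact_point])
    session = cluster.connect()
    try:
        for start in range(0, len(KEYSPACES), BATCH_SIZE):
            batch = KEYSPACES[start:start + BATCH_SIZE]
            # Fire the whole batch asynchronously, then wait for all of them,
            # so the node is not idle for a full round trip per keyspace.
            futures = [session.execute_async(CREATE_TEMPLATE % name)
                       for name in batch]
            for future in futures:
                future.result()  # raises if that CREATE failed
    finally:
        cluster.shutdown()

if __name__ == "__main__":
    create_keyspaces("127.0.0.1")

Waiting on each batch of futures keeps the number of in-flight statements bounded while still overlapping the per-statement waits, and with only one node up there is nothing else for the schema to disagree with.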
On 09/03/2022 20:46, Leon Zaruvinsky wrote:
Hi Bowen,
Haha, agree with you on wanting fewer keyspaces but unfortunately we're
kind of locked in to our architecture for the time being.
We already do part of what you're suggesting: we shut down all but one node
and run the CREATE statements against that single node. But we do that
serially, so it's O(keyspaces).
First of all, you really shouldn't have that many keyspaces. Putting that
aside, the quickest way to create a large number of keyspaces without
causing schema disagreement is to create them in parallel over a
connection pool with a number of connections, all against the same single
Cassandra node. B
Hey folks,
A step in our Cassandra restore process is to re-create every keyspace that
existed in the backup in a brand new cluster. Because these creations are
sequential, and because we have _a lot_ of keyspaces, this ends up being
the slowest part of our restore. We already have some optimiza
Hi all,
There were some problems with the display, so I'm resending my query:
I am modelling a table for a shopping site where we store products for
customers and their data in json. Max prods for a customer is 10k.
We initially designed this table with the architecture below:
cust_prods(cust_id bigint PK, prod_id
Hi all,
I am modelling a table for a shopping site where we store products for
customers and their data in json. Max prods for a customer is 10k.
We initially designed this table with the architecture below:
cust_prods(cust_id bigint PK, prod_id bigint CK, prod_data text).
cust_id is partition
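In case the formatting gets mangled again, here is a minimal, runnable sketch of the model as I read it, again using the Python driver; the keyspace name "shop" and the replication settings are placeholders I've made up, the table and column names are from the message above:

from cassandra.cluster import Cluster

CREATE_TABLE = """
CREATE TABLE IF NOT EXISTS shop.cust_prods (
    cust_id   bigint,   -- partition key: one partition per customer
    prod_id   bigint,   -- clustering key: up to ~10k products per customer
    prod_data text,     -- product data stored as a JSON string
    PRIMARY KEY ((cust_id), prod_id)
)
"""

# All products for one customer come from a single-partition read.
SELECT_PRODUCTS = "SELECT prod_id, prod_data FROM shop.cust_prods WHERE cust_id = %s"

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
session.execute(
    "CREATE KEYSPACE IF NOT EXISTS shop WITH replication = "
    "{'class': 'SimpleStrategy', 'replication_factor': 1}"
)
session.execute(CREATE_TABLE)

for row in session.execute(SELECT_PRODUCTS, (42,)):
    print(row.prod_id, row.prod_data)

cluster.shutdown()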
It sounds like you either have hot partition(s) or a hardware issue on
that node. I'm mentioning a hardware issue because I once had a server with
a faulty CPU fan; the CPU overheated and caused frequency
throttling, and the result was a single server with much higher load than the
rest of the nodes in
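If it helps, here is a rough way to check for the frequency-throttling symptom on the suspect node. It assumes a Linux box with the usual /proc and cpufreq sysfs files, and the 60% threshold is an arbitrary choice of mine:

from pathlib import Path

def current_freqs_mhz():
    # /proc/cpuinfo reports each core's current clock as a "cpu MHz" line.
    return [float(line.split(":")[1])
            for line in Path("/proc/cpuinfo").read_text().splitlines()
            if line.lower().startswith("cpu mhz")]

def max_freq_mhz():
    # cpufreq sysfs exposes the advertised maximum frequency in kHz.
    return int(
        Path("/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq").read_text()
    ) / 1000.0

if __name__ == "__main__":
    max_mhz = max_freq_mhz()
    for core, mhz in enumerate(current_freqs_mhz()):
        flag = "  <-- possibly throttled" if mhz < 0.6 * max_mhz else ""
        print("core %d: %.0f MHz (max %.0f)%s" % (core, mhz, max_mhz, flag))

Cores sitting well below their maximum clock under sustained load on just one node would point at the hardware explanation rather than hot partitions.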