I think my suggestion was unclear. I was referring to using the name
"guardrail" and the same infra as guardrails, rather than a separate concept;
I was not suggesting applying it the way we do table options.
On Tue, Jun 25, 2024 at 12:44 AM Bernardo Botella <
conta...@bernardobotella.com> wrote:
> Hi Ariel and Jon,
>
Hi Ariel and Jon,
Let me address your question first. Yes, AND is supported in the proposal.
Below you can find some examples of different constraints applied to the same
column.
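For instance, something along these lines (a sketch only; the exact keyword
and function names may differ from the final grammar):

    CREATE TABLE ks.users (
        id uuid PRIMARY KEY,
        -- two constraints on the same column, combined with AND
        name text CHECK sizeOf(name) > 2 AND sizeOf(name) < 256
    );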
As for the LENGTH name instead of sizeOf as in the proposal, I am also not
opposed to it if it is more consistent w…
+1 nb. I too see these tools (bulk analytics and SCC) as complementary, as has
been said. SCC also does some nice things to support Spark Streaming that I
don't think are addressed by the bulk analytics subproject today.
Regarding dsbulk, I think that's another thread, but it's something we're…
Likewise - another vote in favor of bringing in this subproject.
Any thoughts on bringing in dsbulk as well? dsbulk has a lower barrier to entry
than Spark Cassandra Connector, addresses a real need for users, and appears to
be at a similar place in its project lifecycle.
Abe
> On Jun 24, 2024…
Yeah, having the connector will enhance the Cassandra ecosystem. I'm looking
forward to this contribution.
On 2024/06/24 17:28:48 "C. Scott Andreas" wrote:
> Supportive of accepting a donation of the Spark Cassandra Connector under the
> project's umbrella as well - I think that would be very we…
Hi,
I see a vote for this has been called. I should have provided feedback sooner.
I am a strong +1 on column-level constraints being a good thing to add.
I'm not too concerned about row/partition/table-level constraints, but I would
like to change the syntax before I would…
Hi,
SGTM. It's not just what we return, though; it's also supporting UPSERT for
read-modify-write (RMW) updates? Because our transactions are one-shot, I
don't think you could do that, because a statement that does INSERT ... IF NOT
EXISTS would not generate a row that is visible to a later UPDATE statement in
the same t…
It sounds like the best course of action for now would be to keep the
current behavior.
However, we might want to fold this into CASSANDRA-18107 as a specific
concern around what we return when an explicit SELECT isn't present in the
transaction.
i.e., for any update, we'll have something like (co…
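For illustration, a minimal sketch of the case in question, assuming the
CEP-15 transaction syntax (table and values are made up): a transaction whose
only statement is a write, where what to return is exactly the open question:

    BEGIN TRANSACTION
      -- no explicit SELECT anywhere in the body
      UPDATE ks.tbl SET v = 2 WHERE k = 1;
    COMMIT TRANSACTION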
Supportive of accepting a donation of the Spark Cassandra Connector under the project's umbrella
as well - I think that would be very welcome and appreciated. Spark Cassandra Connector and the
Analytics library are also suited to slightly different usage patterns. SCC can be a good fit for
Spar…
Hi,
I think the current behavior maps to SQL more than CQL. In SQL, an update
doesn't generate an error if the row being updated doesn't exist; it just
returns 0 rows updated.
If someone wanted upsert or increment behavior in their transaction, could
they accomplish it with the current transa…
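A minimal sketch of how that might look with the CEP-15 syntax (LET and IF
follow the CEP's examples; the IS NULL check, the ELSE branch, and the table
are my assumptions):

    BEGIN TRANSACTION
      -- read the row once up front; one-shot transactions cannot read
      -- their own writes, so the branches decide based on this value
      LET existing = (SELECT v FROM ks.tbl WHERE k = 1);
      IF existing IS NULL THEN
        INSERT INTO ks.tbl (k, v) VALUES (1, 1);
      ELSE
        UPDATE ks.tbl SET v = existing.v + 1 WHERE k = 1;
      END IF
    COMMIT TRANSACTION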
I love where this is going. I have one question, however. I think it would
be more consistent if these were table-level guardrails. Is there anything
that prevents us from utilizing the same underlying system and terminology
for both the node-level guardrails and the table-level ones?
If we can avoid…
To your point about Guardrails vs. Constraints, I do think the distinct roles
of “cluster operator” and “application developer” help show how these two
frameworks are both valuable. I don’t think I’d expect a cluster operator to be
involved in every table design decision, but being able to set w…
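To make that contrast concrete, a sketch (the guardrail name comes from the
4.1 guardrails work and the constraint syntax follows the CEP draft, so treat
both as assumptions rather than final syntax):

    -- operator-owned, node-level guardrail, set in cassandra.yaml:
    --   column_value_size_fail_threshold: 256KiB
    -- developer-owned, column-level constraint, part of the schema itself:
    CREATE TABLE ks.users (
        id uuid PRIMARY KEY,
        name text CHECK sizeOf(name) < 256
    );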
Hi everyone,
I would like to start the voting for CEP-42.
Proposal:
https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-42%3A+Constraints+Framework
Discussion: https://lists.apache.org/thread/xc2phmxgsc7t3y9b23079vbflrhyyywj
The vote will be open for 72 hours. A vote passes if there are a…
Thanks for the comments, Jordan.
Completely agreed that we will need to be careful not to accept constraints
that require a read before a write. It is called out in the CEP itself, and
will have to be enforced in the future.
After all the feedback and discussion, I think we are ready to move…
I also think it would be a great contribution, especially since the bulk
analytics library can't be used by the majority of teams, as it's hard-coded
to only work with single-token clusters.
On Mon, Jun 24, 2024 at 9:51 AM Dinesh Joshi wrote:
> This would be a great contribution to have for…
This would be a great contribution to have for the Analytics subproject.
The current bulk functionality in the Analytics subproject complements the
spark-cassandra-connector, so I see it as a good fit for donation.
On Mon, Jun 24, 2024 at 12:32 AM Mick Semb Wever wrote:
>
> What are folks thought…
What are folks' thoughts on accepting a donation of
the spark-cassandra-connector project into the Analytics subproject?
A number of folks have requested this, stating that they cannot contribute
to the project while it is under DataStax. The project has largely been in
maintenance mode the past…