On Tue, Dec 15, 2015 at 4:41 PM, Jack Krupansky
wrote:
> Can a core Cassandra committer verify if removing the compactions_in_progress
> folder is indeed the desired and recommended solution to this problem, or
> whether it might in fact be a bug that this workaround is needed at all?
> Thanks!
>
Can a core Cassandra committer verify if removing the compactions_in_progress
folder is indeed the desired and recommended solution to this problem, or
whether it might in fact be a bug that this workaround is needed at all?
Thanks!
-- Jack Krupansky
On Thu, Dec 10, 2015 at 5:34 PM, Mikhail Strebk
It should all just work as expected, as if by magic. That's the whole point
of having MVs, so that Cassandra does all the bookkeeping for you. Yes, the
partition key can change, so an update to the base table can cause one (or
more) MV rows to be deleted and one (or more) new MV rows to be created.
In the case of an update to the source table where data is changed, a
tombstone will be generated for the old value and an insert will be
generated for the new value. This happens serially for the source
partition, so if there are multiple updates to the same partition, a
tombstone will be generate
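For reference, a minimal sketch of the behaviour described above (a base-table update moving a row between view partitions), using the Java driver against a local Cassandra 3.0 node; the keyspace, table, and column names are made up for illustration:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class MvUpdateDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.users (id int PRIMARY KEY, score int)");
        // The view is partitioned by score, so score becomes part of the view's primary key.
        session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS demo.users_by_score AS "
                + "SELECT score, id FROM demo.users "
                + "WHERE score IS NOT NULL AND id IS NOT NULL "
                + "PRIMARY KEY (score, id)");

        session.execute("INSERT INTO demo.users (id, score) VALUES (1, 10)");
        // Changing score moves the row to a different view partition: Cassandra
        // generates a tombstone for view row (10, 1) and an insert for (20, 1).
        session.execute("UPDATE demo.users SET score = 20 WHERE id = 1");

        // Should print only the (20, 1) row, i.e. the delete-plus-reinsert bookkeeping
        // happened behind the scenes without the client doing anything.
        System.out.println(session.execute("SELECT * FROM demo.users_by_score").all());
        cluster.close();
    }
}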
Haven't had a chance to yet but I will. However, trying might not fully explain
what happens behind the scenes, i.e., you'd see the effect but not everything
that happens.
Thanks.
Sent from my iPhone
> On 15 Dec 2015, at 23:41, Laing, Michael wrote:
>
> why don't you just try it?
>
>> On Tu
why don't you just try it?
On Tue, Dec 15, 2015 at 6:30 PM, Will Zhang
wrote:
> Hi all,
>
> I originally raised this on SO but wasn't really getting any answers there,
> so I thought I'd give it a try here.
>
>
> Just thinking about this so please correct my understanding if any of this
> isn't right.
>
>
Hi all,
I originally raised this on SO but wasn't really getting any answers there,
so I thought I'd give it a try here.
Just thinking about this so please correct my understanding if any of this
isn't right.
Environment: Apache Cassandra v3.0.0
Say you have a table and a materialized view created on
I agree with Jon. It's almost a statistical certainty that such updates
will be processed out of order some of the time because the clock sync
between machines will never be perfect.
Depending on how your actual code that shows this problem is structured,
there are ways to reduce or eliminate such
High-volume updates to a single key in a distributed system that relies on
a timestamp for conflict resolution are not a particularly great idea. If
you ever do this from multiple clients you'll find unexpected results at
least some of the time.
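To make that concrete, here is a minimal sketch (made-up keyspace and table; the timestamps are supplied explicitly with USING TIMESTAMP to simulate a lagging coordinator clock) showing how an update that arrives later but carries a lower timestamp simply loses:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class LastWriteWinsDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.kv (k int PRIMARY KEY, v text)");

        // First write carries timestamp 2000.
        session.execute("UPDATE demo.kv USING TIMESTAMP 2000 SET v = 'first' WHERE k = 1");
        // Second write arrives later but, as if generated by a node with a lagging
        // clock, carries the lower timestamp 1000, so conflict resolution discards it.
        session.execute("UPDATE demo.kv USING TIMESTAMP 1000 SET v = 'second' WHERE k = 1");

        // Prints 'first': the chronologically later update was silently lost.
        System.out.println(session.execute("SELECT v FROM demo.kv WHERE k = 1")
                .one().getString("v"));
        cluster.close();
    }
}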
On Tue, Dec 15, 2015 at 12:41 PM Paulo Motta
wrote:
> We are using 2.1.7.1
Then you should be able to use the java driver timestamp generators.
> So, we need to look for clock sync issues between nodes in our ring? How
> close do they need to be?
millisecond precision since that is the server precision for timestamps, so
probably NTP should do the
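A minimal sketch of wiring up one of the client-side timestamp generators mentioned above, assuming a Java driver version that supports them (2.1.2+ with native protocol v3; the exact class and package names may differ between driver versions):

import com.datastax.driver.core.AtomicMonotonicTimestampGenerator;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ClientTimestampsDemo {
    public static void main(String[] args) {
        // With a monotonic client-side generator, statements issued by this process
        // always get strictly increasing timestamps, so an insert followed by an
        // immediate update cannot tie or go backwards due to coordinator clock skew.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withTimestampGenerator(new AtomicMonotonicTimestampGenerator())
                .build();
        Session session = cluster.connect("demo"); // made-up keyspace
        // ... run the insert and the follow-up update through this one session ...
        cluster.close();
    }
}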
How much data and request load do you expect on this small cluster? If
fairly light, fine. But if heavy, be careful.
Someone else can chime in if they have a more solid answer on how many CPU
cores Cassandra needs, but less than two per node seems inappropriate
unless load is extremely light since
On Tue, Dec 15, 2015 at 2:57 PM Paulo Motta
wrote:
> What cassandra and driver versions are you running?
>
>
We are using 2.1.7.1
> It may be that the second update is getting the same timestamp as the
> first, or even a lower timestamp if it's being processed by another server
> with unsynced
What cassandra and driver versions are you running?
It may be that the second update is getting the same timestamp as the
first, or even a lower timestamp if it's being processed by another server
with unsynced clock, so that update may be getting lost.
If you have high frequency updates in the s
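If I understand the tie-break rule correctly, when two writes carry exactly the same timestamp Cassandra keeps the lexically greater value, so the chronologically second write can lose even with perfectly synced clocks. A small sketch (made-up table, explicit timestamps) of that case:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class TimestampTieDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.kv (k int PRIMARY KEY, v text)");

        // Both writes carry the same timestamp, as can happen with millisecond
        // precision and two updates landing inside the same millisecond.
        session.execute("UPDATE demo.kv USING TIMESTAMP 5000 SET v = 'zzz' WHERE k = 2");
        session.execute("UPDATE demo.kv USING TIMESTAMP 5000 SET v = 'aaa' WHERE k = 2");

        // Prints 'zzz': on a timestamp tie the greater value wins, not the later write.
        System.out.println(session.execute("SELECT v FROM demo.kv WHERE k = 2")
                .one().getString("v"));
        cluster.close();
    }
}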
Philip,
I don't see the benefit of having a multi-DC C* cluster in this case. What
you need is two separate C* clusters, using Kafka to record and replay writes
to the DR cluster. The DR cluster only receives writes from the Kafka consumer.
You won't need to deal with "Removing everything from Cassandra that -isn't- in Kafka".
On M
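A rough sketch of the record/replay idea above, assuming writes are published to a Kafka topic as plain CQL statements; the broker, topic, keyspace, and contact-point names are made up, and a real setup would use a structured message format, batching, and error handling:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Collections;
import java.util.Properties;

public class DrReplayConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-dr:9092"); // made-up broker address
        props.put("group.id", "dr-replay");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Connect to the DR cluster; it receives writes only through this consumer.
        Cluster cluster = Cluster.builder().addContactPoint("cassandra-dr").build();
        Session session = cluster.connect("myks"); // made-up keyspace

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("writes")); // made-up topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    // Each record value is assumed to be a complete CQL write statement.
                    session.execute(record.value());
                }
            }
        }
    }
}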
We are encountering a situation in our environment (a 6-node Cassandra
ring) where we are trying to insert a row and then immediately update it,
using LOCAL_QUORUM consistency (replication factor = 3). I have replicated
the issue using the following code:
https://gist.github.com/jwcarman/72714e6d
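I can't speak to the gist itself, but a stripped-down sketch of the scenario as described (made-up keyspace and table, RF=3 on the keyspace, LOCAL_QUORUM as above) would look something like:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class InsertThenUpdateDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo"); // made-up keyspace with RF=3

        SimpleStatement insert = new SimpleStatement(
                "INSERT INTO users (id, name) VALUES (1, 'before')");
        insert.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
        session.execute(insert);

        // Immediately update the same row; if this statement is assigned a timestamp
        // equal to or lower than the insert's, the update can appear to be lost.
        SimpleStatement update = new SimpleStatement(
                "UPDATE users SET name = 'after' WHERE id = 1");
        update.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
        session.execute(update);

        SimpleStatement read = new SimpleStatement("SELECT name FROM users WHERE id = 1");
        read.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
        // On an unlucky run this prints 'before', matching the lost-update symptom
        // discussed in the rest of this thread.
        System.out.println(session.execute(read).one().getString("name"));
        cluster.close();
    }
}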
On Tue, Dec 15, 2015 at 11:15 AM, Jonathan Haddad wrote:
> If I had to choose between running 3x docker instances and 1x instance on
> a single server, I'd choose the single one. Instead of dealing with RF
> changing nonsense I'd just set up a 2nd data center w/ 3 nodes and move to
> that when y
If I had to choose between running 3x docker instances and 1x instance on a
single server, I'd choose the single one. Instead of dealing with RF
changing nonsense I'd just set up a 2nd data center w/ 3 nodes and move to
that when you're ready. No downtime, easy.
With that said - Starting off wit
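For reference, a minimal sketch of the "second data center" route described above; the DC and keyspace names are made up and must match what your snitch actually reports:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class AddDataCenterSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Replicate the application keyspace into the new data center as well.
        session.execute("ALTER KEYSPACE myapp WITH replication = "
                + "{'class': 'NetworkTopologyStrategy', 'dc1': 1, 'dc2': 3}");

        // Existing data is then streamed to the new DC out of band, e.g. by running
        // `nodetool rebuild dc1` on each node in dc2, before clients switch over.
        cluster.close();
    }
}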
On Mon, Dec 14, 2015 at 10:53 PM, Vladimir Prudnikov
wrote:
> Is it hard to start with 3 nodes on one server running in docker and then
> just move 2 nodes to the separate servers?
>
FWIW, if you *absolutely knew* that you were going to need the scale and
for some reason could not convince the m
Yes... I agree with Rob here. I don't see much benchmarking required for
versions of Cassandra that aren't actively supported by the committers.
On Tue, Dec 15, 2015 at 10:52 AM Robert Coli wrote:
> On Tue, Dec 15, 2015 at 6:28 AM, Andy Kruth wrote:
>
>> We are trying to decide how to proceed
On Mon, Dec 14, 2015 at 10:53 PM, Vladimir Prudnikov
wrote:
> Save money. I don’t have a huge enterprise behind me nor investors’ money in
> my bank account. I just created an app and want to launch it and see if it
> is what users will use and pay for. Once I get users using it I can scale
> my ha
On Tue, Dec 15, 2015 at 6:28 AM, Andy Kruth wrote:
> We are trying to decide how to proceed with development and support of
> YCSB bindings for older versions of Cassandra, namely Cassandra 7, 8, and
> 10.
>
> We would like to continue dev and support on these if the use of those
> versions of Ca
I assume you mean Cassandra 0.7, 0.8, and 1.0? I think most users are on
2.x now, but I don't have any stats.
--
Michael Mior
michael.m...@gmail.com
2015-12-15 9:28 GMT-05:00 Andy Kruth :
> We are trying to decide how to proceed with development and support of
> YCSB bindings for older versions
We are trying to decide how to proceed with development and support of YCSB
bindings for older versions of Cassandra, namely Cassandra 7, 8, and 10.
We would like to continue dev and support on these if the use of those
versions of Cassandra is still prevalent. If not, then a deprecation cycle
may