Thanks Jeff!
On Mon, Sep 18, 2017 at 9:31 AM, Jeff Jirsa wrote:
> Haven't tried out CDC, but the answer based on the design doc is yes - you
> have to manually dedup CDC at the consumer level
>
>
>
>
> --
> Jeff Jirsa
>
>
> On Sep 17, 2017, at 6:21 PM, Michael Fong wrote:
>
> Thanks for your re
You might find this interesting:
https://medium.com/@foundev/synthetic-sharding-in-cassandra-to-deal-with-large-partitions-2124b2fd788b
Cheers,
Stefano
On Mon, Sep 18, 2017 at 5:07 AM, Adam Smith wrote:
> Dear community,
>
> I have a table with inlinks to URLs, i.e. many URLs point to
> http://
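For anyone skimming, the core idea in the article Stefano linked is to add a
synthetic "bucket" to the partition key so that one heavily-linked URL is
spread across several partitions instead of one huge one. A minimal sketch,
with hypothetical table and column names:

    CREATE TABLE inlinks (
        url text,
        bucket int,          -- e.g. chosen by the writer as hash(source_url) % 16
        source_url text,
        PRIMARY KEY ((url, bucket), source_url)
    );

Reads then fan out over all buckets (0..15 in this sketch) and merge the
results client-side.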
Hi Alex,
I now ran nodetool repair -full -pr keyspace cfs on all nodes in parallel, and
this is what pops up now:
0.176.38.128 (progress: 1%)
[2017-09-18 07:59:17,145] Some repair failed
[2017-09-18 07:59:17,151] Repair command #3 finished in 0 seconds
error: Repair job has failed with the error messa
You could dig a bit more into the logs to see what precisely failed.
I suspect anticompaction is still responsible for conflicts with
validation compaction (so you should see validation failures on some nodes).
The only way to fully disable anticompaction is to run subrange
repairs.
The two
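For reference, a subrange repair is just a repair restricted to an explicit
token range, so anticompaction is skipped. It is invoked roughly like this
(the token values are placeholders):

    nodetool repair -full -st <start_token> -et <end_token> keyspace cfs

Tools like Cassandra Reaper automate splitting the ring into such subranges
and iterating over them.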
Hi, there isn't a compaction task feature in
mesosphere/dcos-cassandra-service like there is for repair and cleanup.
Is anybody working on it, or is there any plan to add it in later releases?
Regards
Hello again,
dug a bit further, comparing 1hr flight recording sessions for both 2.1 and
3.0 with the same incoming simulated load from our loadtest environment.
We are heavily write-bound rather than read-bound in this environment/scenario,
and it looks like there is a noticeable/measurable difference i
The command you're running will cause anticompaction at the range borders for
all instances at the same time.
Since only one repair session can anticompact any given sstable, it's almost
guaranteed to fail
Run it on one instance at a time
--
Jeff Jirsa
> On Sep 18, 2017, at 1:11 AM, Steinm
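To make "one instance at a time" concrete, a rough sketch of serializing the
repairs (hostnames and keyspace are placeholders, assuming SSH access to each
node):

    for host in node1 node2 node3; do
        ssh "$host" nodetool repair -full -pr my_keyspace
    done

nodetool repair blocks until that node's repair finishes, so the loop
naturally runs one session at a time.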
Hi Jeff,
understood. That’s quite a change then coming from 2.1 from an operational POV.
Thanks again.
Thomas
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Monday, 18 September 2017 15:56
To: user@cassandra.apache.org
Subject: Re: Multi-node repair fails after upgrading to 3.0.14
The comma
Sorry, I may be wrong about the cause - didn't see -full.
Mea culpa, it's early here and I'm not awake.
--
Jeff Jirsa
> On Sep 18, 2017, at 7:01 AM, Steinmaurer, Thomas
> wrote:
>
> Hi Jeff,
>
> understood. That’s quite a change then coming from 2.1 from an operational
> POV.
>
> Thanks a
@jeff what do you think is the best approach here to fix this problem?
Thank you all for helping me.
>Thursday, September 14, 2017 3:28 PM -07:00 from kurt greaves
>:
>
>Sorry, that only applies if you're using NTS. You're right that SimpleStrategy
>won't work very well in this case. To migrat
The hard part here is nobody's going to be able to tell you exactly what's
involved in fixing this because nobody sees your ring
And since you're using vnodes and have a nontrivial number of instances,
sharing that ring (and doing anything actionable with it) is nontrivial.
If you weren't usin
On Cassandra 2.2, consider a table like
CREATE TABLE my_keyspace.my_table_name (
    a_number int,
    a_date timestamp,
    a_blob blob,
    a_flag boolean,
    another_date timestamp,
    a_name text,
    another_name text,
    another_number int,
    final_text_field text,
    PRIMARY KEY (a_number, a_
For those of you who like trivia, SimpleSnitch is hard-coded to report every
node as being in datacenter “datacenter1” and rack “rack1”; there’s no way around it.
https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/locator/SimpleSnitch.java#L28
For what it's worth, the problem isn't the snitch, it's the replication strategy
- he's using the right snitch, but SimpleStrategy ignores it.
That's the same reason that adding a new DC doesn't work - the replication
strategy is DC-agnostic, and changing it safely IS the problem.
--
Jeff Jirsa
>
Sorry, you’re right. This is what happens when you try to do two things at
once. Google too quickly, look like an idiot. Thanks for the correction.
> On Sep 18, 2017, at 1:37 PM, Jeff Jirsa wrote:
>
> For what its worth, the problem isn't the snitch it's the replication
> strategy - he's u
How would setting the consistency to ALL help? Wouldn’t that just cause EVERY
read/write to fail after the ALTER until the repair is complete?
Sincerely,
Myron A. Semack
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Monday, September 18, 2017 2:42 PM
To: user@cassandra.apache.org
Subject: R
Hi Cassandra users,
I have a question about the ConsistencyLevel and the MUTATION operation.
According to the write path documentation, the first action executed by a
replica node is to write the mutation into the commitlog; the mutation is
ACKed only if this action succeeds.
I suppose that th
Using CL:ALL basically forces you to always include the first replica in
the query.
The first replica will be the same for both SimpleStrategy/SimpleSnitch and
NetworkTopologyStrategy/EC2Snitch.
It's basically the only way we can guarantee we're not going to lose a row
because it's only written t
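As a quick illustration, from cqlsh (keyspace/table/values are hypothetical):

    CONSISTENCY ALL;
    SELECT * FROM my_keyspace.my_table WHERE id = 42;

The read only succeeds if every replica of that partition responds, which is
why it necessarily includes the first replica.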
No worries, that makes both of us, my first contribution to this thread was
similarly going-too-fast and trying to remember things I don't use often (I
thought originally SimpleStrategy would consult the EC2 snitch, but it
doesn't).
- Jeff
On Mon, Sep 18, 2017 at 1:56 PM, Jon Haddad
wrote:
> So
https://issues.apache.org/jira/browse/CASSANDRA-13153 implies full repairs
still trigger anti-compaction on non-repaired SSTables (if I'm reading
that right), so you might need to make sure you don't run multiple repairs at
the same time across your nodes (if you're using vnodes), otherwise you could
still
> Does the coordinator "cancel" the mutation on the "committed" nodes (and
> how)?
No. Those mutations are applied on those nodes.
> Is it a heuristic case where two nodes have the data whereas they
> shouldn't, and we hope that HintedHandoff will replay the mutation?
Yes. But really you shou
I haven't completely thought this through, so don't just go ahead and do
it. Definitely test first. Also, if anyone sees something terribly wrong,
don't be afraid to say so.
Seeing as you're only using SimpleStrategy and it doesn't care about racks,
you could change to SimpleSnitch, or GossipingProp
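If you do go the GossipingPropertyFileSnitch route, each node's
cassandra-rackdc.properties would contain something like the following
(values chosen here to match what SimpleSnitch already reports, so the
topology doesn't change):

    dc=datacenter1
    rack=rack1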
OK, thank you.
From: kurt greaves [mailto:k...@instaclustr.com]
Sent: Tuesday, 19 September 2017 06:35
To: User
Subject: Re: ConsistencyLevel and Mutations : Behaviour if the update of the
commitlog fails
Does the coordinator "cancel" the mutation on the "committed" nodes (and how)?
No. Those mu
In 4.0 anti-compaction is no longer run after full repairs, so we
should probably backport this behavior to 3.0, given there are known
limitations with incremental repair on 3.0 and non-incremental users
may want to keep running full repairs without the additional cost
of anti-compaction.
Woul