Jon,
BTW, I wasn't suggesting that claim v3 would choose a plan with violations
over one without - so I don't think there is a bug.
The plan below, which I said scored well but was "worse" than a purely
sequential plan, is worse only in the sense that it does not cope as well
with dual node failures.
That's what I get for doing things from memory and not running the
simulator :)
I think you're right about the actual operation of the preference lists -
but I haven't had a chance to look over the code or run some simulations.
The effect isn't quite as severe but, as you say, it unevenly loads the cluster.
Jon,
With regards to this snippet below, I think I get your point, but I don't
think the example is valid:
>>>
With N=3, if a node goes down, all of the responsibility for that
node is shifted to another single node in the cluster.
n1 | n2 | n3 | n4 | n1 | n2 | n3 | n4  (Q=8, S=4, TargetN=4)
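A quick sketch backs that up (my own simplified model of the
skip-the-down-node fallback rule rather than the real riak_core_apl code,
and the module name is made up):

    -module(fallback_sim).
    -export([run/0]).

    %% Q=8 partitions claimed sequentially by S=4 nodes, as in the example.
    run() ->
        Owners = [n1, n2, n3, n4, n1, n2, n3, n4],
        N = 3,
        Down = n1,
        Fallbacks = [fallback(Idx, Owners, N, Down)
                     || Idx <- lists:seq(0, length(Owners) - 1)],
        Counts = lists:foldl(fun(none, Acc) -> Acc;
                                (Node, Acc) ->
                                     maps:update_with(Node,
                                                      fun(C) -> C + 1 end,
                                                      1, Acc)
                             end, #{}, Fallbacks),
        io:format("fallback load when ~p is down: ~p~n", [Down, Counts]).

    %% Walk the ring from partition Idx; the first N owners are the primaries.
    %% If the down node is among them, its stand-in is the Nth owner found
    %% once the down node is skipped (the simplified fallback rule).
    fallback(Idx, Owners, N, Down) ->
        Q = length(Owners),
        Walk = [lists:nth(((Idx + I) rem Q) + 1, Owners)
                || I <- lists:seq(0, Q - 1)],
        {Primaries, _} = lists:split(N, Walk),
        case lists:member(Down, Primaries) of
            true  -> lists:nth(N, [Node || Node <- Walk, Node =/= Down]);
            false -> none
        end.

On this symmetric ring that prints two fallback preflists each for n2, n3
and n4 -- the failed node's work is split three ways, not shifted to a
single node, which is why I don't think the example holds as stated.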
I don't contribute to this list as much as I lurk in #riak (craque), but
it's really great to see this kind of community support somewhere,
especially at a large place that is heavily invested in riak itself.
I have considered posting some of the operational lessons I've learned over
the past five …
Jon,
Many thanks for taking the time to look at this. You've given me lots to
think about, so I will take some time before updating my write-up to take
account of your feedback.
I need to go back and look at the safe transfers issues. I spent some time
trying to work out how the claimant trans…
> ... before the list of Basho's Github people
> (https://github.com/orgs/basho/people) who still work at Basho is reduced to
> zero?
Just a note on that list: these are the (few) people who took the
trouble to flip the visibility of their membership in their profiles.
Github seems to have changed …
Thanks for the excellent writeup.
I have a few notes on it and then a little history to help
explain the motivation for the v3 work.
The Claiming Problem
One other property of the broader claim algorithm + claimant + handoff
manager group of processes that's worth mentioning is safe transfers …
Thanks for the writeup and detailed investigation, Martin.
We ran into these issues a few months ago when we expanded a 5 node cluster
into an 8 node cluster. We ended up rebuilding the cluster and writing a
small escript to verify that the generated riak ring lived up to our
requirements (which were 1…
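For anyone wanting to do something similar, here is a rough sketch of that
kind of check -- not our actual script, and the single requirement encoded
here (every window of TargetN consecutive partitions owned by TargetN
distinct nodes) is just an assumption for illustration:

    #!/usr/bin/env escript
    %% check_ring.escript -- illustrative only; a real version would read
    %% the owners list from the cluster rather than hardcoding it here.
    main(_) ->
        TargetN = 4,
        Owners = [n1, n2, n3, n4, n1, n2, n3, n4],  %% stand-in ring, Q=8
        Q = length(Owners),
        %% A violation: a window of TargetN consecutive partitions has fewer
        %% than TargetN distinct owners, so some preflist would place two
        %% replicas on one physical node.
        Violations =
            [Idx || Idx <- lists:seq(0, Q - 1),
                    length(lists:usort(
                        [lists:nth(((Idx + I) rem Q) + 1, Owners)
                         || I <- lists:seq(0, TargetN - 1)])) < TargetN],
        case Violations of
            [] -> io:format("ok: no target_n_val violations~n");
            _  -> io:format("violations at partitions ~p~n", [Violations]),
                  halt(1)
        end.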
Apologies in advance if this doesn't quite submit correctly to the list.
We [bet365] are very much interested in the continued development of Riak in
its current incarnation, with Core continuing to be underpinned by distributed
Erlang. We are very keen to help to build / shape / support the community …
Back to the original post, the important point for me is that this is not
really about riak-core, but Riak, the database.
The OP in TL;DR form:
1. A thorough report of a long-lived bug in claim that means many node/ring
combos end up with multiple replicas on one physical node, silently!
2. A p…
I'd like to keep the core project going; it just depends on how much interest
there is.
There are a lot of separate issues and stalled initiatives, if anyone would
like to discuss them. Some have to do simply with scaling Distributed Erlang.
There's a riak core mailing list as well that probably could use s…
We're mainly looking at leveraging Partisan to change the
underlying communication structure -- we hope to have via support in
Partisan soon, along with connection multiplexing, so we hope to avoid
bottlenecks related to head-of-line blocking in distributed Erlang, be
able to support SSL/TLS easily…
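As a toy illustration of the multiplexing idea (this is not Partisan's
API): distributed Erlang uses a single TCP connection per node pair, so one
large in-flight message stalls everything queued behind it, while spreading
sends across several independent sockets keeps unrelated traffic moving.

    -module(channel_pool).
    -export([connect/3, send/3]).

    %% Open K independent TCP connections to the same peer.
    connect(Host, Port, K) ->
        [begin
             {ok, Sock} = gen_tcp:connect(Host, Port,
                                          [binary, {packet, 4},
                                           {active, false}]),
             Sock
         end || _ <- lists:seq(1, K)].

    %% Pick a channel by hashing a key (e.g. the destination process), so a
    %% large transfer on one channel cannot head-of-line block the others.
    send(Channels, Key, Msg) ->
        Sock = lists:nth(erlang:phash2(Key, length(Channels)) + 1, Channels),
        ok = gen_tcp:send(Sock, Msg).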
Chris,
Is this only the communications part, so the core concepts like the Ring,
preflists, the Claimant role, the claim algo etc. will remain the same?
Where's the best place to start reading about Partisan? I'm interested in
the motivation for changing that part of Core. Is there a special use …
I'd love to see riak_core on partisan. I'm eyeing it for an upcoming
internal project.
On Tue, May 16, 2017 at 3:06 PM, Christopher Meiklejohn <
christopher.meiklej...@gmail.com> wrote:
> For what it's worth, the Lasp community is looking at doing a fork of
> Riak Core replacing all communication …
For what it's worth, the Lasp community is looking at doing a fork of
Riak Core replacing all communication with our Partisan library and
moving it completely off of distributed Erlang. We'd love to hear
from more folks that are interested in this work.
- Christopher
On Tue, May 16, 2017 at 6:53 …
I'm aware of a few other companies and individuals who are interested in
continued development and support in a post-Basho world. Ideally the
community can come together and contribute to a single, canonical fork.
Semi-related, there's a good chance this mailing list won't last much
longer, either.