Can Riak Enterprise replicate between rings where each ring has a different
number of partitions?
Our five-node ring was originally configured with 64 partitions, and I saw that
Basho is recommending 512 for that number of machines.
Any ideas on how to make the migration as painless as possible?
Yes, we have done exactly that. When we migrated from 256 to 128
partitions in a live dual-cluster system, we took one cluster down,
wiped its data, changed the number of partitions, brought it back up,
and synced all the data back with a fullsync. Then we did the same with
the other cluster.
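For reference, a rough sketch of that procedure on a Riak 1.x node
(hedged: paths and the bitcask backend assume a default packaged
install, and the 512 ring size is just an example; adjust to your
environment):

    # On each node of the cluster being rebuilt:
    riak stop
    rm -rf /var/lib/riak/ring /var/lib/riak/bitcask   # wipe ring state and data

    # In app.config, set the new ring size before the node starts again:
    #   {riak_core, [ {ring_creation_size, 512} ]}

    riak start
    # After the cluster is re-joined, pull everything back from the other
    # cluster over Riak Enterprise replication:
    riak-repl start-fullsync
    riak-repl status    # watch fullsync progress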
However, Riak is all about high availability. If eventually consistent
data is not a problem, OR you can cover those aspects of the CAP theorem
with an in-memory caching system and a sort of locking mechanism to
emulate the core atomic action of your application (put-if-absent), then
I would say you are …
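To make that concrete, here is a minimal sketch of that put-if-absent
emulation (plain Python; the backing store is reduced to a dict standing
in for whatever eventually consistent KV client you use, and the class
and names are made up for illustration). Note the lock only makes this
atomic within one application instance; across instances you still need
a shared lock service:

    import threading

    class PutIfAbsentFront:
        """Emulates atomic put-if-absent in front of an eventually
        consistent store -- atomic only within this one process."""
        def __init__(self, store):
            self.store = store            # stand-in for the real KV client
            self.cache = {}               # in-memory cache of claimed keys
            self.lock = threading.Lock()  # serializes the check-then-set

        def put_if_absent(self, key, value):
            # Returns True if we claimed the key, False if it existed.
            with self.lock:
                if key in self.cache or self.store.get(key) is not None:
                    return False
                self.cache[key] = value
                self.store[key] = value
                return True

    front = PutIfAbsentFront({})
    print(front.put_if_absent("user:42", "alice"))  # True: first writer wins
    print(front.put_if_absent("user:42", "bob"))    # False: key already taken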
On Fri, Oct 19, 2012 at 6:57 AM, Guido Medina wrote:
> Riak is all about high availability. If eventually consistent data is
> not a problem …
What is the 'eventually consistent' result of simultaneous inserts of
different values for a new key at different nodes? Does partitioning
affect this case?
It depends. If you have siblings enabled on the bucket, then you need to
resolve the conflicts using the object's vclock; if you are not using
siblings, last write wins. Either way, I haven't had any good results
delegating those tasks to Riak; with siblings, eventually I ran Riak out
in speed …
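A toy illustration of the two strategies described there (this is not
the real Riak client API; siblings are modeled as (timestamp, value)
pairs, and allow_mult is the bucket property that enables them):

    def last_write_wins(siblings):
        # allow_mult=false behaviour: keep only the newest write
        return max(siblings, key=lambda s: s[0])

    def resolve_siblings(siblings, merge):
        # allow_mult=true behaviour: the application merges the values
        return merge([value for _, value in siblings])

    siblings = [(1350635000, {"a"}), (1350635001, {"b"})]
    print(last_write_wins(siblings))                                # (1350635001, {'b'})
    print(resolve_siblings(siblings, lambda vs: set().union(*vs)))  # {'a', 'b'}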
On Fri, Oct 19, 2012 at 8:02 AM, Guido Medina wrote:
> It depends. If you have siblings enabled on the bucket, then you need to
> resolve the conflicts using the object's vclock;
How does that work for simultaneous initial inserts?
> if you are not using siblings, last write wins. Either way, I …
A locking mechanism on a single server is easy; on a cluster it is not.
That's why you don't see too many multi-master databases, right? Riak
instead focused on high availability and partition tolerance, but not
consistency. If you notice, consistency is tied to locking, with a
single access per key …
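As for the earlier question about simultaneous initial inserts: both
writers start from an empty vector clock, so neither write descends from
the other and the store has to keep both values as siblings. A toy model
of that rule (hypothetical code, not Riak's actual implementation):

    def descends(vc_a, vc_b):
        # vc_a descends from vc_b if it has seen every event vc_b has seen
        return all(vc_a.get(node, 0) >= n for node, n in vc_b.items())

    store = {}  # key -> list of (vclock, value) siblings

    def put(key, value, vclock):
        # keep only siblings this write does not supersede, then add it
        kept = [s for s in store.get(key, []) if not descends(vclock, s[0])]
        kept.append((vclock, value))
        store[key] = kept

    put("k", "from-node-1", {"n1": 1})  # initial insert seen at node 1
    put("k", "from-node-2", {"n2": 1})  # concurrent initial insert at node 2
    print(store["k"])  # two siblings: neither vclock dominates the other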
So... no answers.
I guess there are no smart minds at Basho working on M/R currently. Too
bad, but I guess a company has to choose its priorities.
On Tue, Oct 16, 2012 at 11:03 AM, Callixte Cauchois
wrote:
> Hi there,
>
> as part of my evaluation of Riak, I am looking at the M/R capabilities and …
About distributed locking mechanisms, you might want to take a look at
Google's services, something called Chubby. Ctrl+F for it on this link:
http://en.wikipedia.org/wiki/Distributed_lock_manager
Regards,
Guido.
On 19/10/12 16:47, Guido Medina wrote:
A locking mechanism on a single server is easy; on a …
Pawel,
On Tue, Oct 9, 2012 at 5:21 PM, kamiseq wrote:
> hi all,
>
> right now we are using Solr as a search index and we are inserting data
> manually, so there is nothing to stop us from creating many indexes
> (sort of views) on the same entity, aggregating data, and so on.
> can something like that be …
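For context, the "many indexes on the same entity" setup described there
looks roughly like this (a hedged sketch using the pysolr client; the
core names and fields are made up):

    import pysolr

    entity = {"id": "order-1001", "customer": "acme", "total": 250.0}

    # One Solr core per "view" of the same entity (hypothetical cores)
    by_customer = pysolr.Solr("http://localhost:8983/solr/orders_by_customer")
    by_total    = pysolr.Solr("http://localhost:8983/solr/orders_by_total")

    # Manual indexing: the application pushes the entity into every view
    by_customer.add([{"id": entity["id"], "customer": entity["customer"]}])
    by_total.add([{"id": entity["id"], "total": entity["total"]}])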
On Fri, Oct 19, 2012 at 8:48 AM, Callixte Cauchois
wrote:
> So... no answers.
> I guess there are no smart minds at Basho working on M/R currently. Too bad,
> but I guess a company has to choose its priorities.
A lovely "good morning" to you, too.
>
> as part of my evaluation of Riak, I am looking at the M/R capabilities and …
Hey Mark,
really sorry if I sounded aggressive or whatever. English is not my primary
language and sometimes I do not sound the way I intend... I just wanted to
acknowledge that no answer was kind of an answer to my questions.
And yes, I will share how I would like M/R to behave, for future
reference …
Dave,
64 is fine for a 6-node cluster. Rune gives a great rundown of the
downsides of large rings on small numbers of machines in his post.
Usually our recommendation is ~10 ring partitions per physical
machine, rounded up to the next power of two. Where did you see the
recommendation for 512?
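That rule of thumb as a quick calculation (illustrative only; the
function name is made up):

    import math

    def recommended_ring_size(machines, per_machine=10):
        # ~10 partitions per machine, rounded up to the next power of two
        return 2 ** math.ceil(math.log2(machines * per_machine))

    print(recommended_ring_size(6))  # 64 -- matches "64 is fine for 6 nodes"
    print(recommended_ring_size(5))  # 64 -- nowhere near 512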