Hi,
I'm testing out a rolling upgrade with 1.0 pre2 and found that when I turn
off legacy_keylisting, Riak crashes on me. Things work fine when
legacy_keylisting is turned on.
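For reference, this is roughly how I'm toggling it in app.config (a minimal
sketch of the riak_kv section from memory; other entries omitted, and the
exact placement is my assumption):

    {riak_kv, [
        %% false disables the pre-1.0 keylisting behaviour
        {legacy_keylisting, false}
    ]},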
Here is what I found in the crash log.
2011-09-11 07:15:44 =SUPERVISOR REPORT
> Supervisor: {local,riak_pipe_fitting_sup}
AFAIK you need to make sure that all nodes are upgraded first, and then turn
legacy_keylisting off. The crash you see is not (necessarily) related to
keylisting, but MapReduce. I hope to push out a new version of the Ruby
client in the next few weeks to support the new features.
Yeah, I did upgrade all nodes first, then changed the setting.
On Sun, Sep 11, 2011 at 10:32 AM, Sean Cribbs wrote:
> AFAIK you need to make sure that all nodes are upgraded first, and then
> turn legacy_keylisting off. The crash you see is not (necessarily) related
> to keylisting, but MapReduce.
Hi Ryan
Were there any errors in the error.log or console.log files?
Thanks
Dan
Sent from my iPad
On Sep 11, 2011, at 10:43 AM, Ryan Caught wrote:
> Yeah, I did upgrade all nodes first, then changed the setting.
> At present (0.14.x series) my understanding is that when a new node is added
> to the cluster, it claims a portion of the ring and services requests for
> that portion before all the data is actually present on the node. Is that
> correct?
Yes, that's correct.
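If it helps to verify that, handoff progress can be watched from the command
line, something along these lines (output details vary by version):

    riak-admin transfers   # partitions still waiting to be handed off
    riak-admin ringready   # whether all nodes agree on the ring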
> If so, as long as you're able
Rolling upgrades are even simpler than the approach mentioned by
Sean, although that would work just fine as well.
In short, upgrading a cluster from one version to another should "just
work". New versus legacy gossip is auto-negotiated across the cluster.
The legacy_gossip setting that Sean mentioned exists to
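If it helps, my understanding is that this flag lives in the riak_core
section of app.config; the section name and default shown below are
assumptions on my part:

    {riak_core, [
        %% assumption: forces the pre-1.0 gossip protocol when set to true
        {legacy_gossip, false}
    ]},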
Here's what was in console.log and error.log at that time:
> 2011-09-11 07:15:44.043 [error] <0.142.0> Supervisor riak_pipe_builder_sup
> had child at module undefined at <0.2724.0> exit with reason killed in
> context child_terminated
> 2011-09-11 07:15:44.051 [error] <0.143.0> Supervisor riak_pipe_fitting_sup
> h
That's really cool. I'm glad you guys spent time making sure that rolling
upgrades work smoothly.
Is there any surprising behavior to look out for while the cluster has some
new and some old nodes?
On Sun, Sep 11, 2011 at 11:34 AM, Joseph Blomstedt wrote:
> Each 0.14.2 is upgraded to a 1.0 node ...
Joe,
Thanks for the explanation, it's super helpful. Everything made complete
sense, but I wanted to double-check my understanding of one paragraph:
> Given how 0.14.2 would immediately reassign partition ownership
> before actually transferring the data, it was entirely possible to have 2
> or more ...
> Let's say I have N=3, R=2. Is the situation in the above paragraph possible
> because hypothetically the new node joining the ring could claim 2 of the 3
> vnodes that hold replicas of a certain document? So when I do a read with
> R=2, 2 of the 3 replicas are now in vnodes claimed by the new physical node ...
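To make the scenario concrete, here is roughly what that read looks like from
the Python client side (the API names are from the 1.3.x client as I remember
them, so treat the exact calls as assumptions):

    import riak

    # assumption: default HTTP interface on localhost
    client = riak.RiakClient(host='127.0.0.1', port=8098)
    bucket = client.bucket('docs')

    # R=2: the read is answered once 2 of the N=3 replica vnodes respond.
    # If 2 of those 3 vnodes were just claimed by a node that has not finished
    # receiving handoff data, both answers can be notfound even though the
    # object still exists on the remaining replica.
    obj = bucket.get('some-key', r=2)
    if obj.exists():
        print(obj.get_data())
    else:
        print("notfound -- replicas may not have been handed off yet")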
I am using riaksearch 0.14.2
and python riak client 1.3.0
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Python-Riak-Search-Returning-NoneType-object-tp3317538p3327943.html
Sent from the Riak Users mailing list archive at Nabble.com.