Re: N val and W - can a write ever fail??

2012-11-13 Thread kamiseq
yes, that was my conclusion; that's why my question was 'is it possible to
fail a write?'.
in a simple scenario you can use the R value to ensure that you have enough
nodes to read from, and riak will fail if you have fewer, which can be nice
sometimes. my point was that you cannot fail a write; riak will never
complain.

anyway, thanks for the response


pozdrawiam
Paweł Kamiński

kami...@gmail.com
pkaminski@gmail.com
__


On 13 November 2012 00:21, Eric Redmond  wrote:

> Pawel,
>
> I think you've wandered into a real disconnect between how we communicate
> Riak replication, and how it actually works.
>
> Although we say that N="nodes to replicate to", in reality, N="vnodes
> replicated to with every attempt to ensure they are different nodes with no
> guarantee". This is why you can have 3 nodes with 64 vnodes, and still set
> N>3.
>
> Now W is just the number of successful responses for a write to be
> considered a success. W can be any number less than or equal to N, but the
> difference will still be replicated as well, just asynchronously. For
> example, if N=3, W=2, once two nodes (well, vnodes) have responded, your
> write is considered a success. In the background, Riak will still ensure
> that third node is replicated to, giving you three total replicas.
>
> Hope that helps,
> Eric
>
>
>
> On Nov 12, 2012, at 3:01 PM, kamiseq  wrote:
>
> this is funny but recently I started to think again about how riak works
> and I thought I know more or less the basics ;]
>
> but I start digging and again I read
> http://docs.basho.com/riak/latest/tutorials/fast-track/Tunable-CAP-Controls-in-Riak/
> and then made few tests with riak bucket configured as follows
>
> N = 3
> W = 3
>
> I have 2 nodes running in my cluster so far (only 2 are connected) for
> test and I can still write with hinted handoff. I updated the data and I
> wrote again with same key - I got success.
>
> I changed properties on per request basis and I set w=1 I stopped first
> node (so I had only one running) and put new key and value. I started first
> node and query for the key and I got data from both nodes. I did the same
> with new key again this time with w=3. and again it was successful.
>
> what is the real difference between N and W? if hinted handoff always saves
> data for later synchronisation, can a write ever fail? are there any
> differences between the first write and later updates?
>
> I hope it is not really stupid question
>
> pozdrawiam
> Paweł Kamiński
>
> kami...@gmail.com
> pkaminski@gmail.com
> __
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: N val and W - can a write ever fail??

2012-11-13 Thread Jeremiah Peschka
You can use pr and pw to require reads and writes to hit primary vnodes. I 
suspect this makes it possible to fail writes in the way you are describing.
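
For illustration, a minimal sketch with the Ruby client (assuming a local
node, a hypothetical 'accounts' bucket, and a riak-client version that
passes the pw option through):

require 'riak'

client = Riak::Client.new(nodes: [{host: '127.0.0.1'}])
obj = client.bucket('accounts').new('alice')
obj.data = {'balance' => 100}

begin
  # pw: 3 demands acknowledgement from all three *primary* vnodes.
  # Fallback vnodes created by hinted handoff don't count, so with a
  # node down this raises instead of silently succeeding.
  obj.store(pw: 3)
rescue Riak::FailedRequest => e
  puts "write refused: #{e.message}"
end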

---
Jeremiah Peschka
Founder, Brent Ozar Unlimited

On Nov 13, 2012, at 12:51 AM, kamiseq  wrote:

> yes, that was my conclusion; that's why my question was 'is it possible to
> fail a write?'.
> in a simple scenario you can use the R value to ensure that you have enough
> nodes to read from, and riak will fail if you have fewer, which can be nice
> sometimes. my point was that you cannot fail a write; riak will never complain.
> 
> anyway thanks for response
> 
> 
> pozdrawiam
> Paweł Kamiński
> 
> kami...@gmail.com
> pkaminski@gmail.com
> __
> 
> 
> On 13 November 2012 00:21, Eric Redmond  wrote:
>> Pawel,
>> 
>> I think you've wandered into a real disconnect between how we communicate 
>> Riak replication, and how it actually works.
>> 
>> Although we say that N="nodes to replicate to", in reality, N="vnodes 
>> replicated to with every attempt to ensure they are different nodes with no 
>> guarantee". This is why you can have 3 nodes with 64 vnodes, and still set 
>> N>3.
>> 
>> Now W is just the number of successful responses for a write to be 
>> considered a success. W can be any number less than or equal to N, but the 
>> difference will still be replicated as well, just asynchronously. For 
>> example, if N=3, W=2, once two nodes (well, vnodes) have responded, your 
>> write is considered a success. In the background, Riak will still ensure 
>> that third node is replicated to, giving you three total replicas.
>> 
>> Hope that helps,
>> Eric
>> 
>> 
>> 
>> On Nov 12, 2012, at 3:01 PM, kamiseq  wrote:
>> 
>>> this is funny but recently I started to think again about how riak works 
>>> and I thought I know more or less the basics ;]
>>> 
>>> but I start digging and again I read 
>>> http://docs.basho.com/riak/latest/tutorials/fast-track/Tunable-CAP-Controls-in-Riak/
>>>  and then made few tests with riak bucket configured as follows
>>> 
>>> N = 3
>>> W = 3
>>> 
>>> I have 2 nodes running in my cluster so far (only 2 are connected) for test 
>>> and I can still write with hinted handoff. I updated the data and I wrote 
>>> again with same key - I got success. 
>>> 
>>> I changed properties on per request basis and I set w=1 I stopped first 
>>> node (so I had only one running) and put new key and value. I started first 
>>> node and query for the key and I got data from both nodes. I did the same 
>>> with new key again this time with w=3. and again it was successful.
>>> 
>>> what is the real difference between N and W? if hinted handoff always saves 
>>> data for later synchronisation, can a write ever fail? are there any 
>>> differences between the first write and later updates?
>>> 
>>> I hope it is not really stupid question
>>> 
>>> pozdrawiam
>>> Paweł Kamiński
>>> 
>>> kami...@gmail.com
>>> pkaminski@gmail.com
>>> __
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Quick Q re: error

2012-11-13 Thread Martin Streicher

The issue was inconsistent configuration. There was a call to Riak::Client.new 
in config/initializers/riak.rb and two more calls in a method to do searches. I 
expected Riak::Client.new to perhaps pick up settings from a YML file each 
time, but I think I was wrong. I changed the code to create one connection in 
the initializers file, pointing to the node defined in the YML file, and then 
reuse that connection. 
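
A minimal sketch of that arrangement (the YML layout and constant name here
are hypothetical, not the actual app code):

# config/initializers/riak.rb
require 'riak'
require 'yaml'

settings = YAML.load_file(Rails.root.join('config', 'riak.yml'))[Rails.env]

# One client for the whole app; riak-client's internal connection pool
# handles concurrent requests, so callers reuse this constant instead of
# calling Riak::Client.new again.
RIAK_CLIENT = Riak::Client.new(
  nodes: [{host: settings['host'], http_port: settings['http_port']}]
)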

Martin



On Nov 10, 2012, at 12:05 PM, Sean Cribbs wrote:

> Can you ping the node? i.e. Riak::Client.ping
> 
> On Fri, Nov 9, 2012 at 9:17 AM, Martin Streicher
>  wrote:
>> 
>> Locally, everything runs fine when on 127.0.0.1.
>> 
>> If I change to 192.168.0.2 and change the vm.args and app.config and remove 
>> the data/rings, I fail to connect when trying to enable search.
>> 
>> I can get further if I do
>> 
>> client = Riak::Client.new(nodes: [{host: '192.168.0.2'}])
>> 
>> and then enable search using that client variable.
>> 
>> I think I am misunderstanding something about the nodes and which nodes the 
>> operations are going to.
>> 
>> 
>> 
>> 
>> On Nov 9, 2012, at 12:10 PM, Sean Cribbs wrote:
>> 
>>> Well, it's pretty obvious that it simply can't connect, especially
>>> since your configuration is the default -- which means HTTP pointed to
>>> 127.0.0.1:8098. Is your Riak node running and on that port? Can you
>>> hit it from curl on the command line?
>>> 
>>> On Fri, Nov 9, 2012 at 7:36 AM, Martin Streicher
>>>  wrote:
 
 I'm not doing anything special in Riak::Client.
 
 How can I narrow what the RuntimeError is?
 
 
 
 On Nov 9, 2012, at 10:21 AM, Sean Cribbs wrote:
 
> Martin,
> 
> How is your Riak::Client object configured? That looks like one of two
> possibilities: first, the request failed 3 times in a row; second, one
> of the backends is raising an exception that seems network related but
> isn't one of the standard network errors (i.e it's a RuntimeError, but
> it should be a SystemError with econnrefused errno!).
> 
> On Fri, Nov 9, 2012 at 7:12 AM, Martin Streicher
>  wrote:
>> 
>> I get a connection refused error when I try to enable search from my 
>> code.
>> 
>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:450:in
>>  `rescue in recover_from': Connection refused - connect(2) (RuntimeError)
>>  from 
>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:422:in
>>  `recover_from'
>>  from 
>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:284:in
>>  `http'
>>  from 
>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:129:in
>>  `backend'
>>  from 
>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:246:in
>>  `get_bucket_props'
>>  from 
>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/bucket.rb:77:in
>>  `props'
>>  from 
>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/bucket.rb:68:in
>>  `props='
>>  from 
>> /var/www/zutron/releases/20121108191450/lib/classes/riak_search.rb:6:in 
>> `enable_search'
>> 
>> enable_search is:
>> 
>> def self.enable_search(bucket_name, client = Riak::Client.new)
>>  bucket = client.bucket bucket_name
>>  bucket.props = {search: true}
>> end
>> 
>> Any ideas why it's failing? Search is enabled on the machine.
>> 
>> 
>> 
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> 
> --
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
 
>>> 
>>> 
>>> 
>>> --
>>> Sean Cribbs 
>>> Software Engineer
>>> Basho Technologies, Inc.
>>> http://basho.com/
>> 
> 
> 
> 
> -- 
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Ruby Riak client -- threadsafe?

2012-11-13 Thread Sean Cribbs
Scott,

It is generally safe to use the Ruby client from multiple threads. A
few items that are infrequently touched (memoized buckets and their
properties) are not, but normal requests are all handled by a
thread-safe connection pool. Ripple historically kept a client in the
thread-locals in order to have a sort of globally-referrable Client
instance; similar to how ActiveRecord's connections work, but not as
sophisticated. This could be improved in the future, perhaps
instantiating it in the Railtie instead to be used by all threads.
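
A minimal sketch of the safe pattern (bucket and keys are made up): create
the memoized bucket once, then let the threads share the pooled client.

require 'riak'

client = Riak::Client.new
bucket = client.bucket('events')  # memoization isn't thread-safe; do this once, up front

threads = 5.times.map do |i|
  Thread.new do
    # Each request checks a connection out of the thread-safe pool.
    obj = bucket.get_or_new("event-#{i}")
    obj.data = {'seq' => i}
    obj.store
  end
end
threads.each(&:join)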

On Mon, Nov 12, 2012 at 5:01 PM, Scott Hyndman  wrote:
> Hi there,
>
> I noticed this post from a bit over a year ago
> (http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-June/004592.html),
> and wondered whether this was still the case. I've noticed one detail
> suggesting that it may be (the fact that Ripple creates a client per
> thread), and one that contradicts (the existence of HTTP and PB connection
> pools on the client).
>
> Can anyone shed some insight?
>
> Scott
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



-- 
Sean Cribbs 
Software Engineer
Basho Technologies, Inc.
http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Key removal using Bitcask and expiry_secs

2012-11-13 Thread Sean Cribbs
Hi again Scott,

Expired keys in bitcask will be removed from disk when
compaction/merging occurs, but until then will exist in the keydir
(AFAIK). Reads to an expired key simply return 'not found'.
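
As a client-side sketch (assuming expiry_secs is set to 60 for bitcask in
app.config, and a made-up bucket and key):

require 'riak'

client = Riak::Client.new
bucket = client.bucket('session_cache')

o = bucket.new('token-123')
o.data = {'user_id' => 42}
o.store

sleep 61  # wait past the expiry window

begin
  bucket.get('token-123')
rescue Riak::FailedRequest => e
  # The entry may still occupy keydir/disk space until a merge runs,
  # but reads already treat it as gone.
  puts "expired: #{e.not_found?}"
end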

On Mon, Nov 12, 2012 at 5:02 PM, Scott Hyndman  wrote:
> Hi there,
>
> When expired, at what point will the key be removed from memory? Is the
> expiration process automated, or trigged by a read?
>
> Scott
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



-- 
Sean Cribbs 
Software Engineer
Basho Technologies, Inc.
http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Quick Q re: error

2012-11-13 Thread Sean Cribbs
Glad you figured it out, Martin.

As you may have noticed, while Ripple provides the tight Rails
integration with initializers and such, the basic client does not. If
people think that's a valuable feature, I'd be happy to accept
pull-requests. :D

On Tue, Nov 13, 2012 at 9:30 AM, Martin Streicher
 wrote:
>
> The issue was inconsistent configuration. There was a call to 
> Riak::Client.new in config/initializers/riak.rb and two more calls in a 
> method to do searches. I expected Riak::Client.new to perhaps pick up 
> settings from a YML file each time, but I think I was wrong. I changed the 
> code to create one connection in the initializers file, pointing to the node 
> defined in the YML file, and then reuse that connection.
>
> Martin
>
>
>
> On Nov 10, 2012, at 12:05 PM, Sean Cribbs wrote:
>
>> Can you ping the node? i.e. Riak::Client.ping
>>
>> On Fri, Nov 9, 2012 at 9:17 AM, Martin Streicher
>>  wrote:
>>>
>>> Locally, everything runs fine when on 127.0.0.1.
>>>
>>> If I change to 192.168.0.2 and change the vm.args and app.config and remove 
>>> the data/rings, I fail to connect when trying to enable search.
>>>
>>> I can get further if I do
>>>
>>> client = Riak::Client.new(nodes: [{host: '192.168.0.2'}])
>>>
>>> and then enable search using that client variable.
>>>
>>> I think I am misunderstanding something about the nodes and which nodes the 
>>> operations are going to.
>>>
>>>
>>>
>>>
>>> On Nov 9, 2012, at 12:10 PM, Sean Cribbs wrote:
>>>
 Well, it's pretty obvious that it simply can't connect, especially
 since your configuration is the default -- which means HTTP pointed to
 127.0.0.1:8098. Is your Riak node running and on that port? Can you
 hit it from curl on the command line?

 On Fri, Nov 9, 2012 at 7:36 AM, Martin Streicher
  wrote:
>
> I'm not doing anything special in Riak::Client.
>
> How can I narrow what the RuntimeError is?
>
>
>
> On Nov 9, 2012, at 10:21 AM, Sean Cribbs wrote:
>
>> Martin,
>>
>> How is your Riak::Client object configured? That looks like one of two
>> possibilities: first, the request failed 3 times in a row; second, one
>> of the backends is raising an exception that seems network related but
>> isn't one of the standard network errors (i.e it's a RuntimeError, but
>> it should be a SystemError with econnrefused errno!).
>>
>> On Fri, Nov 9, 2012 at 7:12 AM, Martin Streicher
>>  wrote:
>>>
>>> I get a connection refused error when I try to enable search from my 
>>> code.
>>>
>>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:450:in
>>>  `rescue in recover_from': Connection refused - connect(2) 
>>> (RuntimeError)
>>>  from 
>>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:422:in
>>>  `recover_from'
>>>  from 
>>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:284:in
>>>  `http'
>>>  from 
>>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:129:in
>>>  `backend'
>>>  from 
>>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:246:in
>>>  `get_bucket_props'
>>>  from 
>>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/bucket.rb:77:in
>>>  `props'
>>>  from 
>>> /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/bucket.rb:68:in
>>>  `props='
>>>  from 
>>> /var/www/zutron/releases/20121108191450/lib/classes/riak_search.rb:6:in 
>>> `enable_search'
>>>
>>> enable_search is:
>>>
>>> def self.enable_search(bucket_name, client = Riak::Client.new)
>>>  bucket = client.bucket bucket_name
>>>  bucket.props = {search: true}
>>> end
>>>
>>> Any ideas why it's failing? Search is enabled on the machine.
>>>
>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>>
>> --
>> Sean Cribbs 
>> Software Engineer
>> Basho Technologies, Inc.
>> http://basho.com/
>



 --
 Sean Cribbs 
 Software Engineer
 Basho Technologies, Inc.
 http://basho.com/
>>>
>>
>>
>>
>> --
>> Sean Cribbs 
>> Software Engineer
>> Basho Technologies, Inc.
>> http://basho.com/
>



-- 
Sean Cribbs 
Software Engineer
Basho Technologies, Inc.
http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Ruby Riak client -- threadsafe?

2012-11-13 Thread Scott Hyndman
Great. Thanks Sean.

On 13 November 2012 10:37, Sean Cribbs  wrote:

> Scott,
>
> It is generally safe to use the Ruby client from multiple threads. A
> few items that are infrequently touched (memoized buckets and their
> properties) are not, but normal requests are all handled by a
> thread-safe connection pool. Ripple historically kept a client in the
> thread-locals in order to have a sort of globally-referrable Client
> instance; similar to how ActiveRecord's connections work, but not as
> sophisticated. This could be improved in the future, perhaps
> instantiating it in the Railtie instead to be used by all threads.
>
> On Mon, Nov 12, 2012 at 5:01 PM, Scott Hyndman 
> wrote:
> > Hi there,
> >
> > I noticed this post from a bit over a year ago
> > (
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-June/004592.html
> ),
> > and wondered whether this was still the case. I've noticed one detail
> > suggesting that it may be (the fact that Ripple creates a client per
> > thread), and one that contradicts (the existence of HTTP and PB
> connection
> > pools on the client).
> >
> > Can anyone shed some insight?
> >
> > Scott
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
>
>
> --
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Quick Q re: error

2012-11-13 Thread Martin Streicher

I'd guess it's just simple riak.yml support, albeit with an ability to define 
multiple nodes for the client. We'd also have to add support for Rails: 
riak.yml would be in one location for Rails apps and another for Ruby apps. 
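
Something like this, perhaps (a hypothetical layout, not an existing
convention):

# config/riak.yml
development:
  nodes:
    - host: 127.0.0.1
      http_port: 8098
production:
  nodes:
    - host: 192.168.0.2
      http_port: 8098
    - host: 192.168.0.3
      http_port: 8098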


On Nov 13, 2012, at 10:40 AM, Sean Cribbs wrote:

> Glad you figured it out, Martin.
> 
> As you may have noticed, while Ripple provides the tight Rails
> integration with initializers and such, the basic client does not. If
> people think that's a valuable feature, I'd be happy to accept
> pull-requests. :D
> 
> On Tue, Nov 13, 2012 at 9:30 AM, Martin Streicher
>  wrote:
>> 
>> The issue was inconsistent configuration. There was a call to 
>> Riak::Client.new in config/initializers/riak.rb and two more calls in a 
>> method to do searches. I expected Riak::Client.new to perhaps pick up 
>> settings from a YML file each time, but I think I was wrong. I changed the 
>> code to create one connection in the initializers file, pointing to the node 
>> defined in the YML file, and then reuse that connection.
>> 
>> Martin
>> 
>> 
>> 
>> On Nov 10, 2012, at 12:05 PM, Sean Cribbs wrote:
>> 
>>> Can you ping the node? i.e. Riak::Client.ping
>>> 
>>> On Fri, Nov 9, 2012 at 9:17 AM, Martin Streicher
>>>  wrote:
 
 Locally, everything runs fine when on 127.0.0.1.
 
 If I change to 192.168.0.2 and change the vm.args and app.config and 
 remove the data/rings, I fail to connect when trying to enable search.
 
 I can get further if I do
 
 client = Riak::Client.new(nodes: [{host: '192.168.0.2'}])
 
 and then enable search using that client variable.
 
 I think I am misunderstanding something about the nodes and which nodes 
 the operations are going to.
 
 
 
 
 On Nov 9, 2012, at 12:10 PM, Sean Cribbs wrote:
 
> Well, it's pretty obvious that it simply can't connect, especially
> since your configuration is the default -- which means HTTP pointed to
> 127.0.0.1:8098. Is your Riak node running and on that port? Can you
> hit it from curl on the command line?
> 
> On Fri, Nov 9, 2012 at 7:36 AM, Martin Streicher
>  wrote:
>> 
>> I'm not doing anything special in Riak::Client.
>> 
>> How can I narrow what the RuntimeError is?
>> 
>> 
>> 
>> On Nov 9, 2012, at 10:21 AM, Sean Cribbs wrote:
>> 
>>> Martin,
>>> 
>>> How is your Riak::Client object configured? That looks like one of two
>>> possibilities: first, the request failed 3 times in a row; second, one
>>> of the backends is raising an exception that seems network related but
>>> isn't one of the standard network errors (i.e it's a RuntimeError, but
>>> it should be a SystemError with econnrefused errno!).
>>> 
>>> On Fri, Nov 9, 2012 at 7:12 AM, Martin Streicher
>>>  wrote:
 
 I get a connection refused error when I try to enable search from my 
 code.
 
 /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:450:in
  `rescue in recover_from': Connection refused - connect(2) 
 (RuntimeError)
 from 
 /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:422:in
  `recover_from'
 from 
 /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:284:in
  `http'
 from 
 /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:129:in
  `backend'
 from 
 /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/client.rb:246:in
  `get_bucket_props'
 from 
 /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/bucket.rb:77:in
  `props'
 from 
 /var/www/zutron/shared/bundle/ruby/1.9.1/gems/riak-client-1.0.5/lib/riak/bucket.rb:68:in
  `props='
 from 
 /var/www/zutron/releases/20121108191450/lib/classes/riak_search.rb:6:in
  `enable_search'
 
 enable_search is:
 
 def self.enable_search(bucket_name, client = Riak::Client.new)
 bucket = client.bucket bucket_name
 bucket.props = {search: true}
 end
 
 Any ideas why it's failing? Search is enabled on the machine.
 
 
 
 
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> 
>>> 
>>> 
>>> --
>>> Sean Cribbs 
>>> Software Engineer
>>> Basho Technologies, Inc.
>>> http://basho.com/
>> 
> 
> 
> 
> --
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
 
>>> 
>>> 
>>> 
>>> --
>>> Sean Cribbs

Re: More Migration Questions

2012-11-13 Thread Shane McEwan

Anyone? Bueller? :-)

Installing Riak 1.1.1 on the new nodes, copying the data directories 
from the old nodes, issuing a "reip" on all the new nodes, starting up, 
waiting for partition handoffs to complete, shutting down, upgrading to 
1.2.1 and starting up again got us to where we want to be. But this is 
not very convenient.


What do I do when I come to creating our test environment where I'll be 
wanting to copy production data onto the test nodes on a regular basis? 
At that point I won't have the "luxury" of downgrading to 1.1.1 to have 
a working "reip" command.


Surely there's gotta be an easier way to spin up a new cluster with new 
names and IPs but with old data?


Shane.

On 08/11/12 21:10, Shane McEwan wrote:

G'day!

Just to add to the list of people asking questions about migrating to
1.2.1 . . .

We're about to migrate our 4 node production Riak database from 1.1.1 to
1.2.1. At the same time we're also migrating from virtual machines to
physical machines. These machines will have new names and IP addresses.

The process of doing rolling upgrades is well documented but I'm unsure
of the correct procedure for moving to an entirely new cluster.

We have the luxury of a maintenance window so we don't need to keep
everything running during the migration. Therefore the current plan is
to stop the current cluster, copy the Riak data directories to the new
machines and start up the new cluster. The hazy part of the process is
how we "reip" the database so it will work in the new cluster.

We've tried using the "riak-admin reip" command but were left with one
of our nodes in "(legacy)" mode according to "riak-admin member-status".
From an earlier E-Mail thread[1] it seems like "reip" is deprecated and
we should be doing a "cluster force replace" instead.

So, would the new procedure be the following?

1. Shutdown old cluster
2. Copy data directory
3. Start new cluster (QUESTION: The new nodes don't own any of the
partitions in the data directory. What does it do?) (QUESTION: The new
nodes won't be part of a cluster yet. Do I need to "join" them before I
can do any of the following commands? Or do I just put all the joins and
force-replace commands into the same plan and commit it all together?)
3. Issue "riak-admin cluster force-replace old-node1 new-node1"
(QUESTION: Do I run this command just on "new-node1" or on all nodes?)
4. Issue "force-replace" commands for the remaining three nodes.
5. Issue a "cluster plan" and "cluster commit" to commit the changes.
6. Cross fingers.

In my mind the "replace" and/or "force-replace" commands are something
we would use it we had a failed node and needed to bring a spare online
to take over. It doesn't feel like something you would do if you don't
already have a cluster in place and are needing to "replace" ALL nodes.

Of course, we want to test this procedure before doing it for real. What
are the risks of doing the above procedure while the old cluster is
still running? While the new nodes are on a segregated network and
shouldn't be able to contact the old nodes what would happen if we did
the above and found the network wasn't as segregated as we originally
thought? Would the new nodes start trying to communicate with the old
nodes before the "force-replace" can take effect? Or, because all the
cluster changes are atomic there won't be any risk of that?

Sorry for all the questions. I'm just trying to get a clear procedure
for moving an entire cluster to new hardware and hopefully this thread
will help other people in the future.

Thanks in advance!

Shane.

[1] http://comments.gmane.org/gmane.comp.db.riak.user/8418


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: More Migration Questions

2012-11-13 Thread Thomas Santero
Hi Shane,

I'm sorry for the delay on this. Over the weekend I was working to
replicate your setup so I can answer your question from experience. Alas,
time got the best of me and I have not yet finished.

That said, I'm inclined to suggest upgrading riak on your current cluster
first and then using riak-admin replace to move off of the VM's and onto
metal.

* In this scenario, do a rolling upgrade (including making backups) of the
current cluster.
* Install riak onto the new machines
* join the first machine to the cluster
* use riak-admin replace to replace one of the old nodes with the new node
  (see the sketch after this list)
* wait for ring-ready, then repeat for the other nodes.
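
In 1.2.x cluster-command syntax, that sequence might look roughly like this
(node names are placeholders):

riak-admin cluster join riak@new1.example.com       # run on the new node
riak-admin cluster replace riak@old1.example.com riak@new1.example.com
riak-admin cluster plan                             # review the proposed transfers
riak-admin cluster commit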

Tom

On Tue, Nov 13, 2012 at 11:59 AM, Shane McEwan  wrote:

> Anyone? Bueller? :-)
>
> Installing Riak 1.1.1 on the new nodes, copying the data directories from
> the old nodes, issuing a "reip" on all the new nodes, starting up, waiting
> for partition handoffs to complete, shutting down, upgrading to 1.2.1 and
> starting up again got us to where we want to be. But this is not very
> convenient.
>
> What do I do when I come to creating our test environment where I'll be
> wanting to copy production data onto the test nodes on a regular basis? At
> that point I won't have the "luxury" of downgrading to 1.1.1 to have a
> working "reip" command.
>
> Surely there's gotta be an easier way to spin up a new cluster with new
> names and IPs but with old data?
>
> Shane.
>
>
> On 08/11/12 21:10, Shane McEwan wrote:
>
>> G'day!
>>
>> Just to add to the list of people asking questions about migrating to
>> 1.2.1 . . .
>>
>> We're about to migrate our 4 node production Riak database from 1.1.1 to
>> 1.2.1. At the same time we're also migrating from virtual machines to
>> physical machines. These machines will have new names and IP addresses.
>>
>> The process of doing rolling upgrades is well documented but I'm unsure
>> of the correct procedure for moving to an entirely new cluster.
>>
>> We have the luxury of a maintenance window so we don't need to keep
>> everything running during the migration. Therefore the current plan is
>> to stop the current cluster, copy the Riak data directories to the new
>> machines and start up the new cluster. The hazy part of the process is
>> how we "reip" the database so it will work in the new cluster.
>>
>> We've tried using the "riak-admin reip" command but were left with one
>> of our nodes in "(legacy)" mode according to "riak-admin member-status".
>> From an earlier E-Mail thread[1] it seems like "reip" is deprecated and
>> we should be doing a "cluster force replace" instead.
>>
>> So, would the new procedure be the following?
>>
>> 1. Shutdown old cluster
>> 2. Copy data directory
>> 3. Start new cluster (QUESTION: The new nodes don't own any of the
>> partitions in the data directory. What does it do?) (QUESTION: The new
>> nodes won't be part of a cluster yet. Do I need to "join" them before I
>> can do any of the following commands? Or do I just put all the joins and
>> force-replace commands into the same plan and commit it all together?)
>> 4. Issue "riak-admin cluster force-replace old-node1 new-node1"
>> (QUESTION: Do I run this command just on "new-node1" or on all nodes?)
>> 5. Issue "force-replace" commands for the remaining three nodes.
>> 6. Issue a "cluster plan" and "cluster commit" to commit the changes.
>> 7. Cross fingers.
>>
>> In my mind the "replace" and/or "force-replace" commands are something
>> we would use it we had a failed node and needed to bring a spare online
>> to take over. It doesn't feel like something you would do if you don't
>> already have a cluster in place and are needing to "replace" ALL nodes.
>>
>> Of course, we want to test this procedure before doing it for real. What
>> are the risks of doing the above procedure while the old cluster is
>> still running? While the new nodes are on a segregated network and
>> shouldn't be able to contact the old nodes what would happen if we did
>> the above and found the network wasn't as segregated as we originally
>> thought? Would the new nodes start trying to communicate with the old
>> nodes before the "force-replace" can take effect? Or, because all the
>> cluster changes are atomic there won't be any risk of that?
>>
>> Sorry for all the questions. I'm just trying to get a clear procedure
>> for moving an entire cluster to new hardware and hopefully this thread
>> will help other people in the future.
>>
>> Thanks in advance!
>>
>> Shane.
>>
>> [1] http://comments.gmane.org/gmane.comp.db.riak.user/8418
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
> __**_
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

ANN: riak_dt Preview AMI

2012-11-13 Thread Jordan West
Hey Riak Users,

ami-46a3272f contains a preview release of riak_dt (
http://github.com/basho/riak_dt), which brings support for PN-Counters (and
other future work on CRDTs) to Riak. If you hadn't had a chance to check it
out I highly suggest watching Russell Brown and Sean Cribbs speak about the
work at RICON (http://vimeo.com/52414903) and checking out their slides (
https://speakerdeck.com/basho/data-structures-in-riak).

To get started you can follow the exact same instructions for setting up the
yokozuna preview (
https://github.com/rzezeski/yokozuna/blob/master/docs/EC2.md) except
use ami-46a3272f instead of the yokozuna ami (ami-9c2d96f5):

  ec2-run-instances ami-46a3272f -k <keypair> -n <number-of-instances>


More information about riak_dt can be found in the README:

https://github.com/basho/riak_dt/blob/master/README.md

Cheers,

Jordan
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


ANN: riak-js 0.9.0 released

2012-11-13 Thread Mathias Meyer
Hey all,

I'm happy to announce the 0.9.0 release of riak-js, the Riak client for 
Node.js. It's a complete rewrite in plain old JavaScript, bringing some new 
functionality along the way. You can read all about the fresh release over on 
the Basho blog [1].

riak-js now has a new home [2] and fully updated documentation [3].

Let me know if you run into any issues or have any questions!

npm install riak-js@0.9.0 and off you go!

Happy JavaScripting!

Cheers, Mathias

[1] http://basho.com/blog/technical/2012/11/13/riak-js-fresh-start/
[2] https://github.com/mostlyserious/riak-js
[3] http://mostlyserious.github.com/riak-js/


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ANN: riak-js 0.9.0 released

2012-11-13 Thread Alexander Sicular
Are you published in npm? 

npm install riak is for the mranney version. 


@siculars
http://siculars.posterous.com

Sent from my iRotaryPhone

On Nov 13, 2012, at 2:33 PM, Mathias Meyer  wrote:

> Hey all,
> 
> I'm happy to announce the 0.9.0 release of riak-js, the Riak client for 
> Node.js. It's a complete rewrite in plain old JavaScript, bringing some new 
> functionality along the way. You can read all about the fresh release over on 
> the Basho blog [1].
> 
> riak-js now has a new home [2] and fully updated documentation [3].
> 
> Let me know if you run into any issues or have any questions!
> 
> npm install riak-js@0.9.0 and off you go!
> 
> Happy JavaScripting!
> 
> Cheers, Mathias
> 
> [1] http://basho.com/blog/technical/2012/11/13/riak-js-fresh-start/
> [2] https://github.com/mostlyserious/riak-js
> [3] http://mostlyserious.github.com/riak-js/
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ANN: riak-js 0.9.0 released

2012-11-13 Thread Christopher Meiklejohn
On Tuesday, November 13, 2012 at 3:24 PM, Alexander Sicular wrote:
> Are you published in npm? 
> 
> npm install riak is for the mranney version.
It appears you mistyped the library name:

https://npmjs.org/package/riak-js

npm install riak-js

- Chris 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ANN: riak-js 0.9.0 released

2012-11-13 Thread Mathias Meyer
I mistyped that indeed. The correct name is riak-js on npmjs. Sorry!

Cheers, Mathias 


On Tuesday, 13. November 2012 at 21:26, Christopher Meiklejohn wrote:

> On Tuesday, November 13, 2012 at 3:24 PM, Alexander Sicular wrote:
> > Are you published in npm? 
> > 
> > npm install riak is for the mranney version.
> It appears you mistyped the library name:
> 
> https://npmjs.org/package/riak-js
> 
> npm install riak-js
> 
> - Chris 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ANN: riak_dt Preview AMI

2012-11-13 Thread Sebastian Cohnen
AWESOME! thanks for sharing!

On 13.11.2012, at 20:33, Jordan West  wrote:

> Hey Riak Users,
> 
> ami-46a3272f contains a preview release of riak_dt 
> (http://github.com/basho/riak_dt), which brings support for PN-Counters (and 
> other future work on CRDTs) to Riak. If you hadn't had a chance to check it 
> out I highly suggest watching Russell Brown and Sean Cribbs speak about the 
> work at RICON (http://vimeo.com/52414903) and checking out their slides 
> (https://speakerdeck.com/basho/data-structures-in-riak). 
> 
> To get started you can follow the exact same instruction for setting up the 
> yokozuna preview 
> (https://github.com/rzezeski/yokozuna/blob/master/docs/EC2.md) except use 
> ami-46a3272f instead of the yokozuna ami (ami-9c2d96f5):
> 
>   ec2-run-instances ami-46a3272f -k <keypair> -n <number-of-instances>
> 
> 
> More information about riak_dt can be found in the README: 
> 
> https://github.com/basho/riak_dt/blob/master/README.md
> 
> Cheers,
> 
> Jordan
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ANN: riak-js 0.9.0 released

2012-11-13 Thread Alexander Sicular
Thanks. Great work. Will definitely check this out.

@siculars
http://siculars.posterous.com

Sent from my rotary phone.
On Nov 13, 2012 3:30 PM, "Mathias Meyer"  wrote:

>  I mistyped that indeed. The correct name is indeed riak-js on npmjs.
> Sorry!
>
> Cheers, Mathias
>
> On Tuesday, 13. November 2012 at 21:26, Christopher Meiklejohn wrote:
>
> On Tuesday, November 13, 2012 at 3:24 PM, Alexander Sicular wrote:
>
> Are you published in npm?
>
> npm install riak is for the mranney version.
>
> It appears you mistyped the library name:
>
> https://npmjs.org/package/riak-js
>
> npm install riak-js
>
> - Chris
>
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ANN: riak-js 0.9.0 released

2012-11-13 Thread gibraltar

+1 Thanks. I've been waiting for this for a long time. Already using the older 
version and will migrate to this one soon.

gibraltar.

On Nov 13, 2012, at 3:40 PM, Alexander Sicular  wrote:

> Thanks. Great work. Will definitely check this out.
> 
> @siculars
> http://siculars.posterous.com
> 
> Sent from my rotary phone.
> 
> On Nov 13, 2012 3:30 PM, "Mathias Meyer"  wrote:
> I mistyped that indeed. The correct name is indeed riak-js on npmjs. Sorry!
> 
> Cheers, Mathias
> On Tuesday, 13. November 2012 at 21:26, Christopher Meiklejohn wrote:
> 
>> On Tuesday, November 13, 2012 at 3:24 PM, Alexander Sicular wrote:
>>> Are you published in npm?
>>> 
>>> npm install riak is for the mranney version.
>> It appears you mistyped the library name:
>> 
>> https://npmjs.org/package/riak-js
>> 
>> npm install riak-js
>> 
>> - Chris
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: More Migration Questions

2012-11-13 Thread Martin Woods
Hi Tom

I'd be very interested to know if Shane's approach should work, or if you
know of any good reason why that approach would cause issues.

Also, aren't there several very real business use cases here that users of
Riak will inevitably encounter, and must be able to satisfy? Shane mentions
two use cases below: creation of a test environment using a copy of data
from a production cluster; and the migration of data within one cloud
provider from one set of systems to a distinct, separate set of systems.

To add to this, what about the case where a Riak customer needs to move
from one cloud provider to another? How does this customer take his data
with him?

All of the above cases require that a separate cluster be spun up from the
original cluster, with different names and IP addresses for the Riak nodes
 involved in the cluster.

None of these use cases are satisfied by using the riak-admin cluster
command.

It seemed that this was the purpose of the reip command, but if Basho is
poised to deprecate this command, and indeed no longer recommends its use,
how are the previous cases supported? Surely these are important scenarios
for users of Riak, and therefore Basho?

At one level, it seems it should be entirely possible to simply copy the
data directory from each Riak node and tell Riak that the node names and IP
addresses have changed (reip!). So what's the problem with doing this?

Regards,
Martin.


On 13 November 2012 17:16, Thomas Santero  wrote:

> Hi Shane,
>
> I'm sorry for the delay on this. Over the weekend I was working to
> replicate your setup so I can answer your question from experience. Alas,
> time got the best of me and I have not yet finished.
>
> That said, I'm inclined to suggest upgrading riak on your current cluster
> first and then using riak-admin replace to move off of the VM's and onto
> metal.
>
> * In this scenario, do a rolling upgrade (including making backups) of the
> current cluster.
> * Install riak onto the new machines
> * join the first machine to the cluster
> * use riak-admin replace to replace one of the old nodes with the new node
> * wait for ring-ready, then repeat for the other nodes.
>
> Tom
>
>
> On Tue, Nov 13, 2012 at 11:59 AM, Shane McEwan  wrote:
>
>> Anyone? Bueller? :-)
>>
>> Installing Riak 1.1.1 on the new nodes, copying the data directories from
>> the old nodes, issuing a "reip" on all the new nodes, starting up, waiting
>> for partition handoffs to complete, shutting down, upgrading to 1.2.1 and
>> starting up again got us to where we want to be. But this is not very
>> convenient.
>>
>> What do I do when I come to creating our test environment where I'll be
>> wanting to copy production data onto the test nodes on a regular basis? At
>> that point I won't have the "luxury" of downgrading to 1.1.1 to have a
>> working "reip" command.
>>
>> Surely there's gotta be an easier way to spin up a new cluster with new
>> names and IPs but with old data?
>>
>> Shane.
>>
>>
>> On 08/11/12 21:10, Shane McEwan wrote:
>>
>>> G'day!
>>>
>>> Just to add to the list of people asking questions about migrating to
>>> 1.2.1 . . .
>>>
>>> We're about to migrate our 4 node production Riak database from 1.1.1 to
>>> 1.2.1. At the same time we're also migrating from virtual machines to
>>> physical machines. These machines will have new names and IP addresses.
>>>
>>> The process of doing rolling upgrades is well documented but I'm unsure
>>> of the correct procedure for moving to an entirely new cluster.
>>>
>>> We have the luxury of a maintenance window so we don't need to keep
>>> everything running during the migration. Therefore the current plan is
>>> to stop the current cluster, copy the Riak data directories to the new
>>> machines and start up the new cluster. The hazy part of the process is
>>> how we "reip" the database so it will work in the new cluster.
>>>
>>> We've tried using the "riak-admin reip" command but were left with one
>>> of our nodes in "(legacy)" mode according to "riak-admin member-status".
>>> From an earlier E-Mail thread[1] it seems like "reip" is deprecated and
>>> we should be doing a "cluster force replace" instead.
>>>
>>> So, would the new procedure be the following?
>>>
>>> 1. Shutdown old cluster
>>> 2. Copy data directory
>>> 3. Start new cluster (QUESTION: The new nodes don't own any of the
>>> partitions in the data directory. What does it do?) (QUESTION: The new
>>> nodes won't be part of a cluster yet. Do I need to "join" them before I
>>> can do any of the following commands? Or do I just put all the joins and
>>> force-replace commands into the same plan and commit it all together?)
>>> 4. Issue "riak-admin cluster force-replace old-node1 new-node1"
>>> (QUESTION: Do I run this command just on "new-node1" or on all nodes?)
>>> 5. Issue "force-replace" commands for the remaining three nodes.
>>> 6. Issue a "cluster plan" and "cluster commit" to commit the changes.
>>> 7. Cross fingers.
>>>
>>> In my mind the "replace" and/o

Re: More Migration Questions

2012-11-13 Thread Matt Black
I still haven't really gotten to the bottom of the best way to do this
(short of paying for MDC):

http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-October/009951.html

Previously, I've used backup/restore for situations like this, but our
backup has now grown to around 100GB - so it has become impractical.

Shane, in your maintenance window could you:
* create your new cluster
* stop any new data being added to the old cluster
* run a riak-admin backup
* run a riak-admin restore into the new one (rough commands below)
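
For reference, the commands look roughly like this (node names, cookie and
path are placeholders):

riak-admin backup riak@old1.example.com riak /backups/riak.bak all
riak-admin restore riak@new1.example.com riak /backups/riak.bak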

The maintenance window here saves you a lot of trouble... Unfortunately,
most people won't get one ;)

Cheers
Matt



On 14 November 2012 09:44, Martin Woods  wrote:

> Hi Tom
>
> I'd be very interested to know if Shane's approach should work, or if you
> know of any good reason why that approach would cause issues.
>
> Also, aren't there several very real business use cases here that users of
> Riak will inevitably encounter, and must be able to satisfy? Shane mentions
> two use cases below: creation of a test environment using a copy of data
> from a production cluster; and the migration of data within one cloud
> provider from one set of systems to a distinct, separate set of systems.
>
> To add to this, what about the case where a Riak customer needs to move
> from one cloud provider to another? How does this customer take his data
> with him?
>
> All of the above cases require that a separate cluster be spun up from the
> original cluster, with different names and IP addresses for the Riak nodes
>  involved in the cluster.
>
> None of these use cases are satisfied by using the riak-admin cluster
> command.
>
> It seemed that this was the purpose of the reip command, but if Basho is
> poised to deprecate this command, and indeed no longer recommends its use,
> how are the previous cases supported? Surely these are important scenarios
> for users of Riak, and therefore Basho?
>
> At one level, it seems it should be entirely possible to simply copy the
> data directory from each Riak node and tell Riak that the node names and IP
> addresses have changed (reip!). So what's the problem with doing this?
>
> Regards,
> Martin.
>
>
> On 13 November 2012 17:16, Thomas Santero  wrote:
>
>> Hi Shane,
>>
>> I'm sorry for the delay on this. Over the weekend I was working to
>> replicate your setup so I can answer your question from experience. Alas,
>> time got the best of me and I have not yet finished.
>>
>> That said, I'm inclined to suggest upgrading riak on your current cluster
>> first and then using riak-admin replace to move off of the VM's and onto
>> metal.
>>
>> * In this scenario, do a rolling upgrade (including making backups) of
>> the current cluster.
>> * Install riak onto the new machines
>> * join the first machine to the cluster
>> * use riak-admin replace to replace one of the old nodes with the new node
>> * wait for ring-ready, then repeat for the other nodes.
>>
>> Tom
>>
>>
>> On Tue, Nov 13, 2012 at 11:59 AM, Shane McEwan wrote:
>>
>>> Anyone? Bueller? :-)
>>>
>>> Installing Riak 1.1.1 on the new nodes, copying the data directories
>>> from the old nodes, issuing a "reip" on all the new nodes, starting up,
>>> waiting for partition handoffs to complete, shutting down, upgrading to
>>> 1.2.1 and starting up again got us to where we want to be. But this is not
>>> very convenient.
>>>
>>> What do I do when I come to creating our test environment where I'll be
>>> wanting to copy production data onto the test nodes on a regular basis? At
>>> that point I won't have the "luxury" of downgrading to 1.1.1 to have a
>>> working "reip" command.
>>>
>>> Surely there's gotta be an easier way to spin up a new cluster with new
>>> names and IPs but with old data?
>>>
>>> Shane.
>>>
>>>
>>> On 08/11/12 21:10, Shane McEwan wrote:
>>>
 G'day!

 Just to add to the list of people asking questions about migrating to
 1.2.1 . . .

 We're about to migrate our 4 node production Riak database from 1.1.1 to
 1.2.1. At the same time we're also migrating from virtual machines to
 physical machines. These machines will have new names and IP addresses.

 The process of doing rolling upgrades is well documented but I'm unsure
 of the correct procedure for moving to an entirely new cluster.

 We have the luxury of a maintenance window so we don't need to keep
 everything running during the migration. Therefore the current plan is
 to stop the current cluster, copy the Riak data directories to the new
 machines and start up the new cluster. The hazy part of the process is
 how we "reip" the database so it will work in the new cluster.

 We've tried using the "riak-admin reip" command but we

Re: More Migration Questions

2012-11-13 Thread Shane McEwan

G'day Tom and Matt. Thanks for your suggestions.

Even though our two cluster networks are separate it would theoretically 
be possible to have nodes from the new cluster join the old cluster and 
migrate data that way. However, we would prefer to leave the old cluster 
untouched as much as possible in case we need to abort the migration and 
continue to use the old cluster in production. Performing an upgrade on 
our currently working nodes and then essentially removing them from the 
cluster is an additional risk that we'd like to avoid.


I can certainly see how MDC would solve the problem but from a budget 
point of view it's not an option for us at the moment.


The backup-restore option is always a possibility however our database 
is currently around 80GB compressed and I'd hate to think how long it 
would take to dump and restore that. While we have a maintenance window 
I suspect it won't be long enough. :-(


We have a solution for now by using the 1.1.1 "reip" command which gets 
our migration and upgrade done. However, looking to the future it would 
be nice if there was a reliable way of moving data from one node to 
another "out of band".


Shane.

On 13/11/12 22:54, Matt Black wrote:

I still haven't really gotten to the bottom of the best way to do this
(short of paying for MDC):

http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-October/009951.html

Previously, I've used backup/restore for situations like this, but our
backup has now grown to around 100GB - so it has become impractical.

Shane, in your maintenance window could you:
* create your new cluster
* stop any new data being added to the old cluster
* run a riak-admin backup
* run a riak-admin restore into the new one

The maintenance window here saves you a lot of trouble... Unfortunately,
most people won't get one ;)

Cheers
Matt



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: timeout error for size>40k & changing q_limit has no effect

2012-11-13 Thread Mark Phillips
Hi Venki,

I know this email is about two months late, but I figured it was worth
responding to as I've had this response as a draft for the last 60
days or so. :)

At any rate, it *looks* like you are (or were) hitting this:

https://github.com/basho/riak_kv/issues/290

It has been patched and will be fixed in the 1.3 release of Riak which
is slated for release towards the middle of January (if all goes to
plan).

Mark

On Sat, Sep 8, 2012 at 3:46 AM, Venki Yedidha
 wrote:
> Hi All,
>
>  I am getting the following error code from Riak when I execute my map
> reduce..
>
> {"phase":0,"error":"[timeout]","input":"{<<\"20120708\">>,<<\"JM\">>}","type":"forward_preflist","stack":"[]"}
>
> This was happening from the past two days when my [bucket,key] count
> went huge upto 40k...
>
> I tried changing the q_limit value from 64 to 2000, but still no affect
> so far..
>
>
> Please help on the above..
>
> Thanks,
> Venkatesh
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com