OSX issue

2013-08-13 Thread Federico Mosca
I have a problem installing Riak; I am on a 2013 OS X system.

git clone https://github.com/basho/riak
cd riak
make rel

cd linking; make export
cc -o prlink.o -c -m32  -Wall -fno-common -pthread -O2 -fPIC  -UDEBUG
 -DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1 -DHAVE_SOCKLEN_T=1
-DXP_MACOSX=1 -DHAVE_LCHOWN=1 -DHAVE_STRERROR=1  -DFORCE_PR_LOG
-D_PR_PTHREADS -UHAVE_CVAR_BUILT_ON_SEM -D_NSPR_BUILD_
-I../../../dist/include/nspr -I../../../pr/include
-I../../../pr/include/private -I/Developer/Headers/FlatCarbon  prlink.c
prlink.c:48:10: fatal error: 'CodeFragments.h' file not found
#include <CodeFragments.h>
 ^
1 error generated.
make[6]: *** [prlink.o] Error 1
make[5]: *** [export] Error 2
make[4]: *** [export] Error 2
make[3]: *** [export] Error 2
make[2]: ***
[/Users/Federico/riak/deps/erlang_js/c_src/system/lib/libnspr4.a] Error 2
make[1]: *** [c_src] Error 2
ERROR: Command [compile] failed!
make: *** [compile] Error 1
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OSX issue

2013-08-13 Thread Bhuwan Chawla
Here's another thread with a similar issue:

http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-August/009110.html




On Tue, Aug 13, 2013 at 7:58 AM, Federico Mosca  wrote:

> I have a problem to install riak, I am on a OSX of 2013
>
> git clone https://github.com/basho/riak
> cd riak
> make rel
>
> cd linking; make export
> cc -o prlink.o -c -m32  -Wall -fno-common -pthread -O2 -fPIC  -UDEBUG
>  -DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1 -DHAVE_SOCKLEN_T=1
> -DXP_MACOSX=1 -DHAVE_LCHOWN=1 -DHAVE_STRERROR=1  -DFORCE_PR_LOG
> -D_PR_PTHREADS -UHAVE_CVAR_BUILT_ON_SEM -D_NSPR_BUILD_
> -I../../../dist/include/nspr -I../../../pr/include
> -I../../../pr/include/private -I/Developer/Headers/FlatCarbon  prlink.c
> prlink.c:48:10: fatal error: 'CodeFragments.h' file not found
> #include <CodeFragments.h>
>  ^
> 1 error generated.
> make[6]: *** [prlink.o] Error 1
> make[5]: *** [export] Error 2
> make[4]: *** [export] Error 2
> make[3]: *** [export] Error 2
> make[2]: ***
> [/Users/Federico/riak/deps/erlang_js/c_src/system/lib/libnspr4.a] Error 2
> make[1]: *** [c_src] Error 2
> ERROR: Command [compile] failed!
> make: *** [compile] Error 1
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


different versions in upgrade

2013-08-13 Thread Louis-Philippe Perron
Hi Riak people,
I'm in the process of adding a new node to an aging (1-node) cluster.  I
would like to know what the preferred incremental upgrade path is to get
all my nodes on the latest Riak version.  The best scenario would also have
the least downtime.  The old node is at Riak version 1.2.1.

My current plan is:

- install riak 1.4.1 on the new node
- add the new 1.4.1 node to the old 1.2.1 cluster.
- bring the 1.2.1 node offline
- upgrade the 1.2.1 node to 1.4.1
- put the upgraded node back online

will this work?
thanks!
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OSX issue

2013-08-13 Thread Federico Mosca
I saw it, but how can I remove all the Erlang installations?
The ln -s workaround does not work for me.


2013/8/13 Bhuwan Chawla 

> Here's another thread with a similar issue:
>
>
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-August/009110.html
>
>
>
>
> On Tue, Aug 13, 2013 at 7:58 AM, Federico Mosca  wrote:
>
>> I have a problem to install riak, I am on a OSX of 2013
>>
>> git clone https://github.com/basho/riak
>> cd riak
>> make rel
>>
>> cd linking; make export
>> cc -o prlink.o -c -m32  -Wall -fno-common -pthread -O2 -fPIC  -UDEBUG
>>  -DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1 -DHAVE_SOCKLEN_T=1
>> -DXP_MACOSX=1 -DHAVE_LCHOWN=1 -DHAVE_STRERROR=1  -DFORCE_PR_LOG
>> -D_PR_PTHREADS -UHAVE_CVAR_BUILT_ON_SEM -D_NSPR_BUILD_
>> -I../../../dist/include/nspr -I../../../pr/include
>> -I../../../pr/include/private -I/Developer/Headers/FlatCarbon  prlink.c
>> prlink.c:48:10: fatal error: 'CodeFragments.h' file not found
>> #include <CodeFragments.h>
>>  ^
>> 1 error generated.
>> make[6]: *** [prlink.o] Error 1
>> make[5]: *** [export] Error 2
>> make[4]: *** [export] Error 2
>> make[3]: *** [export] Error 2
>> make[2]: ***
>> [/Users/Federico/riak/deps/erlang_js/c_src/system/lib/libnspr4.a] Error 2
>> make[1]: *** [c_src] Error 2
>> ERROR: Command [compile] failed!
>> make: *** [compile] Error 1
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: different versions in upgrade

2013-08-13 Thread Jeremiah Peschka
From http://docs.basho.com/riak/latest/ops/running/rolling-upgrades/ it looks 
like you should upgrade to 1.3.2 first and then to 1.4.1.

Depending on how badly you need the extra capacity, it would probably be better 
to start by upgrading all nodes and then adding the new one.

--
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop

On Aug 13, 2013, at 5:06 AM, Louis-Philippe Perron  wrote:

> Hi riak peoples,
> I'm in the process of adding a new node to a aging (1 node) cluster.  I would 
> like to know what would be the prefered incrementing upgrade to get all my 
> nodes on the latest riak version.  The best scenario would also have the 
> least downtime.  The old node is at riak version 1.2.1.
> 
> My actual plan is:
> 
> - install riak 1.4.1 on the new node
> - add the new 1.4.1 node to the old 1.2.1 cluster.
> - bring the 1.2.1 node offline
> - upgrade the 1.2.1 node to 1.4.1
> - put the upgraded node back online
> 
> will this work?
> thanks!
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: different versions in upgrade

2013-08-13 Thread Bhuwan Chawla
Having done a similar upgrade, a gotcha to keep in mind:


"Note for Secondary Index users
If you use Riak's Secondary Indexes and are upgrading from a version prior
to Riak version 1.3.1, you need to reformat the indexes using the
riak-admin reformat-indexes command"
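
For reference, the reformat step itself is a one-liner per node once the
upgraded node is running (the log path below assumes a packaged Linux install,
as mentioned later in this thread):

  riak-admin reformat-indexes
  tail -f /var/log/riak/console.log    # watch reformat progress and any errors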



On Tue, Aug 13, 2013 at 8:36 AM, Jeremiah Peschka <
jeremiah.pesc...@gmail.com> wrote:

> From http://docs.basho.com/riak/latest/ops/running/rolling-upgrades/ it
> looks like you should upgrade to 1.3.2 and then 1.4.1
>
> Depending on how badly you need the extra capacity, it would probably be
> better to start by upgrading all nodes and then adding the new one.
>
> --
> Jeremiah Peschka - Founder, Brent Ozar Unlimited
> MCITP: SQL Server 2008, MVP
> Cloudera Certified Developer for Apache Hadoop
>
> On Aug 13, 2013, at 5:06 AM, Louis-Philippe Perron 
> wrote:
>
> Hi riak peoples,
> I'm in the process of adding a new node to a aging (1 node) cluster.  I
> would like to know what would be the prefered incrementing upgrade to get
> all my nodes on the latest riak version.  The best scenario would also have
> the least downtime.  The old node is at riak version 1.2.1.
>
> My actual plan is:
>
> - install riak 1.4.1 on the new node
> - add the new 1.4.1 node to the old 1.2.1 cluster.
> - bring the 1.2.1 node offline
> - upgrade the 1.2.1 node to 1.4.1
> - put the upgraded node back online
>
> will this work?
> thanks!
>
>
>  ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: different versions in upgrade

2013-08-13 Thread Guido Medina
Same here, except that Riak 1.3.2 did that for me automatically. As 
Jeremiah mentioned, you should first go to 1.3.2 on all nodes. On each node, 
the first time Riak starts it will take some time upgrading the 2i 
index storage format; if you see any weirdness, execute 
"riak-admin reformat-indexes" as soon as you upgrade that node, one by one.


Before you even start, read the release notes for each major version 
in addition to the rolling-upgrade doc:


*Riak 1.3.2:* https://github.com/basho/riak/blob/1.3/RELEASE-NOTES.md
*Riak 1.4.1:* https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md

HTH,

Guido.

On 13/08/13 13:41, Bhuwan Chawla wrote:

Having done a similar upgrade, a gotcha to keep in mind:


"Note for Secondary Index users
If you use Riak's Secondary Indexes and are upgrading from a version 
prior to Riak version 1.3.1, you need to reformat the indexes using 
the riak-admin reformat-indexes command"




On Tue, Aug 13, 2013 at 8:36 AM, Jeremiah Peschka
<jeremiah.pesc...@gmail.com> wrote:


From
http://docs.basho.com/riak/latest/ops/running/rolling-upgrades/ it
looks like you should upgrade to 1.3.2 and then 1.4.1

Depending on how badly you need the extra capacity, it would
probably be better to start by upgrading all nodes and then adding
the new one.

-- 
Jeremiah Peschka - Founder, Brent Ozar Unlimited

MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop

On Aug 13, 2013, at 5:06 AM, Louis-Philippe Perron
<lpper...@gmail.com> wrote:


Hi riak peoples,
I'm in the process of adding a new node to a aging (1 node)
cluster.  I would like to know what would be the prefered
incrementing upgrade to get all my nodes on the latest riak
version.  The best scenario would also have the least downtime.
 The old node is at riak version 1.2.1.

My actual plan is:

- install riak 1.4.1 on the new node
- add the new 1.4.1 node to the old 1.2.1 cluster.
- bring the 1.2.1 node offline
- upgrade the 1.2.1 node to 1.4.1
- put the upgraded node back online

will this work?
thanks!


___
riak-users mailing list
riak-users@lists.basho.com 
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com 
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OSX issue

2013-08-13 Thread Todd Tyree
Hi Federico,

Are you building Riak for production or as a development/test environment?
 Can I ask why you aren't using the precompiled tarball? [0]

If you are looking for a quick dev/test environment, there is an OSX devrel
[1] launcher on github [2].  We use it all the time to get a devrel cluster
running quickly.

Best,
Todd

[0] http://docs.basho.com/riak/latest/ops/building/installing/mac-osx/
[1] http://docs.basho.com/riak/latest/quickstart/



On Tue, Aug 13, 2013 at 12:58 PM, Federico Mosca  wrote:

> I have a problem to install riak, I am on a OSX of 2013
>
> git clone https://github.com/basho/riak
> cd riak
> make rel
>
> cd linking; make export
> cc -o prlink.o -c -m32  -Wall -fno-common -pthread -O2 -fPIC  -UDEBUG
>  -DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1 -DHAVE_SOCKLEN_T=1
> -DXP_MACOSX=1 -DHAVE_LCHOWN=1 -DHAVE_STRERROR=1  -DFORCE_PR_LOG
> -D_PR_PTHREADS -UHAVE_CVAR_BUILT_ON_SEM -D_NSPR_BUILD_
> -I../../../dist/include/nspr -I../../../pr/include
> -I../../../pr/include/private -I/Developer/Headers/FlatCarbon  prlink.c
> prlink.c:48:10: fatal error: 'CodeFragments.h' file not found
> #include <CodeFragments.h>
>  ^
> 1 error generated.
> make[6]: *** [prlink.o] Error 1
> make[5]: *** [export] Error 2
> make[4]: *** [export] Error 2
> make[3]: *** [export] Error 2
> make[2]: ***
> [/Users/Federico/riak/deps/erlang_js/c_src/system/lib/libnspr4.a] Error 2
> make[1]: *** [c_src] Error 2
> ERROR: Command [compile] failed!
> make: *** [compile] Error 1
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
*Todd Tyree*
Client Services Engineer Basho 

mobile: +44(0)7861 220 182
web: www.basho.com
github: tatyree 
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OSX issue

2013-08-13 Thread Todd Tyree
Apologies, I forgot to send the URL to the devrel launcher repo.  Here it
is:

https://github.com/basho/riak-dev-cluster
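
For anyone following along, grabbing the launcher is just a clone (the launch
steps themselves are documented in that repo's README):

  git clone https://github.com/basho/riak-dev-cluster
  cd riak-dev-cluster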


On Tue, Aug 13, 2013 at 1:51 PM, Todd Tyree  wrote:

> Hi Federico,
>
> Are you building Riak for production or as a development/test environment?
>  Can I ask why you aren't using the precompiled tarball? [0]
>
> If you are looking for a quick dev/test environment, there is an OSX
> devrel [1] launcher on github [2].  We use it all the time to get a devrel
> cluster running quickly.
>
> Best,
> Todd
>
> [0] http://docs.basho.com/riak/latest/ops/building/installing/mac-osx/
> [1] http://docs.basho.com/riak/latest/quickstart/
>
>
>
> On Tue, Aug 13, 2013 at 12:58 PM, Federico Mosca  wrote:
>
>> I have a problem to install riak, I am on a OSX of 2013
>>
>> git clone https://github.com/basho/riak
>> cd riak
>> make rel
>>
>> cd linking; make export
>> cc -o prlink.o -c -m32  -Wall -fno-common -pthread -O2 -fPIC  -UDEBUG
>>  -DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1 -DHAVE_SOCKLEN_T=1
>> -DXP_MACOSX=1 -DHAVE_LCHOWN=1 -DHAVE_STRERROR=1  -DFORCE_PR_LOG
>> -D_PR_PTHREADS -UHAVE_CVAR_BUILT_ON_SEM -D_NSPR_BUILD_
>> -I../../../dist/include/nspr -I../../../pr/include
>> -I../../../pr/include/private -I/Developer/Headers/FlatCarbon  prlink.c
>> prlink.c:48:10: fatal error: 'CodeFragments.h' file not found
>> #include <CodeFragments.h>
>>  ^
>> 1 error generated.
>> make[6]: *** [prlink.o] Error 1
>> make[5]: *** [export] Error 2
>> make[4]: *** [export] Error 2
>> make[3]: *** [export] Error 2
>> make[2]: ***
>> [/Users/Federico/riak/deps/erlang_js/c_src/system/lib/libnspr4.a] Error 2
>> make[1]: *** [c_src] Error 2
>> ERROR: Command [compile] failed!
>> make: *** [compile] Error 1
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
>
> --
> *Todd Tyree*
> Client Services Engineer Basho 
>
> mobile: +44(0)7861 220 182
> web: www.basho.com
> github: tatyree 
>



-- 
*Todd Tyree*
Client Services Engineer Basho 

mobile: +44(0)7861 220 182
web: www.basho.com
github: tatyree 
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Changing the filesystem

2013-08-13 Thread dilip kumar
Hi,

How do I change the filesystem where the Riak CS buckets are stored? Changing 
the data_root values in the storage backend is not working for me, even though it is 
described in a FAQ 
(http://docs.basho.com/riakcs/latest/cookbooks/faqs/riak-cs/#is-it-possible-to-specify-a-file-system-where-my-r).

When I change the data_root values shown below in the storage backend section, 
"stanchion start" and "riak-cs start" no longer work.



{add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]},
{storage_backend, riak_cs_kv_multi_backend},
{multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
{multi_backend_default, be_default},
{multi_backend, [
    {be_default, riak_kv_eleveldb_backend, [
        {max_open_files, 50},
        {data_root, "/var/lib/riak/leveldb"}
    ]},
    {be_blocks, riak_kv_bitcask_backend, [
        {data_root, "/var/lib/riak/bitcask"}
    ]}
]},

Regards,
Dilip___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


[ANN] Riak CS 1.4.0 is Now Official

2013-08-13 Thread Seth Thomas
Hi All,

On behalf of Basho, I'm excited to announce that Riak CS 1.4.0 is now official. 
Riak CS is Basho's open source cloud storage software. The biggest feature 
additions are support for the Swift API and Keystone authentication, which 
enables CS to be a drop-in storage replacement for OpenStack deployments. We 
also fixed numerous bugs, including query string authentication for multi-part 
uploads and better connection handling of transient Riak failures.

* The full release notes can be found here: http://git.io/xlL8uw

The Riak CS docs have been updated accordingly:

* The 1.4.0 packages are here: http://docs.basho.com/riakcs/latest/riakcs-downloads/
* Full documentation is here: http://docs.basho.com/riakcs/latest/
* Rolling upgrade docs are here: http://docs.basho.com/riakcs/latest/cookbooks/Rolling-Upgrades-For-Riak-CS/

In addition to the release notes, we have a full overview of the release on the 
Basho Blog. We're also doing a webcast on August 23rd for anyone who is 
interested in attending.

* Blog post: http://basho.com/riak-cs-1-4-is-now-available/
* Webcast details and registration: http://info.basho.com/RiakCS1.4_Aug23.html

Let us know if you have any questions, comments, or issues. Thanks to everyone 
who made this release possible with their usage, bug reports, and contributions. 
As usual, we couldn't have shipped this without you.

And be sure to grab your ticket for RICON West - ricon.io/west.html - to hack on 
Riak and Riak CS alongside the Basho team and community this October in San 
Francisco.

Best,
Seth and the Basho Team
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: different versions in upgrade

2013-08-13 Thread Guido Medina
Also, in theory, if you have at least 5 nodes in the cluster, one node 
down at a time doesn't stop your cluster from working properly.


You could do the following node by node, which I have done several times 
(a command-level sketch follows the list):

1. Stop Riak on the upgrading node and, on another node, mark the
   upgrading node as down (riak-admin down riak@upgrading-node)
2. Upgrade Riak on that node to version 1.3.2, start it up, and wait
   until it is completely operational (riak_kv is up and all transfers are
   finished; check by typing "riak-admin transfers")
3. Once the 1.3.2 node is back up, type "riak-admin
   reformat-indexes" and tail -f /var/log/riak/console.log, which should
   finish really fast if there isn't anything to fix.
4. Do 1 to 3 per node.
5. Do 1 and 2 again, but for Riak 1.4.1.
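
For reference, a minimal command-level sketch of one node's pass through steps
1-3; the package file name and node name below are placeholders, so adjust them
for your platform and release:

  # on another, healthy node:
  riak-admin down riak@upgrading-node

  # on the node being upgraded:
  riak stop
  sudo dpkg -i riak_1.3.2-1_amd64.deb    # placeholder package name; use the matching RPM on RHEL/CentOS
  riak start
  riak-admin transfers                   # repeat until all transfers are finished
  riak-admin reformat-indexes            # 1.3.2 pass only
  tail -f /var/log/riak/console.log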


HTH,

Guido.

On 13/08/13 13:50, Guido Medina wrote:
Same here, except that Riak 1.3.2 did that for me automatically. As 
Jeremiah mentioned, you should go first to 1.3.2 on all nodes, per 
node the first time Riak starts it will take some time upgrading the 
2i indexes storage format, if you see any weirdness then execute 
"riak-admin reformat-indexes" as soon as you upgrade a node, 1 by 1.


Before you even start read the release notes for each major version 
besides the rolling upgrade doc:


*Riak 1.3.2:* https://github.com/basho/riak/blob/1.3/RELEASE-NOTES.md
*Riak 1.4.1:* https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md

HTH,

Guido.

On 13/08/13 13:41, Bhuwan Chawla wrote:

Having done a similar upgrade, a gotcha to keep in mind:


"Note for Secondary Index users
If you use Riak's Secondary Indexes and are upgrading from a version 
prior to Riak version 1.3.1, you need to reformat the indexes using 
the riak-admin reformat-indexes command"




On Tue, Aug 13, 2013 at 8:36 AM, Jeremiah Peschka
<jeremiah.pesc...@gmail.com> wrote:


From
http://docs.basho.com/riak/latest/ops/running/rolling-upgrades/ it looks
like you should upgrade to 1.3.2 and then 1.4.1

Depending on how badly you need the extra capacity, it would
probably be better to start by upgrading all nodes and then
adding the new one.

-- 
Jeremiah Peschka - Founder, Brent Ozar Unlimited

MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop

On Aug 13, 2013, at 5:06 AM, Louis-Philippe Perron
<lpper...@gmail.com> wrote:


Hi riak peoples,
I'm in the process of adding a new node to a aging (1 node)
cluster.  I would like to know what would be the prefered
incrementing upgrade to get all my nodes on the latest riak
version.  The best scenario would also have the least downtime.
 The old node is at riak version 1.2.1.

My actual plan is:

- install riak 1.4.1 on the new node
- add the new 1.4.1 node to the old 1.2.1 cluster.
- bring the 1.2.1 node offline
- upgrade the 1.2.1 node to 1.4.1
- put the upgraded node back online

will this work?
thanks!


___
riak-users mailing list
riak-users@lists.basho.com 
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com 
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OSX issue

2013-08-13 Thread Federico Mosca
I also followed [1];
anyway, I had two versions of Erlang installed.


2013/8/13 Todd Tyree 

> Apologies, I forgot to send the URL to the devrel launcher repo.  Here it
> is:
>
> https://github.com/basho/riak-dev-cluster
>
>
> On Tue, Aug 13, 2013 at 1:51 PM, Todd Tyree  wrote:
>
>> Hi Federico,
>>
>> Are you building Riak for production or as a development/test
>> environment?  Can I ask why you aren't using the precompiled tarball? [0]
>>
>> If you are looking for a quick dev/test environment, there is an OSX
>> devrel [1] launcher on github [2].  We use it all the time to get a devrel
>> cluster running quickly.
>>
>> Best,
>> Todd
>>
>> [0] http://docs.basho.com/riak/latest/ops/building/installing/mac-osx/
>> [1] http://docs.basho.com/riak/latest/quickstart/
>>
>>
>>
>> On Tue, Aug 13, 2013 at 12:58 PM, Federico Mosca wrote:
>>
>>> I have a problem to install riak, I am on a OSX of 2013
>>>
>>> git clone https://github.com/basho/riak
>>> cd riak
>>> make rel
>>>
>>> cd linking; make export
>>> cc -o prlink.o -c -m32  -Wall -fno-common -pthread -O2 -fPIC
>>>  -UDEBUG  -DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1
>>> -DHAVE_SOCKLEN_T=1 -DXP_MACOSX=1 -DHAVE_LCHOWN=1 -DHAVE_STRERROR=1
>>>  -DFORCE_PR_LOG -D_PR_PTHREADS -UHAVE_CVAR_BUILT_ON_SEM -D_NSPR_BUILD_
>>> -I../../../dist/include/nspr -I../../../pr/include
>>> -I../../../pr/include/private -I/Developer/Headers/FlatCarbon  prlink.c
>>> prlink.c:48:10: fatal error: 'CodeFragments.h' file not found
>>> #include <CodeFragments.h>
>>>  ^
>>> 1 error generated.
>>> make[6]: *** [prlink.o] Error 1
>>> make[5]: *** [export] Error 2
>>> make[4]: *** [export] Error 2
>>> make[3]: *** [export] Error 2
>>> make[2]: ***
>>> [/Users/Federico/riak/deps/erlang_js/c_src/system/lib/libnspr4.a] Error 2
>>> make[1]: *** [c_src] Error 2
>>> ERROR: Command [compile] failed!
>>> make: *** [compile] Error 1
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
>>
>> --
>> *Todd Tyree*
>> Client Services Engineer Basho 
>>
>> mobile: +44(0)7861 220 182
>> web: www.basho.com
>> github: tatyree 
>>
>
>
>
> --
> *Todd Tyree*
> Client Services Engineer Basho 
>
> mobile: +44(0)7861 220 182
> web: www.basho.com
> github: tatyree 
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: different versions in upgrade

2013-08-13 Thread Charlie Voiselle
Louis-Philippe et al:

You can follow the rolling upgrade procedure to upgrade a node from 1.2 to 
1.4.x directly.  The note in the instructions only concerns upgrading from 1.0 
to 1.4.  

No need to stop at 1.3.2.

Thanks,
Charlie Voiselle

On Aug 13, 2013, at 9:23 AM, Guido Medina  wrote:

> Also, in theory if you have at least 5 nodes in the cluster one node down at 
> a time doesn't stop your cluster from working properly.
> 
> You could do the following node by node which I have done several times:
> Stop Riak on the upgrading node and in another node mark the upgrading node 
> as down (riak-admin down riak@upgrading-node)
> Upgrade Riak on that node to version 1.3.2, start it up and wait till it is 
> completely operative (riak_kv is up and all transfers are finished, check by 
> typing "riak-admin transfers")
> For Riak 1.3.2 once the step is over, type "riak-admin reformat-indexes" and 
> tail -f /var/log/riak/console.log which should be done really fast if there 
> isn't anything to fix.
> Do 1 to 3 per node.
> Do 1 and 2 but for for Riak 1.4.1.
> 
> HTH,
> 
> Guido.
> 
> On 13/08/13 13:50, Guido Medina wrote:
>> Same here, except that Riak 1.3.2 did that for me automatically. As Jeremiah 
>> mentioned, you should go first to 1.3.2 on all nodes, per node the first 
>> time Riak starts it will take some time upgrading the 2i indexes storage 
>> format, if you see any weirdness then execute "riak-admin reformat-indexes" 
>> as soon as you upgrade a node, 1 by 1.
>> 
>> Before you even start read the release notes for each major version besides 
>> the rolling upgrade doc:
>> 
>> Riak 1.3.2: https://github.com/basho/riak/blob/1.3/RELEASE-NOTES.md
>> Riak 1.4.1: https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
>> 
>> HTH,
>> 
>> Guido.
>> 
>> On 13/08/13 13:41, Bhuwan Chawla wrote:
>>> Having done a similar upgrade, a gotcha to keep in mind: 
>>> 
>>> 
>>> "Note for Secondary Index users
>>> If you use Riak's Secondary Indexes and are upgrading from a version prior 
>>> to Riak version 1.3.1, you need to reformat the indexes using the 
>>> riak-admin reformat-indexes command"
>>> 
>>> 
>>> 
>>> On Tue, Aug 13, 2013 at 8:36 AM, Jeremiah Peschka 
>>>  wrote:
>>> From http://docs.basho.com/riak/latest/ops/running/rolling-upgrades/ it 
>>> looks like you should upgrade to 1.3.2 and then 1.4.1
>>> 
>>> Depending on how badly you need the extra capacity, it would probably be 
>>> better to start by upgrading all nodes and then adding the new one.
>>> 
>>> --
>>> Jeremiah Peschka - Founder, Brent Ozar Unlimited
>>> MCITP: SQL Server 2008, MVP
>>> Cloudera Certified Developer for Apache Hadoop
>>> 
>>> On Aug 13, 2013, at 5:06 AM, Louis-Philippe Perron  
>>> wrote:
>>> 
 Hi riak peoples,
 I'm in the process of adding a new node to a aging (1 node) cluster.  I 
 would like to know what would be the prefered incrementing upgrade to get 
 all my nodes on the latest riak version.  The best scenario would also 
 have the least downtime.  The old node is at riak version 1.2.1.
 
 My actual plan is:
 
 - install riak 1.4.1 on the new node
 - add the new 1.4.1 node to the old 1.2.1 cluster.
 - bring the 1.2.1 node offline
 - upgrade the 1.2.1 node to 1.4.1
 - put the upgraded node back online
 
 will this work?
 thanks!
 
 
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> 
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>> 
>>> 
>>> 
>>> 
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Changing the filesystem

2013-08-13 Thread Hector Castro
Hi Dilip,

Are you making these changes to Riak's app.config?

If the `riak-cs start` command isn't working, that's generally an
indicator that Riak is not running. What happens when you execute
`riak ping`?
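
For example, a quick way to check (the console.log path assumes a packaged
Linux install):

  riak ping
  riak console                           # start in the foreground and watch for startup errors
  tail -n 50 /var/log/riak/console.log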

--
Hector


On Tue, Aug 13, 2013 at 9:20 AM, dilip kumar  wrote:
> Hi,
>
> How do I change the filesystem where the RIAK CS buckets could run. Changing
> the data_root values in storage_backend is not working as it is specified in
> a FAQ
> (http://docs.basho.com/riakcs/latest/cookbooks/faqs/riak-cs/#is-it-possible-to-specify-a-file-system-where-my-r).
>
> When I change the below specified data_root values in Storage Backend, the
> "stanchion start" and "riak-cs start" are not working.
>
>
>
> {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]},
> {storage_backend, riak_cs_kv_multi_backend},
> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
> {multi_backend_default, be_default},
> {multi_backend, [
> {be_default, riak_kv_eleveldb_backend, [
> {max_open_files, 50},
> {data_root, "/var/lib/riak/leveldb"}
> ]},
> {be_blocks, riak_kv_bitcask_backend, [
> {data_root, "/var/lib/riak/bitcask"}
> ]}
> ]},
>
> Regards,
> Dilip
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak compile error: BOOST_DISABLE_THREADS

2013-08-13 Thread handler
Hi!
OS - Debian 6 2.6.32-5-amd64
gcc - version 4.7.2 (GCC)
boost version 1.51

make
...
error: #error "Threading support unavaliable: it has been explicitly
disabled with BOOST_DISABLE_THREADS"
...

How to solve that error?

Thanks



--
View this message in context: 
http://riak-users.197444.n3.nabble.com/Riak-compile-error-BOOST-DISABLE-THREADS-tp4028798.html
Sent from the Riak Users mailing list archive at Nabble.com.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak 2 Node cluster

2013-08-13 Thread Баканов Дмитрий
Hello,

I need to decide what database we will choose for our project. Certainly, we 
need only 2 physical nodes (active-standby). Riak is good for us, because it is 
Erlang-based, like our project. But it's known that a Riak cluster should have at 
least five nodes. I have some problems with my cluster, the same as in this e-mail: 
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-March/007988.html
Could you clearly explain, please: is it possible to tune Riak to work 
correctly with only 2 physical nodes, or do we need to choose another DBMS?

Dmitry

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Changing the filesystem

2013-08-13 Thread dilip kumar
Hi Hector,

This is what happens after changing the directories in the riak_kv section of 
/etc/riak/app.config:

# riak restart

  ok
# stanchion restart
  ok
# riak-cs start
   riak-cs failed to start within 15 seconds,

   see the output of 'riak-cs console' for more information.
   If you want to wait longer, set the environment variable
   WAIT_FOR_ERLANG to the number of seconds to wait.
# riak ping

    Node 'riak@machine105' not responding to pings.

Below is the output of `riak-cs console`

Eshell V5.9.1  (abort with ^G)
(riak-cs@machine105)1> 12:24:36.456 [error] CRASH REPORT Process <0.130.0> with 
0 neighbours exited with reason: {tcp,econnrefused} in gen_server:init_it/6 
line 320
/usr/lib64/riak-cs/lib/os_mon-2.2.9/priv/bin/memsup: Erlang has closed.
Erlang has closed
{"Kernel pid terminated",application_controller,"{application_start_failure,riak_cs,{shutdown,{riak_cs_app,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,riak_cs,{shutdown,{riak_cs_app,start,[normal,[]]}}})


Regards,
Dilip



 From: Hector Castro 
To: dilip kumar  
Cc: "riak-users@lists.basho.com"  
Sent: Tuesday, 13 August 2013 7:49 PM
Subject: Re: Changing the filesystem
 

Hi Dilip,

Are you making these changes to Riak's app.config?

If the `riak-cs start` command isn't working, that's generally an
indicator that Riak is not running. What happens when you execute
`riak ping`?

--
Hector


On Tue, Aug 13, 2013 at 9:20 AM, dilip kumar  wrote:
> Hi,
>
> How do I change the filesystem where the RIAK CS buckets could run. Changing
> the data_root values in storage_backend is not working as it is specified in
> a FAQ
> (http://docs.basho.com/riakcs/latest/cookbooks/faqs/riak-cs/#is-it-possible-to-specify-a-file-system-where-my-r).
>
> When I change the below specified data_root values in Storage Backend, the
> "stanchion start" and "riak-cs start" are not working.
>
>
>
> {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.3.1/ebin"]},
> {storage_backend, riak_cs_kv_multi_backend},
> {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
> {multi_backend_default, be_default},
> {multi_backend, [
>     {be_default, riak_kv_eleveldb_backend, [
>         {max_open_files, 50},
>         {data_root, "/var/lib/riak/leveldb"}
>     ]},
>     {be_blocks, riak_kv_bitcask_backend, [
>         {data_root, "/var/lib/riak/bitcask"}
>     ]}
> ]},
>
> Regards,
> Dilip
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Changing the filesystem

2013-08-13 Thread John White
Dilip,

Can you restart Riak with a riak stop then riak start?  If this fails a riak
ping, can you please attach a riak console output.

-- John White

On August 13, 2013 at 7:25:51 PM, dilip kumar (dilip_nuta...@yahoo.co.in) wrote:

> Hi Hector,
> This is what happens after changing the directories in the riak_kv section
> of /etc/riak/app.config: [...]
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


LevelDB performance (block size question)

2013-08-13 Thread István
Hi guys,

I am setting up a new Riak cluster and I was wondering if there is any
drawback to increasing the LevelDB block size from 4K to 64K. The reason is
that all of our values are way bigger than 4K, and I guess from a
performance point of view it would make sense to increase the block size.
The tests are still running to confirm this theory, but I wanted to make sure
there is no big red flag against doing that from the Riak side. I found the
following discussion about changing block size:

https://groups.google.com/forum/#!msg/leveldb/2JJ4smpSC6Q/1Z7aDSeHiRkJ

Is it a good idea to experiment with this in Riak to achieve better
performance?

Thank you in advance,
Istvan


-- 
the sun shines for all
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: LevelDB performance (block size question)

2013-08-13 Thread Matthew Von-Maszewski
Istvan,

"block_size" is not a "size", it is a threshold.  Data is never split across 
blocks.  A single block contains one or more key/value pairs.  leveldb starts a 
new block only when the total size of all key/values in the current block 
exceed the threshold.  

You must set block_size to a multiple of your typical key/value size if you 
want multiple pairs per block.

Plus side:  block_size is computed before compression.  So, you might get nice 
reduction in total disk size by having multiple, mutually compressible items in 
a block.  leveldb iterators / Riak 2i might give you slightly better 
performance with bigger blocks because there are fewer reads if the keys needed 
are in the same block (or fewer blocks).

Negative side:  the entire block, not single key/value pairs, goes into the block 
cache uncompressed (cache_size).  You can quickly overwhelm the block cache 
with lots of large blocks.  Also, random reads / Gets have to read, decompress, 
and CRC-check the entire block.  Therefore it costs you more disk transfer and 
decompression/CRC CPU time to read random values from bigger blocks.


I suggest you experiment with your dataset and usage patterns.  Be sure to 
build big sample datasets before starting to measure and/or restart Riak 
between building and measuring.  These are ways to make sure you see the impact 
of random reads.

Matthew


On Aug 13, 2013, at 2:51 PM, István  wrote:

> Hi guys,
> 
> I am setting up a new Riak cluster and I was wondering if there is any 
> drawback of increasing the LevelDB blocksize from 4K to 64K. The reason is 
> that we have all of the values way bigger than 4K and I guess from the 
> performance point of view it would make sense to increase the block size. The 
> tests are still running to confirm this theory but I wanted to clarify that 
> there is no big red flag of doing that from the Riak side. I found the 
> following discussion about changing block size:
> 
> https://groups.google.com/forum/#!msg/leveldb/2JJ4smpSC6Q/1Z7aDSeHiRkJ
> 
> Is that a good idea to experiment with this in Riak to achieve better 
> performance?
> 
> Thank you in advance,
> Istvan
> 
> 
> -- 
> the sun shines for all
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Practical Riak cluster choices in AWS (number of nodes? AZ's?)

2013-08-13 Thread Dave Martorana
An interesting hybrid that I'm coming around to seems to be using a Unix
release - OmniOS has an AMI, for instance - and ZFS. With a large-enough
store, I can run without EBS on my nodes, and have a single ZFS backup
instance with a huge amount of slow-EBS storage for accepting ZFS snapshots.

I'm still learning all the pieces, but luckily I have a company upstairs
from me that does a very similar thing with > 300TB and is willing to help
me set up my ZFS backup infrastructure.

Dave


On Mon, Aug 12, 2013 at 10:00 PM, Brady Wetherington
wrote:

> I will probably stick with EBS-store for now. I don't know how comfortable
> I can get with a replica that could disappear with simply an unintended
> reboot (one of my nodes just did that randomly today, for example). Sure, I
> would immediately start rebuilding it as soon as that were to happen, but
> we could be talking a pretty huge chunk of data that would have to get
> rebuilt out of the cluster. And that sounds scary. Even though, logically,
> I understand that it should not be.
>
> I will get there; I'm just a little cautious. As I learn Riak better and
> get more comfortable with it, maybe I would be able to start to move in a
> direction like that. And certainly as the performance characteristics of
> EBS-volumes start to bite me in the butt; that might force me to get
> comfortable with instance-store real quick. I would at least hope to be
> serving a decent-sized chunk of my data from memory, however.
>
> As for throwing my instances in one AZ - I don't feel comfortable with
> that either. I'll try out the way I'm saying and will report back - do I
> end up with crazy latencies all over the map, or does it seem to "just
> work?" We'll see.
>
> In the meantime, I still feel funny about "breaking the rules" on the
> 5-node cluster policy. Given my other choices as having been kinda
> nailed-down for now, what do you guys think of that?
>
> E.g. - should I take the risk of putting a 5th instance up in the same AZ
> as one of the others, or should I just "be ok" with having 4? Or should I
> do something weird like changing my 'n' value to be one fewer or something
> like that? (I think, as I understand it so far, I'm really liking "n=3,
> w=2, r=2" - but I could change it if it made more sense with the topology
> I've selected.)
>
> -B.
>
>
> Date: Sun, 11 Aug 2013 18:57:11 -0600
>> From: Jared Morrow 
>> To: Jeremiah Peschka 
>> Cc: riak-users 
>> Subject: Re: Practical Riak cluster choices in AWS (number of nodes?
>> AZ's?)
>> Message-ID:
>> <
>> cacusovelpu8yfcivykexm9ztkhq-kdnowk1afvpflcsip2h...@mail.gmail.com>
>> Content-Type: text/plain; charset="iso-8859-1"
>>
>>
>> +1 to what Jeremiah said, putting a 4 or 5 node cluster in each US West
>> and
>> US East using MDC between them would be the optimum solution.  I'm also
>> not
>> buying consistent latencies between AZ's, but I've also not tested it
>> personally in a production environment.  We have many riak-users members
>> on
>> AWS, so hopefully more experienced people will chime in.
>>
>> If you haven't seen them already, here's what I have in my "Riak on AWS"
>> bookmark folder:
>>
>> http://media.amazonwebservices.com/AWS_NoSQL_Riak.pdf
>> http://docs.basho.com/riak/latest/ops/tuning/aws/
>> http://basho.com/riak-on-aws-deployment-options/
>>
>> -Jared
>>
>>
>>
>>
>> On Sun, Aug 11, 2013 at 6:11 PM, Jeremiah Peschka <
>> jeremiah.pesc...@gmail.com> wrote:
>>
>> > I'd be wary of using EBS backed nodes for Riak - with only a single
>> > ethernet connection, it wil be very easy to saturate the max of 1000mbps
>> > available in a single AWS NIC (unless you're using cluster compute
>> > instances). I'd be more worried about temporarily losing contact with a
>> > node through network saturation than through AZ failure, truthfully.
>> >
>> > The beauty of Riak is that a node can drop and you can replace it with
>> > minimal fuss. Use that to your advantage and make every node in the
>> cluster
>> > disposable.
>> >
>> > As far as doubling up in one AZ goes - if you're worried about AZ
>> failure,
>> > you should treat each AZ as a separate data center and design your
>> failure
>> > scenarios accordingly. Yes, Amazon say you should put one Riak node in
>> each
>> > AZ; I'm not buying that. With no guarantee around latency, and no
>> control
>> > around between DCs, you need to be very careful how much of that latency
>> > you're willing to introduce into your application.
>> >
>> > Were I in your position, I'd stand up a 5 node cluster in US-WEST-2 and
>> be
>> > done with it. I'd consider Riak EE for my HA/DR solution once the
>> business
>> > decides that off-site HA/DR is something it wants/needs.
>> >
>> >
>> > ---
>> > Jeremiah Peschka - Founder, Brent Ozar Unlimited
>> > MCITP: SQL Server 2008, MVP
>> > Cloudera Certified Developer for Apache Hadoop
>> >
>> >
>> > On Sun, Aug 11, 2013 at 1:52 PM, Brady Wetherington <
>> br...@bespincorp.com>wrote:
>> >
>> >> Hi all -
>> >>
>> >>

Re: Practical Riak cluster choices in AWS (number of nodes? AZ's?)

2013-08-13 Thread Brady Wetherington
That *does* sound like an interesting way to do it. Kinda
best-of-both-worlds, depending on your backup schemes and whatnot. I'm
definitely curious to hear about how it works out for you.

-B.


On Tue, Aug 13, 2013 at 4:03 PM, Dave Martorana  wrote:

> An interesting hybrid that I'm coming around to seems to be using a Unix
> release - OmniOS has an AMI, for instance - and ZFS. With a large-enough
> store, I can run without EBS on my nodes, and have a single ZFS backup
> instance with a huge amount of slow-EBS storage for accepting ZFS snapshots.
>
> I'm still learning all the pieces, but luckily I have a company upstairs
> from me that does a very similar thing with > 300TB and is willing to help
> me set up my ZFS backup infrastructure.
>
> Dave
>
>
> On Mon, Aug 12, 2013 at 10:00 PM, Brady Wetherington  > wrote:
>
>> I will probably stick with EBS-store for now. I don't know how
>> comfortable I can get with a replica that could disappear with simply an
>> unintended reboot (one of my nodes just did that randomly today, for
>> example). Sure, I would immediately start rebuilding it as soon as that
>> were to happen, but we could be talking a pretty huge chunk of data that
>> would have to get rebuilt out of the cluster. And that sounds scary. Even
>> though, logically, I understand that it should not be.
>>
>> I will get there; I'm just a little cautious. As I learn Riak better and
>> get more comfortable with it, maybe I would be able to start to move in a
>> direction like that. And certainly as the performance characteristics of
>> EBS-volumes start to bite me in the butt; that might force me to get
>> comfortable with instance-store real quick. I would at least hope to be
>> serving a decent-sized chunk of my data from memory, however.
>>
>> As for throwing my instances in one AZ - I don't feel comfortable with
>> that either. I'll try out the way I'm saying and will report back - do I
>> end up with crazy latencies all over the map, or does it seem to "just
>> work?" We'll see.
>>
>> In the meantime, I still feel funny about "breaking the rules" on the
>> 5-node cluster policy. Given my other choices as having been kinda
>> nailed-down for now, what do you guys think of that?
>>
>> E.g. - should I take the risk of putting a 5th instance up in the same AZ
>> as one of the others, or should I just "be ok" with having 4? Or should I
>> do something weird like changing my 'n' value to be one fewer or something
>> like that? (I think, as I understand it so far, I'm really liking "n=3,
>> w=2, r=2" - but I could change it if it made more sense with the topology
>> I've selected.)
>>
>> -B.
>>
>>
>> Date: Sun, 11 Aug 2013 18:57:11 -0600
>>> From: Jared Morrow 
>>> To: Jeremiah Peschka 
>>> Cc: riak-users 
>>> Subject: Re: Practical Riak cluster choices in AWS (number of nodes?
>>> AZ's?)
>>> Message-ID:
>>> <
>>> cacusovelpu8yfcivykexm9ztkhq-kdnowk1afvpflcsip2h...@mail.gmail.com>
>>> Content-Type: text/plain; charset="iso-8859-1"
>>>
>>>
>>> +1 to what Jeremiah said, putting a 4 or 5 node cluster in each US West
>>> and
>>> US East using MDC between them would be the optimum solution.  I'm also
>>> not
>>> buying consistent latencies between AZ's, but I've also not tested it
>>> personally in a production environment.  We have many riak-users members
>>> on
>>> AWS, so hopefully more experienced people will chime in.
>>>
>>> If you haven't seen them already, here's what I have in my "Riak on AWS"
>>> bookmark folder:
>>>
>>> http://media.amazonwebservices.com/AWS_NoSQL_Riak.pdf
>>> http://docs.basho.com/riak/latest/ops/tuning/aws/
>>> http://basho.com/riak-on-aws-deployment-options/
>>>
>>> -Jared
>>>
>>>
>>>
>>>
>>> On Sun, Aug 11, 2013 at 6:11 PM, Jeremiah Peschka <
>>> jeremiah.pesc...@gmail.com> wrote:
>>>
>>> > I'd be wary of using EBS backed nodes for Riak - with only a single
>>> > ethernet connection, it wil be very easy to saturate the max of
>>> 1000mbps
>>> > available in a single AWS NIC (unless you're using cluster compute
>>> > instances). I'd be more worried about temporarily losing contact with a
>>> > node through network saturation than through AZ failure, truthfully.
>>> >
>>> > The beauty of Riak is that a node can drop and you can replace it with
>>> > minimal fuss. Use that to your advantage and make every node in the
>>> cluster
>>> > disposable.
>>> >
>>> > As far as doubling up in one AZ goes - if you're worried about AZ
>>> failure,
>>> > you should treat each AZ as a separate data center and design your
>>> failure
>>> > scenarios accordingly. Yes, Amazon say you should put one Riak node in
>>> each
>>> > AZ; I'm not buying that. With no guarantee around latency, and no
>>> control
>>> > around between DCs, you need to be very careful how much of that
>>> latency
>>> > you're willing to introduce into your application.
>>> >
>>> > Were I in your position, I'd stand up a 5 node cluster in US-WEST-2
>>> and be
>>> > done with it. I'd consider Riak EE for my HA/DR 

Re: LevelDB performance (block size question)

2013-08-13 Thread István
Hi Matthew,

Thank you for the explanation.

I am experimenting with different block size and making sure I have at
least 100G data  on disk for the tests.

I.


On Tue, Aug 13, 2013 at 12:11 PM, Matthew Von-Maszewski
wrote:

> Istvan,
>
> "block_size" is not a "size", it is a threshold.  Data is never split
> across blocks.  A single block contains one or more key/value pairs.
>  leveldb starts a new block only when the total size of all key/values in
> the current block exceed the threshold.
>
> Your must set block_size to a multiple of your typical key/value size if
> you desire multiple per block.
>
> Plus side:  block_size is computed before compression.  So, you might get
> nice reduction in total disk size by having multiple, mutually compressible
> items in a block.  leveldb iterators / Riak 2i might give you slightly
> better performance with bigger blocks because there are fewer reads if the
> keys needed are in the same block (or fewer blocks).
>
> Negative side:  the entire block, not single key/value pairs, go into the
> block cache uncompressed (cache_size).  You can quickly overwhelm the block
> cache with lots of large blocks.  Also random reads / Gets have to read,
> decompress, and CRC check the entire block.  Therefore it costs you more
> disk transfer and decompression/CRC CPU time to read random values from
> bigger blocks.
>
>
> I suggest you experiment with your dataset and usage patterns.  Be sure to
> build big sample datasets before starting to measure and/or restart Riak
> between building and measuring.  These are ways to make sure you see the
> impact of random reads.
>
> Matthew
>
>
> On Aug 13, 2013, at 2:51 PM, István  wrote:
>
> Hi guys,
>
> I am setting up a new Riak cluster and I was wondering if there is any
> drawback of increasing the LevelDB blocksize from 4K to 64K. The reason is
> that we have all of the values way bigger than 4K and I guess from the
> performance point of view it would make sense to increase the block size.
> The tests are still running to confirm this theory but I wanted to clarify
> that there is no big red flag of doing that from the Riak side. I found the
> following discussion about changing block size:
>
> https://groups.google.com/forum/#!msg/leveldb/2JJ4smpSC6Q/1Z7aDSeHiRkJ
>
> Is that a good idea to experiment with this in Riak to achieve better
> performance?
>
> Thank you in advance,
> Istvan
>
>
> --
> the sun shines for all
>
>
>  ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>


-- 
the sun shines for all
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Practical Riak cluster choices in AWS (number of nodes? AZ's?)

2013-08-13 Thread John Eikenberry
Brady Wetherington wrote:

> First off - I know 5 instances is the "magic number" of instances to have.
> If I understand the thinking here, it's that at the default redundancy
> level ('n'?) of 3, it is most likely to start getting me some scaling
> (e.g., performance > just that of a single node), and yet also have
> redundancy; whereby I can lose one box and not start to take a performance
> hit.

With n=3 wouldn't you just need to avoid having more than 2 (of 5) nodes in the
same zone? With 5 nodes you shouldn't have to worry about replicas being on the
same node, so if you only have 2 nodes in 1 zone you wouldn't lose data if you
lost a zone.

The only place I see there being a problem is in regions with only 2 zones or
when you need to expand beyond the 2/zone number. Then you just have to do
backups and accept that you will suffer an outage if you lose a zone.

The cure for all this is for Riak to gain so-called "rack awareness", so you can
configure it to make sure that data is replicated across multiple zones. This
is supposed to be coming at some point [1].

[1] https://github.com/basho/riak/issues/308
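
As an aside on the n_val/r/w values discussed in this thread: they are ordinary
bucket properties and can be set over Riak's HTTP API. A minimal sketch,
assuming the default HTTP port (8098) and a hypothetical bucket name:

  curl -XPUT http://localhost:8098/buckets/mybucket/props \
    -H 'Content-Type: application/json' \
    -d '{"props":{"n_val":3,"r":2,"w":2}}'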

> My question is - I think I can only do 4 in a way that makes sense. I only
> have 4 AZ's that I can use right now; AWS won't let me boot instances in
> 1a. My concern is if I try to do 5, I will be "doubling up" in one AZ - and
> in AWS you're almost as likely to lose an entire AZ as you are a single
> instance. And so, if I have instances doubled-up in one AZ (let's say
> us-east-1e), and then I lose 1e, I've now lost two instances. What are the
> chances that all three of my replicas of some chunk of my data are on those
> two instances? I know that it's not guaranteed that all replicas are on
> separate nodes.
> 
> So is it better for me to ignore the recommendation of 5 nodes, and just do
> 4? Or to ignore the fact that I might be doubling-up in one AZ? Also,
> another note. These are designed to be 'durable' nodes, so if one should go
> down I would expect to bring it back up *with* its data - or, if I
> couldn't, I would do a force-replace or replace and rebuild it from the
> other replicas. I'm definitely not doing instance-store. So I don't know if
> that mitigates my need for a full 5 nodes. I would also consider losing one
> node to be "degraded" and would probably seek to fix that problem as soon
> as possible, so I wouldn't expect to be in that situation for long. I would
> probably tolerate a drop in performance during that time, too. (Not a
> super-severe one, but 20-30 percent? Sure.)
> 
> What do you folks think?
> 
> -B.

> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


-- 

John Eikenberry
[ j...@zhar.net - http://zhar.net ]
[ PGP public key @ http://zhar.net/jae_at_zhar_net.gpg ]

Sic gorgiamus allos subjectatos nunc


signature.asc
Description: Digital signature
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


vm.args change for 15% to 80% improvement in leveldb

2013-08-13 Thread Matthew Von-Maszewski
** The following is copied from Basho's leveldb wiki page:

https://github.com/basho/leveldb/wiki/Riak-tuning-1



Summary:

leveldb has higher read and write throughput in Riak if the Erlang scheduler 
count is limited to half the number of CPU cores. Tests have demonstrated 
throughput improvements of 15% to 80%.

The scheduler limit is set in the vm.args file:

+S x:x

where "x" is the number of schedulers Erlang may use. Erlang's default value of 
"x" is the total number of CPUs in the system. For Riak installations using 
leveldb, the recommendation is to set "x" to half the number of CPUs. Virtual 
environments are not yet tested.

Example: for 24 CPU system

+S 12:12

Discussion:

We have tested a limited number of CPU configurations and customer loads. In 
all cases, there is a performance increase when the +S option is added to the 
vm.args file to reduce the number of Erlang schedulers. The working hypothesis 
is that the Erlang schedulers perform enough "busy wait" work that they always 
create context switch away from leveldb when leveldb is actually the only 
system task with real work.

The tests included 8 CPU (no hyper threading, physical cores only) and 24 CPU 
(12 physical cores with hyper threading) systems. All were 64bit Intel 
platforms. Generalized findings:

	• servers running a higher number of vnodes (64) had larger performance 
gains than those with fewer (8)
	• servers running SSD arrays had larger performance gains than those 
running SATA arrays
	• Get and Write operations showed performance gains; 2i query 
operations (leveldb iterators) were unchanged
	• not recommended for servers with fewer than 8 CPUs (go no lower than 
+S 4:4)

Performance improvements were as high as 80% over extended, heavily loaded 
intervals on servers with SSD arrays and 64 vnodes. No test resulted in worse 
performance due to the addition of +S x:x.

The +S x:x configuration change does not have to be applied to an entire Riak 
cluster simultaneously. The change may be applied to a single server for 
verification. Steps: update the vm.args file, then restart the Riak node. 
Changing the scheduler count from the Erlang command line at runtime was 
ineffective.
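
One way to confirm the setting took effect after the restart is to ask the VM
from riak attach (erlang:system_info/1 is a standard call; the node name below
is a placeholder):

(riak@10.0.0.1)1> erlang:system_info(schedulers).
12
(riak@10.0.0.1)2> erlang:system_info(schedulers_online).
12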

This configuration change has been running in at least one large, 
multi-datacenter production environment for several months.




Re: LevelDB performance (block size question)

2013-08-13 Thread István
It seems Riak does not like having the leveldb block_size changed to 64k.

App config:

app.config: {sst_block_size, 65536},
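
For context, that tunable sits inside the eleveldb section of app.config; a
minimal sketch of the surrounding block, where the data_root path and
cache_size value are only illustrative assumptions:

{eleveldb, [
            {data_root, "/var/lib/riak/leveldb"},
            {sst_block_size, 65536},    %% 64k block threshold under test
            {cache_size, 268435456}     %% block cache size in bytes
           ]},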



basho_bench logs:

18:04:38.010 [info]
Errors:[{{delete,delete},542},{{get,get},15921},{{put,put},1253},{{{delete,delete},disconnected},542},{{{get,get},disconnected},15921},{{{put,put},disconnected}
,1250},{{{put,put},timeout},3}]
18:04:48.003 [info]
Errors:[{{delete,delete},1131},{{get,get},35704},{{put,put},2738},{{{delete,delete},disconnected},1131},{{{get,get},disconnected},35704},{{{put,put},disconnecte
d},2732},{{{put,put},timeout},6}]


node error.log:

dev2/log/error.log.3:2013-08-09 14:50:51.203 [error]
<0.3399.0>@riak_api_pb_server:handle_info:141 Unrecognized message
{909113,{error,timeout}}
dev2/log/error.log.3:2013-08-09 14:50:51.207 [error]
<0.3446.0>@riak_api_pb_server:handle_info:141 Unrecognized message
{22197453,{error,timeout}}
dev2/log/error.log.3:2013-08-09 14:53:54.267 [error]
<0.5125.3>@riak_api_pb_server:handle_info:141 Unrecognized message
{13631220,{error,timeout}}

dev2/log/error.log.3:2013-08-09 15:15:19.979 [error] <0.655.0> gen_fsm
<0.655.0> in state active terminated with reason: bad argument in call to
ets:lookup(ets_riak_core_ring_manager, {bucket,<<"test">>}) in
riak_core_ring_manager:get_bucket_meta/1 line 179



I have deleted all the data between the tests, and some tests are still
running, but it seems this configuration is not ideal.

The important part of the basho_bench configuration:

{mode, max}.
{duration, 10}.
{concurrent, 64}.
{driver, basho_bench_driver_riakc_pb}.
{key_generator, {int_to_bin, {uniform_int, 100}}}.
{value_generator, {exponential_bin, 524288, 2048}}.


I am running additional tests with different cache sizes; those might have an
impact on how the system behaves.

Regards,
Istvan


On Tue, Aug 13, 2013 at 3:12 PM, István  wrote:

> Hi Matthew,
>
> Thank you for the explanation.
>
> I am experimenting with different block size and making sure I have at
> least 100G data  on disk for the tests.
>
> I.
>
>
> On Tue, Aug 13, 2013 at 12:11 PM, Matthew Von-Maszewski <
> matth...@basho.com> wrote:
>
>> Istvan,
>>
>> "block_size" is not a "size", it is a threshold.  Data is never split
>> across blocks.  A single block contains one or more key/value pairs.
>>  leveldb starts a new block only when the total size of all key/values in
>> the current block exceed the threshold.
>>
>> You must set block_size to a multiple of your typical key/value size if
>> you want multiple pairs per block.
>>
>> Plus side:  block_size is computed before compression.  So, you might get
>> a nice reduction in total disk size by having multiple, mutually compressible
>> items in a block.  leveldb iterators / Riak 2i might give you slightly
>> better performance with bigger blocks because there are fewer reads if the
>> keys needed are in the same block (or fewer blocks).
>>
>> Negative side:  the entire block, not a single key/value pair, goes into the
>> block cache uncompressed (cache_size).  You can quickly overwhelm the block
>> cache with lots of large blocks.  Also, random reads / Gets have to read,
>> decompress, and CRC-check the entire block.  Therefore it costs you more
>> disk transfer and decompression/CRC CPU time to read random values from
>> bigger blocks.
>>
>>
>> I suggest you experiment with your dataset and usage patterns.  Be sure
>> to build big sample datasets before starting to measure and/or restart Riak
>> between building and measuring.  These are ways to make sure you see the
>> impact of random reads.
>>
>> Matthew
>>
>>
>> On Aug 13, 2013, at 2:51 PM, István  wrote:
>>
>> Hi guys,
>>
>> I am setting up a new Riak cluster and I was wondering if there is any
>> drawback of increasing the LevelDB blocksize from 4K to 64K. The reason is
>> that we have all of the values way bigger than 4K and I guess from the
>> performance point of view it would make sense to increase the block size.
>> The tests are still running to confirm this theory but I wanted to clarify
>> that there is no big red flag of doing that from the Riak side. I found the
>> following discussion about changing block size:
>>
>> https://groups.google.com/forum/#!msg/leveldb/2JJ4smpSC6Q/1Z7aDSeHiRkJ
>>
>> Is that a good idea to experiment with this in Riak to achieve better
>> performance?
>>
>> Thank you in advance,
>> Istvan
>>
>>
>> --
>> the sun shines for all
>>
>>
>>
>>
>>
>
>
> --
> the sun shines for all
>
>
>


-- 
the sun shines for all


Re: Practical Riak cluster choices in AWS (number of nodes? AZ's?)

2013-08-13 Thread Brady Wetherington
One thing that I *think* I've figured out is that the number of "how many
replicas can you lose and stay up" is actually n-w for writes, and n-r for
reads -

So with n=3 and r=2 and w=2, the loss of two replicas due to AZ failure
means that I still *have* my data ("durability") but I might lose _access_
to it ("availability") for a little bit. And with that weird feature that
Riak has (the feature's name escapes me for now?) I might even be able to
write new data if my cluster figures out that the downed nodes are actually
down; I think it just stores the writes on the remaining boxen, and
eventually it gets distributed back once the nodes come back. Neat stuff.
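
(I think the feature I'm blanking on is what the docs call sloppy quorums with
hinted handoff.) To make the r/w part concrete, here is a minimal sketch of
overriding the quorum per request with the Erlang PB client; the host, bucket,
and key are placeholders:

{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% read with an explicit quorum of 2
{ok, Obj} = riakc_pb_socket:get(Pid, <<"users">>, <<"brady">>, [{r, 2}]),
%% write the updated value back, again requiring 2 acks
ok = riakc_pb_socket:put(Pid, riakc_obj:update_value(Obj, <<"updated">>), [{w, 2}]).

Nothing there touches the bucket defaults; it only changes the quorum for
those two operations.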

So after working through all of that, I *think* I actually have an argument
I can make for 4 replicas as being somewhat superior to 5. Since I'm on
AWS, I can scale by "embiggening" my nodes for a while, until I hit up to
around the 128GB RAM boxes; then I can start to double-up on AZ's (to keep
things simple, I'd probably go from 4 straight to 8). I would probably - at
that point - start to have to do some math to figure out what new 'n' might
make sense. Maybe n: 5, r: 3, w: 3? I'll cross that bridge when I come to
it (and I know there's all kinds of awful misery with changing 'n' values
in a bucket; forcing read-repairs and all kinds of stuff so that your reads
and writes don't start failing. But again, by then I might have dedicated
minions I could make figure that stuff out). Or maybe there's an inherent
advantage to going straight to 8 instead of just 'embiggening'. Again, I'll
cross that bridge (probably by talking to you all!) when I come to it.

I think the Rack Awareness sounds like a *great* feature - but I'd also
love something that's a little more...strict about making sure that my
replicas never live on the same node (current advice is that you should
have four boxes for an 'n' of 3 to ensure one box doesn't have two copies
of data; I'd love it if at some point they could make that guarantee with
number of boxes=n. I understand it's being worked-on). Once rack-awareness
comes in - or the n=number of boxes fix comes in - I'll probably have to
re-ponder my math. That'll be a good problem for me to have, though :)

-B.


On Tue, Aug 13, 2013 at 8:21 PM, John Eikenberry  wrote:

> Brady Wetherington wrote:
>
> > First off - I know 5 instances is the "magic number" of instances to
> have.
> > If I understand the thinking here, it's that at the default redundancy
> > level ('n'?) of 3, it is most likely to start getting me some scaling
> > (e.g., performance > just that of a single node), and yet also have
> > redundancy; whereby I can lose one box and not start to take a
> performance
> > hit.
>
> With n=3 wouldn't you just need to avoid having more than 2 (of 5) nodes
> in the
> same zone? With 5 nodes you shouldn't have to worry about replicas being
> on the
> same node, so if you only have 2 nodes in 1 zone you wouldn't lose data if
> you
> lost a zone.
>
> The only place I see there being a problem is in regions with only 2 zones
> or
> when you need to expand beyond the 2/zone number. Then you just have to do
> backups and accept that you will suffer an outage if you lose a zone.
>
> The cure for all this is having riak get so called "rack awareness" so you
> can
> configure it to make sure that data is replicated across multiple zones.
> This
> is supposed to be coming at some point [1].
>
> [1] https://github.com/basho/riak/issues/308
>
> > My question is - I think I can only do 4 in a way that makes sense. I
> only
> > have 4 AZ's that I can use right now; AWS won't let me boot instances in
> > 1a. My concern is if I try to do 5, I will be "doubling up" in one AZ -
> and
> > in AWS you're almost as likely to lose an entire AZ as you are a single
> > instance. And so, if I have instances doubled-up in one AZ (let's say
> > us-east-1e), and then I lose 1e, I've now lost two instances. What are
> the
> > chances that all three of my replicas of some chunk of my data are on
> those
> > two instances? I know that it's not guaranteed that all replicas are on
> > separate nodes.
> >
> > So is it better for me to ignore the recommendation of 5 nodes, and just
> do
> > 4? Or to ignore the fact that I might be doubling-up in one AZ? Also,
> > another note. These are designed to be 'durable' nodes, so if one should
> go
> > down I would expect to bring it back up *with* its data - or, if I
> > couldn't, I would do a force-replace or replace and rebuild it from the
> > other replicas. I'm definitely not doing instance-store. So I don't know
> if
> > that mitigates my need for a full 5 nodes. I would also consider losing
> one
> > node to be "degraded" and would probably seek to fix that problem as soon
> > as possible, so I wouldn't expect to be in that situation for long. I
> would
> > probably tolerate a drop in performance during that time, too. (Not a
> > super-severe one, but 20-30 percent? Sure.)
> >
> > What do you folks think?
> >
> >

Re: vm.args change for 15% to 80% improvement in leveldb

2013-08-13 Thread Jeremiah Peschka
When you say "CPU" does that mean "logical CPU core"? Or is this actually
referring to physical CPU cores?

E.g. on my laptop with 4 physical cores + HyperThreading, should I set it
to +S 4:4?

You hint that it doesn't matter, but I just wanted to trick you into
explicitly saying something.
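
For reference, the Erlang VM derives its default scheduler count from logical
processors; a quick check from an Erlang shell (the output shown is just what
a 4-core HyperThreaded machine would report):

1> erlang:system_info(logical_processors).
8
2> erlang:system_info(schedulers).
8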

---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop


On Tue, Aug 13, 2013 at 5:38 PM, Matthew Von-Maszewski
wrote:

> ** The following is copied from Basho's leveldb wiki page:
>
> https://github.com/basho/leveldb/wiki/Riak-tuning-1
>
>
>
> Summary:
>
> leveldb has a higher read and write throughput in Riak if the Erlang
> scheduler count is limited to half the number of CPU cores. Tests have
> demonstrated improvements of 15% to 80% greater throughput.
>
> The scheduler limit is set in the vm.args file:
>
> +S x:x
>
> where "x" is the number of schedulers Erlang may use. Erlang's default
> value of "x" is the total number of CPUs in the system. For Riak
> installations using leveldb, the recommendation is to set "x" to half the
> number of CPUs. Virtual environments are not yet tested.
>
> Example: for 24 CPU system
>
> +S 12:12
>
> Discussion:
>
> We have tested a limited number of CPU configurations and customer loads.
> In all cases, there is a performance increase when the +S option is added
> to the vm.args file to reduce the number of Erlang schedulers. The working
> hypothesis is that the Erlang schedulers perform enough "busy wait" work
> that they constantly force context switches away from leveldb when leveldb is
> actually the only system task with real work.
>
> The tests included 8 CPU (no hyper threading, physical cores only) and 24
> CPU (12 physical cores with hyper threading) systems. All were 64bit Intel
> platforms. Generalized findings:
>
> • servers running higher number of vnodes (64) had larger
> performance gains than those with fewer (8)
> • servers running SSD arrays had larger performance gains than
> those running SATA arrays
> • Get and Write operations showed performance gains, 2i query
> operations (leveldb iterators) were unchanged
> • Not recommended for servers with less than 8 CPUs (go no lower
> than +S 4:4)
>
> Performance improvements were as high as 80% over extended, heavily loaded
> intervals on servers with SSD arrays and 64 vnodes. No test resulted in
> worse performance due to the addition of +S x:x.
>
> The +S x:x configuration change does not have to be implemented
> simultaneously to an entire Riak cluster. The change may be applied to a
> single server for verification. Steps: update the vm.args file, then
> restart the Riak node. Erlang command line changes to schedulers were
> ineffective.
>
> This configuration change has been running in at least one large,
> multi-datacenter production environment for several months.
>
>
>


Re: Practical Riak cluster choices in AWS (number of nodes? AZ's?)

2013-08-13 Thread Matthew Dawson
On August 13, 2013 10:20:48 PM Brady Wetherington wrote:
> One thing that I *think* I've figured out is that the number of "how many
> replicas can you lose and stay up" is actually n-w for writes, and n-r for
> reads -
> 
> So with n=3 and r=2 and w=2, the loss of two replicas due to AZ failure
> means that I still *have* my data ("durability") but I might lose _access_
> to it ("availability") for a little bit. And with that weird feature that
> Riak has (the feature's name escapes me for now?) I might even be able to
> write new data if my cluster figures out that the downed nodes are actually
> down; I think it just stores the writes on the remaining boxen, and
> eventually it gets distributed back once the nodes come back. Neat stuff.
> 
Actually, that is only sort of true.  If you lose two nodes, a read may 
initially fail, since it can only perform the read against one primary node 
and the two fallback vnodes won't have the data yet.  However, the cluster will 
recognize that data is missing from the fallback vnodes and initiate 
read-repair, so the next read will in fact work just fine.  If you build your 
app to assume reads may transiently fail, then you shouldn't have an issue.
Writes will also continue to work in the same way (as you mentioned).
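
A tiny sketch of what "assume reads may transiently fail" can look like with
the Erlang PB client; the retry delay and r value here are arbitrary
assumptions:

get_with_retry(Pid, Bucket, Key) ->
    case riakc_pb_socket:get(Pid, Bucket, Key, [{r, 2}]) of
        {ok, Obj} ->
            {ok, Obj};
        {error, _Reason} ->
            %% give the fallback vnodes / read-repair a moment, then retry once
            timer:sleep(100),
            riakc_pb_socket:get(Pid, Bucket, Key, [{r, 2}])
    end.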
-- 
Matthew
