Re: Create Bucket failed

2016-07-12 Thread Luke Bakken
What tool are you using to create buckets? If you can, please provide debug
output. It looks as though the message sent to Riak CS is malformed ("error,
malformed_xml").

--
Luke Bakken
Engineer
lbak...@basho.com


On Sat, Jul 9, 2016 at 1:11 AM, s251251251  wrote:
> Hello
> A few days after installing Riak CS, I can no longer create buckets. The
> server error is:
>
> 2016-07-09 12:36:15.401 [error] <0.796.0> Webmachine error at path
> "/buckets/test" :
> {error,{error,{badmatch,{error,malformed_xml}},[{riak_cs_s3_response,xml_error_code,1,[{file,"src/riak_cs_s3_response.erl"},{line,396}]},{riak_cs_s3_response,error_response,1,[{file,"src/riak_cs_s3_response.erl"},{line,273}]},{riak_cs_wm_bucket,accept_body,2,[{file,"src/riak_cs_wm_bucket.erl"},{line,130}]},{riak_cs_wm_common,accept_body,2,[{file,"src/riak_cs_wm_common.erl"},{line,342}]},{webmachine_resource,resource_call,3,[{file,"src/webmachine_resource.erl"},{line,186}]},{webmachine_resource,...},...]}}
> in riak_cs_s3_response:xml_error_code/1 line 396
>
> However, I can still GET and PUT files, and Stanchion is running.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Java client question

2016-07-12 Thread Luke Bakken
Hi Guido,

I see that you opened up this PR, thanks -
https://github.com/basho/riak-java-client/pull/631

Would you mind filing these questions in an issue on GitHub to
continue the discussion there?

https://github.com/basho/riak-java-client/issues/new
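
In the meantime, one quick way to check which Netty version actually ends up
on your classpath (an untested sketch, assuming a Maven build) is the
dependency tree, filtered to Netty:

mvn dependency:tree -Dincludes=io.netty

That should make it clear whether 4.0.x or 4.1.x is being resolved in your
project.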

Thanks!

--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Jun 29, 2016 at 7:02 AM, Guido Medina  wrote:
> Hi,
>
> Are there any plans to release a Riak Java client built with Netty 4.1.x?
>
> The reasoning for this is that some projects, Vert.x 3.3.0 for example,
> are already on Netty 4.1.x, and AFAIK Netty 4.1.x isn't just a drop-in
> replacement for 4.0.x.
>
> Would it make sense to support another Riak Java client, say version 2.1.x,
> built with Netty 4.1.x, as a way to move forward?
>
> Or maybe Riak 2.0.x already works with Netty 4.1.x? I doubt it, though.
>
> Best regards,
>
> Guido.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak on Solaris/OmniOS/illumos

2016-07-12 Thread Luke Bakken
Hi Henrik,

Sorry for the delay in responding to you via this channel. Solaris/illumos
support will be dropped in a future Riak release, as will FreeBSD support.
However, this does not prevent the community from continuing to support
these platforms. All the code necessary to build platform-specific Riak
packages is in this repository:

https://github.com/basho/node_package

Maintaining support for a platform basically means ensuring that the build
for that platform continues to work on that platform's supported versions.

If you'd like to contribute, please give building the packages for
your platform a try. If you have difficulty or find issues, file them
on GitHub, or better yet, send in a PR.
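
Roughly, and from memory (untested, and assuming a working Erlang/OTP
toolchain on the build host), producing a package from a Riak source
checkout looks like:

git clone https://github.com/basho/riak.git
cd riak
make rel
make package

The package target drives node_package to produce the platform-specific
artifact.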

Thanks

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, Jun 13, 2016 at 5:30 AM, Henrik Johansson  wrote:
> Hi,
>
> I've recently been told that Riak will no longer be supported on
> Solaris/illumos-based distributions. At the same time, ZFS was recommended,
> which I find a bit strange since ZFS comes from Solaris/illumos and is
> still most thoroughly tested on that platform. There are also Riak probes
> for DTrace.
>
> Can someone confirm this and/or give me some background on this?
>
> Regards
> Henrik

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Recovering Riak data if it can no longer load in memory

2016-07-12 Thread Vikram Lalit
Hi - I've been testing a Riak cluster (of 3 nodes) with an ejabberd
messaging cluster in front of it that writes data to the Riak nodes. Whilst
load testing the platform (by creating 0.5 million ejabberd users via
Tsung), I found that the Riak nodes suddenly crashed. My question is: how do
we recover from such a situation if it were to occur in production?

To provide further context, the leveldb log files storing the data grew so
large that the AWS Riak instances could no longer load them into memory, so
we get a core dump when 'riak start' is run on those instances. I had
n_val = 2, and all 3 nodes went down almost simultaneously, so in such a
scenario we cannot even rely on a second copy of the data. One way to
prevent this in the first place would be auto-scaling, but I'm wondering
whether there is any ex post facto recovery that can be performed. Is it
possible to simply copy the leveldb data to an instance with more memory, or
to prune the data enough that it loads on the same instance?

I'd appreciate any input - I'm a tad concerned about how we could recover
from such a situation if it were to happen in production (apart from
leveraging auto-scaling as a preventive measure).

Thanks!
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Recovering Riak data if it can no longer load in memory

2016-07-12 Thread Matthew Von-Maszewski
It would be helpful if you described the physical characteristics of the
servers: memory size, logical CPU count, etc.

Google created leveldb to be highly reliable in the face of crashes. If it
is not restarting, that suggests to me that a low-memory condition is
preventing leveldb's MANIFEST file from loading. That is easily fixed by
moving the dataset to a machine with more memory.
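
Something like the following is typically all it takes (an untested sketch;
the paths assume the default Linux package layout, so adjust for your
install):

riak stop
rsync -a /var/lib/riak/leveldb/ newhost:/var/lib/riak/leveldb/

Start Riak on the larger machine and leveldb will recover from its MANIFEST
and write-ahead logs when it opens the database.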

There is also a special flag to reduce Riak's leveldb memory footprint
during development work. The setting reduces leveldb performance, but lets
you run with less memory.

In riak.conf, set:

leveldb.limited_developer_mem = true

Matthew




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Recovering Riak data if it can no longer load in memory

2016-07-12 Thread Vikram Lalit
Thanks very much, Matthew. Yes, the server is low on memory since it is only
for development right now - I'm using an AWS micro instance, so 1 GB RAM and
1 vCPU.

Thanks for the tip - let me try moving the dataset to a larger instance and
see how that works. More than reducing the memory footprint in dev, my
concern was about reacting to a possible production scenario where the db
stops responding due to memory overload. Understood now that moving to a
larger instance should be possible. Thanks again.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Recovering Riak data if it can no longer load in memory

2016-07-12 Thread Matthew Von-Maszewski
You can further reduce memory used by leveldb with the following setting in 
riak.conf:

leveldb.threads = 5

The value "5" needs to be a prime number.  The system defaults to 71.  Many 
Linux implementations will allocate 8Mbytes per thread for stack.  So bunches 
of threads lead to bunches of memory reserved for stack.  That is fine on 
servers with higher memory.  But probably part of your problem on a small 
memory machine.
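
To put numbers on it: 71 threads x 8 MB is roughly 568 MB reserved for
thread stacks alone, while 5 threads x 8 MB is only 40 MB - a big difference
on a 1 GB instance.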

The default thread count is high to promote parallelism across vnodes on the
same server, especially with "anti_entropy = active". So again, this setting
sacrifices performance to save memory.

Matthew

P.S. You really want 8 CPU cores, 4 as a bare minimum. And review this for
more CPU performance info:

https://github.com/basho/leveldb/wiki/riak-tuning-2




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com