Hi Evan,
thanks for all the info! I adjusted the leveldb config as suggested,
except for the cache, which I reduced to 16MB; keeping this above the
default helped a lot, at least during load testing. And I added +P 130072
to the vm.args. It will be applied to the Riak nodes within the next few hours.
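For reference, the two fragments in question might look like this (a sketch only; the data_root path is hypothetical, and 16777216 bytes is the 16MB cache mentioned above):

```erlang
%% app.config -- eleveldb section (illustrative path and size)
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},
    {cache_size, 16777216}   %% 16MB, as reduced above
]}.

%% vm.args -- raised Erlang process limit
%% +P 130072
```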
We have
Spent some time with the AWS folks the other day and was getting sold on
using DynamoDB for some of our large key-value store needs. However,
given the read/write economics of DynamoDB vs. instance+storage costs on
Riak, I was wondering if anybody has done some good thinking around where
the cost in
If it's always the same three nodes it could well be the same very large
object being updated each day. Is there anything else that looks
suspicious in your logs? Another sign of large objects is large_heap
(or long_gc) messages from riak_sysmon.
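Those large_heap and long_gc warnings come from thresholds set in the riak_sysmon section of app.config; a sketch with illustrative values (check the keys against your own release before relying on them):

```erlang
%% app.config -- riak_sysmon section (illustrative thresholds)
{riak_sysmon, [
    {process_limit, 30},         %% max process events logged per second
    {port_limit, 30},            %% max port events logged per second
    {gc_ms_limit, 100},          %% emit long_gc when a GC takes >100ms
    {heap_word_limit, 40111000}  %% emit large_heap above this many words
]}.
```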
On Thu, Apr 4, 2013 at 3:58 AM, Ingo Rockel
wrote:
>
Hi Evan,
we added monitoring of the object sizes and there was one object on one
of the three nodes mentioned which was > 2GB!!
We just changed the application code to get the id of this object to be
able to delete it. But it does happen only about once a day.
We right now have another node
The crashing of the node seems to have been caused by the raised +P param;
after the last crash I commented out the param and now the node runs just fine.
On 04.04.2013 15:43, Ingo Rockel wrote:
Hi Evan,
we added monitoring of the object sizes and there was one object on one
of the three nodes mentioned which was
Toby,
That particular page is talking about changing the default settings of the
backend of a bucket. In that specific case, if you want to change the
default behavior in your app.config file a restart is necessary. One
particularly important detail there is that you don't need to restart *all*
nodes
Brisa,
You cannot simulate transactions, really (see also
http://aphyr.com/posts/254-burn-the-library). However, if you want to
receive notifications and take action when something happens, you can
add a postcommit hook (see
http://docs.basho.com/riak/latest/references/appendices/concepts/Commit-H
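A post-commit hook is just an exported Erlang function that receives the committed object; a minimal sketch (module and function names below are hypothetical, and Riak ignores the return value):

```erlang
-module(my_postcommit_hook).
-export([notify/1]).

%% Called by Riak after a successful commit; Obj is the riak_object.
notify(Obj) ->
    error_logger:info_msg("object written: ~p/~p~n",
                          [riak_object:bucket(Obj),
                           riak_object:key(Obj)]),
    ok.
```

Compile it onto the node's code path and add it to the bucket props, e.g. `{"props":{"postcommit":[{"mod":"my_postcommit_hook","fun":"notify"}]}}`.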
Hi everyone,
I am trying to run the Erlang m/r following the Riak Handbook, and got the
following error:
(riak@127.0.0.1)4> ExtractTweet = fun(RObject, _, _) ->
(riak@127.0.0.1)4>     {struct, Obj} = mochijson2:decode(
(riak@127.0.0.1)4>         riak_object:get_value(RObject)),
(riak@127.0.0.1)4>     [propl
A grep for "too many processes" didn't reveal anything. The process got
killed by the oom-killer.
On 04.04.2013 16:12, Evan Vigil-McClanahan wrote:
That's odd. It was getting killed by the OOM killer, or crashing
because it couldn't allocate more memory? That's suggestive of
something else
At least in a two-phase-commit-enabled environment you can implement a
rollback to "undo" your action. You expect things to go right and a very
small % to go wrong, so implementing a rollback policy isn't such a bad
idea; I had to do the same years ago for a payment client, when things
went wro
Also, you might not need a notification when the Riak operation succeeds
if you set a high default N value; let's say, for 5 nodes, an N value of 3
should give you a good safe bet, meaning the Riak client will return
successfully when the key was written to at least 3 nodes, so it only
leaves with the
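For reference, the N value and write quorum discussed above can be set cluster-wide via the default bucket properties in app.config (a sketch; these are the stock riak_core keys, values matching the example above):

```erlang
%% app.config -- cluster-wide bucket defaults (illustrative)
{riak_core, [
    {default_bucket_props, [
        {n_val, 3},  %% replicas kept per key
        {w, 3}       %% write acks required before the client sees success
    ]}
]}.
```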
Hi Jared,
I don't see these patches, which I have applied to our installation of 1.3,
explicitly mentioned in the Release Notes:
Fix bug where stats endpoints were calculating _all_ riak_kv stats:
https://github.com/basho/riak_kv/blob/9be3405e53acf680928faa6c70d265e86c75a22c/src/riak_kv_s
One last note for 1.3. Please make sure that the following line is in
your vm.args:
-env ERL_MAX_ETS_TABLES 819
This is a good idea for all systems but is especially important for
people with large rings.
Were there any other messages? Riak constantly spawns new processes,
but they don't tend t
I can't speak to the costing issues, as that isn't something I am
terribly familiar with, but at the moment, Riak still has some
overhead issues with very small values. There are upcoming
optimizations in the next major (1.4) release that should help. What
issues did you run into?
On Thu, Apr 4
As of 1.3 the old client:mapreduce is deprecated, please use
`riak_kv_mrc_pipe:mapred` instead.
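From an attached console the replacement call looks roughly like this (a sketch: the bucket/key input and the ExtractTweet map phase are placeholders from the Handbook example, and the exact argument shapes may differ by release):

```erlang
%% riak attach -- hypothetical single-key input plus one map phase
Query = [{map, {qfun, ExtractTweet}, none, true}],
{ok, Results} =
    riak_kv_mrc_pipe:mapred([{<<"tweets">>, <<"some-key">>}], Query).
```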
On Thu, Apr 4, 2013 at 9:07 AM, Tom Zeng wrote:
> Hi everyone,
>
> I am trying to run the Erlang m/r following the Riak Handbook, and got the
> following error:
>
> (riak@127.0.0.1)4> ExtractTweet = fu
Major error on my part here!
> your vm.args:
> -env ERL_MAX_ETS_TABLES 819
This should be
-env ERL_MAX_ETS_TABLES 8192
Sorry for the sloppy cut and paste. Please do not do the former
thing, or it will be very bad.
> This is a good idea for all systems but is especially important for
> people
thanks, but it was a very obvious c&p error :) and we already have the
ERL_MAX_ETS_TABLES set to 8192 as it is in the default vm.args.
The only other messages were about a lot of handoff going on.
Maybe the node was getting some data concerning the 2GB object?
Ingo
On 04.04.2013 17:25,
Possible, but would need more information to make a guess. I'd keep a
close eye on that node.
On Thu, Apr 4, 2013 at 10:34 AM, Ingo Rockel
wrote:
> thanks, but it was a very obvious c&p error :) and we already have the
> ERL_MAX_ETS_TABLES set to 8192 as it is in the default vm.args.
>
> The onl
Hi Dave,
The stats calculation was fixed in 1.3.1, but the read-repair with
Last-write-wins=true was not backported. That one will make it to 1.4,
which is scheduled in the near future. I hope that helps.
--
Engel Sanchez
On Thu, Apr 4, 2013 at 11:05 AM, Dave Brady wrote:
> Hi Jared,
>
> I do
Hi Dave,
Building on what Engel said, the stats change was merged as part of a
larger squashed commit [1].
Jordan
[1]
https://github.com/basho/riak_kv/commit/fd2e527378a7fa284605b131c4d02ee5c28d229d
On Thu, Apr 4, 2013 at 9:00 AM, Engel Sanchez wrote:
> Hi Dave,
>
> The stats calculation was
Thanks a lot for pointing me in the right direction (the huge object);
it would have taken a lot longer for me to find this out myself!
On 04.04.2013 17:51, Evan Vigil-McClanahan wrote:
Possible, but would need more information to make a guess. I'd keep a
close eye on that node.
On Thu, Apr 4, 2013
Riak Users,
To keep everyone on their toes with our Riak 1.3.1 release yesterday, today
we have an update to Riak CS in the form of 1.3.1. Riak CS and Stanchion
have both been updated. There is no update to Riak CS Control at this time.
The downloads can be found on our docs page:
http://docs.b
Hi guys,
I'm sure I've seen a link somewhere explaining what I'd like to do, but I
can no longer find it; I hope someone can help? Thanks.
I have a 'user' bucket which stores a list of keys within the data,
pointing to other users as friends...
ie
For User "mathew"
data: {
"friends": [ "john",
Thanks for the feedback. I made two changes to my test setup and saw better
throughput:
1) Don't write to the same key over and over. Updating a key appears to be
a lot slower than creating a new key
2) I used parallel PUTs
The throughput I was measuring before was about 26MB/s on localhost. Wit
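For anyone repeating the experiment from Erlang, parallel PUTs can be sketched with the riakc PB client like this (bucket name, worker count, and value size are made up for illustration):

```erlang
%% spawn N workers, each PUTting a distinct key, then wait for all
Parent = self(),
N = 8,
[spawn(fun() ->
           {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
           Key = list_to_binary(integer_to_list(I)),
           Val = crypto:strong_rand_bytes(1024),
           ok = riakc_pb_socket:put(Pid,
                    riakc_obj:new(<<"bench">>, Key, Val)),
           Parent ! {done, I}
       end) || I <- lists:seq(1, N)],
[receive {done, _} -> ok end || _ <- lists:seq(1, N)].
```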
Just as a side note, you might want to retry the test with PBC. While I
have only done testing with < 10kb documents, my tests indicate that
PBC is twice as fast as HTTP in almost all cases.
Shuhao
On 13-04-04 04:14 PM, Matthew MacClary wrote:
Thanks for the feedback. I made two changes to m
How can I enforce a quota for each user (tenant) in Riak CS? Thanks.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/User-quota-in-riak-cs-tp4027486.html
Sent from the Riak Users mailing list archive at Nabble.com.
Ok, thanks Engel and Jordan!
--
Dave Brady
- Original Message -
From: "Jordan West"
To: "Engel Sanchez"
Cc: "Dave Brady" , "Riak Users Mailing List"
Sent: Thursday, April 4, 2013 6:06:05 PM GMT +01:00 Amsterdam / Berlin / Bern /
Rome / Stockholm / Vienna
Subject: Re: [ANNC] R
On Apr 4, 2013, at 4:14 PM, Matthew MacClary
wrote:
> Thanks for the feedback. I made two changes to my test setup and saw better
> throughput:
>
> 1) Don't write to the same key over and over. Updating a key appears to be a
> lot slower than creating a new key
>
> 2) I used parallel PUTs
>
Thanks Evan, that worked.
On Thu, Apr 4, 2013 at 11:14 AM, Evan Vigil-McClanahan <
emcclana...@basho.com> wrote:
> As of 1.3 the old client:mapreduce is deprecated, please use
> `riak_kv_mrc_pipe:mapred` instead.
>
> On Thu, Apr 4, 2013 at 9:07 AM, Tom Zeng wrote:
> > Hi everyone,
> >
> > I am
On Apr 4, 2013, at 4:52 PM, minotaurus wrote:
> How can I enforce a quota for each user (tenant) in Riak CS? Thanks.
Riak CS does not currently support quotas of any sort. You can _observe_ the
resources (i/o, storage) a user is using, but not limit them. This is something
we may implement i
I am measuring throughput by the wall clock time needed to move a few gigs
of data into Riak. I have glanced at iostat, but I was not collecting data
from that tool at this point.
-Matt
On Thu, Apr 4, 2013 at 2:45 PM, Reid Draper wrote:
>
> On Apr 4, 2013, at 4:14 PM, Matthew MacClary <
> macc
PBC is certainly something I have on my list of things to explore.
Conceptually I am not sure if the speed gains from this protocol will be
apparent with large binary payloads. I thought the main speed gains were
from 1) more compact binary representation and 2) lower interpretation
overhead. In m
Hi Jared,
I'm afraid I am still a little confused after reading your reply, so I'd
like to check something.
If I understand correctly, the reboot of nodes is only required if the
default settings in app.config are changed, and one can change anything
else on-the-fly?
So therefore, in the f
On 04/04/13 17:43, Toby Corkindale wrote:
Hi,
Can we set Bitcask's expiry_secs value to be different per backend, in a
Multi-backend scenario?
Eg.
{multi_backend, [
{<<"bitcask_short_ttl">>, riak_kv_bitcask_backend, [
{expiry_secs, 3600}, %% Expire items after one hour
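For context, a fuller sketch of such a section (the second backend name and its TTL are made up for illustration; multi_backend_default picks the backend used when a bucket doesn't name one):

```erlang
%% app.config -- riak_kv multi-backend sketch
{riak_kv, [
    {storage_backend, riak_kv_multi_backend},
    {multi_backend_default, <<"bitcask_long_ttl">>},
    {multi_backend, [
        {<<"bitcask_short_ttl">>, riak_kv_bitcask_backend,
         [{expiry_secs, 3600}]},      %% expire items after one hour
        {<<"bitcask_long_ttl">>, riak_kv_bitcask_backend,
         [{expiry_secs, 2592000}]}    %% expire items after 30 days
    ]}
]}.
```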
Hey Toby,
Your conclusions are correct.
Tom
On Thu, Apr 4, 2013 at 10:39 PM, Toby Corkindale <
toby.corkind...@strategicdata.com.au> wrote:
> On 04/04/13 17:43, Toby Corkindale wrote:
>
>> Hi,
>> Can we set Bitcask's expiry_secs value to be different per backend, in a
>> Multi-backend scenario?
Answering my own question again, but hopefully that saves you time.
So, it appears that if a backend is changed via the JSON REST API, then
all keys from the previous backend become inaccessible. I think this
also indicates that the new backend is in use immediately, without
any restarts
One possible solution to emulate quotas is to observe the space
usage of each user periodically, and when it exceeds your limits you
can "disable" the user by calling the admin API.
http://docs.basho.com/riakcs/latest/cookbooks/Account-Management/#Enabling-and-Disabling-a-User-Account
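The disable step on that page boils down to an HTTP PUT against the user resource; sketched here with OTP's httpc (host, port, key id, and the auth header are placeholders, and real admin requests must be properly signed):

```erlang
%% sketch: flip a Riak CS user's status to "disabled"
inets:start(),
Url = "http://riak-cs.local:8080/riak-cs/user/USER_KEY_ID",
Hdrs = [{"Authorization", "AWS admin-key:signature"}],  %% placeholder auth
Body = <<"{\"status\":\"disabled\"}">>,
httpc:request(put, {Url, Hdrs, "application/json", Body}, [], []).
```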
On Fri, Apr 5
Completely disabling the users seems a little too much. Is it possible to
only deny the user writing new files, while still letting him read and delete
files from his account?
As Reid mentioned, that is something we may implement in the future...
On Fri, Apr 5, 2013 at 1:39 PM, minotaurus wrote:
> Disabling completely the users seems a little too much. It is possible to
> only deny the user to write new files and still be able to read and delete
> files from his accou