2014-04-02 19:38 GMT+06:00 Luke Bakken :
> In your Riak /etc/riak/app.config files, please use the following value:
>
> {pb_backlog, 256},
I even tried {pb_backlog, 512} - no change.
> After changing this, you will have to restart Riak in a rolling fashion.
> Could you please run riak-debug on one node in your cluster and make the
> generated archive available? (dropbox, for example)
2014-04-03 1:36 GMT+06:00 Seth Thomas :
> Could you also include your riak app.config and vm.args. It seems like
> you're load balancing Riak CS but I'm curious how the underlying Riak
> topology looks as well since that will likely be where the performance
> bottlenecks are uncovered.
Config tem
> Is anyone doing Change Capture (like the Databus project from LinkedIn)
> directly out of Riak? Right now I have something hacked up using
> Ripple+ActiveModel::Dirty, but I'd like to divorce it from our app
> completely if possible. Was thinking a post-commit hook might work? End
> goal is to fe
Since this is related to my earlier question: sorry to have kept you waiting, Timo.
Kelly, the reason I brought up my original question was because my use case
involves delivering videos under load.
Suppose there is a cluster of 50 nodes with a replication value of three. Now
if a random node is que
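(Re the post-commit hook idea quoted above: a rough Erlang sketch of what such a
hook could look like. The module and function names are hypothetical and the
forwarding step is a stub; Riak calls the hook with the committed object and
ignores the return value.)

    %% change_capture_hook.erl (hypothetical module name)
    -module(change_capture_hook).
    -export([forward_change/1]).

    %% Riak invokes post-commit hooks with the committed riak_object.
    forward_change(Object) ->
        Bucket = riak_object:bucket(Object),
        Key = riak_object:key(Object),
        %% Push {Bucket, Key} to an external queue/feed here (stub).
        io:format("changed object: ~p/~p~n", [Bucket, Key]),
        ok.

The hook would then be attached through the bucket's postcommit property, e.g.
{"props": {"postcommit": [{"mod": "change_capture_hook", "fun": "forward_change"}]}}.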
Stanislav,
Could you also include your riak app.config and vm.args. It seems like
you're load balancing Riak CS but I'm curious how the underlying Riak
topology looks as well since that will likely be where the performance
bottlenecks are uncovered.
On Wed, Apr 2, 2014 at 6:38 AM, Luke Bakken wrote:
Hi Stanislav,
In your Riak /etc/riak/app.config files, please use the following value:
{pb_backlog, 256},
After changing this, you will have to restart Riak in a rolling fashion.
Could you please run riak-debug on one node in your cluster and make the
generated archive available? (dropbox, for example)
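(For reference, a sketch of where that setting lives in app.config - this assumes
the riak_api section used by Riak 1.4.x; keep the rest of your existing entries
as they are:)

    {riak_api, [
        %% listen backlog for protocol buffers client connections
        {pb_backlog, 256},
        {pb_ip, "0.0.0.0"},
        {pb_port, 8087}
    ]},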
You should use Varnish as a frontend and a minimum of 5 nodes in the backend.
Regards,
On 02/04/2014 12:42, Igor Kukushkin wrote:
Hi all.
Here's a simple scenario that we're planning to test: a cluster of 4
nodes, 2 are normal eleveldb-backend nodes and 2 are stored in RAM
(with the same eleveldb backend).
Hi all.
Here's a simple scenario that we're planning to test: a cluster of 4
nodes, 2 are normal eleveldb-backend nodes and 2 are stored in RAM
(with the same eleveldb backend).
We plan to use RAM nodes as a fast "frontend".
Question A:
RAM disks are completely volatile, so nodes will restart with
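(One way to realize the "RAM nodes" part of this scenario is to point eleveldb's
data_root at a tmpfs mount on those two nodes. A minimal app.config sketch,
assuming /mnt/ramdisk is a tmpfs mount - the path is an assumption:)

    %% on the two RAM-backed nodes only
    {riak_kv, [
        {storage_backend, riak_kv_eleveldb_backend}
    ]},
    {eleveldb, [
        %% /mnt/ramdisk is assumed to be mounted as tmpfs
        {data_root, "/mnt/ramdisk/leveldb"}
    ]},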
On 2 Apr 2014, at 09:21, David James wrote:
> What version(s) of Riak have allow_mult=true by default? Which ones have
> allow_mult=false by default?
All released versions of Riak have allow_mult=false for default buckets. All
released versions of Riak only have default buckets.
2.0 will have
What version(s) of Riak have allow_mult=true by default? Which ones have
allow_mult=false by default?
I thought this was decided in the early days of Riak. Why the back and
forth now?
On Wed, Apr 2, 2014 at 4:16 AM, Eric Redmond wrote:
> This was changed back a few weeks ago: allow_mult is back to false
> for buckets without a type (default), but is true for buckets with a type.
Hi David,
Sorry about the hokey-cokey on this.
In 2.0, allow_mult=false is the default for default/untyped buckets. That is
to support legacy applications and rolling upgrades with the least surprise.
allow_mult=true is the default for typed buckets, as we think this is the correct
way to run Riak.
This was changed back a few weeks ago: allow_mult is back to false
for buckets without a type (default), but is true for buckets with a type.
Sorry for the back and forth, but we decided it would be better to keep it
as false so as to not break existing users. However, we strongly encourag
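(For anyone following along on a 2.0 node, the typed-bucket behaviour described
above can be checked roughly like this - the type name is arbitrary:)

    riak-admin bucket-type create mytype '{"props":{}}'
    riak-admin bucket-type activate mytype
    riak-admin bucket-type status mytype   # look for allow_mult: true

Buckets reached without a type keep allow_mult: false, per the above.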
In my tests, allow_mult defaults to false for 2.0.0pre20. This was not the
case for 2.0.0pre11; my tests behave correctly under pre11.
This is according to my testing with my Clojure Riak driver, Kria:
https://github.com/bluemont/kria
My understanding is that Riak intends allow_mult to default to true.