For the mailing list's reference, this issue has been resolved by the following:
* Increase pb_backlog to 256 in the Riak app.config on all nodes
* Increase +zdbbl to 96000 in the Riak vm.args on all nodes
* Switch proxies from tengine (patched nginx) to HAProxy
* Reduce ring size from 256 to 128
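For anyone applying the same fix, a rough sketch of where those settings live. File paths are the package defaults, the section pb_backlog belongs in depends on your Riak version (riak_api on 1.4.x), and ring_creation_size only takes effect when a cluster's ring is first created:

    %% /etc/riak/app.config
    {riak_api, [
        {pb_backlog, 256}          %% listen backlog for protocol buffers clients
    ]},
    {riak_core, [
        {ring_creation_size, 128}  %% only applied when the ring is created
    ]},

    ## /etc/riak/vm.args
    ## distribution buffer busy limit, in kilobytes
    +zdbbl 96000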
2014-04-03 22:54 GMT+06:00 Luke Bakken :
> Before you go down the path of changing proxies, could you provide logs from
> one instance of your proxy server? They may provide more insight into what's
> going on here. In addition, the config and logs from one Riak CS node would
> be helpful - the co
Hi Anton,
Thanks for letting us know that changing s3cmd settings won't work.
Before you go down the path of changing proxies, could you provide logs
from one instance of your proxy server? They may provide more insight into
what's going on here. In addition, the config and logs from one Riak CS
node would be helpful.
Luke,
it would be quite difficult to connect directly to Riak CS nodes because
Riak CS is set up to run in a DMZ, so we would have to reconfigure the
whole network. So let's assume you are right in your thinking and it is
the proxy (tengine) causing our problems. What would you recommend to use as
Anton,
You'd be connecting clients to Riak CS directly for the purposes of
testing. We believe that your load balancer is terminating connections
prematurely, so removing it from your environment is the best way to
determine if it is causing this issue.
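For reference, one low-effort way to run that test is to point s3cmd at a single Riak CS node instead of the proxy, via the proxy settings in ~/.s3cfg. The host name below is a placeholder, and 8080 is assumed to be the default Riak CS listener port:

    # point directly at one Riak CS node, bypassing the load balancer (placeholder host)
    host_base = s3.amazonaws.com
    host_bucket = %(bucket)s.s3.amazonaws.com
    proxy_host = riak-cs-node1.example.com
    proxy_port = 8080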
I am suggesting to lower pb_backlog - 256 s
Hi Luke
Is it safe to allow clients to connect directly to Riak CS, as your
suggestion is kinda against the official docs
(http://docs.basho.com/riakcs/latest/cookbooks/configuration/Load-Balancing-and-Proxy-Configuration/)?
I also think our problem might be caused by ring_creation_size 256 while
w
Hi Stanislav,
Could you configure your clients to connect directly to Riak CS instead of
through your proxy? A colleague of mine suggested that the
{error,closed} and 104 error code in s3cmd appear to be caused by the
connection between
the client and Riak CS being closed or terminated unexpectedly
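In case it helps with the proxy comparison: whichever load balancer sits in front, its client and server timeouts need to be longer than the slowest PUT/GET, or connections get cut off exactly like this. A minimal HAProxy sketch follows; the addresses, port 8080, and timeout values are assumptions, not taken from this cluster:

    defaults
        mode http
        timeout connect 5s
        timeout client  60s
        timeout server  60s

    frontend riak_cs
        bind *:8080
        default_backend riak_cs_nodes

    backend riak_cs_nodes
        balance roundrobin
        server cs1 10.0.0.1:8080 check
        server cs2 10.0.0.2:8080 check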
2014-04-02 19:38 GMT+06:00 Luke Bakken :
> In your Riak /etc/riak/app.config files, please use the following value:
>
> {pb_backlog, 256},
I even tried {pb_backlog, 512} - no change.
> After changing this, you will have to restart Riak in a rolling fashion.
> Could you please run riak-debug on one node in your cluster and make the generated archive available?
2014-04-03 1:36 GMT+06:00 Seth Thomas :
> Could you also include your riak app.config and vm.args. It seems like
> you're load balancing Riak CS but I'm curious how the underlying Riak
> topology looks as well since that will likely be where the performance
> bottlenecks are uncovered.
Config tem
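As an aside, the underlying Riak topology Seth is asking about can be captured from any one node with riak-admin (standard packaging assumed), without collecting configs from every machine:

    riak-admin member-status   # ring ownership percentage and status per node
    riak-admin ring-status     # claimant, ring health, pending ownership handoffs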
Stanislav,
Could you also include your riak app.config and vm.args. It seems like
you're load balancing Riak CS but I'm curious how the underlying Riak
topology looks as well since that will likely be where the performance
bottlenecks are uncovered.
On Wed, Apr 2, 2014 at 6:38 AM, Luke Bakken wrote:
Hi Stanislav,
In your Riak /etc/riak/app.config files, please use the following value:
{pb_backlog, 256},
After changing this, you will have to restart Riak in a rolling fashion.
Could you please run riak-debug on one node in your cluster and make the
generated archive available? (dropbox, for example)
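For completeness, a sketch of the rolling restart and the debug capture, assuming the standard riak and riak-admin scripts are on the PATH and <nodename> is filled in per node:

    # on each node, one at a time
    riak stop
    riak start
    riak-admin wait-for-service riak_kv riak@<nodename>
    riak-admin transfers          # wait until no handoffs are pending before the next node

    # then on a single node
    riak-debug                    # writes a tar.gz archive in the current directory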