2014-04-03 22:54 GMT+06:00 Luke Bakken :
> Before you go down the path of changing proxies, could you provide logs from
> one instance of your proxy server? They may provide more insight into what's
> going on here. In addition, the config and logs from one Riak CS node would
> be helpful - the co
Here we go: a better description of SO_LINGER … this rings a bell. It lets you set
the time allowed for a clean versus an abrupt connection close. The side effect is
that your connection structures can get cleaned up more quickly.
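As a rough illustration (not from the thread itself), here is how SO_LINGER can be set on a socket in Python; the (onoff, seconds) pair below is an arbitrary example value:

```python
import socket
import struct

# Enable SO_LINGER with a 5-second timeout on a TCP socket.
# onoff=1 turns lingering on; a linger value of 0 would instead force
# an abrupt RST-style close when the socket is closed.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
linger = struct.pack("ii", 1, 5)  # (onoff, linger seconds)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, linger)

# Read the option back to confirm what the kernel stored.
onoff, seconds = struct.unpack(
    "ii", sock.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8)
)
print(onoff, seconds)  # 1 5
sock.close()
```

With lingering enabled, `close()` blocks up to the timeout while unsent data is flushed; with a zero timeout the connection is torn down immediately, which is what frees the connection structures faster.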
Still looking for validation of Squid / Apache using this to manage poorly
be
Hi Anton,
Thanks for letting us know that changing s3cmd settings won't work.
Before you go down the path of changing proxies, could you provide logs
from one instance of your proxy server? They may provide more insight into
what's going on here. In addition, the config and logs from one Riak CS
Luke,
it would be quite difficult to connect directly to Riak CS nodes because
Riak CS is set up to run in a DMZ, so it would require us to reconfigure the
whole network. So let's assume you are right in your thinking and it is the
proxy (tengine) causing our problems. What would you recommend to use as
This is likely something you can tweak with sysctl. The HTTP interface
already has SO_REUSEADDR on. This may be of help:
http://tux.hk/index.php?m=05&y=09&entry=entry090521-111844
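For reference, a minimal sketch of what SO_REUSEADDR does (the thread notes Riak's HTTP interface already enables it); this is illustrative only, not Riak code:

```python
import socket

# SO_REUSEADDR lets a listener rebind an address that is still sitting
# in TIME_WAIT after a restart, avoiding "Address already in use" errors.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR))  # nonzero when enabled
s.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
s.close()
```

The sysctl-level knobs the linked article covers (e.g. TIME_WAIT tuning) are separate from this per-socket option and apply system-wide.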
On Thu, Apr 3, 2014 at 10:02 AM, Sean Allen wrote:
> We are using pre11 right now.
>
> When we open http connection
We are using pre11 right now.
When we open HTTP connections, they hang around for a long time in
CLOSE_WAIT, which results in really spiky performance. When the connections
close, it's fast, then they build up again,
Is there something that needs to be configured w/ riak to get it to reuse
the sock
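To observe the CLOSE_WAIT buildup described above, one Linux-only sketch (assuming the standard `/proc/net/tcp` layout) is to count sockets in that state directly:

```python
# Count sockets in CLOSE_WAIT by reading /proc/net/tcp (Linux only).
# State code "08" is CLOSE_WAIT per the kernel's tcp_states.h.
def count_close_wait(path="/proc/net/tcp"):
    with open(path) as f:
        next(f)  # skip the header line
        # Field 3 of each row is the connection state in hex.
        return sum(1 for line in f if line.split()[3] == "08")

print(count_close_wait())
```

Running this periodically while load testing would show whether connections pile up in CLOSE_WAIT and then drain in bursts, matching the spiky behavior reported.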
Anton,
You'd be connecting clients to Riak CS directly for the purposes of
testing. We believe that your load balancer is terminating connections
prematurely, so removing it from your environment is the best way to
determine if it is causing this issue.
I am suggesting to lower pb_backlog - 256 s
Hi Luke
Is it safe to allow clients to connect directly to Riak CS as your
suggestion is kinda against official docs
(http://docs.basho.com/riakcs/latest/cookbooks/configuration/Load-Balancing-and-Proxy-Configuration/)?
I also think our problem might be caused by ring_creation_size 256 while
w
Hi Stanislav,
Could you configure your clients to connect directly to Riak CS instead of
through your proxy? A colleague of mine suggested that the
{error,closed} and the 104 error code in s3cmd appear to be caused by the
connection between
the client and Riak CS being closed or terminated unexpectedly
Hello there
We have a Riak/Riak CS cluster and want to calculate S3 storage
statistics per user on an hourly basis. As far as I understand, the
proper way to do so is to set up a schedule in the riak-cs app.config
{storage_schedule, []} section
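A hedged sketch of what such a schedule might look like; the "HHMM" string format and the exact placement in app.config are assumptions based on the Basho documentation, so verify against your Riak CS version:

```erlang
%% In riak-cs app.config, inside the riak_cs section.
%% Times are "HHMM" strings (UTC); this example runs the storage
%% calculation four times a day. An hourly schedule would list all
%% 24 hours: ["0000", "0100", ..., "2300"].
{storage_schedule, ["0000", "0600", "1200", "1800"]}
```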
We distribute configs to our nodes from a common te
Responses inline.
On Wed, Apr 2, 2014 at 3:42 AM, Igor Kukushkin wrote:
> Hi all.
>
> Here's a simple scenario that we're planning to test: a cluster of 4
> nodes, 2 are normal eleveldb-backend nodes and 2 are stored in RAM
> (with the same eleveldb backend).
>
Going to put this out here right now,
John,
Forgive the late reply but I do have something of a strong point of view on
this one.
For the case you've proposed it seems like Riak in EC2 is a great fit but
you are rightly concerned about the IO costs, especially in EC2. For me
it's about achieving predictable performance as high into y