Cluster protection, mainly. Swift's rate limiting is based on write requests
(e.g. PUT, POST, DELETE) per second per container. Since a large number of object
writes in a single container could cause some background processes to back up
and not service other requests, limiting the ops/sec to a container protects the
rest of the cluster.
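For reference, the knobs live in the ratelimit middleware section of
proxy-server.conf; the values below are illustrative, not recommendations:

    [filter:ratelimit]
    use = egg:swift#ratelimit
    # how closely in sync the proxies' clocks are assumed to be (1000 = 1ms)
    clock_accuracy = 1000
    # requests that would have to be delayed longer than this get a 498 instead
    max_sleep_time_seconds = 60
    # containers with >= 100 objects: at most 100 write requests/sec;
    # containers with >= 1000 objects: at most 50 write requests/sec
    container_ratelimit_100 = 100
    container_ratelimit_1000 = 50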
Am curious,
Any reason why swift got in the business of ratelimiting in the first place?
-Josh
John Dickinson wrote:
Swift does rate limiting across the proxy servers ("api servers" in nova
parlance) as described at http://docs.openstack.org/developer/swift/ratelimit.html. It
uses a memcache
On Tue, 14 Jun 2016 16:10:04 +
"Kingshott, Daniel" wrote:
> We use Haproxy to load balance API requests and applied rate limiting
> there.
+1. We've also had success with rate limiting via haproxy (largely related to
Horizon and StackTask) with stick counters and rolling windows; the sort of
options involved are sketched below.
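Roughly the shape of it (frontend/backend names, ports, and thresholds here are
purely illustrative, not our production values):

    frontend openstack_api
        bind *:8774
        # per-source request rate over a 10s rolling window, kept in a stick table
        stick-table type ip size 100k expire 60s store http_req_rate(10s)
        http-request track-sc0 src
        # reject clients that exceed ~100 requests per 10s window
        http-request deny if { sc_http_req_rate(0) gt 100 }
        default_backend nova_api

    backend nova_api
        balance roundrobin
        server api1 192.0.2.11:8774 check
        server api2 192.0.2.12:8774 check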
Swift does rate limiting across the proxy servers ("api servers" in nova
parlance) as described at
http://docs.openstack.org/developer/swift/ratelimit.html. It uses a memcache
pool to coordinate the rate limiting across proxy processes (local or across
machines).
Code's at
https://github.com/
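To give a rough intuition of how the memcache coordination works, here is a
deliberately simplified sketch (not the real implementation; the key scheme,
fixed one-second window, and python-memcached client are just for illustration):

    # Simplified illustration: every proxy process shares one counter per
    # container per second via memcache, so the limit is enforced cluster-wide.
    import time
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def over_limit(container, max_writes_per_sec):
        key = 'ratelimit/%s/%d' % (container, int(time.time()))
        # The first writer in this one-second window creates the counter...
        if mc.add(key, 1, time=2):
            return False
        # ...everyone else increments the shared value and checks the budget.
        count = mc.incr(key)
        return count is not None and count > max_writes_per_sec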
Thanks for all the pointers.
Vahric, we're running into this in our lab on a compute host with 135
instances and 12 meters, 3 of which we developed.
/Bill
On Tue, Jun 14, 2016 at 2:54 PM, Vahric Muhtaryan
wrote:
> Hello Bill
>
> Possible to share how many instance and how many meter per instan
+1 also SSL
On Tue, Jun 14, 2016 at 4:58 PM, Russell Bryant wrote:
> This is the most common approach I've heard of (doing rate limiting in
> your load balancer).
>
> On Tue, Jun 14, 2016 at 12:10 PM, Kingshott, Daniel <
> daniel.kingsh...@bestbuy.com> wrote:
>
>> We use Haproxy to load balance
This is the most common approach I've heard of (doing rate limiting in your
load balancer).
On Tue, Jun 14, 2016 at 12:10 PM, Kingshott, Daniel <
daniel.kingsh...@bestbuy.com> wrote:
> We use Haproxy to load balance API requests and applied rate limiting
> there.
>
>
>
> On Tue, Jun 14, 2016 at 9
Hi All,
Our bi-weekly meeting will occur tomorrow, Weds at 15:00 UTC in
openstack-meeting-4. Please note that we have changed times to 15:00
UTC based on a recent meeting and a doodle poll [1]. Same day, same
IRC room, just a different time.
I've added a couple items to the agenda, and feel free
Hello Bill
Is it possible to share how many instances, and how many meters per instance,
you are collecting when you get this error?
I guess for scaling purposes you are talking about this, right?
http://docs.openstack.org/ha-guide/controller-ha-telemetry.html
Regards
VM
From: Bill Jones
Date: Tuesday 1
On 14/06/16 18:28, "Edgar Magana" wrote:
>Second that one! Feels like one of the best options, we are moving towards
>that direction.
>
>Edgar
>
For completeness, Rackspace had a project called Repose which did rate
limiting. Core is at https://github.com/rackerlabs/repose
tim
>On 6/14/16
On 14/06/16 18:00, "Matt Riedemann" wrote:
>On 6/14/2016 10:14 AM, Kris G. Lindgren wrote:
>> Cern is running ceilometer at scale with many thousands of compute
>> nodes. I think their blog goes into some detail about it [1], but I
>> don’t have a direct link to it.
>>
>>
>> [1] - http://openst
Personally I run undercloud on a vm (kvm) and snapshot it before messing
with the heat stack :)
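Something along these lines (domain and snapshot names are only examples):

    virsh snapshot-create-as undercloud pre-overcloud-update \
        --description "before touching the heat stack"
    # and to roll back later:
    virsh snapshot-revert undercloud pre-overcloud-update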
Regards
On Tue, Jun 14, 2016 at 6:16 PM, Charles Short wrote:
> Well I just tested this
>
> Tried to create a snapshot of the heat stack overcloud (from a new clean
> state).
> The snapshot is st
Very Cool thanks Piet!
I'm very much looking forward to participating in this; we just went
through a big nova upgrade, so this is ideal timing.
Thanks,
Dan
From: "Kruithof Jr, Pieter"
Date: Tuesday, June 14, 2016 at 10:19 AM
To: "openstack-operators@lists.openstack.org"
Cc: "danielle.m
Hi Operators,
Danielle Mundle will be contributing to upstream to help conduct user research
on behalf of the OpenStack community. In past lives, Danielle has provided
user research/usability consulting services to companies like Dell, Bose and
Citibank.
One of her priorities is to begin inve
Well I just tested this
Tried to create a snapshot of the heat stack overcloud (from a new clean
state).
The snapshot is stuck IN PROGRESS (for over an hour). I cannot remove it.
Perhaps this is not such a good/reliable method.
I will revert to my CloneZilla bare metal imaging to restore ba
Second that one! Feels like one of the best options, we are moving towards that
direction.
Edgar
On 6/14/16, 9:10 AM, "Kingshott, Daniel" wrote:
>We use Haproxy to load balance API requests and applied rate limiting
>there.
>
>
>
>On Tue, Jun 14, 2016 at 9:02 AM, Matt Riedemann
> wrote:
>
>A q
On 6/14/2016 10:56 AM, Kevin Bringard (kevinbri) wrote:
+1 to this +1.
As pointed out, it’s never really worked anyway, and I think it only serves to
confuse and frustrate people. API rate limiting should probably be happening
higher up on the stack where connections are concentrated.
Chris,
Awesome locations! Looking forward to have the final one and the date to do the
booking.
Edgar
From: Chris Morgan
Date: Tuesday, June 14, 2016 at 8:09 AM
To: OpenStack Operators
Subject: [Openstack-operators] Mid-Cycle Ops Meetup venue choice - please make
your voice heard!
[DISCLAIM
We use Haproxy to load balance API requests and applied rate limiting
there.
On Tue, Jun 14, 2016 at 9:02 AM, Matt Riedemann
wrote:
A question came up in the nova IRC channel this morning about the
api_rate_limit config option in nova which was only for the v2 API.
Sean Dague explained that i
On 6/14/2016 10:14 AM, Kris G. Lindgren wrote:
Cern is running ceilometer at scale with many thousands of compute
nodes. I think their blog goes into some detail about it [1], but I
don’t have a direct link to it.
[1] - http://openstack-in-production.blogspot.com/
Hi,
TripleO stable Mitaka
I am testing expanding my stack by adding more compute nodes. The first
update failed, leaving the overcloud stack in a failed state.
Is it best practice to create a snapshot of the overcloud heat template
before updating the stack?
You could then roll back and try the update again.
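(To be concrete, I mean something like the heat CLI's stack snapshots; commands
from memory, so treat this as a sketch:)

    heat stack-snapshot overcloud
    heat snapshot-list overcloud
    # roll back to a given snapshot:
    heat stack-restore overcloud <snapshot-id>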
On 6/14/16, 9:44 AM, "Matt Fischer" wrote:
>On Tue, Jun 14, 2016 at 9:37 AM, Sean Dague
> wrote:
>
>On 06/14/2016 11:02 AM, Matt Riedemann wrote:
>> A question came up in the nova IRC channel this morning about the
>> api_rate_limit config option in nova which was only for the v2 API.
>>
>> Se
Matt Fischer wrote:
On Tue, Jun 14, 2016 at 9:37 AM, Sean Dague <s...@dague.net> wrote:
On 06/14/2016 11:02 AM, Matt Riedemann wrote:
> A question came up in the nova IRC channel this morning about the
> api_rate_limit config option in nova which was only for the v2 API.
On Tue, Jun 14, 2016 at 9:37 AM, Sean Dague wrote:
> On 06/14/2016 11:02 AM, Matt Riedemann wrote:
> > A question came up in the nova IRC channel this morning about the
> > api_rate_limit config option in nova which was only for the v2 API.
> >
> > Sean Dague explained that it never really worked
On 06/14/2016 11:02 AM, Matt Riedemann wrote:
> A question came up in the nova IRC channel this morning about the
> api_rate_limit config option in nova which was only for the v2 API.
>
> Sean Dague explained that it never really worked because it was per API
> server so if you had more than one A
I will posit that anyone who is interested in rate limiting is probably
already load balancing their API servers. We've been looking into rate
limiting at the load balancers, but have not needed to implement it yet.
That will likely be our solution when it's finally implemented.
Question: If there
On Tue, Jun 14, 2016 at 9:02 AM, Matt Riedemann
wrote:
> A question came up in the nova IRC channel this morning about the
> api_rate_limit config option in nova which was only for the v2 API.
>
> Sean Dague explained that it never really worked because it was per API
> server so if you had more
Cern is running ceilometer at scale with many thousands of compute nodes. I
think their blog goes into some detail about it [1], but I don’t have a direct
link to it.
[1] - http://openstack-in-production.blogspot.com/
Kris Lindgren
[DISCLAIMER AT BOTTOM OF EMAIL]
Hello Everyone,
There are two possible venues for the next OpenStack Operators Mid-Cycle
meetup. They both seem suitable and the details are listed here :
https://etherpad.openstack.org/p/ops-meetup-venue-discuss
To guide the decision making process and since ti
See below.
On Mon, 2016-06-13 at 22:12 -0400, Adam Young wrote:
> On 06/13/2016 07:08 PM, Marc Heckmann wrote:
> >
> > Hi,
> >
> > I currently have a lab setup using SAML2 federation with Microsoft
> > ADFS.
> >
> > The federation part itself works wonderfully. However, I'm also
> > trying
> >
A question came up in the nova IRC channel this morning about the
api_rate_limit config option in nova which was only for the v2 API.
Sean Dague explained that it never really worked because it was per API
server, so if you had more than one API server it was busted. There is no
in-tree replacement.
Has anyone had any experience with scaling ceilometer compute agents?
We're starting to see messages like this in logs for some of our compute
agents:
WARNING ceilometer.openstack.common.loopingcall [-] task run outlasted interval by 293.25 sec
This is an indication that the compute agent failed to finish its polling run
within the configured interval.
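(For context, the interval being outlasted is the per-source polling interval in
/etc/ceilometer/pipeline.yaml; a stock-looking fragment, values illustrative:)

    sources:
        - name: meter_source
          interval: 600        # seconds between polling runs
          meters:
              - "*"
          sinks:
              - meter_sink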
Folks,
today we should have the Performance working group IRC meeting, but I have
to announce that I won't be able to attend it today as well as several more
folks. Therefore let's decline this event for today and make it happen next
week due to the usual schedule.
I'm really sorry for the inconv
Nova is getting towards its final phases of the long-term arc to really
standardize the API, which includes removing the API extensions
facility. This has been a long arc that was started in Atlanta. And has
been talked about in a lot of channels, but some interactions this past
week made us reali
Hi All -
We have a Scientific WG IRC meeting on Tuesday 14 June at 2100 UTC on channel
#openstack-meeting.
The agenda is available here[1] and full IRC meeting details are here[2].
The headline agenda item is the Supercomputing 2016 conference in November in
Salt Lake City. There are several