Hi Willy,

I never replied to this mail, my apologies!

Thank you for your suggestions. Sadly, in our case, neither approach works
- but it was worth asking you first. I solved our problem with something
that executes right at the start of our application.
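For the record, the fix boils down to a per-session minimum-interval check
installed when the application boots. A rough sketch in Python of the idea
(simplified, names made up - our real code differs):

```python
import time
import threading

class SessionThrottle:
    """Enforce a minimum interval between requests per session id."""

    def __init__(self, min_interval):
        self.min_interval = min_interval
        self.last_seen = {}          # session id -> timestamp of last allowed request
        self.lock = threading.Lock()

    def wait(self, session_id):
        """Block until this session is allowed to issue its next request."""
        with self.lock:
            now = time.monotonic()
            next_allowed = self.last_seen.get(session_id, 0.0) + self.min_interval
            delay = max(0.0, next_allowed - now)
            # Reserve the slot before sleeping so concurrent callers queue up.
            self.last_seen[session_id] = now + delay
        if delay:
            time.sleep(delay)

# Created once at application start-up; one request per 500 ms per session.
throttle = SessionThrottle(min_interval=0.5)
```

Each request handler then calls `throttle.wait(cookie_value)` before doing
any work, so requests from the same session are spaced out while different
sessions never block each other.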

Thank you!

Alex


On Wed, Nov 28, 2012 at 7:20 AM, Willy Tarreau <w...@1wt.eu> wrote:

> Hi Alex,
>
> On Tue, Nov 27, 2012 at 11:41:08PM +0000, Alex Davies wrote:
> > Hey All,
> >
> > I have an application that can only handle, for some URLs, one request
> > every x seconds from each session (identified by a specific cookie).
> >
> > Rather than adding logic to handle this to the application itself (which
> > I fear I will have to do), I would like to know if it is possible to use
> > the rate limiting functionality in HAProxy to delay the sending of a
> > request to the backend until the time the last request was sent plus x
> > fraction of a second?
> >
> > I see a bunch of example configs on the internet that reject connections
> > over x per second, but nothing that queues requests in order.
>
> If you have just a few such URLs, one thing you could do that approaches
> your need is to have a backend per URL (or per group of URLs), in which a
> "tcp-request content" rule causes artificially long pauses when the session
> rate is too high. This can work well as long as the number of concurrent
> connections on that backend remains low and known (so that you can adjust
> the timer). If you need only one connection at a time, you could chain
> a server with maxconn 1 to such an installation.
>
> A simple example would look like this:
>
>     frontend front
>         use_backend limited if { path_beg /foo /bar }
>
>     # limited to 2 requests per second
>     backend limited
>     tcp-request inspect-delay 500ms
>         tcp-request content accept if { be_sess_rate le 2 } || WAIT_END
>         server ...
>
> This can work well for printing devices or PDF generators for example.
> It won't work well at all if you don't know the number of concurrent
> users, in which case it could be done like this (even uglier):
>
>     frontend front
>         use_backend serialize if { path_beg /foo /bar }
>
>     # serialize requests
>     backend serialize
>         server limited 127.0.0.1:1 maxconn 1 send-proxy
>
>     # limited to 2 requests per second
>     listen limited
>         bind 127.0.0.1:1 accept-proxy
>     tcp-request inspect-delay 500ms
>         tcp-request content accept if { be_sess_rate le 2 } || WAIT_END
>         server ...
>
> This time it will do the job whatever the number of concurrent clients.
> But it's not very pretty...
>
> Willy


-- 
Alex Davies

This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
the sender immediately by e-mail and delete this e-mail permanently.
