Hi Max,

On Wed, Aug 14, 2024 at 06:21:39AM +0000, Moehl, Maximilian wrote:
> Hi Willy,
> 
> > > Is there a similar mechanism in HAProxy? So far I can only see the
> > > static option for the initial window size which comes with the mentioned
> > > drawbacks.
> >
> > There is nothing similar. One of the problems H2 is facing is that there
> > can be application congestion anywhere in the chain. For example, let's
> > say a POST request is sent to a server subject to a maxconn and remains
> > in the queue for a few seconds. We definitely don't want to block the
> > whole connection during this time because we've oversized the window.
> >
> > And there's also the problem of not allocating too many buffers to each
> > stream, or it becomes a trivial DoS.
> 
> That makes sense. Since we had to accommodate the use case which triggered
> the investigation, we increased the window size to 512k as a middle ground.
> We don't usually see congestion on our HAProxies, so we're hoping that this
> doesn't cause any other issues; we'll see once it hits prod.

There used to be a case where this caused significant problems: initially,
the same initial window size applied to both the frontend and the backend.
If you were using H2 on the backend as well, and two requests from different
clients were sent over the same H2 backend connection, a slow reader was
enough to make the other one experience read timeouts. We've since addressed
this on two fronts (a small config sketch follows the list):
  - it's possible to configure the backend-side window size separately
    from the frontend-side
  - "http-reuse safe" (the default mode) makes H2 backend connections
    private, that is, they're not shared between multiple client
    connections. This results in more connections on the backend side
    but the progress on the backend matches the progress on the frontend.
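
In configuration terms, that roughly translates to something like the
following. It's only a minimal sketch, assuming a recent version (2.6+)
where the fe/be variants of the tuning directive exist, and the values are
purely illustrative:

  global
      # backend-side H2 window, now tunable independently of the frontend
      # side (65535 is the conservative default, shown only for illustration)
      tune.h2.be.initial-window-size 65535

  defaults
      # "safe" is already the default mode; spelled out here only to make
      # the "private H2 backend connections" behaviour explicit
      http-reuse safe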

The remaining issue you may encounter with large frontend windows is that
if a server sometimes pauses while consuming POST bodies, it may
occasionally prevent the client from performing requests in parallel over
the same connection. Such trouble is quite rare of course, but we prefer to
stay on the safe side, which is why we don't use a larger window by
default. If you know that your servers are "normal" and consume the bodies
sent to them, there's no real problem with increasing the window size.
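
For example, raising the frontend-side window to the 512k you mentioned
would look roughly like this (again just a sketch, not a recommendation of
a particular value):

  global
      # larger frontend-side H2 window (512 kB); reasonable as long as the
      # servers reliably consume the request bodies sent to them
      tune.h2.fe.initial-window-size 524288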

A pathological case I met a long time ago was a few URLs served by a
partner's server behind a slow leased line. In that situation, a large POST
sent over this link could occasionally face congestion and temporarily
freeze the client connection for the time it took to upload the body. As
you can see, not everyone hosts such applications.

Hoping this helps,
Willy

