GitHub user rsmidt created a discussion: Artificial back pressure on JDBC projections
Hey team,
I'm currently running into an issue while rolling out a new sharded projection
(events by tag; JDBC with Postgres). It processes envelopes so fast that the DB
CPU becomes saturated, which ends up throttling other parts of the application.
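For context, the setup is roughly the following (simplified sketch; the tag list and `createProjectionFor()` are placeholders for our own code, `PROJECTION_NAME` is the same constant used below): one projection instance per tag, started across the cluster via `ShardedDaemonProcess`, each with an `EventSourcedProvider.eventsByTag` source provider feeding the `JdbcProjection.groupedWithin` shown further down.

```kotlin
import java.util.Optional
import org.apache.pekko.actor.typed.ActorSystem
import org.apache.pekko.cluster.sharding.typed.ShardedDaemonProcessSettings
import org.apache.pekko.cluster.sharding.typed.javadsl.ShardedDaemonProcess
import org.apache.pekko.projection.ProjectionBehavior

// One ShardedDaemonProcess instance per tag. createProjectionFor(system, tag)
// builds the eventsByTag source provider plus the JdbcProjection shown below.
fun initProjections(system: ActorSystem<*>, tags: List<String>) {
    ShardedDaemonProcess.get(system).init(
        ProjectionBehavior.Command::class.java,
        PROJECTION_NAME,
        tags.size, // > 100 instances, one per tag
        { index -> ProjectionBehavior.create(createProjectionFor(system, tags[index])) },
        ShardedDaemonProcessSettings.create(system),
        Optional.of(ProjectionBehavior.stopMessage())
    )
}
```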
I've tried the following things so far:
### 1. Artificially throttling each projector instance
I thought that I could somehow slow down the projector requesting new events
from the journal by sleeping somewhere. I can't really do it in the handler
because this would block the JDBC transaction. Therefore, I tried to do it
right before session creation, in the session creator callback:
```kotlin
return JdbcProjection.groupedWithin(
    ProjectionId.of(PROJECTION_NAME, tag),
    sourceProvider,
    {
        // The session creator is called on each run of the projector to create
        // the session (transaction). By sleeping here, we can delay the run of
        // the projector without blocking the DB connection/transaction.
        maybeSleep()
        HibernateSession(...)
    },
    { },
    system
)
```
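For completeness, `maybeSleep()` essentially just parks the calling thread; roughly like this (the flag and duration here are illustrative):

```kotlin
import java.util.concurrent.TimeUnit
import java.util.concurrent.locks.LockSupport

// Rough shape of maybeSleep(): conditionally park the calling thread for a
// short, fixed delay before the session/transaction is created.
private fun maybeSleep() {
    if (throttlingEnabled) { // hypothetical flag
        LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(50))
    }
}
```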
Unfortunately, this only works partially, because it blocks the whole thread (via
`LockSupport.parkNanos()`). As a result, other, unrelated projections are
accidentally throttled as well (I noticed ever-increasing projection lag). I tried
to fix that by scheduling all projector instances on a dedicated pool, but judging
from the config this does not seem possible, as there is only one dispatcher for
all projections
([reference](https://pekko.apache.org/docs/pekko-projection/current/jdbc.html#configuration)).
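What I conceptually want is back pressure at the stream level rather than parked threads. One direction I've been toying with, purely as a sketch (I haven't verified that `SourceProvider` is meant to be wrapped like this, or how it interacts with offset management), is to throttle the source feeding the projection:

```kotlin
import java.time.Duration
import java.util.Optional
import java.util.concurrent.CompletionStage
import java.util.function.Supplier
import org.apache.pekko.NotUsed
import org.apache.pekko.projection.javadsl.SourceProvider
import org.apache.pekko.stream.javadsl.Source

// Sketch: delegate to the real source provider but apply a non-blocking
// throttle to the stream of envelopes, so slowing down doesn't park any
// dispatcher thread.
class ThrottledSourceProvider<Offset, Envelope>(
    private val delegate: SourceProvider<Offset, Envelope>,
    private val elements: Int,
    private val per: Duration
) : SourceProvider<Offset, Envelope>() {

    override fun source(
        offset: Supplier<CompletionStage<Optional<Offset>>>
    ): CompletionStage<Source<Envelope, NotUsed>> =
        delegate.source(offset).thenApply { it.throttle(elements, per) }

    override fun extractOffset(envelope: Envelope): Offset =
        delegate.extractOffset(envelope)

    override fun extractCreationTime(envelope: Envelope): Long =
        delegate.extractCreationTime(envelope)
}
```

The idea would be to pass e.g. `ThrottledSourceProvider(sourceProvider, 200, Duration.ofSeconds(1))` (numbers made up) to `JdbcProjection.groupedWithin` instead of the raw provider. If something along these lines is supported, I'd be happy to hear it.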
### 2. Reducing the number of concurrent projector instances
We have > 100 tags, so I thought that if I manually restricted the number of
instances to, say, 20 and had each of those 20 handle more than one tag, the
impact would stay low. However, I don't see any API to do that.
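To make it concrete, this is roughly what I'd like to express (same imports and placeholders as the setup sketch at the top; the multi-tag part is exactly where I'm stuck):

```kotlin
// Hypothetical: 20 daemon-process instances, each owning a slice of the tags,
// instead of one instance per tag.
fun initFewerProjections(system: ActorSystem<*>, tags: List<String>) {
    val desiredInstances = 20
    ShardedDaemonProcess.get(system).init(
        ProjectionBehavior.Command::class.java,
        PROJECTION_NAME,
        desiredInstances,
        { index ->
            // Each instance would be responsible for several tags...
            val tagsForInstance = tags.filterIndexed { i, _ -> i % desiredInstances == index }
            // ...but eventsByTag / JdbcProjection take exactly one tag and one source
            // provider per ProjectionId, so I only know how to build a projection for
            // one of them here.
            ProjectionBehavior.create(createProjectionFor(system, tagsForInstance.first()))
        },
        ShardedDaemonProcessSettings.create(system),
        Optional.of(ProjectionBehavior.stopMessage())
    )
}
```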
Am I maybe missing something crucial here?
GitHub link: https://github.com/apache/pekko/discussions/2431