Hi Chris,

Thanks for the KIP, this looks like a really useful addition to Kafka
Connect's log level REST APIs! I have a few questions and comments:

> If no modifications to the namespace have
> been made since the worker was started,
> they will be null

If no modifications to a logging namespace have been made, won't the
namespace itself be omitted from the response? It looks like the *GET
/admin/loggers* endpoint currently only returns loggers that have non-null
log levels.
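
To make sure we're talking about the same behavior, here's my (possibly
wrong) reading of the existing endpoint as a rough sketch rather than the
actual Connect code; currentLoggers() is a made-up helper:

    // Loggers whose level was never explicitly set are filtered out
    // entirely, so there's no namespace left in the response to attach a
    // null last-modified field to.
    Map<String, String> response = new TreeMap<>();
    for (Logger logger : currentLoggers()) {
        if (logger.getLevel() != null) {
            response.put(logger.getName(), logger.getLevel().toString());
        }
    }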

> Last modified timestamp

From the proposed changes section, it isn't very clear to me how this last
modified timestamp will be tracked and then returned in responses for the
*GET /admin/loggers* and *GET /admin/loggers/{logger}* endpoints. Could you
please elaborate on this? Also, will we track the last modified timestamp
even for worker-scoped modifications, where we won't write any records to
the config topic and the requests will essentially be processed
synchronously?
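
To make the question concrete, is the idea something along these lines? All
names here are made up purely to illustrate what I'm asking about:

    // Record the wall-clock time whenever a namespace's level is adjusted.
    private final Map<String, Long> lastModified = new ConcurrentHashMap<>();

    private void setLevel(String namespace, String level) {
        // ... adjust the logger(s) under this namespace ...
        lastModified.put(namespace, System.currentTimeMillis());
    }

    // GET /admin/loggers/{logger} would then look up lastModified.get(namespace)
    // and return null if the namespace has never been modified on this worker.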

> Record values will have the following format, where ${level} is the new
> logging level for the namespace:

In the current synchronous implementation of the *PUT
/admin/loggers/{logger}* endpoint, we return a 404 error if the level is
invalid (i.e. not one of TRACE, DEBUG, WARN, etc.). Since the new
cluster-scoped variant of the endpoint will be asynchronous, can we also
add validation that synchronously surfaces erroneous log levels to users?
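
For example (just a sketch, assuming we can reuse log4j's level parsing and
Connect's existing BadRequestException; not necessarily the exact shape
this should take):

    // Reject unknown levels up front, before writing anything to the
    // config topic; Level.toLevel returns the supplied default (null here)
    // if the name can't be parsed.
    Level parsed = Level.toLevel(requestedLevel, null);
    if (parsed == null) {
        throw new BadRequestException("Invalid log level: " + requestedLevel);
    }
    // only after this validation would the record be written to the config topic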

> Workers that have not yet completed startup
> will ignore these records, including if the worker
> reads one during the read-to-end of the config
> topic that all workers perform during startup.

I'm curious what the rationale here is. In KIP-745, the stated reasoning
behind ignoring restart requests during worker startup was that the worker
will be starting all connectors and tasks assigned to it anyway, so the
restart request is essentially meaningless. With the log level API,
however, wouldn't it make more sense to apply any cluster-scoped
modifications to new workers in the cluster too? This also applies to any
workers that are restarted soon after a request is made to *PUT
/admin/loggers/{logger}?scope=cluster* on another worker. Maybe we could
stack up all the cluster-scoped log level modification requests during the
config topic read at worker startup and apply the latest one per namespace
(assuming older records haven't already been compacted) after we've
finished reading to the end of the config topic?
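
To make that suggestion a bit more concrete, here's a very rough sketch of
what I have in mind; none of these names correspond to actual Connect
internals, they're just for illustration:

    // Collect cluster-scoped log level records seen while replaying the
    // config topic during startup; later records win per namespace.
    private final Map<String, String> pendingLevels = new HashMap<>();

    void onLoggerLevelRecord(String namespace, String level) {
        pendingLevels.put(namespace, level);
    }

    // Once the startup read-to-end of the config topic has completed,
    // apply the latest requested level for each namespace.
    void onReadToEndComplete() {
        pendingLevels.forEach(this::applyLevel); // applyLevel is a made-up helper
        pendingLevels.clear();
    }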

> if you're debugging the distributed herder, you
> need all the help you can get

😂

As an aside, thanks for the impressively thorough testing plan in the KIP!


Hi Ashwin,

> isn't it better to forward the requests to the
> leader in the initial implementation ?

Would there be any downside to only introducing leader forwarding for
connector/task-specific scope types in the future (rather than introducing
it at the outset here, where it isn't strictly required)?

> I would also recommend an additional system
> test for Standalone herder to ensure that the
> new scope parameter is honored and the response
> contains the last modified time.

Can't this be sufficiently covered using unit and/or integration tests?
System tests are fairly expensive to run in terms of overall test runtime,
and they also aren't run on every PR or commit to trunk or feature branches
(unlike unit tests and integration tests).

Thanks,
Yash

On Sat, Sep 2, 2023 at 5:12 AM Chris Egerton <chr...@aiven.io.invalid>
wrote:

> Hi all,
>
> Can't imagine a worse time to publish a new KIP (it's late on a Friday and
> we're in the middle of the 3.6.0 release), but I wanted to put forth
> KIP-976 for discussion:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-976%3A+Cluster-wide+dynamic+log+adjustment+for+Kafka+Connect
>
> TL;DR: The API to dynamically adjust log levels at runtime with Connect is
> great, and I'd like to augment it with support to adjust log levels for
> every worker in the cluster (instead of just the worker that receives the
> REST request).
>
> I look forward to everyone's thoughts, but expect that this will probably
> take a bump or two once the dust has settled on 3.6.0. Huge thanks to
> everyone that's contributed to that release so far, especially our release
> manager Satish!
>
> Cheers,
>
> Chris
>
