Frank,

The configs are being compared after ConfigProviders have been resolved.
This happens both for the connector config (via
ClusterConfigState::connectorConfig) and for the task configs (via
ClusterConfigState::taskConfig).
This means that two configurations that have different direct contents (the
path to a secret changed) can resolve to the same value if both paths
produce the same value after resolving the config provider.
This also means that if you change the secret on disk and re-submit the
config, the new secret will be resolved in each of the ClusterConfigState
calls, and also end up looking equivalent.
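To make the failure mode concrete, here is a minimal sketch (not actual Kafka code) of why two configs with different raw contents can compare equal after resolution. The `resolve` method and the `${file:...}` references are illustrative stand-ins for a ConfigProvider; assume both paths currently hold the same secret.

```java
import java.util.Map;

public class ResolvedConfigEquality {

    // Stand-in for ConfigProvider resolution; assume both referenced
    // paths point at files that currently contain the same secret value.
    static String resolve(String reference) {
        return "s3cr3t";
    }

    static Map<String, String> resolvedConfig(String secretReference) {
        return Map.of("db.password", resolve(secretReference));
    }

    public static void main(String[] args) {
        Map<String, String> gen1 = resolvedConfig("${file:/old/path:password}");
        Map<String, String> gen2 = resolvedConfig("${file:/new/path:password}");
        // The raw configs differ (different paths), but the resolved maps
        // are equal, so a comparison on resolved values sees no change.
        System.out.println(gen1.equals(gen2)); // prints "true"
    }
}
```

The same effect occurs when the path is unchanged but the secret on disk is rewritten: both generations re-resolve to the new value and again compare equal.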

> Would capturing a new generation value within the config itself on every
submitted change be a possible fix/workaround?

This is the workaround I proposed earlier in this thread for external
users to force updates: add a nonce to the connector configuration.
I don't think it's reasonable for the framework to do this unconditionally,
so maybe we need to find an alternative if we want to fix this for everyone
by default.
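For illustration, the nonce workaround could look like the sketch below: tack a throwaway key onto every submitted config so consecutive submissions never compare equal. The key name "config.nonce" is hypothetical, not a property Connect recognizes, and a real client might use a timestamp instead of a counter.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class NonceWorkaround {
    // Monotonic counter used as the nonce source so each call differs.
    private static final AtomicLong COUNTER = new AtomicLong();

    // Returns a copy of the config with a fresh, ignored extra key.
    static Map<String, String> withNonce(Map<String, String> config) {
        Map<String, String> copy = new HashMap<>(config);
        copy.put("config.nonce", Long.toString(COUNTER.incrementAndGet()));
        return copy;
    }

    public static void main(String[] args) {
        Map<String, String> base = Map.of("connector.class", "FileStreamSource");
        // Two submissions of the "same" config now differ, so the herder
        // treats the second one as an update even if secrets resolve
        // to identical values.
        System.out.println(withNonce(base).equals(withNonce(base))); // prints "false"
    }
}
```

The cost Frank notes below applies here too: re-submitting an unchanged config is no longer a no-op, since the nonce alone forces a task reconfigure.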

Greg

On Thu, Feb 9, 2023 at 8:26 AM Frank Grimes <frankgrime...@yahoo.com.invalid>
wrote:

>  I'm still having trouble understanding how the configs could match in the
> code you highlighted when we change connector and/or task config values
> when no keys are being pruned by the connector implementations in
> question.
> Would capturing a new generation value within the config itself on every
> submitted change be a possible fix/workaround?
> The possible slightly negative consequence of that change would be that
> re-submitting the same config, which would effectively be a no-op in the
> current implementation, would now force task reconfigures/restarts?
>     On Wednesday, February 8, 2023, 12:47:19 PM EST, Greg Harris
> <greg.har...@aiven.io.invalid> wrote:
> > This is the condition which is causing the issue:
> > https://github.com/apache/kafka/blob/6e2b86597d9cd7c8b2019cffb895522deb63c93a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L1918-L1931
> > The DistributedHerder is comparing the generation 1 and generation 2
> > configurations, and erroneously believes that they are equal, when in fact
> > the underlying secrets have changed.
> > This prevents the DistributedHerder from writing generation 2 task
> > configs to the KafkaConfigBackingStore entirely, leaving it in state (A)
> > without realizing it.