[
https://issues.apache.org/jira/browse/NIFI-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932318#comment-17932318
]
Josef Zahner commented on NIFI-14271:
-------------------------------------
[~pgrey] thanks a lot for your quick fix. We are going to test this based
on your code. We have another issue,
https://issues.apache.org/jira/browse/NIFI-14160, with the ConsumeKafka
processor; I've seen that you are on the "Watch" list there as well. Maybe you
can help with that one too? It seems that the properties aren't applied correctly...
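The quoted documentation below describes the intended merge semantics: dynamic properties are added after the loaded configuration, and a dynamic property whose key is already set is ignored with a WARN. As a point of reference for testing, that documented behavior can be sketched in plain Java; this is only an illustration of what the documentation says, not NiFi's actual implementation, and the property names and values are made-up examples.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class DynamicPropertyMerge {

    // Apply dynamic properties on top of an already-loaded Kafka configuration.
    // Per the documented behavior: a dynamic property whose key is already set
    // in the loaded configuration is ignored, and a warning is emitted instead.
    static Properties merge(Properties loaded, Map<String, String> dynamic) {
        Properties result = new Properties();
        result.putAll(loaded);
        for (Map.Entry<String, String> e : dynamic.entrySet()) {
            if (result.containsKey(e.getKey())) {
                System.err.printf("WARN: dynamic property [%s] ignored, already set%n",
                        e.getKey());
            } else {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Properties loaded = new Properties();
        loaded.setProperty("max.poll.records", "400000"); // set by the service

        Map<String, String> dynamic = new LinkedHashMap<>();
        dynamic.put("max.poll.records", "500");   // collides: ignored with WARN
        dynamic.put("fetch.max.wait.ms", "1000"); // new key: applied

        Properties merged = merge(loaded, dynamic);
        System.out.println("max.poll.records = " + merged.getProperty("max.poll.records"));
        System.out.println("fetch.max.wait.ms = " + merged.getProperty("fetch.max.wait.ms"));
    }
}
```

The bug report suggests the service is rejecting or mishandling the dynamic keys before any such merge happens, so comparing against this expected behavior may help narrow things down.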
> Kafka3ConnectionService not possible to add dynamic properties
> --------------------------------------------------------------
>
> Key: NIFI-14271
> URL: https://issues.apache.org/jira/browse/NIFI-14271
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 2.2.0
> Reporter: Josef Zahner
> Assignee: Paul Grey
> Priority: Major
> Attachments: nifi_controller_service_error.png,
> nifi_kafka3connectionservice_dyn_prop.png
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Based on the web documentation it should be possible to add Dynamic
> Properties to the Controller Service *Kafka3ConnectionService* to adapt the
> Kafka configuration.
> We switched the bulletin level to "INFO" on the ConsumeKafka processor to see
> the potentially supported options. However, no matter which value we add (see
> the reference at the bottom), we always get the error:
> !nifi_controller_service_error.png!
>
> !nifi_kafka3connectionservice_dyn_prop.png!
> h2. _*References*_
> *NiFi Documentation: Kafka3ConnectionService - Dynamic Properties*
> _«These properties will be added on the Kafka configuration after loading any
> provided configuration properties. In the event a dynamic property represents
> a property that was already set, its value will be ignored and WARN message
> logged. For the list of available Kafka properties please refer to:
> [http://kafka.apache.org/documentation.html#configuration].»_
> *ConsumeKafka INFO Logs:*
> {code}
> 2025-02-12 16:53:07,718 INFO [Timer-Driven Process Thread-73]
> o.a.k.clients.consumer.ConsumerConfig ConsumerConfig values:
> allow.auto.create.topics = true
> auto.commit.interval.ms = 5000
> auto.include.jmx.reporter = true
> auto.offset.reset = earliest
> bootstrap.servers = [kafka.mydomain.com:9093]
> check.crcs = true
> client.dns.lookup = use_all_dns_ips
> client.id = consumer-test-69
> client.rack =
> connections.max.idle.ms = 540000
> default.api.timeout.ms = 60000
> enable.auto.commit = true
> enable.metrics.push = true
> exclude.internal.topics = true
> fetch.max.bytes = 52428800
> fetch.max.wait.ms = 500
> fetch.min.bytes = 1
> group.id = test
> group.instance.id = null
> group.protocol = classic
> group.remote.assignor = null
> heartbeat.interval.ms = 3000
> interceptor.classes = []
> internal.leave.group.on.close = true
> internal.throw.on.fetch.stable.offset.unsupported = false
> isolation.level = read_committed
> key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
> max.partition.fetch.bytes = 1048576
> max.poll.interval.ms = 300000
> max.poll.records = 400000
> metadata.max.age.ms = 300000
> metadata.recovery.strategy = none
> metric.reporters = []
> metrics.num.samples = 2
> metrics.recording.level = INFO
> metrics.sample.window.ms = 30000
> partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
> receive.buffer.bytes = 65536
> reconnect.backoff.max.ms = 1000
> reconnect.backoff.ms = 50
> request.timeout.ms = 30000
> retry.backoff.max.ms = 1000
> retry.backoff.ms = 100
> sasl.client.callback.handler.class = null
> sasl.jaas.config = null
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.min.time.before.relogin = 60000
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> sasl.kerberos.ticket.renew.window.factor = 0.8
> sasl.login.callback.handler.class = null
> sasl.login.class = null
> sasl.login.connect.timeout.ms = null
> sasl.login.read.timeout.ms = null
> sasl.login.refresh.buffer.seconds = 300
> sasl.login.refresh.min.period.seconds = 60
> sasl.login.refresh.window.factor = 0.8
> sasl.login.refresh.window.jitter = 0.05
> sasl.login.retry.backoff.max.ms = 10000
> sasl.login.retry.backoff.ms = 100
> sasl.mechanism = GSSAPI
> sasl.oauthbearer.clock.skew.seconds = 30
> sasl.oauthbearer.expected.audience = null
> sasl.oauthbearer.expected.issuer = null
> sasl.oauthbearer.header.urlencode = false
> sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
> sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
> sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
> sasl.oauthbearer.jwks.endpoint.url = null
> sasl.oauthbearer.scope.claim.name = scope
> sasl.oauthbearer.sub.claim.name = sub
> sasl.oauthbearer.token.endpoint.url = null
> security.protocol = SSL
> security.providers = null
> send.buffer.bytes = 131072
> session.timeout.ms = 45000
> socket.connection.setup.timeout.max.ms = 30000
> socket.connection.setup.timeout.ms = 10000
> ssl.cipher.suites = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
> ssl.endpoint.identification.algorithm = https
> ssl.engine.factory.class = null
> ssl.key.password = [hidden]
> ssl.keymanager.algorithm = SunX509
> ssl.keystore.certificate.chain = null
> ssl.keystore.key = null
> ssl.keystore.location = /etc/nifi/certs/nifi2_keystore.jks
> ssl.keystore.password = [hidden]
> ssl.keystore.type = JKS
> ssl.protocol = TLSv1.3
> ssl.provider = null
> ssl.secure.random.implementation = null
> ssl.trustmanager.algorithm = PKIX
> ssl.truststore.certificates = null
> ssl.truststore.location = /etc/nifi/certs/truststore.jks
> ssl.truststore.password = [hidden]
> ssl.truststore.type = JKS
> value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)