Hi,
Using kafka-configs, I wanted to update the broker truststore location to the
new truststore:
C:\>kafka-configs.cmd --bootstrap-server kafka-server:9443 --entity-type brokers
  --entity-name 1 --command-config kafka-brokers.properties --alter
  --add-config 'listener.name.ssl.ssl.truststore.
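(For reference: dynamic per-listener overrides follow the pattern
listener.name.<listenerName>.<configName>. Assuming the listener here is named
"ssl", a complete invocation of this shape might look like the sketch below;
the truststore path and password are purely illustrative assumptions:)

REM illustrative values only: truststore path and password are assumptions
C:\>kafka-configs.cmd --bootstrap-server kafka-server:9443 --entity-type brokers ^
  --entity-name 1 --command-config kafka-brokers.properties --alter ^
  --add-config "listener.name.ssl.ssl.truststore.location=C:/ssl/new-truststore.jks,listener.name.ssl.ssl.truststore.password=changeit"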
Issue created here:
https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-9202
On 2019/11/15 15:54:51, Jorg Heymans wrote:
> Debugging into ConsoleConsumer.scala it eventually just calls this:
>
> val convertedBytes = deserializer.map(_.deserialize(topic,
>   nonNullBytes).toString.getBytes(StandardCharsets.UTF_8)).getOrElse(nonNullBytes)
Debugging into ConsoleConsumer.scala it eventually just calls this:
val convertedBytes = deserializer.map(_.deserialize(topic,
  nonNullBytes).toString.getBytes(StandardCharsets.UTF_8)).getOrElse(nonNullBytes)
See
https://github.com/apache/kafka/blob/33d06082117d971cdcddd4f01392006b543f3c
contains SSL related config.
Jorg
On 2019/11/12 15:06:02, "M. Manna" wrote:
> Hi,
>
> On Tue, 12 Nov 2019 at 14:37, Jorg Heymans wrote:
>
> > Thanks for helping debug this. You can reproduce the issue using the
> > deserializer below, and invoking kafka
    }

    @Override
    public void close() {
        System.out.println("CLOSE");
    }
}
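(The class above is cut off by the archive; a minimal, self-contained
deserializer along the lines being discussed might look like the sketch below.
The class name and the printed markers are illustrative assumptions; the point
is to show which deserialize() variant the console consumer ends up calling.)

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Deserializer;

// Hypothetical reconstruction of the reproduction case: overriding the
// headers-aware default method. ConsoleConsumer calls the two-argument
// deserialize(), so "WITH HEADERS" is never printed there.
public class HeaderAwareDeserializer implements Deserializer<String> {

    @Override
    public String deserialize(String topic, byte[] data) {
        System.out.println("DESERIALIZE WITHOUT HEADERS");
        return new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public String deserialize(String topic, Headers headers, byte[] data) {
        System.out.println("DESERIALIZE WITH HEADERS");
        return deserialize(topic, data);
    }

    @Override
    public void close() {
        System.out.println("CLOSE");
    }
}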
On 2019/11/12 12:57:21, "M. Manna" wrote:
> Hi again,
>
> On Tue, 12 Nov 2019 at 12:31, Jorg Heymans wrote:
>
> > Hi,
> >
> > The issue is not that I cannot get a custom deserializer
Hi,
The issue is not that I cannot get a custom deserializer working, it's that the
custom deserializer I provide implements the default method from the
Deserializer interface:
https://github.com/apache/kafka/blob/6f0008643db6e7299658442784f1bcc6c96395ed/clients/src/main/java/org/apache/kafka/co
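(For context, the default method in question delegates the headers-aware
variant to the plain two-argument one; in the Deserializer interface around
that commit it reads roughly as follows, which is why overriding it has no
effect on any caller that sticks to the two-argument form:)

default T deserialize(String topic, Headers headers, byte[] data) {
    // default implementation in the interface: headers are ignored
    return deserialize(topic, data);
}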
Manna" wrote:
> You have a typo - you mean deserializer
>
> Please try again.
>
> Regards,
>
> On Mon, 11 Nov 2019 at 14:28, Jorg Heymans wrote:
>
> > Don't think that option is available there, specifying
> > 'value.deserializer' in my co
er.ConsumerConfig)
Is there a description anywhere of which properties the consumer-config
properties file accepts? I could find only a few references to it in the
documentation.
Jorg
On 2019/11/11 13:00:03, "M. Manna" wrote:
> Hi,
>
>
> On Mon, 11 Nov 2019 at 10:58, Jorg
Hi,
I have created a class implementing Deserializer, providing an implementation
for
public String deserialize(String topic, Headers headers, byte[] data)
that does some conditional processing based on headers, and then calls the
other serde method
public String deserialize(String topic, byte[] data)
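(A sketch of that shape, as a method inside a Deserializer<String>
implementation such as the one sketched earlier; it needs
org.apache.kafka.common.header.Header imported. The "content-encoding" header
key and the gunzip() helper are hypothetical, chosen only to illustrate
conditional processing on headers:)

@Override
public String deserialize(String topic, Headers headers, byte[] data) {
    // Hypothetical condition: decompress when a header marks the payload as gzip.
    Header encoding = headers.lastHeader("content-encoding");
    if (encoding != null
            && "gzip".equals(new String(encoding.value(), StandardCharsets.UTF_8))) {
        data = gunzip(data);  // gunzip() is a hypothetical helper, not a Kafka API
    }
    // Delegate to the two-argument variant for the actual decoding.
    return deserialize(topic, data);
}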
Thanks for the confirmation, Matthias!
On 2019/09/05 23:23:01, "Matthias J. Sax" wrote:
> It must create a new instance. Creating only a `new
> WrappingValueTransformer()` won't work.
>
>
> -Matthias
>
>
>
> On 9/5/19 2:42 AM, Jorg H
…a new instance of the transformer it wraps. But according to ProcessorContext
semantics this does not seem necessary, as init() is called each time, which
guarantees correct setup of the context on the transformer. Can anyone shed
light on this?
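(A sketch of the pattern in question, with InnerTransformer as a hypothetical
wrapped transformer. The point Matthias confirms above is that get() must
create a fresh wrapper AND a fresh wrapped transformer on every call, because
each stream task keeps its own instance state; init() only re-binds the
ProcessorContext.)

import org.apache.kafka.streams.kstream.ValueTransformer;
import org.apache.kafka.streams.kstream.ValueTransformerSupplier;
import org.apache.kafka.streams.processor.ProcessorContext;

// Hypothetical wrapper delegating to the transformer it wraps.
class WrappingValueTransformer implements ValueTransformer<String, String> {
    private final ValueTransformer<String, String> inner;

    WrappingValueTransformer(ValueTransformer<String, String> inner) {
        this.inner = inner;
    }

    @Override
    public void init(ProcessorContext context) {
        inner.init(context);  // only the context is re-bound here, not instance state
    }

    @Override
    public String transform(String value) {
        return inner.transform(value);
    }

    @Override
    public void close() {
        inner.close();
    }
}

// Correct usage: a new wrapper and a new inner transformer per get() call.
// (InnerTransformer is a hypothetical ValueTransformer implementation.)
// ValueTransformerSupplier<String, String> supplier =
//     () -> new WrappingValueTransformer(new InnerTransformer());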
Regards,
Jorg Heymans
On 2019/07/04 12:41:58, Bruno Cadonna wrote:
> Hi Jorg,
>
> transform(), transformValues(), and process() are not stateful if you do
> not add any state store to them. You only need to leave the
> variable-length argument empty.
Doh, I did not realize you could leave the state store argument empty.
Hi,
I understand that it's currently not possible to access message headers from
the filter() DSL operation. Is there a semantically equivalent (stateless)
operation that can achieve the same result? I understand that transform(),
transformValues() and process() could achieve the same result
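(A sketch of that workaround: transformValues() with the state-store varargs
left empty, so the operation stays stateless as noted in the reply above, can
read headers from the ProcessorContext and null out records, followed by a
plain filter(). The "skip" header key is a hypothetical example:)

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.ValueTransformer;
import org.apache.kafka.streams.processor.ProcessorContext;

class HeaderFiltering {
    // Drops records that carry the (hypothetical) "skip" header.
    static KStream<String, String> withoutSkipped(KStream<String, String> stream) {
        return stream
            .transformValues(() -> new ValueTransformer<String, String>() {
                private ProcessorContext context;

                @Override
                public void init(ProcessorContext context) {
                    this.context = context;
                }

                @Override
                public String transform(String value) {
                    // Headers are reachable through the ProcessorContext here,
                    // which plain filter() does not expose.
                    return context.headers().lastHeader("skip") == null ? value : null;
                }

                @Override
                public void close() { }
            })  // no store names passed: the operation remains stateless
            .filter((key, value) -> value != null);
    }
}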
Issue created https://issues.apache.org/jira/browse/KAFKA-8203
On 2019/04/04 18:03:33, Harsha wrote:
> Hi,
> Yes, this needs to be handled more elegantly. Can you please file a
> JIRA here
> https://issues.apache.org/jira/projects/KAFKA/issues
>
> Thanks,
> Harsha
>
Hi,
We have our brokers secured with these standard properties:
listeners=SSL://a.b.c:9030
ssl.truststore.location=...
ssl.truststore.password=...
ssl.keystore.location=...
ssl.keystore.password=...
ssl.key.password=...
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2
It's a bit surprising
Hi,
Testing out some Kafka consistency guarantees, I have the following basic
producer config:
ProducerConfig.ACKS_CONFIG=all
ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG=true
ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION=1
ProducerConfig.RETRIES_CONFIG=3
My test setup is a 3 node kafka cluster (
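(A minimal sketch of a producer configured as above; the bootstrap address and
the String serializers are illustrative assumptions:)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

class IdempotentProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // "kafka-1:9092" is an illustrative bootstrap address.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send test records here
        }
    }
}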