Hi there,
We finally upgraded our producer configuration to ensure message ordering:

retries = 1
# Note: to ensure ordering, set enable.idempotence=true, which forces acks=all
# and requires max.in.flight.requests.per.connection <= 5
enable.idempotence = true
max.in.flight.requests.per.connection = 4
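For reference, an ordering-safe producer configuration might look like the following properties fragment. This is a sketch, not the poster's exact file: the bootstrap address is a placeholder, and the retries value shown is the library default rather than the retries=1 used above.

```properties
# Hypothetical producer.properties fragment -- broker address is a placeholder
bootstrap.servers=localhost:9092
# Idempotence gives per-partition ordering even with retries
enable.idempotence=true
# Forced/validated when idempotence is enabled
acks=all
# Must be <= 5 with idempotence; 5 is the default
max.in.flight.requests.per.connection=5
# Default retries (Integer.MAX_VALUE); must be > 0 with idempotence
retries=2147483647
```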
You can specify VALUE_FORMAT='AVRO' to use Avro serialisation and
the Schema Registry. See the docs for more details:
https://docs.confluent.io/current/ksql/docs/installation/server-config/avro-schema.html
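As a sketch of what that looks like in practice, a KSQL stream declaration backed by Avro might be written like this (the topic and column names are made up for illustration):

```sql
-- Hypothetical stream; topic and columns are illustrative only
CREATE STREAM readings (sensor_id VARCHAR, reading DOUBLE)
  WITH (KAFKA_TOPIC='readings', VALUE_FORMAT='AVRO');
```

With VALUE_FORMAT='AVRO', KSQL reads and registers value schemas via the configured Schema Registry.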
--
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
Thanks Robin, I was able to do it after adding the Schema Registry URL to
the properties file (ksql-server.properties).
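For anyone following along, the setting in question is the Schema Registry URL in ksql-server.properties; the URL below is a placeholder for your own registry:

```properties
# In ksql-server.properties -- URL is a placeholder
ksql.schema.registry.url=http://localhost:8081
```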
Sent from my iPhone
> On Oct 7, 2019, at 9:12 AM, Robin Moffatt wrote:
>
> You can specify VALUE_FORMAT='AVRO' to use Avro serialisation and
> the Schema Registry. See the docs for more details.
Update - I tried Sophie's suggestion: I implemented a Transformer that
performs puts on the table's backing store, and hid the complexity behind a
Kotlin extension method. The code now looks like this (pseudocode):
KStream.commit() { // transformer is implemented here }
stream.join(table) { ev
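A rough Kotlin sketch of what such an extension might look like, assuming the Kafka Streams Transformer API. This is illustrative only: the name `commit` comes from the message above, the store name is a parameter you would pass in, and it will not compile without kafka-streams on the classpath.

```kotlin
import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.KStream
import org.apache.kafka.streams.kstream.Transformer
import org.apache.kafka.streams.processor.ProcessorContext
import org.apache.kafka.streams.state.KeyValueStore

// Hypothetical extension: writes each record through to the table's
// backing store, then forwards it unchanged downstream.
fun <K, V> KStream<K, V>.commit(storeName: String): KStream<K, V> =
    transform({
        object : Transformer<K, V, KeyValue<K, V>> {
            private lateinit var store: KeyValueStore<K, V>

            override fun init(context: ProcessorContext) {
                @Suppress("UNCHECKED_CAST")
                store = context.getStateStore(storeName) as KeyValueStore<K, V>
            }

            override fun transform(key: K, value: V): KeyValue<K, V> {
                store.put(key, value) // put on the table's backing store
                return KeyValue.pair(key, value)
            }

            override fun close() {}
        }
    }, storeName)
```

The join itself then stays unchanged; `commit(...)` is simply inserted into the chain ahead of it so the store reflects the record before the join runs.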
Hi,
I am discovering the advanced features recently added to Kafka, such as
record timestamps and headers.
In the use case I am investigating, which is time-series oriented, the
timestamp is definitely something I will look into.
However, while investigating the timestamp, I discovered the header feature