in compacted topics for deletion, it is important to
actually translate null values in Connect to be true nulls in Kafka.
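To make that concrete, here is a minimal sketch of a custom Connect Converter that passes nulls straight through, assuming only the standard org.apache.kafka.connect.storage.Converter interface; the class name and the serialize/deserialize helpers are placeholders, not code from Connect or the Confluent converters:

import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.storage.Converter;

// Illustrative converter skeleton. The key detail is that a null Connect value
// becomes a null byte array, which the broker treats as a tombstone and uses
// to remove the key during log compaction.
public class TombstoneAwareConverter implements Converter {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure in this sketch
    }

    @Override
    public byte[] fromConnectData(String topic, Schema schema, Object value) {
        if (value == null) {
            return null;                 // a true null in Kafka, not "null" text or an encoded empty record
        }
        return serialize(schema, value); // placeholder for the real encoding
    }

    @Override
    public SchemaAndValue toConnectData(String topic, byte[] value) {
        if (value == null) {
            return new SchemaAndValue(null, null); // surface tombstones as nulls to connectors
        }
        return deserialize(value);       // placeholder for the real decoding
    }

    private byte[] serialize(Schema schema, Object value) {
        throw new UnsupportedOperationException("omitted from the sketch");
    }

    private SchemaAndValue deserialize(byte[] value) {
        throw new UnsupportedOperationException("omitted from the sketch");
    }
}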
>
> Thanks again,
> David
>
> -Original Message-
> From: Ewen Cheslack-Postava [mailto:e...@confluent.io]
> Sent: 07 November 2016 04:35
> To: dev@kafka.apache.org
> Subject: Re: Kafka Connect key.converter and value.converter properties for
> Avro encoding
>
> You won't be accepting/returning SpecificRecords directly when working with
> Connect's API. Connect intentionally uses an interface that is different from
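As a rough illustration of that interface (topic, field, and class names here are invented, and this is not taken from any particular connector), a source task hands Connect a Struct described by a Connect Schema, and the converter configured on the worker handles the wire format:

import java.util.Collections;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

// Connect's runtime-defined data API: a Schema plus a Struct, not a generated
// Avro SpecificRecord. The configured Converter turns this into bytes.
public class ExampleRecordFactory {

    private static final Schema VALUE_SCHEMA = SchemaBuilder.struct()
            .name("com.example.Customer")
            .field("id", Schema.INT64_SCHEMA)
            .field("email", Schema.OPTIONAL_STRING_SCHEMA)
            .build();

    public static SourceRecord newRecord(long id, String email) {
        Struct value = new Struct(VALUE_SCHEMA)
                .put("id", id)
                .put("email", email);

        return new SourceRecord(
                Collections.singletonMap("source", "example"), // source partition
                Collections.singletonMap("offset", id),        // source offset
                "example-topic",
                null,                 // let Connect choose the Kafka partition
                Schema.STRING_SCHEMA, // key schema
                Long.toString(id),    // key
                VALUE_SCHEMA,         // value schema
                value);               // value
    }
}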
> ly be achieved via a corresponding
> SpecificDatumReader.
>
> Does this look a reasonable approach?
>
> Many thanks if you've read this far!
>
> Regards,
> David
>
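For reference, the plain-Avro round trip that a SpecificDatumWriter/SpecificDatumReader pair gives you looks roughly like the sketch below. This is only a generic illustration of those Avro classes, not necessarily the exact approach proposed above; note that no schema travels with the bytes, which is the gap a schema registry fills.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.avro.specific.SpecificDatumWriter;
import org.apache.avro.specific.SpecificRecord;

public class SpecificAvroCodec {

    // Serialize any generated SpecificRecord to bare Avro binary.
    public static <T extends SpecificRecord> byte[] serialize(T datum) throws IOException {
        DatumWriter<T> writer = new SpecificDatumWriter<>(datum.getSchema());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(datum, encoder);
        encoder.flush();
        return out.toByteArray();   // no schema travels with these bytes
    }

    // Deserialize; the reading side must already know the schema.
    public static <T extends SpecificRecord> T deserialize(byte[] bytes, Schema schema)
            throws IOException {
        DatumReader<T> reader = new SpecificDatumReader<>(schema);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
        return reader.read(null, decoder);
    }
}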
>
> -Original Message-
> From: Gwen Shapira [mailto:g...@confluent.io]
> Sent: 02 November 2016 21:18
> To: dev@kafka.apache.org
> Subject: Re: Kafka Connect key.converter and value.converter properties for
> Avro encoding
>
> Both the Confluent Avro Converter and the Confluent Avro Serializer use the
> Schema Registry. The reason is, as Tommy Becker mentioned below, to avoid
> storing the entire schema in each record (which the JSON serializer in
> Apache Kafka does). It has a few other benefits, schema validation and such.
> If
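For anyone wanting to see the registry-backed path in practice, here is an illustrative producer using the Confluent KafkaAvroSerializer (broker address, registry URL, topic, and schema are placeholders). The Connect-side equivalent is pointing key.converter/value.converter at io.confluent.connect.avro.AvroConverter and supplying key.converter.schema.registry.url / value.converter.schema.registry.url in the worker config.

import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RegistryProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                 // placeholder broker
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");  // Confluent Avro serializer
        props.put("schema.registry.url", "http://localhost:8081");        // placeholder registry

        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Example\",\"fields\":"
              + "[{\"name\":\"id\",\"type\":\"long\"}]}");
        GenericRecord value = new GenericData.Record(schema);
        value.put("id", 42L);

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            // Only a small schema ID is written alongside each record; the full
            // schema lives in the registry, unlike the schema-per-record JSON approach.
            producer.send(new ProducerRecord<>("example-topic", "42", value));
        }
    }
}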
Although I can't speak to details of the Confluent packaging, anytime you're using
Avro you need the schemas for the records you're working with. In an Avro data
file the schema is included in the file itself. But when you're encoding
individual records like in Kafka, most people instead encode
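To illustrate the distinction, the sketch below (schema, values, and file path are invented) writes the same record both ways with the standard Avro APIs: DataFileWriter embeds the schema in the file header, while binary-encoding a single record yields only the field data, leaving the consumer to obtain the schema some other way.

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class SchemaPlacementSketch {
    public static void main(String[] args) throws IOException {
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Example\",\"fields\":"
              + "[{\"name\":\"id\",\"type\":\"long\"}]}");
        GenericRecord record = new GenericData.Record(schema);
        record.put("id", 1L);

        // 1. Avro data file: the schema is written into the file header itself.
        try (DataFileWriter<GenericRecord> fileWriter =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            fileWriter.create(schema, new File("example.avro"));  // illustrative path
            fileWriter.append(record);
        }

        // 2. Single-record binary encoding, as used for individual Kafka messages:
        //    only the record's field data is produced, with no schema attached.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();
        System.out.println("bare record bytes: " + out.size());
    }
}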