Hi Yuval,

Unfortunately setting the key or timestamp (or other metadata) from the
SQL API is not supported yet. There is an ongoing discussion to support
it[1].

Right now your only option is to modify the code of KafkaTableSink and
write your own version of KafkaSerializationSchema, as Till mentioned.
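For illustration, a custom KafkaSerializationSchema would implement serialize() to return a ProducerRecord with an explicit key instead of null. The sketch below is self-contained, so the ProducerRecord class is a simplified stand-in for org.apache.kafka.clients.producer.ProducerRecord, and the topic/field names are hypothetical:

```java
import java.nio.charset.StandardCharsets;

// Simplified stand-in for org.apache.kafka.clients.producer.ProducerRecord,
// kept here only so the sketch compiles without the Kafka client on the classpath.
class ProducerRecord {
    final String topic;
    final byte[] key;
    final byte[] value;

    ProducerRecord(String topic, byte[] key, byte[] value) {
        this.topic = topic;
        this.key = key;
        this.value = value;
    }
}

// Sketch of the serialize() logic a KafkaSerializationSchema implementation
// would use: derive the Kafka message key from one field of the record
// rather than leaving it null as KeyedSerializationSchemaWrapper does.
public class KeyedSchemaSketch {
    static ProducerRecord serialize(String topic, String keyField, String payload) {
        byte[] key = keyField.getBytes(StandardCharsets.UTF_8);
        byte[] value = payload.getBytes(StandardCharsets.UTF_8);
        return new ProducerRecord(topic, key, value);
    }

    public static void main(String[] args) {
        ProducerRecord r = serialize("events", "user-42", "{\"clicks\": 7}");
        System.out.println(new String(r.key, StandardCharsets.UTF_8));
    }
}
```

In a real implementation you would implement org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema and plug it into a FlinkKafkaProducer directly, since the Table API sink does not expose this hook.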

Best,

Dawid


[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-107-Reading-table-columns-from-different-parts-of-source-records-td38277.html

On 20/08/2020 09:26, Till Rohrmann wrote:
> Hi Yuval,
>
> it looks as if the KafkaTableSink only supports writing out rows
> without a key. Pulling in Timo for verification.
>
> If you want to use a Kafka producer which writes the records out with
> a key, then please take a look at KafkaSerializationSchema. It
> supports this functionality.
>
> Cheers,
> Till
>
> On Wed, Aug 19, 2020 at 6:36 PM Yuval Itzchakov <yuva...@gmail.com
> <mailto:yuva...@gmail.com>> wrote:
>
>     Hi,
>
>     I'm running Flink 1.9.0 and I'm trying to set the key to be
>     published by the Table API's Kafka Connector. I've searched the
>     documentation but could find no reference for such an ability.
>
>     Additionally, while browsing the code of the KafkaTableSink, it
>     looks like it creates a KeyedSerializationSchemaWrapper which just
>     sets the key to null?
>
>     Would love some help.
>
>     -- 
>     Best Regards,
>     Yuval Itzchakov.
>
