This is indeed not optimal. Could you file a JIRA issue to add this
functionality? Thanks a lot Yuval.

Cheers,
Till
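
For readers finding this thread: the DataStream-side workaround discussed below can be sketched roughly as follows. This is a minimal sketch assuming Flink 1.9's universal Kafka connector (flink-connector-kafka); the `Event` type and its fields are hypothetical placeholders, not something from the thread:

```java
import java.nio.charset.StandardCharsets;

import javax.annotation.Nullable;

import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical event type, stand-in for whatever the job actually produces.
class Event {
    final String id;
    final String payload;
    Event(String id, String payload) { this.id = id; this.payload = payload; }
    String getId() { return id; }
    String toJson() { return "{\"id\":\"" + id + "\",\"payload\":\"" + payload + "\"}"; }
}

// A KafkaSerializationSchema that emits a non-null key, so all records
// sharing an id hash to the same partition and keep their relative order.
public class KeyedEventSchema implements KafkaSerializationSchema<Event> {

    private final String topic;

    public KeyedEventSchema(String topic) {
        this.topic = topic;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(Event element, @Nullable Long timestamp) {
        byte[] key = element.getId().getBytes(StandardCharsets.UTF_8);
        byte[] value = element.toJson().getBytes(StandardCharsets.UTF_8);
        return new ProducerRecord<>(topic, key, value);
    }
}
```

The schema is then handed to the producer, e.g. `new FlinkKafkaProducer<>(topic, new KeyedEventSchema(topic), props, FlinkKafkaProducer.Semantic.AT_LEAST_ONCE)` — this is the hook that, per the discussion below, the Table API's KafkaTableSink does not expose.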

On Thu, Aug 20, 2020 at 9:47 AM Yuval Itzchakov <yuva...@gmail.com> wrote:

> Hi Till,
> KafkaSerializationSchema is only pluggable for the DataStream API, not for
> the Table API. KafkaTableSink hard-codes a KeyedSerializationSchema that
> uses a null key, and this behavior can't be overridden.
>
> I have to say I was quite surprised by this behavior, as publishing events
> to Kafka using a key to keep order inside a given partition is usually a
> very common requirement.
>
> On Thu, Aug 20, 2020 at 10:26 AM Till Rohrmann <trohrm...@apache.org>
> wrote:
>
>> Hi Yuval,
>>
>> it looks as if the KafkaTableSink only supports writing out rows without
>> a key. Pulling in Timo for verification.
>>
>> If you want to use a Kafka producer which writes the records out with a
>> key, then please take a look at KafkaSerializationSchema. It supports this
>> functionality.
>>
>> Cheers,
>> Till
>>
>> On Wed, Aug 19, 2020 at 6:36 PM Yuval Itzchakov <yuva...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I'm running Flink 1.9.0 and I'm trying to set the key to be published by
>>> the Table API's Kafka Connector. I've searched the documentation but
>>> could find no reference to such an ability.
>>>
>>> Additionally, while browsing the code of the KafkaTableSink, it looks
>>> like it creates a KeyedSerializationSchemaWrapper which simply sets the
>>> key to null. Is that intentional?
>>>
>>> Would love some help.
>>>
>>> --
>>> Best Regards,
>>> Yuval Itzchakov.
>>>
>>
>
> --
> Best Regards,
> Yuval Itzchakov.
>
