Thanks Shengkai for your proposal.

+1 for this feature.

> Future Work: Support bounded KTable source

I don't think this should be future work; it is one of the important
concepts of this FLIP, and we need to understand it now.

Intuitively, a KTable is in my opinion a bounded table rather than a
stream, so a SELECT on it should produce a bounded table by default.

I think we should spell out the related Kafka knowledge, because the word
`KTable` is easily associated with KSQL concepts. (If possible, it would be
better to align with them.)
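To make the discussion concrete, here is a rough sketch of the bounded
reading I have in mind. The connector name and options below are taken from
the FLIP draft and may well change, so please treat this as an illustration
rather than the final design:

```sql
-- Hypothetical DDL based on the FLIP-149 draft; names and options are
-- assumptions and may change.
CREATE TABLE users (
  user_id BIGINT,
  user_name STRING,
  region STRING,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'ktable',
  'topic' = 'users',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);

-- If a KTable is a bounded table by default, this query would read the
-- compacted topic up to the current end offsets, materialize the latest
-- value per key, and then terminate:
SELECT * FROM users;
```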

What do you think?

> value.fields-include

What about the default behavior of KSQL?
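For context, my understanding of the option (please correct me if I read
the FLIP wrong) is that it controls whether the key columns are duplicated
in the Kafka record value. A sketch of the two candidate behaviors, reusing
the hypothetical `users` schema with primary key `user_id`:

```sql
-- Assuming the semantics sketched in the FLIP; option names may change.

-- 'value.fields-include' = 'ALL':
--   record key   = (user_id)
--   record value = (user_id, user_name, region)  -- key repeated in value

-- 'value.fields-include' = 'EXCEPT_KEY':
--   record key   = (user_id)
--   record value = (user_name, region)           -- key only in record key
```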

Best,
Jingsong

On Mon, Oct 19, 2020 at 4:33 PM Shengkai Fang <fskm...@gmail.com> wrote:

> Hi, devs.
>
> Jark and I want to start a new FLIP to introduce the KTable connector. The
> name "KTable" is shorthand for "Kafka Table"; it has the same semantics as
> the KTable notion in Kafka Streams.
>
> FLIP-149:
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-149%3A+Introduce+the+KTable+Connector
>
> Currently, many users have expressed their need for upsert Kafka support
> on the mailing lists and in issues. The KTable connector has several
> benefits for users:
>
> 1. Users are able to interpret a compacted Kafka topic as an upsert stream
> in Apache Flink, and also to write a changelog stream back to Kafka (into
> a compacted topic).
> 2. As part of a real-time pipeline, users can store join or aggregation
> results (which may contain updates) in a Kafka topic for further
> computation.
> 3. The semantics of the KTable connector are the same as those of KTable
> in Kafka Streams, so it is very handy for Kafka Streams and KSQL users. We
> have seen several questions on the mailing list asking how to model a
> KTable and how to join a KTable in Flink SQL.
>
> We hope it can expand the usage of Flink with Kafka.
>
> I'm looking forward to your feedback.
>
> Best,
> Shengkai
>


-- 
Best, Jingsong Lee
