Hi everyone,
thanks for the feedback that we have received so far. I will update the document
in the next couple of hours so that we can continue the discussion.
Regarding the table type: actually, I just didn't mention it in the
document, because the table type is a SQL Client/External catalog …
Hi,
It is a good question how to avoid accidentally writing to a table.
I think there are other ways to solve the problem; for example, we could provide a
view instead of a table to the users, or add a table constraint.
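As a sketch of the view idea (the table and column names below are made up for illustration), a read-only projection could be exposed to users instead of the underlying table:

```sql
-- Hypothetical example: users query `orders_view`; attempts to
-- INSERT INTO it can be rejected, while the underlying `orders`
-- table stays writable only for the owning pipeline.
CREATE VIEW orders_view AS
SELECT order_id, amount, order_time
FROM orders;
```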
Best,
Hequn
On Fri, Oct 5, 2018 at 1:30 PM Shuyi Chen wrote:
In the case of a normal Flink job, I agree we can infer the table type from
the queries. However, for the SQL Client, the queries are ad hoc and not known
beforehand. In that case, we might want to enforce the table open mode at
startup time, so users won't accidentally write to a Kafka topic that is
supposed …
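One possible shape for such a startup-time restriction, assuming a hypothetical `type` property in the SQL Client environment file (the property names here are illustrative, not a confirmed part of the proposal):

```yaml
# Hypothetical SQL Client environment entry: declaring the table as a
# pure source would let the client reject INSERT statements against it.
tables:
  - name: TaxiRides
    type: source        # illustrative values: source | sink | both
    connector:
      type: kafka
      topic: taxi-rides
```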
Hi Timo,
Thanks for putting together the proposal!
I really love the idea of combining the solutions for historic and recent data,
and I left some suggestions on that part.
Regarding the table type, e.g. for Kafka streams, I agree with @hequn's
idea that it should be pretty much inferable from the SQL co…
Hi,
Thanks a lot for the proposal. I like the idea to unify table definitions.
I think we can drop the table type, since the type can be derived from the
SQL, i.e., a table that is inserted into can only be a sink table.
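To illustrate the inference idea outside of Flink (this is a toy sketch, not Flink code; it assumes a simplified rule where `INSERT INTO` marks a sink and `FROM` marks a source):

```python
import re

def infer_table_types(sql):
    """Toy classifier: map table name -> 'source' or 'sink' for one statement."""
    types = {}
    # A table that is inserted into can only be a sink.
    for name in re.findall(r"INSERT\s+INTO\s+(\w+)", sql, re.IGNORECASE):
        types[name] = "sink"
    # Tables that are only read from are sources.
    for name in re.findall(r"FROM\s+(\w+)", sql, re.IGNORECASE):
        types.setdefault(name, "source")
    return types

print(infer_table_types("INSERT INTO Results SELECT * FROM Clicks"))
# {'Results': 'sink', 'Clicks': 'source'}
```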
I left some minor suggestions in the document, mainly including:
- Maybe we also need to a…
Thanks a lot for the proposal, Timo. I left a few comments. Also, it seems
the example in the doc does not have the table type (source, sink and both)
property anymore. Are you suggesting dropping it? I think the table type
property is still useful, as it can restrict a certain connector to be
only so…
Thanks for the proposal!
I like the proposed changes a lot; especially the support for reading/writing the
key data of systems that have a key/value split will be very nice to have.
> On 2. Oct 2018, at 11:58, Timo Walther wrote:
Thanks for the feedback, Fabian. I updated the document and addressed
your comments.
I agree that tables which are stored in different systems need more
discussion. I would suggest deprecating the field mapping interfaces in
this release and removing them in the next release.
Regards,
Timo
Am …
Thanks for the proposal Timo!
I've done a pass and added some comments (mostly asking for clarification,
details).
Overall, this is going into a very good direction.
I think that tables which are stored in different systems, and using a format
definition to define other formats, require some more dis…
Hi everyone,
as some of you might have noticed, in the last two releases we aimed to
unify SQL connectors and make them more modular. The first connectors
and formats have been implemented and are usable via the SQL Client and
Java/Scala/SQL APIs.
However, after writing more connectors/examp…