Hi Timo,
I see your point about why it would be better to put Table Update Mode in
the MVP. But because this is a sophisticated problem, we need to think about
it carefully and have some discussions offline. We will reach back out here
when we have a clear design.
8). Support row/map/array data type
Hi all,
I think we should discuss what we consider an MVP DDL. For me, an MVP
DDL would just focus on a CREATE TABLE statement. It would be great to
come up with a solution that finally solves the issue of connecting
different kinds of systems. One reason why we postponed DDL statements
for q…
Hi all,
Here are a bunch of my thoughts:
8). support row/map/array data type
That's fine with me if we want to support them in the MVP. In my mind, we
can have the field type syntax like this:
```
fieldType ::=
  {
    simpleType
    | MAP
    | ARRAY
    ...
  }
```
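To make this concrete, here is a sketch of how such nested types might appear in a CREATE TABLE statement. The MAP<...>/ARRAY<...> spelling and the WITH property keys are assumptions for illustration, not agreed syntax:
```
-- Illustrative sketch; type syntax and property keys are assumptions
CREATE TABLE user_events (
  user_id BIGINT,
  tags ARRAY<VARCHAR>,
  properties MAP<VARCHAR, VARCHAR>
) WITH (
  'connector.type' = 'kafka',
  'topic' = 'user_events'
);
```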
Thanks for the summary effort @shuyi. Sorry for jumping in the discussion
so late.
As for the scope of the MVP, I think we might want to consider adding the "table
update mode" problem to it. I agree with @timo that it might not be easily
changed in the future if the flags have to be part of the schema/column…
Hi all,
I have been following this thread and it looks interesting. If I can be
of any help, please let me know.
Thanks,
Teja
Sounds great, thanks for the effort, Shuyi.
Best,
Kurt
On Wed, Dec 12, 2018 at 5:14 PM Shuyi Chen wrote:
Hi all,
I summarize the MVP based on the features that we agreed upon. For table
update mode and custom watermark strategy and ts extractor, I found there
are some discussions, so I decided to leave them out for the MVP.
For row/map/array data type, I think we can add it as well if everyone
agrees. I think once the unified connector API design [1] is done, we can
finalize the DDL design as well and start creating concrete subtasks to
collaborate on the implementation with the community.
Shuyi
[1]
https://docs.google.com/document/d/1Yaxp1UJUFW-peGLt8EIidwKIZEWrrA-pznWLuvaH39Y/edit?usp=sharing
Hi all,
It's great to see we have an agreement on the MVP.
4.b) Ingesting and writing timestamps to systems.
I would treat the field as a physical column, not a virtual column. If we
treat it as a computed column, it will be confusing that the behavior
differs depending on whether it is a source or a sink.
When it is…
Hi all,
Thanks for your valuable input!
4) Event-Time Attributes and Watermarks:
4.b) @Fabian As you mentioned, using a computed column `ts AS
SYSTEMROWTIME()`
for writing out to a Kafka table sink would violate the rule that computed
fields are not emitted.
Since the timestamp column in Kafka's head…
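For readers following along, this is roughly the computed-column approach being debated. `SYSTEMROWTIME()` is the function name used in this thread; the table name, fields, and properties are illustrative only:
```
-- Sketch of the proposal under discussion; names and properties are hypothetical
CREATE TABLE kafka_sink (
  user_id BIGINT,
  payload VARCHAR,
  ts AS SYSTEMROWTIME()  -- computed column intended to map to Kafka's message timestamp
) WITH (
  'connector.type' = 'kafka'
);
```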
Hi all,
Thanks a lot for the great discussion. I think we can continue the
discussion here while carving out an MVP so that the community can start
working on it. Based on the discussion so far, I try to summarize what we will
do for the MVP:
MVP
1. support CREATE TABLE
2. support existing data types…
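As a rough illustration of the MVP scope summarized above (the grammar was still being finalized, so the names and property keys below are hypothetical):
```
-- Minimal MVP-style CREATE TABLE sketch; property keys are illustrative
CREATE TABLE orders (
  order_id BIGINT,
  price DECIMAL,
  order_time TIMESTAMP
) WITH (
  'connector.type' = 'filesystem',
  'format.type' = 'csv'
);
```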
Hi all,
Thanks for the discussion.
I'd like to share my point of view as well.
4) Event-Time Attributes and Watermarks:
4.a) I agree with Lin and Jark's proposal. Declaring a watermark on an
attribute declares it as an event-time attribute.
4.b) Ingesting and writing timestamps to systems (like Kafka)…
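A sketch of what 4.a could look like in DDL, assuming a dedicated WATERMARK clause; the clause shape and the delay expression are assumptions at this stage of the discussion:
```
-- Hypothetical syntax: the WATERMARK clause declares order_time as the event-time attribute
CREATE TABLE clicks (
  user_id BIGINT,
  url VARCHAR,
  order_time TIMESTAMP,
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  'connector.type' = 'kafka'
);
```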
On the two points mentioned above, I think we should combine the
watermark strategy and rowtime field selection (i.e. which existing…
[ computedColumnDefinition [, computedColumnDefinition]* ]
[ tableConstraint [, tableConstraint]* ]
[ tableIndex [, tableIndex]* ]
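Read together with the grammar fragment above, a table using those optional clauses might look like this; the computed-column expression and the PRIMARY KEY spelling are illustrative guesses at what the grammar admits:
```
-- Hypothetical use of computedColumnDefinition and tableConstraint from the fragment above
CREATE TABLE pageviews (
  user_id BIGINT,
  url VARCHAR,
  ts TIMESTAMP,
  day_str AS DATE_FORMAT(ts, 'yyyy-MM-dd'),  -- computed column
  PRIMARY KEY (user_id)                      -- table constraint
) WITH (
  'connector.type' = 'kafka'
);
```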
On Mon, Nov 26, 2018 at 7:01 PM Zhang, Xuefu <xuef...@alibaba-inc.com> wrote:
Hi Shuyi,
I'm wondering if you folks still have the bandwidth to keep working on
this. We have some dedicated resources and would like to move this forward.
We can collaborate.
Thanks,
Xuefu
  [ ( columnName [, columnName]* ) ]
AS queryStatement;

CREATE FUNCTION

CREATE FUNCTION functionName …
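The fragment above is the tail of a CREATE VIEW production followed by the start of the CREATE FUNCTION one; concrete statements under such a grammar might read as follows (all names and the AS 'class' form are illustrative assumptions):
```
-- Hypothetical statements matching the grammar fragments above
CREATE VIEW daily_orders (order_day, order_count)
AS SELECT DATE_FORMAT(order_time, 'yyyy-MM-dd'), COUNT(*)
   FROM orders
   GROUP BY DATE_FORMAT(order_time, 'yyyy-MM-dd');

CREATE FUNCTION my_parser
AS 'com.example.udf.MyParser';  -- class name is a placeholder
```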
Thanks a lot, Timo and Xuefu. Yes, I think we can finalize the design doc
first and start implementation w/o th…
I'll run a final pass over the design doc and finalize the design in the
next few days. And we can start creating…
On Tue, Nov 27, 2018 at 7:02 AM Zhang, Xuefu <xuef...@alibaba-inc.com> wrote:
Yeah! I agree with Timo that DDL can actually proceed w/o be…
As commented in the doc, I think we can probably stick with simple syntax
with general properties, without extending the syntax so much that it
mimics the descriptor API.
…The one in the current proposal seems to make our effort more challenging.
We can help and collaborate. At this moment, I think we can finalize the
proposal and then we can divide the tasks for better collaboration.
Please l…
----------------
Sender: Timo Walther
Sent at: 2018 Nov 27 (Tue) 16:21
Recipient: dev
Subject: Re: [DISCUSS] Flink SQL DDL Design
…API design, but we can also start with the basic functionality now and
evolve the DDL during this release and the next releases.
For example, we could identify the MVP DDL syntax that skips defining …
for batch use cases, ETL, and materializing SQL queries (no time
operations like windows).
The unified connector API is high on our priority list for the 1.8
release. I will try to update the document by the middle of next week.
> >
ther stuff for the last 2
> > > weeks,
> > > > but we are definitely interested in moving this forward. I think once
> > the
> > > > unified connector API design [1] is done, we can finalize the DDL
> > design
> > > as
> > > > well and start creating concrete s
> >
> > > Shuyi
> > >
> > > [1]
> > >
> >
> https://docs.google.com/document/d/1Yaxp1UJUFW-peGLt8EIidwKIZEWrrA-pznWLuvaH39Y/edit?usp=sharing
> > >
> > > On Mon, Nov 26, 2018 at 7:01 PM Zhang, Xuefu
> > > wrote:
> > >
> > >> Hi Shuyi,
>
Hi Wenlong, thanks a lot for the comments.
1) I agree we can infer the table type from the queries if the Flink job is
static. However, for SQL Client cases, the query is ad hoc, dynamic, and not
known beforehand. In such cases, we might want to enforce the table open
mode at startup time, so users…
----------------------
Sender: wenlong.lwl
Sent at: 2018 Nov 05 11:15:35
Subject: Re: [DISCUSS] Flink SQL DDL Design
Hi, Shuyi, thanks for the proposal.
I have two concerns about the table DDL:
1. How about removing the source/sink mark from the DDL? It is not
necessary, because the framework can determine whether the referred table is
a source or a sink according to the context of the query using the table. It
will be more…
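To illustrate Wenlong's first point: without a source/sink mark, a single table definition could serve both roles, inferred from how the query uses it. A sketch, with hypothetical names and properties:
```
-- One declaration, no SOURCE/SINK keyword; the role is inferred from usage
CREATE TABLE orders (
  order_id BIGINT,
  price DECIMAL
) WITH (
  'connector.type' = 'kafka'
);

SELECT * FROM orders;   -- used as a source here
INSERT INTO orders
  VALUES (1, 9.99);     -- used as a sink here
```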
+1, thanks for the proposal.
I guess this is a long-awaited change. It can vastly increase the
functionality of the SQL Client, as it will become possible to use complex
extensions such as those provided by Apache Bahir [1].
Best Regards,
Dom.
[1]
https://github.com/apache/bahir-flink
+1. Thanks for putting the proposal together, Shuyi.
DDL has been brought up a couple of times previously [1,2]. Utilizing
DDL will definitely be a great extension to the current Flink SQL, allowing it to
systematically support some of the previously raised features such as
[3]. And it will also be benef…
Thanks Shuyi!
I left some comments there. I think the design of the SQL DDL and the Flink-Hive
integration/external catalog enhancements will work closely with each
other. I hope we are well aligned on the directions of the two designs, and I
look forward to working with you guys on both!
Bowen