[jira] [Created] (FLINK-21094) Support StreamExecSink json serialization/deserialization

2021-01-22 Thread godfrey he (Jira)
godfrey he created FLINK-21094:
--

 Summary: Support StreamExecSink json serialization/deserialization
 Key: FLINK-21094
 URL: https://issues.apache.org/jira/browse/FLINK-21094
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: godfrey he
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-162: Consistent Flink SQL time function behavior

2021-01-22 Thread Kurt Young
Before jumping into technical details, let's take a step back to discuss
user experience.

The first important question is what kind of date and time Flink will
display when users call CURRENT_TIMESTAMP and maybe also PROCTIME (if we
think they are similar).

Should it always display the date and time in UTC or in the user's time
zone? I think this is the part that surprised lots of users. If we forget
about the type and internal representation of these two functions, as a
user, my instinct tells me that they should display my wall clock time.

Display the time in UTC? I'm not sure why I should care about UTC time; I
want to get my current timestamp. Users who have never gone abroad might
not even realize that this is affected by the time zone.
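
The distinction Kurt draws can be sketched with java.time (the instant and
the example session zone below are illustrative, not from the thread): the
same point on the UTC timeline, one UTC rendering and one wall-clock
rendering.

```java
import java.time.Instant;
import java.time.ZoneId;

public class WallClockDemo {
    public static void main(String[] args) {
        // One fixed point on the UTC timeline.
        Instant now = Instant.parse("2021-01-22T08:00:00Z");

        // What a UTC rendering shows vs. what a user in Asia/Shanghai
        // (UTC+8, a hypothetical session time zone) sees on their wall clock.
        System.out.println(now.atZone(ZoneId.of("UTC")).toLocalTime());           // 08:00
        System.out.println(now.atZone(ZoneId.of("Asia/Shanghai")).toLocalTime()); // 16:00
    }
}
```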

Best,
Kurt


On Fri, Jan 22, 2021 at 12:25 PM Leonard Xu  wrote:

> Thanks @Timo for the detailed reply. Let's continue this topic in this
> discussion; I've merged all mails into this thread.
>
> > LOCALDATE / LOCALTIME / LOCALTIMESTAMP
> >
> > --> uses session time zone, returns DATE/TIME/TIMESTAMP
>
> >
> > CURRENT_DATE/CURRENT_TIME/CURRENT_TIMESTAMP
> >
> > --> uses session time zone, returns DATE/TIME/TIMESTAMP
> >
> > I'm very sceptical about this behavior. Almost all mature systems
> (Oracle, Postgres) and new high quality systems (Presto, Snowflake) use a
> data type with some degree of time zone information encoded. In a
> globalized world with businesses spanning different regions, I think we
> should do this as well. There should be a difference between
> CURRENT_TIMESTAMP and LOCALTIMESTAMP. And users should be able to choose
> which behavior they prefer for their pipeline.
>
>
> I know that the two series should be different at first glance, but
> different SQL engines can have their own interpretations. For example,
> CURRENT_TIMESTAMP and LOCALTIMESTAMP are synonyms in Snowflake[1] with
> no difference, and Spark only supports the former and doesn’t support
> LOCALTIME/LOCALTIMESTAMP[2].
>
>
> > If we were to design this from scratch, I would suggest the following:
> >
> > - drop CURRENT_DATE / CURRENT_TIME and let users pick LOCALDATE /
> > LOCALTIME for materialized timestamp parts
>
> The functions CURRENT_DATE/CURRENT_TIME are supported in the SQL standard,
> but LOCALDATE is not. I don’t think it’s a good idea to drop functions the
> SQL standard supports and introduce a replacement the SQL standard doesn’t
> mention.
>
>
> > - CURRENT_TIMESTAMP should return a TIMESTAMP WITH TIME ZONE to
> > materialize all session time information into every record. It is the
> > most generic data type and allows casting to all other timestamp data
> > types. This generic ability can be used for filter predicates as well,
> > either through implicit or explicit casting.
>
> TIMESTAMP WITH TIME ZONE indeed contains more information to describe a
> point in time, but the type TIMESTAMP can be cast to all other timestamp
> data types in combination with the session time zone as well, and it can
> also be used for filter predicates. For casting between BIGINT and
> TIMESTAMP, I think the function way using
> TO_TIMESTAMP()/FROM_UNIXTIMESTAMP() is clearer.
>
> > PROCTIME/ROWTIME should be time functions based on a long value. Both
> > System.currentTimeMillis() and our watermark system work on long values.
> > Those should return TIMESTAMP WITH LOCAL TIME ZONE because the main
> > calculation should always happen based on UTC.
> > We discussed it in a different thread, but we should allow PROCTIME
> > globally. People need a way to create instances of TIMESTAMP WITH LOCAL
> > TIME ZONE. This is not considered in the current design doc.
> > Many pipelines contain UTC timestamps and thus it should be easy to
> > create one.
> > Also, both CURRENT_TIMESTAMP and LOCALTIMESTAMP can work with this type
> > because we should remember that TIMESTAMP WITH LOCAL TIME ZONE accepts
> > all timestamp data types as casting targets [1]. We could allow
> > TIMESTAMP WITH TIME ZONE in the future for ROWTIME.
> > In any case, windows should simply adapt their behavior to the passed
> > timestamp type. And with TIMESTAMP WITH LOCAL TIME ZONE a day is defined
> > by considering the current session time zone.
>
> I also agree that returning TIMESTAMP WITH LOCAL TIME ZONE for PROCTIME
> has clearer semantics, but I realized that users don’t care about the
> type so much as the value they see, and changing the type from TIMESTAMP
> to TIMESTAMP WITH LOCAL TIME ZONE brings a huge refactoring: we need to
> consider all places where the TIMESTAMP type is used, and many built-in
> functions and UDFs do not support the TIMESTAMP WITH LOCAL TIME ZONE
> type. That means both users and Flink devs would need to refactor code
> (UDFs, built-in functions, SQL pipelines); to be honest, I don’t see a
> strong motivation for such a big refactoring from either the user’s or
> the developer’s perspective.
>
> In a word, both your suggestion and my proposal can resolve almost all
> user problems, the d

Re: [DISCUSS] FLIP-162: Consistent Flink SQL time function behavior

2021-01-22 Thread Kurt Young
Forgot one more thing, to continue with displaying in UTC: as a user, if
Flink wants to display the timestamp in UTC, why don't we offer something
like UTC_TIMESTAMP?

Best,
Kurt



Re: [ANNOUNCE] Welcome Guowei Ma as a new Apache Flink Committer

2021-01-22 Thread zhang hao
Congrats Guowei!

On Fri, Jan 22, 2021 at 10:40 AM Zhijiang
 wrote:

> Congrats, Guowei!
>
>
> Best,
> Zhijiang
>
>
> --
> From:Biao Liu 
> Send Time:2021年1月21日(星期四) 14:45
> To:dev 
> Subject:Re: [ANNOUNCE] Welcome Guowei Ma as a new Apache Flink Committer
>
> Congrats, Guowei!
>
> Thanks,
> Biao /'bɪ.aʊ/
>
>
>
> On Thu, 21 Jan 2021 at 09:30, Paul Lam  wrote:
>
> > Congrats, Guowei!
> >
> > Best,
> > Paul Lam
> >
> > > 2021年1月21日 07:21,Steven Wu  写道:
> > >
> > > Congrats, Guowei!
> > >
> > > On Wed, Jan 20, 2021 at 10:32 AM Seth Wiesman 
> > wrote:
> > >
> > >> Congratulations!
> > >>
> > >> On Wed, Jan 20, 2021 at 3:41 AM hailongwang <18868816...@163.com>
> > wrote:
> > >>
> > >>> Congratulations, Guowei!
> > >>>
> > >>> Best,
> > >>> Hailong
> > >>>
> > >>> 在 2021-01-20 15:55:24,"Till Rohrmann"  写道:
> >  Congrats, Guowei!
> > 
> >  Cheers,
> >  Till
> > 
> >  On Wed, Jan 20, 2021 at 8:32 AM Matthias Pohl <
> matth...@ververica.com
> > >
> >  wrote:
> > 
> > > Congrats, Guowei!
> > >
> > > On Wed, Jan 20, 2021 at 8:22 AM Congxian Qiu <
> qcx978132...@gmail.com
> > >
> > > wrote:
> > >
> > >> Congrats Guowei!
> > >>
> > >> Best,
> > >> Congxian
> > >>
> > >>
> > >> Danny Chan  于2021年1月20日周三 下午2:59写道:
> > >>
> > >>> Congratulations Guowei!
> > >>>
> > >>> Best,
> > >>> Danny
> > >>>
> > >>> Jark Wu  于2021年1月20日周三 下午2:47写道:
> > >>>
> >  Congratulations Guowei!
> > 
> >  Cheers,
> >  Jark
> > 
> >  On Wed, 20 Jan 2021 at 14:36, SHI Xiaogang <
> > >>> shixiaoga...@gmail.com>
> > >>> wrote:
> > 
> > > Congratulations MA!
> > >
> > > Regards,
> > > Xiaogang
> > >
> > > Yun Tang  于2021年1月20日周三 下午2:24写道:
> > >
> > >> Congratulations Guowei!
> > >>
> > >> Best
> > >> Yun Tang
> > >> 
> > >> From: Yang Wang 
> > >> Sent: Wednesday, January 20, 2021 13:59
> > >> To: dev 
> > >> Subject: Re: Re: [ANNOUNCE] Welcome Guowei Ma as a new
> > >> Apache
> > > Flink
> > >> Committer
> > >>
> > >> Congratulations Guowei!
> > >>
> > >>
> > >> Best,
> > >> Yang
> > >>
> > >> Yun Gao  于2021年1月20日周三
> > >>> 下午1:52写道:
> > >>
> > >>> Congratulations Guowei!
> > >>>
> > >>> Best,
> > >>>
> > 
> > >>> Yun--
> > >>> Sender:Yangze Guo
> > >>> Date:2021/01/20 13:48:52
> > >>> Recipient:dev
> > >>> Theme:Re: [ANNOUNCE] Welcome Guowei Ma as a new Apache
> > >> Flink
> >  Committer
> > >>>
> > >>> Congratulations, Guowei! Well deserved.
> > >>>
> > >>>
> > >>> Best,
> > >>> Yangze Guo
> > >>>
> > >>> On Wed, Jan 20, 2021 at 1:46 PM Xintong Song <
> > >>> tonysong...@gmail.com>
> > >>> wrote:
> > 
> >  Congratulations, Guowei~!
> > 
> > 
> >  Thank you~
> > 
> >  Xintong Song
> > 
> > 
> > 
> >  On Wed, Jan 20, 2021 at 1:42 PM Yuan Mei <
> > >> yuanmei.w...@gmail.com
> > 
> > >> wrote:
> > 
> > > Congrats Guowei :-)
> > >
> > > Best,
> > > Yuan
> > >
> > > On Wed, Jan 20, 2021 at 1:36 PM tison <
> > > wander4...@gmail.com>
> > > wrote:
> > >
> > >> Congrats Guowei!
> > >>
> > >> Best,
> > >> tison.
> > >>
> > >>
> > >> Kurt Young  于2021年1月20日周三
> > >> 下午1:34写道:
> > >>
> > >>> Hi everyone,
> > >>>
> > >>> I'm very happy to announce that Guowei Ma has
> > >>> accepted
> > >> the
> > >>> invitation
> > > to
> > >>> become a Flink committer.
> > >>>
> > >>> Guowei is a very long term Flink developer, he has
> > >>> been
> > > extremely
> > > helpful
> > >>> with
> > >>> some important runtime changes, and also been
> > >>> active
> > >> with
> > >>> answering
> > > user
> > >>> questions as well as discussing designs.
> > >>>
> > >>> Please join me in congratulating Guowei for
> > >>> becoming a
> > >>> Flink
> > >>> committer!
> > >>>
> > >>> Best,
> > >>> Kurt
> > >>>
> > >>
> > >
> > >>>
> > >>
> > >
> > 

[jira] [Created] (FLINK-21095) Remove legacy slotmanagement profile

2021-01-22 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-21095:


 Summary: Remove legacy slotmanagement profile
 Key: FLINK-21095
 URL: https://issues.apache.org/jira/browse/FLINK-21095
 Project: Flink
  Issue Type: Improvement
  Components: Build System / CI, Runtime / Coordination
Reporter: Chesnay Schepler
 Fix For: 1.14.0








[jira] [Created] (FLINK-21096) Introduce ExecNodeGraphJsonPlanGenerator to serialize ExecNodeGraph to json plan and deserialize json plan to ExecNodeGraph

2021-01-22 Thread godfrey he (Jira)
godfrey he created FLINK-21096:
--

 Summary: Introduce ExecNodeGraphJsonPlanGenerator to serialize 
ExecNodeGraph to json plan and deserialize json plan to ExecNodeGraph
 Key: FLINK-21096
 URL: https://issues.apache.org/jira/browse/FLINK-21096
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: godfrey he
 Fix For: 1.13.0








Re: [DISCUSS] FLIP-162: Consistent Flink SQL time function behavior

2021-01-22 Thread Timo Walther

Hi everyone,

let me answer the individual threads:

>>> I know that the two series should be different at first glance, but
>>> different SQL engines can have their own interpretations. For example,
>>> CURRENT_TIMESTAMP and LOCALTIMESTAMP are synonyms in Snowflake[1] with
>>> no difference, and Spark only supports the former and doesn’t support
>>> LOCALTIME/LOCALTIMESTAMP[2].

Most of the mature systems have a clear difference between
CURRENT_TIMESTAMP and LOCALTIMESTAMP. I wouldn't take Spark or Hive as a
good example. Snowflake decided on TIMESTAMP WITH LOCAL TIME ZONE. As I
mentioned in the last comment, I could also imagine this behavior for
Flink. But in any case, there should be some time zone information
considered in order to cast to all other types.


>>> The functions CURRENT_DATE/CURRENT_TIME are supported in the SQL
>>> standard, but LOCALDATE is not. I don’t think it’s a good idea to drop
>>> functions the SQL standard supports and introduce a replacement the SQL
>>> standard doesn’t mention.

We can still add those functions in the future. But since we don't offer
a TIME WITH TIME ZONE, it is better not to support this function at all
for now. And by the way, this is exactly what Microsoft SQL Server does:
it also just supports CURRENT_TIMESTAMP (but returns a TIMESTAMP without
a zone, which completes the confusion).


>>> I also agree that returning TIMESTAMP WITH LOCAL TIME ZONE for PROCTIME
>>> has clearer semantics, but I realized that users don’t care about the
>>> type so much as the value they see, and changing the type from
>>> TIMESTAMP to TIMESTAMP WITH LOCAL TIME ZONE brings a huge refactoring:
>>> we need to consider all places where the TIMESTAMP type is used

From a UDF perspective, I think nothing will change. The new type system
and type inference were designed to support all these cases. There is a
reason why Java adopted Joda-Time: it is hard to come up with a good time
library. That's why we and the other Hadoop ecosystem folks also decided
on 3 different kinds: LocalDateTime, ZonedDateTime, and Instant. It makes
the library more complex, but time is a complex topic.
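
The three-way split maps directly onto java.time; a small sketch (the
instant and the America/Chicago zone are illustrative) pairing each Java
type with the SQL type it roughly corresponds to:

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class ThreeTimestampKinds {
    public static void main(String[] args) {
        // Instant ~ TIMESTAMP WITH LOCAL TIME ZONE: a zone-free point on the UTC timeline.
        Instant instant = Instant.parse("2021-01-22T08:00:00Z");

        // ZonedDateTime ~ TIMESTAMP WITH TIME ZONE: the same point plus the producer's zone.
        ZonedDateTime zoned = instant.atZone(ZoneId.of("America/Chicago"));

        // LocalDateTime ~ TIMESTAMP: only wall-clock fields, no zone at all.
        LocalDateTime local = zoned.toLocalDateTime();

        System.out.println(instant); // 2021-01-22T08:00:00Z
        System.out.println(zoned);   // 2021-01-22T02:00-06:00[America/Chicago]
        System.out.println(local);   // 2021-01-22T02:00
    }
}
```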


I also doubt that many users work with only one time zone. Take the US 
as an example, a country with 3 different timezones. Somebody working 
with US data cannot properly see the data points with just LOCAL TIME 
ZONE. But on the other hand, a lot of event data is stored using a UTC 
timestamp.



>> Before jumping into technical details, let's take a step back to discuss
>> user experience.
>>
>> The first important question is what kind of date and time Flink will
>> display when users call CURRENT_TIMESTAMP and maybe also PROCTIME (if we
>> think they are similar).
>>
>> Should it always display the date and time in UTC or in the user's time
>> zone?

@Kurt: I think we all agree that the current behavior of just showing
UTC is wrong. Also, we all agree that when calling CURRENT_TIMESTAMP or
PROCTIME a user would like to see the time in their current time zone.


As you said, "my wall clock time".

However, the question is: what is the data type of what you "see"? If you
pass this record on to a different system, operator, or cluster, should
the "my" get lost or be materialized into the record?


TIMESTAMP -> completely lost and could cause confusion in a different system

TIMESTAMP WITH LOCAL TIME ZONE -> at least the UTC is correct, so you 
can provide a new local time zone


TIMESTAMP WITH TIME ZONE -> also "your" location is persisted
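
The first loss can be made concrete with java.time (zones and the instant
below are illustrative): once the zone is dropped, a downstream system
that reinterprets the plain wall-clock value in its own session zone
silently shifts the underlying instant.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class ZoneLossDemo {
    public static void main(String[] args) {
        Instant original = Instant.parse("2021-01-22T08:00:00Z");

        // Materialize as a zone-less wall-clock value in Berlin (like plain TIMESTAMP).
        LocalDateTime plain = original.atZone(ZoneId.of("Europe/Berlin")).toLocalDateTime();

        // A downstream system in New York reinterprets the same wall-clock value.
        Instant reinterpreted = plain.atZone(ZoneId.of("America/New_York")).toInstant();

        // The instant has silently shifted by the offset difference (6 hours here).
        System.out.println(original);      // 2021-01-22T08:00:00Z
        System.out.println(reinterpreted); // 2021-01-22T14:00:00Z
    }
}
```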

Regards,
Timo





[ANNOUNCE] 1.12.1 may still produce corrupted checkpoints

2021-01-22 Thread Arvid Heise
Dear users,

Unfortunately, the bug in unaligned checkpoints that we fixed in 1.12.1
still occurs under certain circumstances, so we recommend not using
unaligned checkpoints in production until 1.12.2. While normal processing
is not affected by this bug, a recovery with corrupted checkpoints will
not succeed.

If you have used unaligned checkpoints, you can change back to aligned
checkpoints when starting from an uncorrupted unaligned checkpoint. There
is no easy way to check whether a checkpoint is corrupted, but the rare
corruption happens most likely when you have short checkpointing intervals
(<1s), high backpressure, and the previous checkpoint was declined for
some reason. So to be safe, before switching back, make sure that the
last handful of checkpoints all succeeded.
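
For reference, unaligned checkpoints are an opt-in feature; a sketch of
switching back to aligned checkpoints via `flink-conf.yaml` (the key
below is the 1.12-era configuration option as I recall it; please verify
against the docs for your exact version):

```yaml
# Disable unaligned checkpoints until 1.12.2; aligned checkpoints are the default.
execution.checkpointing.unaligned: false
```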

We have already prepared a fix that we will merge into the release branch
today, but the discussion on when to release 1.12.2 has not started yet.

Best,

Arvid


Re:Re: [ANNOUNCE] Welcome Guowei Ma as a new Apache Flink Committer

2021-01-22 Thread pez1...@163.com
Congrats Guowei!

Re: [ANNOUNCE] 1.12.1 may still produce corrupted checkpoints

2021-01-22 Thread Xintong Song
Hi Arvid,

Thanks for the announcement.

I think we'd better also update the 1.12 release notes[1] and 1.12.1
release blog post[2].
Would you have time to help prepare the warning messages?

Thank you~

Xintong Song


[1]
https://github.com/apache/flink/blob/master/docs/release-notes/flink-1.12.md

[2]
https://github.com/apache/flink-web/blob/asf-site/_posts/2021-01-19-release-1.12.1.md





Re: [ANNOUNCE] 1.12.1 may still produce corrupted checkpoints

2021-01-22 Thread Arvid Heise
Hi Xintong,

yes, I'm on it.

Best,

Arvid

>


-- 

Arvid Heise | Senior Java Developer



Follow us @VervericaData

--

Join Flink Forward  - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time

--

Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany

--
Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
(Toni) Cheng


Re: [ANNOUNCE] 1.12.1 may still produce corrupted checkpoints

2021-01-22 Thread Till Rohrmann
Thanks for the update Arvid. This fix warrants a quick 1.12.2 release imo.

Cheers,
Till



Re: [ANNOUNCE] 1.12.1 may still produce corrupted checkpoints

2021-01-22 Thread Arvid Heise
Hi Till,

I completely agree with you.

Best,

Arvid

On Fri, Jan 22, 2021 at 1:46 PM Till Rohrmann  wrote:

> Thanks for the update Arvid. This fix warrants a quick 1.12.2 release imo.
>
> Cheers,
> Till
>
> On Fri, Jan 22, 2021 at 1:42 PM Arvid Heise  wrote:
>
> > Hi Xintong,
> >
> > yes, I'm on it.
> >
> > Best,
> >
> > Arvid
> >
> > On Fri, Jan 22, 2021 at 1:01 PM Xintong Song 
> > wrote:
> >
> > > Hi Arvid,
> > >
> > > Thanks for the announcement.
> > >
> > > I think we'd better also update the 1.12 release notes[1] and 1.12.1
> > > release blog post[2].
> > > Would you have time to help prepare the warning messages?
> > >
> > > Thank you~
> > >
> > > Xintong Song
> > >
> > >
> > > [1]
> > >
> > >
> >
> https://github.com/apache/flink/blob/master/docs/release-notes/flink-1.12.md
> > >
> > > [2]
> > >
> > >
> >
> https://github.com/apache/flink-web/blob/asf-site/_posts/2021-01-19-release-1.12.1.md
> > >
> > >
> > >
> > > On Fri, Jan 22, 2021 at 7:40 PM Arvid Heise  wrote:
> > >
> > > > Dear users,
> > > >
> > > > Unfortunately, the bug in the unaligned checkpoint that we fixed in
> > > 1.12.1
> > > > still occurs under certain circumstances, such that we recommend to
> not
> > > use
> > > > unaligned checkpoints in production until 1.12.2. While the normal
> > > > processing is not affected by this bug, a recovery with corrupted
> > > > checkpoints will not succeed.
> > > >
> > > > If you have used unaligned checkpoints, you can change back to
> aligned
> > > > checkpoint when starting from an uncorrupted unaligned checkpoint.
> > There
> > > is
> > > > no easy way to check if a checkpoint is corrupted or not, however,
> the
> > > rare
> > > > corruption happens most likely when you have short checkpointing
> > > intervals
> > > > (<1s), high backpressure, and the previous checkpoint was declined
> for
> > > some
> > > > reason. So to be safe, before switching back, make sure that the last
> > > > handful of checkpoints all succeeded.
> > > >
> > > > We have already prepared a fix that we will merge into the release
> > branch
> > > > today, but the discussion on when to release 1.12.2 has not started
> > yet.
> > > >
> > > > Best,
> > > >
> > > > Arvid
> > > >
> > >
> >
> >
>
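The workaround described above (reverting to aligned checkpoints until 1.12.2) boils down to a configuration change. A minimal sketch of the relevant flink-conf.yaml entries, using the standard checkpointing option names (the interval value is only an example):

```yaml
# Disable unaligned checkpoints until the fix lands in 1.12.2.
execution.checkpointing.unaligned: false
# A longer interval also reduces exposure, since the corruption was mostly
# observed with sub-second checkpointing intervals under high backpressure.
execution.checkpointing.interval: 60s
```

The same can be done programmatically via the job's CheckpointConfig before submitting.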


[jira] [Created] (FLINK-21097) Introduce e2e test framework for json plan test

2021-01-22 Thread godfrey he (Jira)
godfrey he created FLINK-21097:
--

 Summary: Introduce e2e test framework for json plan test
 Key: FLINK-21097
 URL: https://issues.apache.org/jira/browse/FLINK-21097
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: godfrey he
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-161: Configuration through environment variables

2021-01-22 Thread Ufuk Celebi
LGTM. Let's see what the others think...

On Thu, Jan 21, 2021, at 11:37 AM, Ingo Bürk wrote:
> Regarding env.java.opts, what special handling is needed there? AFAICT only
> the rejected alternative of substituting values would've had an effect on
> this.

Makes sense 👍

From the FLIP:
> This mapping is not strictly bijective, but cases with consecutive periods or 
> dashes in the key name are not considered here and should not (reasonably) be 
> allowed.

We could actually enforce such restrictions in the implementation of 
ConfigOption, avoiding any surprises down the line.

– Ufuk
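To make the bijectivity concern concrete, here is a hypothetical sketch of the key-to-variable mapping under discussion. The `FLINK_CONFIG_` prefix and the exact rules are assumptions for illustration, not the FLIP's final spec; because both '.' and '-' collapse to '_', the mapping cannot be inverted for keys containing dashes:

```java
import java.util.Locale;

public class EnvVarMapping {
    // Hypothetical mapping of a Flink config key to an environment variable
    // name; the FLINK_CONFIG_ prefix is an assumption for illustration.
    static String toEnvVarName(String configKey) {
        return "FLINK_CONFIG_" + configKey
                .replace('.', '_')
                .replace('-', '_')
                .toUpperCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // Both keys map to the same variable name: the mapping is not bijective,
        // which is why restricting key names in ConfigOption avoids surprises.
        System.out.println(toEnvVarName("taskmanager.memory.process.size"));
        System.out.println(toEnvVarName("taskmanager.memory.process-size"));
    }
}
```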


Re: [DISCUSS] FLIP-161: Configuration through environment variables

2021-01-22 Thread Ingo Bürk
Thanks, Ufuk. I think that makes sense, so I moved it from a footnote to an
addition to prevent that in the future as well.

Ingo

On Fri, Jan 22, 2021 at 3:10 PM Ufuk Celebi  wrote:

> LGTM. Let's see what the others think...
>
> On Thu, Jan 21, 2021, at 11:37 AM, Ingo Bürk wrote:
> > Regarding env.java.opts, what special handling is needed there? AFAICT
> only
> > the rejected alternative of substituting values would've had an effect on
> > this.
>
> Makes sense 👍
>
> From the FLIP:
> > This mapping is not strictly bijective, but cases with consecutive
> periods or dashes in the key name are not considered here and should not
> (reasonably) be allowed.
>
> We could actually enforce such restrictions in the implementation of
> ConfigOption, avoiding any surprises down the line.
>
> – Ufuk
>


Re: [DISCUSS] FLIP-161: Configuration through environment variables

2021-01-22 Thread Till Rohrmann
The updated design LGTM as well. Nice work Ingo!

Cheers,
Till

On Fri, Jan 22, 2021 at 3:33 PM Ingo Bürk  wrote:

> Thanks, Ufuk. I think that makes sense, so I moved it from a footnote to an
> addition to prevent that in the future as well.
>
> Ingo
>
> On Fri, Jan 22, 2021 at 3:10 PM Ufuk Celebi  wrote:
>
> > LGTM. Let's see what the others think...
> >
> > On Thu, Jan 21, 2021, at 11:37 AM, Ingo Bürk wrote:
> > > Regarding env.java.opts, what special handling is needed there? AFAICT
> > only
> > > the rejected alternative of substituting values would've had an effect
> on
> > > this.
> >
> > Makes sense 👍
> >
> > From the FLIP:
> > > This mapping is not strictly bijective, but cases with consecutive
> > periods or dashes in the key name are not considered here and should not
> > (reasonably) be allowed.
> >
> > We could actually enforce such restrictions in the implementation of
> > ConfigOption, avoiding any surprises down the line.
> >
> > – Ufuk
> >
>


[DISCUSS] FLIP-160: Declarative scheduler

2021-01-22 Thread Till Rohrmann
Hi everyone,

I would like to start a discussion about adding a new type of scheduler to
Flink. The declarative scheduler will first declare the required resources
and wait for them before deciding on the actual parallelism of a job.
Thereby it can better handle situations where resources cannot be fully
fulfilled. Moreover, it will act as a building block for the reactive mode
where Flink should scale to the maximum of the currently available
resources.

Please find more details in the FLIP wiki document [1]. Looking forward to
your feedback.

[1] https://cwiki.apache.org/confluence/x/mwtRCg

Cheers,
Till
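As an illustration of the declare-and-wait idea (all names here are assumptions for the sketch, not the FLIP-160 API): the scheduler first declares its desired resources, and once an allocation arrives it derives the parallelism it can actually run with, rather than failing when the declaration cannot be fully fulfilled:

```java
public class DeclarativeSketch {
    // Illustrative only: pick the parallelism to run with, given the desired
    // parallelism and the slots that could actually be acquired.
    static int decideParallelism(int desiredParallelism, int availableSlots) {
        if (availableSlots <= 0) {
            // Not enough resources yet; keep waiting instead of failing hard.
            throw new IllegalStateException("no slots available yet");
        }
        return Math.min(desiredParallelism, availableSlots);
    }

    public static void main(String[] args) {
        // With only 4 of 10 desired slots, the job still runs, at parallelism 4.
        System.out.println(decideParallelism(10, 4));
    }
}
```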


[jira] [Created] (FLINK-21099) Introduce JobType to distinguish between batch and streaming jobs

2021-01-22 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-21099:
-

 Summary: Introduce JobType to distinguish between batch and 
streaming jobs
 Key: FLINK-21099
 URL: https://issues.apache.org/jira/browse/FLINK-21099
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Till Rohrmann
 Fix For: 1.13.0


In order to distinguish between batch and streaming jobs, we propose to 
introduce an enum {{JobType}} which is set in the {{JobGraph}} when creating 
it. Using the {{JobType}} it will be possible to decide which scheduler to use 
depending on the nature of the job.

For batch jobs (from the DataSet API), setting this field is trivial (in the 
JobGraphGenerator).

For streaming jobs the situation is more complicated, since FLIP-134 introduced 
support for bounded (batch) jobs in the DataStream API. For the DataStream API, 
we rely on the result of StreamGraphGenerator#shouldExecuteInBatchMode, which 
checks if the DataStream program has unbounded sources.

Lastly, the Blink Table API / SQL Planner also generates StreamGraph instances, 
which contain batch jobs. We are tagging the StreamGraph as a batch job in the 
ExecutorUtils.setBatchProperties() method.
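A minimal sketch of what such an enum and a type-based scheduler selection could look like; the field placement, method names, and the concrete scheduler mapping below are guesses for illustration, not the final Flink API:

```java
public class JobTypeSketch {
    // Sketch of the proposed enum; in the proposal the real field would live
    // on the JobGraph and be set by the respective graph generators.
    enum JobType { BATCH, STREAMING }

    // Hypothetical scheduler selection based on the job type; the mapping of
    // job type to scheduler name is an assumption.
    static String chooseScheduler(JobType jobType) {
        return jobType == JobType.BATCH ? "DefaultScheduler" : "DeclarativeScheduler";
    }

    public static void main(String[] args) {
        System.out.println(chooseScheduler(JobType.STREAMING));
    }
}
```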



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21100) Add DeclarativeScheduler skeleton

2021-01-22 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-21100:
-

 Summary: Add DeclarativeScheduler skeleton
 Key: FLINK-21100
 URL: https://issues.apache.org/jira/browse/FLINK-21100
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Till Rohrmann
 Fix For: 1.13.0


Add the {{DeclarativeScheduler}} skeleton as proposed in 
[FLIP-160|https://cwiki.apache.org/confluence/x/mwtRCg]. The skeleton will 
contain the basic state machine and rudimentary functionality of the scheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21101) Set up cron job to run CI with declarative scheduler

2021-01-22 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-21101:
-

 Summary: Set up cron job to run CI with declarative scheduler
 Key: FLINK-21101
 URL: https://issues.apache.org/jira/browse/FLINK-21101
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Till Rohrmann
 Fix For: 1.13.0


Once the declarative scheduler has been merged, we should create a Cron job to 
run all CI profiles with this scheduler in order to find all remaining test 
failures.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21098) Add SlotAllocator

2021-01-22 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-21098:
-

 Summary: Add SlotAllocator
 Key: FLINK-21098
 URL: https://issues.apache.org/jira/browse/FLINK-21098
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Till Rohrmann
 Fix For: 1.13.0


We should add the {{SlotAllocator}} as defined in 
[FLIP-160|https://cwiki.apache.org/confluence/x/mwtRCg]. The {{SlotAllocator}} 
implementation should support 
* slot sharing
* respect the configured parallelism of operators
* respect the max parallelism of operators (this point might already be covered 
if the configured parallelism cannot exceed the max parallelism; we should 
double-check this)
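The parallelism-related requirements above essentially amount to a clamp; a minimal sketch under that assumption (all names here are invented for illustration):

```java
public class SlotAllocatorSketch {
    // Illustrative only: the parallelism actually used must not exceed the
    // operator's max parallelism. If configuration already enforces
    // configured <= max, this clamp is a no-op, which is exactly the point
    // to double-check.
    static int resolveParallelism(int configured, int maxParallelism) {
        return Math.min(configured, maxParallelism);
    }

    public static void main(String[] args) {
        System.out.println(resolveParallelism(128, 32)); // clamped to 32
    }
}
```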



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21102) Add ScaleUpController

2021-01-22 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-21102:
-

 Summary: Add ScaleUpController
 Key: FLINK-21102
 URL: https://issues.apache.org/jira/browse/FLINK-21102
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Till Rohrmann
 Fix For: 1.13.0


Add the {{ScaleUpController}} according to the definition in 
[FLIP-160|https://cwiki.apache.org/confluence/x/mwtRCg].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-161: Configuration through environment variables

2021-01-22 Thread Chesnay Schepler
The FLIP seems to disregard the existence of dynamic properties, and I'm 
wondering why that is the case.
Don't they fulfill similar roles, in that they allow config options to 
be set on the command-line?


What use-case do they currently not support?
I assume it's something along the lines of setting some environment 
variable for containers, but at least for those based on our docker 
images we already have a mechanism to support that.


In any case, wouldn't a single DYNAMIC_PROPERTIES variable suffice that 
is automatically evaluated by all scripts?

Why go through all the hassle with the environment variable names?

On 1/22/2021 3:53 PM, Till Rohrmann wrote:

The updated design LGTM as well. Nice work Ingo!

Cheers,
Till

On Fri, Jan 22, 2021 at 3:33 PM Ingo Bürk  wrote:


Thanks, Ufuk. I think that makes sense, so I moved it from a footnote to an
addition to prevent that in the future as well.

Ingo

On Fri, Jan 22, 2021 at 3:10 PM Ufuk Celebi  wrote:


LGTM. Let's see what the others think...

On Thu, Jan 21, 2021, at 11:37 AM, Ingo Bürk wrote:

Regarding env.java.opts, what special handling is needed there? AFAICT

only

the rejected alternative of substituting values would've had an effect

on

this.

Makes sense 👍

 From the FLIP:

This mapping is not strictly bijective, but cases with consecutive

periods or dashes in the key name are not considered here and should not
(reasonably) be allowed.

We could actually enforce such restrictions in the implementation of
ConfigOption, avoiding any surprises down the line.

– Ufuk





[DISCUSS] FLIP-159: Reactive Mode

2021-01-22 Thread Robert Metzger
Hi all,

Till started a discussion about FLIP-160: Declarative scheduler [1] earlier
today, the first major feature based on that effort will be FLIP-159:
Reactive Mode. It allows users to operate Flink in a way that it reactively
scales the job up or down depending on the provided resources: adding
TaskManagers will scale the job up, removing them will scale it down again.

Here's the link to the Wiki:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-159%3A+Reactive+Mode

We are very excited to hear your feedback about the proposal!

Best,
Robert

[1]
https://lists.apache.org/thread.html/r604a01f739639e2a5f093fbe7894c172125530332747ecf6990a6ce4%40%3Cdev.flink.apache.org%3E


[jira] [Created] (FLINK-21103) E2e tests time out on azure

2021-01-22 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-21103:


 Summary: E2e tests time out on azure
 Key: FLINK-21103
 URL: https://issues.apache.org/jira/browse/FLINK-21103
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines, Tests
Affects Versions: 1.13.0
Reporter: Dawid Wysakowicz
 Fix For: 1.13.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12377&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-159: Reactive Mode

2021-01-22 Thread Steven Wu
Thanks a lot for the proposal, Robert and Till.

> No fixed parallelism for any of the operators

Regarding this limitation, can the scheduler only adjust the default
parallelism? if some operators set parallelism explicitly (like always 1),
just leave them unchanged.


On Fri, Jan 22, 2021 at 8:42 AM Robert Metzger  wrote:

> Hi all,
>
> Till started a discussion about FLIP-160: Declarative scheduler [1] earlier
> today, the first major feature based on that effort will be FLIP-159:
> Reactive Mode. It allows users to operate Flink in a way that it reactively
> scales the job up or down depending on the provided resources: adding
> TaskManagers will scale the job up, removing them will scale it down again.
>
> Here's the link to the Wiki:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-159%3A+Reactive+Mode
>
> We are very excited to hear your feedback about the proposal!
>
> Best,
> Robert
>
> [1]
>
> https://lists.apache.org/thread.html/r604a01f739639e2a5f093fbe7894c172125530332747ecf6990a6ce4%40%3Cdev.flink.apache.org%3E
>


Re: [DISCUSS] FLIP-160: Declarative scheduler

2021-01-22 Thread Steven Wu
Till, thanks a lot for the proposal.

Even if the initial phase is only to support scale-up, maybe the
"ScaleUpController" interface should be called "RescaleController" so that
in the future scale-down can be added.

On Fri, Jan 22, 2021 at 7:03 AM Till Rohrmann  wrote:

> Hi everyone,
>
> I would like to start a discussion about adding a new type of scheduler to
> Flink. The declarative scheduler will first declare the required resources
> and wait for them before deciding on the actual parallelism of a job.
> Thereby it can better handle situations where resources cannot be fully
> fulfilled. Moreover, it will act as a building block for the reactive mode
> where Flink should scale to the maximum of the currently available
> resources.
>
> Please find more details in the FLIP wiki document [1]. Looking forward to
> your feedback.
>
> [1] https://cwiki.apache.org/confluence/x/mwtRCg
>
> Cheers,
> Till
>


I want to contribute to Apache Flink

2021-01-22 Thread nicygan
Hi Guys,

I want to contribute to Apache Flink.
Would you please give me the permission as a contributor?
My JIRA ID is nicygan.




nicygan
read3...@163.com

[jira] [Created] (FLINK-21104) UnalignedCheckpointITCase.execute failed with "IllegalStateException"

2021-01-22 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-21104:


 Summary: UnalignedCheckpointITCase.execute failed with 
"IllegalStateException"
 Key: FLINK-21104
 URL: https://issues.apache.org/jira/browse/FLINK-21104
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.13.0
Reporter: Huang Xingbo


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12383&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0]
{code:java}
2021-01-22T15:17:34.6711152Z [ERROR] execute[Parallel union, p = 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time elapsed: 3.903 s  <<< ERROR!
2021-01-22T15:17:34.6711736Z org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2021-01-22T15:17:34.6712204Z    at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
2021-01-22T15:17:34.6712779Z    at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117)
2021-01-22T15:17:34.6713344Z    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
2021-01-22T15:17:34.6713816Z    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
2021-01-22T15:17:34.6714454Z    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
2021-01-22T15:17:34.6714952Z    at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
2021-01-22T15:17:34.6715472Z    at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:238)
2021-01-22T15:17:34.6716026Z    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
2021-01-22T15:17:34.6716631Z    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
2021-01-22T15:17:34.6717128Z    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
2021-01-22T15:17:34.6717616Z    at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
2021-01-22T15:17:34.6718105Z    at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1046)
2021-01-22T15:17:34.6718596Z    at akka.dispatch.OnComplete.internal(Future.scala:264)
2021-01-22T15:17:34.6718973Z    at akka.dispatch.OnComplete.internal(Future.scala:261)
2021-01-22T15:17:34.6719364Z    at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
2021-01-22T15:17:34.6719748Z    at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
2021-01-22T15:17:34.6720155Z    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
2021-01-22T15:17:34.6720641Z    at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
2021-01-22T15:17:34.6721236Z    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
2021-01-22T15:17:34.6721706Z    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
2021-01-22T15:17:34.6722205Z    at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
2021-01-22T15:17:34.6722663Z    at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
2021-01-22T15:17:34.6723214Z    at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
2021-01-22T15:17:34.6723723Z    at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
2021-01-22T15:17:34.6724146Z    at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
2021-01-22T15:17:34.6724726Z    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
2021-01-22T15:17:34.6725198Z    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
2021-01-22T15:17:34.6725861Z    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
2021-01-22T15:17:34.6726525Z    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
2021-01-22T15:17:34.6727278Z    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
2021-01-22T15:17:34.6727773Z    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
2021-01-22T15:17:34.6728484Z    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
2021-01-22T15:17:34.6728969Z    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
2021-01-22T15:17:34.6729666Z    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
2021-01-22T15:17:34.6730373Z    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
2021-01-22T15:17:34.6731022Z    at akka.dispatch.forkjoin.ForkJoinPool$WorkQ