NPE in blink planner code gen

2019-12-15 Thread Benchao Li
hi all,

We are using the 1.9.0 blink planner, and we found that Flink throws an NPE when we
run the following SQL:

```
create table source (
  age int,
  id varchar
);
select case when age < 20 then cast(id as bigint) else 0 end from source;
```

After debugging the Janino-generated code, I found that the NPE is caused by
`BinaryStringUtil.toLong` returning `null`, which is then assigned to a
primitive `long` field.
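
For illustration, here is a minimal, self-contained sketch of the pattern I believe
the generated code falls into. The `toLongOrNull` helper below is just a stand-in
for `BinaryStringUtil.toLong`, not the actual Janino output:

```
// Sketch of the failure pattern: a nullable boxed Long is assigned to a
// primitive long, so a null parse result triggers an NPE via auto-unboxing.
public class CastNpeSketch {

    // Stand-in for BinaryStringUtil.toLong: returns null when the string is not a number.
    static Long toLongOrNull(String s) {
        try {
            return Long.parseLong(s);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        int age = 10;
        String id = "";                // blank id, as in the reported case
        long result;                   // primitive field, as in the generated code
        if (age < 20) {
            result = toLongOrNull(id); // NullPointerException: unboxing null
        } else {
            result = 0L;
        }
        System.out.println(result);
    }
}
```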

Then I tried the old planner; it throws a `java.lang.NumberFormatException` when
casting a blank string to int.
I also tried other illegal casts in blink, and they all come out as `null`.

So, here is my question:
Obviously, this is a bug in the blink planner, and we should fix it. There are
two ways to do so:
1. keep the cast behavior producing `null`, as it does now;
2. change the behavior of the blink planner to align with the old planner, which
throws a `NumberFormatException`.



Benchao Li
School of Electronics Engineering and Computer Science, Peking University
Tel:+86-15650713730
Email: libenc...@gmail.com; libenc...@pku.edu.cn


[jira] [Created] (FLINK-15262) kafka connector doesn't read from beginning immediately when 'connector.startup-mode' = 'earliest-offset'

2019-12-15 Thread Bowen Li (Jira)
Bowen Li created FLINK-15262:


 Summary: kafka connector doesn't read from beginning immediately 
when 'connector.startup-mode' = 'earliest-offset' 
 Key: FLINK-15262
 URL: https://issues.apache.org/jira/browse/FLINK-15262
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.10.0
Reporter: Bowen Li
Assignee: Jiangjie Qin
 Fix For: 1.10.0, 1.11.0


I created a kafka table in Flink to read from my kafka topic (which already has 
messages in it) from the earliest offset, but the `select * from test` query in Flink 
doesn't start to read until a new message arrives. If no new message arrives, the 
query just sits there and never produces a result.

What I expect is that the query should immediately produce results on all existing 
messages without having to wait for a new message to "trigger" data processing.

DDL that I used, according to the DDL documentation at 
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#kafka-connector

{code:java}
create table test(name String, age Int) with (
   'connector.type' = 'kafka',
   'connector.version' = 'universal',
   'connector.topic' = 'test',
   'connector.properties.zookeeper.connect' = 'localhost:2181',
   'connector.properties.bootstrap.servers' = 'localhost:9092',
   'connector.startup-mode' = 'earliest-offset',
   'format.type' = 'csv',
   'update-mode' = 'append'
);
{code}





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15263) add dedicated page for HiveCatalog

2019-12-15 Thread Bowen Li (Jira)
Bowen Li created FLINK-15263:


 Summary: add dedicated page for HiveCatalog
 Key: FLINK-15263
 URL: https://issues.apache.org/jira/browse/FLINK-15263
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive, Documentation
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.9.2, 1.10.0, 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Zhu Zhu
Thanks everyone for the warm welcome!
It's my honor and pleasure to improve Flink with all of you in the
community!

Thanks,
Zhu Zhu

Benchao Li  于2019年12月15日周日 下午3:54写道:

> Congratulations!:)
>
> Hequn Cheng  于2019年12月15日周日 上午11:47写道:
>
> > Congrats, Zhu Zhu!
> >
> > Best, Hequn
> >
> > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen  wrote:
> >
> > > Congratulations!
> > >
> > > On Sat, Dec 14, 2019 at 7:59 AM Rong Rong  wrote:
> > >
> > > > Congrats Zhu Zhu :-)
> > > >
> > > > --
> > > > Rong
> > > >
> > > > On Sat, Dec 14, 2019 at 4:47 AM tison  wrote:
> > > >
> > > > > Congratulations!:)
> > > > >
> > > > > Best,
> > > > > tison.
> > > > >
> > > > >
> > > > > OpenInx  于2019年12月14日周六 下午7:34写道:
> > > > >
> > > > > > Congrats Zhu Zhu!
> > > > > >
> > > > > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang 
> > wrote:
> > > > > >
> > > > > > > Congrats, Zhu Zhu!
> > > > > > >
> > > > > > > Paul Lam  于2019年12月14日周六 上午10:29写道:
> > > > > > >
> > > > > > > > Congrats Zhu Zhu!
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Paul Lam
> > > > > > > >
> > > > > > > > Kurt Young  于2019年12月14日周六 上午10:22写道:
> > > > > > > >
> > > > > > > > > Congratulations Zhu Zhu!
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Kurt
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > > > > > > sunjincheng...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Congrats ZhuZhu and welcome on board!
> > > > > > > > > >
> > > > > > > > > > Best,
> > > > > > > > > > Jincheng
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Jark Wu  于2019年12月14日周六 上午9:55写道:
> > > > > > > > > >
> > > > > > > > > > > Congratulations, Zhu Zhu!
> > > > > > > > > > >
> > > > > > > > > > > Best,
> > > > > > > > > > > Jark
> > > > > > > > > > >
> > > > > > > > > > > On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > > karma...@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Congrats, ZhuZhu!
> > > > > > > > > > > >
> > > > > > > > > > > > Bowen Li  于 2019年12月14日周六
> > 上午5:37写道:
> > > > > > > > > > > >
> > > > > > > > > > > > > Congrats!
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> > > > > usxu...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Congratulations, Zhu Zhu!
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> > > > > > > > > > > > huangzhenqiu0...@gmail.com
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Congratulations!:)
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
> <
> > > > > > > > > > > pi...@ververica.com>
> > > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Congratulations! :)
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > On 13 Dec 2019, at 18:05, Fabian Hueske <
> > > > > > > > fhue...@gmail.com
> > > > > > > > > >
> > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Congrats Zhu Zhu and welcome on board!
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Best, Fabian
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> > Till
> > > > > > > Rohrmann
> > > > > > > > <
> > > > > > > > > > > > > > > > > trohrm...@apache.org>:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >> Hi everyone,
> > > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > > >> I'm very happy to announce that Zhu Zhu
> > > accepted
> > > > > the
> > > > > > > > offer
> > > > > > > > > > of
> > > > > > > > > > > > the
> > > > > > > > > > > > > > > Flink
> > > > > > > > > > > > > > > > PMC
> > > > > > > > > > > > > > > > >> to become a committer of the Flink
> project.
> > > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > > >> Zhu Zhu has been an active community
> member
> > > for
> > > > > more
> > > > > > > > than
> > > > > > > > > a
> > > > > > > > > > > year
> > > > > > > > > > > > > > now.
> > > > > > > > > > > > > > > > Zhu
> > > > > > > > > > > > > > > > >> Zhu played an essential role in the
> > scheduler
> > > > > > > > refactoring,
> > > > > > > > > > > > helped
> > > > > > > > > > > > > > > > >> implementing fine grained recovery, drives
> > > > FLIP-53
> > > > > > and
> > > > > > > > > fixed
> > > > > > > > > > > > > various
> > > > > > > > > > > > > > > > bugs
> > > > > > > > > > > > > > > > >> in the scheduler and runtime. Zhu Zhu also
> > > > helped
> > > > > > the
> > > > > > > > > > > community
> > > > > > > > > > > > by
> > > > > > > > > > > > > > > > >> reporting issues, answering user mails and
> > > being
> > > > > > > active
> > > > > > > > on
> > > > > > > > > > the
> > > > > > > > > > > > dev
> > > > > > > > > > > > > > > > 

[jira] [Created] (FLINK-15264) Job Manager TASK_SLOTS_TOTAL metric does not show the Job ID

2019-12-15 Thread xiaogang zhou (Jira)
xiaogang zhou created FLINK-15264:
-

 Summary: Job Manager TASK_SLOTS_TOTAL metric does not show the 
Job ID
 Key: FLINK-15264
 URL: https://issues.apache.org/jira/browse/FLINK-15264
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Reporter: xiaogang zhou


I am running Flink in single-job mode on YARN, so each job has its own Job Manager 
running somewhere on a YARN host.

 

Sometimes different Job Managers run on the same host, and the TASK_SLOTS_TOTAL 
metric carries no identification of which job it belongs to. Can we support a metric 
which can tell how many slots a job occupies?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-27: Refactor Source Interface

2019-12-15 Thread Becket Qin
Hi Dawid and Jark,

I think the discussion ultimately boils down to the question that which one
of the following two final states do we want? Once we make this decision,
everything else can be naturally derived.

*Final state 1*: Separate API for bounded / unbounded DataStream & Table.
That means any code users write will be valid at the point when they write
the code. This is similar to having type safety check at programming time.
For example,

BoundedDataStream extends DataStream {
    // Operations only available for bounded data.
    BoundedDataStream sort(...);

    // Interaction with another BoundedDataStream returns a bounded stream.
    BoundedJoinedDataStream join(BoundedDataStream other);

    // Interaction with another unbounded stream returns an unbounded stream.
    JoinedDataStream join(DataStream other);
}

BoundedTable extends Table {
    // Bounded-only operation.
    BoundedTable sort(...);

    // Interaction with another BoundedTable returns a BoundedTable.
    BoundedTable join(BoundedTable other);

    // Interaction with another unbounded table returns an unbounded table.
    Table join(Table other);
}

*Final state 2*: One unified API for bounded / unbounded DataStream /
Table.
That unified API may throw exception at DAG compilation time if an invalid
operation is tried. This is what Table API currently follows.

DataStream {
    // Throws an exception if the DataStream is unbounded.
    DataStream sort();

    // Get boundedness.
    Boundedness getBoundedness();
}

Table {
    // Throws an exception if the table has infinite rows.
    Table orderBy();

    // Get boundedness.
    Boundedness getBoundedness();
}

From what I understand, there is no consensus on this decision so far.
Whichever final state we choose, we need to make it consistent across the
entire project. We should avoid the case where Table follows one final state
while DataStream follows another. Some arguments I am aware of from both
sides so far are the following:

Arguments for final state 1:
1a) Clean API with method safety check at programming time.
1b) (Counter to 2b) Although SQL does not have programming-time error checks, SQL
is not really a "programming language" per se, so SQL can be different from
Table and DataStream.
1c) Although final state 2 seems to make things easier for SQL, given it is
more "config based" than "parameter based", final state 1 can probably
also meet what SQL wants by wrapping the Source in the TableSource /
TableSourceFactory API if needed.

Arguments for final state 2:
2a) The Source API itself already seems to more or less follow the unified API
pattern.
2b) There is no "programming time" method error check in the SQL case, so we
cannot really achieve final state 1 across the board.
2c) It is an easier path given our current status, i.e. Table is already
following final state 2.
2d) Users can always explicitly check the boundedness if they want to.

As I mentioned earlier, my initial thought was also to have a
"configuration based" Source rather than a "parameter based" Source. So it
is completely possible that I missed some important consideration or design
principles that we want to enforce for the project. It would be good if
@Stephan Ewen and @Aljoscha Krettek could also provide more thoughts on this.


Re: Jingsong

> As you said, there are some batched system source, like parquet/orc source.
> Could we have the batch emit interface to improve performance? The queue of
> per record may cause performance degradation.


The current interface does not necessarily cause performance problems in a
multi-threaded case. In fact, the base implementation allows SplitReaders
to add a batch of records to the records queue, so each element in the
records queue would be a batch. In this case, when the main thread polls
records, it will take a batch of records from the shared records queue and
process the records in a batch manner.
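
To make that concrete, here is a small self-contained sketch (assumed names, not the
actual FLIP-27 classes) of a per-batch hand-over: the reader thread touches the shared
queue once per batch, and the main thread dequeues and processes a whole batch at a time:

```
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchedHandOverSketch {

    // Each queue element is a whole batch, not a single record.
    static final BlockingQueue<List<String>> RECORDS_QUEUE = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // "SplitReader" side: one queue operation per fetched batch.
        Thread reader = new Thread(() -> {
            List<String> batch = Arrays.asList("a", "b", "c"); // stand-in for a fetched batch
            try {
                RECORDS_QUEUE.put(batch);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        reader.start();

        // Main thread: poll one batch and process its records in a batch manner.
        List<String> polled = RECORDS_QUEUE.take();
        for (String record : polled) {
            System.out.println("emit " + record);
        }
        reader.join();
    }
}
```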

Thanks,

Jiangjie (Becket) Qin

On Thu, Dec 12, 2019 at 1:29 PM Jingsong Li  wrote:

> Hi Becket,
>
> I also have some performance concerns too.
>
> If I understand correctly, SourceOutput will emit data per record into the
> queue? I'm worried about the multithreading performance of this queue.
>
> > One example is some batched messaging systems which only have an offset
> for the entire batch instead of individual messages in the batch.
>
> As you said, there are some batched system source, like parquet/orc source.
> Could we have the batch emit interface to improve performance? The queue of
> per record may cause performance degradation.
>
> Best,
> Jingsong Lee
>
> On Thu, Dec 12, 2019 at 9:15 AM Jark Wu  wrote:
>
> > Hi Becket,
> >
> > I think Dawid explained things clearly and makes a lot of sense.
> > I'm also in favor of #2, because #1 doesn't work for our future unified
> > envrionment.
> >
> > You can see the vision in this documentation [1]. In the future, we would
> > like to
> > drop the global streaming/batch mode in SQL (i.e.
> > EnvironmentSettings#inStreamingMode/inBatchMode).
> > A source is bounded or unbounded once defined, so queries can

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Dian Fu
Congrats Zhu Zhu!

> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> 
> Thanks everyone for the warm welcome!
> It's my honor and pleasure to improve Flink with all of you in the
> community!
> 
> Thanks,
> Zhu Zhu
> 
> Benchao Li  于2019年12月15日周日 下午3:54写道:
> 
>> Congratulations!:)
>> 
>> Hequn Cheng  于2019年12月15日周日 上午11:47写道:
>> 
>>> Congrats, Zhu Zhu!
>>> 
>>> Best, Hequn
>>> 
>>> On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen  wrote:
>>> 
 Congratulations!
 
 On Sat, Dec 14, 2019 at 7:59 AM Rong Rong  wrote:
 
> Congrats Zhu Zhu :-)
> 
> --
> Rong
> 
> On Sat, Dec 14, 2019 at 4:47 AM tison  wrote:
> 
>> Congratulations!:)
>> 
>> Best,
>> tison.
>> 
>> 
>> OpenInx  于2019年12月14日周六 下午7:34写道:
>> 
>>> Congrats Zhu Zhu!
>>> 
>>> On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang 
>>> wrote:
>>> 
 Congrats, Zhu Zhu!
 
 Paul Lam  于2019年12月14日周六 上午10:29写道:
 
> Congrats Zhu Zhu!
> 
> Best,
> Paul Lam
> 
> Kurt Young  于2019年12月14日周六 上午10:22写道:
> 
>> Congratulations Zhu Zhu!
>> 
>> Best,
>> Kurt
>> 
>> 
>> On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
 sunjincheng...@gmail.com>
>> wrote:
>> 
>>> Congrats ZhuZhu and welcome on board!
>>> 
>>> Best,
>>> Jincheng
>>> 
>>> 
>>> Jark Wu  于2019年12月14日周六 上午9:55写道:
>>> 
 Congratulations, Zhu Zhu!
 
 Best,
 Jark
 
 On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
 karma...@gmail.com
>> 
 wrote:
 
> Congrats, ZhuZhu!
> 
> Bowen Li  于 2019年12月14日周六
>>> 上午5:37写道:
> 
>> Congrats!
>> 
>> On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
>> usxu...@gmail.com>
>> wrote:
>> 
>>> Congratulations, Zhu Zhu!
>>> 
>>> On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> huangzhenqiu0...@gmail.com
>>> 
>>> wrote:
>>> 
 Congratulations!:)
 
 On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
>> <
 pi...@ververica.com>
 wrote:
 
> Congratulations! :)
> 
>> On 13 Dec 2019, at 18:05, Fabian Hueske <
> fhue...@gmail.com
>>> 
> wrote:
>> 
>> Congrats Zhu Zhu and welcome on board!
>> 
>> Best, Fabian
>> 
>> Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
>>> Till
 Rohrmann
> <
>> trohrm...@apache.org>:
>> 
>>> Hi everyone,
>>> 
>>> I'm very happy to announce that Zhu Zhu
 accepted
>> the
> offer
>>> of
> the
 Flink
> PMC
>>> to become a committer of the Flink
>> project.
>>> 
>>> Zhu Zhu has been an active community
>> member
 for
>> more
> than
>> a
 year
>>> now.
> Zhu
>>> Zhu played an essential role in the
>>> scheduler
> refactoring,
> helped
>>> implementing fine grained recovery, drives
> FLIP-53
>>> and
>> fixed
>> various
> bugs
>>> in the scheduler and runtime. Zhu Zhu also
> helped
>>> the
 community
> by
>>> reporting issues, answering user mails and
 being
 active
> on
>>> the
> dev
> mailing
>>> list.
>>> 
>>> Congratulations Zhu Zhu!
>>> 
>>> Best, Till
>>> (on behalf of the Flink PMC)
>>> 
> 
> 
 
>>> 
>>> 
>>> --
>>> Xuefu Zhang
>>> 
>>> "In Honey We Trust!"
>>> 
>> 
> 
 
>>> 
>> 
> 
 
 
 --
 Best Regards
 
 Jeff Zhang
 
>>> 
>> 
> 
 
>>> 
>> 
>> 
>> --
>> 
>> Benchao Li
>> School of Electronics Engineering and Computer Science, Peking University
>> Tel:+86-15650713730
>> Email: libenc...@gmail.com; libenc...@pku.edu.cn
>> 



Re: [DISCUSS] Improve documentation / tooling around security of Flink

2019-12-15 Thread Konstantin Knauf
Hi Robert,

we could also add a warning (or a general "security" section) to the
"production readiness checklist" in the documentation.

Generally, I like d) in combination with an informative log message. Do you
think this would cause a lot of friction?

Cheers,

Konstantin

On Fri, Dec 13, 2019 at 2:06 PM Chesnay Schepler  wrote:

> Another proposal that was brought up was to provide a script for
> generating an SSL certificate with the distribution.
>
> On 12/12/2019 17:45, Robert Metzger wrote:
> > Hi all,
> >
> > There was recently a private report to the Flink PMC, as well as publicly
> > [1] about Flink's ability to execute arbitrary code. In scenarios where
> > Flink is accessible by somebody unauthorized, this can lead to issues.
> > The PMC received a similar report in November 2018.
> >
> > I believe it would be good to warn our users a bit more prominently about
> > the risks of accidentally opening up Flink to the public internet, or
> other
> > unauthorized entities.
> >
> > I have collected the following potential solutions discussed so far:
> >
> > a) Add a check-security.sh script, or a check into the frontend if the
> > JobManager can be reached on the public internet
> > b) Add a prominent warning to the download page
> > c) add an opt-out warning to the Flink logs / UI that can be disabled via
> > the config.
> > d) Bind the REST endpoint to localhost only, by default
> >
> >
> > I'm curious to hear if others have other ideas what to do.
> > I personally like to kick things off with b).
> >
> >
> > Best,
> > Robert
> >
> >
> > [1] https://twitter.com/pyn3rd/status/1197397475897692160
> >
>
>

-- 

Konstantin Knauf | Solutions Architect

+49 160 91394525


Follow us @VervericaData Ververica 


--

Join Flink Forward  - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time

--

Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany

--
Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
(Tony) Cheng


[ANNOUNCE] Weekly Community Update 2019/50

2019-12-15 Thread Konstantin Knauf
Dear community,

happy to share this week's brief community digest with updates on Flink
1.8.3 and Flink 1.10, a discussion on how to facilitate easier Flink/Hive
setups, a couple of blog posts and a bit more.

*Personal Note:* Thank you for reading these updates since I started them
early this year. I will take a three week Christmas break and will be back
with a Holiday season community update on the 12th of January.

Flink Development
==

* [releases] Apache Flink 1.8.3 was released on Wednesday. [1,2]

* [releases] The feature freeze for Apache Flink took place on Monday. The
community is now working on testing, bug fixes and improving the
documentation in order to create a first release candidate soon. [3]

* [development process] Seth has revived the discussion on a past PR by
Marta, which added a documentation style guide to the contributor guide.
Please check it out [4] if you are contributing documentation to Apache
Flink. [5]

* [security] Following a recent report to the Flink PMC of "exploiting" the
Flink Web UI for remote code execution, Robert has started a discussion on
how to improve the tooling/documentation to make users aware of this
possibility and recommend securing this interface in production setups. [6]

* [sql] Bowen has started a discussion on how to simplify the Flink-Hive
setup for new users, as currently users need to add some additional
dependencies to the classpath manually. The discussion seems to be converging
on providing a single additional hive-uber jar, which contains all the
required dependencies. [7]

[1] https://flink.apache.org/news/2019/12/11/release-1.8.3.html
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Apache-Flink-1-8-3-released-tp35868.html
[3]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Feature-freeze-for-Apache-Flink-1-10-0-release-tp35139.html
[4] https://github.com/apache/flink-web/pull/240
[5]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-Docs-Style-Guide-Review-tp35758.html
[6]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Improve-documentation-tooling-around-security-of-Flink-tp35898.html
[7]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-have-separate-Flink-distributions-with-built-in-Hive-dependencies-tp35918.html

Notable Bugs
==

[FLINK-15152] [1.9.1] When a "stop" action on a job fails because not all
tasks are in "RUNNING" state, the job does not checkpoint afterwards. [8]

[8] https://issues.apache.org/jira/browse/FLINK-15152

Events, Blog Posts, Misc
===

* Zhu Zhu is now an Apache Flink committer. Congratulations! [9]

* Gerred Dillon has published a blog post on the Apache Flink blog on how
to run Flink on Kubernetes with a KUDO Flink operator. [10]

* In this blog post, Apache Flink PMC member Sun Jincheng outlines the reasons and
motivation for his and his colleagues' work to provide world-class Python
support for Apache Flink's Table API. [11]

* Upcoming Meetups
* On December 17th there will be the second Apache Flink meetup in
Seoul. [12] *Dongwon* has shared a detailed agenda in last week's community
update. [13]
* On December 18th Alexander Fedulov will talk about Stateful Stream
Processing with Apache Flink at the Java Professionals Meetup in Minsk. [14]

[9]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Zhu-Zhu-becomes-a-Flink-committer-tp35944.html
[10] https://flink.apache.org/news/2019/12/09/flink-kubernetes-kudo.html
[11]
https://developpaper.com/why-will-apache-flink-1-9-0-support-the-python-api/
[12] https://www.meetup.com/Seoul-Apache-Flink-Meetup/events/266824815/
[13]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Weekly-Community-Update-2019-48-td35423.html
[14] https://www.meetup.com/Apache-Flink-Meetup-Minsk/events/267134296/

Cheers,

Konstantin (@snntrable)

-- 

Konstantin Knauf | Solutions Architect

+49 160 91394525


Follow us @VervericaData Ververica 


--

Join Flink Forward  - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time

--

Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany

--
Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
(Tony) Cheng


[jira] [Created] (FLINK-15265) Remove "-executor" suffix from executor names

2019-12-15 Thread Aljoscha Krettek (Jira)
Aljoscha Krettek created FLINK-15265:


 Summary: Remove "-executor" suffix from executor names
 Key: FLINK-15265
 URL: https://issues.apache.org/jira/browse/FLINK-15265
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek


The executor names always have "-executor" as a suffix, which is redundant. 
Currently, the executor name is also used to retrieve a {{ClusterClient}}, 
where it is unfortunate that the name has "-executor" as a suffix. In the future 
we might provide something like a {{FlinkClient}} that offers a programmatic 
API for the functionality of {{bin/flink}}; here we would also use the same 
names.

In reality, the "executor names" are not names of executors but deployment 
targets. That's why the current naming seems a bit unnatural.

This is a simple search-and-replace job, no new functionality.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Becket Qin
Congrats, Zhu Zhu!

On Sun, Dec 15, 2019 at 10:26 PM Dian Fu  wrote:

> Congrats Zhu Zhu!
>
> > 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> >
> > Thanks everyone for the warm welcome!
> > It's my honor and pleasure to improve Flink with all of you in the
> > community!
> >
> > Thanks,
> > Zhu Zhu
> >
> > Benchao Li  于2019年12月15日周日 下午3:54写道:
> >
> >> Congratulations!:)
> >>
> >> Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> >>
> >>> Congrats, Zhu Zhu!
> >>>
> >>> Best, Hequn
> >>>
> >>> On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen  wrote:
> >>>
>  Congratulations!
> 
>  On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
> wrote:
> 
> > Congrats Zhu Zhu :-)
> >
> > --
> > Rong
> >
> > On Sat, Dec 14, 2019 at 4:47 AM tison  wrote:
> >
> >> Congratulations!:)
> >>
> >> Best,
> >> tison.
> >>
> >>
> >> OpenInx  于2019年12月14日周六 下午7:34写道:
> >>
> >>> Congrats Zhu Zhu!
> >>>
> >>> On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang 
> >>> wrote:
> >>>
>  Congrats, Zhu Zhu!
> 
>  Paul Lam  于2019年12月14日周六 上午10:29写道:
> 
> > Congrats Zhu Zhu!
> >
> > Best,
> > Paul Lam
> >
> > Kurt Young  于2019年12月14日周六 上午10:22写道:
> >
> >> Congratulations Zhu Zhu!
> >>
> >> Best,
> >> Kurt
> >>
> >>
> >> On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
>  sunjincheng...@gmail.com>
> >> wrote:
> >>
> >>> Congrats ZhuZhu and welcome on board!
> >>>
> >>> Best,
> >>> Jincheng
> >>>
> >>>
> >>> Jark Wu  于2019年12月14日周六 上午9:55写道:
> >>>
>  Congratulations, Zhu Zhu!
> 
>  Best,
>  Jark
> 
>  On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
>  karma...@gmail.com
> >>
>  wrote:
> 
> > Congrats, ZhuZhu!
> >
> > Bowen Li  于 2019年12月14日周六
> >>> 上午5:37写道:
> >
> >> Congrats!
> >>
> >> On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> >> usxu...@gmail.com>
> >> wrote:
> >>
> >>> Congratulations, Zhu Zhu!
> >>>
> >>> On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> > huangzhenqiu0...@gmail.com
> >>>
> >>> wrote:
> >>>
>  Congratulations!:)
> 
>  On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
> >> <
>  pi...@ververica.com>
>  wrote:
> 
> > Congratulations! :)
> >
> >> On 13 Dec 2019, at 18:05, Fabian Hueske <
> > fhue...@gmail.com
> >>>
> > wrote:
> >>
> >> Congrats Zhu Zhu and welcome on board!
> >>
> >> Best, Fabian
> >>
> >> Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> >>> Till
>  Rohrmann
> > <
> >> trohrm...@apache.org>:
> >>
> >>> Hi everyone,
> >>>
> >>> I'm very happy to announce that Zhu Zhu
>  accepted
> >> the
> > offer
> >>> of
> > the
>  Flink
> > PMC
> >>> to become a committer of the Flink
> >> project.
> >>>
> >>> Zhu Zhu has been an active community
> >> member
>  for
> >> more
> > than
> >> a
>  year
> >>> now.
> > Zhu
> >>> Zhu played an essential role in the
> >>> scheduler
> > refactoring,
> > helped
> >>> implementing fine grained recovery, drives
> > FLIP-53
> >>> and
> >> fixed
> >> various
> > bugs
> >>> in the scheduler and runtime. Zhu Zhu also
> > helped
> >>> the
>  community
> > by
> >>> reporting issues, answering user mails and
>  being
>  active
> > on
> >>> the
> > dev
> > mailing
> >>> list.
> >>>
> >>> Congratulations Zhu Zhu!
> >>>
> >>> Best, Till
> >>> (on behalf of the Flink PMC)
> >>>
> >
> >
> 
> >>>
> >>>
> >>> --
> >>> Xuefu Zhang
> >>>
> >>> "In Honey We Trust!"
> >>>
> >>
> >
> 
> >>>
> >>
> >
> 
> 
>  --
>  Best Regards
> 
>  Jeff Zhang
> 
> >>>
> >

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Leonard Xu
Congratulations, Zhu Zhu ! !

Best,
Leonard Xu

> On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> 
> Congrats, Zhu Zhu!
> 
> On Sun, Dec 15, 2019 at 10:26 PM Dian Fu  wrote:
> 
>> Congrats Zhu Zhu!
>> 
>>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
>>> 
>>> Thanks everyone for the warm welcome!
>>> It's my honor and pleasure to improve Flink with all of you in the
>>> community!
>>> 
>>> Thanks,
>>> Zhu Zhu
>>> 
>>> Benchao Li  于2019年12月15日周日 下午3:54写道:
>>> 
 Congratulations!:)
 
 Hequn Cheng  于2019年12月15日周日 上午11:47写道:
 
> Congrats, Zhu Zhu!
> 
> Best, Hequn
> 
> On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen  wrote:
> 
>> Congratulations!
>> 
>> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
>> wrote:
>> 
>>> Congrats Zhu Zhu :-)
>>> 
>>> --
>>> Rong
>>> 
>>> On Sat, Dec 14, 2019 at 4:47 AM tison  wrote:
>>> 
 Congratulations!:)
 
 Best,
 tison.
 
 
 OpenInx  于2019年12月14日周六 下午7:34写道:
 
> Congrats Zhu Zhu!
> 
> On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang 
> wrote:
> 
>> Congrats, Zhu Zhu!
>> 
>> Paul Lam  于2019年12月14日周六 上午10:29写道:
>> 
>>> Congrats Zhu Zhu!
>>> 
>>> Best,
>>> Paul Lam
>>> 
>>> Kurt Young  于2019年12月14日周六 上午10:22写道:
>>> 
 Congratulations Zhu Zhu!
 
 Best,
 Kurt
 
 
 On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
>> sunjincheng...@gmail.com>
 wrote:
 
> Congrats ZhuZhu and welcome on board!
> 
> Best,
> Jincheng
> 
> 
> Jark Wu  于2019年12月14日周六 上午9:55写道:
> 
>> Congratulations, Zhu Zhu!
>> 
>> Best,
>> Jark
>> 
>> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
>> karma...@gmail.com
 
>> wrote:
>> 
>>> Congrats, ZhuZhu!
>>> 
>>> Bowen Li  于 2019年12月14日周六
> 上午5:37写道:
>>> 
 Congrats!
 
 On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
 usxu...@gmail.com>
 wrote:
 
> Congratulations, Zhu Zhu!
> 
> On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
>>> huangzhenqiu0...@gmail.com
> 
> wrote:
> 
>> Congratulations!:)
>> 
>> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
 <
>> pi...@ververica.com>
>> wrote:
>> 
>>> Congratulations! :)
>>> 
 On 13 Dec 2019, at 18:05, Fabian Hueske <
>>> fhue...@gmail.com
> 
>>> wrote:
 
 Congrats Zhu Zhu and welcome on board!
 
 Best, Fabian
 
 Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> Till
>> Rohrmann
>>> <
 trohrm...@apache.org>:
 
> Hi everyone,
> 
> I'm very happy to announce that Zhu Zhu
>> accepted
 the
>>> offer
> of
>>> the
>> Flink
>>> PMC
> to become a committer of the Flink
 project.
> 
> Zhu Zhu has been an active community
 member
>> for
 more
>>> than
 a
>> year
> now.
>>> Zhu
> Zhu played an essential role in the
> scheduler
>>> refactoring,
>>> helped
> implementing fine grained recovery, drives
>>> FLIP-53
> and
 fixed
 various
>>> bugs
> in the scheduler and runtime. Zhu Zhu also
>>> helped
> the
>> community
>>> by
> reporting issues, answering user mails and
>> being
>> active
>>> on
> the
>>> dev
>>> mailing
> list.
> 
> Congratulations Zhu Zhu!
> 
> Best, Till
> (on behalf of the Flink PMC)
> 
>>> 
>>> 
>> 
> 
> 
> --
> Xuefu Zhang
> 
> "In Honey We Trust!"
> 
 
>>>

Re: NPE in blink planner code gen

2019-12-15 Thread Leonard Xu
Hi Benchao, thank you for your report.
It looks like the legacy planner and the blink planner have different behavior.
Could you create an issue in https://issues.apache.org/jira/browse/FLINK ?



> On Dec 15, 2019, at 16:17, Benchao Li  wrote:
> 
> hi all,
> 
> We are using 1.9.0 blink planner, and find flink will throw NPE when we use
> the following SQL:
> 
> ```
> create table source {
>  age int,
>  id varchar
> };
> select *case when age < 20 then cast(id as bigint) else 0 end* from source;
> ```
> 
> After debugging the Janino generated code, I find that NPE's reason is that
> `BinaryStringUtil.toLong` returns `null`, and we assign this result to a
> `long` field.
> 
> Then a tried old planner, it throw a `java.lang.NumberFormatException` when
> casting a blank string to int.
> And also tried other illegal casting in blink, which come out to be `null`.
> 
> So, here is my question:
> Obviously, this is a bug in blink planner, and we should fix that. But we
> have two ways to fix this:
> 1, make behavior of cast behave like before, which produces `null`,
> 2, change the behavior of blink planner to align with old planner, which
> produces `NumberFormatException`.
> 
> 
> 
> Benchao Li
> School of Electronics Engineering and Computer Science, Peking University
> Tel:+86-15650713730
> Email: libenc...@gmail.com; libenc...@pku.edu.cn



Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread wenlong.lwl
Congratulations, Zhu Zhu!

On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:

> Congratulations, Zhu Zhu ! !
>
> Best,
> Leonard Xu
>
> > On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> >
> > Congrats, Zhu Zhu!
> >
> > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu  wrote:
> >
> >> Congrats Zhu Zhu!
> >>
> >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> >>>
> >>> Thanks everyone for the warm welcome!
> >>> It's my honor and pleasure to improve Flink with all of you in the
> >>> community!
> >>>
> >>> Thanks,
> >>> Zhu Zhu
> >>>
> >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> >>>
>  Congratulations!:)
> 
>  Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> 
> > Congrats, Zhu Zhu!
> >
> > Best, Hequn
> >
> > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen 
> wrote:
> >
> >> Congratulations!
> >>
> >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
> >> wrote:
> >>
> >>> Congrats Zhu Zhu :-)
> >>>
> >>> --
> >>> Rong
> >>>
> >>> On Sat, Dec 14, 2019 at 4:47 AM tison 
> wrote:
> >>>
>  Congratulations!:)
> 
>  Best,
>  tison.
> 
> 
>  OpenInx  于2019年12月14日周六 下午7:34写道:
> 
> > Congrats Zhu Zhu!
> >
> > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang 
> > wrote:
> >
> >> Congrats, Zhu Zhu!
> >>
> >> Paul Lam  于2019年12月14日周六 上午10:29写道:
> >>
> >>> Congrats Zhu Zhu!
> >>>
> >>> Best,
> >>> Paul Lam
> >>>
> >>> Kurt Young  于2019年12月14日周六 上午10:22写道:
> >>>
>  Congratulations Zhu Zhu!
> 
>  Best,
>  Kurt
> 
> 
>  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> >> sunjincheng...@gmail.com>
>  wrote:
> 
> > Congrats ZhuZhu and welcome on board!
> >
> > Best,
> > Jincheng
> >
> >
> > Jark Wu  于2019年12月14日周六 上午9:55写道:
> >
> >> Congratulations, Zhu Zhu!
> >>
> >> Best,
> >> Jark
> >>
> >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> >> karma...@gmail.com
> 
> >> wrote:
> >>
> >>> Congrats, ZhuZhu!
> >>>
> >>> Bowen Li  于 2019年12月14日周六
> > 上午5:37写道:
> >>>
>  Congrats!
> 
>  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
>  usxu...@gmail.com>
>  wrote:
> 
> > Congratulations, Zhu Zhu!
> >
> > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> >>> huangzhenqiu0...@gmail.com
> >
> > wrote:
> >
> >> Congratulations!:)
> >>
> >> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
>  <
> >> pi...@ververica.com>
> >> wrote:
> >>
> >>> Congratulations! :)
> >>>
>  On 13 Dec 2019, at 18:05, Fabian Hueske <
> >>> fhue...@gmail.com
> >
> >>> wrote:
> 
>  Congrats Zhu Zhu and welcome on board!
> 
>  Best, Fabian
> 
>  Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> > Till
> >> Rohrmann
> >>> <
>  trohrm...@apache.org>:
> 
> > Hi everyone,
> >
> > I'm very happy to announce that Zhu Zhu
> >> accepted
>  the
> >>> offer
> > of
> >>> the
> >> Flink
> >>> PMC
> > to become a committer of the Flink
>  project.
> >
> > Zhu Zhu has been an active community
>  member
> >> for
>  more
> >>> than
>  a
> >> year
> > now.
> >>> Zhu
> > Zhu played an essential role in the
> > scheduler
> >>> refactoring,
> >>> helped
> > implementing fine grained recovery, drives
> >>> FLIP-53
> > and
>  fixed
>  various
> >>> bugs
> > in the scheduler and runtime. Zhu Zhu also
> >>> helped
> > the
> >> community
> >>> by
> > reporting issues, answering user mails and
> >> being
> >> active
> >>> on
> > the
> >>> dev
> >>> mailing
> > list.
> >
> > Congratulations Zh

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Yang Wang
Congratulations, Zhu Zhu!

wenlong.lwl  于2019年12月16日周一 上午9:56写道:

> Congratulations, Zhu Zhu!
>
> On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:
>
> > Congratulations, Zhu Zhu ! !
> >
> > Best,
> > Leonard Xu
> >
> > > On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> > >
> > > Congrats, Zhu Zhu!
> > >
> > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
> wrote:
> > >
> > >> Congrats Zhu Zhu!
> > >>
> > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> > >>>
> > >>> Thanks everyone for the warm welcome!
> > >>> It's my honor and pleasure to improve Flink with all of you in the
> > >>> community!
> > >>>
> > >>> Thanks,
> > >>> Zhu Zhu
> > >>>
> > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> > >>>
> >  Congratulations!:)
> > 
> >  Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> > 
> > > Congrats, Zhu Zhu!
> > >
> > > Best, Hequn
> > >
> > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen 
> > wrote:
> > >
> > >> Congratulations!
> > >>
> > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
> > >> wrote:
> > >>
> > >>> Congrats Zhu Zhu :-)
> > >>>
> > >>> --
> > >>> Rong
> > >>>
> > >>> On Sat, Dec 14, 2019 at 4:47 AM tison 
> > wrote:
> > >>>
> >  Congratulations!:)
> > 
> >  Best,
> >  tison.
> > 
> > 
> >  OpenInx  于2019年12月14日周六 下午7:34写道:
> > 
> > > Congrats Zhu Zhu!
> > >
> > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang 
> > > wrote:
> > >
> > >> Congrats, Zhu Zhu!
> > >>
> > >> Paul Lam  于2019年12月14日周六 上午10:29写道:
> > >>
> > >>> Congrats Zhu Zhu!
> > >>>
> > >>> Best,
> > >>> Paul Lam
> > >>>
> > >>> Kurt Young  于2019年12月14日周六 上午10:22写道:
> > >>>
> >  Congratulations Zhu Zhu!
> > 
> >  Best,
> >  Kurt
> > 
> > 
> >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > >> sunjincheng...@gmail.com>
> >  wrote:
> > 
> > > Congrats ZhuZhu and welcome on board!
> > >
> > > Best,
> > > Jincheng
> > >
> > >
> > > Jark Wu  于2019年12月14日周六 上午9:55写道:
> > >
> > >> Congratulations, Zhu Zhu!
> > >>
> > >> Best,
> > >> Jark
> > >>
> > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > >> karma...@gmail.com
> > 
> > >> wrote:
> > >>
> > >>> Congrats, ZhuZhu!
> > >>>
> > >>> Bowen Li  于 2019年12月14日周六
> > > 上午5:37写道:
> > >>>
> >  Congrats!
> > 
> >  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> >  usxu...@gmail.com>
> >  wrote:
> > 
> > > Congratulations, Zhu Zhu!
> > >
> > > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> > >>> huangzhenqiu0...@gmail.com
> > >
> > > wrote:
> > >
> > >> Congratulations!:)
> > >>
> > >> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
> >  <
> > >> pi...@ververica.com>
> > >> wrote:
> > >>
> > >>> Congratulations! :)
> > >>>
> >  On 13 Dec 2019, at 18:05, Fabian Hueske <
> > >>> fhue...@gmail.com
> > >
> > >>> wrote:
> > 
> >  Congrats Zhu Zhu and welcome on board!
> > 
> >  Best, Fabian
> > 
> >  Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> > > Till
> > >> Rohrmann
> > >>> <
> >  trohrm...@apache.org>:
> > 
> > > Hi everyone,
> > >
> > > I'm very happy to announce that Zhu Zhu
> > >> accepted
> >  the
> > >>> offer
> > > of
> > >>> the
> > >> Flink
> > >>> PMC
> > > to become a committer of the Flink
> >  project.
> > >
> > > Zhu Zhu has been an active community
> >  member
> > >> for
> >  more
> > >>> than
> >  a
> > >> year
> > > now.
> > >>> Zhu
> > > Zhu played an essential role in the
> > > scheduler
> > >>> refactoring,
> > >>> helped
> > > implementing fine grained recovery, drives
> > >>> FLIP-53
> > > and
> >  fixed
> >  various
> > >>> bugs
> > >

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread vino yang
Congratulations, Zhu Zhu!

Best,
Vino

Yang Wang  于2019年12月16日周一 上午10:01写道:

> Congratulations, Zhu Zhu!
>
> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
>
> > Congratulations, Zhu Zhu!
> >
> > On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:
> >
> > > Congratulations, Zhu Zhu ! !
> > >
> > > Best,
> > > Leonard Xu
> > >
> > > > On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> > > >
> > > > Congrats, Zhu Zhu!
> > > >
> > > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
> > wrote:
> > > >
> > > >> Congrats Zhu Zhu!
> > > >>
> > > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> > > >>>
> > > >>> Thanks everyone for the warm welcome!
> > > >>> It's my honor and pleasure to improve Flink with all of you in the
> > > >>> community!
> > > >>>
> > > >>> Thanks,
> > > >>> Zhu Zhu
> > > >>>
> > > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> > > >>>
> > >  Congratulations!:)
> > > 
> > >  Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> > > 
> > > > Congrats, Zhu Zhu!
> > > >
> > > > Best, Hequn
> > > >
> > > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen 
> > > wrote:
> > > >
> > > >> Congratulations!
> > > >>
> > > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
> > > >> wrote:
> > > >>
> > > >>> Congrats Zhu Zhu :-)
> > > >>>
> > > >>> --
> > > >>> Rong
> > > >>>
> > > >>> On Sat, Dec 14, 2019 at 4:47 AM tison 
> > > wrote:
> > > >>>
> > >  Congratulations!:)
> > > 
> > >  Best,
> > >  tison.
> > > 
> > > 
> > >  OpenInx  于2019年12月14日周六 下午7:34写道:
> > > 
> > > > Congrats Zhu Zhu!
> > > >
> > > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang  >
> > > > wrote:
> > > >
> > > >> Congrats, Zhu Zhu!
> > > >>
> > > >> Paul Lam  于2019年12月14日周六 上午10:29写道:
> > > >>
> > > >>> Congrats Zhu Zhu!
> > > >>>
> > > >>> Best,
> > > >>> Paul Lam
> > > >>>
> > > >>> Kurt Young  于2019年12月14日周六 上午10:22写道:
> > > >>>
> > >  Congratulations Zhu Zhu!
> > > 
> > >  Best,
> > >  Kurt
> > > 
> > > 
> > >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > > >> sunjincheng...@gmail.com>
> > >  wrote:
> > > 
> > > > Congrats ZhuZhu and welcome on board!
> > > >
> > > > Best,
> > > > Jincheng
> > > >
> > > >
> > > > Jark Wu  于2019年12月14日周六 上午9:55写道:
> > > >
> > > >> Congratulations, Zhu Zhu!
> > > >>
> > > >> Best,
> > > >> Jark
> > > >>
> > > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > > >> karma...@gmail.com
> > > 
> > > >> wrote:
> > > >>
> > > >>> Congrats, ZhuZhu!
> > > >>>
> > > >>> Bowen Li  于 2019年12月14日周六
> > > > 上午5:37写道:
> > > >>>
> > >  Congrats!
> > > 
> > >  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> > >  usxu...@gmail.com>
> > >  wrote:
> > > 
> > > > Congratulations, Zhu Zhu!
> > > >
> > > > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> > > >>> huangzhenqiu0...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> Congratulations!:)
> > > >>
> > > >> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
> > >  <
> > > >> pi...@ververica.com>
> > > >> wrote:
> > > >>
> > > >>> Congratulations! :)
> > > >>>
> > >  On 13 Dec 2019, at 18:05, Fabian Hueske <
> > > >>> fhue...@gmail.com
> > > >
> > > >>> wrote:
> > > 
> > >  Congrats Zhu Zhu and welcome on board!
> > > 
> > >  Best, Fabian
> > > 
> > >  Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> > > > Till
> > > >> Rohrmann
> > > >>> <
> > >  trohrm...@apache.org>:
> > > 
> > > > Hi everyone,
> > > >
> > > > I'm very happy to announce that Zhu Zhu
> > > >> accepted
> > >  the
> > > >>> offer
> > > > of
> > > >>> the
> > > >> Flink
> > > >>> PMC
> > > > to become a committer of the Flink
> > >  project.
> > > >
> > > > Zhu Zhu has been an active community
> > >  member
> > > >> for
> > >  more
> > > >>> than
> > >  a
>

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Jingsong Li
Congratulations Zhu Zhu!

Best,
Jingsong Lee

On Mon, Dec 16, 2019 at 10:01 AM Yang Wang  wrote:

> Congratulations, Zhu Zhu!
>
> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
>
> > Congratulations, Zhu Zhu!
> >
> > On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:
> >
> > > Congratulations, Zhu Zhu ! !
> > >
> > > Best,
> > > Leonard Xu
> > >
> > > > On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> > > >
> > > > Congrats, Zhu Zhu!
> > > >
> > > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
> > wrote:
> > > >
> > > >> Congrats Zhu Zhu!
> > > >>
> > > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> > > >>>
> > > >>> Thanks everyone for the warm welcome!
> > > >>> It's my honor and pleasure to improve Flink with all of you in the
> > > >>> community!
> > > >>>
> > > >>> Thanks,
> > > >>> Zhu Zhu
> > > >>>
> > > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> > > >>>
> > >  Congratulations!:)
> > > 
> > >  Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> > > 
> > > > Congrats, Zhu Zhu!
> > > >
> > > > Best, Hequn
> > > >
> > > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen 
> > > wrote:
> > > >
> > > >> Congratulations!
> > > >>
> > > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
> > > >> wrote:
> > > >>
> > > >>> Congrats Zhu Zhu :-)
> > > >>>
> > > >>> --
> > > >>> Rong
> > > >>>
> > > >>> On Sat, Dec 14, 2019 at 4:47 AM tison 
> > > wrote:
> > > >>>
> > >  Congratulations!:)
> > > 
> > >  Best,
> > >  tison.
> > > 
> > > 
> > >  OpenInx  于2019年12月14日周六 下午7:34写道:
> > > 
> > > > Congrats Zhu Zhu!
> > > >
> > > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang  >
> > > > wrote:
> > > >
> > > >> Congrats, Zhu Zhu!
> > > >>
> > > >> Paul Lam  于2019年12月14日周六 上午10:29写道:
> > > >>
> > > >>> Congrats Zhu Zhu!
> > > >>>
> > > >>> Best,
> > > >>> Paul Lam
> > > >>>
> > > >>> Kurt Young  于2019年12月14日周六 上午10:22写道:
> > > >>>
> > >  Congratulations Zhu Zhu!
> > > 
> > >  Best,
> > >  Kurt
> > > 
> > > 
> > >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > > >> sunjincheng...@gmail.com>
> > >  wrote:
> > > 
> > > > Congrats ZhuZhu and welcome on board!
> > > >
> > > > Best,
> > > > Jincheng
> > > >
> > > >
> > > > Jark Wu  于2019年12月14日周六 上午9:55写道:
> > > >
> > > >> Congratulations, Zhu Zhu!
> > > >>
> > > >> Best,
> > > >> Jark
> > > >>
> > > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > > >> karma...@gmail.com
> > > 
> > > >> wrote:
> > > >>
> > > >>> Congrats, ZhuZhu!
> > > >>>
> > > >>> Bowen Li  于 2019年12月14日周六
> > > > 上午5:37写道:
> > > >>>
> > >  Congrats!
> > > 
> > >  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> > >  usxu...@gmail.com>
> > >  wrote:
> > > 
> > > > Congratulations, Zhu Zhu!
> > > >
> > > > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> > > >>> huangzhenqiu0...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> Congratulations!:)
> > > >>
> > > >> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
> > >  <
> > > >> pi...@ververica.com>
> > > >> wrote:
> > > >>
> > > >>> Congratulations! :)
> > > >>>
> > >  On 13 Dec 2019, at 18:05, Fabian Hueske <
> > > >>> fhue...@gmail.com
> > > >
> > > >>> wrote:
> > > 
> > >  Congrats Zhu Zhu and welcome on board!
> > > 
> > >  Best, Fabian
> > > 
> > >  Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> > > > Till
> > > >> Rohrmann
> > > >>> <
> > >  trohrm...@apache.org>:
> > > 
> > > > Hi everyone,
> > > >
> > > > I'm very happy to announce that Zhu Zhu
> > > >> accepted
> > >  the
> > > >>> offer
> > > > of
> > > >>> the
> > > >> Flink
> > > >>> PMC
> > > > to become a committer of the Flink
> > >  project.
> > > >
> > > > Zhu Zhu has been an active community
> > >  member
> > > >> for
> > >  more
> > > >>> than
>

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread aihua li
Congratulations, zhuzhu!

> 在 2019年12月16日,上午10:04,Jingsong Li  写道:
> 
> Congratulations Zhu Zhu!
> 
> Best,
> Jingsong Lee
> 
> On Mon, Dec 16, 2019 at 10:01 AM Yang Wang  wrote:
> 
>> Congratulations, Zhu Zhu!
>> 
>> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
>> 
>>> Congratulations, Zhu Zhu!
>>> 
>>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:
>>> 
 Congratulations, Zhu Zhu ! !
 
 Best,
 Leonard Xu
 
> On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> 
> Congrats, Zhu Zhu!
> 
> On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
>>> wrote:
> 
>> Congrats Zhu Zhu!
>> 
>>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
>>> 
>>> Thanks everyone for the warm welcome!
>>> It's my honor and pleasure to improve Flink with all of you in the
>>> community!
>>> 
>>> Thanks,
>>> Zhu Zhu
>>> 
>>> Benchao Li  于2019年12月15日周日 下午3:54写道:
>>> 
 Congratulations!:)
 
 Hequn Cheng  于2019年12月15日周日 上午11:47写道:
 
> Congrats, Zhu Zhu!
> 
> Best, Hequn
> 
> On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen 
 wrote:
> 
>> Congratulations!
>> 
>> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
>> wrote:
>> 
>>> Congrats Zhu Zhu :-)
>>> 
>>> --
>>> Rong
>>> 
>>> On Sat, Dec 14, 2019 at 4:47 AM tison 
 wrote:
>>> 
 Congratulations!:)
 
 Best,
 tison.
 
 
 OpenInx  于2019年12月14日周六 下午7:34写道:
 
> Congrats Zhu Zhu!
> 
> On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang >> 
> wrote:
> 
>> Congrats, Zhu Zhu!
>> 
>> Paul Lam  于2019年12月14日周六 上午10:29写道:
>> 
>>> Congrats Zhu Zhu!
>>> 
>>> Best,
>>> Paul Lam
>>> 
>>> Kurt Young  于2019年12月14日周六 上午10:22写道:
>>> 
 Congratulations Zhu Zhu!
 
 Best,
 Kurt
 
 
 On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
>> sunjincheng...@gmail.com>
 wrote:
 
> Congrats ZhuZhu and welcome on board!
> 
> Best,
> Jincheng
> 
> 
> Jark Wu  于2019年12月14日周六 上午9:55写道:
> 
>> Congratulations, Zhu Zhu!
>> 
>> Best,
>> Jark
>> 
>> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
>> karma...@gmail.com
 
>> wrote:
>> 
>>> Congrats, ZhuZhu!
>>> 
>>> Bowen Li  于 2019年12月14日周六
> 上午5:37写道:
>>> 
 Congrats!
 
 On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
 usxu...@gmail.com>
 wrote:
 
> Congratulations, Zhu Zhu!
> 
> On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
>>> huangzhenqiu0...@gmail.com
> 
> wrote:
> 
>> Congratulations!:)
>> 
>> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
 <
>> pi...@ververica.com>
>> wrote:
>> 
>>> Congratulations! :)
>>> 
 On 13 Dec 2019, at 18:05, Fabian Hueske <
>>> fhue...@gmail.com
> 
>>> wrote:
 
 Congrats Zhu Zhu and welcome on board!
 
 Best, Fabian
 
 Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> Till
>> Rohrmann
>>> <
 trohrm...@apache.org>:
 
> Hi everyone,
> 
> I'm very happy to announce that Zhu Zhu
>> accepted
 the
>>> offer
> of
>>> the
>> Flink
>>> PMC
> to become a committer of the Flink
 project.
> 
> Zhu Zhu has been an active community
 member
>> for
 more
>>> than
 a
>> year
> now.
>>> Zhu
> Zhu played an essential role in the
>

Re: NPE in blink planner code gen

2019-12-15 Thread Jingsong Li
Hi Benchao,

Thanks for your report.
As Leonard said, you can create an issue in JIRA, and we can continue the
discussion there.
The answer is #1: Blink's behavior is meant to keep the job running as much as
possible without interrupting it, so null is returned here to make it
possible to continue running.

In JIRA, please describe things in more detail, e.g. the input data and the
program. You can also verify against master or the 1.10 code.
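
To illustrate the direction of #1, here is a small self-contained sketch (assumed
names, not the actual code generation) of how the generated code could keep
producing `null` without hitting the NPE, by tracking nullability in a separate
flag instead of unboxing into a primitive:

```
public class NullSafeCastSketch {

    // Stand-in for BinaryStringUtil.toLong: returns null for unparsable input.
    static Long toLongOrNull(String s) {
        try {
            return Long.parseLong(s);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        int age = 10;
        String id = "";
        boolean resultIsNull;
        long result;
        if (age < 20) {
            Long parsed = toLongOrNull(id);
            resultIsNull = (parsed == null);
            result = resultIsNull ? 0L : parsed; // placeholder, never read when null
        } else {
            resultIsNull = false;
            result = 0L;
        }
        System.out.println(resultIsNull ? "null" : String.valueOf(result));
    }
}
```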

Best,
Jingsong Lee

On Mon, Dec 16, 2019 at 9:23 AM Leonard Xu  wrote:

> Hi, Benchao, thank you for your report.
> It looks egacy planner and blink planner have different behavior.
> Could you create an issue in https://issues.apache.org/jira/browse/FLINK ?
>
>
>
> On Dec 15, 2019, at 16:17, Benchao Li  wrote:
>
> hi all,
>
> We are using 1.9.0 blink planner, and find flink will throw NPE when we use
> the following SQL:
>
> ```
> create table source {
>  age int,
>  id varchar
> };
> select *case when age < 20 then cast(id as bigint) else 0 end* from source;
> ```
>
> After debugging the Janino generated code, I find that NPE's reason is that
> `BinaryStringUtil.toLong` returns `null`, and we assign this result to a
> `long` field.
>
> Then a tried old planner, it throw a `java.lang.NumberFormatException` when
> casting a blank string to int.
> And also tried other illegal casting in blink, which come out to be `null`.
>
> So, here is my question:
> Obviously, this is a bug in blink planner, and we should fix that. But we
> have two ways to fix this:
> 1, make behavior of cast behave like before, which produces `null`,
> 2, change the behavior of blink planner to align with old planner, which
> produces `NumberFormatException`.
>
>
>
> Benchao Li
> School of Electronics Engineering and Computer Science, Peking University
> Tel:+86-15650713730
> Email: libenc...@gmail.com ; libenc...@pku.edu.cn
>
>
>

-- 
Best, Jingsong Lee


[jira] [Created] (FLINK-15266) NPE in blink planner code gen

2019-12-15 Thread Benchao Li (Jira)
Benchao Li created FLINK-15266:
--

 Summary: NPE in blink planner code gen
 Key: FLINK-15266
 URL: https://issues.apache.org/jira/browse/FLINK-15266
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.9.1
Reporter: Benchao Li


The `cast` function behaves differently in the blink planner and the old planner:

in the legacy planner:
cast('' as int) -> throws NumberFormatException
cast(null as int) -> throws NullPointerException
cast('abc' as int) -> throws NumberFormatException

but in the blink planner:
cast('' as int) -> returns null
cast(null as int) -> returns null
cast('abc' as int) -> returns null

A step further:
```
create table source (
  age int,
  id varchar
);
select case when age < 20 then cast(id as bigint) else 0 end from source;
```
Queries like the above will throw an NPE because we try to assign a `null` to a 
`long` field.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: NPE in blink planner code gen

2019-12-15 Thread Benchao Li
Hi Jingsong, Leonard,

Thanks for your responses. I've created an issue
(https://issues.apache.org/jira/browse/FLINK-15266) to track this.
Further discussion can be moved to JIRA.

Jingsong Li  于2019年12月16日周一 上午10:17写道:

> Hi Benchao,
>
> Thanks for your reporting.
> As Leonard said, you can create an issue in JIRA. We can go on discussing
> in JIRA.
> The answer is #1, Blink's behavior ensures that the job runs as much as
> possible without interrupting it, so null is returned here to make it
> possible to continue running.
>
> In JIRA, you can describe more detailed, like input data, program detail.
> And you can verify master or 1.10 code too.
>
> Best,
> Jingsong Lee
>
> On Mon, Dec 16, 2019 at 9:23 AM Leonard Xu  wrote:
>
>> Hi, Benchao, thank you for your report.
>> It looks egacy planner and blink planner have different behavior.
>> Could you create an issue in https://issues.apache.org/jira/browse/FLINK
>>  ?
>>
>>
>>
>> On Dec 15, 2019, at 16:17, Benchao Li  wrote:
>>
>> hi all,
>>
>> We are using 1.9.0 blink planner, and find flink will throw NPE when we
>> use
>> the following SQL:
>>
>> ```
>> create table source {
>>  age int,
>>  id varchar
>> };
>> select *case when age < 20 then cast(id as bigint) else 0 end* from
>> source;
>> ```
>>
>> After debugging the Janino generated code, I find that NPE's reason is
>> that
>> `BinaryStringUtil.toLong` returns `null`, and we assign this result to a
>> `long` field.
>>
>> Then a tried old planner, it throw a `java.lang.NumberFormatException`
>> when
>> casting a blank string to int.
>> And also tried other illegal casting in blink, which come out to be
>> `null`.
>>
>> So, here is my question:
>> Obviously, this is a bug in blink planner, and we should fix that. But we
>> have two ways to fix this:
>> 1, make behavior of cast behave like before, which produces `null`,
>> 2, change the behavior of blink planner to align with old planner, which
>> produces `NumberFormatException`.
>>
>>
>>
>> Benchao Li
>> School of Electronics Engineering and Computer Science, Peking University
>> Tel:+86-15650713730
>> Email: libenc...@gmail.com ; libenc...@pku.edu.cn
>>
>>
>>
>
> --
> Best, Jingsong Lee
>


-- 

Benchao Li
School of Electronics Engineering and Computer Science, Peking University
Tel:+86-15650713730
Email: libenc...@gmail.com; libenc...@pku.edu.cn


Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Congxian Qiu
Congrats, Zhu Zhu!

Best,
Congxian


aihua li  于2019年12月16日周一 上午10:16写道:

> Congratulations, zhuzhu!
>
> > 在 2019年12月16日,上午10:04,Jingsong Li  写道:
> >
> > Congratulations Zhu Zhu!
> >
> > Best,
> > Jingsong Lee
> >
> > On Mon, Dec 16, 2019 at 10:01 AM Yang Wang 
> wrote:
> >
> >> Congratulations, Zhu Zhu!
> >>
> >> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
> >>
> >>> Congratulations, Zhu Zhu!
> >>>
> >>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:
> >>>
>  Congratulations, Zhu Zhu ! !
> 
>  Best,
>  Leonard Xu
> 
> > On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> >
> > Congrats, Zhu Zhu!
> >
> > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
> >>> wrote:
> >
> >> Congrats Zhu Zhu!
> >>
> >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> >>>
> >>> Thanks everyone for the warm welcome!
> >>> It's my honor and pleasure to improve Flink with all of you in the
> >>> community!
> >>>
> >>> Thanks,
> >>> Zhu Zhu
> >>>
> >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> >>>
>  Congratulations!:)
> 
>  Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> 
> > Congrats, Zhu Zhu!
> >
> > Best, Hequn
> >
> > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen 
>  wrote:
> >
> >> Congratulations!
> >>
> >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
> >> wrote:
> >>
> >>> Congrats Zhu Zhu :-)
> >>>
> >>> --
> >>> Rong
> >>>
> >>> On Sat, Dec 14, 2019 at 4:47 AM tison 
>  wrote:
> >>>
>  Congratulations!:)
> 
>  Best,
>  tison.
> 
> 
>  OpenInx  于2019年12月14日周六 下午7:34写道:
> 
> > Congrats Zhu Zhu!
> >
> > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang  >>>
> > wrote:
> >
> >> Congrats, Zhu Zhu!
> >>
> >> Paul Lam  于2019年12月14日周六 上午10:29写道:
> >>
> >>> Congrats Zhu Zhu!
> >>>
> >>> Best,
> >>> Paul Lam
> >>>
> >>> Kurt Young  于2019年12月14日周六 上午10:22写道:
> >>>
>  Congratulations Zhu Zhu!
> 
>  Best,
>  Kurt
> 
> 
>  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> >> sunjincheng...@gmail.com>
>  wrote:
> 
> > Congrats ZhuZhu and welcome on board!
> >
> > Best,
> > Jincheng
> >
> >
> > Jark Wu  于2019年12月14日周六 上午9:55写道:
> >
> >> Congratulations, Zhu Zhu!
> >>
> >> Best,
> >> Jark
> >>
> >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> >> karma...@gmail.com
> 
> >> wrote:
> >>
> >>> Congrats, ZhuZhu!
> >>>
> >>> Bowen Li  于 2019年12月14日周六
> > 上午5:37写道:
> >>>
>  Congrats!
> 
>  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
>  usxu...@gmail.com>
>  wrote:
> 
> > Congratulations, Zhu Zhu!
> >
> > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> >>> huangzhenqiu0...@gmail.com
> >
> > wrote:
> >
> >> Congratulations!:)
> >>
> >> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
>  <
> >> pi...@ververica.com>
> >> wrote:
> >>
> >>> Congratulations! :)
> >>>
>  On 13 Dec 2019, at 18:05, Fabian Hueske <
> >>> fhue...@gmail.com
> >
> >>> wrote:
> 
>  Congrats Zhu Zhu and welcome on board!
> 
>  Best, Fabian
> 
>  Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> > Till
> >> Rohrmann
> >>> <
>  trohrm...@apache.org>:
> 
> > Hi everyone,
> >
> > I'm very happy to announce that Zhu Zhu
> >> accepted
>  the
> >>> offer
> > of
> >>> the
> >> Flink
> >>> PMC
> > to become a committ

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Zhijiang
Congratulations Zhu Zhu!


--
From:Congxian Qiu 
Send Time:2019 Dec. 16 (Mon.) 10:23
To:dev@flink.apache.org 
Subject:Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

Congrats, Zhu Zhu!

Best,
Congxian


aihua li  于2019年12月16日周一 上午10:16写道:

> Congratulations, zhuzhu!
>
> > 在 2019年12月16日,上午10:04,Jingsong Li  写道:
> >
> > Congratulations Zhu Zhu!
> >
> > Best,
> > Jingsong Lee
> >
> > On Mon, Dec 16, 2019 at 10:01 AM Yang Wang 
> wrote:
> >
> >> Congratulations, Zhu Zhu!
> >>
> >> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
> >>
> >>> Congratulations, Zhu Zhu!
> >>>
> >>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:
> >>>
>  Congratulations, Zhu Zhu ! !
> 
>  Best,
>  Leonard Xu
> 
> > On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> >
> > Congrats, Zhu Zhu!
> >
> > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
> >>> wrote:
> >
> >> Congrats Zhu Zhu!
> >>
> >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> >>>
> >>> Thanks everyone for the warm welcome!
> >>> It's my honor and pleasure to improve Flink with all of you in the
> >>> community!
> >>>
> >>> Thanks,
> >>> Zhu Zhu
> >>>
> >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> >>>
>  Congratulations!:)
> 
>  Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> 
> > Congrats, Zhu Zhu!
> >
> > Best, Hequn
> >
> > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen 
>  wrote:
> >
> >> Congratulations!
> >>
> >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
> >> wrote:
> >>
> >>> Congrats Zhu Zhu :-)
> >>>
> >>> --
> >>> Rong
> >>>
> >>> On Sat, Dec 14, 2019 at 4:47 AM tison 
>  wrote:
> >>>
>  Congratulations!:)
> 
>  Best,
>  tison.
> 
> 
>  OpenInx  于2019年12月14日周六 下午7:34写道:
> 
> > Congrats Zhu Zhu!
> >
> > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang  >>>
> > wrote:
> >
> >> Congrats, Zhu Zhu!
> >>
> >> Paul Lam  于2019年12月14日周六 上午10:29写道:
> >>
> >>> Congrats Zhu Zhu!
> >>>
> >>> Best,
> >>> Paul Lam
> >>>
> >>> Kurt Young  于2019年12月14日周六 上午10:22写道:
> >>>
>  Congratulations Zhu Zhu!
> 
>  Best,
>  Kurt
> 
> 
>  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> >> sunjincheng...@gmail.com>
>  wrote:
> 
> > Congrats ZhuZhu and welcome on board!
> >
> > Best,
> > Jincheng
> >
> >
> > Jark Wu  于2019年12月14日周六 上午9:55写道:
> >
> >> Congratulations, Zhu Zhu!
> >>
> >> Best,
> >> Jark
> >>
> >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> >> karma...@gmail.com
> 
> >> wrote:
> >>
> >>> Congrats, ZhuZhu!
> >>>
> >>> Bowen Li  于 2019年12月14日周六
> > 上午5:37写道:
> >>>
>  Congrats!
> 
>  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
>  usxu...@gmail.com>
>  wrote:
> 
> > Congratulations, Zhu Zhu!
> >
> > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> >>> huangzhenqiu0...@gmail.com
> >
> > wrote:
> >
> >> Congratulations!:)
> >>
> >> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
>  <
> >> pi...@ververica.com>
> >> wrote:
> >>
> >>> Congratulations! :)
> >>>
>  On 13 Dec 2019, at 18:05, Fabian Hueske <
> >>> fhue...@gmail.com
> >
> >>> wrote:
> 
>  Congrats Zhu Zhu and welcome on board!
> 
>  Best, Fabian
> 
>  Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> > Till
> >> Rohrmann
> >>> <
>  trohrm...@apache.org>:
> 
> > Hi everyone,
> >
> > I'm very happy to announce tha

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Biao Liu
Congrats Zhu Zhu!

Thanks,
Biao /'bɪ.aʊ/



On Mon, 16 Dec 2019 at 10:23, Congxian Qiu  wrote:

> Congrats, Zhu Zhu!
>
> Best,
> Congxian
>
>
> aihua li  于2019年12月16日周一 上午10:16写道:
>
> > Congratulations, zhuzhu!
> >
> > > 在 2019年12月16日,上午10:04,Jingsong Li  写道:
> > >
> > > Congratulations Zhu Zhu!
> > >
> > > Best,
> > > Jingsong Lee
> > >
> > > On Mon, Dec 16, 2019 at 10:01 AM Yang Wang 
> > wrote:
> > >
> > >> Congratulations, Zhu Zhu!
> > >>
> > >> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
> > >>
> > >>> Congratulations, Zhu Zhu!
> > >>>
> > >>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:
> > >>>
> >  Congratulations, Zhu Zhu ! !
> > 
> >  Best,
> >  Leonard Xu
> > 
> > > On Dec 16, 2019, at 07:53, Becket Qin 
> wrote:
> > >
> > > Congrats, Zhu Zhu!
> > >
> > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
> > >>> wrote:
> > >
> > >> Congrats Zhu Zhu!
> > >>
> > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> > >>>
> > >>> Thanks everyone for the warm welcome!
> > >>> It's my honor and pleasure to improve Flink with all of you in
> the
> > >>> community!
> > >>>
> > >>> Thanks,
> > >>> Zhu Zhu
> > >>>
> > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> > >>>
> >  Congratulations!:)
> > 
> >  Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> > 
> > > Congrats, Zhu Zhu!
> > >
> > > Best, Hequn
> > >
> > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen  >
> >  wrote:
> > >
> > >> Congratulations!
> > >>
> > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong <
> walter...@gmail.com>
> > >> wrote:
> > >>
> > >>> Congrats Zhu Zhu :-)
> > >>>
> > >>> --
> > >>> Rong
> > >>>
> > >>> On Sat, Dec 14, 2019 at 4:47 AM tison 
> >  wrote:
> > >>>
> >  Congratulations!:)
> > 
> >  Best,
> >  tison.
> > 
> > 
> >  OpenInx  于2019年12月14日周六 下午7:34写道:
> > 
> > > Congrats Zhu Zhu!
> > >
> > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang <
> zjf...@gmail.com
> > >>>
> > > wrote:
> > >
> > >> Congrats, Zhu Zhu!
> > >>
> > >> Paul Lam  于2019年12月14日周六
> 上午10:29写道:
> > >>
> > >>> Congrats Zhu Zhu!
> > >>>
> > >>> Best,
> > >>> Paul Lam
> > >>>
> > >>> Kurt Young  于2019年12月14日周六 上午10:22写道:
> > >>>
> >  Congratulations Zhu Zhu!
> > 
> >  Best,
> >  Kurt
> > 
> > 
> >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > >> sunjincheng...@gmail.com>
> >  wrote:
> > 
> > > Congrats ZhuZhu and welcome on board!
> > >
> > > Best,
> > > Jincheng
> > >
> > >
> > > Jark Wu  于2019年12月14日周六 上午9:55写道:
> > >
> > >> Congratulations, Zhu Zhu!
> > >>
> > >> Best,
> > >> Jark
> > >>
> > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > >> karma...@gmail.com
> > 
> > >> wrote:
> > >>
> > >>> Congrats, ZhuZhu!
> > >>>
> > >>> Bowen Li  于 2019年12月14日周六
> > > 上午5:37写道:
> > >>>
> >  Congrats!
> > 
> >  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> >  usxu...@gmail.com>
> >  wrote:
> > 
> > > Congratulations, Zhu Zhu!
> > >
> > > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> > >>> huangzhenqiu0...@gmail.com
> > >
> > > wrote:
> > >
> > >> Congratulations!:)
> > >>
> > >> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
> >  <
> > >> pi...@ververica.com>
> > >> wrote:
> > >>
> > >>> Congratulations! :)
> > >>>
> >  On 13 Dec 2019, at 18:05, Fabian Hueske <
> > >>> fhue...@gmail.com
> > >
> > >>> wrote:
> > 
> >  Congrats Zhu Zhu and welcome on board!
> > 
> >  Best, Fabian
> > 
> >  Am Fr., 13. Dez. 2019 um 17:51 Uhr schrieb
> > > Till

Re: Worst-case optimal join processing on Streams

2019-12-15 Thread Kurt Young
Hi Laurens,

Good to hear that you are interested in optimizing Flink's join strategy.
If you want
to learn more about the lifecycle of a query in Flink, I would
recommend reading
the original design doc of the Flink Table & SQL module [1]; I hope it helps.

Best,
Kurt

[1]
https://docs.google.com/document/d/1TLayJNOTBle_-m1rQfgA6Ouj1oYsfqRjPcp1h2TVqdI/


On Sat, Dec 14, 2019 at 10:52 PM Laurens VIJNCK 
wrote:

> Dear folks,
>
> DISCLAIMER: With this mail, my sole intention is to establish contact with
> the community and trade ideas on how to realize the goal described below.
>
> I'm a starting PhD researcher in distributed systems and databases who is
> particularly interested in worst-case optimal (multiway) join processing on
> streams. I have performed preliminary tests with a new join algorithm that
> shows rather promising results. However, the limitation is that the
> algorithm operates in a centralized fashion. My goal is to extend the
> capabilities of the algorithm to operate in a distributed environment. To
> showcase my results, I want to implement a proof-of-concept in Apache
> Flink. I know this is a rather ambitious project, hence why I am reaching
> out to the community.
>
> I have traversed most of the application development documentation on the
> website (e.g., [1, 2, 3, 4]) but I am now eager to learn more about the
> internals thereof. Specifically, I want to gain some more insights in the
> lifecycle of a query in Flink. Is there some additional documentation
> available on this subject?
>
> Thanks in advance.
>
> [1] https://flink.apache.org/news/2015/04/13/release-0.9.0-milestone1.html
> [2]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/dynamic_tables.html
> [3]
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/streaming/joins.html
> [4] https://cwiki.apache.org/confluence/display/FLINK/Optimizer+Internals
>
> Kind regards,
>
> Laurens Vijnck
>


Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Zhenghua Gao
Congrats!

*Best Regards,*
*Zhenghua Gao*


On Mon, Dec 16, 2019 at 10:36 AM Biao Liu  wrote:

> Congrats Zhu Zhu!
>
> Thanks,
> Biao /'bɪ.aʊ/
>
>
>
> On Mon, 16 Dec 2019 at 10:23, Congxian Qiu  wrote:
>
> > Congrats, Zhu Zhu!
> >
> > Best,
> > Congxian
> >
> >
> > aihua li  于2019年12月16日周一 上午10:16写道:
> >
> > > Congratulations, zhuzhu!
> > >
> > > > 在 2019年12月16日,上午10:04,Jingsong Li  写道:
> > > >
> > > > Congratulations Zhu Zhu!
> > > >
> > > > Best,
> > > > Jingsong Lee
> > > >
> > > > On Mon, Dec 16, 2019 at 10:01 AM Yang Wang 
> > > wrote:
> > > >
> > > >> Congratulations, Zhu Zhu!
> > > >>
> > > >> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
> > > >>
> > > >>> Congratulations, Zhu Zhu!
> > > >>>
> > > >>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu 
> wrote:
> > > >>>
> > >  Congratulations, Zhu Zhu ! !
> > > 
> > >  Best,
> > >  Leonard Xu
> > > 
> > > > On Dec 16, 2019, at 07:53, Becket Qin 
> > wrote:
> > > >
> > > > Congrats, Zhu Zhu!
> > > >
> > > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
> > > >>> wrote:
> > > >
> > > >> Congrats Zhu Zhu!
> > > >>
> > > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> > > >>>
> > > >>> Thanks everyone for the warm welcome!
> > > >>> It's my honor and pleasure to improve Flink with all of you in
> > the
> > > >>> community!
> > > >>>
> > > >>> Thanks,
> > > >>> Zhu Zhu
> > > >>>
> > > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> > > >>>
> > >  Congratulations!:)
> > > 
> > >  Hequn Cheng  于2019年12月15日周日 上午11:47写道:
> > > 
> > > > Congrats, Zhu Zhu!
> > > >
> > > > Best, Hequn
> > > >
> > > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen <
> suez1...@gmail.com
> > >
> > >  wrote:
> > > >
> > > >> Congratulations!
> > > >>
> > > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong <
> > walter...@gmail.com>
> > > >> wrote:
> > > >>
> > > >>> Congrats Zhu Zhu :-)
> > > >>>
> > > >>> --
> > > >>> Rong
> > > >>>
> > > >>> On Sat, Dec 14, 2019 at 4:47 AM tison <
> wander4...@gmail.com>
> > >  wrote:
> > > >>>
> > >  Congratulations!:)
> > > 
> > >  Best,
> > >  tison.
> > > 
> > > 
> > >  OpenInx  于2019年12月14日周六 下午7:34写道:
> > > 
> > > > Congrats Zhu Zhu!
> > > >
> > > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang <
> > zjf...@gmail.com
> > > >>>
> > > > wrote:
> > > >
> > > >> Congrats, Zhu Zhu!
> > > >>
> > > >> Paul Lam  于2019年12月14日周六
> > 上午10:29写道:
> > > >>
> > > >>> Congrats Zhu Zhu!
> > > >>>
> > > >>> Best,
> > > >>> Paul Lam
> > > >>>
> > > >>> Kurt Young  于2019年12月14日周六
> 上午10:22写道:
> > > >>>
> > >  Congratulations Zhu Zhu!
> > > 
> > >  Best,
> > >  Kurt
> > > 
> > > 
> > >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > > >> sunjincheng...@gmail.com>
> > >  wrote:
> > > 
> > > > Congrats ZhuZhu and welcome on board!
> > > >
> > > > Best,
> > > > Jincheng
> > > >
> > > >
> > > > Jark Wu  于2019年12月14日周六 上午9:55写道:
> > > >
> > > >> Congratulations, Zhu Zhu!
> > > >>
> > > >> Best,
> > > >> Jark
> > > >>
> > > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > > >> karma...@gmail.com
> > > 
> > > >> wrote:
> > > >>
> > > >>> Congrats, ZhuZhu!
> > > >>>
> > > >>> Bowen Li  于 2019年12月14日周六
> > > > 上午5:37写道:
> > > >>>
> > >  Congrats!
> > > 
> > >  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> > >  usxu...@gmail.com>
> > >  wrote:
> > > 
> > > > Congratulations, Zhu Zhu!
> > > >
> > > > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> > > >>> huangzhenqiu0...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> Congratulations!:)
> > > >>
> > > >> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
> > >  <
> > > >> pi...@ververica.com>
> > > >> wrote:
> > > >>
> > > >>> Congratulatio

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Guowei Ma
Congrats Zhuzhu!
Best,
Guowei


Zhenghua Gao  于2019年12月16日周一 上午10:47写道:

> Congrats!
>
> *Best Regards,*
> *Zhenghua Gao*
>
>
> On Mon, Dec 16, 2019 at 10:36 AM Biao Liu  wrote:
>
> > Congrats Zhu Zhu!
> >
> > Thanks,
> > Biao /'bɪ.aʊ/
> >
> >
> >
> > On Mon, 16 Dec 2019 at 10:23, Congxian Qiu 
> wrote:
> >
> > > Congrats, Zhu Zhu!
> > >
> > > Best,
> > > Congxian
> > >
> > >
> > > aihua li  于2019年12月16日周一 上午10:16写道:
> > >
> > > > Congratulations, zhuzhu!
> > > >
> > > > > 在 2019年12月16日,上午10:04,Jingsong Li  写道:
> > > > >
> > > > > Congratulations Zhu Zhu!
> > > > >
> > > > > Best,
> > > > > Jingsong Lee
> > > > >
> > > > > On Mon, Dec 16, 2019 at 10:01 AM Yang Wang 
> > > > wrote:
> > > > >
> > > > >> Congratulations, Zhu Zhu!
> > > > >>
> > > > >> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
> > > > >>
> > > > >>> Congratulations, Zhu Zhu!
> > > > >>>
> > > > >>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu 
> > wrote:
> > > > >>>
> > > >  Congratulations, Zhu Zhu ! !
> > > > 
> > > >  Best,
> > > >  Leonard Xu
> > > > 
> > > > > On Dec 16, 2019, at 07:53, Becket Qin 
> > > wrote:
> > > > >
> > > > > Congrats, Zhu Zhu!
> > > > >
> > > > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu <
> dian0511...@gmail.com>
> > > > >>> wrote:
> > > > >
> > > > >> Congrats Zhu Zhu!
> > > > >>
> > > > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> > > > >>>
> > > > >>> Thanks everyone for the warm welcome!
> > > > >>> It's my honor and pleasure to improve Flink with all of you
> in
> > > the
> > > > >>> community!
> > > > >>>
> > > > >>> Thanks,
> > > > >>> Zhu Zhu
> > > > >>>
> > > > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> > > > >>>
> > > >  Congratulations!:)
> > > > 
> > > >  Hequn Cheng  于2019年12月15日周日
> 上午11:47写道:
> > > > 
> > > > > Congrats, Zhu Zhu!
> > > > >
> > > > > Best, Hequn
> > > > >
> > > > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen <
> > suez1...@gmail.com
> > > >
> > > >  wrote:
> > > > >
> > > > >> Congratulations!
> > > > >>
> > > > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong <
> > > walter...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >>> Congrats Zhu Zhu :-)
> > > > >>>
> > > > >>> --
> > > > >>> Rong
> > > > >>>
> > > > >>> On Sat, Dec 14, 2019 at 4:47 AM tison <
> > wander4...@gmail.com>
> > > >  wrote:
> > > > >>>
> > > >  Congratulations!:)
> > > > 
> > > >  Best,
> > > >  tison.
> > > > 
> > > > 
> > > >  OpenInx  于2019年12月14日周六 下午7:34写道:
> > > > 
> > > > > Congrats Zhu Zhu!
> > > > >
> > > > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang <
> > > zjf...@gmail.com
> > > > >>>
> > > > > wrote:
> > > > >
> > > > >> Congrats, Zhu Zhu!
> > > > >>
> > > > >> Paul Lam  于2019年12月14日周六
> > > 上午10:29写道:
> > > > >>
> > > > >>> Congrats Zhu Zhu!
> > > > >>>
> > > > >>> Best,
> > > > >>> Paul Lam
> > > > >>>
> > > > >>> Kurt Young  于2019年12月14日周六
> > 上午10:22写道:
> > > > >>>
> > > >  Congratulations Zhu Zhu!
> > > > 
> > > >  Best,
> > > >  Kurt
> > > > 
> > > > 
> > > >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > > > >> sunjincheng...@gmail.com>
> > > >  wrote:
> > > > 
> > > > > Congrats ZhuZhu and welcome on board!
> > > > >
> > > > > Best,
> > > > > Jincheng
> > > > >
> > > > >
> > > > > Jark Wu  于2019年12月14日周六
> 上午9:55写道:
> > > > >
> > > > >> Congratulations, Zhu Zhu!
> > > > >>
> > > > >> Best,
> > > > >> Jark
> > > > >>
> > > > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > > > >> karma...@gmail.com
> > > > 
> > > > >> wrote:
> > > > >>
> > > > >>> Congrats, ZhuZhu!
> > > > >>>
> > > > >>> Bowen Li  于 2019年12月14日周六
> > > > > 上午5:37写道:
> > > > >>>
> > > >  Congrats!
> > > > 
> > > >  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> > > >  usxu...@gmail.com>
> > > >  wrote:
> > > > 
> > > > > Congratulations, Zhu Zhu!
> > > > >
> > > > > On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
> > > > >>>

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Yun Gao
Congratulations Zhuzhu!

Best,
Yun


--
From:Guowei Ma 
Send Time:2019 Dec. 16 (Mon.) 11:16
To:dev 
Subject:Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

Congrats Zhuzhu!
Best,
Guowei


Zhenghua Gao  于2019年12月16日周一 上午10:47写道:

> Congrats!
>
> *Best Regards,*
> *Zhenghua Gao*
>
>
> On Mon, Dec 16, 2019 at 10:36 AM Biao Liu  wrote:
>
> > Congrats Zhu Zhu!
> >
> > Thanks,
> > Biao /'bɪ.aʊ/
> >
> >
> >
> > On Mon, 16 Dec 2019 at 10:23, Congxian Qiu 
> wrote:
> >
> > > Congrats, Zhu Zhu!
> > >
> > > Best,
> > > Congxian
> > >
> > >
> > > aihua li  于2019年12月16日周一 上午10:16写道:
> > >
> > > > Congratulations, zhuzhu!
> > > >
> > > > > 在 2019年12月16日,上午10:04,Jingsong Li  写道:
> > > > >
> > > > > Congratulations Zhu Zhu!
> > > > >
> > > > > Best,
> > > > > Jingsong Lee
> > > > >
> > > > > On Mon, Dec 16, 2019 at 10:01 AM Yang Wang 
> > > > wrote:
> > > > >
> > > > >> Congratulations, Zhu Zhu!
> > > > >>
> > > > >> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
> > > > >>
> > > > >>> Congratulations, Zhu Zhu!
> > > > >>>
> > > > >>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu 
> > wrote:
> > > > >>>
> > > >  Congratulations, Zhu Zhu ! !
> > > > 
> > > >  Best,
> > > >  Leonard Xu
> > > > 
> > > > > On Dec 16, 2019, at 07:53, Becket Qin 
> > > wrote:
> > > > >
> > > > > Congrats, Zhu Zhu!
> > > > >
> > > > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu <
> dian0511...@gmail.com>
> > > > >>> wrote:
> > > > >
> > > > >> Congrats Zhu Zhu!
> > > > >>
> > > > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> > > > >>>
> > > > >>> Thanks everyone for the warm welcome!
> > > > >>> It's my honor and pleasure to improve Flink with all of you
> in
> > > the
> > > > >>> community!
> > > > >>>
> > > > >>> Thanks,
> > > > >>> Zhu Zhu
> > > > >>>
> > > > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> > > > >>>
> > > >  Congratulations!:)
> > > > 
> > > >  Hequn Cheng  于2019年12月15日周日
> 上午11:47写道:
> > > > 
> > > > > Congrats, Zhu Zhu!
> > > > >
> > > > > Best, Hequn
> > > > >
> > > > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen <
> > suez1...@gmail.com
> > > >
> > > >  wrote:
> > > > >
> > > > >> Congratulations!
> > > > >>
> > > > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong <
> > > walter...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >>> Congrats Zhu Zhu :-)
> > > > >>>
> > > > >>> --
> > > > >>> Rong
> > > > >>>
> > > > >>> On Sat, Dec 14, 2019 at 4:47 AM tison <
> > wander4...@gmail.com>
> > > >  wrote:
> > > > >>>
> > > >  Congratulations!:)
> > > > 
> > > >  Best,
> > > >  tison.
> > > > 
> > > > 
> > > >  OpenInx  于2019年12月14日周六 下午7:34写道:
> > > > 
> > > > > Congrats Zhu Zhu!
> > > > >
> > > > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang <
> > > zjf...@gmail.com
> > > > >>>
> > > > > wrote:
> > > > >
> > > > >> Congrats, Zhu Zhu!
> > > > >>
> > > > >> Paul Lam  于2019年12月14日周六
> > > 上午10:29写道:
> > > > >>
> > > > >>> Congrats Zhu Zhu!
> > > > >>>
> > > > >>> Best,
> > > > >>> Paul Lam
> > > > >>>
> > > > >>> Kurt Young  于2019年12月14日周六
> > 上午10:22写道:
> > > > >>>
> > > >  Congratulations Zhu Zhu!
> > > > 
> > > >  Best,
> > > >  Kurt
> > > > 
> > > > 
> > > >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > > > >> sunjincheng...@gmail.com>
> > > >  wrote:
> > > > 
> > > > > Congrats ZhuZhu and welcome on board!
> > > > >
> > > > > Best,
> > > > > Jincheng
> > > > >
> > > > >
> > > > > Jark Wu  于2019年12月14日周六
> 上午9:55写道:
> > > > >
> > > > >> Congratulations, Zhu Zhu!
> > > > >>
> > > > >> Best,
> > > > >> Jark
> > > > >>
> > > > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > > > >> karma...@gmail.com
> > > > 
> > > > >> wrote:
> > > > >>
> > > > >>> Congrats, ZhuZhu!
> > > > >>>
> > > > >>> Bowen Li  于 2019年12月14日周六
> > > > > 上午5:37写道:
> > > > >>>
> > > >  Congrats!
> > > > 
> > > >  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> > > >  usxu...@gmail.com>
> > > > >

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Yun Tang
Congratulations ZZ

Best
Yun Tang

From: Guowei Ma 
Sent: Monday, December 16, 2019 11:15
To: dev 
Subject: Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

Congrats Zhuzhu!
Best,
Guowei


Zhenghua Gao  于2019年12月16日周一 上午10:47写道:

> Congrats!
>
> *Best Regards,*
> *Zhenghua Gao*
>
>
> On Mon, Dec 16, 2019 at 10:36 AM Biao Liu  wrote:
>
> > Congrats Zhu Zhu!
> >
> > Thanks,
> > Biao /'bɪ.aʊ/
> >
> >
> >
> > On Mon, 16 Dec 2019 at 10:23, Congxian Qiu 
> wrote:
> >
> > > Congrats, Zhu Zhu!
> > >
> > > Best,
> > > Congxian
> > >
> > >
> > > aihua li  于2019年12月16日周一 上午10:16写道:
> > >
> > > > Congratulations, zhuzhu!
> > > >
> > > > > 在 2019年12月16日,上午10:04,Jingsong Li  写道:
> > > > >
> > > > > Congratulations Zhu Zhu!
> > > > >
> > > > > Best,
> > > > > Jingsong Lee
> > > > >
> > > > > On Mon, Dec 16, 2019 at 10:01 AM Yang Wang 
> > > > wrote:
> > > > >
> > > > >> Congratulations, Zhu Zhu!
> > > > >>
> > > > >> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
> > > > >>
> > > > >>> Congratulations, Zhu Zhu!
> > > > >>>
> > > > >>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu 
> > wrote:
> > > > >>>
> > > >  Congratulations, Zhu Zhu ! !
> > > > 
> > > >  Best,
> > > >  Leonard Xu
> > > > 
> > > > > On Dec 16, 2019, at 07:53, Becket Qin 
> > > wrote:
> > > > >
> > > > > Congrats, Zhu Zhu!
> > > > >
> > > > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu <
> dian0511...@gmail.com>
> > > > >>> wrote:
> > > > >
> > > > >> Congrats Zhu Zhu!
> > > > >>
> > > > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
> > > > >>>
> > > > >>> Thanks everyone for the warm welcome!
> > > > >>> It's my honor and pleasure to improve Flink with all of you
> in
> > > the
> > > > >>> community!
> > > > >>>
> > > > >>> Thanks,
> > > > >>> Zhu Zhu
> > > > >>>
> > > > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
> > > > >>>
> > > >  Congratulations!:)
> > > > 
> > > >  Hequn Cheng  于2019年12月15日周日
> 上午11:47写道:
> > > > 
> > > > > Congrats, Zhu Zhu!
> > > > >
> > > > > Best, Hequn
> > > > >
> > > > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen <
> > suez1...@gmail.com
> > > >
> > > >  wrote:
> > > > >
> > > > >> Congratulations!
> > > > >>
> > > > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong <
> > > walter...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >>> Congrats Zhu Zhu :-)
> > > > >>>
> > > > >>> --
> > > > >>> Rong
> > > > >>>
> > > > >>> On Sat, Dec 14, 2019 at 4:47 AM tison <
> > wander4...@gmail.com>
> > > >  wrote:
> > > > >>>
> > > >  Congratulations!:)
> > > > 
> > > >  Best,
> > > >  tison.
> > > > 
> > > > 
> > > >  OpenInx  于2019年12月14日周六 下午7:34写道:
> > > > 
> > > > > Congrats Zhu Zhu!
> > > > >
> > > > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang <
> > > zjf...@gmail.com
> > > > >>>
> > > > > wrote:
> > > > >
> > > > >> Congrats, Zhu Zhu!
> > > > >>
> > > > >> Paul Lam  于2019年12月14日周六
> > > 上午10:29写道:
> > > > >>
> > > > >>> Congrats Zhu Zhu!
> > > > >>>
> > > > >>> Best,
> > > > >>> Paul Lam
> > > > >>>
> > > > >>> Kurt Young  于2019年12月14日周六
> > 上午10:22写道:
> > > > >>>
> > > >  Congratulations Zhu Zhu!
> > > > 
> > > >  Best,
> > > >  Kurt
> > > > 
> > > > 
> > > >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
> > > > >> sunjincheng...@gmail.com>
> > > >  wrote:
> > > > 
> > > > > Congrats ZhuZhu and welcome on board!
> > > > >
> > > > > Best,
> > > > > Jincheng
> > > > >
> > > > >
> > > > > Jark Wu  于2019年12月14日周六
> 上午9:55写道:
> > > > >
> > > > >> Congratulations, Zhu Zhu!
> > > > >>
> > > > >> Best,
> > > > >> Jark
> > > > >>
> > > > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
> > > > >> karma...@gmail.com
> > > > 
> > > > >> wrote:
> > > > >>
> > > > >>> Congrats, ZhuZhu!
> > > > >>>
> > > > >>> Bowen Li  于 2019年12月14日周六
> > > > > 上午5:37写道:
> > > > >>>
> > > >  Congrats!
> > > > 
> > > >  On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
> > > >  usxu...@gmail.com>
> > > >  wrote:
> > > > >

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Haibo Sun
Congratulations, Zhu Zhu!


Best,
Haibo
在 2019-12-16 11:16:55,"Yun Tang"  写道:
>Congratulations ZZ
>
>Best
>Yun Tang
>
>From: Guowei Ma 
>Sent: Monday, December 16, 2019 11:15
>To: dev 
>Subject: Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer
>
>Congrats Zhuzhu!
>Best,
>Guowei
>
>
>Zhenghua Gao  于2019年12月16日周一 上午10:47写道:
>
>> Congrats!
>>
>> *Best Regards,*
>> *Zhenghua Gao*
>>
>>
>> On Mon, Dec 16, 2019 at 10:36 AM Biao Liu  wrote:
>>
>> > Congrats Zhu Zhu!
>> >
>> > Thanks,
>> > Biao /'bɪ.aʊ/
>> >
>> >
>> >
>> > On Mon, 16 Dec 2019 at 10:23, Congxian Qiu 
>> wrote:
>> >
>> > > Congrats, Zhu Zhu!
>> > >
>> > > Best,
>> > > Congxian
>> > >
>> > >
>> > > aihua li  于2019年12月16日周一 上午10:16写道:
>> > >
>> > > > Congratulations, zhuzhu!
>> > > >
>> > > > > 在 2019年12月16日,上午10:04,Jingsong Li  写道:
>> > > > >
>> > > > > Congratulations Zhu Zhu!
>> > > > >
>> > > > > Best,
>> > > > > Jingsong Lee
>> > > > >
>> > > > > On Mon, Dec 16, 2019 at 10:01 AM Yang Wang 
>> > > > wrote:
>> > > > >
>> > > > >> Congratulations, Zhu Zhu!
>> > > > >>
>> > > > >> wenlong.lwl  于2019年12月16日周一 上午9:56写道:
>> > > > >>
>> > > > >>> Congratulations, Zhu Zhu!
>> > > > >>>
>> > > > >>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu 
>> > wrote:
>> > > > >>>
>> > > >  Congratulations, Zhu Zhu ! !
>> > > > 
>> > > >  Best,
>> > > >  Leonard Xu
>> > > > 
>> > > > > On Dec 16, 2019, at 07:53, Becket Qin 
>> > > wrote:
>> > > > >
>> > > > > Congrats, Zhu Zhu!
>> > > > >
>> > > > > On Sun, Dec 15, 2019 at 10:26 PM Dian Fu <
>> dian0511...@gmail.com>
>> > > > >>> wrote:
>> > > > >
>> > > > >> Congrats Zhu Zhu!
>> > > > >>
>> > > > >>> 在 2019年12月15日,下午6:23,Zhu Zhu  写道:
>> > > > >>>
>> > > > >>> Thanks everyone for the warm welcome!
>> > > > >>> It's my honor and pleasure to improve Flink with all of you
>> in
>> > > the
>> > > > >>> community!
>> > > > >>>
>> > > > >>> Thanks,
>> > > > >>> Zhu Zhu
>> > > > >>>
>> > > > >>> Benchao Li  于2019年12月15日周日 下午3:54写道:
>> > > > >>>
>> > > >  Congratulations!:)
>> > > > 
>> > > >  Hequn Cheng  于2019年12月15日周日
>> 上午11:47写道:
>> > > > 
>> > > > > Congrats, Zhu Zhu!
>> > > > >
>> > > > > Best, Hequn
>> > > > >
>> > > > > On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen <
>> > suez1...@gmail.com
>> > > >
>> > > >  wrote:
>> > > > >
>> > > > >> Congratulations!
>> > > > >>
>> > > > >> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong <
>> > > walter...@gmail.com>
>> > > > >> wrote:
>> > > > >>
>> > > > >>> Congrats Zhu Zhu :-)
>> > > > >>>
>> > > > >>> --
>> > > > >>> Rong
>> > > > >>>
>> > > > >>> On Sat, Dec 14, 2019 at 4:47 AM tison <
>> > wander4...@gmail.com>
>> > > >  wrote:
>> > > > >>>
>> > > >  Congratulations!:)
>> > > > 
>> > > >  Best,
>> > > >  tison.
>> > > > 
>> > > > 
>> > > >  OpenInx  于2019年12月14日周六 下午7:34写道:
>> > > > 
>> > > > > Congrats Zhu Zhu!
>> > > > >
>> > > > > On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang <
>> > > zjf...@gmail.com
>> > > > >>>
>> > > > > wrote:
>> > > > >
>> > > > >> Congrats, Zhu Zhu!
>> > > > >>
>> > > > >> Paul Lam  于2019年12月14日周六
>> > > 上午10:29写道:
>> > > > >>
>> > > > >>> Congrats Zhu Zhu!
>> > > > >>>
>> > > > >>> Best,
>> > > > >>> Paul Lam
>> > > > >>>
>> > > > >>> Kurt Young  于2019年12月14日周六
>> > 上午10:22写道:
>> > > > >>>
>> > > >  Congratulations Zhu Zhu!
>> > > > 
>> > > >  Best,
>> > > >  Kurt
>> > > > 
>> > > > 
>> > > >  On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
>> > > > >> sunjincheng...@gmail.com>
>> > > >  wrote:
>> > > > 
>> > > > > Congrats ZhuZhu and welcome on board!
>> > > > >
>> > > > > Best,
>> > > > > Jincheng
>> > > > >
>> > > > >
>> > > > > Jark Wu  于2019年12月14日周六
>> 上午9:55写道:
>> > > > >
>> > > > >> Congratulations, Zhu Zhu!
>> > > > >>
>> > > > >> Best,
>> > > > >> Jark
>> > > > >>
>> > > > >> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
>> > > > >> karma...@gmail.com
>> > > > 
>> > > > >> wrote:
>> > > > >>
>> > > > >>> Congrats, ZhuZhu!
>> > > > >>>
>> > > > >>> Bowen Li  于 2019年12月14日周六
>> > > > > 上午5:37写道:
>> >

Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Danny Chan
Congrats Zhu Zhu!

Best,
Danny Chan
On Dec 14, 2019 at 12:51 AM +0800, dev@flink.apache.org wrote:
>
> Congrats Zhu Zhu and welcome on board!


Re: [ANNOUNCE] Weekly Community Update 2019/50

2019-12-15 Thread Hequn Cheng
Hi Konstantin,

Happy holidays, and thanks a lot for your great, continuous work on these
updates.
They make it easier for us to catch up with what's going on in
the community, which I think is quite helpful.

I'm wondering if I can help out and cover this during your vacation. :)

Best,
Hequn

On Sun, Dec 15, 2019 at 11:36 PM Konstantin Knauf 
wrote:

> Dear community,
>
> happy to share this week's brief community digest with updates on Flink
> 1.8.3 and Flink 1.10, a discussion on how to facilitate easier Flink/Hive
> setups, a couple of blog posts and a bit more.
>
> *Personal Note:* Thank you for reading these updates since I started them
> early this year. I will take a three week Christmas break and will be back
> with a Holiday season community update on the 12th of January.
>
> Flink Development
> ==
>
> * [releases] Apache Flink 1.8.3 was released on Wednesday. [1,2]
>
> * [releases] The feature freeze for Apache Flink 1.10 took place on Monday. The
> community is now working on testing, bug fixes and improving the
> documentation in order to create a first release candidate soon. [3]
>
> * [development process] Seth has revived the discussion on a past PR by
> Marta, which added a documentation style guide to the contributor guide.
> Please check it out [4] if you are contributing documentation to Apache
> Flink. [5]
>
> * [security] Following a recent report to the Flink PMC of "exploiting"
> the Flink Web UI for remote code execution, Robert has started a discussion
> on how to improve the tooling/documentation to make users aware of this
> possibility and recommend securing this interface in production setups. [6]
>
> * [sql] Bowen has started a discussion on how to simplify the Flink-Hive
> setup for new users as currently users need to add some additional
> dependencies to the classpath manually. The discussion seems to conclude
> towards providing a single additional hive-uber jar, which contains all the
> required dependencies. [7]
>
> [1] https://flink.apache.org/news/2019/12/11/release-1.8.3.html
> [2]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Apache-Flink-1-8-3-released-tp35868.html
> [3]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Feature-freeze-for-Apache-Flink-1-10-0-release-tp35139.html
> [4] https://github.com/apache/flink-web/pull/240
> [5]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-Docs-Style-Guide-Review-tp35758.html
> [6]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Improve-documentation-tooling-around-security-of-Flink-tp35898.html
> [7]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-have-separate-Flink-distributions-with-built-in-Hive-dependencies-tp35918.html
>
> Notable Bugs
> ==
>
> [FLINK-15152] [1.9.1] When a "stop" action on a job fails because not all
> tasks are in "RUNNING" state, the job no longer checkpoints afterwards. [8]
>
> [8] https://issues.apache.org/jira/browse/FLINK-15152
>
> Events, Blog Posts, Misc
> ===
>
> * Zhu Zhu is now an Apache Flink Comitter. Congratulations! [9]
>
> * Gerred Dillon has published a blog post on the Apache Flink blog on how
> to run Flink on Kubernetes with a KUDO Flink operator. [10]
>
> * In this blog post Apache Flink PMC Sun Jincheng outlines the reasons and
> motivation for his and his colleague's work to provide a world-class Python
> support for Apache Flink's Table API. [11]
>
> * Upcoming Meetups
> * On December 17th there will be the second Apache Flink meetup in
> Seoul. [12] *Dongwon* has shared a detailed agenda in last week's
> community update. [13]
> * On December 18th Alexander Fedulov will talk about Stateful Stream
> Processing with Apache Flink at the Java Professionals Meetup in Minsk. [14]
>
> [9]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Zhu-Zhu-becomes-a-Flink-committer-tp35944.html
> [10] https://flink.apache.org/news/2019/12/09/flink-kubernetes-kudo.html
> [11]
> https://developpaper.com/why-will-apache-flink-1-9-0-support-the-python-api/
> [12] https://www.meetup.com/Seoul-Apache-Flink-Meetup/events/266824815/
> [13]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Weekly-Community-Update-2019-48-td35423.html
> [14] https://www.meetup.com/Apache-Flink-Meetup-Minsk/events/267134296/
>
> Cheers,
>
> Konstantin (@snntrable)
>
> --
>
> Konstantin Knauf | Solutions Architect
>
> +49 160 91394525
>
>
> Follow us @VervericaData Ververica 
>
>
> --
>
> Join Flink Forward  - The Apache Flink
> Conference
>
> Stream Processing | Event Driven | Real Time
>
> --
>
> Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
>
> --
> Ververica GmbH
> Registered at Amtsgericht Charlottenburg: HRB 158244 B
> Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
> (Tony) Cheng
>


Re: [DISCUSS] have separate Flink distributions with built-in Hive dependencies

2019-12-15 Thread Jingsong Li
Thanks all for explaining.

I misunderstood the original proposal.
-1 to putting them in our distributions.
+1 to providing Hive uber jars, as Seth and Aljoscha advised.

Hive is just a connector, no matter how important it is.
So I totally agree that we shouldn't put them in our distributions.
We can start offering three uber jars:
- flink-sql-connector-hive-1 (uber jar with Hive dependency version 1.2.1)
- flink-sql-connector-hive-2 (uber jar with Hive dependency version 2.3.4)
- flink-sql-connector-hive-3 (uber jar with Hive dependency version 3.1.1)
In my understanding, that is quite enough for users.

Best,
Jingsong Lee

On Sun, Dec 15, 2019 at 12:42 PM Jark Wu  wrote:

> I agree with Seth and Aljoscha and think that is the right way to go.
> We already provide uber jars for kafka and elasticsearch for an out-of-box
> experience; you can see the download links on this page [1].
> Users can easily download the connectors and versions they like and drop
> them into the SQL CLI lib directory. The uber jars
> contain all the required dependencies and may be shaded. In this way,
> users can skip building an uber jar themselves.
> Hive is indeed a "connector" too, and should follow the same approach.
>
> Best,
> Jark
>
> [1]:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#dependencies
>
> On Sat, 14 Dec 2019 at 03:03, Aljoscha Krettek 
> wrote:
>
> > I was going to suggest the same thing as Seth. So yes, I’m against having
> > Flink distributions that contain Hive but for convenience downloads as we
> > have for Hadoop.
> >
> > Best,
> > Aljoscha
> >
> > > On 13. Dec 2019, at 18:04, Seth Wiesman  wrote:
> > >
> > > I'm also -1 on separate builds.
> > >
> > > What about publishing convenience jars that contain the dependencies
> for
> > > each version? For example, there could be a flink-hive-1.2.1-uber.jar
> > that
> > > users could just add to their lib folder that contains all the
> necessary
> > > dependencies to connect to that hive version.
> > >
> > >
> > > On Fri, Dec 13, 2019 at 8:50 AM Robert Metzger 
> > wrote:
> > >
> > >> I'm generally not opposed to convenience binaries, if a huge number of
> > >> people would benefit from them, and the overhead for the Flink project
> > is
> > >> low. I did not see a huge demand for such binaries yet (neither for
> the
> > >> Flink + Hive integration). Looking at Apache Spark, they are also only
> > >> offering convenience binaries for Hadoop only.
> > >>
> > >> Maybe we could provide a "Docker Playground" for Flink + Hive in the
> > >> documentation (and the flink-playgrounds.git repo)?
> > >> (similar to
> > >>
> > >>
> >
> https://ci.apache.org/projects/flink/flink-docs-master/getting-started/docker-playgrounds/flink-operations-playground.html
> > >> )
> > >>
> > >>
> > >>
> > >> On Fri, Dec 13, 2019 at 3:04 PM Chesnay Schepler 
> > >> wrote:
> > >>
> > >>> -1
> > >>>
> > >>> We shouldn't need to deploy additional binaries to have a feature be
> > >>> remotely usable.
> > >>> This usually points to something else being done incorrectly.
> > >>>
> > >>> If it is indeed such a hassle to setup hive on Flink, then my
> > conclusion
> > >>> would be that either
> > >>> a) the documentation needs to be improved
> > >>> b) the architecture needs to be improved
> > >>> or, if all else fails c) provide a utility script for setting it up
> > >> easier.
> > >>>
> > >>> We spent a lot of time on reducing the number of binaries in the
> hadoop
> > >>> days, and also go extra steps to prevent a separate Java 11 binary,
> and
> > >>> I see no reason why Hive should get special treatment on this matter.
> > >>>
> > >>> Regards,
> > >>> Chesnay
> > >>>
> > >>> On 13/12/2019 09:44, Bowen Li wrote:
> >  Hi all,
> > 
> >  I want to propose to have a couple separate Flink distributions with
> > >> Hive
> >  dependencies on specific Hive versions (2.3.4 and 1.2.1). The
> > >>> distributions
> >  will be provided to users on Flink download page [1].
> > 
> >  A few reasons to do this:
> > 
> >  1) Flink-Hive integration is important to many many Flink and Hive
> > >> users
> > >>> in
> >  two dimensions:
> >   a) for Flink metadata: HiveCatalog is the only persistent
> catalog
> > >>> to
> >  manage Flink tables. With Flink 1.10 supporting more DDL, the
> > >> persistent
> >  catalog would be playing even more critical role in users' workflow
> >   b) for Flink data: Hive data connector (source/sink) helps both
> > >>> Flink
> >  and Hive users to unlock new use cases in streaming,
> > >>> near-realtime/realtime
> >  data warehouse, backfill, etc.
> > 
> >  2) currently users have to go thru a *really* tedious process to get
> >  started, because it requires lots of extra jars (see [2]) that are
> > >> absent
> >  in Flink's lean distribution. We've had so many users from public
> > >> mailing
> >  list, private email, DingTalk groups who got frustrated on spending
> > >> lots
> > >>> of
> > >>

Re: [DISCUSS] Overwrite and partition inserting support in 1.10

2019-12-15 Thread Jingsong Li
Thanks all,

As you suggested, I'd like to:
- FLIP-63: move "CREATE TABLE ... PARTITIONED BY" and "MSCK REPAIR TABLE"
to the further-discussion chapter.
- Remove the hive dialect limitation for the already supported "INSERT OVERWRITE" and
"INSERT ... PARTITION(...)".
- Limit "CREATE TABLE ... PARTITIONED BY" to the hive dialect.
Thanks again, everyone.

Best,
Jingsong Lee

On Sat, Dec 14, 2019 at 2:41 AM Xuefu Z  wrote:

> Thanks all for the healthy discussions. I'd just like to point out a slight
> difference between following a standard and being standard compatible. Most DB vendors
> mean the latter when they claim to follow a SQL standard. However, that
> doesn't mean they don't have any syntax beyond the standard grammar.
> Extensions are very common.
>
> Therefore, I think Flink SQL can also have more freedom in extending the
> standard grammar and adopting non-standard grammar such as "INSERT
> OVERWRITE" that comes from Hive. While Hive has a lot of good cases like
> this, we are also free to reject, or put into a specific dialect, anything
> that's too specific or doesn't make sense in Flink. To me, "INSERT
> OVERWRITE" can be adopted, while "MSCK REPAIR TABLE" can be rejected or put
> into the "Hive dialect".
>
> Lastly, if a FLIP is officially accepted, I agree that we should respect it and
> fix deviations as bugs accordingly. If there are second thoughts or debate on
> anything, a vote is required to overturn it. Before that, the original FLIP
> holds.
>
> Thanks,
> Xuefu
>
>
> On Fri, Dec 13, 2019 at 1:20 AM Jingsong Li 
> wrote:
>
> > Hi Timo,
> >
> > Thanks for your feedback.
> >
> > The reason of `The DDL can like this (With hive dialect)` is:
> > The syntax of creating partition table is controversial, so we think we
> > should put it aside for the time being to make it invisible to users.
> Since
> > we implemented this syntax in 1.9, we decided to put it under hive
> dialect.
> > (Subsequent votes will be taken to determine.)
> > MSCK means metastore check, It is listed in the document mainly for the
> > integrity of the story. Of course, it can be ignored. I fully agree that
> it
> > can be discussed slowly in the future.
> >
> > >The current problem for the CLI is that it still does not use the
> > flink-sql-parser. We can support all new syntax in Flink 1.11 once the
> CLI
> > architecture has changed there. With the current architecture (using
> simple
> > regex), I'm not sure if we can support all the grammar listed in the
> FLIP.
> >
> > You are right, we cannot support the new DDL grammar for now. We should
> > wait
> > for the refactoring of the SQL CLI in 1.11.
> > What we can do is to remove the restriction of hiveDialect for the insert
> > (overwrite and partition) syntax. That's what we've supported, and so far
> > we haven't received any objections.
> >
> > Best,
> > Jingsong Lee
> >
> > On Fri, Dec 13, 2019 at 4:51 PM Timo Walther  wrote:
> >
> > > Hi everyone,
> > >
> > > sorry, I was not aware that FLIP-63 already lists a lot of additional
> > > SQL grammar. It was accepted though an official voting process so I
> > > guess we can adopt the listed grammar for Flink SQL.
> > >
> > > The only thing that confuses me is the mentioning of `The DDL can like
> > > this (With hive dialect)`, for the next time we should make it more
> > > explicit what belongs to the Flink SQL dialect and what belongs to the
> > > Hive dialect. Also the not intuitive `MSCK REPAIR TABLE` seems strange
> > > to me. What does MSCK stand for? Maybe we should skip stuff like that
> > > for now.
> > >
> > > The current problem for the CLI is that it still does not use the
> > > flink-sql-parser. We can support all new syntax in Flink 1.11 once the
> > > CLI architecture has changed there. With the current architecture
> (using
> > > simple regex), I'm not sure if we can support all the grammar listed in
> > > the FLIP.
> > >
> > > Regards,
> > > Timo
> > >
> > > On 13.12.19 04:22, Rui Li wrote:
> > > > Hi Timo,
> > > >
> > > > I understand we need further discussion about syntax/dialect for
> 1.11.
> > > But
> > > > as Jark has pointed out, the current implementation violates the
> > accepted
> > > > design of FLIP-63, which IMO qualifies as a bug. Given that it's a
> bug
> > > and
> > > > has great impact on the usability of our Hive integration, do you
> think
> > > we
> > > > can fix it in 1.10?
> > > >
> > > > On Fri, Dec 13, 2019 at 12:24 AM Jingsong Li  >
> > > wrote:
> > > >
> > > >> Hi Timo,
> > > >>
> > > >> I am OK if you think they are not bug and they should not be
> included
> > in
> > > >> 1.10.
> > > >>
> > > >> I think they have been accepted in FLIP-63. And there is no
> objection.
> > > It
> > > >> has been more than three months since the discussion of FLIP-63.
> It's
> > > been
> > > >> six months since Flink added these two syntaxs.
> > > >>
> > > >> But I can also start discussion and vote thread for FLIP-63 again,
> to
> > > make
> > > >> sure once again that everyone is happy.
> > > >>
> > > >> Best,
> > > >> Jingsong Lee
> > > >>
> > > >> On Thu, D
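
For readers following the thread, the two statements whose hive-dialect restriction is under discussion are sketched below through the Java Table API. This is a hypothetical illustration: the table names and partition column are placeholders, and it assumes the target is a partitioned table in a catalog that supports partitions (e.g. a HiveCatalog) and that the dialect restriction has been lifted as proposed (FLINK-15269).

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class OverwritePartitionSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

        // Replace the contents of a static partition of the partitioned sink table.
        tableEnv.sqlUpdate(
                "INSERT OVERWRITE sink_table PARTITION (dt = '2019-12-15') "
                        + "SELECT id, name FROM source_table WHERE dt = '2019-12-15'");

        // Append into another static partition without overwriting.
        tableEnv.sqlUpdate(
                "INSERT INTO sink_table PARTITION (dt = '2019-12-16') "
                        + "SELECT id, name FROM source_table WHERE dt = '2019-12-16'");

        tableEnv.execute("overwrite and partition sketch");
    }
}
```
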

[jira] [Created] (FLINK-15267) Streaming SQL end-to-end test (Blink planner) fails on travis

2019-12-15 Thread Yu Li (Jira)
Yu Li created FLINK-15267:
-

 Summary: Streaming SQL end-to-end test (Blink planner) fails on 
travis
 Key: FLINK-15267
 URL: https://issues.apache.org/jira/browse/FLINK-15267
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner, Tests
Affects Versions: 1.10.0
Reporter: Yu Li


As titled, the 'Streaming SQL end-to-end test (Blink planner)' case failed with 
the error below:
{code}
The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: key not found: ts
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:146)
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:671)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:989)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:989)
Caused by: java.util.NoSuchElementException: key not found: ts
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:59)
at 
org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$6.apply(TableSourceUtil.scala:164)
at 
org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$6.apply(TableSourceUtil.scala:163)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.sources.TableSourceUtil$.fixPrecisionForProducedDataType(TableSourceUtil.scala:163)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:143)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:60)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:60)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:54)
{code}

https://api.travis-ci.org/v3/job/625037124/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15268) Shaded Hadoop S3A end-to-end test fails on travis

2019-12-15 Thread Yu Li (Jira)
Yu Li created FLINK-15268:
-

 Summary: Shaded Hadoop S3A end-to-end test fails on travis
 Key: FLINK-15268
 URL: https://issues.apache.org/jira/browse/FLINK-15268
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem, Tests
Affects Versions: 1.10.0
Reporter: Yu Li


As titled, the 'Shaded Hadoop S3A end-to-end test' case failed with the error below:
{code}
java.io.IOException: regular upload failed: java.lang.NoClassDefFoundError: 
javax/xml/bind/JAXBException
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:291)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:448)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:360)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at 
org.apache.flink.fs.s3.common.hadoop.HadoopDataOutputStream.close(HadoopDataOutputStream.java:52)
at 
org.apache.flink.core.fs.ClosingFSDataOutputStream.close(ClosingFSDataOutputStream.java:64)
at 
java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:188)
at java.base/sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:341)
at java.base/sun.nio.cs.StreamEncoder.close(StreamEncoder.java:161)
at 
java.base/java.io.OutputStreamWriter.close(OutputStreamWriter.java:258)
at 
org.apache.flink.api.java.io.CsvOutputFormat.close(CsvOutputFormat.java:170)
at 
org.apache.flink.runtime.operators.DataSinkTask.invoke(DataSinkTask.java:227)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
at 
org.apache.flink.fs.s3base.shaded.com.amazonaws.util.Md5Utils.md5AsBase64(Md5Utils.java:104)
at 
org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1647)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1531)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$5(WriteOperationHelper.java:426)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
at 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
{code}

https://api.travis-ci.org/v3/job/625037121/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15269) Fix hive dialect limitation to overwrite and partition syntax

2019-12-15 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-15269:


 Summary: Fix hive dialect limitation to overwrite and partition 
syntax
 Key: FLINK-15269
 URL: https://issues.apache.org/jira/browse/FLINK-15269
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Reporter: Jingsong Lee
 Fix For: 1.10.0


As 
[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Overwrite-and-partition-inserting-support-in-1-10-td35829.html#a35885]
 discussed.

We should:
 * Remove hive dialect limitation for supported "INSERT OVERWRITE" and "INSERT 
... PARTITION(...)".
 * Limit "CREATE TABLE ... PARTITIONED BY" to hive dialect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] have separate Flink distributions with built-in Hive dependencies

2019-12-15 Thread Danny Chan
Also -1 on separate builds.

After looking at how some other big data engines handle their distributions [1], I didn't find 
a strong need to publish a separate build
just for a specific Hive version; there are, indeed, builds for different Hadoop 
versions.

Just like Seth and Aljoscha said, we could publish a flink-hive-version-uber.jar 
to use as a lib for the SQL CLI or other use cases.

[1] https://spark.apache.org/downloads.html
[2] https://www.elastic.co/guide/en/elasticsearch/hadoop/current/hive.html

Best,
Danny Chan
On Dec 14, 2019 at 3:03 AM +0800, dev@flink.apache.org wrote:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#dependencies


[jira] [Created] (FLINK-15270) Add documentation about how to specify third-party dependencies via API for Python UDFs

2019-12-15 Thread Dian Fu (Jira)
Dian Fu created FLINK-15270:
---

 Summary: Add documentation about how to specify third-party 
dependencies via API for Python UDFs
 Key: FLINK-15270
 URL: https://issues.apache.org/jira/browse/FLINK-15270
 Project: Flink
  Issue Type: Task
  Components: API / Python, Documentation
Affects Versions: 1.10.0
Reporter: Dian Fu
 Fix For: 1.10.0


Currently we already provide APIs and command line options to allow users 
to specify third-party dependencies which may be used in Python UDFs. There is 
already documentation about how to specify third-party dependencies via the 
command line options. We should also add documentation about how to specify 
third-party dependencies via the API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15271) Add documentation about the Python environment requirements

2019-12-15 Thread Dian Fu (Jira)
Dian Fu created FLINK-15271:
---

 Summary: Add documentation about the Python environment 
requirements
 Key: FLINK-15271
 URL: https://issues.apache.org/jira/browse/FLINK-15271
 Project: Flink
  Issue Type: Task
  Components: API / Python, Documentation
Affects Versions: 1.10.0
Reporter: Dian Fu
 Fix For: 1.10.0


Python UDFs have specific requirements on the Python environment, such as 
Python 3.5+, Beam 2.15.0, etc. We should add clear documentation about these 
requirements.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread Xintong Song
Congratulations Zhu Zhu~

Thank you~

Xintong Song



On Mon, Dec 16, 2019 at 12:34 PM Danny Chan  wrote:

> Congrats Zhu Zhu!
>
> Best,
> Danny Chan
> On Dec 14, 2019 at 12:51 AM +0800, dev@flink.apache.org wrote:
> >
> > Congrats Zhu Zhu and welcome on board!
>