Re: Unsubscribe

2022-06-05 Thread 张立志
Unsubscribe



zh_ha...@163.com




 Original message 
| From | 王飞<13465637...@163.com> |
| Date | 2022-06-02 22:20 |
| To | dev |
| Cc | |
| Subject | Unsubscribe |
Unsubscribe




Re: Unsubscribe

2022-06-05 Thread 张立志
Unsubscribe



zh_ha...@163.com




 Original message 
| From | 米子日匀 |
| Date | 2022-06-02 22:05 |
| To | dev@flink.apache.org |
| Cc | |
| Subject | Unsubscribe |


Unsubscribe











[ANNOUNCE] Apache Flink Kubernetes Operator 1.0.0 released

2022-06-05 Thread Yang Wang
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.0.0.

The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.
This is the first production ready release and brings numerous improvements
and new features to almost every aspect of the operator.

Please check out the release blog post for an overview of the release:
https://flink.apache.org/news/2022/06/05/release-kubernetes-operator-1.0.0.html

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink Kubernetes Operator can be found at:
https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator

Official Docker image for Flink Kubernetes Operator applications can be
found at:
https://hub.docker.com/r/apache/flink-kubernetes-operator

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351500

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Gyula & Yang


Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.0.0 released

2022-06-05 Thread rui fan
Thanks Yang for driving the release, and thanks to
all contributors for making this release happen!

Best wishes
Rui Fan

On Sun, Jun 5, 2022 at 4:14 PM Yang Wang  wrote:

> The Apache Flink community is very happy to announce the release of Apache
> Flink Kubernetes Operator 1.0.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache Flink
> applications and their lifecycle through native k8s tooling like kubectl.
> This is the first production ready release and brings numerous
> improvements and new features to almost every aspect of the operator.
>
> Please check out the release blog post for an overview of the release:
>
> https://flink.apache.org/news/2022/06/05/release-kubernetes-operator-1.0.0.html
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink Kubernetes Operator can be found at:
>
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>
> Official Docker image for Flink Kubernetes Operator applications can be
> found at:
> https://hub.docker.com/r/apache/flink-kubernetes-operator
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351500
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
> Regards,
> Gyula & Yang
>


Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.0.0 released

2022-06-05 Thread tison
Congrats! Thank you all for making this release happen.

Best,
tison.


rui fan <1996fan...@gmail.com> wrote on Sun, Jun 5, 2022 at 17:19:

> Thanks Yang for driving the release, and thanks to
> all contributors for making this release happen!
>
> Best wishes
> Rui Fan
>
> On Sun, Jun 5, 2022 at 4:14 PM Yang Wang  wrote:
>
> > The Apache Flink community is very happy to announce the release of
> Apache
> > Flink Kubernetes Operator 1.0.0.
> >
> > The Flink Kubernetes Operator allows users to manage their Apache Flink
> > applications and their lifecycle through native k8s tooling like kubectl.
> > This is the first production ready release and brings numerous
> > improvements and new features to almost every aspect of the operator.
> >
> > Please check out the release blog post for an overview of the release:
> >
> >
> https://flink.apache.org/news/2022/06/05/release-kubernetes-operator-1.0.0.html
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > Maven artifacts for Flink Kubernetes Operator can be found at:
> >
> >
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
> >
> > Official Docker image for Flink Kubernetes Operator applications can be
> > found at:
> > https://hub.docker.com/r/apache/flink-kubernetes-operator
> >
> > The full release notes are available in Jira:
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351500
> >
> > We would like to thank all contributors of the Apache Flink community who
> > made this release possible!
> >
> > Regards,
> > Gyula & Yang
> >
>


Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.0.0 released

2022-06-05 Thread Jing Ge
Amazing! Thanks Yang for driving this! Thanks all for your effort!

Best regards,
Jing

On Sun, Jun 5, 2022 at 11:30 AM tison  wrote:

> Congrats! Thank you all for making this release happen.
>
> Best,
> tison.
>
>
> rui fan <1996fan...@gmail.com> wrote on Sun, Jun 5, 2022 at 17:19:
>
>> Thanks Yang for driving the release, and thanks to
>> all contributors for making this release happen!
>>
>> Best wishes
>> Rui Fan
>>
>> On Sun, Jun 5, 2022 at 4:14 PM Yang Wang  wrote:
>>
>> > The Apache Flink community is very happy to announce the release of
>> Apache
>> > Flink Kubernetes Operator 1.0.0.
>> >
>> > The Flink Kubernetes Operator allows users to manage their Apache Flink
>> > applications and their lifecycle through native k8s tooling like
>> kubectl.
>> > This is the first production ready release and brings numerous
>> > improvements and new features to almost every aspect of the operator.
>> >
>> > Please check out the release blog post for an overview of the release:
>> >
>> >
>> https://flink.apache.org/news/2022/06/05/release-kubernetes-operator-1.0.0.html
>> >
>> > The release is available for download at:
>> > https://flink.apache.org/downloads.html
>> >
>> > Maven artifacts for Flink Kubernetes Operator can be found at:
>> >
>> >
>> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>> >
>> > Official Docker image for Flink Kubernetes Operator applications can be
>> > found at:
>> > https://hub.docker.com/r/apache/flink-kubernetes-operator
>> >
>> > The full release notes are available in Jira:
>> >
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351500
>> >
>> > We would like to thank all contributors of the Apache Flink community
>> who
>> > made this release possible!
>> >
>> > Regards,
>> > Gyula & Yang
>> >
>>
>


Re: [DISCUSS] Deprecate SourceFunction APIs

2022-06-05 Thread Jark Wu
+1 to David's point.

Usually, when we deprecate some interfaces, we should point users to use
the recommended alternatives.
However, implementing the new Source interface for some simple scenarios is
too challenging and complex.
We also found it isn't easy to push the internal connector to upgrade to
the new Source because
"FLIP-27 are hard to understand, while SourceFunction is easy".

+1 to make implementing a simple Source easier before deprecating
SourceFunction.

Best,
Jark


On Sun, 5 Jun 2022 at 07:29, Jingsong Lee  wrote:

> +1 to David and Ingo.
>
> Before deprecate and remove SourceFunction, we should have some easier APIs
> to wrap new Source, the cost to write a new Source is too high now.
>
>
>
> Ingo Bürk wrote on Sun, Jun 5, 2022 at 05:32:
>
> > I +1 everything David said. The new Source API raised the complexity
> > significantly. It's great to have such a rich, powerful API that can do
> > everything, but in the process we lost the ability to onboard people to
> > the APIs.
> >
> >
> > Best
> > Ingo
> >
> > On 04.06.22 21:21, David Anderson wrote:
> > > I'm in favor of this, but I think we need to make it easier to
> implement
> > > data generators and test sources. As things stand in 1.15, unless you
> can
> > > be satisfied with using a NumberSequenceSource followed by a map,
> things
> > > get quite complicated. I looked into reworking the data generators used
> > in
> > > the training exercises, and got discouraged by the amount of work
> > involved.
> > > (The sources used in the training want to be unbounded, and need
> > > watermarking in the sources, which means that using
> NumberSequenceSource
> > > isn't an option.)
> > >
> > > I think the proposed deprecation will be better received if it can be
> > > accompanied by something that makes implementing a simple Source easier
> > > than it is now. People are continuing to implement new SourceFunctions
> > > because the interfaces defined by FLIP-27 are hard to understand, while
> > > SourceFunction is easy. Alex, I believe you were looking into
> > implementing
> > > an easier-to-use building block that could be used in situations like
> > this.
> > > Can we get something like that in place first?
> > >
> > > David
> > >
> > > On Fri, Jun 3, 2022 at 4:52 PM Jing Ge  wrote:
> > >
> > >> Hi,
> > >>
> > >> Thanks Alex for driving this!
> > >>
> > >> +1 To give the Flink developers, especially Connector developers the
> > clear
> > >> signal that the new Source API is recommended according to FLIP-27, we
> > >> should mark them as deprecated.
> > >>
> > >> There are some open questions to discuss:
> > >>
> > >> 1. Do we need to mark all subinterfaces/subclasses as deprecated? e.g.
> > >> FromElementsFunction, etc. there are many. What are the replacements?
> > >> 2. Do we need to mark all subclasses that have replacement as
> > deprecated?
> > >> e.g. ExternallyInducedSource whose replacement class, if I am not
> > mistaken,
> > >> ExternallyInducedSourceReader is @Experimental
> > >> 3. Do we need to mark all related test utility classes as deprecated?
> > >>
> > >> I think it might make sense to create an umbrella ticket to cover all
> of
> > >> these with the following process:
> > >>
> > >> 1. Mark SourceFunction as deprecated asap.
> > >> 2. Mark subinterfaces and subclasses as deprecated, if there are
> > graduated
> > >> replacements. Good example is that KafkaSource replaced KafkaConsumer
> > which
> > >> has been marked as deprecated.
> > >> 3. Do not mark subinterfaces and subclasses as deprecated, if
> > replacement
> > >> classes are still experimental, check if it is time to graduate them.
> > After
> > >> graduation, go to step 2. It might take a while for graduation.
> > >> 4. Do not mark subinterfaces and subclasses as deprecated, if the
> > >> replacement classes are experimental and are too young to graduate. We
> > have
> > >> to wait. But in this case we could create new tickets under the
> umbrella
> > >> ticket.
> > >> 5. Do not mark subinterfaces and subclasses as deprecated, if there is
> > no
> > >> replacement at all. We have to create new tickets and wait until the
> new
> > >> implementation has been done and graduated. It will take a longer
> time,
> > >> roughly 1,5 years.
> > >> 6. For test classes, we could follow the same rule. But I think for
> some
> > >> cases, we could consider doing the replacement directly without going
> > >> through the deprecation phase.
> > >>
> > >> When we look back on all of these, we can realize it is a big epic
> (even
> > >> bigger than an epic). It needs someone to drive it and keep focus on
> it
> > >> continuously with support from the community and push the development
> > >> towards the new Source API of FLIP-27.
> > >>
> > >> If we could have consensus for this,  Alex and I could create the
> > umbrella
> > >> ticket to kick it off.
> > >>
> > >> Best regards,
> > >> Jing
> > >>
> > >>
> > >> On Fri, Jun 3, 2022 at 3:54 PM Alexander Fedulov <
> > alexan...@ververica.com>
> > >>
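
A minimal sketch of the two styles being compared in this thread (not taken from the thread itself; it assumes the Flink 1.15 DataStream API, and the class name SimpleGenerator is illustrative only): a trivial data generator on the SourceFunction API that is proposed for deprecation, next to the NumberSequenceSource-followed-by-a-map workaround David mentions.

{code:java}
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.connector.source.lib.NumberSequenceSource;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class SourceComparison {

    // The "easy" legacy style: an unbounded generator on the SourceFunction API (being deprecated).
    public static class SimpleGenerator implements SourceFunction<String> {
        private volatile boolean running = true;

        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            long i = 0;
            while (running) {
                // Emit under the checkpoint lock, as the SourceFunction contract requires.
                synchronized (ctx.getCheckpointLock()) {
                    ctx.collect("event-" + i++);
                }
                Thread.sleep(10);
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Legacy path: one small class, but built on the interface proposed for deprecation.
        DataStream<String> legacy = env.addSource(new SimpleGenerator());

        // FLIP-27 path without writing a Source: a bounded NumberSequenceSource followed by a map.
        // It avoids implementing Source/SplitEnumerator/SourceReader, but it is bounded and does
        // not cover the unbounded, watermarked generators the training exercises need.
        DataStream<String> flip27 =
                env.fromSource(
                                new NumberSequenceSource(0, 999),
                                WatermarkStrategy.noWatermarks(),
                                "numbers")
                        .map(i -> "event-" + i)
                        .returns(Types.STRING);

        legacy.print();
        flip27.print();
        env.execute("source-comparison-sketch");
    }
}
{code}

Even in this toy form, the FLIP-27 side only stays small because NumberSequenceSource already exists; an unbounded, watermarked equivalent would mean implementing Source, SplitEnumerator and SourceReader, which is exactly the gap the thread is asking to close before deprecation.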

slack invite link

2022-06-05 Thread Sucheth S
Hi,

Can someone please share the slack invite link.

Regards,
Sucheth Shivakumar
website : https://sucheths.com
mobile : +1(650)-576-8050
San Mateo, United States


Re: [DISCUSS] Deprecate SourceFunction APIs

2022-06-05 Thread Piotr Nowojski
Also +1 to what David has written. But it doesn't mean we should be waiting
indefinitely to deprecate SourceFunction.

Best,
Piotrek

On Sun, 5 Jun 2022 at 16:46, Jark Wu  wrote:

> +1 to David's point.
>
> Usually, when we deprecate some interfaces, we should point users to use
> the recommended alternatives.
> However, implementing the new Source interface for some simple scenarios is
> too challenging and complex.
> We also found it isn't easy to push the internal connector to upgrade to
> the new Source because
> "FLIP-27 are hard to understand, while SourceFunction is easy".
>
> +1 to make implementing a simple Source easier before deprecating
> SourceFunction.
>
> Best,
> Jark
>
>
> On Sun, 5 Jun 2022 at 07:29, Jingsong Lee  wrote:
>
> > +1 to David and Ingo.
> >
> > Before deprecate and remove SourceFunction, we should have some easier
> APIs
> > to wrap new Source, the cost to write a new Source is too high now.
> >
> >
> >
> > Ingo Bürk wrote on Sun, Jun 5, 2022 at 05:32:
> >
> > > I +1 everything David said. The new Source API raised the complexity
> > > significantly. It's great to have such a rich, powerful API that can do
> > > everything, but in the process we lost the ability to onboard people to
> > > the APIs.
> > >
> > >
> > > Best
> > > Ingo
> > >
> > > On 04.06.22 21:21, David Anderson wrote:
> > > > I'm in favor of this, but I think we need to make it easier to
> > implement
> > > > data generators and test sources. As things stand in 1.15, unless you
> > can
> > > > be satisfied with using a NumberSequenceSource followed by a map,
> > things
> > > > get quite complicated. I looked into reworking the data generators
> used
> > > in
> > > > the training exercises, and got discouraged by the amount of work
> > > involved.
> > > > (The sources used in the training want to be unbounded, and need
> > > > watermarking in the sources, which means that using
> > NumberSequenceSource
> > > > isn't an option.)
> > > >
> > > > I think the proposed deprecation will be better received if it can be
> > > > accompanied by something that makes implementing a simple Source
> easier
> > > > than it is now. People are continuing to implement new
> SourceFunctions
> > > > because the interfaces defined by FLIP-27 are hard to understand,
> while
> > > > SourceFunction is easy. Alex, I believe you were looking into
> > > implementing
> > > > an easier-to-use building block that could be used in situations like
> > > this.
> > > > Can we get something like that in place first?
> > > >
> > > > David
> > > >
> > > > On Fri, Jun 3, 2022 at 4:52 PM Jing Ge  wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> Thanks Alex for driving this!
> > > >>
> > > >> +1 To give the Flink developers, especially Connector developers the
> > > clear
> > > >> signal that the new Source API is recommended according to FLIP-27,
> we
> > > >> should mark them as deprecated.
> > > >>
> > > >> There are some open questions to discuss:
> > > >>
> > > >> 1. Do we need to mark all subinterfaces/subclasses as deprecated?
> e.g.
> > > >> FromElementsFunction, etc. there are many. What are the
> replacements?
> > > >> 2. Do we need to mark all subclasses that have replacement as
> > > deprecated?
> > > >> e.g. ExternallyInducedSource whose replacement class, if I am not
> > > mistaken,
> > > >> ExternallyInducedSourceReader is @Experimental
> > > >> 3. Do we need to mark all related test utility classes as
> deprecated?
> > > >>
> > > >> I think it might make sense to create an umbrella ticket to cover
> all
> > of
> > > >> these with the following process:
> > > >>
> > > >> 1. Mark SourceFunction as deprecated asap.
> > > >> 2. Mark subinterfaces and subclasses as deprecated, if there are
> > > graduated
> > > >> replacements. Good example is that KafkaSource replaced
> KafkaConsumer
> > > which
> > > >> has been marked as deprecated.
> > > >> 3. Do not mark subinterfaces and subclasses as deprecated, if
> > > replacement
> > > >> classes are still experimental, check if it is time to graduate
> them.
> > > After
> > > >> graduation, go to step 2. It might take a while for graduation.
> > > >> 4. Do not mark subinterfaces and subclasses as deprecated, if the
> > > >> replacement classes are experimental and are too young to graduate.
> We
> > > have
> > > >> to wait. But in this case we could create new tickets under the
> > umbrella
> > > >> ticket.
> > > >> 5. Do not mark subinterfaces and subclasses as deprecated, if there
> is
> > > no
> > > >> replacement at all. We have to create new tickets and wait until the
> > new
> > > >> implementation has been done and graduated. It will take a longer
> > time,
> > > >> roughly 1,5 years.
> > > >> 6. For test classes, we could follow the same rule. But I think for
> > some
> > > >> cases, we could consider doing the replacement directly without
> going
> > > >> through the deprecation phase.
> > > >>
> > > >> When we look back on all of these, we can realize it is a big epic
> > (even
> > > >> bigger than an

Re: slack invite link

2022-06-05 Thread Martijn Visser
Hi Suceth,

Thanks for the message, I've just updated the website with a new Slack
invite link.

Best regards,

Martijn

On Sun, 5 Jun 2022 at 17:26, Sucheth S  wrote:

> Hi,
>
> Can someone please share the slack invite link.
>
> Regards,
> Sucheth Shivakumar
> website : https://sucheths.com
> mobile : +1(650)-576-8050
> San Mateo, United States
>


Re: slack invite link

2022-06-05 Thread Sucheth S
Thank you Martijn, It worked.

Regards,
Sucheth Shivakumar
website : https://sucheths.com
mobile : +1(650)-576-8050
San Mateo, United States


On Sun, Jun 5, 2022 at 9:31 AM Martijn Visser 
wrote:

> Hi Suceth,
>
> Thanks for the message, I've just updated the website with a new Slack
> invite link.
>
> Best regards,
>
> Martijn
>
> On Sun, 5 Jun 2022 at 17:26, Sucheth S  wrote:
>
> > Hi,
> >
> > Can someone please share the slack invite link.
> >
> > Regards,
> > Sucheth Shivakumar
> > website : https://sucheths.com
> > mobile : +1(650)-576-8050
> > San Mateo, United States
> >
>


Re: slack invite link

2022-06-05 Thread Gmail
Hi Martijn,

Thank you very much, the invite link for the Apache Flink community on Slack is available now.

Best regards,

Xu Zhengxi

> On Jun 6, 2022, at 00:31, Martijn Visser  wrote:
> 
> Hi Suceth,
> 
> Thanks for the message, I've just updated the website with a new Slack
> invite link.
> 
> Best regards,
> 
> Martijn
> 
> On Sun, 5 Jun 2022 at 17:26, Sucheth S  wrote:
> 
>> Hi,
>> 
>> Can someone please share the slack invite link.
>> 
>> Regards,
>> Sucheth Shivakumar
>> website : https://sucheths.com
>> mobile : +1(650)-576-8050
>> San Mateo, United States
>> 



Unsubscribe

2022-06-05 Thread Ball's Holy.
Unsubscribe

Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.0.0 released

2022-06-05 Thread Aitozi
Thanks Yang and Nice to see it happen.

Best,
Aitozi.

Yang Wang  wrote on Sun, Jun 5, 2022 at 16:14:

> The Apache Flink community is very happy to announce the release of Apache
> Flink Kubernetes Operator 1.0.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache Flink
> applications and their lifecycle through native k8s tooling like kubectl.
> This is the first production ready release and brings numerous
> improvements and new features to almost every aspect of the operator.
>
> Please check out the release blog post for an overview of the release:
>
> https://flink.apache.org/news/2022/06/05/release-kubernetes-operator-1.0.0.html
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink Kubernetes Operator can be found at:
>
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>
> Official Docker image for Flink Kubernetes Operator applications can be
> found at:
> https://hub.docker.com/r/apache/flink-kubernetes-operator
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351500
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
> Regards,
> Gyula & Yang
>


Unsubscribe

2022-06-05 Thread 11




zzhncut...@163.com



Re: Unsubscribe

2022-06-05 Thread Shengkai Fang
Hi.

Please send an email to dev-unsubscr...@flink.apache.org to unsubscribe
from the dev mail list.

Best,
Shengkai

Ball's Holy. <873925...@qq.com.invalid> wrote on Mon, Jun 6, 2022 at 10:25:

> Unsubscribe


[jira] [Created] (FLINK-27898) fix PartitionPushDown in streaming mode for hive source

2022-06-05 Thread zoucao (Jira)
zoucao created FLINK-27898:
--

 Summary: fix PartitionPushDown in streaming mode for hive source
 Key: FLINK-27898
 URL: https://issues.apache.org/jira/browse/FLINK-27898
 Project: Flink
  Issue Type: Bug
Reporter: zoucao


In the Hive source, PartitionPushDown causes problems in streaming mode. The issue can be reproduced with the following test in {*}HiveTableSourceITCase{*}:

{code:java}
@Test
public void testPushDown() throws Exception {
    final String catalogName = "hive";
    final String dbName = "source_db";
    final String tblName = "stream_test";
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(10 * 1000);
    StreamTableEnvironment tEnv =
            HiveTestUtils.createTableEnvInStreamingMode(env, SqlDialect.HIVE);
    tEnv.registerCatalog(catalogName, hiveCatalog);
    tEnv.useCatalog(catalogName);
    tEnv.executeSql(
            "CREATE TABLE source_db.stream_test ("
                    + " a INT,"
                    + " b STRING"
                    + ") PARTITIONED BY (ts int) TBLPROPERTIES ("
                    + "'streaming-source.enable'='true',"
                    + "'streaming-source.monitor-interval'='10s',"
                    + "'streaming-source.consume-order'='partition-name',"
                    + "'streaming-source.consume-start-offset'='ts=1'"
                    + ")");

    // Write three partitions: ts=0, ts=1, ts=2.
    HiveTestUtils.createTextTableInserter(hiveCatalog, dbName, tblName)
            .addRow(new Object[] {0, "a0"})
            .addRow(new Object[] {1, "a0"})
            .commit("ts=0");
    HiveTestUtils.createTextTableInserter(hiveCatalog, dbName, tblName)
            .addRow(new Object[] {1, "a1"})
            .addRow(new Object[] {2, "a1"})
            .commit("ts=1");
    HiveTestUtils.createTextTableInserter(hiveCatalog, dbName, tblName)
            .addRow(new Object[] {1, "a2"})
            .addRow(new Object[] {2, "a2"})
            .commit("ts=2");

    // Only partition ts=2 satisfies the filter.
    System.out.println(
            tEnv.explainSql("select * from hive.source_db.stream_test where ts > 1"));
    TableResult result =
            tEnv.executeSql("select * from hive.source_db.stream_test where ts > 1");
    result.print();
}
{code}

{code:java}
+----+-----+----+-----+
| op |   a |  b |  ts |
+----+-----+----+-----+
| +I |   1 | a2 |   2 |
| +I |   2 | a2 |   2 |
| +I |   1 | a1 |   1 |
| +I |   2 | a1 |   1 |
{code}

{code:java}
== Abstract Syntax Tree ==
LogicalProject(a=[$0], b=[$1], ts=[$2])
+- LogicalFilter(condition=[>($2, 1)])
   +- LogicalTableScan(table=[[hive, source_db, stream_test]])

== Optimized Physical Plan ==
TableSourceScan(table=[[hive, source_db, stream_test, partitions=[{ts=2}]]], fields=[a, b, ts])

== Optimized Execution Plan ==
TableSourceScan(table=[[hive, source_db, stream_test, partitions=[{ts=2}]]], fields=[a, b, ts])
{code}

The PartitionPushDown rule computes the partitions to consume from the partitions that already exist at planning time. Once those partitions are pushed into the Hive source, the filter node is removed from the plan. But in streaming mode the Hive source does not use the pushed-down partition info and keeps consuming newly monitored partitions, so (as the output above shows) rows from partitions that do not satisfy the filter, such as ts=1, are still emitted. I think this causes problems.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Re: [DISCUSS] FLIP-223: Support HiveServer2 Endpoint

2022-06-05 Thread godfrey he
Hi Shengkai,

Thanks for driving this.

I have a few comments:

Could you give an overall architecture of the HiveServer2 ecosystem and the SqlGateway, covering the JDBC driver, Beeline, etc.? That would be clearer for users.

> Considering the different users may have different requirements to connect to 
> different meta stores,
> they can use the DDL to register the HiveCatalog that satisfies their 
> requirements.
 Could you give some examples to explain it more?

> How To Use
Could you give a complete example describing an end-to-end case?

Is streaming SQL supported? What's the behavior if I submit a streaming query or change the dialect to 'default'?

Best,
Godfrey

Shengkai Fang  wrote on Wed, Jun 1, 2022 at 21:13:
>
> Hi, Jingsong.
>
> Thanks for your feedback.
>
> > I've read the FLIP and it's not quite clear what the specific unsupported
> items are
>
> Yes. I have added a section named Difference with HiveServer2 and list the
> difference between the SQL Gateway with HiveServer2 endpoint and
> HiveServer2.
>
> > Support multiple metastore clients in one gateway?
>
> Yes. It may cause class conflicts when using the different versions of Hive
> Catalog at the same time. I add a section named "How to use" to remind the
> users don't use HiveCatalog with different versions together.
>
> >  Hive versions and setup
>
> Considering the HiveServer2 endpoint binds to the HiveCatalog, we will not
> introduce a new module about the HiveServer2 endpoint. The current
> dependencies in the hive connector should be enough for the HiveServer2
> Endpoint except for the hive-service-RPC(it contains the HiveServer2
> interface). In this way, the hive connector jar will contain an endpoint. I
> add a section named "Merge HiveServer2 Endpoint into Hive Connector
> Module".
>
> For usage, the user can just add the hive connector jar into the classpath
> and use the sql-gateway.sh to start the SQL Gateway with the hiveserver2
> endpoint.  You can refer to the section "How to use" for more details.
>
> Best,
> Shengkai
>
> Jingsong Li  wrote on Wed, Jun 1, 2022 at 15:04:
>
> > Hi Shengkai,
> >
> > Thanks for driving.
> >
> > I have a few comments:
> >
> > ## Unsupported features
> >
> > I've read the FLIP and it's not quite clear what the specific unsupported
> > items are?
> > - For example, security related, is it not supported.
> > - For example, is there a loss of precision for types
> > - For example, the FetchResults are not the same
> >
> > ## Support multiple metastore clients in one gateway?
> >
> > > During the setup, the HiveServer2 tires to load the config in the
> > hive-site.xml to initialize the Hive metastore client. In the Flink, we use
> > the Catalog interface to connect to the Hive Metastore, which is allowed to
> > communicate with different Hive Metastore[1]. Therefore, we allows the user
> > to specify the path of the hive-site.xml as the endpoint parameters, which
> > will used to create the default HiveCatalog in the Flink. Considering the
> > different users may have different requirements to connect to different
> > meta stores, they can use the DDL to register the HiveCatalog that
> > satisfies their requirements.
> >
> > I understand it is difficult. You really want to support?
> >
> > ## Hive versions and setup
> >
> > I saw jark also commented, but FLIP does not seem to have been modified,
> > how should the user setup, which jar to add, which hive metastore version
> > to support? How to setup to support?
> >
> > Best,
> > Jingsong
> >
> > On Tue, May 24, 2022 at 11:57 AM Shengkai Fang  wrote:
> >
> > > Hi, all.
> > >
> > > Considering we start to vote for FLIP-91 for a while, I think we can
> > > restart the discussion about the FLIP-223.
> > >
> > > I am glad that you can give some feedback about FLIP-223.
> > >
> > > Best,
> > > Shengkai
> > >
> > >
> > > Martijn Visser  wrote on Fri, May 6, 2022 at 19:10:
> > >
> > > > Hi Shengkai,
> > > >
> > > > Thanks for clarifying.
> > > >
> > > > Best regards,
> > > >
> > > > Martijn
> > > >
> > > > On Fri, 6 May 2022 at 08:40, Shengkai Fang  wrote:
> > > >
> > > > > Hi Martijn.
> > > > >
> > > > > > So this implementation would not rely in any way on Hive, only on
> > > > Thrift?
> > > > >
> > > > > Yes.  The dependency is light. We also can just copy the iface file
> > > from
> > > > > the Hive repo and maintain by ourselves.
> > > > >
> > > > > Best,
> > > > > Shengkai
> > > > >
> > > > > Martijn Visser  wrote on Wed, May 4, 2022 at 21:44:
> > > > >
> > > > > > Hi Shengkai,
> > > > > >
> > > > > > > Actually we will only rely on the API in the Hive, which only
> > > > contains
> > > > > > the thrift file and the generated code
> > > > > >
> > > > > > So this implementation would not rely in any way on Hive, only on
> > > > Thrift?
> > > > > >
> > > > > > Best regards,
> > > > > >
> > > > > > Martijn Visser
> > > > > > https://twitter.com/MartijnVisser82
> > > > > > https://github.com/MartijnVisser
> > > > > >
> > > > > >
> > > > > > On Fri, 29 Apr 2022 at 05:16, Shengkai Fang 
> > > wrote:
> > > > > 
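
As a concrete illustration of the catalog-registration point discussed above (this is not part of the FLIP; it is a sketch that assumes the Hive connector and Hive client jars are on the classpath, and /opt/hive-conf is a placeholder path):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;

public class RegisterHiveCatalogExample {

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Register a HiveCatalog through DDL; users who need different metastores can
        // register several catalogs, each pointing at its own hive-site.xml directory.
        tEnv.executeSql(
                "CREATE CATALOG my_hive WITH (\n"
                        + "  'type' = 'hive',\n"
                        + "  'hive-conf-dir' = '/opt/hive-conf'\n"
                        + ")");
        tEnv.executeSql("USE CATALOG my_hive");

        // Hive dialect for Hive-specific DDL and queries; switch back to DEFAULT for Flink SQL,
        // e.g. when submitting streaming queries.
        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tEnv.executeSql("SHOW TABLES").print();
        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
    }
}
{code}

Presumably the same DDL is what a SQL Gateway session (started with sql-gateway.sh and the HiveServer2 endpoint, as described in the thread) would run to point at whichever metastore it needs.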

[jira] [Created] (FLINK-27899) deactivate the shade plugin doesn't take effect

2022-06-05 Thread jackylau (Jira)
jackylau created FLINK-27899:


 Summary: deactivate the shade plugin doesn't take effect
 Key: FLINK-27899
 URL: https://issues.apache.org/jira/browse/FLINK-27899
 Project: Flink
  Issue Type: Improvement
  Components: Quickstarts
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0
 Attachments: image-2022-06-06-14-35-00-438.png

We need to specify the id:
{code:xml}
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-shade-plugin</artifactId>
   ...
</plugin>
{code}
logs here:

 

!image-2022-06-06-14-35-00-438.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27900) Decouple the advertisedAddress and rest.bind-address

2022-06-05 Thread Yu Wang (Jira)
Yu Wang created FLINK-27900:
---

 Summary: Decouple the advertisedAddress and rest.bind-address
 Key: FLINK-27900
 URL: https://issues.apache.org/jira/browse/FLINK-27900
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / REST
Affects Versions: 1.14.4, 1.13.6, 1.11.6, 1.12.0, 1.10.3
 Environment: Flink 1.13, 1.12, 1.11, 1.10

Deploy Flink in Kubernetes pod with a nginx sidecar for auth
Reporter: Yu Wang


Currently the Flink REST API does not have authentication, so, following the doc 
[https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/security/security-ssl/#external--rest-connectivity]:
 # We set up the Flink cluster in k8s
 # We set up an nginx sidecar to enable auth for the Flink REST API
 # We set *rest.bind-address* to localhost to hide the original Flink address and port
 # We enable SSL for the Flink REST API

It works fine when the client calls the Flink REST API with the *https* scheme.

But if the client uses the *http* scheme, the *RedirectingSslHandler* will try to redirect the request to the advertised URL. According to the code of {*}RestServerEndpoint{*}, Flink uses the value of *rest.bind-address* as the {*}advertisedAddress{*}, so the client is redirected to *127.0.0.1* and fails to connect.

So we hope the advertisedAddress can be decoupled from rest.bind-address, to provide more flexibility for Flink deployments.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)