Re: [DISCUSS] Release Flink 1.16.1

2022-12-21 Thread Martijn Visser
Hi Lincoln,

I'm +1 for also merging them back to 1.16.1. In the release notes we should
make it clear in which situations incompatibility issues will arise, so
that we inform the users correctly.

Best regards,

Martijn

On Wed, Dec 21, 2022 at 8:41 AM godfrey he  wrote:

> Hi Martijn,
>
> Thank you for bringing this up.
>
> Regarding the 3 commits Lincoln mentioned, +1 to pick them into 1.16.1.
> AFAIK, several users have encountered this kind of data correctness
> problem so far, and they are waiting for a fix release as soon as possible.
>
> Best,
> Godfrey
>
> ConradJam  于2022年12月20日周二 15:08写道:
>
> > Hi Martijn,
> >
> > After merging FLINK-30116, the Flink Web UI can't show the
> > Configuration. I checked the data returned by the back end and there is
> > no problem, but there is an error in the front end, as shown in the
> > pictures below. Can someone take a look before releasing
> > 1.16.1?
> >
> > [image: Pasted Graphic.png]
> >
> > [image: Pasted Graphic 1.png]
> >
> > Martijn Visser  于2022年12月16日周五 02:52写道:
> >
> >> Hi everyone,
> >>
> >> I would like to open a discussion about releasing Flink 1.16.1. We've
> >> released Flink 1.16 at the end of October, but we already have 58 fixes
> >> listed for 1.16.1, including a blocker [1] on the environment variables
> >> and
> >> a number of critical issues. Some of the critical issues are related to
> >> the
> >> bugs on the Sink API, on PyFlink and some correctness issues.
> >>
> >> There are also a number of open issues with a fixVersion set to 1.16.1,
> so
> >> it would be good to understand what the community thinks of starting a
> >> release or if there are some fixes that should be included with 1.16.1.
> >>
> >> Best regards,
> >>
> >> Martijn
> >>
> >> [1] https://issues.apache.org/jira/browse/FLINK-30116
> >>
> >
>


Re: [VOTE] Release flink-connector-pulsar, release candidate #5

2022-12-21 Thread Martijn Visser
Hi all,

I'm happy to announce that we have unanimously approved this release.

There are 5 approving votes, 3 of which are binding:
* Yufan Sheng
* Dawid (binding)
* Danny (binding)
* Ahmed Hamdy
* Martijn (binding)
There are no disapproving votes.

Thanks everyone!

Best regards,

Martijn

On Tue, Dec 20, 2022 at 1:03 PM Martijn Visser 
wrote:

> +1 (binding)
>
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven
> - Verified licenses
> - Verified web PR
>
> On Tue, Dec 20, 2022 at 10:11 AM Ahmed Hamdy  wrote:
>
>> Hello,
>> Thank you
>> +1 (non-binding)
>>
>> Verified signatures and checksums
>> Verified source artifact does not contain binaries
>> Verified tag
>> Verified NOTICE file
>> Built from source
>>
>> Best regards
>> Ahmed Hamdy
>>
>> On Mon, 19 Dec 2022 at 18:48, Danny Cranmer 
>> wrote:
>>
>>> Hello all,
>>>
>>> +1 (binding)
>>>
>>> - Verified signatures and checksums
>>> - Source artifact does not contain binaries
>>> - Reviewed web PR
>>> - Contents of Maven repo looks good
>>> - Tag is present in Github
>>> - Verified NOTICE file
>>> - Built source
>>>
>>> Thanks,
>>> Danny
>>>
>>> On Mon, Dec 19, 2022 at 1:57 PM Dawid Wysakowicz >> >
>>> wrote:
>>>
>>> > +1 (binding)
>>> >
>>> >- verified signatures and checksums
>>> >- built from sources
>>> >- checked notice files
>>> >- the PR looks good
>>> >- the artifacts to be published look fine
>>> >
>>> > Best,
>>> >
>>> > Dawid
>>> > On 15/12/2022 16:41, Martijn Visser wrote:
>>> >
>>> > Hi everyone,
>>> > Please review and vote on the release candidate #5 for the version
>>> 3.0.0,
>>> > as follows:
>>> > [ ] +1, Approve the release
>>> > [ ] -1, Do not approve the release (please provide specific comments)
>>> >
>>> >
>>> > The complete staging area is available for your review, which includes:
>>> > * JIRA release notes [1],
>>> > * the official Apache source release to be deployed to dist.apache.org
>>> [2],
>>> > which are signed with the key with fingerprint
>>> > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
>>> > * all artifacts to be deployed to the Maven Central Repository [4],
>>> > * source code tag v3.0.0-rc5 [5],
>>> > * website pull request listing the new release [6].
>>> >
>>> > The vote will be open for at least 72 hours. It is adopted by majority
>>> > approval, with at least 3 PMC affirmative votes.
>>> >
>>> > Thanks,
>>> > Release Manager
>>> >
>>> > [1]
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352588
>>> > [2]
>>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-pulsar-3.0.0-rc5
>>> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> > [4]
>>> https://repository.apache.org/content/repositories/orgapacheflink-1568/
>>> > [5] https://github.com/apache/flink-connector-
>>> > /releases/tag/v3.0.0-rc5
>>> > [6] https://github.com/apache/flink-web/pull/589
>>> >
>>> >
>>>
>>


Re: [VOTE] Release flink-connector-rabbitmq, release candidate #1

2022-12-21 Thread Martijn Visser
Hi all,

I'm happy to announce that we have unanimously approved this release.

There are 3 approving votes, 3 of which are binding:
* Danny (binding)
* Dawid (binding)
* Martijn (binding)
There are no disapproving votes.

Thanks everyone!

Best regards,

Martijn

On Tue, Dec 20, 2022 at 2:31 PM Dawid Wysakowicz 
wrote:

> +1 (binding)
>
>- Validated hashes
>- Verified signature
>- Verified that no binaries exist in the source archive
>- Built the source with Maven
>- Verified licenses
>- Verified web PR
>
> Best,
>
> Dawid
> On 20/12/2022 13:04, Martijn Visser wrote:
>
> +1 (binding)
>
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven
> - Verified licenses
> - Verified web PR
>
> On Mon, Dec 19, 2022 at 7:31 PM Danny Cranmer  
> 
> wrote:
>
>
> Hello Martijn,
>
> +1 (binding)
>
> - Verified signatures and hashes
> - Contents of Maven repository looks good
> - Tag exists in repository
> - Release notes look good
> - Reviewed web CR
> - Source archive does not contain any binaries
> - Built from source
> - Verified NOTICE file and licenses
>
> Thanks,
>
> On Tue, Dec 13, 2022 at 12:23 PM Martijn Visser  
> 
> wrote:
>
>
> Hi everyone,
> Please review and vote on the release candidate #1 for the version 3.0.0,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint
> A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.0.0-rc1 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1]
>
>
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352349
>
> [2]
>
>
>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-rabbitmq-3.0.0-rc1
>
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1563/
> [5]
>
>
> https://github.com/apache/flink-connector-rabbitmq/releases/tag/v3.0.0-rc1
>
> [6] https://github.com/apache/flink-web/pull/594
>
>


Re: [VOTE] Release flink-connector-jdbc, release candidate #2

2022-12-21 Thread Martijn Visser
Hi all,

I'm happy to announce that we have unanimously approved this release.

There are 4 approving votes, 3 of which are binding:
* Sergey Nuyanzin (non-binding)
* Dawid (binding)
* Martijn (binding)
* Danny (binding)

There are no disapproving votes.

Thanks everyone!

Best regards,

Martijn

On Tue, Dec 20, 2022 at 2:31 PM Danny Cranmer 
wrote:

> Hello,
>
> +1 (binding)
>
> - Release notes look good
> - Verified signatures and hashes
> - Verified there are no binaries in the source archive
> - NOTICE/Licenses look good
> - Built source
> - Tag exists in Github
> - Contents of Maven repo look good
> - Reviewed web PR
>
> Thanks,
>
> On Tue, Dec 20, 2022 at 12:04 PM Martijn Visser 
> wrote:
>
> > +1 (binding)
> >
> > - Validated hashes
> > - Verified signature
> > - Verified that no binaries exist in the source archive
> > - Build the source with Maven
> > - Verified licenses
> > - Verified web PR
> >
> > On Mon, Dec 19, 2022 at 3:53 PM Dawid Wysakowicz  >
> > wrote:
> >
> > > +1 (binding)
> > >
> > >- verified signatures and checksums
> > >- built from sources
> > >- the PR looks good
> > >- the artifacts to be published look fine
> > >- checked files diff to the 1.16.0 release tag
> > >
> > > Best,
> > >
> > > Dawid
> > >
> > > On 19/12/2022 12:17, Sergey Nuyanzin wrote:
> > >
> > > +1 (non-binding)
> > >
> > > - Validated hashes and signature
> > > - Verified that no binaries exist in the source archive
> > > - Build from sources with Maven
> > > - Verified licenses
> > >
> > > one nitpick: it seems a link to the tag is broken (number 5)
> > > I guess it should be this one:
> > > https://github.com/apache/flink-connector-jdbc/releases/tag/v3.0.0-rc2
> > >
> > > Thank you Martijn
> > >
> > > On Wed, Dec 14, 2022 at 2:16 PM Martijn Visser <
> martijnvis...@apache.org>
> > 
> > > wrote:
> > >
> > >
> > > Hi everyone,
> > > Please review and vote on the release candidate #2 for the version
> 3.0.0,
> > > as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to dist.apache.org
> > > [2],
> > > which are signed with the key with fingerprint
> > > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v3.0.0-rc2 [5],
> > > * website pull request listing the new release [6].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Release Manager
> > >
> > > [1]
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352590
> > > [2]
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.0.0-rc2
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1565/
> > > [5] https://github.com/apache/flink-connector-
> > > /releases/tag/v3.0.0-rc2
> > > [6] https://github.com/apache/flink-web/pull/590
> > >
> > >
> >
>


Re: [VOTE] FLIP-275: Support Remote SQL Client Based on SQL Gateway

2022-12-21 Thread Jark Wu
+1 (binding)

Best,
Jark

On Wed, 21 Dec 2022 at 16:00, godfrey he  wrote:

> +1 (binding)
>
> Best,
> Godfrey
>
> Hang Ruan  于2022年12月21日周三 15:21写道:
> >
> > +1 (non-binding)
> >
> > Best,
> > Hang
> >
> > Paul Lam  于2022年12月20日周二 17:36写道:
> >
> > > +1 (non-binding)
> > >
> > > Best,
> > > Paul Lam
> > >
> > > > 2022年12月20日 11:35,Shengkai Fang  写道:
> > > >
> > > > +1(binding)
> > > >
> > > > Best,
> > > > Shengkai
> > > >
> > > > yu zelin  于2022年12月14日周三 20:41写道:
> > > >
> > > >> Hi, all,
> > > >>
> > > >> Thanks for all your feedback so far. Through the discussion on this
> > > >> thread[1], I think we have come to a consensus, so I'd like to
> > > >> start a vote on FLIP-275[2].
> > > >>
> > > >> The vote will last for at least 72 hours (Dec 19th, 13:00 GMT,
> excluding
> > > >> weekend days) unless there is an objection or insufficient vote.
> > > >>
> > > >> Best,
> > > >> Yu Zelin
> > > >>
> > > >> [1]
> https://lists.apache.org/thread/zpx64l0z91b0sz0scv77h0g13ptj4xxo
> > > >> [2] https://cwiki.apache.org/confluence/x/T48ODg
> > >
> > >
>


[jira] [Created] (FLINK-30471) Optimize the enriching network memory process in SsgNetworkMemoryCalculationUtils

2022-12-21 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-30471:
-

 Summary: Optimize the enriching network memory process in 
SsgNetworkMemoryCalculationUtils
 Key: FLINK-30471
 URL: https://issues.apache.org/jira/browse/FLINK-30471
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Affects Versions: 1.17.0
 Environment: In SsgNetworkMemoryCalculationUtils#enrichNetworkMemory, 
getting the partition types runs in a separate loop, which is bad for 
performance. If we also want the input partition types, yet another separate 
loop would have to be introduced.

Separate loops look simpler in code, but they hurt performance. We can gather 
all the results in a single loop instead of multiple loops, which is faster.
Reporter: Yuxin Tan
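The single-loop idea can be sketched as follows. All names here are hypothetical stand-ins for illustration, not Flink's actual types; the point is only that one traversal can fill several result collections at once:

```java
import java.util.ArrayList;
import java.util.List;

public class SingleLoopEnrichment {

    // Hypothetical stand-in for a job edge; not Flink's actual type.
    public record Edge(String partitionType, String inputPartitionType) {}

    public record Result(List<String> partitionTypes, List<String> inputPartitionTypes) {}

    // One traversal fills both result lists, instead of one loop per list.
    public static Result collect(List<Edge> edges) {
        List<String> partitionTypes = new ArrayList<>();
        List<String> inputPartitionTypes = new ArrayList<>();
        for (Edge e : edges) {
            partitionTypes.add(e.partitionType());
            inputPartitionTypes.add(e.inputPartitionType());
        }
        return new Result(partitionTypes, inputPartitionTypes);
    }
}
```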






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30472) Modify the default value of the max network memory config option

2022-12-21 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-30472:
-

 Summary: Modify the default value of the max network memory config 
option
 Key: FLINK-30472
 URL: https://issues.apache.org/jira/browse/FLINK-30472
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Affects Versions: 1.17.0
Reporter: Yuxin Tan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30473) Optimize the InputGate network memory management for TaskManager

2022-12-21 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-30473:
-

 Summary: Optimize the InputGate network memory management for 
TaskManager
 Key: FLINK-30473
 URL: https://issues.apache.org/jira/browse/FLINK-30473
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network
Affects Versions: 1.17.0
Reporter: Yuxin Tan


Based on 
[FLIP-266|https://cwiki.apache.org/confluence/display/FLINK/FLIP-266%3A+Simplify+network+memory+configurations+for+TaskManager],
 this ticket mainly focuses on the first issue.

This change proposes a method to cap the maximum number of required memory 
buffers in an InputGate according to the parallelism.
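As a rough sketch of the direction (the names and the formula below are assumptions for illustration, not the actual FLIP-266 design): an input gate's required buffers, which otherwise grow linearly with the number of channels, would be bounded by a configurable maximum:

```java
public class GateBufferBudget {

    // Hypothetical calculation: cap the per-gate buffer requirement so that
    // high-parallelism jobs do not demand unbounded network memory.
    public static int requiredBuffers(int numChannels, int buffersPerChannel,
                                      int floatingBuffers, int maxRequiredPerGate) {
        int uncapped = numChannels * buffersPerChannel + floatingBuffers;
        return Math.min(uncapped, maxRequiredPerGate);
    }
}
```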



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[Discuss] SQL Client syntax to show a configuration

2022-12-21 Thread Mingliang Liu
Hi all,

Currently in the SQL Client we can use the "SET 'key'='value'" command to set
a value for a config property key for the session. We can also list all config
properties by just calling "SET". It would be convenient to show the value
of a specific config property given its key(s). Without this, users need to
eyeball the one config among all config properties, which may require
scrolling the screen.

I did not find a standard SQL syntax for this.
- Some use the syntax "SET 'key'" to show the value, e.g. Spark SQL and Hive.
- Some use the "SHOW" keyword to show session properties, e.g. CockroachDB.
Trino and MySQL support a key pattern in the "SHOW SESSION" statement.
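Concretely, the candidate styles side by side might look like this (illustrative only; the exact key and syntax Flink would adopt are what is being discussed):

```sql
-- Existing behaviour:
SET 'table.exec.state.ttl' = '1h';  -- set a session option
SET;                                -- list all session options

-- Option A ("SET 'key'", as in Spark SQL / Hive):
SET 'table.exec.state.ttl';         -- show only this option

-- Option B ("SHOW", as in CockroachDB / Trino / MySQL):
SHOW 'table.exec.state.ttl';
```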

I filed FLINK-30459 to track this. I also attached initial PR #21535 that
uses the "SET 'key'" syntax. I chose that because I previously used this
syntax. It seems simpler to just use one keyword to interact with
session configs. When using Hive dialect, Flink SQL also supports that.
However, it's not straightforward enough and one probably needs to check
the doc first.

As Martijn suggested, before improving the code, it's better to get it
discussed here first. So, is it a good idea to add this support? Which
syntax is preferred by our community, and are there other better ways of
supporting this?

Thanks,


[jira] [Created] (FLINK-30474) DefaultMultipleComponentLeaderElectionService triggers HA backend change even if it's not the leader

2022-12-21 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30474:
-

 Summary: DefaultMultipleComponentLeaderElectionService triggers HA 
backend change even if it's not the leader
 Key: FLINK-30474
 URL: https://issues.apache.org/jira/browse/FLINK-30474
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.15.3, 1.16.0, 1.17.0
Reporter: Matthias Pohl


{{DefaultMultipleComponentLeaderElectionService}} calls 
{{LeaderElectionEventHandler#onLeaderInformationChange}} in any case, even 
though the contract of that method states that it should only be called by the 
leader to update the HA backend information (see 
[JavaDoc|https://github.com/apache/flink/blob/5a2f220e31c50306a60aae8281f0ab4073fb85e1/flink-runtime/src/main/java/org/apache/flink/runtime/leaderelection/LeaderElectionEventHandler.java#L46-L50]).
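A minimal sketch of the guard the contract implies (hypothetical interfaces and names, not Flink's classes): the HA backend must only be written to while this contender actually holds leadership.

```java
public class LeaderElectionSketch {

    // Hypothetical stand-in for the HA backend (e.g. ZooKeeper/Kubernetes).
    public interface HaBackend {
        void write(String leaderInfo);
    }

    private final HaBackend backend;
    private volatile boolean hasLeadership;

    public LeaderElectionSketch(HaBackend backend) {
        this.backend = backend;
    }

    public void grantLeadership() { hasLeadership = true; }

    public void revokeLeadership() { hasLeadership = false; }

    // The point of the contract: guard the backend update instead of
    // calling it unconditionally.
    public void onLeaderInformationChange(String leaderInfo) {
        if (hasLeadership) {
            backend.write(leaderInfo);
        }
    }
}
```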



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [Discuss] SQL Client syntax to show a configuration

2022-12-21 Thread Martijn Visser
Hi Mingliang,

Thanks for opening a discussion thread on this topic, much appreciated. If
there is no standard SQL for this, then we should have a discussion on this
topic indeed.

I'm not a fan of using "SET" to show a value. For me, "SET" implies that
you are setting a value. I prefer "SHOW" since that clearly identifies that
you want to display a value. Curious what others think on this topic.

Best regards,

Martijn

On Wed, Dec 21, 2022 at 9:48 AM Mingliang Liu  wrote:

> Hi all,
>
> Currently in SQL Client we can use the "SET 'key'='value'" command to set a
> value to a config property key for the session. We can also list all config
> properties by just calling "SET". It would be convenient to show the value
> of a specific config property given its key(s). Without this, users will
> need to eyeball the very one config from all config properties which may
> need scrolling the screen.
>
> I do not find a standard SQL syntax for this.
> - Some use the syntax "SET 'key'" to show the value, for e.g. Spark SQL and
> Hive.
> - Some use the "SHOW" keyword to show session properties, for
> e.g. CockroachDB. Trino and MySqL support a key pattern in the "SHOW
> SESSION" statement.
>
> I filed FLINK-30459 to track this. I also attached initial PR #21535 that
> uses the "SET 'key'" syntax. I chose that because I previously used this
> syntax. It seems simpler to just use one keyword to interact with
> session configs. When using Hive dialect, Flink SQL also supports that.
> However, it's not straightforward enough and one probably needs to check
> the doc first.
>
> As Martijn suggested, before improving the code, it's better to get it
> discussed here first. So, is it a good idea to add this support? Which
> syntax is preferred by our community, and are there other better ways of
> supporting this?
>
> Thanks,
>


[jira] [Created] (FLINK-30475) Improved speed of RocksDBMapState clear() using rocksDB.deleteRange

2022-12-21 Thread David Hrbacek (Jira)
David Hrbacek created FLINK-30475:
-

 Summary: Improved speed of RocksDBMapState clear() using 
rocksDB.deleteRange
 Key: FLINK-30475
 URL: https://issues.apache.org/jira/browse/FLINK-30475
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / State Backends
Affects Versions: 1.16.0
Reporter: David Hrbacek


Currently {{RocksDBMapState#clear()}} is processed by traversing the key range 
and inserting the individual keys into a WriteBatch for deletion.

RocksDB offers a much faster way to delete a key range: {{deleteRange}}.

This issue is a follow-up to 
[FLINK-9070|https://issues.apache.org/jira/browse/FLINK-9070], where 
{{deleteRange}} was also considered. At that time it implied slower reads, was 
buggy, and was not even available in the Java API of RocksDB. All of these 
problems have since been solved (see also the RocksDB [blog article for 
deleteRange|https://rocksdb.org/blog/2018/11/21/delete-range.html]).

{{deleteRange}} makes it possible to clear a {{RocksDBMapState}} for one key in 
constant time, whereas the old solution requires O(n).
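A sketch of the idea (assumed names, not Flink's actual code): all entries of a map state for one key share a serialized key prefix, so clear() could issue a single deleteRange over that prefix instead of deleting entry by entry. The only non-trivial part is computing the exclusive upper bound of the prefix range:

```java
import java.util.Arrays;

public class PrefixRange {

    // Smallest byte array strictly greater than every key starting with
    // `prefix`: increment the last byte that is not 0xFF and truncate
    // everything after it.
    public static byte[] upperBound(byte[] prefix) {
        byte[] bound = Arrays.copyOf(prefix, prefix.length);
        for (int i = bound.length - 1; i >= 0; i--) {
            if (bound[i] != (byte) 0xFF) {
                bound[i]++;
                return Arrays.copyOf(bound, i + 1);
            }
        }
        // Prefix is all 0xFF bytes: no finite upper bound exists, so a
        // real implementation would fall back to iterating deletes.
        return null;
    }

    // With RocksJava this would then be roughly (not runnable here, shown
    // as a comment since it needs an open RocksDB instance):
    //   db.deleteRange(columnFamily, prefix, upperBound(prefix));
}
```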



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [Discuss] SQL Client syntax to show a configuration

2022-12-21 Thread yuxia
Hi, Mingliang.
Thanks for bringing this up. +1 to add this support.
I think using the syntax "SET 'key'" to show a single variable is weird, but it
may be more convenient and more unified, since we already use "SET" to show all
variables.
Considering that many Hive/Spark users may be used to it, "SET 'key'" seems
acceptable.
But I don't have a strong preference; something else like a "SHOW" statement
also sounds good to me.

Best regards,
Yuxia

----- Original Message -----
From: "Martijn Visser" 
To: "dev" 
Sent: Wednesday, December 21, 2022, 5:44:15 PM
Subject: Re: [Discuss] SQL Client syntax to show a configuration

Hi Mingliang,

Thanks for opening a discussion thread on this topic, much appreciated. If
there is no standard SQL for this, then we should have a discussion on this
topic indeed.

I'm not a fan of using "SET" to show a value. For me, "SET" implies that
you are setting a value. I prefer "SHOW" since that clearly identifies that
you want to display a value. Curious what others think on this topic.

Best regards,

Martijn

On Wed, Dec 21, 2022 at 9:48 AM Mingliang Liu  wrote:

> Hi all,
>
> Currently in SQL Client we can use the "SET 'key'='value'" command to set a
> value to a config property key for the session. We can also list all config
> properties by just calling "SET". It would be convenient to show the value
> of a specific config property given its key(s). Without this, users will
> need to eyeball the very one config from all config properties which may
> need scrolling the screen.
>
> I do not find a standard SQL syntax for this.
> - Some use the syntax "SET 'key'" to show the value, for e.g. Spark SQL and
> Hive.
> - Some use the "SHOW" keyword to show session properties, for
> e.g. CockroachDB. Trino and MySqL support a key pattern in the "SHOW
> SESSION" statement.
>
> I filed FLINK-30459 to track this. I also attached initial PR #21535 that
> uses the "SET 'key'" syntax. I chose that because I previously used this
> syntax. It seems simpler to just use one keyword to interact with
> session configs. When using Hive dialect, Flink SQL also supports that.
> However, it's not straightforward enough and one probably needs to check
> the doc first.
>
> As Martijn suggested, before improving the code, it's better to get it
> discussed here first. So, is it a good idea to add this support? Which
> syntax is preferred by our community, and are there other better ways of
> supporting this?
>
> Thanks,
>


[RESULT][VOTE] FLIP-275: Support Remote SQL Client Based on SQL Gateway

2022-12-21 Thread yu zelin
Hi, all,

FLIP-275: Support Remote SQL Client Based on SQL Gateway[1] has been accepted.

There are 3 binding and 2 non-binding votes, as follows:

ShengKai Fang (binding),
Paul Lam (non-binding),
Hang Ruan (non-binding)
Godfrey He (binding)
Jark Wu (binding)

There are no votes against it.

Best,
Yu Zelin

[1] https://cwiki.apache.org/confluence/x/T48ODg

[jira] [Created] (FLINK-30476) TrackingFsDataInputStream batch tracking issue

2022-12-21 Thread Denis (Jira)
Denis created FLINK-30476:
-

 Summary: TrackingFsDataInputStream batch tracking issue
 Key: FLINK-30476
 URL: https://issues.apache.org/jira/browse/FLINK-30476
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.15.3, 1.15.2, 1.15.1
Reporter: Denis


{{org.apache.flink.connector.file.src.impl.StreamFormatAdapter.TrackingFsDataInputStream}}
 wraps underlying InputStream to count bytes consumed.
{{org.apache.flink.connector.file.src.impl.StreamFormatAdapter.Reader}} relies 
on this to create batches of data.
{code:java}
while (stream.hasRemainingInBatch() && (next = reader.read()) != null) {
    result.add(next);
}
{code}
{{org.apache.flink.connector.file.src.impl.StreamFormatAdapter.TrackingFsDataInputStream#read(byte[],
 int, int)}} contains a bug that can lead to arbitrarily sized batches due to 
underflow of the counter ({{remainingInBatch}}).
{code:java}
public int read(byte[] b, int off, int len) throws IOException {
    remainingInBatch -= len;
    return stream.read(b, off, len);
}
{code}
Every time we perform a {{stream.read()}} it may return less than {{len}} 
according to the javadoc.
{code:java}
Params:
b – the buffer into which the data is read.
off – the start offset in array b at which the data is written.
len – the maximum number of bytes to read.
Returns:
the total number of bytes read into the buffer, or -1 if there is no more data
because the end of the stream has been reached.
{code}
But the current implementation only accounts for the number of bytes that were 
requested ({{len}}).

E.g. the S3 Hadoop FS can return fewer than {{len}} bytes from 
{{stream.read(b, off, len)}}. This is expected, and readers are aware of it 
(see 
{{org.apache.parquet.io.DelegatingSeekableInputStream#readFully(java.io.InputStream,
 byte[], int, int)}}).

As a result, reading a Parquet file may cause an underflow in 
{{TrackingFsDataInputStream#read(byte[], int, int)}}, because the Parquet 
reader tries to read a whole (large) row group and may execute {{read()}} 
multiple times. The underflow leads to an unlimited batch size, which may lead 
to an OOM.
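A minimal sketch of the fix described above (field and method names follow the description, but this is not Flink's actual class): decrement the counter by the bytes actually read, never by the bytes requested, so a short read cannot underflow {{remainingInBatch}}.

```java
import java.io.IOException;
import java.io.InputStream;

public class TrackingStreamSketch {

    private final InputStream stream;
    private long remainingInBatch;

    public TrackingStreamSketch(InputStream stream, long batchSize) {
        this.stream = stream;
        this.remainingInBatch = batchSize;
    }

    public int read(byte[] b, int off, int len) throws IOException {
        int bytesRead = stream.read(b, off, len);
        if (bytesRead > 0) {
            // Fixed accounting: was `remainingInBatch -= len;`, which
            // over-counts whenever the stream returns a short read.
            remainingInBatch -= bytesRead;
        }
        return bytesRead;
    }

    public boolean hasRemainingInBatch() {
        return remainingInBatch > 0;
    }
}
```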



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30477) Not properly blocking retries when timeout occurs in AsyncWaitOperator

2022-12-21 Thread lincoln lee (Jira)
lincoln lee created FLINK-30477:
---

 Summary: Not properly blocking retries when timeout occurs in 
AsyncWaitOperator
 Key: FLINK-30477
 URL: https://issues.apache.org/jira/browse/FLINK-30477
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.16.0
Reporter: lincoln lee


As a user reported on the mailing list 
(https://lists.apache.org/thread/n1rqml8h9j8zkhxwc48rdvj7jrw2rjcy), there is an 
issue in AsyncWaitOperator: it does not properly block retries when a timeout 
occurs.

This happens when a retry timer has not yet fired and the user function's 
timeout is triggered first: the current RetryableResultHandlerDelegator does 
not handle the timeout properly and will cause additional unexpected retries.
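The missing guard can be sketched like this (hypothetical names, not the actual AsyncWaitOperator code): once the element's timeout has fired, the handler is marked terminated, and any retry timer that fires afterwards must observe that flag and do nothing.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class RetryGuard {

    private final AtomicBoolean terminated = new AtomicBoolean(false);
    public int retries = 0;

    // The element failed definitively; block any further retries.
    public void onTimeout() {
        terminated.set(true);
    }

    // A previously scheduled retry timer fires: it must be a no-op
    // if the timeout already handled this element.
    public void onRetryTimer() {
        if (terminated.get()) {
            return;
        }
        retries++;
    }
}
```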



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Release Flink 1.16.1

2022-12-21 Thread Lincoln Lee
Hi Martijn,

Agree that we need to detail the specific case in the release notes. I will
update the 'Release Note' related content of the three issues before the
release.
Also update the progress of the last issue FLINK-29849, which has been
merged to master.

If no one objects, FLINK-28988 and FLINK-29849 need to be tagged with
1.16.1 and picked to the 1.16 branch (I don't have permission to do this
yet, would you like to help with this?)

Best,
Lincoln Lee


Martijn Visser  于2022年12月21日周三 16:00写道:

> Hi Lincoln,
>
> I'm +1 for also merging them back to 1.16.1. In the release notes we should
> make it clear in which situations incompatibility issues will arise, so
> that we inform the users correctly.
>
> Best regards,
>
> Martijn
>
> On Wed, Dec 21, 2022 at 8:41 AM godfrey he  wrote:
>
> > Hi Martijn,
> >
> > Thank you for bringing this up.
> >
> > Regarding the 3 commits Lincoln mentioned, +1 to pick them into 1.16.1.
> > AFAIK, several users have encountered this kind of data correctness
> > problem so far, and they are waiting for a fix release as soon as
> > possible.
> >
> > Best,
> > Godfrey
> >
> > ConradJam  于2022年12月20日周二 15:08写道:
> >
> > > Hi Martijn,
> > >
> > > After merging FLINK-30116, the Flink Web UI can't show the
> > > Configuration. I checked the data returned by the back end and there
> > > is no problem, but there is an error in the front end, as shown in the
> > > pictures below. Can someone take a look before releasing
> > > 1.16.1?
> > >
> > > [image: Pasted Graphic.png]
> > >
> > > [image: Pasted Graphic 1.png]
> > >
> > > Martijn Visser  于2022年12月16日周五 02:52写道:
> > >
> > >> Hi everyone,
> > >>
> > >> I would like to open a discussion about releasing Flink 1.16.1. We've
> > >> released Flink 1.16 at the end of October, but we already have 58
> fixes
> > >> listed for 1.16.1, including a blocker [1] on the environment
> variables
> > >> and
> > >> a number of critical issues. Some of the critical issues are related
> to
> > >> the
> > >> bugs on the Sink API, on PyFlink and some correctness issues.
> > >>
> > >> There are also a number of open issues with a fixVersion set to
> 1.16.1,
> > so
> > >> it would be good to understand what the community thinks of starting a
> > >> release or if there are some fixes that should be included with
> 1.16.1.
> > >>
> > >> Best regards,
> > >>
> > >> Martijn
> > >>
> > >> [1] https://issues.apache.org/jira/browse/FLINK-30116
> > >>
> > >
> >
>


Re: [VOTE] Release flink-connector-opensearch, release candidate #1

2022-12-21 Thread Danny Cranmer
Hi everyone,

This vote is now closed, I will announce the results in a separate email.

Thanks all.

On Wed, Dec 21, 2022 at 3:51 AM Thomas Weise  wrote:

> +1 (binding)
>
> * Checked hash and signature
> * Build from source and run tests
> * Checked licenses
>
>
>
> On Mon, Dec 19, 2022 at 1:50 PM Maximilian Michels  wrote:
>
> > +1 (binding)
> >
> > Release looks good.
> >
> > 1. Downloaded the source archive release staged at
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-1.0.0-rc1/
> > 2. Verified the signature
> > 3. Inspect extracted source code for binaries
> > 4. Compiled the source code
> > 5. Verified license files / headers
> >
> > -Max
> >
> > On Mon, Dec 19, 2022 at 11:16 AM Sergey Nuyanzin 
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > - Validated hashes and signature
> > > - Verified that no binaries exist in the source archive
> > > - Build from sources with Maven
> > > - Verified licenses
> > >
> > > On Sat, Dec 17, 2022 at 8:34 AM Martijn Visser <
> martijnvis...@apache.org
> > >
> > > wrote:
> > >
> > > > Hi Danny,
> > > >
> > > > +1 (binding)
> > > >
> > > > - Validated hashes
> > > > - Verified signature
> > > > - Verified that no binaries exist in the source archive
> > > > - Build the source with Maven
> > > > - Verified licenses
> > > > - Verified web PRs
> > > >
> > > > Thanks for the help!
> > > >
> > > > Best regards, Martijn
> > > >
> > > > On Fri, Dec 16, 2022 at 5:37 PM Danny Cranmer <
> dannycran...@apache.org
> > >
> > > > wrote:
> > > >
> > > > > Apologies I messed up the link to "the official Apache source
> release
> > > to
> > > > be
> > > > > deployed to dist.apache.org" [1]
> > > > >
> > > > > Thanks,
> > > > > Danny
> > > > >
> > > > > [1]
> > > > >
> > > > >
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-1.0.0-rc1
> > > > >
> > > > > On Fri, Dec 16, 2022 at 2:04 PM Ahmed Hamdy 
> > > > wrote:
> > > > >
> > > > > > Thank you Danny,
> > > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > * Hashes and Signatures look good
> > > > > > * Tag is present in Github
> > > > > > * Verified source archive does not contain any binary files
> > > > > > * Source archive builds using maven
> > > > > > * Verified Notice and Licence files
> > > > > >
> > > > > > On Fri, 16 Dec 2022 at 12:41, Danny Cranmer <
> > dannycran...@apache.org
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi everyone,
> > > > > > > Please review and vote on the release candidate #1 for the
> > version
> > > > > 1.0.0,
> > > > > > > as follows:
> > > > > > > [ ] +1, Approve the release
> > > > > > > [ ] -1, Do not approve the release (please provide specific
> > > comments)
> > > > > > >
> > > > > > >
> > > > > > > The complete staging area is available for your review, which
> > > > includes:
> > > > > > > * JIRA release notes [1],
> > > > > > > * the official Apache source release to be deployed to
> > > > dist.apache.org
> > > > > > > [2],
> > > > > > > which are signed with the key with fingerprint 125FD8DB [3],
> > > > > > > * all artifacts to be deployed to the Maven Central Repository
> > [4],
> > > > > > > * source code tag v1.0.0-rc1 [5],
> > > > > > > * website pull request listing the new release [6].
> > > > > > > * pull request to integrate opensearch docs into the Flink docs
> > > [7].
> > > > > > >
> > > > > > > The vote will be open for at least 72 hours excluding weekends
> > > > > (Wednesday
> > > > > > > 21st December 13:00 UTC). It is adopted by majority approval,
> > with
> > > at
> > > > > > least
> > > > > > > 3 PMC affirmative votes.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Danny
> > > > > > >
> > > > > > > [1]
> > > https://issues.apache.org/jira/projects/FLINK/versions/12352293
> > > > > > > [2]
> > https://dist.apache.org/repos/dist/dev/flink/flink-connector-
> > > > > > > -${NEW_VERSION}-rc${RC_NUM}
> > > > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > > > > [4]
> > > > > >
> > > https://repository.apache.org/content/repositories/orgapacheflink-1569
> > > > > > > [5]
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/flink-connector-opensearch/releases/tag/v1.0.0-rc1
> > > > > > > [6] https://github.com/apache/flink-web/pull/596
> > > > > > > [7] https://github.com/apache/flink/pull/21518
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > Best regards,
> > > Sergey
> > >
> >
>


[RESULT] [VOTE] flink-connector-opensearch 1.0.0, release candidate #1

2022-12-21 Thread Danny Cranmer
I'm happy to announce that we have unanimously approved this release.

There are 7 approving votes, 3 of which are binding:
* Andrey
* Ahmed
* Martijn (binding)
* Sergey
* Maximilian (binding)
* Thomas (binding)
* Julian

There are no disapproving votes.

Thanks everyone!

-- 


Best Regards
Danny Cranmer


[jira] [Created] (FLINK-30478) Don't depend on IPAddressUtil

2022-12-21 Thread Gunnar Morling (Jira)
Gunnar Morling created FLINK-30478:
--

 Summary: Don't depend on IPAddressUtil
 Key: FLINK-30478
 URL: https://issues.apache.org/jira/browse/FLINK-30478
 Project: Flink
  Issue Type: Sub-task
  Components: API / Core
Reporter: Gunnar Morling


The class {{org.apache.flink.util.NetUtils}} uses the JDK-internal class 
{{sun.net.util.IPAddressUtil}}. On current JDKs (16+), this causes issues, as 
access to this class is prevented by default and would require an additional 
{{--add-opens}} clause. That's undesirable, in particular in cases where we 
don't control the JVM start-up arguments, e.g. when using Flink embedded in a 
custom Java application.

I suggest replacing this logic with the 
[IPAddress|https://github.com/seancfoley/IPAddress/] library (Apache License 
v2), which implements everything we need without relying on internal classes. I 
have a patch for that ready and will submit it for discussion.
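For context, the workaround mentioned above would look roughly like this on JDK 16+ (a sketch only; the application JAR name is a placeholder):

```sh
# Hypothetical invocation: opening the JDK-internal package sun.net.util to
# unnamed modules so code referring to IPAddressUtil keeps working.
# This is exactly the kind of start-up argument we cannot always control.
java --add-opens java.base/sun.net.util=ALL-UNNAMED -jar my-flink-app.jar
```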



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Question about match_recognize clause in Flink

2022-12-21 Thread Marjan Jordanovski
Hello,

I am using a custom-made connector to create a Source table in this way:

create table Source (
ts TIMESTAMP(3),
instance STRING,
service STRING,
logdatetime STRING,
threadid STRING,
level STRING,
log_line STRING
) with (
'connector'='lokiquery',
'host'='',
'lokiqueryparamsstring'='query={instance="test",
service="test"}&limit=5000&start=2022-12-15T16:40:09.560Z&end=2022-12-15T16:58:09.570Z'
);

Into this table I successfully load data from the specified time range from
Loki. The data comes as a batch (not a stream).

Then I want to run a query that looks for patterns in the
log_line column of the Source table. I am doing the following:

SELECT *
FROM Source
MATCH_RECOGNIZE (
ORDER BY ts
MEASURES
START_ROW.ts AS start_ts,
END_ROW.ts AS end_ts
ONE ROW PER MATCH
AFTER MATCH SKIP TO LAST END_ROW
PATTERN (START_ROW{1} UNK_ROW+? MID_ROW{2} END_ROW{1})
DEFINE
START_ROW AS START_ROW.log_line SIMILAR TO
'%componentId:.{2}GridInstance_grtm_gridtemplate_headache_view_null%',
MID_ROW AS MID_ROW.log_line SIMILAR TO '%DSResponse -
DSResponse: List with%',
END_ROW AS END_ROW.log_line SIMILAR TO '%ContentRepository%'
) MR;

When using Python's PyFlink, this works just fine!
But when I try the same thing in the Flink SQL CLI, I get a strange error
when executing the second statement:

[ERROR] Could not execute SQL statement. Reason:
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not
enough rules to produce a node with desired properties: convention=LOGICAL,
FlinkRelDistributionTraitDef=any, sort=[].
Missing conversion is LogicalMatch[convention: NONE -> LOGICAL]
There is 1 empty subset: rel#175:RelSubset#1.LOGICAL.any.[], the relevant
part of the original plan is as follows
167:LogicalMatch(partition=[[]], order=[[0 ASC-nulls-first]],
outputFields=[[start_ts, end_ts]], allRows=[false], after=[SKIP TO
LAST(_UTF-16LE'END_ROW')],
pattern=[(((PATTERN_QUANTIFIER(_UTF-16LE'START_ROW', 1, 1, false),
PATTERN_QUANTIFIER(_UTF-16LE'UNK_ROW', 1, -1, true)),
PATTERN_QUANTIFIER(_UTF-16LE'MID_ROW', 2, 2, false)),
PATTERN_QUANTIFIER(_UTF-16LE'END_ROW', 1, 1, false))],
isStrictStarts=[false], isStrictEnds=[false], subsets=[[]],
patternDefinitions=[[SIMILAR TO(PREV(START_ROW.$6, 0),
_UTF-16LE'%componentId:.{2}GridInstance_grtm_gridtemplate_headache_view_null%'),
SIMILAR TO(PREV(MID_ROW.$6, 0), _UTF-16LE'%DSResponse - DSResponse: List
with%'), SIMILAR TO(PREV(END_ROW.$6, 0), _UTF-16LE'%ContentRepository%')]],
inputFields=[[ts, instance, service, logdatetime, threadid, level,
log_line]])
  1:LogicalTableScan(subset=[rel#166:RelSubset#0.NONE.any.[]],
table=[[default_catalog, default_database, Source]])

In Python, where this works, these are the only configs I use for the table
environment (of course I also include the JAR for my custom connector):
env_settings = EnvironmentSettings.in_batch_mode()
t_env = TableEnvironment.create(env_settings)
t_env.get_config().get_configuration().set_string("parallelism.default",
"1")

Therefore I set the corresponding values in the Flink SQL CLI:
SET 'execution.runtime-mode' = 'batch';
SET 'parallelism.default' = '1';

But it didn't help. Does anyone have any idea what could be causing this
issue?

Thank you,
Marjan


[ANNOUNCE] Apache flink-connector-pulsar 3.0.0 released

2022-12-21 Thread Martijn Visser
The Apache Flink community is very happy to announce the release of Apache
flink-connector-pulsar 3.0.0

This release marks the first time we have released this connector
separately from the main Flink release.
Over time more connectors will be migrated to this release model.

This release is equivalent to the connector version released alongside
Flink 1.16.0 and acts as a drop-in replacement.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352588

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Best regards,

Martijn


[ANNOUNCE] Apache flink-connector-jdbc 3.0.0 released

2022-12-21 Thread Martijn Visser
The Apache Flink community is very happy to announce the release of Apache
flink-connector-jdbc 3.0.0

This release marks the first time we have released this connector
separately from the main Flink release.
Over time more connectors will be migrated to this release model.

This release is equivalent to the connector version released alongside
Flink 1.16.0 and acts as a drop-in replacement.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352590

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Best regards,

Martijn


[ANNOUNCE] Apache flink-connector-rabbitmq 3.0.0 released

2022-12-21 Thread Martijn Visser
The Apache Flink community is very happy to announce the release of Apache
flink-connector-rabbitmq 3.0.0

This release marks the first time we have released this connector
separately from the main Flink release.
Over time more connectors will be migrated to this release model.

This release is equivalent to the connector version released alongside
Flink 1.16.0 and acts as a drop-in replacement.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352349

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Best regards,

Martijn


Re: Streaming queries in FTS using Kafka log

2022-12-21 Thread Alexander Sorokoumov
Hello everyone,

Answering my own question, it turns out that Flink Table Store removes the
normalization node on read from an external log system only if
log.changelog-mode='all' and log.consistency = 'transactional' [1].

1.
https://github.com/apache/flink-table-store/blob/7e0d55ff3dc9fd48455b17d9a439647b0554d020/flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/source/TableStoreSource.java#L136-L141

Best,
Alex
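For reference, based on the condition above, a table definition that should avoid the normalization step might look like the following (a sketch derived from my original example; only the log.consistency option changes, everything else is unchanged):

```sql
-- With log.changelog-mode='all' AND log.consistency='transactional',
-- the Kafka log carries complete changelog records, so the read path
-- can drop the ChangelogNormalize node (per [1]).
CREATE TABLE word_count (
  word STRING PRIMARY KEY NOT ENFORCED,
  cnt BIGINT
) WITH (
  'connector' = 'table-store',
  'path' = 's3://my-bucket/table-store',
  'log.system' = 'kafka',
  'kafka.bootstrap.servers' = 'broker:9092',
  'kafka.topic' = 'word_count_log',
  'auto-create' = 'true',
  'log.changelog-mode' = 'all',
  'log.consistency' = 'transactional'  -- changed from 'eventual'
);
```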


On Fri, Dec 16, 2022 at 5:28 PM Alexander Sorokoumov <
asorokou...@confluent.io> wrote:

> Hello community,
>
> I want to ask about streaming queries with Flink Table Store. After
> reading the documentation on Streaming Queries [1], I was under the
> impression that only tables with LogStore-over-TableStore and No Changelog
> Producer need the normalization step since the Kafka log has the `before`
> values.
>
> However, when I created the following table:
>
> CREATE TABLE word_count (
>  word STRING PRIMARY KEY NOT ENFORCED,
>  cnt BIGINT
> ) WITH (
>  'connector' = 'table-store',
>  'path' = 's3://my-bucket/table-store',
>  'log.system' = 'kafka',
>  'kafka.bootstrap.servers' = 'broker:9092',
>  'kafka.topic' = 'word_count_log',
>  'auto-create' = 'true',
>  'log.changelog-mode' = 'all',
>  'log.consistency' = 'eventual'
> );
>
> And ran a streaming query against it:
>
> SELECT * FROM word_count;
>
> The topology for this query had the normalization task
> (ChangelogNormalize).
>
> Is this a bug or expected behavior? If it is the latter, can you please
> clarify why this is the case?
>
> 1.
> https://nightlies.apache.org/flink/flink-table-store-docs-master/docs/development/streaming-query/
>
> Thank you,
> Alex
>


[jira] [Created] (FLINK-30479) Document flink-connector-files for local execution

2022-12-21 Thread Mingliang Liu (Jira)
Mingliang Liu created FLINK-30479:
-

 Summary: Document flink-connector-files for local execution
 Key: FLINK-30479
 URL: https://issues.apache.org/jira/browse/FLINK-30479
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.16.0
Reporter: Mingliang Liu


The file system SQL connector itself is included in Flink and does not require 
an additional dependency. However, if a user uses the filesystem connector for 
local execution, e.g. running a Flink job in the IDE, they will need to add the 
dependency. Otherwise, the user will get a validation exception: {{Cannot 
discover a connector using option: 'connector'='filesystem'}}. This is 
confusing and should be documented.

The scope of the files connector dependency should be {{provided}}, because it 
should not be packaged into the JAR file.
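A minimal sketch of the dependency to document (the version shown is an assumption matching the affected version in this ticket):

```xml
<!-- Needed only for local execution, e.g. running a job from the IDE;
     marked provided so it is not packaged into the application JAR. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-files</artifactId>
  <version>1.16.0</version>
  <scope>provided</scope>
</dependency>
```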



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [Discuss] SQL Client syntax to show a configuration

2022-12-21 Thread Sergey Nuyanzin
Hi Mingliang,

Thanks for bringing this up
From one side, yes, some engines use the SHOW syntax, and I would rather be
+0.5 here.

From another side, both SHOW and SET are very limited commands.
Maybe it would make sense to look at information_schema (mentioned in SQL
92 [1]), which most databases already support (Postgres [2], MySQL [3],
CockroachDB [4]).
Hive does not support it yet; however, there are some activities in that
direction [5].
This approach would allow querying these tables
(not only properties, but also functions, databases, and all other data
available via the SHOW command) just like any other Flink tables.

[1] https://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt
[2] https://www.postgresql.org/docs/current/information-schema.html
[3] https://dev.mysql.com/doc/refman/8.0/en/information-schema.html
[4] https://www.cockroachlabs.com/docs/stable/information-schema.html
[5] https://issues.apache.org/jira/browse/HIVE-1010
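To illustrate the variants under discussion (the single-key retrieval syntaxes below are the proposals being compared, not necessarily supported by Flink today):

```sql
-- Existing Flink SQL Client behavior:
SET 'table.exec.sink.not-null-enforcer' = 'drop';  -- set a property
SET;                                               -- list all properties

-- Proposed Spark/Hive-style retrieval of a single property:
SET 'table.exec.sink.not-null-enforcer';

-- SHOW-based alternative, in the style of e.g. Trino / CockroachDB:
SHOW SESSION LIKE 'table.exec%';
```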

On Wed, Dec 21, 2022 at 12:09 PM yuxia  wrote:

> Hi, Mingliang.
> Thanks for bringing this up. +1 to add this support.
> I think using the syntax "SET 'key'" to show a single variable is weird, but
> using it may be more convenient and more unified, since we already use "SET"
> to show all variables.
> Considering that many Hive/Spark users may be used to it, "SET 'key'" seems
> acceptable as well.
> But I don't have a strong preference; something else like a "SHOW" statement
> also sounds good to me.
>
> Best regards,
> Yuxia
>
> - Original Message -
> From: "Martijn Visser" 
> To: "dev" 
> Sent: Wednesday, December 21, 2022, 5:44:15 PM
> Subject: Re: [Discuss] SQL Client syntax to show a configuration
>
> Hi Mingliang,
>
> Thanks for opening a discussion thread on this topic, much appreciated. If
> there is no standard SQL for this, then we should have a discussion on this
> topic indeed.
>
> I'm not a fan of using "SET" to show a value. For me, "SET" implies that
> you are setting a value. I prefer "SHOW" since that clearly identifies that
> you want to display a value. Curious what others think on this topic.
>
> Best regards,
>
> Martijn
>
> On Wed, Dec 21, 2022 at 9:48 AM Mingliang Liu  wrote:
>
> > Hi all,
> >
> > Currently in SQL Client we can use the "SET 'key'='value'" command to
> set a
> > value to a config property key for the session. We can also list all
> config
> > properties by just calling "SET". It would be convenient to show the
> value
> > of a specific config property given its key(s). Without this, users will
> > need to eyeball the very one config from all config properties which may
> > need scrolling the screen.
> >
> > I do not find a standard SQL syntax for this.
> > - Some use the syntax "SET 'key'" to show the value, e.g. Spark SQL and
> > Hive.
> > - Some use the "SHOW" keyword to show session properties, e.g.
> > CockroachDB. Trino and MySQL support a key pattern in the "SHOW
> > SESSION" statement.
> >
> > I filed FLINK-30459 to track this. I also attached initial PR #21535 that
> > uses the "SET 'key'" syntax. I chose that because I previously used this
> > syntax. It seems simpler to just use one keyword to interact with
> > session configs. When using Hive dialect, Flink SQL also supports that.
> > However, it's not straightforward enough and one probably needs to check
> > the doc first.
> >
> > As Martijn suggested, before improving the code, it's better to get it
> > discussed here first. So, is it a good idea to add this support? Which
> > syntax is preferred by our community, and are there other better ways of
> > supporting this?
> >
> > Thanks,
> >
>


-- 
Best regards,
Sergey


[jira] [Created] (FLINK-30480) Add benchmarks for adaptive batch scheduler

2022-12-21 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-30480:
---

 Summary: Add benchmarks for adaptive batch scheduler
 Key: FLINK-30480
 URL: https://issues.apache.org/jira/browse/FLINK-30480
 Project: Flink
  Issue Type: Improvement
  Components: Benchmarks, Runtime / Coordination
Reporter: Zhu Zhu


Currently we only have benchmarks for the DefaultScheduler (FLINK-20612). We should 
also have benchmarks for the AdaptiveBatchScheduler to identify 
initialization/scheduling/deployment performance problems or regressions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30481) create base aws-glue catalog implementation in flink-connector-aws

2022-12-21 Thread Samrat Deb (Jira)
Samrat Deb created FLINK-30481:
--

 Summary: create base aws-glue catalog implementation in 
flink-connector-aws
 Key: FLINK-30481
 URL: https://issues.apache.org/jira/browse/FLINK-30481
 Project: Flink
  Issue Type: Sub-task
Reporter: Samrat Deb






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30482) Update catalog documentation

2022-12-21 Thread Samrat Deb (Jira)
Samrat Deb created FLINK-30482:
--

 Summary: Update catalog documentation 
 Key: FLINK-30482
 URL: https://issues.apache.org/jira/browse/FLINK-30482
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Samrat Deb


After creating the implementation of the Glue catalog, update the catalog page documentation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)