[jira] [Closed] (FLINK-18887) Add ElasticSearch connector for Python DataStream API

2022-06-17 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-18887.
---
Resolution: Fixed

Merged to master via 72ef7e010546f41f8fa7ac01cdb3f9a90f100ac2

> Add ElasticSearch connector for Python DataStream API
> -
>
> Key: FLINK-18887
> URL: https://issues.apache.org/jira/browse/FLINK-18887
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Shuiqiang Chen
>Assignee: LuNing Wang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-18887) Add ElasticSearch connector for Python DataStream API

2022-06-17 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18887:

Component/s: Connectors / ElasticSearch

> Add ElasticSearch connector for Python DataStream API
> -
>
> Key: FLINK-18887
> URL: https://issues.apache.org/jira/browse/FLINK-18887
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Connectors / ElasticSearch
>Reporter: Shuiqiang Chen
>Assignee: LuNing Wang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
> Fix For: 1.16.0
>
>






[jira] [Comment Edited] (FLINK-26051) one sql has row_number =1 and the subsequent SQL has "case when" and "where" statement result Exception : The window can only be ordered in ASCENDING mode

2022-06-17 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555383#comment-17555383
 ] 

godfrey he edited comment on FLINK-26051 at 6/17/22 7:04 AM:
-

[~zhangbinzaifendou] Thanks for providing the pr


was (Author: godfreyhe):
[~zhangbinzaifendou] Thanks for push the pr

> one sql has row_number =1 and the subsequent SQL has "case when" and "where" 
> statement result Exception : The window can only be ordered in ASCENDING mode
> --
>
> Key: FLINK-26051
> URL: https://issues.apache.org/jira/browse/FLINK-26051
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.2, 1.14.4
>Reporter: chuncheng wu
>Priority: Major
> Attachments: image-2022-02-10-20-13-14-424.png, 
> image-2022-02-11-11-18-20-594.png
>
>
> hello,
>    I have two SQL queries. One (sql0) is "select xx from (ROW_NUMBER statement)
> where rn = 1" and the other (sql1) is "select ${fields} from result where
> ${filter_conditions}". The fields referenced in sql1 include one "case when"
> field. The two queries work well separately, but when combined they produce
> the exception below. It happens when the logical plan is translated into the
> physical plan:
>  
> {code:java}
> org.apache.flink.table.api.TableException: The window can only be ordered in 
> ASCENDING mode.
>     at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.scala:98)
>     at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.scala:52)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59)
>     at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregateBase.translateToPlan(StreamExecOverAggregateBase.scala:42)
>     at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:54)
>     at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:39)
>     at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59)
>     at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalcBase.translateToPlan(StreamExecCalcBase.scala:38)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:65)
>     at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:103)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:42)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.explainInternal(TableEnvironmentImpl.java:630)
>     at 
> org.apache.flink.table.api.internal.TableImpl.explain(TableImpl.java:582)
>     at 
> com.meituan.grocery.data.flink.test.BugTest.testRowNumber(BugTest.java:69)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>     at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>     at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60){code}
> In the stack trace above, row_number()'s physical rel, which is normally 
> StreamExecRank, changes to StreamExecOverAggregate. The 
> StreamExecOv
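A minimal sketch of the combination described in the report, using hypothetical table and field names (the original queries are not included in the issue):

```sql
-- sql0: deduplication via ROW_NUMBER, keeping the latest row per key
CREATE TEMPORARY VIEW result AS
SELECT id, name, status, ts
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY ts DESC) AS rn
  FROM source_table
) WHERE rn = 1;

-- sql1: a CASE WHEN field plus a WHERE filter on top of the view
SELECT id,
       CASE WHEN status = 1 THEN 'active' ELSE 'inactive' END AS status_text
FROM result
WHERE name IS NOT NULL;
```

Planned separately, sql0 is recognized as a deduplication (StreamExecRank). The report says that when the two are planned together, the ROW_NUMBER node is instead translated as an over-aggregate, which only supports ASCENDING ordering, producing the exception above.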

[GitHub] [flink] zhoulii commented on pull request #19984: [hotfix][tests] test the serialized object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread GitBox


zhoulii commented on PR #19984:
URL: https://github.com/apache/flink/pull/19984#issuecomment-1158562510

   > Would you open a JIRA ticket to fix this test issue? @zhoulii
   
   Hi @zhuzhurk , thanks for your reply. 
   
   I thought the change was minor, so I did not open a Jira ticket. I am not 
that familiar with the contribution conventions. Do I need to open a Jira 
ticket for this issue? If so, please remind me, and sorry for opening this PR 
before opening a Jira ticket.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28016) Support Maven 3.3+

2022-06-17 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555436#comment-17555436
 ] 

Chesnay Schepler commented on FLINK-28016:
--

Not sure either, but I built it with Maven 3.8.6 and it works as I said it 
would.

This is also well-known behavior; see MNG-5899.

> Support Maven 3.3+
> --
>
> Key: FLINK-28016
> URL: https://issues.apache.org/jira/browse/FLINK-28016
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.16.0
>
>
> We are currently de-facto limited to Maven 3.2.5 because our packaging relies 
> on the shade-plugin modifying the dependency tree at runtime when bundling 
> dependencies, which is no longer possible on Maven 3.3+.
> Being locked in to such an old Maven version isn't a good state to be in, and 
> the contributor experience suffers as well.
> I've been looking into removing this limitation by explicitly marking every 
> dependency that we bundle as {{optional}} in the poms, which really means 
> {{non-transitive}}. This ensures that everything bundled by one module is 
> not visible to other modules. Some tooling to catch developer mistakes was 
> also written.
> Overall this is actually quite a nice change, as it makes things more 
> explicit and reduces inconsistencies (e.g., the dependency plugin results are 
> questionable if the shade-plugin didn't run!); and it already highlighted 
> several problems in Flink.
> This change will have no effect on users or the released poms, because the 
> dependency-reduced poms will be generated as before and remove all modified 
> dependencies.
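A hedged sketch of the approach described above: a dependency that a module bundles via the shade-plugin is declared {{optional}} so it does not leak transitively. The groupId/artifactId/version here are illustrative, not taken from Flink's actual poms:

```xml
<!-- illustrative pom fragment: the bundled dependency is marked optional,
     i.e. non-transitive, so downstream modules do not see it -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-compress</artifactId>
  <version>1.21</version>
  <!-- "optional" here effectively means "non-transitive": the artifact is
       still bundled by this module's shade-plugin run, but consumers of
       this module do not inherit it on their compile/runtime classpath -->
  <optional>true</optional>
</dependency>
```

Since the dependency-reduced poms are generated as before and remove the modified dependencies, released poms are unaffected by this change.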





[GitHub] [flink] zentol merged pull request #19990: [FLINK-28095][oss] Replace commons-io IOUtils dependency

2022-06-17 Thread GitBox


zentol merged PR #19990:
URL: https://github.com/apache/flink/pull/19990





[jira] [Closed] (FLINK-28095) Replace IOUtils dependency on oss filesystem

2022-06-17 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-28095.

Resolution: Fixed

master: 9aaf09c3db753ef805e3c7e3889a1f919d6362a5

> Replace IOUtils dependency on oss filesystem
> 
>
> Key: FLINK-28095
> URL: https://issues.apache.org/jira/browse/FLINK-28095
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The oss fs has an undeclared dependency on commons-io for a single call to 
> IOUtils.
> We can make our lives a little bit easier by using the Flink IOUtils instead.





[GitHub] [flink] zentol commented on pull request #19968: [FLINK-27972][coordination] Wait until savepoint operation is complete

2022-06-17 Thread GitBox


zentol commented on PR #19968:
URL: https://github.com/apache/flink/pull/19968#issuecomment-1158571238

   @flinkbot run azure





[GitHub] [flink] dianfu commented on pull request #19958: [FLINK-27159][table-api] Support first_value/last_value in the Table API

2022-06-17 Thread GitBox


dianfu commented on PR #19958:
URL: https://github.com/apache/flink/pull/19958#issuecomment-1158572228

   @shuiqiangchen It seems the new stack is meant to be used when adding new 
functions, and it makes adding them easier. Regarding this PR, its purpose is 
to expose the existing functions first_value/last_value to the Table API, so I 
guess we could still use the old stack.





[jira] [Commented] (FLINK-28102) Flink AkkaRpcSystemLoader fails when temporary directory is a symlink

2022-06-17 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555442#comment-17555442
 ] 

Weijie Guo commented on FLINK-28102:


You can set io.tmp.dirs yourself using -D.

createDirectories throws FileAlreadyExistsException if the path exists but is 
not a plain directory, such as a symlink (even one that points to a directory).
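A small self-contained sketch of that behavior (plain JDK code, not Flink code): `Files.createDirectories` fails with `FileAlreadyExistsException` when the target path is a symlink, even if the link points at an existing directory, because the existence check uses NOFOLLOW_LINKS semantics.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkDirDemo {

    /** Returns true iff createDirectories on the given path throws
     *  FileAlreadyExistsException. */
    public static boolean failsOnSymlink(Path path) throws IOException {
        try {
            Files.createDirectories(path);
            return false;
        } catch (FileAlreadyExistsException e) {
            // thrown because the path exists but is a symlink,
            // not a plain directory
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        Path target = Files.createTempDirectory("real-tmp");
        // link plays the role of /tmp -> /mnt/tmp from the report
        Path link = target.getParent().resolve("link-" + System.nanoTime());
        Files.createSymbolicLink(link, target);

        System.out.println("plain dir fails: " + failsOnSymlink(target)); // false
        System.out.println("symlink fails:   " + failsOnSymlink(link));   // true

        Files.delete(link);
        Files.delete(target);
    }
}
```

This mirrors the `createAndCheckIsDirectory` frame in the stack trace above: the existing-directory check rethrows for any path that is not a directory when links are not followed.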

> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> -
>
> Key: FLINK-28102
> URL: https://issues.apache.org/jira/browse/FLINK-28102
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
>Affects Versions: 1.15.0
>Reporter: Prabhu Joseph
>Priority: Minor
>
> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> *Error Message:*
> {code}
> Caused by: java.nio.file.FileAlreadyExistsException: /tmp
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_332]
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) 
> ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectories(Files.java:727) 
> ~[?:1.8.0_332]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcSystemLoader.loadRpcSystem(AkkaRpcSystemLoader.java:58)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at org.apache.flink.runtime.rpc.RpcSystem.load(RpcSystem.java:101) 
> ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManagerRunnerServices(TaskManagerRunner.java:186)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.start(TaskManagerRunner.java:288)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:481)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> {code}
> *Repro:*
> {code}
> 1. /tmp is a symlink pointing to the actual directory /mnt/tmp
> [root@prabhuHost log]# ls -lrt /tmp
> lrwxrwxrwx 1 root root 8 Jun 15 07:51 /tmp -> /mnt/tmp
> 2. Start Cluster
> ./bin/start-cluster.sh
> {code}





[jira] [Comment Edited] (FLINK-28102) Flink AkkaRpcSystemLoader fails when temporary directory is a symlink

2022-06-17 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555442#comment-17555442
 ] 

Weijie Guo edited comment on FLINK-28102 at 6/17/22 7:19 AM:
-

You can set io.tmp.dirs yourself using -D.

createDirectories throws FileAlreadyExistsException if the path exists but is 
not a plain directory, such as a symlink (even one that points to a directory).


was (Author: weijie guo):
You can set an io.tmp.dirs yourself using -D

createDirectories will throw FileAlreadyExistsException if dir exists but is 
not a directory, such as symlink.

> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> -
>
> Key: FLINK-28102
> URL: https://issues.apache.org/jira/browse/FLINK-28102
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
>Affects Versions: 1.15.0
>Reporter: Prabhu Joseph
>Priority: Minor
>
> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> *Error Message:*
> {code}
> Caused by: java.nio.file.FileAlreadyExistsException: /tmp
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_332]
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) 
> ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectories(Files.java:727) 
> ~[?:1.8.0_332]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcSystemLoader.loadRpcSystem(AkkaRpcSystemLoader.java:58)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at org.apache.flink.runtime.rpc.RpcSystem.load(RpcSystem.java:101) 
> ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManagerRunnerServices(TaskManagerRunner.java:186)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.start(TaskManagerRunner.java:288)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:481)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> {code}
> *Repro:*
> {code}
> 1. /tmp is a symlink pointing to the actual directory /mnt/tmp
> [root@prabhuHost log]# ls -lrt /tmp
> lrwxrwxrwx 1 root root 8 Jun 15 07:51 /tmp -> /mnt/tmp
> 2. Start Cluster
> ./bin/start-cluster.sh
> {code}





[jira] [Commented] (FLINK-28071) Support missing built-in functions in Table API

2022-06-17 Thread LuNing Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555445#comment-17555445
 ] 

LuNing Wang commented on FLINK-28071:
-

[~twalthr] In this issue I only expose existing built-in functions; I am not 
adding new ones. Should I avoid the new stack, or should we create a new issue 
later to refactor all built-in functions onto the new stack?

Take this PR [https://github.com/apache/flink/pull/19988] as an example: if I 
use the new stack, I will delete `StringCallGen#generateAscii` and add an 
`AsciiFunction`, like the IfNullFunction in 
https://issues.apache.org/jira/browse/FLINK-20522

IMO, if we change `StringCallGen#generateAscii`, we can create a new issue for 
it.

> Support missing built-in functions in Table API
> ---
>
> Key: FLINK-28071
> URL: https://issues.apache.org/jira/browse/FLINK-28071
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dian Fu
>Assignee: LuNing Wang
>Priority: Major
> Fix For: 1.16.0
>
>
> Many built-in functions are not supported. See 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/
>  for more details. There are two columns for each built-in function: *SQL 
> Function* and *Table Function*; if a function is not supported in the *Table 
> API*, the *Table Function* column is documented as *N/A*. We need to evaluate 
> each of these functions to ensure that they can be used in both SQL and the 
> Table API.





[GitHub] [flink-table-store] JingsongLi merged pull request #162: [FLINK-27542] Add end to end tests for Hive to read external table store files

2022-06-17 Thread GitBox


JingsongLi merged PR #162:
URL: https://github.com/apache/flink-table-store/pull/162





[jira] [Comment Edited] (FLINK-27542) Add end to end tests for Hive to read external table store files

2022-06-17 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17533700#comment-17533700
 ] 

Jingsong Lee edited comment on FLINK-27542 at 6/17/22 7:29 AM:
---

master:

ad9e09dbd9621ff2935f193c53a35b355c76fac6

2bce9d695f06c2775902532b0ccda5b0f0a6a514

949415fd03ae1caf71a85309eb58d38e02b4a5c9


was (Author: lzljs3620320):
master:

ad9e09dbd9621ff2935f193c53a35b355c76fac6

2bce9d695f06c2775902532b0ccda5b0f0a6a514

> Add end to end tests for Hive to read external table store files
> 
>
> Key: FLINK-27542
> URL: https://issues.apache.org/jira/browse/FLINK-27542
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> To ensure that the jar produced by the flink-table-store-hive module can 
> actually work in a real Hive system, we need to add end-to-end tests.





[jira] [Closed] (FLINK-27542) Add end to end tests for Hive to read external table store files

2022-06-17 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-27542.

Resolution: Fixed

> Add end to end tests for Hive to read external table store files
> 
>
> Key: FLINK-27542
> URL: https://issues.apache.org/jira/browse/FLINK-27542
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> To ensure that the jar produced by the flink-table-store-hive module can 
> actually work in a real Hive system, we need to add end-to-end tests.





[jira] [Comment Edited] (FLINK-28071) Support missing built-in functions in Table API

2022-06-17 Thread LuNing Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555445#comment-17555445
 ] 

LuNing Wang edited comment on FLINK-28071 at 6/17/22 7:33 AM:
--

[~twalthr] [~Sergey Nuyanzin] In this issue I only expose existing built-in 
functions; I am not adding new ones. Should I avoid the new stack, or should 
we create a new issue later to refactor all built-in functions onto the new 
stack?

Take this PR [https://github.com/apache/flink/pull/19988] as an example: if I 
use the new stack, I will delete `StringCallGen#generateAscii` and add an 
`AsciiFunction`, like the IfNullFunction in 
https://issues.apache.org/jira/browse/FLINK-20522

IMO, if we change `StringCallGen#generateAscii`, we can create a new issue for 
it.


was (Author: ana4):
[~twalthr] I only support existing built-in functions no more new functions in 
this issue. Should I not use the new stack, or in the future we create a new 
issue to refactor all built-in functions to the new stack?

Like this PR [https://github.com/apache/flink/pull/19988] . If I use the new 
stack, I will delete the `StringCallGen#generateAscii` and add an 
`AsciiFunction` like IfNullFunction in 
https://issues.apache.org/jira/browse/FLINK-20522

IMO, if we change `StringCallGen#generateAscii`, we can create a new issue.

> Support missing built-in functions in Table API
> ---
>
> Key: FLINK-28071
> URL: https://issues.apache.org/jira/browse/FLINK-28071
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dian Fu
>Assignee: LuNing Wang
>Priority: Major
> Fix For: 1.16.0
>
>
> Many built-in functions are not supported. See 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/
>  for more details. There are two columns for each built-in function: *SQL 
> Function* and *Table Function*; if a function is not supported in the *Table 
> API*, the *Table Function* column is documented as *N/A*. We need to evaluate 
> each of these functions to ensure that they can be used in both SQL and the 
> Table API.





[GitHub] [flink-ml] yunfengzhou-hub opened a new pull request, #112: [FLINK-27096] Flush buffer at epoch watermark

2022-06-17 Thread GitBox


yunfengzhou-hub opened a new pull request, #112:
URL: https://github.com/apache/flink-ml/pull/112

   This PR reduces Flink ML iteration latency by enforcing a flush at each 
iteration epoch watermark.





[GitHub] [flink] zhuzhurk commented on pull request #19984: [hotfix][tests] test the serialized object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread GitBox


zhuzhurk commented on PR #19984:
URL: https://github.com/apache/flink/pull/19984#issuecomment-1158591560

   It's better to open a JIRA ticket for it so that the fix can be tracked. You 
can set the priority to Minor though.
   There do exist some hotfix commits, but they are usually part of PRs tied to 
a JIRA ticket, so they can still be tracked.





[GitHub] [flink] wuchong commented on a diff in pull request #19851: [FLINK-20765][table-planner] Make all expressions use the result type passed in instead of inferring it again in OperatorGen to av

2022-06-17 Thread GitBox


wuchong commented on code in PR #19851:
URL: https://github.com/apache/flink/pull/19851#discussion_r899829796


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala:
##
@@ -313,6 +313,7 @@ object ScalarOperatorGens {
   def generateUnaryIntervalPlusMinus(
   ctx: CodeGeneratorContext,
   plus: Boolean,
+  resultType: LogicalType,
   operand: GeneratedExpression): GeneratedExpression = {
 val operator = if (plus) "+" else "-"
 generateUnaryArithmeticOperator(ctx, operator, operand.resultType, operand)

Review Comment:
   Use the `resultType` parameter? 






[jira] [Created] (FLINK-28104) Drop the unused order parameter in FirstValueFunction/LastValueFunction

2022-06-17 Thread luoyuxia (Jira)
luoyuxia created FLINK-28104:


 Summary: Drop the unused order parameter in 
FirstValueFunction/LastValueFunction
 Key: FLINK-28104
 URL: https://issues.apache.org/jira/browse/FLINK-28104
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API, Table SQL / Planner
Reporter: luoyuxia






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] shuiqiangchen commented on pull request #19958: [FLINK-27159][table-api] Support first_value/last_value in the Table API

2022-06-17 Thread GitBox


shuiqiangchen commented on PR #19958:
URL: https://github.com/apache/flink/pull/19958#issuecomment-1158602558

   @dianfu Yes, the implementations of LastValueAggFunction and 
FirstValueAggFunction already follow the new form. This PR is mainly to expose 
built-in functions to the Table API.





[jira] [Updated] (FLINK-28104) Drop the unused order parameter in FirstValueFunction/LastValueFunction

2022-06-17 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-28104:
-
Fix Version/s: 1.16.0

> Drop the unused order parameter in FirstValueFunction/LastValueFunction
> ---
>
> Key: FLINK-28104
> URL: https://issues.apache.org/jira/browse/FLINK-28104
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>






[jira] [Created] (FLINK-28105) We should test the copied object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread zl (Jira)
zl created FLINK-28105:
--

 Summary: We should test the copied object in 
GlobFilePathFilterTest#testGlobFilterSerializable
 Key: FLINK-28105
 URL: https://issues.apache.org/jira/browse/FLINK-28105
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Reporter: zl


Variable 
[matcherCopy|https://github.com/apache/flink/blob/master/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java#L170]
 is created but never asserted on, so the test exercises the original object 
instead of the deserialized copy.
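The underlying pattern can be sketched generically: after a serialization round-trip, the assertions must run against the copy, or serialization bugs go unnoticed. This uses plain Java serialization, not Flink's test utilities; `GlobDemoFilter` is a hypothetical stand-in for GlobFilePathFilter.

```java
import java.io.*;

public class SerializationRoundTrip {

    /** Hypothetical stand-in for a Serializable filter whose transient
     *  state must be rebuilt after deserialization. */
    public static class GlobDemoFilter implements Serializable {
        private static final long serialVersionUID = 1L;
        private final String pattern;
        // matcher-like state that does not survive serialization
        private transient String compiled;

        public GlobDemoFilter(String pattern) {
            this.pattern = pattern;
            this.compiled = "glob:" + pattern;
        }

        public boolean accept(String path) {
            if (compiled == null) {
                // lazily rebuild the transient state on the copy
                compiled = "glob:" + pattern;
            }
            return path.endsWith(pattern.replace("*", ""));
        }
    }

    /** Serialize and deserialize, returning the copy: the copy, not the
     *  original, is what the test must exercise. */
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T roundTrip(T obj)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (T) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        GlobDemoFilter original = new GlobDemoFilter("*.txt");
        GlobDemoFilter copy = roundTrip(original);
        // assert on the copy: testing only `original` would pass even if
        // deserialization left the object broken
        System.out.println(copy.accept("data.txt")); // true
    }
}
```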





[GitHub] [flink] zentol commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


zentol commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899866865


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -98,6 +98,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 List buffers;
 try {
 buffers = dataFuture.get();
+} catch (InterruptedException e) {
+writer.fail(e);
+throw e;

Review Comment:
   Does this maybe belong to FLINK-27792 instead? AFAICT this isn't required 
for the issue at hand.



##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   While this will likely solve the issue, I'm not sure it is the correct 
solution.
   
   We could see in the logs that this future would eventually be completed, 
with several buffers being contained within. Admittedly this happened after the 
hosting TM shut down (so I'm not sure if it can happen in production where the 
JVM would go with it), but I do wonder if this couldn't cause a buffer leak.
   
   Would there be any down-side of doing the clean-up like this:
   ```
   dataFuture.thenAccept(
   buffers -> {
   try {
   CloseableIterator.fromList(buffers, Buffer::recycleBuffer)
   .close();
   } catch (Exception e) {
   }
   });
   ```






[jira] [Commented] (FLINK-28102) Flink AkkaRpcSystemLoader fails when temporary directory is a symlink

2022-06-17 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555455#comment-17555455
 ] 

Prabhu Joseph commented on FLINK-28102:
---

Yes, setting io.tmp.dirs to the actual directory pointed to by the symlink 
worked. Shall we improve the logic to handle the case where the symlink points 
to an actual directory as well?

> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> -
>
> Key: FLINK-28102
> URL: https://issues.apache.org/jira/browse/FLINK-28102
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
>Affects Versions: 1.15.0
>Reporter: Prabhu Joseph
>Priority: Minor
>
> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> *Error Message:*
> {code}
> Caused by: java.nio.file.FileAlreadyExistsException: /tmp
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_332]
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) 
> ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectories(Files.java:727) 
> ~[?:1.8.0_332]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcSystemLoader.loadRpcSystem(AkkaRpcSystemLoader.java:58)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at org.apache.flink.runtime.rpc.RpcSystem.load(RpcSystem.java:101) 
> ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManagerRunnerServices(TaskManagerRunner.java:186)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.start(TaskManagerRunner.java:288)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:481)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> {code}
> *Repro:*
> {code}
> 1. /tmp is a symlink points to actual directory /mnt/tmp
> [root@prabhuHost log]# ls -lrt /tmp
> lrwxrwxrwx 1 root root 8 Jun 15 07:51 /tmp -> /mnt/tmp
> 2. Start Cluster
> ./bin/start-cluster.sh
> {code}
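A minimal, self-contained sketch of the failure mode (illustrative names, not the 
AkkaRpcSystemLoader code): `Files.createDirectories()` throws 
`FileAlreadyExistsException` when the path exists but is a symlink, because it 
re-checks the path with NOFOLLOW_LINKS. Resolving the link via `toRealPath()` 
first, as the loader could do, sidesteps the problem.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkTmpDir {

    // Returns true iff createDirectories() succeeds on the given path.
    public static boolean tryCreate(Path p) {
        try {
            Files.createDirectories(p);
            return true;
        } catch (IOException e) {
            // FileAlreadyExistsException when p is a symlink, as in the report
            return false;
        }
    }

    // Reproduces the /tmp -> /mnt/tmp setup in a scratch directory and
    // reports "symlink-result/realpath-result".
    public static String demo() throws IOException {
        Path base = Files.createTempDirectory("flink-28102");
        Path real = Files.createDirectory(base.resolve("mnt-tmp"));
        Path link = Files.createSymbolicLink(base.resolve("tmp"), real);
        return tryCreate(link) + "/" + tryCreate(link.toRealPath());
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // false/true on POSIX systems
    }
}
```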



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-table-store] JingsongLi opened a new pull request, #163: [FLINK-28066] Use FileSystem.createRecoverableWriter in FileStoreCommit

2022-06-17 Thread GitBox


JingsongLi opened a new pull request, #163:
URL: https://github.com/apache/flink-table-store/pull/163

   In FileStoreCommitImpl, currently, it uses `rename` to support atomic commit.
   But this does not work for object stores like S3. We can use a RecoverableWriter 
to support atomic commit for object stores.
   We can introduce `AtomicFileWriter`:
   - Use rename if createRecoverableWriter is not supported or the filesystem 
is FILE_SYSTEM
   - Use RecoverableWriter if FileSystem supports createRecoverableWriter
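   A hedged sketch of the rename-based half of the proposal — the `AtomicFileWriter` 
name comes from the description above, but the body below is illustrative `java.nio` 
code (write a hidden temp file, then atomically rename it into place), not 
table-store API:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RenameCommit {

    // Writes the contents to a sibling temp file, then renames atomically.
    // Readers either see the old target or the fully written new one.
    public static void commit(Path target, byte[] contents) throws IOException {
        Path temp = target.resolveSibling("." + target.getFileName() + ".tmp");
        Files.write(temp, contents);            // may be partially visible
        Files.move(temp, target,                // the atomic "commit" step
                StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("commit-demo");
        Path manifest = dir.resolve("MANIFEST-1");
        commit(manifest, "snapshot-1".getBytes(StandardCharsets.UTF_8));
        System.out.println(
                new String(Files.readAllBytes(manifest), StandardCharsets.UTF_8));
    }
}
```

This is exactly the path that breaks on S3-style stores, where rename is not atomic; there the RecoverableWriter branch would be taken instead.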
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-28066) Use FileSystem.createRecoverableWriter in FileStoreCommit

2022-06-17 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-28066:


Assignee: Jingsong Lee

> Use FileSystem.createRecoverableWriter in FileStoreCommit
> -
>
> Key: FLINK-28066
> URL: https://issues.apache.org/jira/browse/FLINK-28066
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> In FileStoreCommitImpl, currently, it uses `rename` to support atomic commit.
> But this does not work for object stores like S3. We can use a RecoverableWriter 
> to support atomic commit for object stores.
> We can introduce `AtomicFileCommitter`:
>  * Use RecoverableWriter if FileSystem supports createRecoverableWriter
>  * Use rename if createRecoverableWriter is not supported



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-28066) Use FileSystem.createRecoverableWriter in FileStoreCommit

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28066:
---
Labels: pull-request-available  (was: )

> Use FileSystem.createRecoverableWriter in FileStoreCommit
> -
>
> Key: FLINK-28066
> URL: https://issues.apache.org/jira/browse/FLINK-28066
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> In FileStoreCommitImpl, currently, it uses `rename` to support atomic commit.
> But this does not work for object stores like S3. We can use a RecoverableWriter 
> to support atomic commit for object stores.
> We can introduce `AtomicFileCommitter`:
>  * Use RecoverableWriter if FileSystem supports createRecoverableWriter
>  * Use rename if createRecoverableWriter is not supported



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-28105) We should test the copied object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread zl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555457#comment-17555457
 ] 

zl commented on FLINK-28105:


Hi [~zhuzh], can you take a look?

> We should test the copied object in 
> GlobFilePathFilterTest#testGlobFilterSerializable
> -
>
> Key: FLINK-28105
> URL: https://issues.apache.org/jira/browse/FLINK-28105
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: zl
>Priority: Minor
>
> Variable 
> [matcherCopy|https://github.com/apache/flink/blob/master/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java#L170]
>  is created without testing.
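For reference, the round-trip pattern the test should follow — run the assertions 
against the deserialized copy, not the original object (the `Filter` stand-in below 
is illustrative, not `GlobFilePathFilter`):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RoundTrip {

    // Tiny Serializable stand-in so the example is self-contained.
    public static class Filter implements Serializable {
        private final String pattern;
        public Filter(String pattern) { this.pattern = pattern; }
        public boolean accepts(String path) { return path.endsWith(pattern); }
    }

    // Serialize, then deserialize, returning the copy to assert against.
    @SuppressWarnings("unchecked")
    public static <T> T copySerializable(T obj)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (T) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Filter copy = copySerializable(new Filter(".txt"));
        // Exercise the *copy* — this is what the test was missing.
        System.out.println(copy.accepts("data/file.txt")); // true
    }
}
```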



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] zentol commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


zentol commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899880554


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   Why is the dataFuture not being completed in the first place? Isn't that the 
real issue?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] alpinegizmo commented on pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-06-17 Thread GitBox


alpinegizmo commented on PR #14376:
URL: https://github.com/apache/flink/pull/14376#issuecomment-1158622294

   Would love to see this in 1.16. Hope someone can review it soon!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zentol commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


zentol commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899884298


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   Another question I would ask is why this can even result in a TM crash; 
shouldn't the waiting be interrupted instead.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28027) Initialise Async Sink maximum number of in flight messages to low number for rate limiting strategy

2022-06-17 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555460#comment-17555460
 ] 

EMing Zhou commented on FLINK-28027:


Hi [~CrynetLogistics], when I use the JDBC sink, I found that there are several 
parameters. I don't know if they can help you; I provide them for reference.
{code:java}
JdbcExecutionOptions.builder()
    .withBatchSize(4000)
    .withBatchIntervalMs(200)
    .withMaxRetries(3)
    .build()
{code}

> Initialise Async Sink maximum number of in flight messages to low number for 
> rate limiting strategy
> ---
>
> Key: FLINK-28027
> URL: https://issues.apache.org/jira/browse/FLINK-28027
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Zichen Liu
>Priority: Minor
> Fix For: 1.16.0
>
>
> *Background*
> In the AsyncSinkWriter, we implement a rate limiting strategy.
> The initial value for the maximum number of in flight messages is set 
> extremely high ({{{}maxBatchSize * maxInFlightRequests{}}}).
> However, in accordance with the AIMD strategy, the TCP implementation for 
> congestion control has found a small value to start with [is 
> better]([https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start]).
> *Suggestion*
> A better default might be:
>  * maxBatchSize
>  * maxBatchSize / parallelism
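A minimal AIMD sketch of the suggested behaviour — start the in-flight budget low 
(e.g. maxBatchSize), grow it additively on success, and halve it on throttling. 
Constants and method names are illustrative, not the actual AsyncSinkWriter fields:

```java
public class AimdLimit {
    private final int increment;
    private final int max;
    private int limit;

    public AimdLimit(int initial, int increment, int max) {
        this.limit = initial;
        this.increment = increment;
        this.max = max;
    }

    // Additive increase on a successful request, capped at max.
    public int onSuccess() {
        limit = Math.min(max, limit + increment);
        return limit;
    }

    // Multiplicative decrease when the destination throttles us.
    public int onThrottle() {
        limit = Math.max(1, limit / 2);
        return limit;
    }

    public static void main(String[] args) {
        int maxBatchSize = 500, maxInFlightRequests = 50;
        // Start at maxBatchSize rather than maxBatchSize * maxInFlightRequests.
        AimdLimit l = new AimdLimit(maxBatchSize, 10,
                maxBatchSize * maxInFlightRequests);
        l.onSuccess();                     // 510
        l.onThrottle();                    // 255
        System.out.println(l.onSuccess()); // 265
    }
}
```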



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-28027) Initialise Async Sink maximum number of in flight messages to low number for rate limiting strategy

2022-06-17 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555460#comment-17555460
 ] 

EMing Zhou edited comment on FLINK-28027 at 6/17/22 8:21 AM:
-

Hi [~CrynetLogistics], when I use the JDBC sink, I found that there are several 
parameters. I don't know if they can help you; I provide them for reference.
{code:java}
JdbcExecutionOptions.builder() .withBatchSize(4000) .withBatchIntervalMs(200) 
.withMaxRetries(3) .build()
{code}
 


was (Author: zsigner):
Hi [~CrynetLogistics] ,When I use jdbc sink, I found that there are several 
parameters. I don't know if it can help you. I will provide you with reference.
{code:java}
// code placeholder
{code}
JdbcExecutionOptions.builder() .withBatchSize(4000) .withBatchIntervalMs(200) 
.withMaxRetries(3) .build()

> Initialise Async Sink maximum number of in flight messages to low number for 
> rate limiting strategy
> ---
>
> Key: FLINK-28027
> URL: https://issues.apache.org/jira/browse/FLINK-28027
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Zichen Liu
>Priority: Minor
> Fix For: 1.16.0
>
>
> *Background*
> In the AsyncSinkWriter, we implement a rate limiting strategy.
> The initial value for the maximum number of in flight messages is set 
> extremely high ({{{}maxBatchSize * maxInFlightRequests{}}}).
> However, in accordance with the AIMD strategy, the TCP implementation for 
> congestion control has found a small value to start with [is 
> better]([https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start]).
> *Suggestion*
> A better default might be:
>  * maxBatchSize
>  * maxBatchSize / parallelism



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] pnowojski commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


pnowojski commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899887457


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   I agree with @zentol  that this doesn't look good and I would be afraid it 
could lead to some resource leaks.
   
   It looks to me like the issue is that `dataFuture` is being cancelled from 
the chain: `PipelinedSubpartition#release()` <- ... <- 
`ResultPartition#release` <- ... <- `NettyShuffleEnvironment#close`. Which 
happens after `StreamTask#cleanUp` (which is waiting for this future to 
complete), leading to a deadlock.
   
   We would either need to cancel the future sooner (`StreamTask#cleanUp`?), 
or do what @zentol proposed. I think the latter is indeed a good option. We 
don't need to wait blockingly. Let's just not completely ignore exceptions 
here. Logging an error should be fine.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27792) InterruptedException thrown by ChannelStateWriterImpl

2022-06-17 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555461#comment-17555461
 ] 

Piotr Nowojski commented on FLINK-27792:


What is causing this `InterruptedException`? Where does it originate from?

> InterruptedException thrown by ChannelStateWriterImpl
> -
>
> Key: FLINK-27792
> URL: https://issues.apache.org/jira/browse/FLINK-27792
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Assignee: Atri Sharma
>Priority: Blocker
>  Labels: test-stability
>
> {code:java}
> 2022-05-25T15:45:17.7584795Z May 25 15:45:17 [ERROR] 
> WindowDistinctAggregateITCase.testTumbleWindow_Rollup  Time elapsed: 1.522 s  
> <<< ERROR!
> 2022-05-25T15:45:17.7586025Z May 25 15:45:17 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-05-25T15:45:17.7587205Z May 25 15:45:17  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-05-25T15:45:17.7588649Z May 25 15:45:17  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
> 2022-05-25T15:45:17.7589984Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-05-25T15:45:17.7603647Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-05-25T15:45:17.7605042Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-05-25T15:45:17.7605750Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-05-25T15:45:17.7606751Z May 25 15:45:17  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:268)
> 2022-05-25T15:45:17.7607513Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-05-25T15:45:17.7608232Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-05-25T15:45:17.7608953Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-05-25T15:45:17.7614259Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-05-25T15:45:17.7615777Z May 25 15:45:17  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1277)
> 2022-05-25T15:45:17.7617284Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-05-25T15:45:17.7618847Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-05-25T15:45:17.7620579Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-05-25T15:45:17.7622674Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-05-25T15:45:17.7624066Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-05-25T15:45:17.7625352Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-05-25T15:45:17.7626524Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-05-25T15:45:17.7627743Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-05-25T15:45:17.7628913Z May 25 15:45:17  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-05-25T15:45:17.7629902Z May 25 15:45:17  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-05-25T15:45:17.7630891Z May 25 15:45:17  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-05-25T15:45:17.7632074Z May 25 15:45:17  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-05-25T15:45:17.7654202Z May 25 15:45:17  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-05-25T15:45:17.7655764Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-05-25T15:45:17.7657231Z May 25 15:45:17  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-05-25T15:45:17.7658586Z May 25 15:45:17  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
> 

[GitHub] [flink] pnowojski commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


pnowojski commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899893420


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -98,6 +98,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 List<Buffer> buffers;
 try {
 buffers = dataFuture.get();
+} catch (InterruptedException e) {
+writer.fail(e);
+throw e;

Review Comment:
   How do you think it's related to FLINK-27792?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-28027) Initialise Async Sink maximum number of in flight messages to low number for rate limiting strategy

2022-06-17 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555460#comment-17555460
 ] 

EMing Zhou edited comment on FLINK-28027 at 6/17/22 8:28 AM:
-

Hi [~CrynetLogistics], when I use the JDBC sink, I found that there are several 
parameters. I don't know if they can help you; I provide them for reference.
{code:java}
JdbcExecutionOptions
.builder() 
.withBatchSize(4000) 
.withBatchIntervalMs(200) 
.withMaxRetries(3) 
.build()
{code}
 


was (Author: zsigner):
Hi [~CrynetLogistics] ,When I use jdbc sink, I found that there are several 
parameters. I don't know if it can help you. I will provide you with reference.
{code:java}
JdbcExecutionOptions.builder() .withBatchSize(4000) .withBatchIntervalMs(200) 
.withMaxRetries(3) .build()
{code}
 

> Initialise Async Sink maximum number of in flight messages to low number for 
> rate limiting strategy
> ---
>
> Key: FLINK-28027
> URL: https://issues.apache.org/jira/browse/FLINK-28027
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Zichen Liu
>Priority: Minor
> Fix For: 1.16.0
>
>
> *Background*
> In the AsyncSinkWriter, we implement a rate limiting strategy.
> The initial value for the maximum number of in flight messages is set 
> extremely high ({{{}maxBatchSize * maxInFlightRequests{}}}).
> However, in accordance with the AIMD strategy, the TCP implementation for 
> congestion control has found a small value to start with [is 
> better]([https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start]).
> *Suggestion*
> A better default might be:
>  * maxBatchSize
>  * maxBatchSize / parallelism



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-28027) Initialise Async Sink maximum number of in flight messages to low number for rate limiting strategy

2022-06-17 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555460#comment-17555460
 ] 

EMing Zhou edited comment on FLINK-28027 at 6/17/22 8:28 AM:
-

Hi [~CrynetLogistics], when I use the JDBC sink, I found that there are several 
parameters. I don't know if they can help you; I provide them for reference.
{code:java}
JdbcExecutionOptions
.builder() 
.withBatchSize(4000) 
.withBatchIntervalMs(200) 
.withMaxRetries(3) 
.build()
{code}
 


was (Author: zsigner):
Hi [~CrynetLogistics] ,When I use jdbc sink, I found that there are several 
parameters. I don't know if it can help you. I will provide you with reference.
{code:java}
JdbcExecutionOptions
.builder() 
.withBatchSize(4000) 
.withBatchIntervalMs(200) 
.withMaxRetries(3) 
.build()
{code}
 

> Initialise Async Sink maximum number of in flight messages to low number for 
> rate limiting strategy
> ---
>
> Key: FLINK-28027
> URL: https://issues.apache.org/jira/browse/FLINK-28027
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Zichen Liu
>Priority: Minor
> Fix For: 1.16.0
>
>
> *Background*
> In the AsyncSinkWriter, we implement a rate limiting strategy.
> The initial value for the maximum number of in flight messages is set 
> extremely high ({{{}maxBatchSize * maxInFlightRequests{}}}).
> However, in accordance with the AIMD strategy, the TCP implementation for 
> congestion control has found a small value to start with [is 
> better]([https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start]).
> *Suggestion*
> A better default might be:
>  * maxBatchSize
>  * maxBatchSize / parallelism



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] pnowojski commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


pnowojski commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899887457


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   I agree with @zentol  that this doesn't look good and I would be afraid it 
could lead to some resource leaks.
   
   > Why is the dataFuture not being completed in the first place? Isn't that 
the real issue?
   
   It looks to me like the issue is that `dataFuture` is being cancelled from 
the chain: `PipelinedSubpartition#release()` <- ... <- 
`ResultPartition#release` <- ... <- `NettyShuffleEnvironment#close`. Which 
happens after `StreamTask#cleanUp` (which is waiting for this future to 
complete), leading to a deadlock.
   
   We would either need to cancel the future sooner (`StreamTask#cleanUp`?), 
or do what @zentol proposed. I think the latter is indeed a good option. We 
don't need to wait blockingly. Let's just not completely ignore exceptions 
here. Logging an error should be fine.
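   A minimal sketch of that non-blocking option: register a callback that recycles 
the buffers whenever the future eventually completes, and log failures rather than 
swallow them. `Buffer` below is a tiny stand-in for Flink's network buffer, not 
the real class:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class FutureCleanup {
    public static class Buffer {
        public boolean recycled;
        public void recycleBuffer() { recycled = true; }
    }

    // Attaches cleanup without blocking the caller on dataFuture.
    public static void recycleOnCompletion(
            CompletableFuture<List<Buffer>> dataFuture) {
        dataFuture.whenComplete(
                (buffers, error) -> {
                    if (error != null) {
                        // don't silently ignore exceptions; logging is enough
                        System.err.println("Discarding channel state: " + error);
                        return;
                    }
                    buffers.forEach(Buffer::recycleBuffer);
                });
    }

    public static void main(String[] args) {
        CompletableFuture<List<Buffer>> f = new CompletableFuture<>();
        recycleOnCompletion(f);         // registered before completion
        Buffer b = new Buffer();
        f.complete(List.of(b));         // completes later, e.g. on release
        System.out.println(b.recycled); // true
    }
}
```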
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-table-store] JingsongLi merged pull request #147: [FLINK-27947] Introduce Spark Reader for table store

2022-06-17 Thread GitBox


JingsongLi merged PR #147:
URL: https://github.com/apache/flink-table-store/pull/147


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] lsyldliu opened a new pull request, #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNTION USING JAR' syntax

2022-06-17 Thread GitBox


lsyldliu opened a new pull request, #20001:
URL: https://github.com/apache/flink/pull/20001

   ## What is the purpose of the change
   
   *Planner support to use a jar which is registered by 'CREATE FUNCTION USING 
JAR' syntax*
   
   
   ## Brief change log
   
 - *Planner support to use a jar which is registered by 'CREATE FUNCTION USING 
JAR' syntax*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Added integration tests in FunctionITCase*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: ( no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-27947) Introduce Spark Reader for table store

2022-06-17 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-27947.

Resolution: Fixed

master: c58576eb3bd3d860c5ba5a940b4d0e0b3cb5f55a

> Introduce Spark Reader for table store
> --
>
> Key: FLINK-27947
> URL: https://issues.apache.org/jira/browse/FLINK-27947
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> Now that we have a more stable connector interface, we can build out the 
> ecosystem a bit more.
> Apache Spark is a common batch computing engine, and the more common 
> scenarios are: Flink Streaming writes storage, Spark reads storage.
> So we can support Spark's reader.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27659) Planner support to use jar which is registered by "USING JAR" syntax

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27659:
---
Labels: pull-request-available  (was: )

> Planner support to use jar which is registered by "USING JAR" syntax
> 
>
> Key: FLINK-27659
> URL: https://issues.apache.org/jira/browse/FLINK-27659
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] pnowojski commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


pnowojski commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899899066


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   > Another question I would ask is why this can even result in a TM crash; 
shouldn't the waiting be interrupted instead.
   
   Why should it be interrupted? We are only using interrupts to wake up user 
code or 3rd-party libraries. Our own code should be able to shut down cleanly 
without interruptions. We even explicitly disallow SIGINTs during `StreamTask` 
cleanup (`StreamTask#disableInterruptOnCancel`) once the task thread exits from 
user code, as otherwise this could lead to resource leaks. If we cannot clean 
up resources, we have to rely on the `TaskCancelerWatchDog`, which will fail 
over the whole TM after a timeout.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28105) We should test the copied object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-28105:

Component/s: Tests
 (was: API / Core)

> We should test the copied object in 
> GlobFilePathFilterTest#testGlobFilterSerializable
> -
>
> Key: FLINK-28105
> URL: https://issues.apache.org/jira/browse/FLINK-28105
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: zl
>Priority: Minor
>
> Variable 
> [matcherCopy|https://github.com/apache/flink/blob/master/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java#L170]
>  is created without testing.
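
The fix the ticket asks for — asserting on the deserialized copy rather than the
original — can be sketched with a plain Java serialization round-trip. `roundTrip`
is an illustrative stand-in, not the actual Flink test utility:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializableCopyCheck {
    // Serialize an object and deserialize it again, returning the copy.
    static <T extends Serializable> T roundTrip(T original) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(original);
        }
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            @SuppressWarnings("unchecked")
            T copy = (T) ois.readObject();
            return copy;
        }
    }

    public static void main(String[] args) throws Exception {
        String original = "glob:**/*.txt";
        String copy = roundTrip(original);
        // The point of the ticket: the assertions must target the copy,
        // not the original object that was serialized.
        if (!copy.equals(original)) {
            throw new AssertionError("copy differs from original");
        }
        System.out.println("ok");
    }
}
```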



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-28105) We should test the copied object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-28105:

Fix Version/s: 1.16.0

> We should test the copied object in 
> GlobFilePathFilterTest#testGlobFilterSerializable
> -
>
> Key: FLINK-28105
> URL: https://issues.apache.org/jira/browse/FLINK-28105
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0, 1.14.4, 1.16.0
>Reporter: zl
>Priority: Minor
> Fix For: 1.16.0
>
>
> Variable 
> [matcherCopy|https://github.com/apache/flink/blob/master/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java#L170]
>  is created without testing.





[jira] [Created] (FLINK-28106) Create flink-table-store-connector-base to shade all flink dependencies

2022-06-17 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-28106:


 Summary: Create flink-table-store-connector-base to shade all 
flink dependencies
 Key: FLINK-28106
 URL: https://issues.apache.org/jira/browse/FLINK-28106
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.2.0


Hive and other readers currently need to shade a bunch of dependencies, which is 
not very friendly. We can have a common module that connectors depend on.





[jira] [Updated] (FLINK-28105) We should test the copied object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-28105:

Affects Version/s: 1.14.4
   1.15.0
   1.16.0

> We should test the copied object in 
> GlobFilePathFilterTest#testGlobFilterSerializable
> -
>
> Key: FLINK-28105
> URL: https://issues.apache.org/jira/browse/FLINK-28105
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0, 1.14.4, 1.16.0
>Reporter: zl
>Priority: Minor
>
> Variable 
> [matcherCopy|https://github.com/apache/flink/blob/master/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java#L170]
>  is created without testing.





[jira] [Assigned] (FLINK-28105) We should test the copied object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-28105:
---

Assignee: zl

> We should test the copied object in 
> GlobFilePathFilterTest#testGlobFilterSerializable
> -
>
> Key: FLINK-28105
> URL: https://issues.apache.org/jira/browse/FLINK-28105
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0, 1.14.4, 1.16.0
>Reporter: zl
>Assignee: zl
>Priority: Minor
> Fix For: 1.16.0
>
>
> Variable 
> [matcherCopy|https://github.com/apache/flink/blob/master/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java#L170]
>  is created without testing.





[jira] [Commented] (FLINK-28105) We should test the copied object in GlobFilePathFilterTest#testGlobFilterSerializable

2022-06-17 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555467#comment-17555467
 ] 

Zhu Zhu commented on FLINK-28105:
-

Thanks for reporting this problem! [~Leo Zhou] 

The ticket is assigned to you.

> We should test the copied object in 
> GlobFilePathFilterTest#testGlobFilterSerializable
> -
>
> Key: FLINK-28105
> URL: https://issues.apache.org/jira/browse/FLINK-28105
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0, 1.14.4, 1.16.0
>Reporter: zl
>Assignee: zl
>Priority: Minor
> Fix For: 1.16.0
>
>
> Variable 
> [matcherCopy|https://github.com/apache/flink/blob/master/flink-core/src/test/java/org/apache/flink/api/common/io/GlobFilePathFilterTest.java#L170]
>  is created without testing.





[GitHub] [flink] flinkbot commented on pull request #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNCTION USING JAR' syntax

2022-06-17 Thread GitBox


flinkbot commented on PR #20001:
URL: https://github.com/apache/flink/pull/20001#issuecomment-1158639485

   
   ## CI report:
   
   * e0586d561301b1356fccad5c01ce0e1fbf88bcfb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] zentol commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


zentol commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899904091


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -98,6 +98,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 List buffers;
 try {
 buffers = dataFuture.get();
+} catch (InterruptedException e) {
+writer.fail(e);
+throw e;

Review Comment:
   I'm interpreting that ticket as a more general "how to handle 
InterruptedException" ticket.






[GitHub] [flink] dannycranmer merged pull request #19937: [FLINK-28007][connectors/kinesis,firehose] Migrated Kinesis Firehose & Streams …

2022-06-17 Thread GitBox


dannycranmer merged PR #19937:
URL: https://github.com/apache/flink/pull/19937





[GitHub] [flink] zentol commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


zentol commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899906279


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   > Why should it be interrupted?
   
   I figured that since the other code path (from the comment above) can be 
interrupted, the cleanup can as well.






[jira] [Commented] (FLINK-28007) Tests for AWS Connectors Using SDK v2 to use Synchronous Clients

2022-06-17 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555473#comment-17555473
 ] 

Danny Cranmer commented on FLINK-28007:
---

Merged to master 
https://github.com/apache/flink/commit/403cd3b86b9131161d6380bfd2c5200dcbe6989c

> Tests for AWS Connectors Using SDK v2 to use Synchronous Clients
> 
>
> Key: FLINK-28007
> URL: https://issues.apache.org/jira/browse/FLINK-28007
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Common, Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Zichen Liu
>Assignee: Zichen Liu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> h3. Background
> The unit & integration tests for the aws connectors in the Flink repository 
> create clients using static helper methods in flink-connector-aws-base, in 
> the AWSServicesTestUtils class.
> These static helper methods create the asynchronous flavour of the clients 
> required by aws connectors.
> *Task*
> * Change these to the synchronous version for each aws client.





[jira] [Resolved] (FLINK-28007) Tests for AWS Connectors Using SDK v2 to use Synchronous Clients

2022-06-17 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer resolved FLINK-28007.
---
Resolution: Fixed

> Tests for AWS Connectors Using SDK v2 to use Synchronous Clients
> 
>
> Key: FLINK-28007
> URL: https://issues.apache.org/jira/browse/FLINK-28007
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Common, Connectors / Kinesis
>Affects Versions: 1.15.0
>Reporter: Zichen Liu
>Assignee: Zichen Liu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> h3. Background
> The unit & integration tests for the aws connectors in the Flink repository 
> create clients using static helper methods in flink-connector-aws-base, in 
> the AWSServicesTestUtils class.
> These static helper methods create the asynchronous flavour of the clients 
> required by aws connectors.
> *Task*
> * Change these to the synchronous version for each aws client.





[jira] [Created] (FLINK-28107) Support id of document is null

2022-06-17 Thread LuNing Wang (Jira)
LuNing Wang created FLINK-28107:
---

 Summary: Support id of document is null
 Key: FLINK-28107
 URL: https://issues.apache.org/jira/browse/FLINK-28107
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Connectors / ElasticSearch
Affects Versions: 1.16.0
Reporter: LuNing Wang
 Fix For: 1.16.0


```

es7_sink = Elasticsearch7SinkBuilder() \
.set_emitter(ElasticsearchEmitter.static_index('foo')) \
.set_hosts(['localhost:9200']) \

```

 

Caused by: java.lang.NullPointerException

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:68)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:55)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:52)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:30)

at 
org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriter.write(ElasticsearchWriter.java:123)

at 
org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:158)
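
A minimal sketch of the null-safe behavior this ticket asks for, in plain Java:
the "index request" is modelled as a `Map` here, and `buildRequest` and the id
extractor are illustrative names, not the actual `SimpleElasticsearchEmitter`
API. The idea is to omit the id when it is null (letting the backend
auto-generate one) instead of throwing an NPE:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class NullSafeIdEmitter {
    // Build an "index request" (modelled as a Map) for a fixed index.
    // The id extractor may return null; in that case we simply omit the id
    // field instead of failing with a NullPointerException.
    static Map<String, String> buildRequest(
            String index, Function<String, String> idExtractor, String document) {
        Map<String, String> request = new HashMap<>();
        request.put("index", index);
        request.put("source", document);
        String id = idExtractor.apply(document);
        if (id != null) { // the guard missing in the reported stack trace
            request.put("id", id);
        }
        return request;
    }

    public static void main(String[] args) {
        Map<String, String> r = buildRequest("foo", doc -> null, "{\"k\":1}");
        System.out.println(r.containsKey("id")); // false: id omitted, no NPE
    }
}
```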





[jira] [Updated] (FLINK-28107) Support id of document is null

2022-06-17 Thread LuNing Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LuNing Wang updated FLINK-28107:

Description: 
 
{code:java}
es7_sink = Elasticsearch7SinkBuilder() \
.set_emitter(ElasticsearchEmitter.static_index('foo')) \
.set_hosts(['localhost:9200'])  {code}
Caused by: java.lang.NullPointerException

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:68)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:55)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:52)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:30)

at 
org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriter.write(ElasticsearchWriter.java:123)

at 
org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:158)

  was:
```

es7_sink = Elasticsearch7SinkBuilder() \
.set_emitter(ElasticsearchEmitter.static_index('foo')) \
.set_hosts(['localhost:9200']) \

```

 

Caused by: java.lang.NullPointerException

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:68)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:55)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:52)

at 
org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:30)

at 
org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriter.write(ElasticsearchWriter.java:123)

at 
org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:158)


> Support id of document is null
> --
>
> Key: FLINK-28107
> URL: https://issues.apache.org/jira/browse/FLINK-28107
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: LuNing Wang
>Priority: Major
> Fix For: 1.16.0
>
>
>  
> {code:java}
> es7_sink = Elasticsearch7SinkBuilder() \
> .set_emitter(ElasticsearchEmitter.static_index('foo')) \
> .set_hosts(['localhost:9200'])  {code}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:68)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:55)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:52)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:30)
> at 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriter.write(ElasticsearchWriter.java:123)
> at 
> org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:158)





[GitHub] [flink] pnowojski commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


pnowojski commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899910882


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   Huh. It looks like an interrupt should never happen there. We never (and 
should never be) interrupting the `ChannelStateWriterImpl#executor` thread.






[jira] [Updated] (FLINK-28035) Support rescale overwrite

2022-06-17 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-28035:
--
Summary: Support rescale overwrite  (was: Don't check num of buckets for 
rescale bucket condition)

> Support rescale overwrite
> -
>
> Key: FLINK-28035
> URL: https://issues.apache.org/jira/browse/FLINK-28035
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> For an ordinary read-write job, the scan will check the numBuckets read from 
> manifests against the current numBuckets, to avoid data corruption. See 
> FLINK-27316.
>  
> For rescale-bucket rewrite, we should allow the rescale task to read the data 
> as the old bucket number and rescale according to the new bucket number.





[GitHub] [flink] deadwind4 opened a new pull request, #20002: [FLINK-28107][python][connector/elasticsearch] Support id of document is null

2022-06-17 Thread GitBox


deadwind4 opened a new pull request, #20002:
URL: https://github.com/apache/flink/pull/20002

   ## What is the purpose of the change
   
   Support id of document is null
   
   ## Brief change log
   
 - *Add IndexRequest in SimpleElasticsearchEmitter*
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Updated] (FLINK-28107) Support id of document is null

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28107:
---
Labels: pull-request-available  (was: )

> Support id of document is null
> --
>
> Key: FLINK-28107
> URL: https://issues.apache.org/jira/browse/FLINK-28107
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: LuNing Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
>  
> {code:java}
> es7_sink = Elasticsearch7SinkBuilder() \
> .set_emitter(ElasticsearchEmitter.static_index('foo')) \
> .set_hosts(['localhost:9200'])  {code}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:68)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:55)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:52)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:30)
> at 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriter.write(ElasticsearchWriter.java:123)
> at 
> org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:158)





[jira] [Commented] (FLINK-28107) Support id of document is null

2022-06-17 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555482#comment-17555482
 ] 

Martijn Visser commented on FLINK-28107:


[~afedulov] [~alexanderpreuss] Should this PR also be directed towards the 
externalized Elasticsearch repo or will that be synced later? 

> Support id of document is null
> --
>
> Key: FLINK-28107
> URL: https://issues.apache.org/jira/browse/FLINK-28107
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: LuNing Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
>  
> {code:java}
> es7_sink = Elasticsearch7SinkBuilder() \
> .set_emitter(ElasticsearchEmitter.static_index('foo')) \
> .set_hosts(['localhost:9200'])  {code}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:68)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:55)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:52)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:30)
> at 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriter.write(ElasticsearchWriter.java:123)
> at 
> org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:158)





[jira] [Updated] (FLINK-28035) Support rescale overwrite

2022-06-17 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-28035:
--
Description: 
For an ordinary read-write job, the scan will check the numBuckets read from 
manifests against the current numBuckets, to avoid data corruption. See 
FLINK-27316.

 

However, this can be improved as follows.
 * If no new writes happen after changing the bucket number, the reads should 
not be blocked.
 * For rescale overwrite, we should support scan as the old bucket num, rescale 
and commit as the new bucket num.
 * The streaming job can be suspended, and recovered from the rescaled data 
layout.

  was:
For an ordinary read-write job, the scan will check the numBuckets read from 
manifests against the current numBuckets, to avoid data corruption. See 
FLINK-27316.

 

For rescale-bucket rewrite, we should allow the rescale task to read the data 
as the old bucket number and rescale according to the new bucket number.


> Support rescale overwrite
> -
>
> Key: FLINK-28035
> URL: https://issues.apache.org/jira/browse/FLINK-28035
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> For an ordinary read-write job, the scan will check the numBuckets read from 
> manifests against the current numBuckets, to avoid data corruption. See 
> FLINK-27316.
>  
> However, this can be improved as follows.
>  * If no new writes happen after changing the bucket number, the reads should 
> not be blocked.
>  * For rescale overwrite, we should support scan as the old bucket num, 
> rescale and commit as the new bucket num.
>  * The streaming job can be suspended, and recovered from the rescaled data 
> layout.





[jira] [Updated] (FLINK-28035) Support rescale overwrite

2022-06-17 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-28035:
--
Description: 
For an ordinary read-write job, the scan will check the numBuckets read from 
manifests against the current numBuckets, to avoid data corruption. See 
FLINK-27316.

 

However, this can be improved as follows.
 * If no new writes happen after changing the bucket number, the reads should 
not be blocked.
 * For rescale overwrite, we should support scan as the old bucket num, rescale 
and commit as the new bucket num.
 * The streaming job can be suspended and recovered from the rescaled data 
layout.

  was:
For an ordinary read-write job, the scan will check the numBuckets read from 
manifests against the current numBuckets, to avoid data corruption. See 
FLINK-27316.

 

However, this can be improved as follows.
 * If no new writes happen after changing the bucket number, the reads should 
not be blocked.
 * For rescale overwrite, we should support scan as the old bucket num, rescale 
and commit as the new bucket num.
 * The streaming job can be suspended, and recovered from the rescaled data 
layout.


> Support rescale overwrite
> -
>
> Key: FLINK-28035
> URL: https://issues.apache.org/jira/browse/FLINK-28035
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> For an ordinary read-write job, the scan will check the numBuckets read from 
> manifests against the current numBuckets, to avoid data corruption. See 
> FLINK-27316.
>  
> However, this can be improved as follows.
>  * If no new writes happen after changing the bucket number, the reads should 
> not be blocked.
>  * For rescale overwrite, we should support scan as the old bucket num, 
> rescale and commit as the new bucket num.
>  * The streaming job can be suspended and recovered from the rescaled data 
> layout.





[jira] [Commented] (FLINK-28103) Job cancelling api returns 404 when job is actually running

2022-06-17 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555484#comment-17555484
 ] 

Martijn Visser commented on FLINK-28103:


[~ldwnt] Thanks, can you verify this with the latest versions of Flink since 
1.13 is not supported by the community anymore? 

> Job cancelling api returns 404 when job is actually running
> ---
>
> Key: FLINK-28103
> URL: https://issues.apache.org/jira/browse/FLINK-28103
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.13.5
>Reporter: David
>Priority: Major
> Attachments: image-2022-06-17-14-04-44-307.png, 
> image-2022-06-17-14-05-36-264.png
>
>
> The job is still running:
> !image-2022-06-17-14-04-44-307.png!
>  
> but the cancelling api returns 404:
> !image-2022-06-17-14-05-36-264.png!





[jira] [Commented] (FLINK-28102) Flink AkkaRpcSystemLoader fails when temporary directory is a symlink

2022-06-17 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555485#comment-17555485
 ] 

Weijie Guo commented on FLINK-28102:


 We could handle symlinks correctly before FLINK-23500, but now it's broken. 
From my personal point of view, we should allow symlinks as before. What do you 
think, [~chesnay]? 

> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> -
>
> Key: FLINK-28102
> URL: https://issues.apache.org/jira/browse/FLINK-28102
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
>Affects Versions: 1.15.0
>Reporter: Prabhu Joseph
>Priority: Minor
>
> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> *Error Message:*
> {code}
> Caused by: java.nio.file.FileAlreadyExistsException: /tmp
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_332]
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) 
> ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectories(Files.java:727) 
> ~[?:1.8.0_332]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcSystemLoader.loadRpcSystem(AkkaRpcSystemLoader.java:58)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at org.apache.flink.runtime.rpc.RpcSystem.load(RpcSystem.java:101) 
> ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManagerRunnerServices(TaskManagerRunner.java:186)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.start(TaskManagerRunner.java:288)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:481)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> {code}
> *Repro:*
> {code}
> 1. /tmp is a symlink pointing to the actual directory /mnt/tmp
> [root@prabhuHost log]# ls -lrt /tmp
> lrwxrwxrwx 1 root root 8 Jun 15 07:51 /tmp -> /mnt/tmp
> 2. Start Cluster
> ./bin/start-cluster.sh
> {code}
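
For context on the failure above: `Files.createDirectories` throws 
`FileAlreadyExistsException` when the path is a symlink to a directory, because 
its internal existence check does not follow links. A guard using the 
link-following `Files.isDirectory` accepts a `/tmp -> /mnt/tmp` style symlink. 
This is a sketch of the general pattern, not the actual Flink fix:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkSafeMkdirs {
    // Create the directory only if the (link-resolved) path is not already
    // a directory; this avoids FileAlreadyExistsException on symlinked dirs.
    static void ensureDirectory(Path path) throws IOException {
        if (Files.isDirectory(path)) { // follows symlinks by default
            return;
        }
        Files.createDirectories(path);
    }

    public static void main(String[] args) throws IOException {
        Path target = Files.createTempDirectory("real");
        Path link = target.getParent().resolve("link-" + System.nanoTime());
        // Symlink creation may fail on filesystems/OSes without symlink support.
        Files.createSymbolicLink(link, target);
        ensureDirectory(link); // no FileAlreadyExistsException
        System.out.println("ok");
        Files.delete(link);
        Files.delete(target);
    }
}
```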





[jira] [Comment Edited] (FLINK-28102) Flink AkkaRpcSystemLoader fails when temporary directory is a symlink

2022-06-17 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555485#comment-17555485
 ] 

Weijie Guo edited comment on FLINK-28102 at 6/17/22 8:58 AM:
-

 We could handle symlinks correctly before FLINK-23500, but now it's broken. 
From my personal point of view, we should allow symlinks as before. What do you 
think, [~chesnay] [~prabhujoseph]? 


was (Author: weijie guo):
 We can handle symlinks correctly before FLINK-23500, but now it's broken, from 
my personal point of view, we should allow symlinks as before, what do you 
think [~chesnay] 

> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> -
>
> Key: FLINK-28102
> URL: https://issues.apache.org/jira/browse/FLINK-28102
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
>Affects Versions: 1.15.0
>Reporter: Prabhu Joseph
>Priority: Minor
>
> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> *Error Message:*
> {code}
> Caused by: java.nio.file.FileAlreadyExistsException: /tmp
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_332]
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) 
> ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectories(Files.java:727) 
> ~[?:1.8.0_332]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcSystemLoader.loadRpcSystem(AkkaRpcSystemLoader.java:58)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at org.apache.flink.runtime.rpc.RpcSystem.load(RpcSystem.java:101) 
> ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManagerRunnerServices(TaskManagerRunner.java:186)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.start(TaskManagerRunner.java:288)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:481)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> {code}
> *Repro:*
> {code}
> 1. /tmp is a symlink points to actual directory /mnt/tmp
> [root@prabhuHost log]# ls -lrt /tmp
> lrwxrwxrwx 1 root root 8 Jun 15 07:51 /tmp -> /mnt/tmp
> 2. Start Cluster
> ./bin/start-cluster.sh
> {code}
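For context on the failure above, here is a minimal, self-contained sketch (plain JDK code, not Flink code; names are illustrative) of why `Files.createDirectories` rejects a symlink that points at a directory: when the final path component already exists, the JDK re-checks it with `NOFOLLOW_LINKS`, so the symlink itself is not considered a directory and the `FileAlreadyExistsException` is rethrown. A follow-links `Files.isDirectory` check before creating is one possible workaround.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkRepro {

    // Returns true when createDirectories throws on a symlink-to-directory.
    public static boolean createDirectoriesFailsOnSymlink() {
        try {
            Path target = Files.createTempDirectory("real-tmp");
            Path link = target.getParent().resolve("link-tmp-" + System.nanoTime());
            Files.createSymbolicLink(link, target);
            try {
                // Fails, although the link points at an existing directory.
                Files.createDirectories(link);
                return false;
            } catch (FileAlreadyExistsException expected) {
                // Possible workaround: follow links before creating.
                if (!Files.isDirectory(link)) { // isDirectory follows symlinks by default
                    Files.createDirectories(link);
                }
                return true;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(createDirectoriesFailsOnSymlink()); // prints true
    }
}
```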





[GitHub] [flink] flinkbot commented on pull request #20002: [FLINK-28107][python][connector/elasticsearch] Support id of document is null

2022-06-17 Thread GitBox


flinkbot commented on PR #20002:
URL: https://github.com/apache/flink/pull/20002#issuecomment-1158658645

   
   ## CI report:
   
   * 8b75c0fc11b7ac3e83a73c317d0a12f5cd1116bf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-28102) Flink AkkaRpcSystemLoader fails when temporary directory is a symlink

2022-06-17 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555485#comment-17555485
 ] 

Weijie Guo edited comment on FLINK-28102 at 6/17/22 8:59 AM:
-

 We could handle symlinks correctly before FLINK-23500, but now it's broken. From 
my personal point of view, we should allow symlinks as before. What do you 
think [~chesnay] [~prabhujoseph]? I can try to fix this if you guys think so 
too.


was (Author: weijie guo):
 We can handle symlinks correctly before FLINK-23500, but now it's broken, from 
my personal point of view, we should allow symlinks as before, what do you 
think [~chesnay] [~prabhujoseph] 

> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> -
>
> Key: FLINK-28102
> URL: https://issues.apache.org/jira/browse/FLINK-28102
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
>Affects Versions: 1.15.0
>Reporter: Prabhu Joseph
>Priority: Minor
>
> Flink AkkaRpcSystemLoader fails when temporary directory is a symlink
> *Error Message:*
> {code}
> Caused by: java.nio.file.FileAlreadyExistsException: /tmp
> at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) 
> ~[?:1.8.0_332]
> at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_332]
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) 
> ~[?:1.8.0_332]
> at java.nio.file.Files.createDirectories(Files.java:727) 
> ~[?:1.8.0_332]
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcSystemLoader.loadRpcSystem(AkkaRpcSystemLoader.java:58)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at org.apache.flink.runtime.rpc.RpcSystem.load(RpcSystem.java:101) 
> ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManagerRunnerServices(TaskManagerRunner.java:186)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.start(TaskManagerRunner.java:288)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:481)
>  ~[flink-dist-1.15.0.jar:1.15.0]
> {code}
> *Repro:*
> {code}
> 1. /tmp is a symlink points to actual directory /mnt/tmp
> [root@prabhuHost log]# ls -lrt /tmp
> lrwxrwxrwx 1 root root 8 Jun 15 07:51 /tmp -> /mnt/tmp
> 2. Start Cluster
> ./bin/start-cluster.sh
> {code}





[jira] (FLINK-28021) Add FLIP-33 metrics to FileSystem connector

2022-06-17 Thread jackwangcs (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28021 ]


jackwangcs deleted comment on FLINK-28021:


was (Author: jackwangcs):
Hi [~martijnvisser] , I'd like to implement this feature, could you assign this 
ticket to me?

> Add FLIP-33 metrics to FileSystem connector
> ---
>
> Key: FLINK-28021
> URL: https://issues.apache.org/jira/browse/FLINK-28021
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Martijn Visser
>Priority: Major
>
> Both the current FileSource and FileSink have no metrics implemented. They 
> should have the FLIP-33 metrics implemented. 





[jira] [Commented] (FLINK-28107) Support id of document is null

2022-06-17 Thread LuNing Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555488#comment-17555488
 ] 

LuNing Wang commented on FLINK-28107:
-

[~martijnvisser] Once this PR is merged into the main repo, I will create a new 
PR for the Elasticsearch repo and add the Python ES docs.

> Support id of document is null
> --
>
> Key: FLINK-28107
> URL: https://issues.apache.org/jira/browse/FLINK-28107
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: LuNing Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
>  
> {code:java}
> es7_sink = Elasticsearch7SinkBuilder() \
>     .set_emitter(ElasticsearchEmitter.static_index('foo')) \
>     .set_hosts(['localhost:9200'])  {code}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:68)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter$StaticIndexRequestGenerator.apply(SimpleElasticsearchEmitter.java:55)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:52)
> at 
> org.apache.flink.connector.elasticsearch.sink.SimpleElasticsearchEmitter.emit(SimpleElasticsearchEmitter.java:30)
> at 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriter.write(ElasticsearchWriter.java:123)
> at 
> org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:158)
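A hypothetical sketch of the null-safe pattern the fix needs (illustrative names only, not the actual Flink/Elasticsearch API): the stack trace suggests the emitter attaches a document id even when no key extractor is configured, so the guard is to set the id only when a key is actually present.

```java
import java.util.HashMap;
import java.util.Map;

public class NullSafeIdSketch {

    // Stand-in for building an Elasticsearch index request.
    static Map<String, Object> buildIndexRequest(String index, String key, Map<String, Object> doc) {
        Map<String, Object> request = new HashMap<>();
        request.put("index", index);
        if (key != null) { // guard: only attach an id when one exists
            request.put("id", key);
        }
        request.put("source", doc);
        return request;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("name", "flink");
        // With a static index and no key extractor, the key is null and must be skipped.
        Map<String, Object> request = buildIndexRequest("foo", null, doc);
        System.out.println(request.containsKey("id")); // prints false
    }
}
```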





[jira] [Created] (FLINK-28108) Support compaction for append-only table

2022-06-17 Thread Jane Chan (Jira)
Jane Chan created FLINK-28108:
-

 Summary: Support compaction for append-only table
 Key: FLINK-28108
 URL: https://issues.apache.org/jira/browse/FLINK-28108
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Jane Chan
 Fix For: table-store-0.2.0








[GitHub] [flink] lsyldliu opened a new pull request, #20003: [FLINK-28080][runtime] Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and SafetyNetWrapperClassLoader

2022-06-17 Thread GitBox


lsyldliu opened a new pull request, #20003:
URL: https://github.com/apache/flink/pull/20003

   
   ## What is the purpose of the change
   In the table module, we need a `URLClassLoader` that exposes the `addURL` 
method, because we need to load jars dynamically in SQL jobs. Although 
`SafetyNetWrapperClassLoader` already exposes an `addURL` method, we can't 
guarantee that the classloader created by `FlinkUserCodeClassLoaders` is a 
`SafetyNetWrapperClassLoader`: if checkClassLoaderLeak is false, a different 
classloader may be returned. So we need to introduce a `MutableURLClassLoader` 
that exposes `addURL`, have both `SafetyNetWrapperClassLoader` and 
`FlinkUserCodeClassLoader` extend it, and refer only to this class in the 
table module.
   
   ## Brief change log
 - *Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader 
and SafetyNetWrapperClassLoader*
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
*FlinkUserCodeClassLoadersTest*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: ( no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
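A minimal sketch of the idea being proposed (heavily simplified; the real Flink classes carry leak-checking and child-first/parent-first resolution logic): `URLClassLoader.addURL` is `protected`, so a shared parent type that widens it to `public` lets callers add jars at runtime without caring which concrete classloader they hold.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class MutableURLClassLoader extends URLClassLoader {

    public MutableURLClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    public void addURL(URL url) { // widen visibility from protected to public
        super.addURL(url);
    }

    public static void main(String[] args) throws Exception {
        MutableURLClassLoader loader =
                new MutableURLClassLoader(new URL[0], MutableURLClassLoader.class.getClassLoader());
        // e.g. a jar discovered at runtime; the current directory is used here as a stand-in
        loader.addURL(new java.io.File(".").toURI().toURL());
        System.out.println(loader.getURLs().length); // prints 1
    }
}
```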
   





[jira] [Commented] (FLINK-28021) Add FLIP-33 metrics to FileSystem connector

2022-06-17 Thread Shubham Bansal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555492#comment-17555492
 ] 

Shubham Bansal commented on FLINK-28021:


As [~jackwangcs] has retracted his comment, can I take this up if Jack is not 
picking it up?
 
 
 

> Add FLIP-33 metrics to FileSystem connector
> ---
>
> Key: FLINK-28021
> URL: https://issues.apache.org/jira/browse/FLINK-28021
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Martijn Visser
>Priority: Major
>
> Both the current FileSource and FileSink have no metrics implemented. They 
> should have the FLIP-33 metrics implemented. 





[GitHub] [flink] lsyldliu commented on pull request #20003: [FLINK-28080][runtime] Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and SafetyNetWrapperClassLoader

2022-06-17 Thread GitBox


lsyldliu commented on PR #20003:
URL: https://github.com/apache/flink/pull/20003#issuecomment-1158679157

   cc @wuchong @zhuzhurk 





[jira] [Updated] (FLINK-28080) Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and SafetyNetWrapperClassLoader

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28080:
---
Labels: pull-request-available  (was: )

> Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and 
> SafetyNetWrapperClassLoader
> ---
>
> Key: FLINK-28080
> URL: https://issues.apache.org/jira/browse/FLINK-28080
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Affects Versions: 1.16.0
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>






[GitHub] [flink] flinkbot commented on pull request #20003: [FLINK-28080][runtime] Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and SafetyNetWrapperClassLoader

2022-06-17 Thread GitBox


flinkbot commented on PR #20003:
URL: https://github.com/apache/flink/pull/20003#issuecomment-1158682088

   
   ## CI report:
   
   * 738c2db31de828155396b1f89076459f6957607c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] zentol commented on a diff in pull request #19993: [FLINK-28077][checkpoint] Fix the bug that tasks get stuck during cancellation in ChannelStateWriteRequestExecutorImpl

2022-06-17 Thread GitBox


zentol commented on code in PR #19993:
URL: https://github.com/apache/flink/pull/19993#discussion_r899942581


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequest.java:
##
@@ -109,6 +112,9 @@ static ChannelStateWriteRequest buildFutureWriteRequest(
 }
 },
 throwable -> {
+if (!dataFuture.isDone()) {
+return;
+}

Review Comment:
   TBF I haven't seen an interrupt there myself; that idea is purely based on 
the proposed change.






[jira] [Created] (FLINK-28109) Delete useless code in the row emitter.

2022-06-17 Thread LuNing Wang (Jira)
LuNing Wang created FLINK-28109:
---

 Summary: Delete useless code in the row emitter.
 Key: FLINK-28109
 URL: https://issues.apache.org/jira/browse/FLINK-28109
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Affects Versions: 1.15.0
Reporter: LuNing Wang
 Fix For: 1.16.0


 

The `.id(key)` in the RowElasticsearchEmitter makes users confused.

The following is the source code. The key is always null, so we can never call 
the `id` method.
{code:java}
final IndexRequest indexRequest =
        new IndexRequest(indexGenerator.generate(row), documentType)
                .id(key)
                .source(document, contentType);
indexer.add(indexRequest); {code}
 

 





[jira] [Updated] (FLINK-28109) Delete useless code in the row emitter.

2022-06-17 Thread LuNing Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LuNing Wang updated FLINK-28109:

Description: 
 

The `.id(key)` in the RowElasticsearchEmitter makes users confused.

The following is the source code. The key is always null, so we can never call 
the `id` method.
{code:java}
if (key != null) {
    final UpdateRequest updateRequest =
            new UpdateRequest(indexGenerator.generate(row), documentType, key)
                    .doc(document, contentType)
                    .upsert(document, contentType);
    indexer.add(updateRequest);
} else {
    final IndexRequest indexRequest =
            new IndexRequest(indexGenerator.generate(row), documentType)
                    .id(key)
                    .source(document, contentType);
    indexer.add(indexRequest);
}{code}
 

 

  was:
 

The `.id(key)` in the RowElasticsearchEmitter make users get confused/

The following is the source code. key always null, we can never call the `id` 
method.
{code:java}
final IndexRequest indexRequest =
new IndexRequest(indexGenerator.generate(row), documentType)
.id(key)
.source(document, contentType);
indexer.add(indexRequest); {code}
 

 


> Delete useless code in the row emitter.
> --
>
> Key: FLINK-28109
> URL: https://issues.apache.org/jira/browse/FLINK-28109
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: LuNing Wang
>Priority: Major
> Fix For: 1.16.0
>
>
>  
> The `.id(key)` in the RowElasticsearchEmitter makes users confused.
> The following is the source code. The key is always null, so we can never call 
> the `id` method.
> {code:java}
> if (key != null) {
>     final UpdateRequest updateRequest =
>             new UpdateRequest(indexGenerator.generate(row), documentType, key)
>                     .doc(document, contentType)
>                     .upsert(document, contentType);
>     indexer.add(updateRequest);
> } else {
>     final IndexRequest indexRequest =
>             new IndexRequest(indexGenerator.generate(row), documentType)
>                     .id(key)
>                     .source(document, contentType);
>     indexer.add(indexRequest);
> }{code}
>  
>  





[jira] [Commented] (FLINK-28021) Add FLIP-33 metrics to FileSystem connector

2022-06-17 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1708#comment-1708
 ] 

Martijn Visser commented on FLINK-28021:


[~shubham.bansal] I've assigned it to you

> Add FLIP-33 metrics to FileSystem connector
> ---
>
> Key: FLINK-28021
> URL: https://issues.apache.org/jira/browse/FLINK-28021
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Martijn Visser
>Priority: Major
>
> Both the current FileSource and FileSink have no metrics implemented. They 
> should have the FLIP-33 metrics implemented. 





[jira] [Assigned] (FLINK-28021) Add FLIP-33 metrics to FileSystem connector

2022-06-17 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-28021:
--

Assignee: Shubham Bansal

> Add FLIP-33 metrics to FileSystem connector
> ---
>
> Key: FLINK-28021
> URL: https://issues.apache.org/jira/browse/FLINK-28021
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Martijn Visser
>Assignee: Shubham Bansal
>Priority: Major
>
> Both the current FileSource and FileSink have no metrics implemented. They 
> should have the FLIP-33 metrics implemented. 





[jira] [Commented] (FLINK-28021) Add FLIP-33 metrics to FileSystem connector

2022-06-17 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1709#comment-1709
 ] 

Martijn Visser commented on FLINK-28021:


[~shubham.bansal] I think that both https://github.com/apache/flink/pull/16838 
and https://github.com/apache/flink/pull/16875 could potentially help out, 
since those were the PRs that implemented these for Kafka (both Source and Sink)

> Add FLIP-33 metrics to FileSystem connector
> ---
>
> Key: FLINK-28021
> URL: https://issues.apache.org/jira/browse/FLINK-28021
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Martijn Visser
>Assignee: Shubham Bansal
>Priority: Major
>
> Both the current FileSource and FileSink have no metrics implemented. They 
> should have the FLIP-33 metrics implemented. 





[GitHub] [flink] zentol commented on pull request #20003: [FLINK-28080][runtime] Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and SafetyNetWrapperClassLoader

2022-06-17 Thread GitBox


zentol commented on PR #20003:
URL: https://github.com/apache/flink/pull/20003#issuecomment-1158691051

   Why do you need to modify the class loader instead of adding another child 
classloader?





[GitHub] [flink] deadwind4 opened a new pull request, #20004: [FLINK-28109][connector/elasticsearch] Delete useless code in the row emitter

2022-06-17 Thread GitBox


deadwind4 opened a new pull request, #20004:
URL: https://github.com/apache/flink/pull/20004

   ## Brief change log
   
 - *Delete id(key) in the RowElasticsearchEmitter class*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Assigned] (FLINK-28021) Add FLIP-33 metrics to FileSystem connector

2022-06-17 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-28021:
---

Assignee: Shubham Bansal  (was: Shubham Pathak)

> Add FLIP-33 metrics to FileSystem connector
> ---
>
> Key: FLINK-28021
> URL: https://issues.apache.org/jira/browse/FLINK-28021
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Martijn Visser
>Assignee: Shubham Bansal
>Priority: Major
>
> Both the current FileSource and FileSink have no metrics implemented. They 
> should have the FLIP-33 metrics implemented. 





[jira] [Updated] (FLINK-28109) Delete useless code in the row emitter.

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28109:
---
Labels: pull-request-available  (was: )

> Delete useless code in the row emitter.
> --
>
> Key: FLINK-28109
> URL: https://issues.apache.org/jira/browse/FLINK-28109
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: LuNing Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
>  
> The `.id(key)` in the RowElasticsearchEmitter makes users confused.
> The following is the source code. The key is always null, so we can never call 
> the `id` method.
> {code:java}
> if (key != null) {
>     final UpdateRequest updateRequest =
>             new UpdateRequest(indexGenerator.generate(row), documentType, key)
>                     .doc(document, contentType)
>                     .upsert(document, contentType);
>     indexer.add(updateRequest);
> } else {
>     final IndexRequest indexRequest =
>             new IndexRequest(indexGenerator.generate(row), documentType)
>                     .id(key)
>                     .source(document, contentType);
>     indexer.add(indexRequest);
> }{code}
>  
>  





[jira] [Assigned] (FLINK-28021) Add FLIP-33 metrics to FileSystem connector

2022-06-17 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-28021:
---

Assignee: Shubham Pathak  (was: Shubham Bansal)

> Add FLIP-33 metrics to FileSystem connector
> ---
>
> Key: FLINK-28021
> URL: https://issues.apache.org/jira/browse/FLINK-28021
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Martijn Visser
>Assignee: Shubham Pathak
>Priority: Major
>
> Both the current FileSource and FileSink have no metrics implemented. They 
> should have the FLIP-33 metrics implemented. 





[GitHub] [flink-connector-elasticsearch] deadwind4 opened a new pull request, #21: [FLINK-28109][connector/elasticsearch] Delete useless code in the row emitter

2022-06-17 Thread GitBox


deadwind4 opened a new pull request, #21:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/21

   Delete `id(key)` in the RowElasticsearchEmitter class.
   
   The key is always null, which makes users confused.





[GitHub] [flink] flinkbot commented on pull request #20004: [FLINK-28109][connector/elasticsearch] Delete useful code in the row emitter

2022-06-17 Thread GitBox


flinkbot commented on PR #20004:
URL: https://github.com/apache/flink/pull/20004#issuecomment-1158694368

   
   ## CI report:
   
   * 040d7e2a0e2a0bacf44ea0629d6161a93a43cefe UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] lsyldliu commented on pull request #20003: [FLINK-28080][runtime] Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and SafetyNetWrapperClassLoader

2022-06-17 Thread GitBox


lsyldliu commented on PR #20003:
URL: https://github.com/apache/flink/pull/20003#issuecomment-1158696541

   > Why do you need to modify the class loader instead of adding another child 
classloader?
   
   Because we want the classloader to have the abilities of 
`SafetyNetWrapperClassLoader` and `FlinkUserCodeClassLoader` simultaneously: 
the user can decide whether to check for classloader leaks and whether to use 
child-first or parent-first resolution.





[GitHub] [flink] reswqa commented on a diff in pull request #19974: [FLINK-28083][Connector/Pulsar] Object-reusing for Pulsar source

2022-06-17 Thread GitBox


reswqa commented on code in PR #19974:
URL: https://github.com/apache/flink/pull/19974#discussion_r899950625


##
flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/reader/emitter/PulsarRecordEmitter.java:
##
@@ -20,26 +20,69 @@
 
 import org.apache.flink.api.connector.source.SourceOutput;
 import org.apache.flink.connector.base.source.reader.RecordEmitter;
-import org.apache.flink.connector.pulsar.source.reader.message.PulsarMessage;
+import org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema;
 import org.apache.flink.connector.pulsar.source.reader.source.PulsarOrderedSourceReader;
 import org.apache.flink.connector.pulsar.source.reader.source.PulsarUnorderedSourceReader;
 import org.apache.flink.connector.pulsar.source.split.PulsarPartitionSplitState;
+import org.apache.flink.util.Collector;
+
+import org.apache.pulsar.client.api.Message;
 
 /**
  * The {@link RecordEmitter} implementation for both {@link PulsarOrderedSourceReader} and {@link
  * PulsarUnorderedSourceReader}. We would always update the last consumed message id in this
  * emitter.
  */
 public class PulsarRecordEmitter<T>
-        implements RecordEmitter<PulsarMessage<T>, T, PulsarPartitionSplitState> {
+        implements RecordEmitter<Message<T>, T, PulsarPartitionSplitState> {
+
+    private final PulsarDeserializationSchema<T> deserializationSchema;
+    private final SourceOutputWrapper<T> sourceOutputWrapper = new SourceOutputWrapper<>();
+
+    public PulsarRecordEmitter(PulsarDeserializationSchema<T> deserializationSchema) {
+        this.deserializationSchema = deserializationSchema;
+    }
 
     @Override
     public void emitRecord(
-            PulsarMessage<T> element, SourceOutput<T> output, PulsarPartitionSplitState splitState)
+            Message<T> element, SourceOutput<T> output, PulsarPartitionSplitState splitState)
             throws Exception {
-        // Sink the record to source output.
-        output.collect(element.getValue(), element.getEventTime());
-        // Update the split state.
-        splitState.setLatestConsumedId(element.getId());
+        // Update the source output.
+        sourceOutputWrapper.setSourceOutput(output);
+        sourceOutputWrapper.setTimestamp(element);
+
+        deserializationSchema.deserialize(element, sourceOutputWrapper);
+        splitState.setLatestConsumedId(element.getMessageId());
     }
+
+    private static class SourceOutputWrapper<T> implements Collector<T> {
+
+        private SourceOutput<T> sourceOutput;
+        private long timestamp;
+
+        @Override
+        public void collect(T record) {
+            if (timestamp > 0) {
+                sourceOutput.collect(record, timestamp);
+            } else {
+                sourceOutput.collect(record);
+            }
+        }
+
+        @Override
+        public void close() {
+            // Nothing to do here.
+        }
+
+        private void setSourceOutput(SourceOutput<T> sourceOutput) {
+            this.sourceOutput = sourceOutput;
+        }
+
+        /**
+         * Get the event timestamp from Pulsar. Zero means there is no event 
+         * time. See {@link Message#getEventTime()} to get the reason why it 
+         * returns zero.
+         */
+        private void setTimestamp(Message<T> message) {
+            this.timestamp = message.getEventTime();

Review Comment:
   Why use `Message` as the parameter instead of passing the timestamp directly?
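The zero-means-no-event-time convention discussed in this review can be illustrated with a simplified, dependency-free sketch (the types below are stand-ins, not the actual Flink/Pulsar API): the wrapper routes a record to the timestamp-less `collect` overload whenever the event time reported is 0.

```java
import java.util.ArrayList;
import java.util.List;

public class SourceOutputWrapperSketch {

    // Stand-in for Flink's SourceOutput: one overload with an event-time
    // timestamp, one without.
    interface SourceOutput<T> {
        void collect(T record);
        void collect(T record, long timestamp);
    }

    static final class Wrapper<T> {
        private SourceOutput<T> out;
        private long timestamp;

        void setSourceOutput(SourceOutput<T> out) { this.out = out; }

        // Pulsar's Message#getEventTime() returns 0 when no event time was set.
        void setTimestamp(long eventTime) { this.timestamp = eventTime; }

        void collect(T record) {
            if (timestamp > 0) {
                out.collect(record, timestamp); // event time attached
            } else {
                out.collect(record);            // no event time available
            }
        }
    }

    static List<String> demo() {
        List<String> calls = new ArrayList<>();
        SourceOutput<String> out = new SourceOutput<String>() {
            public void collect(String r) { calls.add(r + "@none"); }
            public void collect(String r, long ts) { calls.add(r + "@" + ts); }
        };
        Wrapper<String> wrapper = new Wrapper<>();
        wrapper.setSourceOutput(out);
        wrapper.setTimestamp(0L);  // Pulsar reported no event time
        wrapper.collect("a");
        wrapper.setTimestamp(42L); // Pulsar reported event time 42
        wrapper.collect("b");
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [a@none, b@42]
    }
}
```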






[GitHub] [flink] zentol commented on pull request #20003: [FLINK-28080][runtime] Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and SafetyNetWrapperClassLoader

2022-06-17 Thread GitBox


zentol commented on PR #20003:
URL: https://github.com/apache/flink/pull/20003#issuecomment-1158699288

   What prevents you from creating such a classloader as a child classloader?





[jira] [Commented] (FLINK-27792) InterruptedException thrown by ChannelStateWriterImpl

2022-06-17 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1711#comment-1711
 ] 

Chesnay Schepler commented on FLINK-27792:
--

A potential source is {{ChannelStateWriteRequestExecutorImpl#close}}, which interrupts the internal thread.

> InterruptedException thrown by ChannelStateWriterImpl
> -
>
> Key: FLINK-27792
> URL: https://issues.apache.org/jira/browse/FLINK-27792
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Assignee: Atri Sharma
>Priority: Blocker
>  Labels: test-stability
>
> {code:java}
> 2022-05-25T15:45:17.7584795Z May 25 15:45:17 [ERROR] 
> WindowDistinctAggregateITCase.testTumbleWindow_Rollup  Time elapsed: 1.522 s  
> <<< ERROR!
> 2022-05-25T15:45:17.7586025Z May 25 15:45:17 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-05-25T15:45:17.7587205Z May 25 15:45:17  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-05-25T15:45:17.7588649Z May 25 15:45:17  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
> 2022-05-25T15:45:17.7589984Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-05-25T15:45:17.7603647Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-05-25T15:45:17.7605042Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-05-25T15:45:17.7605750Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-05-25T15:45:17.7606751Z May 25 15:45:17  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:268)
> 2022-05-25T15:45:17.7607513Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-05-25T15:45:17.7608232Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-05-25T15:45:17.7608953Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-05-25T15:45:17.7614259Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-05-25T15:45:17.7615777Z May 25 15:45:17  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1277)
> 2022-05-25T15:45:17.7617284Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-05-25T15:45:17.7618847Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-05-25T15:45:17.7620579Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-05-25T15:45:17.7622674Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-05-25T15:45:17.7624066Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-05-25T15:45:17.7625352Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-05-25T15:45:17.7626524Z May 25 15:45:17  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-05-25T15:45:17.7627743Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-05-25T15:45:17.7628913Z May 25 15:45:17  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-05-25T15:45:17.7629902Z May 25 15:45:17  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-05-25T15:45:17.7630891Z May 25 15:45:17  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-05-25T15:45:17.7632074Z May 25 15:45:17  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-05-25T15:45:17.7654202Z May 25 15:45:17  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-05-25T15:45:17.7655764Z May 25 15:45:17  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-05-25T15:45:17.7657231Z May 25 15:45:17  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-05-25T15:45:17.7658586Z May 25 15:45:17  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$

[jira] [Commented] (FLINK-28103) Job cancelling api returns 404 when job is actually running

2022-06-17 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1713#comment-1713
 ] 

Chesnay Schepler commented on FLINK-28103:
--

Why are you doing a POST request with X-HTTP-Method-Override instead of a plain 
PATCH?
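
For reference, the endpoint in question is `PATCH /jobs/:jobid` with the optional query parameter `mode=cancel`. A minimal sketch of building such a plain PATCH request with Java 11's `HttpClient` API (the host, port, and job id below are placeholders):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CancelRequestSketch {
    // Builds a plain PATCH request against Flink's job termination endpoint.
    static HttpRequest build(String hostAndPort, String jobId) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + hostAndPort + "/jobs/" + jobId + "?mode=cancel"))
                .method("PATCH", HttpRequest.BodyPublishers.noBody())
                .build();
    }

    public static void main(String[] args) {
        // "localhost:8081" and the job id are placeholder values.
        HttpRequest request = build("localhost:8081", "a1b2c3d4e5f6");
        System.out.println(request.method() + " " + request.uri());
        // prints: PATCH http://localhost:8081/jobs/a1b2c3d4e5f6?mode=cancel
    }
}
```

Sending this with `HttpClient.send(...)` would avoid the `X-HTTP-Method-Override` workaround entirely, provided the client library supports the PATCH verb.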

> Job cancelling api returns 404 when job is actually running
> ---
>
> Key: FLINK-28103
> URL: https://issues.apache.org/jira/browse/FLINK-28103
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.13.5
>Reporter: David
>Priority: Major
> Attachments: image-2022-06-17-14-04-44-307.png, 
> image-2022-06-17-14-05-36-264.png
>
>
> The job is still running:
> !image-2022-06-17-14-04-44-307.png!
>  
> but the cancelling api returns 404:
> !image-2022-06-17-14-05-36-264.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-28110) Table Store Hive Reader supports projection pushdown

2022-06-17 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-28110:


 Summary: Table Store Hive Reader supports projection pushdown
 Key: FLINK-28110
 URL: https://issues.apache.org/jira/browse/FLINK-28110
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.2.0


When the user declares fields in the DDL, we may not report an error even if the declared fields are incomplete. In that case we can assume that the user only wants to read the declared fields, which is effectively projection pushdown.
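
As a rough illustration of the idea (not actual Table Store code; all names below are made up for the sketch): treating an incomplete declared schema as a projection amounts to mapping the declared field names onto their indices in the full table schema, and having the reader fetch only those columns:

```java
import java.util.Arrays;
import java.util.List;

public class ProjectionSketch {
    // Maps the fields declared in the (incomplete) DDL onto their indices in the
    // full table schema; a reader would then fetch only these columns.
    static int[] projectedIndices(List<String> fullSchema, List<String> declaredFields) {
        return declaredFields.stream().mapToInt(fullSchema::indexOf).toArray();
    }

    public static void main(String[] args) {
        List<String> fullSchema = Arrays.asList("id", "name", "price", "ts");
        List<String> declared = Arrays.asList("id", "price"); // incomplete DDL
        System.out.println(Arrays.toString(projectedIndices(fullSchema, declared)));
        // prints: [0, 2]
    }
}
```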





[GitHub] [flink] jmd300 opened a new pull request, #20005: Update table_api.md, translate a paragraph to try out the process

2022-06-17 Thread GitBox


jmd300 opened a new pull request, #20005:
URL: https://github.com/apache/flink/pull/20005

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[GitHub] [flink] zhuzhurk commented on pull request #20003: [FLINK-28080][runtime] Introduce MutableURLClassLoader as parent class of FlinkUserClassLoader and SafetyNetWrapperClassLoader

2022-06-17 Thread GitBox


zhuzhurk commented on PR #20003:
URL: https://github.com/apache/flink/pull/20003#issuecomment-1158708046

   Is it possible to add the `FlinkUserCodeClassLoader` to the table module, 
and use it to wrap the existing classloader in the case that a classloader 
needs to be mutated, i.e. in CTAS code paths?
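
   A minimal sketch of the wrapping idea being discussed (the class name `MutableWrapperClassLoader` is illustrative, not Flink's actual implementation): the existing classloader becomes the parent, and `addURL` is widened to public so new jar locations, e.g. artifacts produced by a CTAS statement, can be appended after construction:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Wraps an existing classloader; lookups delegate to it, while addURL()
// allows appending new jar locations after construction.
class MutableWrapperClassLoader extends URLClassLoader {
    MutableWrapperClassLoader(ClassLoader existing) {
        super(new URL[0], existing);
    }

    @Override
    public void addURL(URL url) { // widen visibility from protected to public
        super.addURL(url);
    }
}

public class WrapperDemo {
    public static void main(String[] args) throws Exception {
        MutableWrapperClassLoader loader =
                new MutableWrapperClassLoader(WrapperDemo.class.getClassLoader());
        // Delegation to the wrapped loader (and its parents) still works:
        System.out.println(loader.loadClass("java.util.ArrayList").getName());
        // prints: java.util.ArrayList
    }
}
```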




