[jira] [Created] (FLINK-36919) Add missing dropTable/dropView methods to TableEnvironment

2024-12-17 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-36919:
---

 Summary: Add missing dropTable/dropView methods to TableEnvironment
 Key: FLINK-36919
 URL: https://issues.apache.org/jira/browse/FLINK-36919
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


 FLIP-489: Add missing dropTable/dropView methods to TableEnvironment 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=332499781
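For context, here is a minimal sketch of the drop semantics the FLIP appears to propose, modeled with a plain map instead of a real catalog. The boolean-returning behavior (true on success, false if the object does not exist) follows the TableEnvironmentImpl change quoted in the review thread later in this digest; the class itself and everything else here are illustrative assumptions, not Flink API.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of the proposed TableEnvironment#dropTable/#dropView semantics:
 * dropping returns true on success and false when the object does not exist,
 * instead of throwing an exception.
 */
public class DropApiSketch {
    private final Map<String, String> objects = new HashMap<>();

    public void createView(String path) {
        objects.put(path, "VIEW");
    }

    public boolean dropView(String path) {
        // A missing view yields false rather than an exception, mirroring the
        // try/catch around catalogManager.dropView in the quoted PR.
        return objects.remove(path) != null;
    }

    public static void main(String[] args) {
        DropApiSketch env = new DropApiSketch();
        env.createView("default_catalog.default_database.v1");
        System.out.println(env.dropView("default_catalog.default_database.v1")); // true
        System.out.println(env.dropView("default_catalog.default_database.v1")); // false
    }
}
```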



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36918][table] Introduce ProcTimeSortOperator in TemporalSort with Async State API [flink]

2024-12-17 Thread via GitHub


flinkbot commented on PR #25807:
URL: https://github.com/apache/flink/pull/25807#issuecomment-2547735581

   
   ## CI report:
   
   * 4dce496a2343eb22a24b6996f3967b6e46da6b5d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36889] Mention locking down a Flink cluster in the 'Production Readiness Checklist' [flink]

2024-12-17 Thread via GitHub


Samrat002 commented on PR #25793:
URL: https://github.com/apache/flink/pull/25793#issuecomment-2547740879

   @rmetzger Please review whenever you have time.





[jira] [Assigned] (FLINK-36916) Unexpected error in type inference logic of function 'TYPEOF' for ROW

2024-12-17 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-36916:
---

Assignee: Yiyu Tian

> Unexpected error in type inference logic of function 'TYPEOF' for ROW
> -
>
> Key: FLINK-36916
> URL: https://issues.apache.org/jira/browse/FLINK-36916
> Project: Flink
>  Issue Type: Bug
>Reporter: Yiyu Tian
>Assignee: Yiyu Tian
>Priority: Major
>
> {{SELECT TYPEOF(CAST((1,2) AS ROW));}}
> results in {{Unexpected error in type inference logic of function 'TYPEOF'. 
> This is a bug.}}





Re: [PR] [FLINK-36879][runtime] Support to convert delete as insert in transform [flink-cdc]

2024-12-17 Thread via GitHub


yuxiqian commented on code in PR #3804:
URL: https://github.com/apache/flink-cdc/pull/3804#discussion_r1888029795


##
flink-cdc-cli/src/main/java/org/apache/flink/cdc/cli/parser/YamlPipelineDefinitionParser.java:
##
@@ -78,6 +78,8 @@ public class YamlPipelineDefinitionParser implements PipelineDefinitionParser {
 private static final String TRANSFORM_PROJECTION_KEY = "projection";
 private static final String TRANSFORM_FILTER_KEY = "filter";
 private static final String TRANSFORM_DESCRIPTION_KEY = "description";
+private static final String TRANSFORM_CONVERTOR_AFTER_TRANSFORM_KEY =

Review Comment:
   Seems "converter" is the more commonly used spelling.



##
flink-cdc-cli/src/main/java/org/apache/flink/cdc/cli/parser/YamlPipelineDefinitionParser.java:
##
@@ -316,6 +318,10 @@ private TransformDef toTransformDef(JsonNode transformNode) {
 
Optional.ofNullable(transformNode.get(TRANSFORM_DESCRIPTION_KEY))
 .map(JsonNode::asText)
 .orElse(null);
+String convertorAfterTransform =
+
Optional.ofNullable(transformNode.get(TRANSFORM_CONVERTOR_AFTER_TRANSFORM_KEY))
+.map(JsonNode::asText)
+.orElse(null);

Review Comment:
   Is it possible to make this a list, so users can apply more than one converter sequentially?



##
flink-cdc-cli/src/main/java/org/apache/flink/cdc/cli/parser/YamlPipelineDefinitionParser.java:
##
@@ -316,6 +318,10 @@ private TransformDef toTransformDef(JsonNode transformNode) {
 
Optional.ofNullable(transformNode.get(TRANSFORM_DESCRIPTION_KEY))
 .map(JsonNode::asText)
 .orElse(null);
+String convertorAfterTransform =
+
Optional.ofNullable(transformNode.get(TRANSFORM_CONVERTOR_AFTER_TRANSFORM_KEY))

Review Comment:
   What about naming this option "post-transform converters"? That seems reasonable, since it takes effect in the `PostTransform` operator.



##
flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/operators/transform/convertor/TransformConvertor.java:
##
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.runtime.operators.transform.convertor;
+
+import org.apache.flink.cdc.common.event.DataChangeEvent;
+import org.apache.flink.cdc.common.utils.StringUtils;
+import org.apache.flink.cdc.runtime.operators.transform.PostTransformOperator;
+import org.apache.flink.cdc.runtime.operators.transform.TransformRule;
+
+import java.io.Serializable;
+import java.util.Optional;
+
+/**
+ * A TransformConvertor converts the {@link DataChangeEvent} after the other parts of the
+ * {@link TransformRule} have been applied in the {@link PostTransformOperator}.
+ */
+public interface TransformConvertor extends Serializable {

Review Comment:
   If users are supposed to write their own converters, this interface should 
be moved to `flink-cdc-common` and be marked with `@PublicEvolving`.



##
flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/operators/transform/convertor/TransformConvertor.java:
##
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.runtime.operators.transform.convertor;
+
+import org.apache.flink.cdc.common.event.DataChangeEvent;
+import org.apache.flink.cdc.common.utils.StringUtils;
+impo

Re: [PR] Bump ws and socket.io in /flink-runtime-web/web-dashboard [flink]

2024-12-17 Thread via GitHub


dependabot[bot] commented on PR #25741:
URL: https://github.com/apache/flink/pull/25741#issuecomment-2547855684

   Looks like these dependencies are up-to-date now, so this is no longer 
needed.





Re: [PR] Bump ws and socket.io in /flink-runtime-web/web-dashboard [flink]

2024-12-17 Thread via GitHub


dependabot[bot] closed pull request #25741: Bump ws and socket.io in 
/flink-runtime-web/web-dashboard
URL: https://github.com/apache/flink/pull/25741





Re: [PR] Bump d3-color, @antv/g-base and d3-interpolate in /flink-runtime-web/web-dashboard [flink]

2024-12-17 Thread via GitHub


dependabot[bot] commented on PR #25740:
URL: https://github.com/apache/flink/pull/25740#issuecomment-2547855664

   Looks like these dependencies are up-to-date now, so this is no longer 
needed.





[jira] [Commented] (FLINK-36912) Shaded Hadoop S3A with credentials provider failed on AZP

2024-12-17 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17906330#comment-17906330
 ] 

Weijie Guo commented on FLINK-36912:


> This was an infrastructure issue that was fixed by Ververica yesterday.

Great! Thanks for tracking this.

> Shaded Hadoop S3A with credentials provider failed on AZP
> -
>
> Key: FLINK-36912
> URL: https://issues.apache.org/jira/browse/FLINK-36912
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 2.0.0
>Reporter: Weijie Guo
>Priority: Major
>
> Shaded Hadoop S3A with credentials provider end-to-end test failed as 
> FileNotFoundException
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=64384&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=10208
> {code:java}
> Dec 16 08:18:13   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
>  ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) 
> ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) 
> ~[?:?]
> Dec 16 08:18:13   at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
>  ~[?:?]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229) 
> ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:272) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:233) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:245) ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
>  ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
>  ~[?:?]
>  {code}





Re: [PR] Bump d3-color, @antv/g-base and d3-interpolate in /flink-runtime-web/web-dashboard [flink]

2024-12-17 Thread via GitHub


dependabot[bot] closed pull request #25740: Bump d3-color, @antv/g-base and 
d3-interpolate in /flink-runtime-web/web-dashboard
URL: https://github.com/apache/flink/pull/25740





Re: [PR] Bump cookie and socket.io in /flink-runtime-web/web-dashboard [flink]

2024-12-17 Thread via GitHub


dependabot[bot] closed pull request #25751: Bump cookie and socket.io in 
/flink-runtime-web/web-dashboard
URL: https://github.com/apache/flink/pull/25751





[PR] Bump com.nimbusds:nimbus-jose-jwt from 4.41.1 to 9.37.2 in /flink-end-to-end-tests/flink-end-to-end-tests-sql [flink]

2024-12-17 Thread via GitHub


dependabot[bot] opened a new pull request, #25808:
URL: https://github.com/apache/flink/pull/25808

   Bumps 
[com.nimbusds:nimbus-jose-jwt](https://bitbucket.org/connect2id/nimbus-jose-jwt)
 from 4.41.1 to 9.37.2.
   
   Changelog
   Sourced from com.nimbusds:nimbus-jose-jwt's changelog:
   https://bitbucket.org/connect2id/nimbus-jose-jwt/src/master/CHANGELOG.txt
   
   version 1.0 (2012-03-01)
   
   First version based on the OpenInfoCard JWT, JWS and JWE code base.
   
   version 1.1 (2012-03-06)
   
   Introduces type-safe enumeration of the JSON Web Algorithms (JWA).
   Refactors the JWT class.
   
   version 1.2 (2012-03-08)
   
   Moves JWS and JWE code into separate classes.
   
   version 1.3 (2012-03-09)
   
   Switches to Apache Commons Codec for Base64URL encoding and decoding
   Consolidates the crypto utilities within the package.
   Introduces a JWT content serialiser class.
   
   version 1.4 (2012-03-09)
   
   Refactoring of JWT class and JUnit tests.
   
   version 1.5 (2012-03-18)
   
   Switches to JSON Smart for JSON serialisation and parsing.
   Introduces claims set class with JSON objects, string, Base64URL and
   byte array views.
   
   version 1.6 (2012-03-20)
   
   Creates class for representing, serialising and parsing JSON Web Keys
   (JWK).
   Introduces separate class for representing JWT headers.
   
   version 1.7 (2012-04-01)
   
   Introduces separate classes for plain, JWS and JWE headers.
   Introduces separate classes for plain, signed and encrypted JWTs.
   Removes the JWTContent class.
   Removes password-based (PE820) encryption support.
   
   version 1.8 (2012-04-03)
   
   Adds support for the ZIP JWE header parameter.
   Removes unsupported algorithms from the JWA enumeration.
   
   version 1.9 (2012-04-03)
   
   Renames JWEHeader.{get|set}EncryptionAlgorithm() to
   JWEHeader.{get|set}EncryptionMethod().
   
   version 1.9.1 (2012-04-03)
   
   Upgrades JSON Smart JAR to 1.1.1.
   
   version 1.10 (2012-04-14)
   
   Introduces serialize() method to base abstract JWT class.
   
   version 1.11 (2012-05-13)
   
   JWT.serialize() throws checked JWTException instead of
   
   
   
   ... (truncated)
   
   
   Commits
   
   d91be4c [maven-release-plugin] prepare release 9.33
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/d91be4cc572d828fc16f2388fd60e729e62efade
   9a277ea [maven-release-plugin] prepare for next development iteration
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/9a277ea957884bbd4388c1611b56ad618c51f976
   c695b11 Fixes the MACSigner.sign method for SecretKey instances that don't expose the...
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/c695b11a3e3fdaa73058e84cb305595de6720be5
   45f15d1 Updates the MACVerifier to support SecretKey instances don't expose the key m...
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/45f15d1e2733502face534f914908c382888bc6b
   e965e96 [maven-release-plugin] prepare release 9.34
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/e965e9603d239d037512449bc8ad850cf2fdf6b7
   8d67f6c [maven-release-plugin] prepare for next development iteration
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/8d67f6c3fdf6af4ed4473b4a81968328bbe08726
   f64e094 Makes the abstract class BaseJWEProvider public (issue #521: https://bitbucket.org/connect2id/nimbus-jose-jwt/issues/521)
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/f64e094030ab82659dbfaea8c489cc56291539cf
   ad6fed3 [maven-release-plugin] prepare release 9.35
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/ad6fed330a6bc5dbcb343aafd085ffd0d15c07d7
   81c7f24 [maven-release-plugin] prepare for next development iteration
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/81c7f24cc8a49f0f87c530e50d750bb1db22b4a8
   24aaaf0 Bumps jacoco-maven-plugin to 0.8.10
   https://bitbucket.org/connect2id/nimbus-jose-jwt/commits/24aaaf02edf5d1ae4cc449b3d81a9151f26953dc
   Additional commits viewable in the compare view:
   https://bitbucket.org/connect2id/nimbus-jose-jwt/branches/compare/9.37.2..4.41.1
   
   
   
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=com.nimbusds:nimbus-jose-jwt&package-manager=maven&previous-version=4.41.1&new-version=9.37.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it

Re: [PR] [FLINK-36889] Mention locking down a Flink cluster in the 'Production Readiness Checklist' [flink]

2024-12-17 Thread via GitHub


rmetzger commented on code in PR #25793:
URL: https://github.com/apache/flink/pull/25793#discussion_r1888139433


##
docs/content/docs/ops/production_ready.md:
##
@@ -81,4 +81,11 @@ It is a single point of failure within the cluster, and if 
it crashes, no new jo
 
 Configuring [High Availability]({{< ref "docs/deployment/ha/overview" >}}), in 
conjunction with Apache Zookeeper or Flinks Kubernetes based service, allows 
for a swift recovery and is highly recommended for production setups.
 
-{{< top >}}
\ No newline at end of file
+### Secure Flink Cluster Access
+
+To prevent potential security vulnerabilities, such as arbitrary code 
execution, 

Review Comment:
   I think the language here can be more explicit. Arbitrary code execution is 
not a potential vulnerability.
   Flink has been designed for arbitrary, remote code execution -- it is not an 
accident ;) 
   Check also this FAQ entry for how we've worded this in the past: 
https://flink.apache.org/what-is-flink/security/#during-a-security-analysis-of-flink-i-noticed-that-flink-allows-for-remote-code-execution-is-this-an-issue






[jira] [Created] (FLINK-36920) Update org.quartz-schedule:quartz

2024-12-17 Thread Anupam Aggarwal (Jira)
Anupam Aggarwal created FLINK-36920:
---

 Summary: Update org.quartz-schedule:quartz
 Key: FLINK-36920
 URL: https://issues.apache.org/jira/browse/FLINK-36920
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Affects Versions: 1.10.0
Reporter: Anupam Aggarwal


Update dependency on org.quartz-scheduler:quartz used in flink-autoscaler 
module from 2.3.2 to 2.4.0

 

*Vulnerability info:*
cve-2023-39017

quartz-jobs 2.3.2 and below was discovered to contain a code injection 
vulnerability in the component 
org.quartz.jobs.ee.jms.SendQueueMessageJob.execute. This vulnerability is 
exploited via passing an unchecked argument. NOTE: this is disputed by multiple 
parties because it is not plausible that untrusted user input would reach the 
code location where injection must occur.

More details are at: [https://nvd.nist.gov/vuln/detail/cve-2023-39017] 

*Proposed fix*
Bumping the dependency from 2.3.2 to 2.4.0 





Re: [PR] [FLINK-36740] [WebFrontend] Update frontend dependencies to address vulnerabilities [flink]

2024-12-17 Thread via GitHub


rmetzger merged PR #25718:
URL: https://github.com/apache/flink/pull/25718





[jira] [Resolved] (FLINK-36740) Update frontend dependencies to address vulnerabilities

2024-12-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-36740.

Resolution: Fixed

Merged to master (2.0.0) in 
https://github.com/apache/flink/commit/1c2ec0664fc2d63d09651e940a36ddb6d6516f9f.

> Update frontend dependencies to address vulnerabilities
> ---
>
> Key: FLINK-36740
> URL: https://issues.apache.org/jira/browse/FLINK-36740
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 2.0.0, 1.20.0
>Reporter: Mehdi
>Assignee: Mehdi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>






Re: [PR] Bump com.nimbusds:nimbus-jose-jwt from 4.41.1 to 9.37.2 in /flink-end-to-end-tests/flink-end-to-end-tests-sql [flink]

2024-12-17 Thread via GitHub


flinkbot commented on PR #25808:
URL: https://github.com/apache/flink/pull/25808#issuecomment-2547869110

   
   ## CI report:
   
   * 6cd0e581df8053fae08c710dd126a0bdbd2881f8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] Bump cookie and socket.io in /flink-runtime-web/web-dashboard [flink]

2024-12-17 Thread via GitHub


dependabot[bot] commented on PR #25751:
URL: https://github.com/apache/flink/pull/25751#issuecomment-2547855649

   Looks like these dependencies are up-to-date now, so this is no longer 
needed.





Re: [PR] [FLINK-27355][runtime] Unregister JobManagerRunner after it's closed [flink]

2024-12-17 Thread via GitHub


XComp commented on code in PR #25027:
URL: https://github.com/apache/flink/pull/25027#discussion_r1888127410


##
flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/DefaultJobManagerRunnerRegistry.java:
##
@@ -83,9 +83,16 @@ public Collection<JobManagerRunner> getJobManagerRunners() {
 }
 
 @Override
-    public CompletableFuture<Void> localCleanupAsync(JobID jobId, Executor unusedExecutor) {
+    public CompletableFuture<Void> localCleanupAsync(
+            JobID jobId, Executor ignoredExecutor, Executor mainThreadExecutor) {
         if (isRegistered(jobId)) {
-            return unregister(jobId).closeAsync();
+            CompletableFuture<Void> resultFuture = this.jobManagerRunners.get(jobId).closeAsync();
+
+            return resultFuture.thenApplyAsync(
+                    result -> {
+                        mainThreadExecutor.execute(() -> unregister(jobId));
+                        return result;
+                    });

Review Comment:
   Uh, that was a tricky one, I have to admit. The cause is that the 
[TestingResourceCleanerFactory#createResourceCleaner](https://github.com/apache/flink/blob/28791382c37b77a7c8db550a54e50fc50d591e35/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/cleanup/TestingResourceCleanerFactory.java#L81)
 provides a "buggy"/not-well-implemented `ResourceCleaner` implementation.
   
   The `ResourceCleaner` is supposed to be called on the main thread. There 
shouldn't be any blocking functionality (because that would block other calls 
on the main thread). Instead, we should make the test implementation async as 
well:
   ```java
   private <T> ResourceCleaner createResourceCleaner(
           ComponentMainThreadExecutor mainThreadExecutor,
           Collection<T> resources,
           DefaultResourceCleaner.CleanupFn<T> cleanupFn) {
       return jobId -> {
           mainThreadExecutor.assertRunningInMainThread();

           final Collection<CompletableFuture<?>> asyncCleanupCallbackResults =
                   resources.stream()
                           .map(resource -> cleanupFn.cleanupAsync(resource, jobId, cleanupExecutor))
                           .collect(Collectors.toList());

           return FutureUtils.completeAll(asyncCleanupCallbackResults);
       };
   }
   ```
   Changing the code to the one above fixes the issue with the 
`testNotArchivingSuspendedJobToHistoryServer` test.



##
flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/DefaultJobManagerRunnerRegistry.java:
##
@@ -83,9 +83,16 @@ public Collection<JobManagerRunner> getJobManagerRunners() {
 }
 
 @Override
-    public CompletableFuture<Void> localCleanupAsync(JobID jobId, Executor unusedExecutor) {
+    public CompletableFuture<Void> localCleanupAsync(
+            JobID jobId, Executor ignoredExecutor, Executor mainThreadExecutor) {
         if (isRegistered(jobId)) {
-            return unregister(jobId).closeAsync();
+            CompletableFuture<Void> resultFuture = this.jobManagerRunners.get(jobId).closeAsync();
+
+            return resultFuture.thenApplyAsync(
+                    result -> {
+                        mainThreadExecutor.execute(() -> unregister(jobId));
+                        return result;
+                    });

Review Comment:
   The difference between your previous version of the code (using 
`thenApplyAsync`) and the changed code (using `thenRunAsync`) is a special 
(kind of annoying) issue with `CompletableFutures`:
   
   The async `unregister` call is preceded by the 
[TestingJobManagerRunner#closeAsync](https://github.com/apache/flink/blob/153614716767d5fb8be1d4caa5e72167ce64a9d7/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/TestingJobManagerRunner.java#L132)
 which is set up to be non-blocking. That means that the method's 
`CompletableFuture` is completed before returning.
   
There's a different runtime behavior between operations chained onto an already-completed future and onto a not-yet-completed future. In the former case, the chained operation is executed right away on the calling thread. In the latter case, the chained operation is scheduled for execution.
   
   In our scenario (see 
[DefaultJobManagerRunnerRegistry#localCleanupAsync](https://github.com/apache/flink/blob/e37d9c9b7b649030d9cb5d3ba6540a006aa0aa23/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/DefaultJobManagerRunnerRegistry.java#L91)),
 the `closeAsync` call returns the already-completed future. Keep in mind that 
this happens on the component's main thread! So, what happens with the chained 
operation?
   * with `thenApplyAsync`: The `mainThreadExecutor#execute` call is called 
asynchronously (no executor is specified, so the JVM common thread pool is 
used). The `unregister` callback will be scheduled on the main thread again. 
But `localCleanupAsync` returns the future of the `thenApplyAsync` which 
completes as soon as the `mainThreadExecutor#execute` returns (
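The already-completed vs. not-yet-completed distinction described above can be reproduced in isolation. This is a standalone sketch (unrelated to Flink's executors) showing that a non-async continuation on a completed future runs on the calling thread, while the `*Async` variant is handed to the common pool even though the future is already done:

```java
import java.util.concurrent.CompletableFuture;

public class ChainOnCompletedFuture {
    public static void main(String[] args) {
        String caller = Thread.currentThread().getName();

        // A non-async continuation chained onto an ALREADY-completed future
        // runs immediately on the thread doing the chaining.
        CompletableFuture<String> done = CompletableFuture.completedFuture("ok");
        String syncThread = done.thenApply(v -> Thread.currentThread().getName()).join();
        System.out.println(syncThread.equals(caller)); // true

        // The *Async variant hands the continuation to the common pool even
        // though the future is already complete, so it typically runs on a
        // different thread -- the subtle difference discussed in the review.
        String asyncThread = done.thenApplyAsync(v -> Thread.currentThread().getName()).join();
        System.out.println(syncThread + " vs " + asyncThread);
    }
}
```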

Re: [PR] [FLINK-36919][table] Add missing dropTable/dropView methods to TableEnvironment [flink]

2024-12-17 Thread via GitHub


snuyanzin commented on code in PR #25810:
URL: https://github.com/apache/flink/pull/25810#discussion_r1888463167


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java:
##
@@ -661,6 +673,18 @@ public boolean dropTemporaryView(String path) {
 }
 }
 
+    @Override
+    public boolean dropView(String path) {
+        UnresolvedIdentifier unresolvedIdentifier = getParser().parseIdentifier(path);
+        ObjectIdentifier identifier = catalogManager.qualifyIdentifier(unresolvedIdentifier);
+        try {
+            catalogManager.dropView(identifier, false);
+            return true;
+        } catch (ValidationException e) {
+            return false;

Review Comment:
   >I guess the before the fix, this exception would have bubbled up the table 
API.
   
   Can you elaborate on what you mean by this? Before this PR the method didn't exist, so it could not bubble up.






Re: [PR] [FLINK-36889] Mention locking down a Flink cluster in the 'Production Readiness Checklist' [flink]

2024-12-17 Thread via GitHub


rmetzger merged PR #25793:
URL: https://github.com/apache/flink/pull/25793





[jira] [Resolved] (FLINK-36889) Mention locking down a Flink cluster in the 'Production Readiness Checklist'

2024-12-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-36889.

Fix Version/s: 2.0.0
   Resolution: Fixed

Merged to master (2.0.0) in 
https://github.com/apache/flink/commit/473a3e8a7b36ab3c4c9422fdc7da5dae60451f13

> Mention locking down a Flink cluster in the 'Production Readiness Checklist'
> 
>
> Key: FLINK-36889
> URL: https://issues.apache.org/jira/browse/FLINK-36889
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Robert Metzger
>Assignee: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> The Flink PMC often receives vulnerability reports about arbitrary code 
> execution vulnerabilities in Flink. We therefore added an entry into the 
> security FAQ page: 
> [https://flink.apache.org/what-is-flink/security/#during-a-security-analysis-of-flink-i-noticed-that-flink-allows-for-remote-code-execution-is-this-an-issue]
> Still, people seem to run into this issue. To raise awareness for the issue, 
> we should also add a note to the 'Production Readiness Checklist' to make 
> sure that Flink clusters should only be accessible to trusted users, and not 
> the whole company intranet or even the public internet.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36706] Refactor TypeInferenceExtractor for PTFs [flink]

2024-12-17 Thread via GitHub


twalthr commented on PR #25805:
URL: https://github.com/apache/flink/pull/25805#issuecomment-2548723724

   @flinkbot run azure





Re: [PR] [FLINK-36496][table]Remove deprecated method `CatalogFunction#isGeneric` [flink]

2024-12-17 Thread via GitHub


davidradl commented on code in PR #25595:
URL: https://github.com/apache/flink/pull/25595#discussion_r1888720082


##
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogTestBase.java:
##
@@ -177,20 +177,8 @@ protected Map getStreamingTableProperties() {
     return new HashMap() {
         {
             put(IS_STREAMING, "true");
-            putAll(getGenericFlag(isGeneric()));
+            put(FactoryUtil.CONNECTOR.key(), "hive");

Review Comment:
   I wonder why we have introduced the keyword hive into a non-hive specific 
module. Can we move the hive specific reference into a hive module as part of 
this change? I assume setting this as hive is misleading and not useful for a 
non-hive use case.






Re: [PR] [FLINK-36496][table]Remove deprecated method `CatalogFunction#isGeneric` [flink]

2024-12-17 Thread via GitHub


davidradl commented on code in PR #25595:
URL: https://github.com/apache/flink/pull/25595#discussion_r1888720082


##
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogTestBase.java:
##
@@ -177,20 +177,8 @@ protected Map getStreamingTableProperties() {
     return new HashMap() {
         {
             put(IS_STREAMING, "true");
-            putAll(getGenericFlag(isGeneric()));
+            put(FactoryUtil.CONNECTOR.key(), "hive");

Review Comment:
Can we move the hive specific reference into a hive module as part of this 
change? I assume setting this as hive is misleading and not useful for a 
non-hive use case.



##
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogTestBase.java:
##
@@ -168,7 +168,7 @@ protected Map getBatchTableProperties() {
     return new HashMap() {
         {
             put(IS_STREAMING, "false");
-            putAll(getGenericFlag(isGeneric()));
+            put(FactoryUtil.CONNECTOR.key(), "hive");

Review Comment:
   Can we move the hive specific reference into a hive module as part of this 
change? I assume setting this as hive is misleading and not useful for a 
non-hive use case.






Re: [PR] [hotfix][configuration] Remove the deprecated test class TaskManagerLoadBalanceModeTest. [flink]

2024-12-17 Thread via GitHub


RocMarshal commented on code in PR #25800:
URL: https://github.com/apache/flink/pull/25800#discussion_r1888725463


##
flink-core/src/test/java/org/apache/flink/configuration/TaskManagerLoadBalanceModeTest.java:
##
@@ -1,49 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.configuration;
-
-import org.junit.jupiter.api.Test;
-
-import static org.apache.flink.configuration.TaskManagerOptions.TASK_MANAGER_LOAD_BALANCE_MODE;
-import static org.apache.flink.configuration.TaskManagerOptions.TaskManagerLoadBalanceMode;
-import static org.assertj.core.api.Assertions.assertThat;
-
-/** Test for {@link TaskManagerLoadBalanceMode}. */
-class TaskManagerLoadBalanceModeTest {
-
-@Test
-void testReadTaskManagerLoadBalanceMode() {

Review Comment:
   thanks @davidradl  for the comments.
   Please let me clarify the background of the test class.
   
   The test was initially introduced to verify whether the new configuration 
method is compatible with the old one. Since the old configuration method has 
now been removed, the logic for determining the value of the current 
configuration item has become much simpler and more straightforward. In other 
words, continuing to keep this test case would be somewhat redundant.
   
   WDYTA?






Re: [PR] [FLINK-36496][table]Remove deprecated method `CatalogFunction#isGeneric` [flink]

2024-12-17 Thread via GitHub


davidradl commented on code in PR #25595:
URL: https://github.com/apache/flink/pull/25595#discussion_r1888721096


##
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogTestBase.java:
##
@@ -168,7 +168,7 @@ protected Map getBatchTableProperties() {
     return new HashMap() {
         {
             put(IS_STREAMING, "false");
-            putAll(getGenericFlag(isGeneric()));
+            put(FactoryUtil.CONNECTOR.key(), "hive");

Review Comment:
   Can we move the hive specific reference into a hive module as part of this 
change? I assume setting this as hive is misleading and not useful for a 
non-hive use case.



##
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogTestBase.java:
##
@@ -177,20 +177,8 @@ protected Map getStreamingTableProperties() {
     return new HashMap() {
         {
             put(IS_STREAMING, "true");
-            putAll(getGenericFlag(isGeneric()));
+            put(FactoryUtil.CONNECTOR.key(), "hive");

Review Comment:
Can we move the hive specific reference into a hive module as part of this 
change? I assume setting this as hive is misleading and not useful for a 
non-hive use case.






[jira] [Commented] (FLINK-35721) I found out that in the Flink SQL documentation it says that Double type cannot be converted to Boolean type, but in reality, it can.

2024-12-17 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17906445#comment-17906445
 ] 

Alexander Fedulov commented on FLINK-35721:
---

I verified on 1.19.1 that casts work as expected using the following statements:

> SELECT CAST(CAST(0.0 AS DECIMAL) AS BOOLEAN) AS decimal_zero,
>        CAST(CAST(1.0 AS DECIMAL) AS BOOLEAN) AS decimal_nonzero;

> SELECT CAST(CAST(0.0 AS FLOAT) AS BOOLEAN) AS float_zero,
>        CAST(CAST(3.14 AS FLOAT) AS BOOLEAN) AS float_nonzero;

> SELECT CAST(CAST(0.0 AS DOUBLE) AS BOOLEAN) AS double_zero,
>        CAST(CAST(2.718 AS DOUBLE) AS BOOLEAN) AS double_nonzero;

[~lucas_jin] could you please open a PR with the documentation fixes for 
DECIMAL, DOUBLE, FLOAT -> BOOLEAN conversions? 
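
The semantics verified by the statements above can be sketched outside SQL. This 
assumes, per the NumericToBooleanCastRule referenced in the issue, that the 
numeric-to-BOOLEAN cast maps zero to false and any nonzero value to true; it is 
an illustrative sketch, not Flink code.

```java
public class NumericToBooleanSketch {
    // Assumed semantics of the numeric -> BOOLEAN cast:
    // 0 becomes false, any nonzero value becomes true.
    static boolean castToBoolean(double value) {
        return value != 0.0;
    }

    public static void main(String[] args) {
        System.out.println(castToBoolean(0.0));   // false, like CAST(0.0 AS BOOLEAN)
        System.out.println(castToBoolean(3.14));  // true, like CAST(3.14 AS BOOLEAN)
    }
}
```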

> I found out that in the Flink SQL documentation it says that Double type 
> cannot be converted to Boolean type, but in reality, it can.
> -
>
> Key: FLINK-35721
> URL: https://issues.apache.org/jira/browse/FLINK-35721
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.1
>Reporter: jinzhuguang
>Priority: Minor
> Fix For: 1.19.2
>
> Attachments: image-2024-06-28-16-57-54-354.png
>
>
> I found out that in the Flink SQL documentation it says that Double type 
> cannot be converted to Boolean type, but in reality, it can.
> Related code: 
> org.apache.flink.table.planner.functions.casting.NumericToBooleanCastRule#generateExpression
> Related document URL: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/types/]
> !image-2024-06-28-16-57-54-354.png|width=378,height=342!





[jira] [Comment Edited] (FLINK-35721) I found out that in the Flink SQL documentation it says that Double type cannot be converted to Boolean type, but in reality, it can.

2024-12-17 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17906445#comment-17906445
 ] 

Alexander Fedulov edited comment on FLINK-35721 at 12/17/24 3:40 PM:
-

I verified on 1.19.1 that casts work as expected using the following statements:

> SELECT CAST(CAST(0.0 AS DECIMAL) AS BOOLEAN) AS decimal_zero,
>        CAST(CAST(1.0 AS DECIMAL) AS BOOLEAN) AS decimal_nonzero;

> SELECT CAST(CAST(0.0 AS FLOAT) AS BOOLEAN) AS float_zero,
>        CAST(CAST(3.14 AS FLOAT) AS BOOLEAN) AS float_nonzero;

> SELECT CAST(CAST(0.0 AS DOUBLE) AS BOOLEAN) AS double_zero,
>        CAST(CAST(3.14 AS DOUBLE) AS BOOLEAN) AS double_nonzero;

[~lucas_jin] could you please open a PR with the documentation fixes for 
DECIMAL, DOUBLE, FLOAT -> BOOLEAN conversions? 


was (Author: afedulov):
I verified on 1.19.1 that casts work as expected using the following statements:

> SELECT CAST(CAST(0.0 AS DECIMAL) AS BOOLEAN) AS decimal_zero,
>        CAST(CAST(1.0 AS DECIMAL) AS BOOLEAN) AS decimal_nonzero;

> SELECT CAST(CAST(0.0 AS FLOAT) AS BOOLEAN) AS float_zero,
>        CAST(CAST(3.14 AS FLOAT) AS BOOLEAN) AS float_nonzero;

> SELECT CAST(CAST(0.0 AS DOUBLE) AS BOOLEAN) AS double_zero,
>        CAST(CAST(2.718 AS DOUBLE) AS BOOLEAN) AS double_nonzero;

[~lucas_jin] could you please open a PR with the documentation fixes for 
DECIMAL, DOUBLE, FLOAT -> BOOLEAN conversions? 

> I found out that in the Flink SQL documentation it says that Double type 
> cannot be converted to Boolean type, but in reality, it can.
> -
>
> Key: FLINK-35721
> URL: https://issues.apache.org/jira/browse/FLINK-35721
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.1
>Reporter: jinzhuguang
>Priority: Minor
> Fix For: 1.19.2
>
> Attachments: image-2024-06-28-16-57-54-354.png
>
>
> I found out that in the Flink SQL documentation it says that Double type 
> cannot be converted to Boolean type, but in reality, it can.
> Related code: 
> org.apache.flink.table.planner.functions.casting.NumericToBooleanCastRule#generateExpression
> Related document URL: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/types/]
> !image-2024-06-28-16-57-54-354.png|width=378,height=342!





[jira] [Commented] (FLINK-36916) Unexpected error in type inference logic of function 'TYPEOF' for ROW

2024-12-17 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17906419#comment-17906419
 ] 

Sergey Nuyanzin commented on FLINK-36916:
-

FYI the root cause of this task seems to be the lack of support for column lists.
If you try to debug it you will see that in this case there will be an attempt 
to use COLUMN_LIST; however, it is unsupported by FlinkTypeFactory:
https://github.com/apache/flink/blob/c7d8515c0285fc0019571a1f637377630c5a06fa/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeFactory.scala#L396-L400

This kind of support is going to be added as part of FLIP-440:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=298781093#FLIP440:UserdefinedSQLoperators/ProcessTableFunction(PTF)-DescriptorType,LogicalTypeRoot,DataTypes,ColumnList

so it probably makes sense to postpone this until FLIP-440 is implemented.

> Unexpected error in type inference logic of function 'TYPEOF' for ROW
> -
>
> Key: FLINK-36916
> URL: https://issues.apache.org/jira/browse/FLINK-36916
> Project: Flink
>  Issue Type: Bug
>Reporter: Yiyu Tian
>Assignee: Yiyu Tian
>Priority: Major
>
> {{SELECT TYPEOF(CAST((1,2) AS ROW));}}
> results in {{Unexpected error in type inference logic of function 'TYPEOF'. 
> This is a bug.}}





Re: [PR] [FLINK-36879][runtime] Support to convert delete as insert in transform [flink-cdc]

2024-12-17 Thread via GitHub


ruanhang1993 commented on code in PR #3804:
URL: https://github.com/apache/flink-cdc/pull/3804#discussion_r1888103827


##
flink-cdc-cli/src/main/java/org/apache/flink/cdc/cli/parser/YamlPipelineDefinitionParser.java:
##
@@ -316,6 +318,10 @@ private TransformDef toTransformDef(JsonNode transformNode) {
         Optional.ofNullable(transformNode.get(TRANSFORM_DESCRIPTION_KEY))
                 .map(JsonNode::asText)
                 .orElse(null);
+String convertorAfterTransform =
+        Optional.ofNullable(transformNode.get(TRANSFORM_CONVERTOR_AFTER_TRANSFORM_KEY))
+                .map(JsonNode::asText)
+                .orElse(null);

Review Comment:
   It is hard to define the order of the converters. Only one converter is 
enough. 






[jira] [Updated] (FLINK-33921) Cleanup deprecated IdleStateRetentionTime related method in org.apache.flink.table.api.TableConfig

2024-12-17 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-33921:
-
Fix Version/s: 2.0.0

> Cleanup deprecated IdleStateRetentionTime related method in 
> org.apache.flink.table.api.TableConfig
> --
>
> Key: FLINK-33921
> URL: https://issues.apache.org/jira/browse/FLINK-33921
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 2.0.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> getMinIdleStateRetentionTime()
> getMaxIdleStateRetentionTime()





[jira] [Commented] (FLINK-36912) Shaded Hadoop S3A with credentials provider failed on AZP

2024-12-17 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17906322#comment-17906322
 ] 

Matthias Pohl commented on FLINK-36912:
---

This was an infrastructure issue that was fixed by Ververica yesterday.

> Shaded Hadoop S3A with credentials provider failed on AZP
> -
>
> Key: FLINK-36912
> URL: https://issues.apache.org/jira/browse/FLINK-36912
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 2.0.0
>Reporter: Weijie Guo
>Priority: Major
>
> Shaded Hadoop S3A with credentials provider end-to-end test failed as 
> FileNotFoundException
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=64384&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=10208
> {code:java}
> Dec 16 08:18:13   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
>  ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) 
> ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) 
> ~[?:?]
> Dec 16 08:18:13   at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
>  ~[?:?]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229) 
> ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:272) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:233) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:245) ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
>  ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
>  ~[?:?]
>  {code}





[jira] [Resolved] (FLINK-36912) Shaded Hadoop S3A with credentials provider failed on AZP

2024-12-17 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-36912.
---
Resolution: Fixed

I'm closing this one. Feel free to open it again if you think that it's not 
resolved, yet.

> Shaded Hadoop S3A with credentials provider failed on AZP
> -
>
> Key: FLINK-36912
> URL: https://issues.apache.org/jira/browse/FLINK-36912
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 2.0.0
>Reporter: Weijie Guo
>Priority: Major
>
> Shaded Hadoop S3A with credentials provider end-to-end test failed as 
> FileNotFoundException
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=64384&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=10208
> {code:java}
> Dec 16 08:18:13   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
>  ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) 
> ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) 
> ~[?:?]
> Dec 16 08:18:13   at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
>  ~[?:?]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229) 
> ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:272) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:233) ~[?:?]
> Dec 16 08:18:13   at 
> org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:245) ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
>  ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) 
> ~[?:?]
> Dec 16 08:18:13   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
>  ~[?:?]
>  {code}





Re: [PR] [FLINK-36879][runtime] Support to convert delete as insert in transform [flink-cdc]

2024-12-17 Thread via GitHub


ruanhang1993 commented on code in PR #3804:
URL: https://github.com/apache/flink-cdc/pull/3804#discussion_r1888103827


##
flink-cdc-cli/src/main/java/org/apache/flink/cdc/cli/parser/YamlPipelineDefinitionParser.java:
##
@@ -316,6 +318,10 @@ private TransformDef toTransformDef(JsonNode transformNode) {
         Optional.ofNullable(transformNode.get(TRANSFORM_DESCRIPTION_KEY))
                 .map(JsonNode::asText)
                 .orElse(null);
+String convertorAfterTransform =
+        Optional.ofNullable(transformNode.get(TRANSFORM_CONVERTOR_AFTER_TRANSFORM_KEY))
+                .map(JsonNode::asText)
+                .orElse(null);

Review Comment:
   It is hard to define the order of the converters. Only one converter is 
enough. 
   Multi converters should be merged into one converter.






[jira] [Updated] (FLINK-36919) Add missing dropTable/dropView methods to TableEnvironment

2024-12-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36919:
---
Labels: pull-request-available  (was: )

> Add missing dropTable/dropView methods to TableEnvironment
> --
>
> Key: FLINK-36919
> URL: https://issues.apache.org/jira/browse/FLINK-36919
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
>  FLIP-489: Add missing dropTable/dropView methods to TableEnvironment 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=332499781





[PR] [FLINK-28897] [TABLE-SQL] Backport fix for fail to use udf in added jar when enabling checkpoint to 1.19.2 [flink]

2024-12-17 Thread via GitHub


ammu20-dev opened a new pull request, #25809:
URL: https://github.com/apache/flink/pull/25809

   
   
   ## What is the purpose of the change
   
   This pull request fixes the class loading issue that occurs when using a UDF 
from an added JAR with checkpointing enabled. This is a backport of the fix to 
1.19.2.
   
   ## Brief change log
   
   - Pulled in the FlinkUserCodeClassLoader for UDF jar loading from the 
resource manager
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   *(Please pick either of the following options)*
   
   This change added tests and can be verified as follows:
 - Added test case to verify ADD JAR with checkpointing in FunctionITCase 
in flink-table-planner module.
 - Verified that this test case will fail in older versions without the fix.
 - Manually verified the change by running a sample UDF job with 
checkpointing and verified that it works.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (don't know)
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[PR] [FLINK-36919][table] Add missing dropTable/dropView methods to TableEnvironment [flink]

2024-12-17 Thread via GitHub


snuyanzin opened a new pull request, #25810:
URL: https://github.com/apache/flink/pull/25810

   
   ## What is the purpose of the change
   
   FLIP-489 [1] adds two methods to TableEnvironment
   https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=332499781
   
   ## Verifying this change
   
   TableEnvironmentImpl.java
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: ( no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( no)
 - The S3 file system connector: (yes)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? ( JavaDocs)
   





Re: [PR] [FLINK-36879][runtime] Support to convert delete as insert in transform [flink-cdc]

2024-12-17 Thread via GitHub


yuxiqian commented on code in PR #3804:
URL: https://github.com/apache/flink-cdc/pull/3804#discussion_r1888179373


##
flink-cdc-cli/src/main/java/org/apache/flink/cdc/cli/parser/YamlPipelineDefinitionParser.java:
##
@@ -316,6 +318,10 @@ private TransformDef toTransformDef(JsonNode transformNode) {
         Optional.ofNullable(transformNode.get(TRANSFORM_DESCRIPTION_KEY))
                 .map(JsonNode::asText)
                 .orElse(null);
+String convertorAfterTransform =
+        Optional.ofNullable(transformNode.get(TRANSFORM_CONVERTOR_AFTER_TRANSFORM_KEY))
+                .map(JsonNode::asText)
+                .orElse(null);

Review Comment:
   Makes sense.






[jira] [Commented] (FLINK-36746) Found deadlock in SerializedThrowable

2024-12-17 Thread raoraoxiong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17906338#comment-17906338
 ] 

raoraoxiong commented on FLINK-36746:
-

[~yunta] That's right. I am happy to take this ticket and solve it; please 
assign it to me, thanks!
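
For readers unfamiliar with the CIRCULAR REFERENCE in the trace quoted in this 
issue: a cycle arises when a throwable ends up, directly or via its suppressed 
exceptions, referencing itself in its own cause chain. The following minimal 
sketch only reproduces such a cycle and shows that `printStackTrace` detects 
it; it does not reproduce the SerializedThrowable deadlock itself.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class CircularThrowableSketch {
    public static void main(String[] args) {
        Exception root = new Exception("root failure");
        Exception wrapper = new Exception("wrapper", root);
        // Create the cycle: the root now holds its own wrapper as a suppressed
        // exception, so traversing wrapper -> cause -> suppressed leads back
        // to wrapper.
        root.addSuppressed(wrapper);

        StringWriter sw = new StringWriter();
        wrapper.printStackTrace(new PrintWriter(sw));
        // Throwable tracks already-printed throwables and flags the repeat
        // with a [CIRCULAR REFERENCE: ...] marker instead of recursing.
        System.out.println(sw.toString().contains("CIRCULAR REFERENCE"));  // true
    }
}
```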

> Found deadlock in SerializedThrowable
> -
>
> Key: FLINK-36746
> URL: https://issues.apache.org/jira/browse/FLINK-36746
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.0
>Reporter: raoraoxiong
>Priority: Major
>
> When a job fails over, it may produce a CIRCULAR REFERENCE when the job 
> serializes a throwable object, like this
>  
>  
> {code:java}
> 2024-04-07 12:19:01,242 WARN  [49126] 
> [org.apache.flink.runtime.taskmanager.Task.transitionState(Task.java:1112)]  
> - search-ipv6 (298/480)#0 (7c9485e8359657b4da729358270805ea) switched from 
> RUNNING to FAILED with failure cause: 
> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
> Lost connection to task manager '***'. This indicates that the remote task 
> manager was lost.
>     at 
> org.apache.flink.runtime.io.network.netty.CreditBasedPartitionRequestClientHandler.exceptionCaught(CreditBasedPartitionRequestClientHandler.java:156)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.handleReadException(AbstractEpollStreamChannel.java:728)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:821)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>     at 
> org.apache.flink.shaded.netty4.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>     at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
>     at 
> org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     at java.lang.Thread.run(Thread.java:750)
>     Suppressed: java.lang.RuntimeException: 
> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
> Lost connection to task manager '***'. This indicates that the remote task 
> manager was lost.
>         at 
> org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:319)
>         at 
> org.apache.flink.runtime.io.network.partition.consumer.BufferManager.recycle(BufferManager.java:237)
>         at 
> org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182)
>         at 
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110)
>         at 
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
>         at 
> org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156)
>         at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.clear(SpillingAdaptiveSpanningRecordDeserializer.java:138)
>         at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.releaseDeserializer(AbstractStreamTaskNetworkInput.java:218)
>         at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.close(AbstractStreamTaskNetworkInput.java:210)
>         at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.close(StreamTaskNetworkInput.java:117)
>         at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.close(StreamOneInputProcessor.java:88)
>         at 
> org.apache.flink.st

Re: [PR] [FLINK-36919][table] Add missing dropTable/dropView methods to TableEnvironment [flink]

2024-12-17 Thread via GitHub


flinkbot commented on PR #25810:
URL: https://github.com/apache/flink/pull/25810#issuecomment-2548193698

   
   ## CI report:
   
   * 7df8fa81518b37809a24e5c72887c50328859ac6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28897] [TABLE-SQL] Backport fix for fail to use udf in added jar when enabling checkpoint to 1.19.2 [flink]

2024-12-17 Thread via GitHub


flinkbot commented on PR #25809:
URL: https://github.com/apache/flink/pull/25809#issuecomment-2548193207

   
   ## CI report:
   
   * f12f137ae259824341b6bbac537e9503ab5f3ace UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-36921) AdaptiveExecutionPlanSchedulingContextTest.testGetParallelismAndMaxParallelism fails

2024-12-17 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-36921:
--

 Summary: 
AdaptiveExecutionPlanSchedulingContextTest.testGetParallelismAndMaxParallelism 
fails
 Key: FLINK-36921
 URL: https://issues.apache.org/jira/browse/FLINK-36921
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 2.0.0
Reporter: Robert Metzger


[https://github.com/apache/flink/actions/runs/12373422822/job/34534179798] 

{code}
Dec 17 13:14:50 13:14:50.087 [INFO] 
Error: 13:14:50 13:14:50.087 [ERROR] Failures: 
Error: 13:14:50 13:14:50.087 [ERROR]   
AdaptiveExecutionPlanSchedulingContextTest.testGetParallelismAndMaxParallelism:65
 
Dec 17 13:14:50 expected: 5
Dec 17 13:14:50  but was: 128
Dec 17 13:14:50 13:14:50.087 [INFO] 
Error: 13:14:50 13:14:50.087 [ERROR] Tests run: 8997, Failures: 1, Errors: 0, 
Skipped: 277
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36879][runtime] Support to convert delete as insert in transform [flink-cdc]

2024-12-17 Thread via GitHub


yuxiqian commented on code in PR #3804:
URL: https://github.com/apache/flink-cdc/pull/3804#discussion_r1888227512


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/transform/converter/PostTransformConverter.java:
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.common.transform.converter;
+
+import org.apache.flink.cdc.common.annotation.Experimental;
+import org.apache.flink.cdc.common.event.DataChangeEvent;
+import org.apache.flink.cdc.common.utils.StringUtils;
+
+import java.io.Serializable;
+import java.util.Optional;
+
+/**
+ * The PostTransformConverter is applied to convert the {@link DataChangeEvent} after the
+ * other parts of the TransformRule in the PostTransformOperator.
+ */
+@Experimental
+public interface PostTransformConverter extends Serializable {
+
+    Optional<DataChangeEvent> convert(DataChangeEvent dataChangeEvent);
+
+    static Optional<PostTransformConverter> of(String classPath) {
+        if (StringUtils.isNullOrWhitespaceOnly(classPath)) {
+            return Optional.empty();
+        }
+
+        Optional<PostTransformConverter> postTransformConverter =
+                PostTransformConverters.getInternalPostTransformConverter(classPath);
+        if (postTransformConverter.isPresent()) {
+            return postTransformConverter;
+        }
+
+        ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
+        try {
+            Class<?> converterClass = classLoader.loadClass(classPath);
+            Object converter = converterClass.newInstance();
+            if (converter instanceof PostTransformConverter) {
+                return Optional.of((PostTransformConverter) converter);
+            }
+            throw new IllegalArgumentException(
+                    String.format(
+                            "%s is not an instance of %s",
+                            classPath, PostTransformConverter.class.getName()));
+        } catch (Exception e) {
+            throw new RuntimeException("Create post transform converter failed.", e);
+        }
+    }

Review Comment:
   This should be part of `PostTransformConverters` too. Also a JavaDoc for 
public interface methods would be nice.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
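[Editor's sketch] The quoted `PostTransformConverter.of(...)` diff above implements a common reflective plugin-loading pattern: check an internal registry first, then fall back to loading a user-supplied class by name and verifying its type. The following is a minimal, self-contained illustration of that pattern; the Flink CDC types are replaced by a stub `Converter` interface, and all names here are illustrative stand-ins, not the real Flink CDC API.

```java
import java.io.Serializable;
import java.util.Optional;

// Stub stand-in for the PostTransformConverter interface quoted above.
interface Converter extends Serializable {
    String convert(String event);
}

// A user-provided implementation, loaded by its fully qualified class name.
class UpperCaseConverter implements Converter {
    @Override
    public String convert(String event) {
        return event.toUpperCase();
    }
}

public class Main {
    static Optional<Converter> of(String classPath) {
        // Blank class path means "no converter configured".
        if (classPath == null || classPath.trim().isEmpty()) {
            return Optional.empty();
        }
        ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
        try {
            // Load the user class reflectively and verify it implements the
            // expected interface before returning it.
            Class<?> converterClass = classLoader.loadClass(classPath);
            Object converter = converterClass.getDeclaredConstructor().newInstance();
            if (converter instanceof Converter) {
                return Optional.of((Converter) converter);
            }
            throw new IllegalArgumentException(
                    String.format("%s is not an instance of %s",
                            classPath, Converter.class.getName()));
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Create converter failed.", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(of("").isPresent());        // blank path -> empty
        Converter converter = of("UpperCaseConverter")
                .orElseThrow(IllegalStateException::new);
        System.out.println(converter.convert("delete-as-insert"));
    }
}
```

Running this prints `false` followed by `DELETE-AS-INSERT`; the same structure applies whether the loaded class converts deletes to inserts or performs any other per-event transformation.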



Re: [PR] [FLINK-36906] Optimize the logic for determining if a split is finished [flink-connector-kafka]

2024-12-17 Thread via GitHub


xiaochen-zhou commented on PR #141:
URL: 
https://github.com/apache/flink-connector-kafka/pull/141#issuecomment-2548084763

   > Afaik this doesn't work and was the main reason for #100. If your last 
message is a transaction marker, then you would never check the stop condition 
on that partition at that point in time.
   > 
   > I'll trigger the CI which should fail for the test that was specifically 
added for that scenario.
   > 
   > I'll leave this PR open until we figure out whether it can indeed be 
improved. Please double-check the linked PR and the respective ticket.
   
   OK.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36910][table] Add function call syntax support to time-related dynamic functions [flink]

2024-12-17 Thread via GitHub


gustavodemorais commented on PR #25806:
URL: https://github.com/apache/flink/pull/25806#issuecomment-2547929083

   Lint seems to have failed, taking a look


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-36917) Add OceanBase DataSource to Flink CDC

2024-12-17 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-36917:
--

Assignee: ChaomingZhang

> Add OceanBase DataSource to Flink CDC
> -
>
> Key: FLINK-36917
> URL: https://issues.apache.org/jira/browse/FLINK-36917
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Reporter: ChaomingZhang
>Assignee: ChaomingZhang
>Priority: Major
>
> Support reading events from OceanBase via YAML, based on FLINK-34545.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-36920) Update org.quartz-schedule:quartz in flink-autoscaler module from 2.3.2 to 2.4.0

2024-12-17 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-36920:
---
Summary: Update org.quartz-schedule:quartz in flink-autoscaler module from 
2.3.2 to 2.4.0  (was: Update org.quartz-schedule:quartz)

> Update org.quartz-schedule:quartz in flink-autoscaler module from 2.3.2 to 
> 2.4.0
> 
>
> Key: FLINK-36920
> URL: https://issues.apache.org/jira/browse/FLINK-36920
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: 1.10.0
>Reporter: Anupam Aggarwal
>Priority: Minor
>
> Update dependency on org.quartz-scheduler:quartz used in flink-autoscaler 
> module from 2.3.2 to 2.4.0
>  
> *Vulnerability info:*
> cve-2023-39017
> quartz-jobs 2.3.2 and below was discovered to contain a code injection 
> vulnerability in the component 
> org.quartz.jobs.ee.jms.SendQueueMessageJob.execute. This vulnerability is 
> exploited via passing an unchecked argument. NOTE: this is disputed by 
> multiple parties because it is not plausible that untrusted user input would 
> reach the code location where injection must occur.
> More details are at: [https://nvd.nist.gov/vuln/detail/cve-2023-39017] 
> *Proposed fix*
> Bumping the dependency from 2.3.2 to 2.4.0 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36879][runtime] Support to convert delete as insert in transform [flink-cdc]

2024-12-17 Thread via GitHub


lvyanquan commented on code in PR #3804:
URL: https://github.com/apache/flink-cdc/pull/3804#discussion_r1888249961


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/transform/converter/PostTransformConverter.java:
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.common.transform.converter;
+
+import org.apache.flink.cdc.common.annotation.Experimental;
+import org.apache.flink.cdc.common.event.DataChangeEvent;
+import org.apache.flink.cdc.common.utils.StringUtils;
+
+import java.io.Serializable;
+import java.util.Optional;
+
+/**
+ * The PostTransformConverter is applied to convert the {@link DataChangeEvent} after the
+ * other parts of the TransformRule in the PostTransformOperator.
+ */
+@Experimental
+public interface PostTransformConverter extends Serializable {
+
+    Optional<DataChangeEvent> convert(DataChangeEvent dataChangeEvent);

Review Comment:
   Do we allow users to modify the elements in the before and after RecordDatas? 
If we do, we should provide the Schema of this dataChangeEvent in this interface.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-36913) Add an option in Kafka Sink to manual map table to topic name.

2024-12-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36913:
---
Labels: pull-request-available  (was: )

> Add an option in Kafka Sink to manual map table to topic name.
> --
>
> Key: FLINK-36913
> URL: https://issues.apache.org/jira/browse/FLINK-36913
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Affects Versions: cdc-3.3.0
>Reporter: Yanquan Lv
>Assignee: Yanquan Lv
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.3.0
>
>
> Many users who use Kafka as the DataSink in a YAML job want to send the 
> changelog to a single topic. In the current framework, we can use the `route` 
> module to do that, as the following YAML file does:
> {noformat}
> source:
> type: mysql
> hostname: xx
> port: 3306
> username: flink
> password: Flinkxxx
> tables: flink_source.news_[0-9]
> sink:
> type: kafka
> properties.bootstrap.servers: xxx:9092
> value.format: canal-json
> route:
> - source-table: flink_source.news_[0-9]
> sink-table: my_source
> {noformat}
> However, the output of Kafka in canal-json format doesn't contain the 
> original database/table information; instead, it only contains the 
> database/table information after routing. Although this is in line with the 
> functionality of route, it does not meet users' needs.
> Therefore, I suggest adding a parameter in the sink that lets Kafka determine 
> how to map a table to a topic name, so we can create one topic for many 
> source tables and keep all database/table information of the source tables 
> when outputting records.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-36236] JdbcSourceBuilder overrides set connection provider [flink-connector-jdbc]

2024-12-17 Thread via GitHub


coopstah13 opened a new pull request, #150:
URL: https://github.com/apache/flink-connector-jdbc/pull/150

   Only instantiate the provider from the connection options builder if no 
provider was set on the builder


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36236] JdbcSourceBuilder overrides set connection provider [flink-connector-jdbc]

2024-12-17 Thread via GitHub


boring-cyborg[bot] commented on PR #150:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/150#issuecomment-2548447332

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-36236) JdbcSourceBuilder overrides set connection provider

2024-12-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36236:
---
Labels: pull-request-available  (was: )

> JdbcSourceBuilder overrides set connection provider
> ---
>
> Key: FLINK-36236
> URL: https://issues.apache.org/jira/browse/FLINK-36236
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: jdbc-3.3.0
>Reporter: Martin Krüger
>Priority: Major
>  Labels: pull-request-available
>
> It's not possible to set a custom JdbcConnectionProvider in the 
> JdbcSourceBuilder as it's overridden in the build method. See:
> [https://github.com/apache/flink-connector-jdbc/blob/1fcb9dd43a496778ee9e717657fcccdd116078cd/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/source/JdbcSourceBuilder.java#L273]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
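[Editor's sketch] The one-line fix described in this ticket is a standard builder-pattern guard: apply the default only when the user has not supplied a value. A minimal, self-contained illustration follows; `ConnectionProvider`, `DefaultProvider`, and `SourceBuilder` are stand-in names, not the actual flink-connector-jdbc classes.

```java
// Stand-in for the connection provider abstraction.
interface ConnectionProvider {
    String describe();
}

class DefaultProvider implements ConnectionProvider {
    public String describe() { return "default"; }
}

class CustomProvider implements ConnectionProvider {
    public String describe() { return "custom"; }
}

public class Main {
    static class SourceBuilder {
        private ConnectionProvider connectionProvider;

        SourceBuilder setConnectionProvider(ConnectionProvider provider) {
            this.connectionProvider = provider;
            return this;
        }

        ConnectionProvider build() {
            // The reported bug: build() unconditionally overwrote the provider.
            // The fix: fall back to the default only when none was set.
            if (connectionProvider == null) {
                connectionProvider = new DefaultProvider();
            }
            return connectionProvider;
        }
    }

    public static void main(String[] args) {
        System.out.println(new SourceBuilder().build().describe());
        System.out.println(new SourceBuilder()
                .setConnectionProvider(new CustomProvider())
                .build()
                .describe());
    }
}
```

Here the first build prints `default` and the second prints `custom`, showing that a user-supplied provider now survives the build call.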


Re: [PR] [FLINK-35241][table]Support SQL FLOOR and CEIL functions with SECOND and MINUTE for TIMESTAMP_TLZ [flink]

2024-12-17 Thread via GitHub


snuyanzin merged PR #25759:
URL: https://github.com/apache/flink/pull/25759


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-35241) Support SQL FLOOR and CEIL functions with SECOND and MINUTE for TIMESTAMP_TLZ

2024-12-17 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-35241.
-
Fix Version/s: 2.0.0
   Resolution: Fixed

> Support SQL FLOOR and CEIL functions with SECOND and MINUTE for TIMESTAMP_TLZ
> -
>
> Key: FLINK-35241
> URL: https://issues.apache.org/jira/browse/FLINK-35241
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Alexey Leonov-Vendrovskiy
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> We need a fix for both SECOND and MINUTE.
> The following query doesn't work:
> {code:java}
> SELECT
>   FLOOR(
>       CAST(TIMESTAMP '2024-04-25 17:19:42.654' AS TIMESTAMP_LTZ(3))
>   TO MINUTE) {code}
> These two queries work:
> {code:java}
> SELECT
>   FLOOR(
>       CAST(TIMESTAMP '2024-04-25 17:19:42.654' AS TIMESTAMP_LTZ(3))
>   TO HOUR) {code}
>  
> {code:java}
> SELECT
>   FLOOR(
>       TIMESTAMP '2024-04-25 17:19:42.654'
>   TO MINUTE) {code}
> Stack trace for the first not working query from above:
> {code:java}
> Caused by: io.confluent.flink.table.utils.CleanedException: 
> org.codehaus.commons.compiler.CompileException: Line 41, Column 69: No 
> applicable constructor/method found for actual parameters 
> "org.apache.flink.table.data.TimestampData, 
> org.apache.flink.table.data.TimestampData"; candidates are: "public static 
> long org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(long, 
> long)", "public static float 
> org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(float)", 
> "public static org.apache.flink.table.data.DecimalData 
> org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(org.apache.flink.table.data.DecimalData)",
>  "public static int 
> org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(int, int)", 
> "public static double 
> org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(double)"
> at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:13080)
> at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:9646)
> at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9506)
> at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9422)
> at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5263)
> ... {code}
>  
>  
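The root cause in the stack trace above is overload resolution: only variants such as `floor(long, long)` exist, so generated code with `TimestampData` arguments finds no match. A sketch of the underlying long-based floor-to-unit arithmetic (illustrative only; the actual fix would unwrap `TimestampData` to epoch milliseconds before such a call):

```java
/** Sketch of floor-to-unit on epoch milliseconds; not Flink's exact code. */
public class TimestampFloor {

    // Floor an epoch-millisecond value down to a multiple of unitMillis.
    // Math.floorDiv keeps the semantics correct for pre-epoch (negative) values.
    static long floorTo(long epochMillis, long unitMillis) {
        return Math.floorDiv(epochMillis, unitMillis) * unitMillis;
    }

    public static void main(String[] args) {
        long minute = 60_000L;
        // Flooring to the minute drops the seconds and milliseconds part.
        System.out.println(floorTo(125_999L, minute)); // 120000
    }
}
```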





[jira] [Commented] (FLINK-35241) Support SQL FLOOR and CEIL functions with SECOND and MINUTE for TIMESTAMP_TLZ

2024-12-17 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17906416#comment-17906416
 ] 

Sergey Nuyanzin commented on FLINK-35241:
-

Merged as 
[c7d8515c0285fc0019571a1f637377630c5a06fa|https://github.com/apache/flink/commit/c7d8515c0285fc0019571a1f637377630c5a06fa]

> Support SQL FLOOR and CEIL functions with SECOND and MINUTE for TIMESTAMP_TLZ
> -
>
> Key: FLINK-35241
> URL: https://issues.apache.org/jira/browse/FLINK-35241
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Alexey Leonov-Vendrovskiy
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> We need a fix for both SECOND and MINUTE.
> The following query doesn't work:
> {code:java}
> SELECT
>   FLOOR(
>       CAST(TIMESTAMP '2024-04-25 17:19:42.654' AS TIMESTAMP_LTZ(3))
>   TO MINUTE) {code}
> These two queries work:
> {code:java}
> SELECT
>   FLOOR(
>       CAST(TIMESTAMP '2024-04-25 17:19:42.654' AS TIMESTAMP_LTZ(3))
>   TO HOUR) {code}
>  
> {code:java}
> SELECT
>   FLOOR(
>       TIMESTAMP '2024-04-25 17:19:42.654'
>   TO MINUTE) {code}
> Stack trace for the first not working query from above:
> {code:java}
> Caused by: io.confluent.flink.table.utils.CleanedException: 
> org.codehaus.commons.compiler.CompileException: Line 41, Column 69: No 
> applicable constructor/method found for actual parameters 
> "org.apache.flink.table.data.TimestampData, 
> org.apache.flink.table.data.TimestampData"; candidates are: "public static 
> long org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(long, 
> long)", "public static float 
> org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(float)", 
> "public static org.apache.flink.table.data.DecimalData 
> org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(org.apache.flink.table.data.DecimalData)",
>  "public static int 
> org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(int, int)", 
> "public static double 
> org.apache.flink.table.runtime.functions.SqlFunctionUtils.floor(double)"
> at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:13080)
> at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:9646)
> at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9506)
> at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9422)
> at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5263)
> ... {code}
>  
>  





[jira] [Updated] (FLINK-30975) Enable AWS SDK V2 Support for Flink's S3 FileSystem Modules

2024-12-17 Thread Samrat Deb (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samrat Deb updated FLINK-30975:
---
Description: 
Currently, *Flink's S3 FileSystem* is limited to using AWS SDK V1. However, AWS 
strongly recommends adopting AWS SDK V2 because it offers significant 
improvements, including better performance, additional features, and extended 
maintenance support. Transitioning to AWS SDK V2 will ensure Flink remains 
aligned with AWS's long-term support strategy and benefits from enhancements 
available in the newer SDK.
h3. Modules Requiring Updates

To fully support AWS SDK V2, the following Flink modules need updates:
 # *{{flink-s3-fs-base}}*
 # *{{flink-s3-fs-hadoop}}*
 # *{{flink-s3-fs-presto}}*

While the *Hadoop module* has already incorporated AWS SDK V2 support, the same 
cannot be said for {*}Presto's S3 FileSystem{*}, which currently lacks this 
capability. This gap creates a blocker for the {{flink-s3-fs-presto}} module to 
adopt AWS SDK V2.
h3. Options to Enable AWS SDK V2 Support for Flink's S3 FileSystem
 # {*}Copy Presto's S3 FileSystem and Add AWS SDK V2 Support in Flink{*}:

 * 
 ** Flink can maintain its own version of Presto's S3 FileSystem, updated to 
support AWS SDK V2.
 ** This approach gives Flink immediate control over the feature but increases 
maintenance overhead as Flink will need to manage updates independently if 
Presto evolves further.

       2. {*}Update Presto's S3 FileSystem Directly{*}:
 * 
 ** Add AWS SDK V2 support to Presto's S3 FileSystem in Presto itself.
 ** Flink can then use the updated Presto version that includes AWS SDK V2 
support.
 ** While this option ensures better collaboration and reuse across projects, 
it depends on the Presto community’s priorities and timelines to accept and 
release these changes.

       3. {*}Adopt Trino's S3 FileSystem{*}:
 * 
 ** Trino's S3 FileSystem already supports AWS SDK V2.
 ** Flink could consider switching from Presto's S3 FileSystem to Trino's 
implementation.
 ** This approach avoids duplicating effort or waiting for Presto's support 
while benefiting from Trino's active maintenance and AWS SDK V2 support. 
However, it may require significant integration work and adjustments in Flink 
to support the Trino S3 FileSystem.


Transitioning to AWS SDK V2 for Flink's S3 FileSystem is essential to align 
with AWS's recommendations and benefit from better support. Among the proposed 
options:
 * The first option offers quick resolution but increases long-term maintenance.
 * The second option promotes collaboration but may be slower due to external 
dependencies.
 * The third option is the most efficient in terms of leveraging existing work 
but may require substantial integration effort.

Choosing the right approach will depend on Flink's priorities, resources, and 
collaboration potential with Presto or Trino.

 

[changelog-details 
|https://github.com/aws/aws-sdk-java-v2/blob/master/docs/LaunchChangelog.md] 

 

  was:
Currently, *Flink's S3 FileSystem* is limited to using AWS SDK V1. However, AWS 
strongly recommends adopting AWS SDK V2 because it offers significant 
improvements, including better performance, additional features, and extended 
maintenance support. Transitioning to AWS SDK V2 will ensure Flink remains 
aligned with AWS's long-term support strategy and benefits from enhancements 
available in the newer SDK.
h3. Modules Requiring Updates

To fully support AWS SDK V2, the following Flink modules need updates:
 # *{{flink-s3-fs-base}}*
 # *{{flink-s3-fs-hadoop}}*
 # *{{flink-s3-fs-presto}}*

While the *Hadoop module* has already incorporated AWS SDK V2 support, the same 
cannot be said for {*}Presto's S3 FileSystem{*}, which currently lacks this 
capability. This gap creates a blocker for the {{flink-s3-fs-presto}} module to 
adopt AWS SDK V2.
h3. Options to Enable AWS SDK V2 Support for Flink's S3 FileSystem
 # {*}Copy Presto's S3 FileSystem and Add AWS SDK V2 Support in Flink{*}:

 ** Flink can maintain its own version of Presto's S3 FileSystem, updated to 
support AWS SDK V2.
 ** This approach gives Flink immediate control over the feature but increases 
maintenance overhead as Flink will need to manage updates independently if 
Presto evolves further.
 # {*}Update Presto's S3 FileSystem Directly{*}:

 ** Add AWS SDK V2 support to Presto's S3 FileSystem in Presto itself.
 ** Flink can then use the updated Presto version that includes AWS SDK V2 
support.
 ** While this option ensures better collaboration and reuse across projects, 
it depends on the Presto community’s priorities and timelines to accept and 
release these changes.
 # {*}Adopt Trino's S3 FileSystem{*}:

 ** Trino's S3 FileSystem already supports AWS SDK V2.
 ** Flink could consider switching from Presto's S3 FileSystem to Trino's 
implementation.
 ** This approach avoids duplicating effort or waiting for Presto's support 
while benefiting from Trino's active maintenance and AWS SDK V2 support. However, 
it may require significant integration work and adjustments in Flink to support 
the Trino S3 FileSystem.

Re: [PR] [hotfix][docs] Fix Apache Avro Specification Link. [flink]

2024-12-17 Thread via GitHub


davidradl commented on PR #25769:
URL: https://github.com/apache/flink/pull/25769#issuecomment-2548134819

   @rmetzger @donPain I had a quick look at the docs; it seems that Hugo does 
allow templating which supports variables. I can see the Flink code referencing 
the predefined variables with `{{ }}`. I see `{{< all_versions >}}`, which 
looks like a custom shortcode, but I am not sure how it is populated. I see code 
bringing in the different connectors. I have not yet found where we would want to 
add global variables like this. 





Re: [PR] [hotfix][configuration] Remove the deprecated test class TaskManagerLoadBalanceModeTest. [flink]

2024-12-17 Thread via GitHub


davidradl commented on code in PR #25800:
URL: https://github.com/apache/flink/pull/25800#discussion_r1888317764


##
flink-core/src/test/java/org/apache/flink/configuration/TaskManagerLoadBalanceModeTest.java:
##
@@ -1,49 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.configuration;
-
-import org.junit.jupiter.api.Test;
-
-import static 
org.apache.flink.configuration.TaskManagerOptions.TASK_MANAGER_LOAD_BALANCE_MODE;
-import static 
org.apache.flink.configuration.TaskManagerOptions.TaskManagerLoadBalanceMode;
-import static org.assertj.core.api.Assertions.assertThat;
-
-/** Test for {@link TaskManagerLoadBalanceMode}. */
-class TaskManagerLoadBalanceModeTest {
-
-@Test
-void testReadTaskManagerLoadBalanceMode() {

Review Comment:
   I do not think we should remove tests of deprecated methods - which are 
still supported. I think we should remove the tests at the same time as the 
methods / constants are removed   






Re: [PR] [FLINK-36919][table] Add missing dropTable/dropView methods to TableEnvironment [flink]

2024-12-17 Thread via GitHub


davidradl commented on code in PR #25810:
URL: https://github.com/apache/flink/pull/25810#discussion_r1888663499


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java:
##
@@ -661,6 +673,18 @@ public boolean dropTemporaryView(String path) {
 }
 }
 
+@Override
+public boolean dropView(String path) {
+UnresolvedIdentifier unresolvedIdentifier = 
getParser().parseIdentifier(path);
+ObjectIdentifier identifier = 
catalogManager.qualifyIdentifier(unresolvedIdentifier);
+try {
+catalogManager.dropView(identifier, false);
+return true;
+} catch (ValidationException e) {
+return false;

Review Comment:
   Yes, you are right, it is not a regression - still I think it is worth a log 
entry when there is an exception and it returns false.






Re: [PR] [FLINK-36919][table] Add missing dropTable/dropView methods to TableEnvironment [flink]

2024-12-17 Thread via GitHub


davidradl commented on code in PR #25810:
URL: https://github.com/apache/flink/pull/25810#discussion_r1888663499


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java:
##
@@ -661,6 +673,18 @@ public boolean dropTemporaryView(String path) {
 }
 }
 
+@Override
+public boolean dropView(String path) {
+UnresolvedIdentifier unresolvedIdentifier = 
getParser().parseIdentifier(path);
+ObjectIdentifier identifier = 
catalogManager.qualifyIdentifier(unresolvedIdentifier);
+try {
+catalogManager.dropView(identifier, false);
+return true;
+} catch (ValidationException e) {
+return false;

Review Comment:
   Yes, you are right, it is not a regression - still I think it is worth a log 
entry when there is an exception and it returns false. Though I see all the 
other `return false` paths do not log - so we should add logging to each of them 
if we do this one.






[jira] [Commented] (FLINK-36916) Unexpected error in type inference logic of function 'TYPEOF' for ROW

2024-12-17 Thread Yiyu Tian (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17906439#comment-17906439
 ] 

Yiyu Tian commented on FLINK-36916:
---

Hi [~Sergey Nuyanzin] , got it. Thank you for the insight! That's very helpful.

> Unexpected error in type inference logic of function 'TYPEOF' for ROW
> -
>
> Key: FLINK-36916
> URL: https://issues.apache.org/jira/browse/FLINK-36916
> Project: Flink
>  Issue Type: Bug
>Reporter: Yiyu Tian
>Assignee: Yiyu Tian
>Priority: Major
>
> {{SELECT TYPEOF(CAST((1,2) AS ROW));}}
> results in {{Unexpected error in type inference logic of function 'TYPEOF'. 
> This is a bug.}}





Re: [PR] [BP-1.20][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2024-12-17 Thread via GitHub


davidradl commented on PR #25794:
URL: https://github.com/apache/flink/pull/25794#issuecomment-2548013021

   @mehdid93 are you ok to look into this?





[jira] [Updated] (FLINK-36898) Support SQL FLOOR and CEIL functions with NanoSecond for TIMESTAMP_TLZ

2024-12-17 Thread Hanyu Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanyu Zheng updated FLINK-36898:

Description: Currently, Flink does not support nanosecond precision for the 
CEIL and FLOOR functions. This limitation exists because Flink does not have a 
data type with precision greater than nanoseconds. To introduce support for 
nanoseconds in these functions, we need to do some research and find a feasible 
way to support nanoseconds.  (was: Currently, Flink does not support nanosecond 
precision for the CEIL and FLOOR functions. This limitation exists because 
Flink does not have a data type with precision greater than nanoseconds. To 
introduce support for nanoseconds in these functions, we can suppose that CEIL 
and FLOOR of a nanosecond return the same answer; because we don't support a 
data type with precision greater than nanoseconds, this is a feasible way to 
support nanoseconds.)

> Support SQL FLOOR and CEIL functions with NanoSecond for TIMESTAMP_TLZ
> --
>
> Key: FLINK-36898
> URL: https://issues.apache.org/jira/browse/FLINK-36898
> Project: Flink
>  Issue Type: Improvement
>Reporter: Hanyu Zheng
>Priority: Major
>
> Currently, Flink does not support nanosecond precision for the CEIL and FLOOR 
> functions. This limitation exists because Flink does not have a data type 
> with precision greater than nanoseconds. To introduce support for nanoseconds 
> in these functions, we need to do some research and find a feasible way to 
> support nanoseconds.





Re: [PR] [FLINK-27355][runtime] Unregister JobManagerRunner after it's closed [flink]

2024-12-17 Thread via GitHub


XComp commented on code in PR #25027:
URL: https://github.com/apache/flink/pull/25027#discussion_r1888147138


##
flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/DefaultJobManagerRunnerRegistry.java:
##
@@ -83,9 +83,16 @@ public Collection getJobManagerRunners() {
 }
 
 @Override
-public CompletableFuture localCleanupAsync(JobID jobId, Executor 
unusedExecutor) {
+public CompletableFuture localCleanupAsync(
+JobID jobId, Executor ignoredExecutor, Executor 
mainThreadExecutor) {
 if (isRegistered(jobId)) {
-return unregister(jobId).closeAsync();
+CompletableFuture resultFuture = 
this.jobManagerRunners.get(jobId).closeAsync();
+
+return resultFuture.thenApplyAsync(
+result -> {
+mainThreadExecutor.execute(() -> unregister(jobId));
+return result;
+});

Review Comment:
   The difference between your previous version of the code (using 
`thenApplyAsync`) and the changed code (using `thenRunAsync`) is a special 
(kind of annoying) issue with `CompletableFutures`:
   
   The async `unregister` call is preceded by the 
[TestingJobManagerRunner#closeAsync](https://github.com/apache/flink/blob/153614716767d5fb8be1d4caa5e72167ce64a9d7/flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/TestingJobManagerRunner.java#L132)
 which is set up to be non-blocking. That means that the method's 
`CompletableFuture` is completed before returning.
   
   There's a different runtime behavior between chained operations of an 
already-completed future and a not-yet-completed future. In the former case, 
the chained operation will be executed right away on the calling thread. In 
the latter case, the chained operation will be scheduled for execution.
   
   In our scenario (see 
[DefaultJobManagerRunnerRegistry#localCleanupAsync](https://github.com/apache/flink/blob/e37d9c9b7b649030d9cb5d3ba6540a006aa0aa23/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/DefaultJobManagerRunnerRegistry.java#L91)),
 the `closeAsync` call returns the already-completed future. Keep in mind that 
this happens on the component's main thread! So, what happens with the chained 
operation?
   * with `thenApplyAsync`: The `mainThreadExecutor#execute` call is called 
asynchronously (no executor is specified, so the JVM common thread pool is 
used). The `unregister` callback will be scheduled on the main thread again. 
But `localCleanupAsync` returns the future of the `thenApplyAsync` which 
completes as soon as the `mainThreadExecutor#execute` returns (not the 
`unregister` call!). That unblocks the "buggy" code I mentioned in [my previous 
comment](https://github.com/apache/flink/pull/25027#discussion_r1888127410).
   * with `thenRunAsync`: We're on the main thread. `thenRunAsync` states that 
the `unregister` call should be executed on the main thread as well. But the 
`unregister` callback will be executed right away (instead of being put on a 
task queue) due to the fact that the future that was returned by the previous 
`closeAsync` is already completed. In the meantime, the main thread is still 
blocked by the `ResourceCleaner` trying to complete the cleanup in the main 
thread. We reach a deadlock.
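The behavioral difference described above can be demonstrated in isolation. A standalone sketch (not Flink code): a non-async stage chained onto an already-completed future runs synchronously on the calling thread, while a stage chained onto a pending future runs on whichever thread completes it.

```java
import java.util.concurrent.CompletableFuture;

public class CompletedFutureChaining {

    // On an already-completed future, a non-async chained stage executes
    // synchronously on the calling thread before the chaining call returns.
    static String stageThreadOnCompletedFuture() {
        return CompletableFuture.completedFuture(1)
                .thenApply(x -> Thread.currentThread().getName())
                .join();
    }

    // On a not-yet-completed future, the stage runs later, on the thread
    // that completes the future (here, a separate "completer" thread).
    static String stageThreadOnPendingFuture() {
        CompletableFuture<Integer> pending = new CompletableFuture<>();
        CompletableFuture<String> stage =
                pending.thenApply(x -> Thread.currentThread().getName());
        new Thread(() -> pending.complete(1), "completer").start();
        return stage.join(); // blocks until the completer thread runs the stage
    }

    public static void main(String[] args) {
        System.out.println(stageThreadOnCompletedFuture()); // the calling thread's name
        System.out.println(stageThreadOnPendingFuture());   // "completer"
    }
}
```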






Re: [PR] [FLINK-36879][runtime] Support to convert delete as insert in transform [flink-cdc]

2024-12-17 Thread via GitHub


ruanhang1993 commented on code in PR #3804:
URL: https://github.com/apache/flink-cdc/pull/3804#discussion_r1888161102


##
flink-cdc-cli/src/main/java/org/apache/flink/cdc/cli/parser/YamlPipelineDefinitionParser.java:
##
@@ -316,6 +318,10 @@ private TransformDef toTransformDef(JsonNode 
transformNode) {
 
Optional.ofNullable(transformNode.get(TRANSFORM_DESCRIPTION_KEY))
 .map(JsonNode::asText)
 .orElse(null);
+String convertorAfterTransform =

Review Comment:
   Docs should be added when the feature is settled down.






[jira] [Updated] (FLINK-36873) Adapting batch job progress recovery to Apache Celeborn

2024-12-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36873:
---
Labels: pull-request-available  (was: )

> Adapting batch job progress recovery to Apache Celeborn
> ---
>
> Key: FLINK-36873
> URL: https://issues.apache.org/jira/browse/FLINK-36873
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: xuhuang
>Assignee: Junrui Lee
>Priority: Major
>  Labels: pull-request-available
>
> I've identified several issues while attempting to enable Apache Celeborn to 
> support Flink batch job recovery.
> *1. RestoreState Invocation*
>  * The method _*{{ShuffleMaster#restoreState}}*_ should be triggered 
> regardless of whether the Flink job requires recovery.
>  * This method signifies that a Flink job needs to restore its state, but it 
> is currently called only after {_}*{{ShuffleMaster#registerJob}}*{_}.
>  * Consequently, it might not be invoked if the Flink job does not require 
> recovery.
>  * For Celeborn, this creates uncertainty regarding when to initialize 
> certain components; if the initialization occurs during 
> {*}_{{registerJob}}_{*}, it may lack essential information from the stored 
> snapshot, whereas if it takes place during {*}_{{restoreState}}_{*}, there is 
> a risk that it may not be invoked at all.
> *2. JobID Information Requirement* 
>  * Several methods in _*{{ShuffleMaster}}*_ should include _*JobID*_ 
> information: {*}_{{ShuffleMaster#supportsBatchSnapshot}}_{*}, 
> {_}*{{ShuffleMaster#snapshotState}}*{_}, and 
> {_}*{{ShuffleMaster#restoreState}}*{_}.
>  * These methods are intended for job-granularity state storage and 
> restoration, but they currently do not incorporate JobID.
>  * Consequently, Celeborn is unable to determine which job triggered these 
> calls.
> {*}3. Cluster granularity store/restore state{*}:
>  * Presently, _*{{ShuffleMaster}}*_ only offers job-granularity interfaces 
> for storing and restoring state, as the _*{{NettyShuffleService}}*_ is 
> stateless in terms of cluster granularity.
>  * However, _*{{Celeborn#ShuffleMaster}}*_ needs to communicate with the 
> Celeborn Master, necessitating the storage of certain cluster-level states, 
> such as {_}*{{CelebornAppId}}*{_}.
>  * In my opinion, the cluster-granularity store-state interface can be 
> executed after {_}*{{ShuffleMaster#start}}*{_}, with 
> _*{{ShuffleMaster#start}}*_ adding a snapshot parameter to restore the 
> cluster state.
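The proposed changes can be sketched as an API shape. All names and signatures here are hypothetical, not Flink's actual `ShuffleMaster` interface: every job-granularity call carries a `JobID`, and `start` accepts an optional cluster-level snapshot so cluster state (e.g. a Celeborn app id) can be restored before any job registers.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class ShuffleMasterSketch {

    // Illustrative stand-ins, not Flink's real classes.
    static final class JobID {
        final String id;
        JobID(String id) { this.id = id; }
    }

    static final class Snapshot {
        final String payload;
        Snapshot(String payload) { this.payload = payload; }
    }

    // Hypothetical interface shape reflecting the three points above.
    interface ShuffleMaster {
        void start(Optional<Snapshot> clusterSnapshot);     // cluster-level restore point
        boolean supportsBatchSnapshot(JobID jobId);         // now carries JobID
        Snapshot snapshotState(JobID jobId);                // now carries JobID
        void restoreState(JobID jobId, Snapshot snapshot);  // now carries JobID
    }

    // Minimal in-memory implementation to show the flow end to end.
    static class InMemoryShuffleMaster implements ShuffleMaster {
        Optional<Snapshot> clusterState = Optional.empty();
        final Map<String, Snapshot> jobState = new HashMap<>();

        public void start(Optional<Snapshot> clusterSnapshot) { clusterState = clusterSnapshot; }
        public boolean supportsBatchSnapshot(JobID jobId) { return true; }
        public Snapshot snapshotState(JobID jobId) { return jobState.get(jobId.id); }
        public void restoreState(JobID jobId, Snapshot snapshot) { jobState.put(jobId.id, snapshot); }
    }

    static String roundTrip() {
        InMemoryShuffleMaster master = new InMemoryShuffleMaster();
        master.start(Optional.of(new Snapshot("celeborn-app-1"))); // cluster state first
        JobID job = new JobID("job-42");
        master.restoreState(job, new Snapshot("job-progress"));
        return master.snapshotState(job).payload;
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // "job-progress"
    }
}
```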





Re: [PR] [FLINK-36919][table] Add missing dropTable/dropView methods to TableEnvironment [flink]

2024-12-17 Thread via GitHub


davidradl commented on code in PR #25810:
URL: https://github.com/apache/flink/pull/25810#discussion_r1888439721


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java:
##
@@ -1040,6 +1047,13 @@ void createTemporarySystemFunction(
  */
 boolean dropTemporaryView(String path);
 
+/**
+ * Drops a table in a given fully qualified path.

Review Comment:
   NIT:  table -> view






[PR] [FLINK-36873][runtime] Adapting batch job progress recovery to Apache Celeborn. [flink]

2024-12-17 Thread via GitHub


JunRuiLee opened a new pull request, #25811:
URL: https://github.com/apache/flink/pull/25811

   
   
   
   
   ## What is the purpose of the change
   
   [FLINK-36873][runtime] Adapting batch job progress recovery to Apache 
Celeborn.
   
   
   ## Brief change log
   
   Adapting batch job progress recovery to Apache Celeborn.
   
   ## Verifying this change
   
   
   This change added tests and can be verified by ShuffleMasterSnapshotUtilTest
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





Re: [PR] [FLINK-36873][runtime] Adapting batch job progress recovery to Apache Celeborn. [flink]

2024-12-17 Thread via GitHub


flinkbot commented on PR #25811:
URL: https://github.com/apache/flink/pull/25811#issuecomment-2548346343

   
   ## CI report:
   
   * b868c0a44f00036e62d0a37ac4f74b525ecb855c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-36919][table] Add missing dropTable/dropView methods to TableEnvironment [flink]

2024-12-17 Thread via GitHub


davidradl commented on code in PR #25810:
URL: https://github.com/apache/flink/pull/25810#discussion_r1888448176


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java:
##
@@ -661,6 +673,18 @@ public boolean dropTemporaryView(String path) {
 }
 }
 
+@Override
+public boolean dropView(String path) {
+UnresolvedIdentifier unresolvedIdentifier = 
getParser().parseIdentifier(path);
+ObjectIdentifier identifier = 
catalogManager.qualifyIdentifier(unresolvedIdentifier);
+try {
+catalogManager.dropView(identifier, false);
+return true;
+} catch (ValidationException e) {
+return false;

Review Comment:
   I guess that before the fix, this exception would have bubbled up through the 
Table API. After the fix it is swallowed. I am thinking we should log the 
exception here, so we have an indication of what failed. WDYT?  
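The logging suggestion can be sketched as follows (a hypothetical helper, not the actual `TableEnvironmentImpl` code; `java.util.logging` and `IllegalArgumentException` stand in here for Flink's logger and `ValidationException`):

```java
import java.util.logging.Logger;

public class DropViewSketch {

    private static final Logger LOG = Logger.getLogger(DropViewSketch.class.getName());

    // Stand-in for the catalog manager; the real dropView takes an
    // ObjectIdentifier and an ignoreIfNotExists flag.
    interface CatalogManager {
        void dropView(String path, boolean ignoreIfNotExists);
    }

    // Log before swallowing the exception so the failure stays traceable.
    static boolean dropView(CatalogManager catalogManager, String path) {
        try {
            catalogManager.dropView(path, false);
            return true;
        } catch (IllegalArgumentException e) {
            LOG.fine(() -> "Could not drop view '" + path + "': " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        CatalogManager failing = (p, ignore) -> {
            throw new IllegalArgumentException("no such view");
        };
        System.out.println(dropView(failing, "db.v")); // false, with a log entry
    }
}
```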






Re: [PR] [FLINK-36919][table] Add missing dropTable/dropView methods to TableEnvironment [flink]

2024-12-17 Thread via GitHub


twalthr commented on code in PR #25810:
URL: https://github.com/apache/flink/pull/25810#discussion_r1888450395


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java:
##
@@ -1030,6 +1030,13 @@ void createTemporarySystemFunction(
  */
 boolean dropTemporaryTable(String path);
 
+/**
+ * Drops a table in a given fully qualified path.

Review Comment:
   The path doesn't need to be fully qualified. We should copy the existing 
explanation to avoid mistakes.
   Also we need information about temporary vs. permanent objects. I guess it 
drops only permanent tables, not temporary ones. If a temporary table with the 
same path still exists, does it throw an exception?






Re: [PR] Flink 33117 [flink]

2024-12-17 Thread via GitHub


afedulov closed pull request #25812: Flink 33117
URL: https://github.com/apache/flink/pull/25812





Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-17 Thread via GitHub


snuyanzin commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1889030551


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java:
##
@@ -365,6 +368,38 @@ public static TimestampData toTimestampData(double v, int precision) {
 }
 }
 
+public static TimestampData toTimestampData(int v, int precision) {
+    switch (precision) {
+        case 0:
+            if (MIN_EPOCH_SECONDS <= v && v <= MAX_EPOCH_SECONDS) {
+                return timestampDataFromEpochMills((v * MILLIS_PER_SECOND));
+            } else {
+                return null;
+            }
+        case 3:
+            return timestampDataFromEpochMills(v);
+        default:
+            throw new TableException(
+                    "The precision value '"
+                            + precision
+                            + "' for function TO_TIMESTAMP_LTZ(numeric, precision) is unsupported,"
+                            + " the supported value is '0' for second or '3' for millisecond.");
+    }
+}

Review Comment:
   Once the feedback under comment 
https://github.com/apache/flink/pull/25763/files#r1889025297 is addressed, I 
think we can remove this method.
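To make the precision semantics concrete, here is a hedged, standalone sketch of the same seconds-vs-milliseconds branching. The range bounds and names are placeholders, not DateTimeUtils' actual constants:

```java
// Illustrative only: mirrors the branching in the snippet above.
class EpochPrecision {
    static final long MILLIS_PER_SECOND = 1000L;

    // Placeholder bounds (years 0000..9999); Flink defines its own limits.
    static final long MIN_EPOCH_SECONDS = -62167219200L;
    static final long MAX_EPOCH_SECONDS = 253402300799L;

    // Returns epoch milliseconds, or null if a seconds value is out of range.
    public static Long toEpochMillis(long v, int precision) {
        switch (precision) {
            case 0: // value is interpreted as epoch seconds
                if (MIN_EPOCH_SECONDS <= v && v <= MAX_EPOCH_SECONDS) {
                    return v * MILLIS_PER_SECOND;
                }
                return null;
            case 3: // value is already epoch milliseconds
                return v;
            default:
                throw new IllegalArgumentException(
                        "Unsupported precision " + precision + "; supported values are 0 and 3.");
        }
    }
}
```

So `TO_TIMESTAMP_LTZ(5, 0)` and `TO_TIMESTAMP_LTZ(5000, 3)` would denote the same instant under this scheme.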






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-17 Thread via GitHub


snuyanzin commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1889033882


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java:
##
@@ -421,6 +456,21 @@ public static TimestampData parseTimestampData(String dateStr, int precision, Ti
 .toInstant());
 }
 
+public static TimestampData parseTimestampData(String dateStr, String format, String timezone) {
+    if (dateStr == null || format == null || timezone == null) {
+        return null;
+    }
+
+    TimestampData ts = parseTimestampData(dateStr, format);
+    if (ts == null) {
+        return null;
+    }
+
+    ZonedDateTime utcZoned = ts.toLocalDateTime().atZone(ZoneId.of("UTC"));

Review Comment:
   Since it is always the same, it could be extracted into a constant.
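As a side note, the kind of conversion the snippet performs can be sketched with plain `java.time`. The method shape below is illustrative, not DateTimeUtils itself: it parses a timestamp string with a pattern, interprets the wall-clock result in a given time zone, and takes the corresponding instant:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Illustrative sketch: parse with a pattern, then interpret the wall-clock
// result in the given time zone to obtain an epoch instant.
class ZonedParseSketch {
    public static long parseToEpochMillis(String dateStr, String pattern, String timezone) {
        LocalDateTime local = LocalDateTime.parse(dateStr, DateTimeFormatter.ofPattern(pattern));
        return local.atZone(ZoneId.of(timezone)).toInstant().toEpochMilli();
    }
}
```

A `ZoneId.of("UTC")` used on every call, as in the reviewed code, could indeed be hoisted into a `private static final` constant.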






[jira] [Updated] (FLINK-35721) I found out that in the Flink SQL documentation it says that Double type cannot be converted to Boolean type, but in reality, it can.

2024-12-17 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov updated FLINK-35721:
--
Fix Version/s: 1.20.1

> I found out that in the Flink SQL documentation it says that Double type 
> cannot be converted to Boolean type, but in reality, it can.
> -
>
> Key: FLINK-35721
> URL: https://issues.apache.org/jira/browse/FLINK-35721
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.1
>Reporter: jinzhuguang
>Priority: Minor
> Fix For: 1.19.2, 1.20.1
>
> Attachments: image-2024-06-28-16-57-54-354.png
>
>
> I found out that in the Flink SQL documentation it says that Double type 
> cannot be converted to Boolean type, but in reality, it can.
> Related code: 
> org.apache.flink.table.planner.functions.casting.NumericToBooleanCastRule#generateExpression
> Related document url: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/types/]
> !image-2024-06-28-16-57-54-354.png|width=378,height=342!





Re: [PR] [FLINK-33117][table][docs] Fix scala example in udfs page [flink]

2024-12-17 Thread via GitHub


afedulov merged PR #23439:
URL: https://github.com/apache/flink/pull/23439





Re: [PR] [FLINK-33921][table] Cleanup deprecated IdleStateRetentionTime related method in org.apache.flink.table.api.TableConfig [flink]

2024-12-17 Thread via GitHub


lincoln-lil merged PR #23980:
URL: https://github.com/apache/flink/pull/23980





[jira] [Closed] (FLINK-33921) Cleanup deprecated IdleStateRetentionTime related method in org.apache.flink.table.api.TableConfig

2024-12-17 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-33921.
---
Resolution: Fixed

fixed in master: 6248fd4fd61a0d76f1b6160abff19fdfdc0f4c63

> Cleanup deprecated IdleStateRetentionTime related method in 
> org.apache.flink.table.api.TableConfig
> --
>
> Key: FLINK-33921
> URL: https://issues.apache.org/jira/browse/FLINK-33921
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 2.0.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> getMinIdleStateRetentionTime()
> getMaxIdleStateRetentionTime()





Re: [PR] [FLINK-36227] Restore compatibility with Logback 1.2 [flink]

2024-12-17 Thread via GitHub


supalle commented on PR #25813:
URL: https://github.com/apache/flink/pull/25813#issuecomment-2550105113

   I need it too





Re: [PR] [FLINK-36879][runtime] Support to convert delete as insert in transform [flink-cdc]

2024-12-17 Thread via GitHub


ruanhang1993 commented on code in PR #3804:
URL: https://github.com/apache/flink-cdc/pull/3804#discussion_r1889487420


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/transform/converter/PostTransformConverter.java:
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.common.transform.converter;
+
+import org.apache.flink.cdc.common.annotation.Experimental;
+import org.apache.flink.cdc.common.event.DataChangeEvent;
+import org.apache.flink.cdc.common.utils.StringUtils;
+
+import java.io.Serializable;
+import java.util.Optional;
+
+/**
+ * The PostTransformConverter is applied to convert the {@link DataChangeEvent} after the
+ * other parts of the TransformRule in PostTransformOperator.
+ */
+@Experimental
+public interface PostTransformConverter extends Serializable {
+
+    Optional<DataChangeEvent> convert(DataChangeEvent dataChangeEvent);

Review Comment:
   This PR will only introduce the SOFT_DELETE converter, and I will revert some 
changes.






Re: [PR] [FLINK-31691][table] Add built-in MAP_FROM_ENTRIES function. [flink]

2024-12-17 Thread via GitHub


liuyongvs commented on PR #22745:
URL: https://github.com/apache/flink/pull/22745#issuecomment-2550125041

   Hi @snuyanzin, I rebased to fix the conflict and squashed the commits. Will 
you help take a look again?
   
   





Re: [PR] [FLINK-36925][table] Introduce SemiAntiJoinOperator in Join with Async State API [flink]

2024-12-17 Thread via GitHub


flinkbot commented on PR #25814:
URL: https://github.com/apache/flink/pull/25814#issuecomment-2550439863

   
   ## CI report:
   
   * 1f3e464799a52457b45e23b6bdb174a3fc9c13d7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-36575][runtime] ExecutionVertexInputInfo supports consuming subpartition groups [flink]

2024-12-17 Thread via GitHub


noorall commented on PR #25551:
URL: https://github.com/apache/flink/pull/25551#issuecomment-2550443470

   @JunRuiLee Thanks for the review. The PR has been updated based on your comments.





Re: [PR] [hotfix] Fix Java 11 target compatibility & add tests [flink-cdc]

2024-12-17 Thread via GitHub


yuxiqian commented on PR #3633:
URL: https://github.com/apache/flink-cdc/pull/3633#issuecomment-2550456391

   Thanks for @GOODBOY008's review, fixed and rebased.





Re: [PR] upgrade-tikv-client [flink-cdc]

2024-12-17 Thread via GitHub


github-actions[bot] commented on PR #3654:
URL: https://github.com/apache/flink-cdc/pull/3654#issuecomment-2549943981

   This pull request has been automatically marked as stale because it has not 
had recent activity for 60 days. It will be closed in 30 days if no further 
activity occurs.





Re: [PR] [FLINK-36564][ci] Running CI in random timezone to expose more time related bugs. [flink-cdc]

2024-12-17 Thread via GitHub


github-actions[bot] commented on PR #3650:
URL: https://github.com/apache/flink-cdc/pull/3650#issuecomment-2549944007

   This pull request has been automatically marked as stale because it has not 
had recent activity for 60 days. It will be closed in 30 days if no further 
activity occurs.





[jira] [Created] (FLINK-36925) Introduce SemiAntiJoinOperator in Join with Async State API

2024-12-17 Thread Wang Qilong (Jira)
Wang Qilong created FLINK-36925:
---

 Summary: Introduce SemiAntiJoinOperator in Join with Async State 
API
 Key: FLINK-36925
 URL: https://issues.apache.org/jira/browse/FLINK-36925
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Runtime
Affects Versions: 2.0.0
Reporter: Wang Qilong








[PR] [FLINK-36925][table] Introduce SemiAntiJoinOperator in Join with Async State API [flink]

2024-12-17 Thread via GitHub


Au-Miner opened a new pull request, #25814:
URL: https://github.com/apache/flink/pull/25814

   ## What is the purpose of the change
   
   Introduce SemiAntiJoinOperator in Join with async state api.
   
   ## Brief change log
   
 - *Introduce SemiAntiJoinOperator in Join with async state api.*
 - *Add ITs and StreamingSemiAntiJoinOperatorTest*
   
   
   ## Verifying this change
   
   Existent tests and new added tests can verify this change.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)





Re: [PR] [hotfix] Fix Java 11 target compatibility & add tests [flink-cdc]

2024-12-17 Thread via GitHub


GOODBOY008 commented on code in PR #3633:
URL: https://github.com/apache/flink-cdc/pull/3633#discussion_r1889667552


##
.github/workflows/flink_cdc_migration_test_base.yml:
##
@@ -13,22 +13,36 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-name: Migration Tests
+name: Flink CDC Migration Test Base Workflow

Review Comment:
   Keep the CI name short.



##
.github/workflows/flink_cdc_java_8.yml:
##
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-name: Flink CDC CI
+name: CI (Java 8)

Review Comment:
   I think there is no need to change this name.



##
.github/workflows/flink_cdc_java_11.yml:
##
@@ -0,0 +1,77 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: CI (Java 11, Experimental)

Review Comment:
   Please add a `Flink CDC` prefix, because ASF Infra has a [dashboard](url) to 
summarize CI usage. BTW, you can just name the CI `Flink CDC CI Nightly`.
   (screenshot: https://github.com/user-attachments/assets/236fe390-5119-4eca-8d0d-38fbe5ac6c0b)






Re: [PR] [FLINK-34944] Use Incremental Source Framework in Flink CDC OceanBase Source Connector [flink-cdc]

2024-12-17 Thread via GitHub


ChaomingZhangCN commented on PR #3211:
URL: https://github.com/apache/flink-cdc/pull/3211#issuecomment-2550294278

   > @whhe RichSourceFunction is a deprecated interface. Can we implement this 
using the new Source interface?
   
   JdbcIncrementalSource implements the new Source interface :)





[jira] [Created] (FLINK-36922) Add warning when creating KryoSerializer for generic type

2024-12-17 Thread LEONID ILYEVSKY (Jira)
LEONID ILYEVSKY created FLINK-36922:
---

 Summary: Add warning when creating KryoSerializer for generic type
 Key: FLINK-36922
 URL: https://issues.apache.org/jira/browse/FLINK-36922
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Reporter: LEONID ILYEVSKY


In 
[https://github.com/apache/flink/blob/fbf532e213882369494ee0f8595814a60de999bd/flink-core/src/main/java/org/apache/flink/api/java/typeutils/GenericTypeInfo.java#L84]
 
the code throws an exception when the type is treated as generic and generic 
types are disabled.
It would be very helpful if this function logged a warning every time, so that 
we can see all generic types for which KryoSerializer is used. This way we can 
enable generic types and examine them all by looking in the log.
So it would make sense just to add a warning log statement before line 85.

Currently the only way to see the problematic generic type is to disable them 
and look at the exception, but this shows only the first one. We need to see 
all.

Another workaround is to use the debugger, with a breakpoint on line 85. This is 
pretty tedious, and cannot be used in a runtime environment.





[jira] [Updated] (FLINK-36907) Bump flink-connector-aws to 1.20

2024-12-17 Thread Yanquan Lv (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanquan Lv updated FLINK-36907:
---
Parent: FLINK-36641
Issue Type: Sub-task  (was: New Feature)

> Bump flink-connector-aws to 1.20
> 
>
> Key: FLINK-36907
> URL: https://issues.apache.org/jira/browse/FLINK-36907
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Yanquan Lv
>Priority: Major
>
> flink-connector-aws has released 5.0.0, which supports Flink 1.19 and 1.20; 
> considering that Flink is also preparing to release version 2.0, we should 
> bump to 1.20 to keep up with the new version of the API.





[jira] [Updated] (FLINK-36887) Add support for Flink 1.20 in Flink Hive connector

2024-12-17 Thread Yanquan Lv (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanquan Lv updated FLINK-36887:
---
Parent: FLINK-36641
Issue Type: Sub-task  (was: New Feature)

> Add support for Flink 1.20 in Flink Hive connector 
> ---
>
> Key: FLINK-36887
> URL: https://issues.apache.org/jira/browse/FLINK-36887
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Yanquan Lv
>Assignee: Yanquan Lv
>Priority: Major
>  Labels: pull-request-available
>
> Flink 1.20 has been released for several months and is the last minor 
> release; we should provide a release that supports 1.20 for users who want to 
> bump to the *Long Term Support* version. 





[jira] [Updated] (FLINK-36641) Support and Release Connectors for Flink 1.20

2024-12-17 Thread Yanquan Lv (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanquan Lv updated FLINK-36641:
---
Description: 
This is a parent task to trace all external connectors support and release for 
Flink 1.20

Before task that traced all external connectors support and release for Flink 
1.19:
https://issues.apache.org/jira/browse/FLINK-35131

  was:This is a parent task to trace all external connectors support and 
release for Flink 1.20


> Support and Release Connectors for Flink 1.20
> -
>
> Key: FLINK-36641
> URL: https://issues.apache.org/jira/browse/FLINK-36641
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.20.0
>Reporter: Yanquan Lv
>Assignee: Yanquan Lv
>Priority: Major
>
> This is a parent task to trace all external connectors support and release 
> for Flink 1.20
> Before task that traced all external connectors support and release for Flink 
> 1.19:
> https://issues.apache.org/jira/browse/FLINK-35131




