Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886088951


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala:
##
@@ -1158,7 +1157,7 @@ class TemporalTypesTest extends ExpressionTestBase {
 testAllApis(
   toTimestampLtz(100.01.cast(DataTypes.FLOAT()), 0),
   "TO_TIMESTAMP_LTZ(CAST(100.01 AS FLOAT), 0)",
-  "1970-01-01 08:01:40.010")
+  "1970-01-01 08:01:40.000")

Review Comment:
   Hi @snuyanzin, the 010 is from the original test case. I reverted it back 
to 010 and the test passes now. 
   I added a case with precision 3 to the new Java test. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886085683


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/TimeFunctionsITCase.java:
##
@@ -783,8 +790,15 @@ private Stream toTimestampLtzTestCases() {
 .testResult(
 toTimestampLtz($("f4"), literal(0)),
 "TO_TIMESTAMP_LTZ(-" + Double.MAX_VALUE + ", 
0)",
-null, // expecting NULL result
-TIMESTAMP_LTZ(0).nullable())
+null,
+TIMESTAMP_LTZ(3).nullable())
+.testResult(

Review Comment:
   To address [this 
comment](https://github.com/apache/flink/pull/25763#discussion_r1884184421), 
added a FLOAT case with precision 3.






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886084382


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -2332,7 +2332,8 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
 public static final BuiltInFunctionDefinition TO_TIMESTAMP_LTZ =
 BuiltInFunctionDefinition.newBuilder()
-.name("TO_TIMESTAMP_LTZ")
+.name("toTimestampLtz")
+.sqlName("TO_TIMESTAMP_LTZ")

Review Comment:
   Reverted the function name change. Added the necessary logic to register 
the function. 






[jira] [Commented] (FLINK-36876) Operator Heap Memory Leak

2024-12-15 Thread chenyuzhi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17905885#comment-17905885
 ] 

chenyuzhi commented on FLINK-36876:
---

Well, how about rewriting a new RestClient/RestClusterClient (solution 2 
mentioned above) in the Operator as a temporary solution before the Flink core 
change is released?

 

 

> Operator Heap Memory Leak
> -
>
> Key: FLINK-36876
> URL: https://issues.apache.org/jira/browse/FLINK-36876
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.6.1, kubernetes-operator-1.10.0
> Environment:  
> Flink Operator Version:   1.6.1 (I think the latest version 1.10 has the same 
> problem)
>  
> JDK:  openjdk version "11.0.24" 
>  
> GC: G1
>  
> FlinkDeployment amount:  3000+
>  
>Reporter: chenyuzhi
>Assignee: chenyuzhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-12-10-15-57-00-309.png, 
> image-2024-12-10-16-09-22-749.png, image-2024-12-10-16-11-21-830.png, 
> screenshot-1.png
>
>
> When the number of FlinkDeployments increases, the heap memory used by the 
> Kubernetes Operator keeps increasing.
> Even after an old-generation GC, the used heap memory does not decrease as 
> expected. 
> !image-2024-12-10-15-57-00-309.png|width=443,height=541!
> Finally, the Kubernetes Operator was OOMKilled by the OS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886085970


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala:
##
@@ -1140,41 +1140,41 @@ class TemporalTypesTest extends ExpressionTestBase {
 tableConfig.setLocalTimeZone(ZoneId.of("Asia/Shanghai"))
 
 // INT -> TIMESTAMP_LTZ
-testAllApis(toTimestampLtz(100, 0), "TO_TIMESTAMP_LTZ(100, 0)", 
"1970-01-01 08:01:40")
+testAllApis(toTimestampLtz(100, 0), "TO_TIMESTAMP_LTZ(100, 0)", 
"1970-01-01 08:01:40.000")
 
 // TINYINT -> TIMESTAMP_LTZ
 testAllApis(
   toTimestampLtz(100.cast(DataTypes.TINYINT()), 0),
   "TO_TIMESTAMP_LTZ(CAST(100 AS TINYINT), 0)",
-  "1970-01-01 08:01:40")
+  "1970-01-01 08:01:40.000")
 
 // BIGINT -> TIMESTAMP_LTZ
 testAllApis(
   toTimestampLtz(100.cast(DataTypes.BIGINT()), 0),
   "TO_TIMESTAMP_LTZ(CAST(100 AS BIGINT), 0)",
-  "1970-01-01 08:01:40")
+  "1970-01-01 08:01:40.000")
 
 // FLOAT -> TIMESTAMP_LTZ
 testAllApis(
   toTimestampLtz(100.01.cast(DataTypes.FLOAT()), 0),
   "TO_TIMESTAMP_LTZ(CAST(100.01 AS FLOAT), 0)",
-  "1970-01-01 08:01:40")
+  "1970-01-01 08:01:40.010")

Review Comment:
   Reverted to the original test case. 






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886090826


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ToTimestampLtzFunction.java:
##
@@ -92,74 +121,72 @@ public TimestampData eval(Integer epoch) {
 return null;
 }
 
-return eval(epoch, 3);

Review Comment:
   Replying to this comment:
   This is not possible, because we handle exact numeric and fractional data 
differently; therefore, we can't use the umbrella type Number for them. 
   I keep them as is to be consistent with the existing logic. 
   @snuyanzin, if you have a better way, please shed some light here. 
   
   
   > Can not we replace a number of these duplicating methods with something 
like
   > 
   > public TimestampData eval(Number epoch) {
   > if (epoch == null) {
   > return null;
   > }
   > 
   > return eval(epoch, 3);
   > }
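For readers following the thread, the exact-vs-fractional distinction above can be sketched in a few lines. This is an illustrative stand-alone example, not the Flink implementation, and the method names are hypothetical:

```java
// A minimal sketch of why exact and fractional numerics take separate code
// paths: exact numerics carry whole seconds, while fractional numerics
// carry sub-second digits that must survive the conversion, so a single
// eval(Number) overload would still have to branch on the runtime type.
public class EpochConversionSketch {
    static final long MILLIS_PER_SECOND = 1000L;

    // Exact numerics (TINYINT/INT/BIGINT): whole seconds.
    static long exactToMillis(long epochSeconds) {
        return epochSeconds * MILLIS_PER_SECOND;
    }

    // Fractional numerics (FLOAT/DOUBLE/DECIMAL): keep sub-second digits.
    static long fractionalToMillis(double epochSeconds) {
        return Math.round(epochSeconds * MILLIS_PER_SECOND);
    }

    public static void main(String[] args) {
        System.out.println(exactToMillis(100L));        // 100000
        System.out.println(fractionalToMillis(100.01)); // 100010
    }
}
```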
   






[PR] [hotfix][configuration] Remove the deprecated test class TaskManagerLoadBalanceModeTest. [flink]

2024-12-15 Thread via GitHub


RocMarshal opened a new pull request, #25800:
URL: https://github.com/apache/flink/pull/25800

   
   
   
   ## What is the purpose of the change
   
   [hotfix][configuration] Remove the deprecated test class 
TaskManagerLoadBalanceModeTest.
   
   
   ## Brief change log
   
   [hotfix][configuration] Remove the deprecated test class 
TaskManagerLoadBalanceModeTest.
   
   
   ## Verifying this change
   
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





Re: [PR] [hotfix][configuration] Remove the deprecated test class TaskManagerLoadBalanceModeTest. [flink]

2024-12-15 Thread via GitHub


flinkbot commented on PR #25800:
URL: https://github.com/apache/flink/pull/25800#issuecomment-2544698974

   
   ## CI report:
   
   * 433c2ec4dddc3a77448feed1bddc1585c13fef35 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-36892][Runtime] Properly handle the watermark status within async state processing [flink]

2024-12-15 Thread via GitHub


Zakelly commented on PR #25792:
URL: https://github.com/apache/flink/pull/25792#issuecomment-2544427450

   Thanks for the review! Merging...





Re: [PR] [FLINK-36759][sql-gateway] Add REST API to deploy script in application mode [flink]

2024-12-15 Thread via GitHub


hackergin commented on code in PR #25730:
URL: https://github.com/apache/flink/pull/25730#discussion_r1886088606


##
flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/service/SqlGatewayServiceImpl.java:
##
@@ -330,6 +336,41 @@ public OperationHandle refreshMaterializedTable(
 }
 }
 
+@Override
+public <ClusterID> ClusterID deployScript(
+SessionHandle sessionHandle,
+@Nullable Path scriptPath,
+@Nullable String script,
+Configuration executionConfig)
+throws SqlGatewayException {
+Session session = sessionManager.getSession(sessionHandle);
+if (scriptPath == null && script == null) {
+throw new IllegalArgumentException("Please specify script path or 
script.");
+}
+Configuration mergedConfig = 
Configuration.fromMap(session.getSessionConfig());
+mergedConfig.addAll(executionConfig);
+
+List<String> arguments = new ArrayList<>();
+if (scriptPath != null) {
+arguments.add("--" + SqlDriver.OPTION_SQL_FILE.getLongOpt());
+arguments.add(scriptPath.toString());
+}
+if (script != null) {
+arguments.add("--" + SqlDriver.OPTION_SQL_SCRIPT.getLongOpt());
+arguments.add(script);
+}
+
+ApplicationConfiguration applicationConfiguration =
+new ApplicationConfiguration(
+arguments.toArray(new String[0]), 
SqlDriver.class.getName());
+try {
+return new ApplicationClusterDeployer(new 
DefaultClusterClientServiceLoader())
+.run(mergedConfig, applicationConfiguration);
+} catch (Exception e) {
+throw new SqlGatewayException(e);

Review Comment:
   Consider using log.error to log the details before throwing an exception, 
and include relevant context information in the exception message. This would 
ensure consistency with other functions and facilitate troubleshooting. 
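The log-then-rethrow pattern suggested here can be sketched as follows. This is a hedged, stand-alone illustration: `GatewayException` is a hypothetical stand-in for `SqlGatewayException`, and `java.util.logging` is used only so the sketch runs without dependencies (Flink itself logs via SLF4J):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DeployErrorHandlingSketch {
    private static final Logger LOG =
            Logger.getLogger(DeployErrorHandlingSketch.class.getName());

    /** Hypothetical stand-in for SqlGatewayException. */
    static class GatewayException extends RuntimeException {
        GatewayException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    static void deployScript(String sessionId, String scriptPath) {
        try {
            // Simulated deployment failure.
            throw new IllegalStateException("cluster unreachable");
        } catch (Exception e) {
            // Log the details with context before rethrowing, and carry the
            // same context in the exception message, as the review suggests.
            String msg = String.format(
                    "Failed to deploy script '%s' for session %s", scriptPath, sessionId);
            LOG.log(Level.SEVERE, msg, e);
            throw new GatewayException(msg, e);
        }
    }

    public static void main(String[] args) {
        try {
            deployScript("session-1", "/tmp/job.sql");
        } catch (GatewayException e) {
            System.out.println(e.getMessage());
        }
    }
}
```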






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886088390


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ToTimestampLtzTypeStrategy.java:
##
@@ -23,19 +23,54 @@
 import org.apache.flink.table.types.DataType;
 import org.apache.flink.table.types.inference.CallContext;
 import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
 
+import java.util.List;
 import java.util.Optional;
 
 /** Type strategy of {@code TO_TIMESTAMP_LTZ}. */
 @Internal
 public class ToTimestampLtzTypeStrategy implements TypeStrategy {
 
+private static final int DEFAULT_PRECISION = 3;
+
 @Override
 public Optional<DataType> inferType(CallContext callContext) {
-if (callContext.isArgumentLiteral(1)) {
-final int precision = callContext.getArgumentValue(1, 
Integer.class).get();
-return Optional.of(DataTypes.TIMESTAMP_LTZ(precision));
+List<DataType> argumentTypes = callContext.getArgumentDataTypes();

Review Comment:
   Added unit tests. 
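For context, the default-precision inference under discussion can be sketched as below. The class and method names are illustrative only, not Flink's actual type-strategy classes: the point is that a literal second argument determines the `TIMESTAMP_LTZ` precision, otherwise a default of 3 applies.

```java
import java.util.List;
import java.util.Optional;

// Simplified model of the inference rule: if a precision literal is
// supplied as the second argument it is used, otherwise the default (3).
public class TypeStrategySketch {
    static final int DEFAULT_PRECISION = 3;

    static Optional<String> inferType(List<?> args) {
        if (args.size() >= 2 && args.get(1) instanceof Integer) {
            return Optional.of("TIMESTAMP_LTZ(" + args.get(1) + ")");
        }
        return Optional.of("TIMESTAMP_LTZ(" + DEFAULT_PRECISION + ")");
    }

    public static void main(String[] args) {
        System.out.println(inferType(List.of(100, 0))); // Optional[TIMESTAMP_LTZ(0)]
        System.out.println(inferType(List.of(100)));    // Optional[TIMESTAMP_LTZ(3)]
    }
}
```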






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886084382


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -2332,7 +2332,8 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
 public static final BuiltInFunctionDefinition TO_TIMESTAMP_LTZ =
 BuiltInFunctionDefinition.newBuilder()
-.name("TO_TIMESTAMP_LTZ")
+.name("toTimestampLtz")
+.sqlName("TO_TIMESTAMP_LTZ")

Review Comment:
   Reverted the function name change. Added the necessary logic to register 
the function in this commit. 



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -2332,7 +2332,8 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
 public static final BuiltInFunctionDefinition TO_TIMESTAMP_LTZ =
 BuiltInFunctionDefinition.newBuilder()
-.name("TO_TIMESTAMP_LTZ")
+.name("toTimestampLtz")
+.sqlName("TO_TIMESTAMP_LTZ")

Review Comment:
   Reverted the function name change. Added the necessary logic to register 
the function in this commit. 






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886083289


##
docs/data/sql_functions.yml:
##
@@ -679,9 +679,15 @@ temporal:
   - sql: TO_DATE(string1[, string2])
 table: toDate(STRING1[, STRING2])
 description: Converts a date string string1 with format string2 (by 
default 'yyyy-MM-dd') to a date.
-  - sql: TO_TIMESTAMP_LTZ(numeric, precision)
+  - sql: TO_TIMESTAMP_LTZ(numeric[, precision])

Review Comment:
   Added the doc to `sql_functions_zh.yml`. 






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886094264


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ToTimestampLtzFunction.java:
##
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.utils.DateTimeUtils;
+
+/**
+ * Implementation of {@link BuiltInFunctionDefinitions#TO_TIMESTAMP_LTZ}.
+ *
+ * <p>Supported function signatures:
+ *
+ * <ul>
+ *   <li>TO_TIMESTAMP_LTZ(numeric, precision) -> TIMESTAMP_LTZ(precision)
+ *   <li>TO_TIMESTAMP_LTZ(string) -> TIMESTAMP_LTZ(3)
+ *   <li>TO_TIMESTAMP_LTZ(string, format) -> TIMESTAMP_LTZ(3)
+ *   <li>TO_TIMESTAMP_LTZ(string, format, timezone) -> TIMESTAMP_LTZ(3)
+ *   <li>TO_TIMESTAMP_LTZ(numeric) -> TIMESTAMP_LTZ(3)
+ * </ul>
+ */
+@Internal
+public class ToTimestampLtzFunction extends BuiltInScalarFunction {
+
+public ToTimestampLtzFunction(SpecializedFunction.SpecializedContext 
context) {
+super(BuiltInFunctionDefinitions.TO_TIMESTAMP_LTZ, context);
+}
+
+public TimestampData eval(Integer epoch, Integer precision) {
+if (epoch == null || precision == null) {
+return null;
+}
+
+return DateTimeUtils.toTimestampData(epoch, precision);
+}
+
+public TimestampData eval(Long epoch, Integer precision) {
+if (epoch == null || precision == null) {
+return null;
+}
+
+return DateTimeUtils.toTimestampData(epoch, precision);
+}
+
+public TimestampData eval(Double epoch, Integer precision) {
+if (epoch == null || precision == null) {
+return null;
+}
+
+return DateTimeUtils.toTimestampData(epoch, precision);
+}
+
+public TimestampData eval(Float value, Integer precision) {
+if (value == null || precision == null) {
+return null;
+}
+return DateTimeUtils.toTimestampData(value.longValue(), precision);
+}
+
+public TimestampData eval(Byte value, Integer precision) {
+if (value == null || precision == null) {
+return null;
+}
+return DateTimeUtils.toTimestampData(value.longValue(), precision);
+}
+
+public TimestampData eval(DecimalData epoch, Integer precision) {
+if (epoch == null || precision == null) {
+return null;
+}
+
+return DateTimeUtils.toTimestampData(epoch, precision);
+}
+
+public TimestampData eval(Integer epoch) {
+if (epoch == null) {
+return null;
+}
+
+return eval(epoch, 3);
+}
+
+public TimestampData eval(Long epoch) {
+if (epoch == null) {
+return null;
+}
+
+return eval(epoch, 3);
+}
+
+public TimestampData eval(Float epoch) {
+if (epoch == null) {
+return null;
+}
+
+return eval(epoch, 3);
+}

Review Comment:
   Address here: 
https://github.com/apache/flink/pull/25763/commits/533532547fc7ccf84e3ec6ee163597d3d61afcc2#r1886090826






Re: [PR] [FLINK-36904] Fix document error on how to programmatically configure serialization [flink]

2024-12-15 Thread via GitHub


reswqa merged PR #25795:
URL: https://github.com/apache/flink/pull/25795





[jira] [Updated] (FLINK-36906) Optimize the logic for determining if a split is finished.

2024-12-15 Thread xiaochen.zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaochen.zhou updated FLINK-36906:
--
Description: When determining if a split is finished, process only the 
partitions with data from the current fetch instead of all partitions. This 
reduces unnecessary partition checks and improves performance and resource 
utilization.
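The optimization described above can be sketched as follows. Class, field, and method names here are hypothetical, not the connector's actual code; the key point is that only partitions present in the current fetch are examined against their stopping offsets:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Check split completion against only the partitions that actually
// returned records in this fetch, instead of scanning every assigned
// partition each time.
public class SplitFinishCheckSketch {
    // partition -> stopping offset (assigned when the split was added)
    static final Map<String, Long> STOPPING_OFFSETS = new HashMap<>();

    static Set<String> finishedSplits(Map<String, Long> offsetsOfFetchedPartitions) {
        Set<String> finished = new HashSet<>();
        // Only partitions present in this fetch are examined.
        for (Map.Entry<String, Long> e : offsetsOfFetchedPartitions.entrySet()) {
            Long stop = STOPPING_OFFSETS.get(e.getKey());
            if (stop != null && e.getValue() >= stop) {
                finished.add(e.getKey());
            }
        }
        return finished;
    }

    public static void main(String[] args) {
        STOPPING_OFFSETS.put("topic-0", 100L);
        STOPPING_OFFSETS.put("topic-1", 100L);
        // The fetch returned data only for topic-0; topic-1 is not checked.
        System.out.println(finishedSplits(Map.of("topic-0", 120L))); // [topic-0]
    }
}
```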

> Optimize the logic for determining if a split is finished.
> --
>
> Key: FLINK-36906
> URL: https://issues.apache.org/jira/browse/FLINK-36906
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: xiaochen.zhou
>Priority: Minor
>  Labels: pull-request-available
>
> When determining if a split is finished, process only the partitions with 
> data from the current fetch instead of all partitions. This reduces 
> unnecessary partition checks and improves performance and resource 
> utilization.





[jira] [Updated] (FLINK-36906) Optimize the logic for determining if a split is finished.

2024-12-15 Thread xiaochen.zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaochen.zhou updated FLINK-36906:
--
Labels: pull-request-available  (was: )

> Optimize the logic for determining if a split is finished.
> --
>
> Key: FLINK-36906
> URL: https://issues.apache.org/jira/browse/FLINK-36906
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: xiaochen.zhou
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Updated] (FLINK-36906) Optimize the logic for determining if a split is finished.

2024-12-15 Thread xiaochen.zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaochen.zhou updated FLINK-36906:
--
Priority: Minor  (was: Major)

> Optimize the logic for determining if a split is finished.
> --
>
> Key: FLINK-36906
> URL: https://issues.apache.org/jira/browse/FLINK-36906
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: xiaochen.zhou
>Priority: Minor
>  Labels: pull-request-available
>






[jira] [Updated] (FLINK-36906) Optimize the logic for determining if a split is finished.

2024-12-15 Thread xiaochen.zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaochen.zhou updated FLINK-36906:
--
Component/s: Connectors / Kafka

> Optimize the logic for determining if a split is finished.
> --
>
> Key: FLINK-36906
> URL: https://issues.apache.org/jira/browse/FLINK-36906
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: xiaochen.zhou
>Priority: Major
>






[jira] [Created] (FLINK-36906) Optimize the logic for determining if a split is finished.

2024-12-15 Thread xiaochen.zhou (Jira)
xiaochen.zhou created FLINK-36906:
-

 Summary: Optimize the logic for determining if a split is finished.
 Key: FLINK-36906
 URL: https://issues.apache.org/jira/browse/FLINK-36906
 Project: Flink
  Issue Type: Improvement
Reporter: xiaochen.zhou








Re: [PR] [FLINK-36906] Optimize the logic for determining if a split is finished [flink-connector-kafka]

2024-12-15 Thread via GitHub


xiaochen-zhou commented on PR #141:
URL: 
https://github.com/apache/flink-connector-kafka/pull/141#issuecomment-2544655796

   Friendly ping, do you have time to take a look @AHeise 🙏 ?
   
   





Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886093369


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java:
##
@@ -385,6 +388,18 @@ public static TimestampData toTimestampData(int v, int 
precision) {
 }
 }
 

Review Comment:
   To address this comment, I replaced 3 with a constant. 
   After giving it some thought, I believe it's better to keep these functions 
in DateTimeUtils.java rather than moving them to toTimestampLtzFunction, since 
toTimestampLtzFunction already depends on DateTimeUtils.java as a library. What 
are your thoughts on this? @snuyanzin 
   
   > from any user outside of TO_TIMESTAMP_LTZ 3 here is a magic number.
   > I would suggest to have such calls only in your toTimestampLtzFunction 
class and don't put them here
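The layering being discussed (named constant instead of a literal 3, defaults resolved in the function class, conversion kept in the utility class) can be sketched like this. All names are illustrative stand-ins, not Flink's actual code:

```java
// Sketch of the constant extraction: the utility layer stands in for
// DateTimeUtils, the function layer for ToTimestampLtzFunction; the
// function layer supplies the named default instead of a magic 3.
public class PrecisionLayeringSketch {
    /** Default TIMESTAMP_LTZ precision: 3 fractional digits (milliseconds). */
    static final int DEFAULT_PRECISION = 3;

    // Utility layer: conversion at an explicit precision (simplified to
    // the seconds -> millis case for this sketch).
    static long toTimestampMillis(long epochSeconds, int precision) {
        return epochSeconds * 1000L;
    }

    // Function layer: one-argument overload fills in the named default.
    static long eval(long epochSeconds) {
        return toTimestampMillis(epochSeconds, DEFAULT_PRECISION);
    }

    public static void main(String[] args) {
        System.out.println(eval(100)); // 100000
    }
}
```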






[jira] [Commented] (FLINK-36892) Properly handle the watermark status

2024-12-15 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17905873#comment-17905873
 ] 

Zakelly Lan commented on FLINK-36892:
-

Merged 8aeb2ff into master.

> Properly handle the watermark status
> 
>
> Key: FLINK-36892
> URL: https://issues.apache.org/jira/browse/FLINK-36892
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Async State Processing, Runtime / Task
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>
> The watermark status is not properly handled in async state processing. It is 
> emitted to downstream synchronously, while the watermarks will respect the 
> ongoing timer processing. That should be fixed and aligned.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36892][Runtime] Properly handle the watermark status within async state processing [flink]

2024-12-15 Thread via GitHub


Zakelly merged PR #25792:
URL: https://github.com/apache/flink/pull/25792





Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886086309


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala:
##
@@ -1308,29 +1303,42 @@ class TemporalTypesTest extends ExpressionTestBase {
 // invalid type for the first input
 testExpectedSqlException(
   "TO_TIMESTAMP_LTZ('test_string_type', 0)",
-  "Cannot apply 'TO_TIMESTAMP_LTZ' to arguments of type" +
-" 'TO_TIMESTAMP_LTZ(, )'. Supported form(s):" +
-" 'TO_TIMESTAMP_LTZ(, )'",
+  "SQL validation failed. From line 1, column 8 to line 1, column 46: 
Cannot apply " +

Review Comment:
   I needed to change these existing Scala tests because I changed the function 
signature. The changes reflect the expected function behavior. 






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886085970


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala:
##
@@ -1140,41 +1140,41 @@ class TemporalTypesTest extends ExpressionTestBase {
 tableConfig.setLocalTimeZone(ZoneId.of("Asia/Shanghai"))
 
 // INT -> TIMESTAMP_LTZ
-testAllApis(toTimestampLtz(100, 0), "TO_TIMESTAMP_LTZ(100, 0)", 
"1970-01-01 08:01:40")
+testAllApis(toTimestampLtz(100, 0), "TO_TIMESTAMP_LTZ(100, 0)", 
"1970-01-01 08:01:40.000")
 
 // TINYINT -> TIMESTAMP_LTZ
 testAllApis(
   toTimestampLtz(100.cast(DataTypes.TINYINT()), 0),
   "TO_TIMESTAMP_LTZ(CAST(100 AS TINYINT), 0)",
-  "1970-01-01 08:01:40")
+  "1970-01-01 08:01:40.000")
 
 // BIGINT -> TIMESTAMP_LTZ
 testAllApis(
   toTimestampLtz(100.cast(DataTypes.BIGINT()), 0),
   "TO_TIMESTAMP_LTZ(CAST(100 AS BIGINT), 0)",
-  "1970-01-01 08:01:40")
+  "1970-01-01 08:01:40.000")
 
 // FLOAT -> TIMESTAMP_LTZ
 testAllApis(
   toTimestampLtz(100.01.cast(DataTypes.FLOAT()), 0),
   "TO_TIMESTAMP_LTZ(CAST(100.01 AS FLOAT), 0)",
-  "1970-01-01 08:01:40")
+  "1970-01-01 08:01:40.010")

Review Comment:
   Reverted to the original test case. 






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on PR #25763:
URL: https://github.com/apache/flink/pull/25763#issuecomment-2544463015

   > Reviewed by Chi on 12/12/24 Group to test and/or review outside of the 
meeting (Nic Townsend from IBM has some insight he will add on this)
   
   Hi @davidradl, I appreciate your involvement with this PR. As a new member 
of this community, this means a lot to me! 
   However, this function is currently a blocker for a user at my company, 
Confluent. 
   
   I've also just submitted another commit to address the previous comments, 
added tests, and made other updates, which may have rendered some of your 
current feedback outdated.
   
   Given the time constraints, may I proceed with a Flink committer's 
approval and address your comments in a separate PR?
   





[jira] [Updated] (FLINK-36160) Support hive advanced configuration

2024-12-15 Thread zhuanshenbsj1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuanshenbsj1 updated FLINK-36160:
--
Description: 
Currently, Flink only supports advanced parameter configuration for its direct 
interaction with Hadoop. We aim to enable the convenient configuration of Hive 
sink parameters (including Hive's native configuration and Hive's Hadoop 
configuration) through advanced settings.



I plan to configure Hive's own configuration and Hive's Hadoop configuration 
separately, using the prefixes "flink.hive." and "flink.hive.hadoop.".

> Support hive advanced configuration
> ---
>
> Key: FLINK-36160
> URL: https://issues.apache.org/jira/browse/FLINK-36160
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 2.0.0
>Reporter: zhuanshenbsj1
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> Currently, Flink only supports advanced parameter configuration for its 
> direct interaction with Hadoop. We aim to enable the convenient configuration 
> of Hive sink parameters (including Hive's native configuration and Hive's 
> Hadoop configuration) through advanced settings.
> I plan to configure Hive's own configuration and Hive's Hadoop configuration 
> separately, using the prefixes "flink.hive." and "flink.hive.hadoop.".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36160] Support hive advanced configuration [flink]

2024-12-15 Thread via GitHub


zhuanshenbsj1 commented on code in PR #25258:
URL: https://github.com/apache/flink/pull/25258#discussion_r1886263037


##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java:
##
@@ -226,6 +243,37 @@ public HiveCatalog(
 }
 
 public static HiveConf createHiveConf(
+@Nullable String hiveConfDir,
+@Nullable String hadoopConfDir,
+@Nullable ReadableConfig flinkConfiguration) {
+HiveConf hiveconf = initHiveConf(hiveConfDir, hadoopConfDir);
+// add all configuration key with prefix 'flink.hive.hadoop.' and 'flink.hive.' in flink
+// conf to hive conf
+String hivePrefix = FLINK_HIVE_CONFIG_PREFIXES;
+String hadoopPrefix = "hadoop.";
+if (flinkConfiguration != null) {
+for (String key : flinkConfiguration.toMap().keySet()) {
+if (key.startsWith(hivePrefix)) {
+String newKey = key.substring(hivePrefix.length());
+if (newKey.startsWith(hadoopPrefix)) {
+newKey = newKey.substring(hadoopPrefix.length());
+}
+String value =
+flinkConfiguration.get(
+ConfigOptions.key(key).stringType().noDefaultValue());
+hiveconf.set(newKey, value);
+LOG.debug(
+"Adding Flink config entry for {} as {}={} to Hive config",
+key,
+newKey,
+value);
+}
+}
+}
+return hiveconf;
+}
+
+public static HiveConf initHiveConf(

Review Comment:
   > nit: We can still name the method `createHiveConf`
   
   done






Re: [PR] [FLINK-36160] Support hive advanced configuration [flink]

2024-12-15 Thread via GitHub


zhuanshenbsj1 commented on PR #25258:
URL: https://github.com/apache/flink/pull/25258#issuecomment-2544738657

   @luoyuxia  cc~
   
   > @zhuanshenbsj1 Thanks for contribution. And sorry for late reply. I left 
some comments; Also, please fill the motivation in JIRA 
[FLINK-36160](https://issues.apache.org/jira/browse/FLINK-36160), like why 
introduce it a must. And I think you may need document about the behavior, may 
be in the doc part 
https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/connectors/table/hive/overview/#connecting-to-hive
   
   Added motivation in JIRA and doc. 





Re: [PR] [FLINK-36160] Support hive advanced configuration [flink]

2024-12-15 Thread via GitHub


zhuanshenbsj1 commented on PR #25258:
URL: https://github.com/apache/flink/pull/25258#issuecomment-2544739370

   @luoyuxia cc~





Re: [PR] [FLINK-36160] Support hive advanced configuration [flink]

2024-12-15 Thread via GitHub


zhuanshenbsj1 commented on code in PR #25258:
URL: https://github.com/apache/flink/pull/25258#discussion_r1886265775


##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java:
##
@@ -226,6 +243,37 @@ public HiveCatalog(
 }
 
 public static HiveConf createHiveConf(
+@Nullable String hiveConfDir,
+@Nullable String hadoopConfDir,
+@Nullable ReadableConfig flinkConfiguration) {
+HiveConf hiveconf = initHiveConf(hiveConfDir, hadoopConfDir);
+// add all configuration key with prefix 'flink.hive.hadoop.' and 'flink.hive.' in flink
+// conf to hive conf
+String hivePrefix = FLINK_HIVE_CONFIG_PREFIXES;
+String hadoopPrefix = "hadoop.";
+if (flinkConfiguration != null) {
+for (String key : flinkConfiguration.toMap().keySet()) {

Review Comment:
   > nit: use flinkConfiguration.toMap().entrySet()
   
   Adjusted as suggested.






[PR] [FLINK-36887][hive] Bumping to 1.20.0 [flink-connector-hive]

2024-12-15 Thread via GitHub


lvyanquan opened a new pull request, #19:
URL: https://github.com/apache/flink-connector-hive/pull/19

   Per the discussion in 
https://lists.apache.org/thread/v44y2yh4b5wyp1lqrxgxc8qh680wpgfr, bumping to 
1.20 helps us better prepare the next release.
   
   A follow-up PR to cherry-pick the relevant changes from the Flink main repo 
is necessary.





Re: [PR] [FLINK-36887][hive] Bumping to 1.20.0 [flink-connector-hive]

2024-12-15 Thread via GitHub


boring-cyborg[bot] commented on PR #19:
URL: 
https://github.com/apache/flink-connector-hive/pull/19#issuecomment-2544745765

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[jira] [Updated] (FLINK-36887) Add support for Flink 1.20 in Flink Hive connector

2024-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36887:
---
Labels: pull-request-available  (was: )

> Add support for Flink 1.20 in Flink Hive connector 
> ---
>
> Key: FLINK-36887
> URL: https://issues.apache.org/jira/browse/FLINK-36887
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Hive
>Reporter: Yanquan Lv
>Priority: Major
>  Labels: pull-request-available
>
> Flink 1.20 has been released for several months and is the last 1.x minor 
> release; we should provide a release that supports 1.20 for users who want 
> to bump to the *Long Term Support* version. 





Re: [PR] [FLINK-36832] Remove Deprecated classes and relevant tests. [flink-connector-kafka]

2024-12-15 Thread via GitHub


AHeise commented on PR #139:
URL: 
https://github.com/apache/flink-connector-kafka/pull/139#issuecomment-2544777216

   Hi @lvyanquan , thank you very much for tackling the release. Unfortunately, 
I don't have time to review it this year and per prior agreement, I expected 
@PatrickRen and @leonardBang to pick this up. I will ping them again on slack.





[jira] [Updated] (FLINK-35555) Serializing List with null values throws NPE

2024-12-15 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-3:
--
Description: 
FLINK-34123 introduced built-in serialization support for java.util.List, which 
relies on the existing {{ListSerializer}} impl. However, {{ListSerializer}} 
does not allow null values, as it is originally designed for serializing 
{{ListState}} only where null value is explicitly forbidden in the contract.

FLINK-23420 is similar to our case here. We can extend ListSerializer to allow 
null values by prefixing each value with a null marker, and rely on 
TypeSerializerSnapshot to deal with state-compatibility.

  was:
FLINK-34123 introduced built-in serialization support for java.util.List, which 
relies on the existing {{ListSerializer}} impl. However, {{ListSerializer}} 
does not allow null values, as it is originally designed for serializing 
{{ListState}} only where null value is explicitly forbidden in the contract.

-Directly adding null marker to allow null values will break backwards state 
compatibility, so we'll need to introduce a new List serializer called 
{{NullableElementListSerializer}} and the corresponding TypeInformation called 
{{NullableElementListTypeInfo}} that allows null values for serializing user 
objects, and leaves the existing {{ListSerializer}} and {{ListTypeInfo}} for 
Flink's internal state use.-

FLINK-23420 is similar to our case here. We can extend ListSerializer to allow 
null values via a binary mask for marking null values and rely on 
TypeSerializerSnapshot to deal with state-compatibility.
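The null-marker scheme described above can be sketched as follows. This is a minimal illustration of prefixing each element with a null flag (plain `DataOutputStream` over `String` elements, class name hypothetical), not Flink's actual `ListSerializer`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NullableListSketch {

    // Serialize: write the list size, then for each element a null-marker
    // byte followed by the value only when the marker says non-null.
    static byte[] serialize(List<String> list) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(list.size());
        for (String element : list) {
            boolean isNull = (element == null);
            out.writeBoolean(isNull); // the null marker
            if (!isNull) {
                out.writeUTF(element);
            }
        }
        return bytes.toByteArray();
    }

    static List<String> deserialize(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int size = in.readInt();
        List<String> list = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            // Read the marker first; only non-null elements carry a payload.
            list.add(in.readBoolean() ? null : in.readUTF());
        }
        return list;
    }

    public static void main(String[] args) throws IOException {
        List<String> original = Arrays.asList("a", null, "b");
        List<String> roundTripped = deserialize(serialize(original));
        System.out.println(roundTripped); // [a, null, b]
    }
}
```

Because the wire format changes (every element now carries a marker byte), a real implementation needs the TypeSerializerSnapshot-based compatibility handling the description mentions.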


> Serializing List with null values throws NPE
> 
>
> Key: FLINK-3
> URL: https://issues.apache.org/jira/browse/FLINK-3
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> FLINK-34123 introduced built-in serialization support for java.util.List, 
> which relies on the existing {{ListSerializer}} impl. However, 
> {{ListSerializer}} does not allow null values, as it is originally designed 
> for serializing {{ListState}} only where null value is explicitly forbidden 
> in the contract.
> FLINK-23420 is similar to our case here. We can extend ListSerializer to 
> allow null values via prefixing each value by a null marker, and rely on 
> TypeSerializerSnapshot to deal with state-compatibility.





[jira] (FLINK-35555) Serializing List with null values throws NPE

2024-12-15 Thread Zhanghao Chen (Jira)


[ https://issues.apache.org/jira/browse/FLINK-3 ]


Zhanghao Chen deleted comment on FLINK-3:
---

was (Author: zhanghao chen):
FLINK-23420 is similar to our case here. We can extend ListSerializer to allow 
null values via  a binary mask for marking null values and rely on 
TypeSerializerSnapshot to deal with state-compatibility.

> Serializing List with null values throws NPE
> 
>
> Key: FLINK-3
> URL: https://issues.apache.org/jira/browse/FLINK-3
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> FLINK-34123 introduced built-in serialization support for java.util.List, 
> which relies on the existing {{ListSerializer}} impl. However, 
> {{ListSerializer}} does not allow null values, as it is originally designed 
> for serializing {{ListState}} only where null value is explicitly forbidden 
> in the contract.
> FLINK-23420 is similar to our case here. We can extend ListSerializer to 
> allow null values via prefixing each value by a null marker, and rely on 
> TypeSerializerSnapshot to deal with state-compatibility.





Re: [PR] [FLINK-36069][runtime/rest] Extending job detail rest API to expose json stream graph [flink]

2024-12-15 Thread via GitHub


flinkbot commented on PR #25798:
URL: https://github.com/apache/flink/pull/25798#issuecomment-2543873996

   
   ## CI report:
   
   * 1f3a7b8dc0a6b1f4bc13c239c804bd466950b906 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-36069) Extending job detail rest API to adapt to Incremental JobGraph Generation

2024-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36069:
---
Labels: pull-request-available  (was: )

> Extending job detail rest API to adapt to Incremental JobGraph Generation
> -
>
> Key: FLINK-36069
> URL: https://issues.apache.org/jira/browse/FLINK-36069
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Reporter: Junrui Li
>Assignee: Yu Chen
>Priority: Major
>  Labels: pull-request-available
>






[PR] [FLINK-36069][runtime/rest] Extending job detail rest API to expose json stream graph [flink]

2024-12-15 Thread via GitHub


yuchen-ecnu opened a new pull request, #25798:
URL: https://github.com/apache/flink/pull/25798

   
   
   ## What is the purpose of the change
   
 - Extending job detail rest API to adapt to Incremental JobGraph 
Generation in batch mode. 
 - Modify web ui to support job graph auto-refresh for Incremental JobGraph 
Generation.

   ## Brief change log
 - Modify `JobDetailHandler `to return the incrementally updated job graphs.
 - Modify the web ui to display the incremental job graph and the 
unconverted stream node.
 - Add a new checkbox in the webui to select whether to display the 
unconverted stream node or not.
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
 - Added test that validates that `JsonPlanGenerator ` generates 
`JsonStreamGraph` works well
 - Added test that validates that `JobDetailHandler` returns Incremental 
JobGraph as expected
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? not applicable
   





Re: [PR] [FLINK-36069][runtime/rest] Extending job detail rest API to expose json stream graph [flink]

2024-12-15 Thread via GitHub


yuchen-ecnu commented on PR #25798:
URL: https://github.com/apache/flink/pull/25798#issuecomment-2543873799

   Hi @JunRuiLee, can you help to review this PR?





Re: [PR] [FLINK-21373] Add RabbitMQ SinkV2 Implementation, Port Flink version to Flink 1.19 [flink-connector-rabbitmq]

2024-12-15 Thread via GitHub


alenzo-arch commented on PR #29:
URL: 
https://github.com/apache/flink-connector-rabbitmq/pull/29#issuecomment-2543936042

   Are there any plans to merge this PR? It seems like there's just one open 
question about 1.18 (and now 1.20 as well...)





Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1883493055


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala:
##
@@ -1140,41 +1140,41 @@ class TemporalTypesTest extends ExpressionTestBase {
 tableConfig.setLocalTimeZone(ZoneId.of("Asia/Shanghai"))
 
 // INT -> TIMESTAMP_LTZ
-testAllApis(toTimestampLtz(100, 0), "TO_TIMESTAMP_LTZ(100, 0)", "1970-01-01 08:01:40.000")

Review Comment:
   Synced with @snuyanzin; this has always been the behavior for Flink. 
Therefore, I will revert these changes and make my new function align with the 
existing behavior of TO_TIMESTAMP_LTZ.






[jira] (FLINK-34932) Translate concepts of Flink-Kubernetes-Operator documentation

2024-12-15 Thread liudu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-34932 ]


liudu deleted comment on FLINK-34932:
---

was (Author: JIRAUSER308089):
I am willing to do this.could someone assign this to me?

> Translate concepts of Flink-Kubernetes-Operator documentation
> -
>
> Key: FLINK-34932
> URL: https://issues.apache.org/jira/browse/FLINK-34932
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Kubernetes Operator
>Affects Versions: 1.9.0
>Reporter: Caican Cai
>Assignee: Caican Cai
>Priority: Minor
> Fix For: 1.9.0
>
>






Re: [PR] [FLINK-36904] Fix document error on how to programmatically configure serialization [flink]

2024-12-15 Thread via GitHub


reswqa commented on PR #25795:
URL: https://github.com/apache/flink/pull/25795#issuecomment-2544325095

   Do Chinese documents also have this problem?





Re: [PR] [FLINK-21373] Add RabbitMQ SinkV2 Implementation, Port Flink version to Flink 1.19 [flink-connector-rabbitmq]

2024-12-15 Thread via GitHub


grahambunce commented on PR #29:
URL: 
https://github.com/apache/flink-connector-rabbitmq/pull/29#issuecomment-2543937396

   Is this still waiting on @MartijnVisser ?





Re: [PR] [FLINK-36069][runtime/rest] Extending job detail rest API to expose json stream graph [flink]

2024-12-15 Thread via GitHub


JunRuiLee commented on code in PR #25798:
URL: https://github.com/apache/flink/pull/25798#discussion_r1886070318


##
flink-runtime/src/main/java/org/apache/flink/runtime/execution/ExecutionState.java:
##
@@ -70,6 +70,8 @@ public enum ExecutionState {
 
 RECONCILING,
 
+PENDING,

Review Comment:
   How about handling `pending-operators` in 
`JobDetailsHandler#createJobDetailsInfo`? This way, we wouldn't need to 
introduce any meaningless statuses into a common enum class.



##
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java:
##
@@ -101,6 +102,8 @@ void enableCheckpointing(
 
 void setJsonPlan(String jsonPlan);
 
+void setJsonStreamGraph(JsonStreamGraph jsonStreamGraph);

Review Comment:
   I prefer to retrieve the JSON of the stream graph from 
`ExecutionPlanSchedulingContext`. For example, we could add a method 
`getJsonStreamGraph` in `ExecutionPlanSchedulingContext`. This context will 
return the JSON held by `AdaptiveGraphManager`. When the stream graph is 
updated by the `DefaultStreamGraphContext`, the AdaptiveGraphManager should 
update this JSON.
   
   With this update, we could avoid manually passing this JSON to the execution 
graph and keep this change simple.






[jira] [Resolved] (FLINK-36897) Error executing processElement when inheriting from AbstractAsyncStateStreamOperator

2024-12-15 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan resolved FLINK-36897.
-
Resolution: Won't Do

> Error executing processElement when inheriting from 
> AbstractAsyncStateStreamOperator
> 
>
> Key: FLINK-36897
> URL: https://issues.apache.org/jira/browse/FLINK-36897
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 2.0.0
>Reporter: Wang Qilong
>Priority: Major
>
> When I created AbstractAsyncStateMapBundleOperator and inherited it from 
> AbstractAsyncStateStreamOperator, the data passed into its processElement 
> was wrong.
> The inheritance relationships of the synchronous and asynchronous classes 
> are:
> AbstractMapBundleOperator->AbstractStreamOperator
> AbstractAsyncStateMapBundleOperator->AbstractAsyncStateStreamOperator->AbstractStreamOperator
> The reason for creating this class is to enable KeyedMapBundleOperator to 
> support asynchronous execution
> For example, the original data was:
> val data = new mutable.MutableList[(String, Long)]
> data.+=(("x", 1L))
> data.+=(("x", 2L))
> data.+=(("x", 3L))
> data.+=(("y", 1L))
> data.+=(("y", 2L))
> data.+=(("z", 3L))
> So the received data became:
> val data = new mutable.MutableList[(String, Long)]
> data.+=(("x", 1L))
> data.+=(("x", 2L))
> data.+=(("x", 3L))
> data.+=(("x", 1L))
> data.+=(("x", 2L))
> data.+=(("x", 3L))
> How to reproduce:
> Run testOverloadedAccumulator() in sql/AggregateITCase.java in [1]
>  
> [1]  [https://github.com/Au-Miner/flink/tree/FLINK-36882]





[jira] [Resolved] (FLINK-36892) Properly handle the watermark status

2024-12-15 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan resolved FLINK-36892.
-
Resolution: Fixed

> Properly handle the watermark status
> 
>
> Key: FLINK-36892
> URL: https://issues.apache.org/jira/browse/FLINK-36892
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Async State Processing, Runtime / Task
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>
> The watermark status is not properly handled in async state processing. It is 
> emitted downstream synchronously, while the watermarks respect the ongoing 
> timer processing. That should be fixed and aligned.





Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886085683


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/TimeFunctionsITCase.java:
##
@@ -783,8 +790,15 @@ private Stream toTimestampLtzTestCases() {
 .testResult(
 toTimestampLtz($("f4"), literal(0)),
"TO_TIMESTAMP_LTZ(-" + Double.MAX_VALUE + ", 0)",
-null, // expecting NULL result
-TIMESTAMP_LTZ(0).nullable())
+null,
+TIMESTAMP_LTZ(3).nullable())
+.testResult(

Review Comment:
   To address this comment, added a FLOAT case with precision 3.






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886087882


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -2332,14 +2332,24 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
 public static final BuiltInFunctionDefinition TO_TIMESTAMP_LTZ =
 BuiltInFunctionDefinition.newBuilder()
-.name("toTimestampLtz")
-.sqlName("TO_TIMESTAMP_LTZ")
+.name("TO_TIMESTAMP_LTZ")

Review Comment:
   Reverted the change and made all existing tests pass. 






Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2024-12-15 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1886093553


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java:
##
@@ -386,6 +406,18 @@ public static TimestampData toTimestampData(DecimalData v, 
int precision) {
 }
 }
 
+public static TimestampData toTimestampData(long epoch) {
+return toTimestampData(epoch, 3);
+}
+
+public static TimestampData toTimestampData(double epoch) {
+return toTimestampData(epoch, 3);
+}
+
+public static TimestampData toTimestampData(DecimalData epoch) {
+return toTimestampData(epoch, 3);
+}
+

Review Comment:
   addressed here: 
https://github.com/apache/flink/pull/25763/commits/533532547fc7ccf84e3ec6ee163597d3d61afcc2#r1886093369






Re: [PR] [FLINK-36906] Optimize the logic for determining if a split is finished [flink-connector-kafka]

2024-12-15 Thread via GitHub


AHeise commented on PR #141:
URL: 
https://github.com/apache/flink-connector-kafka/pull/141#issuecomment-2544772849

   Afaik this doesn't work and was the main reason for #100. If your last 
message is a transaction marker, then you would never check the stop condition 
on that partition at that point in time.
   
   I'll trigger the CI, which should fail for the test that was specifically 
added for that scenario. 
   
   I'll leave this PR open until we figure out whether it can indeed be 
improved. Please double-check the linked PR and the respective ticket.





Re: [PR] [FLINK-36842] FlinkPipelineComposer allows injecting StreamExecutionEnvironment [flink-cdc]

2024-12-15 Thread via GitHub


sharonx commented on code in PR #3775:
URL: https://github.com/apache/flink-cdc/pull/3775#discussion_r1885836067


##
flink-cdc-composer/src/main/java/org/apache/flink/cdc/composer/PipelineExecution.java:
##
@@ -27,10 +29,18 @@ public interface PipelineExecution {
 class ExecutionInfo {
 private final String id;
 private final String description;
+private final JobClient jobClient;
 
 public ExecutionInfo(String id, String description) {
 this.id = id;
 this.description = description;
+this.jobClient = null;

Review Comment:
I'd try to avoid `null` usage as much as possible to avoid potential NPEs.  
It does look like the `jobClient` is available in all the places that construct 
`ExecutionInfo`. So can we just change the constructor? If not, please add 
`@Nullable` wherever is applicable. 
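
   The fail-fast variant of this suggestion could look like the sketch below. 
This is illustrative only: `Object` stands in for Flink's `JobClient`, and the 
class name is hypothetical.

   ```java
   import java.util.Objects;

   // Sketch of the suggestion: require jobClient at construction time instead
   // of defaulting it to null. `Object` stands in for Flink's JobClient.
   class ExecutionInfoSketch {
       private final String id;
       private final String description;
       private final Object jobClient;

       ExecutionInfoSketch(String id, String description, Object jobClient) {
           this.id = id;
           this.description = description;
           // fail fast here rather than risking an NPE at a later call site
           this.jobClient = Objects.requireNonNull(jobClient, "jobClient");
       }

       Object getJobClient() {
           return jobClient; // callers can rely on this never being null
       }

       public static void main(String[] args) {
           ExecutionInfoSketch info =
                   new ExecutionInfoSketch("job-1", "test", new Object());
           System.out.println(info.getJobClient() != null); // prints "true"
           try {
               new ExecutionInfoSketch("job-2", "test", null);
               throw new AssertionError("expected NullPointerException");
           } catch (NullPointerException expected) {
               System.out.println("null jobClient rejected at construction");
           }
       }
   }
   ```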



##
flink-cdc-composer/src/main/java/org/apache/flink/cdc/composer/flink/FlinkPipelineComposer.java:
##
@@ -54,6 +54,11 @@ public class FlinkPipelineComposer implements 
PipelineComposer {
 private final StreamExecutionEnvironment env;
 private final boolean isBlocking;
 
+public static FlinkPipelineComposer ofStreamExecutionEnvironment(
+StreamExecutionEnvironment env) {
+return new FlinkPipelineComposer(env, false);
+}

Review Comment:
   hmm, to me this essentially exposes the constructor, and it's a bit opaque to 
set `isBlocking=false` statically here, as you can have blocking mode for a stream 
execution env too. My proposal is to make the constructor below public: 
   ```
 private FlinkPipelineComposer(StreamExecutionEnvironment env, boolean 
isBlocking) {
   this.env = env;
   this.isBlocking = isBlocking;
   }
   ```
   
   @PatrickRen I see that you are tagged in the history of the class. Any 
opinions?






Re: [PR] [FLINK-36904] Fix document error on how to programmatically configure serialization [flink]

2024-12-15 Thread via GitHub


X-czh commented on PR #25795:
URL: https://github.com/apache/flink/pull/25795#issuecomment-2544332334

   @reswqa I just checked the Chinese doc; the serialization part has not been 
updated for a long time and does not reflect the new changes brought by FLIP-398. 
I'll create a new issue to update it.





Re: [PR] [FLINK-36791][table] Remove FlinkPruneEmptyRules. [flink]

2024-12-15 Thread via GitHub


liuyongvs commented on code in PR #25690:
URL: https://github.com/apache/flink/pull/25690#discussion_r1886058245


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/rules/logical/PruneEmptyRulesTest.xml:
##


Review Comment:
   @snuyanzin do the same with FlinkLimit0RemoveRule now
   
   Former test for [[FlinkPruneEmptyRules]], which is now replaced by Calcite's
[[PruneEmptyRules.JOIN_RIGHT_INSTANCE]].






Re: [PR] [FLINK-36791][table] Remove FlinkPruneEmptyRules. [flink]

2024-12-15 Thread via GitHub


liuyongvs commented on code in PR #25690:
URL: https://github.com/apache/flink/pull/25690#discussion_r1875162913


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/rules/logical/PruneEmptyRulesTest.xml:
##


Review Comment:
   @snuyanzin for PruneEmptyRulesTest, you mean we can remove it?
   But the test also uses FlinkSubQueryRemoveRule.FILTER.
   
   ```
   programs.addLast(
 "rules",
 FlinkHepRuleSetProgramBuilder.newBuilder
   .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_SEQUENCE)
   .setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
   .add(RuleSets.ofList(
 FlinkSubQueryRemoveRule.FILTER,
 CoreRules.FILTER_REDUCE_EXPRESSIONS,
 CoreRules.PROJECT_REDUCE_EXPRESSIONS,
 PruneEmptyRules.FILTER_INSTANCE,
 PruneEmptyRules.PROJECT_INSTANCE,
 PruneEmptyRules.JOIN_RIGHT_INSTANCE
   ))
   .build()
   )
   
   ```






Re: [PR] [FLINK-36791][table] Remove FlinkPruneEmptyRules. [flink]

2024-12-15 Thread via GitHub


liuyongvs commented on code in PR #25690:
URL: https://github.com/apache/flink/pull/25690#discussion_r1875168910


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/rules/logical/PruneEmptyRulesTest.xml:
##


Review Comment:
   If you think this test should be removed, should we also remove 
JoinPushExpressionsRuleTest?
   What do you think?






Re: [PR] Remove CHARACTER_FILTER.filterCharacters function when notify dimension values. [flink]

2024-12-15 Thread via GitHub


hiliuxg commented on code in PR #25710:
URL: https://github.com/apache/flink/pull/25710#discussion_r1886067914


##
flink-metrics/flink-metrics-prometheus/src/main/java/org/apache/flink/metrics/prometheus/AbstractPrometheusReporter.java:
##
@@ -104,7 +104,7 @@ public void notifyOfAddedMetric(
 final String key = dimension.getKey();
 dimensionKeys.add(
 CHARACTER_FILTER.filterCharacters(key.substring(1, 
key.length() - 1)));
-
dimensionValues.add(labelValueCharactersFilter.filterCharacters(dimension.getValue()));
+dimensionValues.add(dimension.getValue());

Review Comment:
   Hi, @davidradl 
   I have added a unit test. Could you please review the code again?
   Thank you!
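
   For context on why filtering label values differs from filtering label 
names: the Prometheus data model restricts label *names* to the pattern 
`[a-zA-Z_][a-zA-Z0-9_]*`, while label *values* may contain any UTF-8 
characters, so stripping characters from values loses information with no 
benefit. A small illustration (the class name is hypothetical):

   ```java
   import java.util.regex.Pattern;

   // Prometheus data model: label *names* must match [a-zA-Z_][a-zA-Z0-9_]*,
   // while label *values* may be any UTF-8 string. Filtering characters out
   // of values (unlike names) therefore loses information unnecessarily.
   public class LabelRules {
       static final Pattern LABEL_NAME = Pattern.compile("[a-zA-Z_][a-zA-Z0-9_]*");

       static boolean isValidLabelName(String name) {
           return LABEL_NAME.matcher(name).matches();
       }

       public static void main(String[] args) {
           System.out.println(isValidLabelName("task_name")); // true
           System.out.println(isValidLabelName("task-name")); // false: '-' not allowed in names
           // A string like "task-name:1" is still a perfectly legal label *value*.
       }
   }
   ```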






[jira] [Resolved] (FLINK-36904) Fix document error on how to programmatically configure serialization

2024-12-15 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen resolved FLINK-36904.
---
Fix Version/s: 2.0.0
   Resolution: Fixed

Resolved via d541997954e7b28ffa9df5e4bcaea817c8aa8992.

> Fix document error on how to programmatically configure serialization
> -
>
> Key: FLINK-36904
> URL: https://issues.apache.org/jira/browse/FLINK-36904
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.20.0
>Reporter: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> From user email: 
> https://lists.apache.org/thread/mdokv42k0zxlxdwl7y61hv2pc3fjwmyq
>
> I am trying to use 3rd party serializers as per
> https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/dev/datastream/fault-tolerance/serialization/third_party_serializers/
> but the code sample does not compile:
>
> Configuration config = new Configuration();
> // register the class of the serializer as serializer for a type
> config.set(PipelineOptions.SERIALIZATION_CONFIG,
>     "[org.example.MyCustomType: {type: kryo, kryo-type: registered, class: org.example.MyCustomSerializer}]");
>
> because the expected parameter is a list of strings. I can do:
>
> config.set(PipelineOptions.SERIALIZATION_CONFIG, List.of(
>     "org.example.MyCustomType: {type: kryo, kryo-type: registered, class: org.example.MyCustomSerializer}"));



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-36905) Update Chinese document to reflect the changes of FLIP-398

2024-12-15 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-36905:
-

 Summary: Update Chinese document to reflect the changes of FLIP-398
 Key: FLINK-36905
 URL: https://issues.apache.org/jira/browse/FLINK-36905
 Project: Flink
  Issue Type: Sub-task
  Components: API / Type Serialization System, Documentation
Affects Versions: 1.20.0, 2.0.0
Reporter: Zhanghao Chen








Re: [PR] [FLINK-36904][BP-1.20] Fix document error on how to programmatically configure serialization [flink]

2024-12-15 Thread via GitHub


flinkbot commented on PR #25799:
URL: https://github.com/apache/flink/pull/25799#issuecomment-2544634558

   
   ## CI report:
   
   * b712aa2a841e8ef2114ecfaa06cc26668628204c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-36876] Support external eventLoopGroup for RestClient [flink]

2024-12-15 Thread via GitHub


chenyuzhi459 commented on code in PR #25788:
URL: https://github.com/apache/flink/pull/25788#discussion_r1886236906


##
flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java:
##
@@ -264,15 +278,21 @@ protected void initChannel(SocketChannel socketChannel) {
 }
 };
 
-// No NioEventLoopGroup constructor available that allows passing 
nThreads, threadFactory,
-// and selectStrategyFactory without also passing a SelectorProvider, 
so mimicking its
-// default value seen in other constructors
-NioEventLoopGroup group =
-new NioEventLoopGroup(
-1,
-new ExecutorThreadFactory("flink-rest-client-netty"),
-SelectorProvider.provider(),
-selectStrategyFactory);
+if (group == null) {
+// No NioEventLoopGroup constructor available that allows passing 
nThreads,
+// threadFactory,
+// and selectStrategyFactory without also passing a 
SelectorProvider, so mimicking its
+// default value seen in other constructors
+group =
+new NioEventLoopGroup(
+1,
+new 
ExecutorThreadFactory("flink-rest-client-netty"),
+SelectorProvider.provider(),
+selectStrategyFactory);
+useInternalEventLoopGroup = true;
+} else {
+useInternalEventLoopGroup = false;

Review Comment:
   I think the external service doesn't care about the type of SocketChannel. As 
the [jira](https://issues.apache.org/jira/browse/FLINK-36876) says, it just 
passes a shared event loop group to avoid a heap memory leak. 
   
   Maybe it's a better choice to declare the constructor param `group` as 
NioEventLoopGroup, which would avoid type errors when building the `Bootstrap` 
instance?
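
   Independent of the channel type, the ownership rule that the 
`useInternalEventLoopGroup` flag in the diff encodes can be sketched on its 
own: shut the group down on close only if the client created it. A minimal, 
hypothetical sketch (not the actual RestClient code; `Object` stands in for 
`NioEventLoopGroup`):

   ```java
   // Hypothetical sketch of the event-loop-group ownership rule: an injected
   // (shared) group must survive close(), while an internally created one must
   // be shut down with the client. `Object` stands in for NioEventLoopGroup.
   public class GroupOwnershipSketch {
       private final Object group;
       private final boolean useInternalEventLoopGroup;

       GroupOwnershipSketch(Object externalGroup) {
           if (externalGroup == null) {
               this.group = new Object(); // would be `new NioEventLoopGroup(...)`
               this.useInternalEventLoopGroup = true;
           } else {
               this.group = externalGroup; // the caller owns its lifecycle
               this.useInternalEventLoopGroup = false;
           }
       }

       // close() should call group.shutdownGracefully() only when this is true
       boolean shouldShutdownGroupOnClose() {
           return useInternalEventLoopGroup;
       }

       public static void main(String[] args) {
           System.out.println(
                   new GroupOwnershipSketch(null).shouldShutdownGroupOnClose());         // true
           System.out.println(
                   new GroupOwnershipSketch(new Object()).shouldShutdownGroupOnClose()); // false
       }
   }
   ```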






Re: [PR] Flink 33387 33388 draft1 for CI [flink]

2024-12-15 Thread via GitHub


RocMarshal closed pull request #25737: Flink 33387 33388 draft1 for CI
URL: https://github.com/apache/flink/pull/25737

