Re: [PR] [BP-1.20][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2025-01-07 Thread via GitHub


afedulov commented on PR #25794:
URL: https://github.com/apache/flink/pull/25794#issuecomment-2575271473

   @mehdid93 now that the new CI images have been merged, this PR's CI is green. 
Could you please rebase the rest of the PRs you listed 
[here](https://issues.apache.org/jira/browse/FLINK-36716?focusedCommentId=17907318&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17907318)
 on the latest release-1.20 and rerun them?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35137][Connectors/JDBC] Update website data for 3.2.0 [flink-connector-jdbc]

2025-01-07 Thread via GitHub


JohnMa123 commented on PR #152:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/152#issuecomment-2575042911

   This pull request was submitted by mistake; please close it. Thank you


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r190186


##
flink-python/pyflink/table/expressions.py:
##
@@ -306,19 +306,38 @@ def to_timestamp(timestamp_str: Union[str, 
Expression[str]],
 return _binary_op("toTimestamp", timestamp_str, format)
 
 
-def to_timestamp_ltz(numeric_epoch_time, precision) -> Expression:
+def to_timestamp_ltz(*args) -> Expression:
 """
-Converts a numeric type epoch time to TIMESTAMP_LTZ.
+Converts a value to a timestamp with local time zone.
 
-The supported precision is 0 or 3:
-0 means the numericEpochTime is in second.
-3 means the numericEpochTime is in millisecond.
+Supported functions:
+1. to_timestamp_ltz(Numeric) -> DataTypes.TIMESTAMP_LTZ
+2. to_timestamp_ltz(Numeric, Integer) -> DataTypes.TIMESTAMP_LTZ

Review Comment:
   The supported functions as documented only include the types; it would be 
good to define what the parameters are expected to be, including the valid 
values for precision. I think this should be defined explicitly rather than 
just implied from the examples.  
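   For illustration, a minimal sketch of the two numeric variants via the Java 
Table API (my own example, not from the PR; `lit` and `toTimestampLtz` are the 
existing `Expressions` helpers):
   
   ```java
   import org.apache.flink.table.api.ApiExpression;
   import static org.apache.flink.table.api.Expressions.lit;
   import static org.apache.flink.table.api.Expressions.toTimestampLtz;
   
   public class ToTimestampLtzNumericExample {
       public static void main(String[] args) {
           // Precision 0: the numeric argument is interpreted as epoch seconds.
           ApiExpression fromSeconds = toTimestampLtz(lit(100), lit(0));
           // Precision 3: the numeric argument is interpreted as epoch milliseconds.
           ApiExpression fromMillis = toTimestampLtz(lit(100_000), lit(3));
           System.out.println(fromSeconds + " / " + fromMillis);
       }
   }
   ```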
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r190186


##
flink-python/pyflink/table/expressions.py:
##
@@ -306,19 +306,38 @@ def to_timestamp(timestamp_str: Union[str, 
Expression[str]],
 return _binary_op("toTimestamp", timestamp_str, format)
 
 
-def to_timestamp_ltz(numeric_epoch_time, precision) -> Expression:
+def to_timestamp_ltz(*args) -> Expression:
 """
-Converts a numeric type epoch time to TIMESTAMP_LTZ.
+Converts a value to a timestamp with local time zone.
 
-The supported precision is 0 or 3:
-0 means the numericEpochTime is in second.
-3 means the numericEpochTime is in millisecond.
+Supported functions:
+1. to_timestamp_ltz(Numeric) -> DataTypes.TIMESTAMP_LTZ
+2. to_timestamp_ltz(Numeric, Integer) -> DataTypes.TIMESTAMP_LTZ

Review Comment:
   The supported functions as documented only include the types; it would be 
good to define what the parameters are expected to be, including the valid 
values for precision. I think this should be defined explicitly rather than 
just implied from the examples. For the three-string variant, we should detail 
the supported time zones. 
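   To make that concrete, a sketch of the string variants (my own example; the 
two-argument Java overload is in this PR's diff, the three-argument one is 
assumed from the SQL docs and should be documented the same way):
   
   ```java
   import org.apache.flink.table.api.ApiExpression;
   import static org.apache.flink.table.api.Expressions.toTimestampLtz;
   
   public class ToTimestampLtzStringExample {
       public static void main(String[] args) {
           // Timestamp string plus format string (the format defaults to
           // 'yyyy-MM-dd HH:mm:ss.SSS' when omitted, per the docs in this PR).
           ApiExpression t1 = toTimestampLtz("2024-01-01 00:00:00", "yyyy-MM-dd HH:mm:ss");
           // Hypothetical three-argument form: timestamp, format, and time zone id;
           // exactly which zone ids are accepted is what the docs should spell out.
           ApiExpression t2 = toTimestampLtz("2024-01-01 00:00:00", "yyyy-MM-dd HH:mm:ss", "UTC");
           System.out.println(t1 + " / " + t2);
       }
   }
   ```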
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.20][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2025-01-07 Thread via GitHub


mehdid93 commented on PR #25794:
URL: https://github.com/apache/flink/pull/25794#issuecomment-2575497633

   @afedulov Thanks a lot! I've rebased; this PR should be fine now


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-37025) Periodic SQL watermarks can travel back in time

2025-01-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-37025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-37025:


Assignee: Dawid Wysakowicz

> Periodic SQL watermarks can travel back in time
> ---
>
> Key: FLINK-37025
> URL: https://issues.apache.org/jira/browse/FLINK-37025
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.20.0, 2.0-preview
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 2.0.0, 1.20.1
>
>
> If watermarks are generated through SQL, e.g. when {{WATERMARK FOR ts AS 
> ts}} is used: 
> https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86
> The code in {{onEvent}} can move the {{currentWatermark}} back in time, 
> because it does not check whether it advanced. This is fine in case of 
> {{on-event}} watermarks, because the underlying {{WatermarkOutput}} will 
> perform that check, but it has confusing outcomes for {{on-periodic}}.
> Example:
> If events with timestamps:
> {code}
> {"id":1,"ts":"2024-01-01 00:00:00"}
> {"id":3,"ts":"2024-01-03 00:00:00"}
> {"id":2,"ts":"2024-01-02 00:00:00"}
> {code}
> come within the periodic emit interval, the generated watermark will be 
> "2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
> watermark for "2024-01-03 00:00:00" is swallowed and a new event is required 
> to progress the processing of that event.
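
For illustration, a minimal sketch of the missing guard using the public 
WatermarkGenerator API (hypothetical class, not the generated code the issue 
points at):

{code:java}
import org.apache.flink.api.common.eventtime.Watermark;
import org.apache.flink.api.common.eventtime.WatermarkGenerator;
import org.apache.flink.api.common.eventtime.WatermarkOutput;

/** Tracks the maximum seen timestamp so periodic emission can never regress. */
class MaxWatermarkGenerator implements WatermarkGenerator<Long> {
    private long currentWatermark = Long.MIN_VALUE;

    @Override
    public void onEvent(Long tsMillis, long eventTimestamp, WatermarkOutput output) {
        // The check the issue says is missing: only advance, never move back.
        currentWatermark = Math.max(currentWatermark, tsMillis);
    }

    @Override
    public void onPeriodicEmit(WatermarkOutput output) {
        output.emitWatermark(new Watermark(currentWatermark));
    }
}
{code}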



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36647][transform] Support Timestampdiff and Timestampadd function in cdc pipeline transform [flink-cdc]

2025-01-07 Thread via GitHub


aiwenmo commented on PR #3698:
URL: https://github.com/apache/flink-cdc/pull/3698#issuecomment-2575405739

   PTAL @yuxiqian 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36610] MySQL CDC supports parsing gh-ost / pt-osc generated schema changes [flink-cdc]

2025-01-07 Thread via GitHub


GOODBOY008 commented on code in PR #3668:
URL: https://github.com/apache/flink-cdc/pull/3668#discussion_r1905613729


##
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/utils/RecordUtils.java:
##
@@ -384,6 +390,40 @@ public static TableId getTableId(SourceRecord dataRecord) {
 return new TableId(dbName, null, tableName);
 }
 
+public static SourceRecord setTableId(
+SourceRecord dataRecord, TableId originalTableId, TableId tableId) {
+Struct value = (Struct) dataRecord.value();
+Document historyRecordDocument;
+try {
+historyRecordDocument = getHistoryRecord(dataRecord).document();
+} catch (IOException e) {
+throw new RuntimeException(e);
+}
+HistoryRecord newHistoryRecord =
+new HistoryRecord(
+historyRecordDocument.set(
+"ddl",
+historyRecordDocument
+.get("ddl")

Review Comment:
   ```suggestion
                                        .get(HistoryRecord.Fields.DDL_STATEMENTS)
   ```



##
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/utils/RecordUtils.java:
##
@@ -384,6 +390,40 @@ public static TableId getTableId(SourceRecord dataRecord) {
 return new TableId(dbName, null, tableName);
 }
 
+public static SourceRecord setTableId(
+SourceRecord dataRecord, TableId originalTableId, TableId tableId) {
+Struct value = (Struct) dataRecord.value();
+Document historyRecordDocument;
+try {
+historyRecordDocument = getHistoryRecord(dataRecord).document();
+} catch (IOException e) {
+throw new RuntimeException(e);
+}
+HistoryRecord newHistoryRecord =
+new HistoryRecord(
+historyRecordDocument.set(
+"ddl",

Review Comment:
   ditto



##
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/test/java/org/apache/flink/cdc/connectors/mysql/table/MySqlOnLineSchemaMigrationTableITCase.java:
##
@@ -0,0 +1,598 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.connectors.mysql.table;
+
+import org.apache.flink.cdc.common.data.binary.BinaryRecordData;
+import org.apache.flink.cdc.common.data.binary.BinaryStringData;
+import org.apache.flink.cdc.common.event.DataChangeEvent;
+import org.apache.flink.cdc.common.event.Event;
+import org.apache.flink.cdc.common.event.TableId;
+import org.apache.flink.cdc.common.schema.Schema;
+import org.apache.flink.cdc.common.types.DataType;
+import org.apache.flink.cdc.common.utils.TestCaseUtils;
+import org.apache.flink.cdc.connectors.mysql.source.MySqlSourceTestBase;
+import org.apache.flink.cdc.connectors.mysql.testutils.MySqlContainer;
+import org.apache.flink.cdc.connectors.mysql.testutils.MySqlVersion;
+import org.apache.flink.cdc.connectors.mysql.testutils.UniqueDatabase;
+import org.apache.flink.cdc.runtime.typeutils.BinaryRecordDataGenerator;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableResult;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.planner.factories.TestValuesTableFactory;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.CloseableIterator;
+
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.testcontainers.DockerClientFactory;
+import org.testcontainers.containers.Container;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.output.Slf4jLogConsumer;
+import org.testcontainers.lifecycle.Startables;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.Statement;
+import java.ut

[PR] [FLINK-37024][task] Make cancel watchdog cover tasks stuck in DEPLOYING state [flink]

2025-01-07 Thread via GitHub


X-czh opened a new pull request, #25915:
URL: https://github.com/apache/flink/pull/25915

   
   
   ## What is the purpose of the change
   
   Fix the issue where a task can be stuck in the DEPLOYING state forever when 
canceling a job or during failover.
   
   ## Brief change log
   
   Make cancel watchdog cover tasks stuck in DEPLOYING state.
   
   ## Verifying this change
   
   Added UT, mocking an invokable stuck in the constructor forever.
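   
   For context, a minimal sketch of the watchdog idea (hypothetical names, not 
the actual Task code): once cancellation is requested, a timer forcibly fails 
the task if it never reaches a terminal state, now also covering DEPLOYING:
   
   ```java
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.atomic.AtomicReference;
   
   /** Hypothetical sketch of a cancel watchdog; not Flink's actual Task implementation. */
   class CancelWatchdogSketch {
       enum State { DEPLOYING, RUNNING, CANCELING, CANCELED, FAILED }
   
       private final AtomicReference<State> state = new AtomicReference<>(State.DEPLOYING);
       private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
   
       void cancel(long timeoutSeconds) {
           state.set(State.CANCELING);
           // Arm the watchdog even for tasks that never left DEPLOYING, so a task
           // stuck loading/instantiating user code cannot block cancellation forever.
           timer.schedule(() -> {
               if (state.get() != State.CANCELED) {
                   state.set(State.FAILED); // in Flink this escalates to a fatal failure
               }
           }, timeoutSeconds, TimeUnit.SECONDS);
       }
   }
   ```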
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-37024) Task can be stuck in deploying state forever when canceling job/failover

2025-01-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-37024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-37024:
---
Labels: pull-request-available  (was: )

> Task can be stuck in deploying state forever when canceling job/failover
> 
>
> Key: FLINK-37024
> URL: https://issues.apache.org/jira/browse/FLINK-37024
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
>
> We observed that a task can be stuck in the DEPLOYING state forever when the 
> task loading/instantiating logic has some issues. Cancelling the job, or a 
> failover caused by failures of other tasks, will also get stuck, as the 
> cancel watchdog currently does not cover tasks in the CREATED/DEPLOYING 
> states. We should make the cancel watchdog cover tasks in DEPLOYING as well 
> (no need for tasks in CREATED state, as there is no real logic between 
> CREATED->DEPLOYING).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-37024) Task can be stuck in deploying state forever when canceling job/failover

2025-01-07 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-37024:
-

 Summary: Task can be stuck in deploying state forever when 
canceling job/failover
 Key: FLINK-37024
 URL: https://issues.apache.org/jira/browse/FLINK-37024
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Task
Affects Versions: 1.20.0
Reporter: Zhanghao Chen


We observed that a task can be stuck in the DEPLOYING state forever when the 
task initializing logic has some issues. Cancelling the job, or a failover 
caused by failures of other tasks, will also get stuck, as the cancel watchdog 
currently does not cover tasks in the CREATED/DEPLOYING states. We should make 
the cancel watchdog cover tasks in DEPLOYING as well (no need for tasks in 
CREATED state, as there is no real logic between CREATED->DEPLOYING).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36576][runtime] Improving amount-based data balancing distribution algorithm for DefaultVertexParallelismAndInputInfosDecider [flink]

2025-01-07 Thread via GitHub


noorall commented on PR #25552:
URL: https://github.com/apache/flink/pull/25552#issuecomment-2575121705

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36610] MySQL CDC supports parsing gh-ost / pt-osc generated schema changes [flink-cdc]

2025-01-07 Thread via GitHub


lvyanquan commented on code in PR #3668:
URL: https://github.com/apache/flink-cdc/pull/3668#discussion_r1905359170


##
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/utils/RecordUtils.java:
##
@@ -384,6 +390,40 @@ public static TableId getTableId(SourceRecord dataRecord) {
 return new TableId(dbName, null, tableName);
 }
 
+public static SourceRecord setTableId(
+SourceRecord dataRecord, TableId originalTableId, TableId tableId) {

Review Comment:
   Got it, that's because the temporary table is not included in the table 
filter condition.
   We can still get the table name from the source struct of the 
schemaChangeEvent, but that would defeat the purpose of doing so.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36610] MySQL CDC supports parsing gh-ost / pt-osc generated schema changes [flink-cdc]

2025-01-07 Thread via GitHub


lvyanquan commented on code in PR #3668:
URL: https://github.com/apache/flink-cdc/pull/3668#discussion_r1905378005


##
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/utils/RecordUtils.java:
##
@@ -489,4 +529,75 @@ private static Optional<WatermarkKind> getWatermarkKind(SourceRecord record) {
 }
 return Optional.empty();
 }
+
+/**
+ * This utility method checks if given source record is a gh-ost/pt-osc 
initiated schema change
+ * event by checking the "alter" ddl.
+ */
+public static boolean isOnLineSchemaChangeEvent(SourceRecord record) {
+if (!isSchemaChangeEvent(record)) {
+return false;
+}
+Struct value = (Struct) record.value();
+ObjectMapper mapper = new ObjectMapper();
+try {
+// There will be these schema change events generated in total 
during one transaction.
+//
+// gh-ost:
+// DROP TABLE IF EXISTS `db`.`_tb1_gho`
+// DROP TABLE IF EXISTS `db`.`_tb1_del`
+// DROP TABLE IF EXISTS `db`.`_tb1_ghc`
+// create /* gh-ost */ table `db`.`_tb1_ghc` ...
+// create /* gh-ost */ table `db`.`_tb1_gho` like `db`.`tb1`
+// alter /* gh-ost */ table `db`.`_tb1_gho` add column c 
varchar(255)
+// create /* gh-ost */ table `db`.`_tb1_del` ...
+// DROP TABLE IF EXISTS `db`.`_tb1_del`
+// rename /* gh-ost */ table `db`.`tb1` to `db`.`_tb1_del`
+// rename /* gh-ost */ table `db`.`_tb1_gho` to `db`.`tb1`
+// DROP TABLE IF EXISTS `db`.`_tb1_ghc`
+// DROP TABLE IF EXISTS `db`.`_tb1_del`
+//
+// pt-osc:
+// CREATE TABLE `db`.`_test_tb1_new`
+// ALTER TABLE `db`.`_test_tb1_new` add column c varchar(50)
+// CREATE TRIGGER `pt_osc_db_test_tb1_del`...
+// CREATE TRIGGER `pt_osc_db_test_tb1_upd`...
+// CREATE TRIGGER `pt_osc_db_test_tb1_ins`...
+// ANALYZE TABLE `db`.`_test_tb1_new` /* pt-online-schema-change */
+// RENAME TABLE `db`.`test_tb1` TO `db`.`_test_tb1_old`, 
`db`.`_test_tb1_new` TO
+// `db`.`test_tb1`
+// DROP TABLE IF EXISTS `_test_tb1_old` /* generated by server */
+// DROP TRIGGER IF EXISTS `db`.`pt_osc_db_test_tb1_del`
+// DROP TRIGGER IF EXISTS `db`.`pt_osc_db_test_tb1_upd`
+// DROP TRIGGER IF EXISTS `db`.`pt_osc_db_test_tb1_ins`
+//
+// Among all these, we only need the "ALTER" one that happens on 
the `_gho`/`_new`
+// table.
+String ddl =
+mapper.readTree(value.getString(HISTORY_RECORD_FIELD))
+.get("ddl")
+.asText()
+.toLowerCase();
+if (ddl.toLowerCase().startsWith("alter")) {

Review Comment:
   This toLowerCase() is unnecessary, since ddl is already lower-cased above?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36701][cdc-runtime] Add steps to get and emit schemaManager's latest evolvedSchema when SinkDataWriterOperator handles FlushEvent [flink-cdc]

2025-01-07 Thread via GitHub


yuxiqian commented on code in PR #3802:
URL: https://github.com/apache/flink-cdc/pull/3802#discussion_r1905043255


##
flink-cdc-e2e-tests/flink-cdc-source-e2e-tests/src/test/java/org/apache/flink/cdc/connectors/tests/OceanBaseE2eITCase.java:
##


Review Comment:
   Maybe this change can be split into an individual commit to keep the commit 
history accurate.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35737] Prevent Memory Leak by Closing MemoryExecutionGraphInfoStore on MiniCluster Shutdown [flink]

2025-01-07 Thread via GitHub


fengjiajie closed pull request #25009: [FLINK-35737] Prevent Memory Leak by 
Closing MemoryExecutionGraphInfoStore on MiniCluster Shutdown
URL: https://github.com/apache/flink/pull/25009


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36610] MySQL CDC supports parsing gh-ost / pt-osc generated schema changes [flink-cdc]

2025-01-07 Thread via GitHub


lvyanquan commented on code in PR #3668:
URL: https://github.com/apache/flink-cdc/pull/3668#discussion_r1905385783


##
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-mysql/src/main/java/org/apache/flink/cdc/connectors/mysql/source/MySqlDataSourceOptions.java:
##
@@ -281,4 +281,12 @@ public class MySqlDataSourceOptions {
 .withDescription(
 "List of readable metadata from SourceRecord to be 
passed to downstream, split by `,`. "
 + "Available readable metadata are: 
op_ts.");
+
+@Experimental
+public static final ConfigOption<Boolean> PARSE_ONLINE_SCHEMA_CHANGES =
+ConfigOptions.key("scan.parse.online.schema.changes.enabled")

Review Comment:
   A doc update for this option would be welcome.
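   The diff above truncates the declaration; presumably it continues roughly 
like this (a sketch assuming a boolean option defaulting to false):
   
   ```java
   @Experimental
   public static final ConfigOption<Boolean> PARSE_ONLINE_SCHEMA_CHANGES =
           ConfigOptions.key("scan.parse.online.schema.changes.enabled")
                   .booleanType()
                   .defaultValue(false)
                   // Assumed wording; the real description lives in the PR.
                   .withDescription(
                           "Whether to parse schema changes generated by gh-ost/pt-osc during incremental reading.");
   ```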



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36282][pipeline-connector][cdc-connector][mysql]fix incorrect data type of TINYINT(1) in mysql pipeline connector [flink-cdc]

2025-01-07 Thread via GitHub


GOODBOY008 commented on PR #3608:
URL: https://github.com/apache/flink-cdc/pull/3608#issuecomment-2575253151

   @qg-lin Can you rebase your branch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.20][FLINK-36740] [WebFrontend] Update frontend dependencies to address vulnerabilities [flink]

2025-01-07 Thread via GitHub


afedulov commented on PR #25830:
URL: https://github.com/apache/flink/pull/25830#issuecomment-2575270221

   @mehdid93 now that the new CI images have been merged, this PR's CI is green. 
Could you please rebase the rest of the PRs you listed 
[here](https://issues.apache.org/jira/browse/FLINK-36716?focusedCommentId=17907318&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17907318)
 on the latest release-1.20 and rerun them?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-37025) Periodic SQL watermarks can travel back in time

2025-01-07 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-37025:


 Summary: Periodic SQL watermarks can travel back in time
 Key: FLINK-37025
 URL: https://issues.apache.org/jira/browse/FLINK-37025
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 2.0-preview, 1.20.0
Reporter: Dawid Wysakowicz
 Fix For: 2.0.0, 1.20.1


If watermarks are generated through SQL, e.g. when `WATERMARK FOR ts AS ts` is 
used: 
https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86

The code in {{onEvent}} can move the {{currentWatermark}} back in time, because 
it does not check whether it advanced. This is fine in case of `on-event` 
watermarks, because the underlying {{WatermarkOutput}} will perform that check, 
but it has confusing outcomes for `on-periodic`.

Example:
If events with timestamps:
```
{"id":1,"ts":"2024-01-01 00:00:00"}
{"id":3,"ts":"2024-01-03 00:00:00"}
{"id":2,"ts":"2024-01-02 00:00:00"}
```
come within the periodic emit interval, the generated watermark will be 
"2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
watermark for "2024-01-03 00:00:00" is swallowed and a new event is required to 
progress the processing of that event.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-37025) Periodic SQL watermarks can travel back in time

2025-01-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-37025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-37025:
-
Description: 
If watermarks are generated through SQL, e.g. when {{WATERMARK FOR ts AS ts}} 
is used: 
https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86

The code in {{onEvent}} can move the {{currentWatermark}} back in time, because 
it does not check whether it advanced. This is fine in case of {{on-event}} 
watermarks, because the underlying {{WatermarkOutput}} will perform that check, 
but it has confusing outcomes for {{on-periodic}}.

Example:
If events with timestamps:
{code}
{"id":1,"ts":"2024-01-01 00:00:00"}
{"id":3,"ts":"2024-01-03 00:00:00"}
{"id":2,"ts":"2024-01-02 00:00:00"}
{code}

come within the periodic emit interval, the generated watermark will be 
"2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
watermark for "2024-01-03 00:00:00" is swallowed and a new event is required to 
progress the processing of that event.

  was:
If watermarks are generated through SQL, e.g. using `WATERMARK FOR ts AS ts` is 
used 
https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86

The code in {{onEvent}} can move the {{currentWatermark}} back in time, because 
it does not check if it advanced. This is fine in case of {{on-event}} 
watermarks, because underlying  {{WatermarkOutput}} will, but it has confusing 
outcomes for {{on-periodic}}.

Example:
If events with timestamps:
{code}
{"id":1,"ts":"2024-01-01 00:00:00"}
{"id":3,"ts":"2024-01-03 00:00:00"}
{"id":2,"ts":"2024-01-02 00:00:00"}
{code}

come within the periodic emit interval, the generated watermark will be 
"2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
watermark for "2024-01-03 00:00:00" is swallowed and a new event is required to 
progress the processing of that event.


> Periodic SQL watermarks can travel back in time
> ---
>
> Key: FLINK-37025
> URL: https://issues.apache.org/jira/browse/FLINK-37025
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.20.0, 2.0-preview
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 2.0.0, 1.20.1
>
>
> If watermarks are generated through SQL, e.g. when {{WATERMARK FOR ts AS 
> ts}} is used: 
> https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86
> The code in {{onEvent}} can move the {{currentWatermark}} back in time, 
> because it does not check whether it advanced. This is fine in case of 
> {{on-event}} watermarks, because the underlying {{WatermarkOutput}} will 
> perform that check, but it has confusing outcomes for {{on-periodic}}.
> Example:
> If events with timestamps:
> {code}
> {"id":1,"ts":"2024-01-01 00:00:00"}
> {"id":3,"ts":"2024-01-03 00:00:00"}
> {"id":2,"ts":"2024-01-02 00:00:00"}
> {code}
> come within the periodic emit interval, the generated watermark will be 
> "2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
> watermark for "2024-01-03 00:00:00" is swallowed and a new event is required 
> to progress the processing of that event.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-37025) Periodic SQL watermarks can travel back in time

2025-01-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-37025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-37025:
-
Description: 
If watermarks are generated through SQL, e.g. when `WATERMARK FOR ts AS ts` is 
used: 
https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86

The code in {{onEvent}} can move the {{currentWatermark}} back in time, because 
it does not check whether it advanced. This is fine in case of {{on-event}} 
watermarks, because the underlying {{WatermarkOutput}} will perform that check, 
but it has confusing outcomes for {{on-periodic}}.

Example:
If events with timestamps:
{code}
{"id":1,"ts":"2024-01-01 00:00:00"}
{"id":3,"ts":"2024-01-03 00:00:00"}
{"id":2,"ts":"2024-01-02 00:00:00"}
{code}

come within the periodic emit interval, the generated watermark will be 
"2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
watermark for "2024-01-03 00:00:00" is swallowed and a new event is required to 
progress the processing of that event.

  was:
If watermarks are generated through SQL, e.g. using `WATERMARK FOR ts AS ts` is 
used 
https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86

The code in {{onEvent}} can move the {{currentWatermark}} back in time, because 
it does not check if it advanced. This is fine in case of `on-event` 
watermarks`, because underlying  {{WatermarkOutput}} will, but it has confusing 
outcomes for `on-periodic`.

Example:
If events with timestamps:
{code}
{"id":1,"ts":"2024-01-01 00:00:00"}
{"id":3,"ts":"2024-01-03 00:00:00"}
{"id":2,"ts":"2024-01-02 00:00:00"}
{code}

come within the periodic emit interval, the generated watermark will be 
"2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
watermark for "2024-01-03 00:00:00" is swallowed and a new event is required to 
progress the processing of that event.


> Periodic SQL watermarks can travel back in time
> ---
>
> Key: FLINK-37025
> URL: https://issues.apache.org/jira/browse/FLINK-37025
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.20.0, 2.0-preview
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 2.0.0, 1.20.1
>
>
> If watermarks are generated through SQL, e.g. when `WATERMARK FOR ts AS ts` 
> is used: 
> https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86
> The code in {{onEvent}} can move the {{currentWatermark}} back in time, 
> because it does not check whether it advanced. This is fine in case of 
> {{on-event}} watermarks, because the underlying {{WatermarkOutput}} will 
> perform that check, but it has confusing outcomes for {{on-periodic}}.
> Example:
> If events with timestamps:
> {code}
> {"id":1,"ts":"2024-01-01 00:00:00"}
> {"id":3,"ts":"2024-01-03 00:00:00"}
> {"id":2,"ts":"2024-01-02 00:00:00"}
> {code}
> come within the periodic emit interval, the generated watermark will be 
> "2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
> watermark for "2024-01-03 00:00:00" is swallowed and a new event is required 
> to progress the processing of that event.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36608][table-runtime] Support dynamic StreamGraph optimization for AdaptiveBroadcastJoinOperator [flink]

2025-01-07 Thread via GitHub


SinBex commented on code in PR #25822:
URL: https://github.com/apache/flink/pull/25822#discussion_r1905533782


##
flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/DefaultStreamGraphContext.java:
##
@@ -140,6 +152,33 @@ public boolean modifyStreamEdge(List<StreamEdgeUpdateRequestInfo> requestInfos)
 if (newPartitioner != null) {
 modifyOutputPartitioner(targetEdge, newPartitioner);
 }
+if (targetEdge != null && requestInfo.getTypeNumber() != 0) {

Review Comment:
   I haven't thought of it yet either; it's mainly defensive programming here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-37010][Runtime] Unify KeyedProcessFunction and the async one [flink]

2025-01-07 Thread via GitHub


Zakelly commented on PR #25901:
URL: https://github.com/apache/flink/pull/25901#issuecomment-2575644980

   I'm merging this since #25717 is waiting on it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.20][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2025-01-07 Thread via GitHub


afedulov commented on PR #25794:
URL: https://github.com/apache/flink/pull/25794#issuecomment-2575610818

   Well, I actually already took the liberty of rebasing your remote branch, hence 
the CI for this particular PR is already green:
   `30b7551 Azure: SUCCESS`
   My ask is more for the remaining open PRs: 
https://issues.apache.org/jira/browse/FLINK-36716?focusedCommentId=17907318&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17907318


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.20][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2025-01-07 Thread via GitHub


mehdid93 commented on PR #25794:
URL: https://github.com/apache/flink/pull/25794#issuecomment-2575617240

   @afedulov Ah sorry, I misunderstood you. Yes, once this PR is merged I'll 
rebase both PRs, since this is a must for the next step


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [BP-1.19][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2025-01-07 Thread via GitHub


mehdid93 opened a new pull request, #25912:
URL: https://github.com/apache/flink/pull/25912

   Contribute-to: https://issues.apache.org/jira/browse/FLINK-36901
   
   
   
   ## What is the purpose of the change
   
   This pull request raises the NodeJS version, which is a prerequisite for 
upgrading dependencies in order to address vulnerabilities in the 
Runtime/WebFrontend component. 
   There will be another PR to fix the vulnerabilities once this is merged.
   
   
   ## Brief change log
   
   NodeJS is upgraded to v22.11.0 and npm to 10.9.0
   
   
   ## Verifying this change
   
   The Flink UI should be accessible without any difference
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [BP-1.19][FLINK-34194] Update CI to Ubuntu 22.04 (Jammy) [flink]

2025-01-07 Thread via GitHub


mehdid93 opened a new pull request, #25911:
URL: https://github.com/apache/flink/pull/25911

   ## What is the purpose of the change
   
   This PR backports the changes of the PR made by @zentol 
(https://github.com/apache/flink/pull/25708) on master to version 1.19.x, to be 
used for dependency upgrades.
   
   
   ## Brief change log
   
   - The Ubuntu CI image is upgraded from 18.04 to 22.04 (Jammy)
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-37023) Rest API returns 500 error with CancellationException

2025-01-07 Thread amjidc (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-37023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

amjidc updated FLINK-37023:
---
Description: 
We have probes that call the JobManager REST API regularly, and these API calls 
are failing intermittently with 500 errors. The REST handler is throwing 
CancellationException at the same time.

The same intermittent errors are also observed on the Flink dashboard.

From the stacktrace and source code, it looks like when the REST handler 
invalidates the job graph cache, it also cancels pending API requests that are 
still waiting on the old cache, causing the exception and the 500 errors. We 
believe this is likely a bug in the REST handler.
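
A minimal, self-contained sketch of the suspected mechanism (simplified; the 
names are ours, not Flink's):

{code:java}
import java.util.concurrent.CompletableFuture;

/** Simplified model of a cache that cancels its shared future on invalidation. */
class GraphCacheSketch {
    private CompletableFuture<String> cached = new CompletableFuture<>();

    CompletableFuture<String> get() {
        return cached; // pending REST handlers chain onto this shared future
    }

    void invalidate() {
        // Cancelling the shared future propagates a CancellationException to
        // every handler still waiting on it, which surfaces as a 500 response.
        cached.cancel(true);
        cached = new CompletableFuture<>();
    }

    public static void main(String[] args) {
        GraphCacheSketch cache = new GraphCacheSketch();
        cache.get().exceptionally(t -> {
            System.out.println("request failed: " + t); // CancellationException
            return null;
        });
        cache.invalidate();
    }
}
{code}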

Rest API logs:
{code:java}
172.x.x.x - 06CxHXoDkZp4cycZteGxS8b [29/Nov/2024:15:01:20 +] "GET 
/jobs/7acd3c4af372bc0a12dd34dee9c5e6bd/exceptions HTTP/1.1" 500 6453 "-" 
"go-resty/1.11.0 (https://github.com/go-resty/resty)"
172.x.x.x - 06CxHXoDkZp4cycZteGxS8b [29/Nov/2024:15:01:15 +] "GET 
/jobs/7acd3c4af372bc0a12dd34dee9c5e6bd/execution-times HTTP/1.1" 500 6453 "-" 
"go-resty/1.11.0 (https://github.com/go-resty/resty)"
172.x.x.x - 06CxHXoDkZp4cycZteGxS8b [02/Dec/2024:07:20:10 +] "GET 
/jobs/7acd3c4af372bc0a12dd34dee9c5e6bd HTTP/1.1" 500 6453 "-" "go-resty/1.11.0 
(https://github.com/go-resty/resty)" {code}
Stacktrace:
{code:java}
Unhandled exception.

java.util.concurrent.CancellationException 
at 
java.base/java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396)
 
at 
org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraphInternal(DefaultExecutionGraphCache.java:98)
 
at 
org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraphInfo(DefaultExecutionGraphCache.java:67)
 
at 
org.apache.flink.runtime.rest.handler.job.AbstractExecutionGraphHandler.handleRequest(AbstractExecutionGraphHandler.java:81)
 
at 
org.apache.flink.runtime.rest.handler.AbstractRestHandler.respondToRequest(AbstractRestHandler.java:83)
 
at 
org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:196)
 
at 
org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:88)
 
at java.base/java.util.Optional.ifPresent(Optional.java:183) 
at 
org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:45) 
at 
org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:85)
 
at 
org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:50)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
 
at 
org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:115)
 
at 
org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:94)
 
at 
org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:55)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
 
at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
 
at 
org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:233)
 
at 
org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:70)
 
at 
org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channe

Re: [PR] [FLINK-36898][Table] Support SQL FLOOR and CEIL functions with NanoSecond and MicroSecond for TIMESTAMP_TLZ [flink]

2025-01-07 Thread via GitHub


snuyanzin commented on PR #25897:
URL: https://github.com/apache/flink/pull/25897#issuecomment-2575312634

   @hanyuzheng7 thanks for the PR
   
   It would also be great if you filled in the PR form


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905566167


##
flink-python/pyflink/table/tests/test_expression.py:
##
@@ -285,7 +285,14 @@ def test_expressions(self):
 self.assertEqual("toDate('2018-03-18')", str(to_date('2018-03-18')))
 self.assertEqual("toDate('2018-03-18', 'yyyy-MM-dd')",
  str(to_date('2018-03-18', 'yyyy-MM-dd')))
-self.assertEqual('toTimestampLtz(123, 0)', str(to_timestamp_ltz(123, 0)))

Review Comment:
   some negative tests would be good. For example, with precision 10, would it 
give an error relating to format or precision? 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905760221


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Expressions.java:
##
@@ -366,6 +366,61 @@ public static ApiExpression toTimestampLtz(Object numericEpochTime, Object precision)
 return apiCall(BuiltInFunctionDefinitions.TO_TIMESTAMP_LTZ, 
numericEpochTime, precision);
 }
 
+/**
+ * Converts the given time string with the specified format to {@link
+ * DataTypes#TIMESTAMP_LTZ(int)}.
+ *
+ * @param timestampStr The timestamp string to convert.
+ * @param format The format of the string.
+ * @return The timestamp value with {@link DataTypes#TIMESTAMP_LTZ(int)} 
type.
+ */
+public static ApiExpression toTimestampLtz(String timestampStr, String 
format) {
+return apiCall(BuiltInFunctionDefinitions.TO_TIMESTAMP_LTZ, 
timestampStr, format);
+}
+
+/**
+ * Converts a timestamp to {@link DataTypes#TIMESTAMP_LTZ(int)}.
+ *
+ * This method takes an object representing a timestamp and converts it 
to a TIMESTAMP_LTZ

Review Comment:
   nit: object -> String



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905775954


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ToTimestampLtzTypeStrategy.java:
##
@@ -20,22 +20,79 @@
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.types.DataType;
 import org.apache.flink.table.types.inference.CallContext;
 import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
 
+import java.util.List;
 import java.util.Optional;
 
 /** Type strategy of {@code TO_TIMESTAMP_LTZ}. */
 @Internal
 public class ToTimestampLtzTypeStrategy implements TypeStrategy {
 
+private static final int DEFAULT_PRECISION = 3;
+
 @Override
 public Optional<DataType> inferType(CallContext callContext) {
-if (callContext.isArgumentLiteral(1)) {
-final int precision = callContext.getArgumentValue(1, 
Integer.class).get();
-return Optional.of(DataTypes.TIMESTAMP_LTZ(precision));
+List<DataType> argumentTypes = callContext.getArgumentDataTypes();
+int argCount = argumentTypes.size();
+
+if (argCount < 1 || argCount > 3) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ requires 1 to 3 arguments, but 
"
++ argCount
++ " were provided.");
+}
+
+LogicalType firstType = argumentTypes.get(0).getLogicalType();
+LogicalTypeRoot firstTypeRoot = firstType.getTypeRoot();
+
+if (argCount == 1) {
+if (!isCharacterType(firstTypeRoot) && 
!firstType.is(LogicalTypeFamily.NUMERIC)) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 1 argument, TO_TIMESTAMP_LTZ 
accepts  or .");
+}
+} else if (argCount == 2) {
+LogicalType secondType = argumentTypes.get(1).getLogicalType();
+LogicalTypeRoot secondTypeRoot = secondType.getTypeRoot();
+if (firstType.is(LogicalTypeFamily.NUMERIC)) {
+if (secondTypeRoot != LogicalTypeRoot.INTEGER) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ(, ) 
requires the second argument to be .");
+}
+} else if (isCharacterType(firstTypeRoot)) {
+if (!isCharacterType(secondTypeRoot)) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ(, 
) requires the second argument to be .");
+}
+} else {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 2 arguments, TO_TIMESTAMP_LTZ 
requires the first argument to be  or .");
+}
+} else if (argCount == 3) {
+if (!isCharacterType(firstTypeRoot)
+|| 
!isCharacterType(argumentTypes.get(1).getLogicalType().getTypeRoot())
+|| 
!isCharacterType(argumentTypes.get(2).getLogicalType().getTypeRoot())) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 3 arguments, TO_TIMESTAMP_LTZ 
requires all three arguments to be of type .");

Review Comment:
   what does <CHARACTER_STRING> mean - I assume we mean the Flink logical types <CHAR> 
or <VARCHAR>. I assume that STRING resolves to VARCHAR.
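   
   For reference, the call shapes these branches validate, as SQL (my own 
illustration; semantics per the PR description and docs):
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class ToTimestampLtzShapes {
       public static void main(String[] args) {
           TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           // 1 arg: <NUMERIC> (epoch millis, default precision 3) or <CHARACTER_STRING>.
           env.sqlQuery("SELECT TO_TIMESTAMP_LTZ(100000)");
           // 2 args: <NUMERIC> plus <INTEGER> precision (0 = seconds, 3 = millis) ...
           env.sqlQuery("SELECT TO_TIMESTAMP_LTZ(100, 0)");
           // ... or <CHARACTER_STRING> plus a format <CHARACTER_STRING>.
           env.sqlQuery("SELECT TO_TIMESTAMP_LTZ('2024-01-01 00:00:00', 'yyyy-MM-dd HH:mm:ss')");
           // 3 args: all <CHARACTER_STRING>: timestamp, format, time zone.
           env.sqlQuery("SELECT TO_TIMESTAMP_LTZ('2024-01-01 00:00:00', 'yyyy-MM-dd HH:mm:ss', 'UTC')");
       }
   }
   ```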



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-37025) Periodic SQL watermarks can travel back in time

2025-01-07 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-37025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-37025:
-
Description: 
If watermarks are generated through SQL, e.g. when `WATERMARK FOR ts AS ts` is 
used: 
https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86

The code in {{onEvent}} can move the {{currentWatermark}} back in time, because 
it does not check whether it advanced. This is fine in case of `on-event` 
watermarks, because the underlying {{WatermarkOutput}} will perform that check, 
but it has confusing outcomes for `on-periodic`.

Example:
If events with timestamps:
{code}
{"id":1,"ts":"2024-01-01 00:00:00"}
{"id":3,"ts":"2024-01-03 00:00:00"}
{"id":2,"ts":"2024-01-02 00:00:00"}
{code}

come within the periodic emit interval, the generated watermark will be 
"2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
watermark for "2024-01-03 00:00:00" is swallowed and a new event is required to 
progress the processing of that event.

  was:
If watermarks are generated through SQL, e.g. using `WATERMARK FOR ts AS ts` is 
used 
https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86

The code in {{onEvent}} can move the {{currentWatermark}} back in time, because 
it does not check if it advanced. This is fine in case of `on-event` 
watermarks`, because underlying  {{WatermarkOutput}} will, but it has confusing 
outcomes for `on-periodic`.

Example:
If events with timestamps:
```
{"id":1,"ts":"2024-01-01 00:00:00"}
{"id":3,"ts":"2024-01-03 00:00:00"}
{"id":2,"ts":"2024-01-02 00:00:00"}
```
come within the periodic emit interval, the generated watermark will be 
"2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
watermark for "2024-01-03 00:00:00" is swallowed and a new event is required to 
progress the processing of that event.


> Periodic SQL watermarks can travel back in time
> ---
>
> Key: FLINK-37025
> URL: https://issues.apache.org/jira/browse/FLINK-37025
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.20.0, 2.0-preview
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 2.0.0, 1.20.1
>
>
> If watermarks are generated through SQL, e.g. when `WATERMARK FOR ts AS ts` 
> is used: 
> https://github.com/apache/flink/blob/e9353319ad625baa5b2c20fa709ab5b23f83c0f4/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/generated/GeneratedWatermarkGeneratorSupplier.java#L86
> The code in {{onEvent}} can move the {{currentWatermark}} back in time, 
> because it does not check whether it advanced. This is fine in case of 
> `on-event` watermarks, because the underlying {{WatermarkOutput}} will 
> perform that check, but it has confusing outcomes for `on-periodic`.
> Example:
> If events with timestamps:
> {code}
> {"id":1,"ts":"2024-01-01 00:00:00"}
> {"id":3,"ts":"2024-01-03 00:00:00"}
> {"id":2,"ts":"2024-01-02 00:00:00"}
> {code}
> come within the periodic emit interval, the generated watermark will be 
> "2024-01-02 00:00:00" instead of "2024-01-03 00:00:00". As a result the 
> watermark for "2024-01-03 00:00:00" is swallowed and a new event is required 
> to progress the processing of that event.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905549176


##
docs/data/sql_functions_zh.yml:
##
@@ -805,11 +805,12 @@ temporal:
   - sql: TO_DATE(string1[, string2])
 table: toDate(STRING1[, STRING2])
 description: 将格式为 string2(默认为 'yyyy-MM-dd')的字符串 string1 转换为日期。
-  - sql: TO_TIMESTAMP_LTZ(numeric, precision)
-table: toTimestampLtz(numeric, PRECISION)
-description: |
-  将纪元秒或纪元毫秒转换为 TIMESTAMP_LTZ,有效精度为 0 或 3,0 代表 
`TO_TIMESTAMP_LTZ(epochSeconds, 0)`,
-  3 代表` TO_TIMESTAMP_LTZ(epochMilliseconds, 3)`。
+  - sql: TO_TIMESTAMP_LTZ(numeric[, precision])
+table: toTimestampLtz(NUMERIC, PRECISION)
+description: Converts an epoch seconds or epoch milliseconds to a 
TIMESTAMP_LTZ, the valid precision is 0 or 3, the 0 represents 
TO_TIMESTAMP_LTZ(epochSeconds, 0), the 3 represents 
TO_TIMESTAMP_LTZ(epochMilliseconds, 3). If no precision is provided, the 
default precision is 3. If any input is null, the function will return null.
+  - sql: TO_TIMESTAMP_LTZ(string1[, string2[, string3]])
+table: toTimestampLtz(STRING1[, STRING2[, STRING3]])
+description: Converts a timestamp string string1 with format string2 (by 
default 'yyyy-MM-dd HH:mm:ss.SSS') in time zone string3 (by default 'UTC') to a 
TIMESTAMP_LTZ. If any input is null, the function will return null.

Review Comment:
   is the translation to Chinese going to be tracked elsewhere?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905566167


##
flink-python/pyflink/table/tests/test_expression.py:
##
@@ -285,7 +285,14 @@ def test_expressions(self):
 self.assertEqual("toDate('2018-03-18')", str(to_date('2018-03-18')))
 self.assertEqual("toDate('2018-03-18', 'yyyy-MM-dd')",
  str(to_date('2018-03-18', 'yyyy-MM-dd')))
-self.assertEqual('toTimestampLtz(123, 0)', str(to_timestamp_ltz(123, 
0)))

Review Comment:
   some negative tests would be good at the Python level. I know we have good 
coverage at the Java level. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905775954


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ToTimestampLtzTypeStrategy.java:
##
@@ -20,22 +20,79 @@
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.types.DataType;
 import org.apache.flink.table.types.inference.CallContext;
 import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
 
+import java.util.List;
 import java.util.Optional;
 
 /** Type strategy of {@code TO_TIMESTAMP_LTZ}. */
 @Internal
 public class ToTimestampLtzTypeStrategy implements TypeStrategy {
 
+private static final int DEFAULT_PRECISION = 3;
+
 @Override
 public Optional<DataType> inferType(CallContext callContext) {
-if (callContext.isArgumentLiteral(1)) {
-final int precision = callContext.getArgumentValue(1, 
Integer.class).get();
-return Optional.of(DataTypes.TIMESTAMP_LTZ(precision));
+List<DataType> argumentTypes = callContext.getArgumentDataTypes();
+int argCount = argumentTypes.size();
+
+if (argCount < 1 || argCount > 3) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ requires 1 to 3 arguments, but 
"
++ argCount
++ " were provided.");
+}
+
+LogicalType firstType = argumentTypes.get(0).getLogicalType();
+LogicalTypeRoot firstTypeRoot = firstType.getTypeRoot();
+
+if (argCount == 1) {
+if (!isCharacterType(firstTypeRoot) && 
!firstType.is(LogicalTypeFamily.NUMERIC)) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 1 argument, TO_TIMESTAMP_LTZ 
accepts <NUMERIC> or <CHARACTER_STRING>.");
+}
+} else if (argCount == 2) {
+LogicalType secondType = argumentTypes.get(1).getLogicalType();
+LogicalTypeRoot secondTypeRoot = secondType.getTypeRoot();
+if (firstType.is(LogicalTypeFamily.NUMERIC)) {
+if (secondTypeRoot != LogicalTypeRoot.INTEGER) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ(<NUMERIC>, <INTEGER>) 
requires the second argument to be <INTEGER>.");
+}
+} else if (isCharacterType(firstTypeRoot)) {
+if (!isCharacterType(secondTypeRoot)) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ(<CHARACTER_STRING>, 
<CHARACTER_STRING>) requires the second argument to be <CHARACTER_STRING>.");
+}
+} else {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 2 arguments, TO_TIMESTAMP_LTZ 
requires the first argument to be <NUMERIC> or <CHARACTER_STRING>.");
+}
+} else if (argCount == 3) {
+if (!isCharacterType(firstTypeRoot)
+|| 
!isCharacterType(argumentTypes.get(1).getLogicalType().getTypeRoot())
+|| 
!isCharacterType(argumentTypes.get(2).getLogicalType().getTypeRoot())) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 3 arguments, TO_TIMESTAMP_LTZ 
requires all three arguments to be of type <CHARACTER_STRING>.");

Review Comment:
   what does `<CHARACTER_STRING>` mean - I assume we mean the Flink logical types 
`<CHAR>` or `<VARCHAR>`. I assume that STRING resolves to VARCHAR.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36836][Autoscaler] Supports config the upper and lower limits of target utilization [flink-kubernetes-operator]

2025-01-07 Thread via GitHub


gyfora commented on code in PR #921:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/921#discussion_r1905783988


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/config/AutoScalerOptions.java:
##
@@ -89,21 +89,41 @@ private static ConfigOptions.OptionBuilder 
autoScalerConfig(String key) {
 + "seconds suffix, daily expression's 
formation is startTime-endTime, such as 9:30:30-10:50:20, when exclude from 
9:30:30-10:50:20 in Monday and Thursday "
 + "we can express it as 9:30:30-10:50:20 
&& * * * ? * 2,5");
 
-public static final ConfigOption<Double> TARGET_UTILIZATION =
-autoScalerConfig("target.utilization")
+public static final ConfigOption<Double> UTILIZATION_TARGET =
+autoScalerConfig("utilization.target")
 .doubleType()
 .defaultValue(0.7)
-
.withFallbackKeys(oldOperatorConfigKey("target.utilization"))
+
.withDeprecatedKeys(autoScalerConfigKey("target.utilization"))
+.withFallbackKeys(
+oldOperatorConfigKey("utilization.target"),
+oldOperatorConfigKey("target.utilization"))
 .withDescription("Target vertex utilization");
 
-public static final ConfigOption<Double> TARGET_UTILIZATION_BOUNDARY =
-autoScalerConfig("target.utilization.boundary")
+public static final ConfigOption<Double> UTILIZATION_TARGET_BOUNDARY =
+autoScalerConfig("utilization.target.boundary")

Review Comment:
   we should also not rename this option; the name you set is incorrect 
anyway: keys should not be prefixes of each other (not YAML compliant).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36836][Autoscaler] Supports config the upper and lower limits of target utilization [flink-kubernetes-operator]

2025-01-07 Thread via GitHub


gyfora commented on code in PR #921:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/921#discussion_r1905782327


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/config/AutoScalerOptions.java:
##
@@ -89,21 +89,41 @@ private static ConfigOptions.OptionBuilder 
autoScalerConfig(String key) {
 + "seconds suffix, daily expression's 
formation is startTime-endTime, such as 9:30:30-10:50:20, when exclude from 
9:30:30-10:50:20 in Monday and Thursday "
 + "we can express it as 9:30:30-10:50:20 
&& * * * ? * 2,5");
 
-public static final ConfigOption<Double> TARGET_UTILIZATION =
-autoScalerConfig("target.utilization")
+public static final ConfigOption<Double> UTILIZATION_TARGET =
+autoScalerConfig("utilization.target")
 .doubleType()
 .defaultValue(0.7)
-
.withFallbackKeys(oldOperatorConfigKey("target.utilization"))
+
.withDeprecatedKeys(autoScalerConfigKey("target.utilization"))
+.withFallbackKeys(
+oldOperatorConfigKey("utilization.target"),
+oldOperatorConfigKey("target.utilization"))
 .withDescription("Target vertex utilization");
 
-public static final ConfigOption<Double> TARGET_UTILIZATION_BOUNDARY =
-autoScalerConfig("target.utilization.boundary")
+public static final ConfigOption<Double> UTILIZATION_TARGET_BOUNDARY =
+autoScalerConfig("utilization.target.boundary")
 .doubleType()
 .defaultValue(0.3)
 
.withFallbackKeys(oldOperatorConfigKey("target.utilization.boundary"))
+.withFallbackKeys(
+
oldOperatorConfigKey("target.utilization.boundary"),
+
oldOperatorConfigKey("utilization.target.boundary"))
 .withDescription(
 "Target vertex utilization boundary. Scaling won't 
be performed if the processing capacity is within [target_rate / 
(target_utilization - boundary), (target_rate / (target_utilization + 
boundary)]");
 
+public static final ConfigOption<Double> UTILIZATION_MAX =
+autoScalerConfig("utilization.max")
+.doubleType()
+.noDefaultValue()
+.withFallbackKeys(oldOperatorConfigKey("utilization.max"))
+.withDescription("Max vertex utilization");
+
+public static final ConfigOption<Double> UTILIZATION_MIN =
+autoScalerConfig("utilization.min")
+.doubleType()
+.noDefaultValue()
+.withFallbackKeys(oldOperatorConfigKey("utilization.min"))
+.withDescription("Min vertex utilization");

Review Comment:
   I think we should remove the boundary eventually (deprecate it now), but 
min/max should still not have fixed defaults, as the defaults would be derived 
from the current target.
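   A sketch of that derivation (an illustration under the assumption that 
min/max fall back to target +/- boundary; not the actual operator code):
   
   ```java
   import org.apache.flink.autoscaler.config.AutoScalerOptions;
   import org.apache.flink.configuration.Configuration;
   
   final class UtilizationBounds {
       /** Returns {min, max}, deriving unset bounds from target +/- boundary. */
       static double[] resolve(Configuration conf) {
           double target = conf.get(AutoScalerOptions.UTILIZATION_TARGET);
           double boundary = conf.get(AutoScalerOptions.UTILIZATION_TARGET_BOUNDARY);
           double max =
                   conf.getOptional(AutoScalerOptions.UTILIZATION_MAX)
                           .orElse(Math.min(1.0, target + boundary));
           double min =
                   conf.getOptional(AutoScalerOptions.UTILIZATION_MIN)
                           .orElse(Math.max(0.0, target - boundary));
           return new double[] {min, max};
       }
   }
   ```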
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36486] [TABLE SQL/API] Remove deprecated methods StreamTableEnvironment.registerDataStream from flink-table-api-java-bridge module [flink]

2025-01-07 Thread via GitHub


sn-12-3 commented on PR #25527:
URL: https://github.com/apache/flink/pull/25527#issuecomment-2575797787

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-36486) Remove deprecated method `StreamTableEnvironment#registerDataStream`

2025-01-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-36486:
---
Labels: pull-request-available  (was: )

> Remove deprecated method `StreamTableEnvironment#registerDataStream`
> 
>
> Key: FLINK-36486
> URL: https://issues.apache.org/jira/browse/FLINK-36486
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


davidradl commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905775954


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ToTimestampLtzTypeStrategy.java:
##
@@ -20,22 +20,79 @@
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.types.DataType;
 import org.apache.flink.table.types.inference.CallContext;
 import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
 
+import java.util.List;
 import java.util.Optional;
 
 /** Type strategy of {@code TO_TIMESTAMP_LTZ}. */
 @Internal
 public class ToTimestampLtzTypeStrategy implements TypeStrategy {
 
+private static final int DEFAULT_PRECISION = 3;
+
 @Override
 public Optional<DataType> inferType(CallContext callContext) {
-if (callContext.isArgumentLiteral(1)) {
-final int precision = callContext.getArgumentValue(1, 
Integer.class).get();
-return Optional.of(DataTypes.TIMESTAMP_LTZ(precision));
+List<DataType> argumentTypes = callContext.getArgumentDataTypes();
+int argCount = argumentTypes.size();
+
+if (argCount < 1 || argCount > 3) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ requires 1 to 3 arguments, but 
"
++ argCount
++ " were provided.");
+}
+
+LogicalType firstType = argumentTypes.get(0).getLogicalType();
+LogicalTypeRoot firstTypeRoot = firstType.getTypeRoot();
+
+if (argCount == 1) {
+if (!isCharacterType(firstTypeRoot) && 
!firstType.is(LogicalTypeFamily.NUMERIC)) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 1 argument, TO_TIMESTAMP_LTZ 
accepts <NUMERIC> or <CHARACTER_STRING>.");
+}
+} else if (argCount == 2) {
+LogicalType secondType = argumentTypes.get(1).getLogicalType();
+LogicalTypeRoot secondTypeRoot = secondType.getTypeRoot();
+if (firstType.is(LogicalTypeFamily.NUMERIC)) {
+if (secondTypeRoot != LogicalTypeRoot.INTEGER) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ(<NUMERIC>, <INTEGER>) 
requires the second argument to be <INTEGER>.");
+}
+} else if (isCharacterType(firstTypeRoot)) {
+if (!isCharacterType(secondTypeRoot)) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ(<CHARACTER_STRING>, 
<CHARACTER_STRING>) requires the second argument to be <CHARACTER_STRING>.");
+}
+} else {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 2 arguments, TO_TIMESTAMP_LTZ 
requires the first argument to be <NUMERIC> or <CHARACTER_STRING>.");
+}
+} else if (argCount == 3) {
+if (!isCharacterType(firstTypeRoot)
+|| 
!isCharacterType(argumentTypes.get(1).getLogicalType().getTypeRoot())
+|| 
!isCharacterType(argumentTypes.get(2).getLogicalType().getTypeRoot())) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 3 arguments, TO_TIMESTAMP_LTZ 
requires all three arguments to be of type <CHARACTER_STRING>.");

Review Comment:
   what does `<CHARACTER_STRING>` mean - I assume we mean the Flink logical types 
`<CHAR>` or `<VARCHAR>`. I assume that STRING resolves to VARCHAR.
   
   Because it is in angle brackets it looks like a logical Flink type name to 
me. I suggest changing <NUMERIC> and <CHARACTER_STRING> to numeric and character 
in the messages, so they do not look like Flink logical type names.
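   
   For reference, the argument shapes the strategy is meant to accept (taken 
from the yml documentation quoted earlier in this thread), as a runnable 
sketch; the aliases and literal values are illustrative:
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class ToTimestampLtzShapes {
       public static void main(String[] args) {
           TableEnvironment env =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           // 1 arg: numeric epoch millis (default precision 3) or a timestamp string;
           // 2 args: numeric + precision (0 = seconds, 3 = millis) or string + format;
           // 3 args: string + format + time zone.
           env.executeSql(
                           "SELECT "
                                   + "TO_TIMESTAMP_LTZ(1234567890123) AS t1, "
                                   + "TO_TIMESTAMP_LTZ(1234567890, 0) AS t2, "
                                   + "TO_TIMESTAMP_LTZ('2024-01-01 00:00:00.000') AS t3, "
                                   + "TO_TIMESTAMP_LTZ('2024-01-01 00:00:00', 'yyyy-MM-dd HH:mm:ss', 'UTC') AS t4")
                   .print();
       }
   }
   ```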



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36610] MySQL CDC supports parsing gh-ost / pt-osc generated schema changes [flink-cdc]

2025-01-07 Thread via GitHub


lvyanquan commented on PR #3668:
URL: https://github.com/apache/flink-cdc/pull/3668#issuecomment-2575213081

   Hi @leonardBang, @ruanhang1993 could you please help to trigger CI check?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-37024) Task can be stuck in deploying state forever when canceling job/failover

2025-01-07 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-37024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-37024:
--
Description: We observed that a task can be stuck in the DEPLOYING state 
forever when the task loading/instantiating logic has some issues. Cancelling 
the job / failover caused by failures of other tasks will also get stuck, as 
the cancel watchdog won't work for tasks in the CREATED/DEPLOYING state at 
present. We should make the cancel watchdog cover tasks in DEPLOYING as well 
(no need for tasks in the CREATED state, as there is no real logic between 
CREATED->DEPLOYING).  (was: We observed that a task can be stuck in the 
DEPLOYING state forever when the task initializing logic has some issues. 
Cancelling the job / failover caused by failures of other tasks will also get 
stuck, as the cancel watchdog won't work for tasks in the CREATED/DEPLOYING 
state at present. We should make the cancel watchdog cover tasks in DEPLOYING 
as well (no need for tasks in the CREATED state, as there is no real logic 
between CREATED->DEPLOYING).)
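
A sketch of the intended coverage (an assumption about the shape of the fix, 
not the actual patch):
{code}
import org.apache.flink.runtime.execution.ExecutionState;

/** Which states the cancellation watchdog should cover. */
final class CancelWatchdogCoverage {
    static boolean covered(ExecutionState state) {
        // DEPLOYING is newly covered; CREATED stays out because nothing
        // user-defined runs between CREATED and DEPLOYING.
        return state == ExecutionState.DEPLOYING
                || state == ExecutionState.RUNNING
                || state == ExecutionState.CANCELING;
    }
}
{code}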

> Task can be stuck in deploying state forever when canceling job/failover
> 
>
> Key: FLINK-37024
> URL: https://issues.apache.org/jira/browse/FLINK-37024
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> We observed that a task can be stuck in the DEPLOYING state forever when the 
> task loading/instantiating logic has some issues. Cancelling the job / failover 
> caused by failures of other tasks will also get stuck, as the cancel watchdog 
> won't work for tasks in the CREATED/DEPLOYING state at present. We should make 
> the cancel watchdog cover tasks in DEPLOYING as well (no need for tasks in the 
> CREATED state, as there is no real logic between CREATED->DEPLOYING).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36898][Table] Support SQL FLOOR and CEIL functions with NanoSecond and MicroSecond for TIMESTAMP_TLZ [flink]

2025-01-07 Thread via GitHub


snuyanzin commented on code in PR #25897:
URL: https://github.com/apache/flink/pull/25897#discussion_r1905510096


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/TimeFunctionsITCase.java:
##
@@ -588,34 +588,50 @@ private Stream ceilTestCases() {
 LocalDateTime.of(3001, 1, 1, 0, 0),
 TIMESTAMP().nullable())
 .testResult(
-$("f3").cast(TIMESTAMP_LTZ(3))
+$("f3").cast(TIMESTAMP_LTZ(4))
 .ceil(TimeIntervalUnit.HOUR)
 .cast(STRING()),
-"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(3)) TO 
HOUR) AS STRING)",
+"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(4)) TO 
HOUR) AS STRING)",
 LocalDateTime.of(2021, 9, 24, 10, 0, 0, 0)
 .format(TIMESTAMP_FORMATTER),
 STRING().nullable())
 .testResult(
-$("f3").cast(TIMESTAMP_LTZ(3))
+$("f3").cast(TIMESTAMP_LTZ(4))
 .ceil(TimeIntervalUnit.MINUTE)
 .cast(STRING()),
-"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(3)) TO 
MINUTE) AS STRING)",
+"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(4)) TO 
MINUTE) AS STRING)",
 LocalDateTime.of(2021, 9, 24, 9, 21, 0, 0)
 .format(TIMESTAMP_FORMATTER),
 STRING().nullable())
 .testResult(
-$("f3").cast(TIMESTAMP_LTZ(3))
+$("f3").cast(TIMESTAMP_LTZ(4))
 .ceil(TimeIntervalUnit.SECOND)
 .cast(STRING()),
-"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(3)) TO 
SECOND) AS STRING)",
+"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(4)) TO 
SECOND) AS STRING)",
 LocalDateTime.of(2021, 9, 24, 9, 20, 51, 0)
 .format(TIMESTAMP_FORMATTER),
 STRING().nullable())
 .testResult(
-$("f3").cast(TIMESTAMP_LTZ(3))
+$("f3").cast(TIMESTAMP_LTZ(4))
 .ceil(TimeIntervalUnit.MILLISECOND)
 .cast(STRING()),
-"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(3)) TO 
MILLISECOND) AS STRING)",
+"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(4)) TO 
MILLISECOND) AS STRING)",
+LocalDateTime.of(2021, 9, 24, 9, 20, 50, 
925_000_000)
+.format(TIMESTAMP_FORMATTER),
+STRING().nullable())
+.testResult(
+$("f3").cast(TIMESTAMP_LTZ(4))
+.ceil(TimeIntervalUnit.MICROSECOND)
+.cast(STRING()),
+"CAST(CEIL(CAST(f3 AS TIMESTAMP_LTZ(4)) TO 
MICROSECOND) AS STRING)",
+LocalDateTime.of(2021, 9, 24, 9, 20, 50, 
924_000_000)
+.format(TIMESTAMP_FORMATTER),

Review Comment:
   Why are we trying to check `TO MICROSECOND` with millis only?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [BP-1.19][FLINK-36689][Runtime/Web Frontend] Update ng-zorro-antd to v18 [flink]

2025-01-07 Thread via GitHub


mehdid93 opened a new pull request, #25913:
URL: https://github.com/apache/flink/pull/25913

   ## What is the purpose of the change
   
   This PR backports the changes of the PR made by @simplejason 
(https://github.com/apache/flink/pull/25713) on master to version 1.19.x for 
the dependency upgrades.
   
   
   ## Brief change log
   
   - Angular: v18.2.13
   - ng-zorro-antd: v18.2.1
   - typescript: v5.4.5
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.19][FLINK-34194] Update CI to Ubuntu 22.04 (Jammy) [flink]

2025-01-07 Thread via GitHub


flinkbot commented on PR #25911:
URL: https://github.com/apache/flink/pull/25911#issuecomment-2575635100

   
   ## CI report:
   
   * b40da18b765c3307b3d0ee7aa39f072aedc7a4ac UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.19][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2025-01-07 Thread via GitHub


flinkbot commented on PR #25912:
URL: https://github.com/apache/flink/pull/25912#issuecomment-2575635372

   
   ## CI report:
   
   * c95dfa1d057017df4226527dea675db6d2d69445 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [BP-1.19][FLINK-36740] [WebFrontend] Update frontend dependencies to address vulnerabilities [flink]

2025-01-07 Thread via GitHub


mehdid93 opened a new pull request, #25914:
URL: https://github.com/apache/flink/pull/25914

   ## What is the purpose of the change
   
   This PR backports the changes of the PR I made in 
(https://github.com/apache/flink/pull/25718) on master to version 1.19.x for 
dependency upgrades and vulnerability fixes.
   
   
   ## Brief change log
   
   - Update of the dependencies
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.19][FLINK-36689][Runtime/Web Frontend] Update ng-zorro-antd to v18 [flink]

2025-01-07 Thread via GitHub


flinkbot commented on PR #25913:
URL: https://github.com/apache/flink/pull/25913#issuecomment-2575660028

   
   ## CI report:
   
   * 6b81d20ec3c688cee63816cfb48a98f4a89cbf3e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-37026) Enable Stale PR Github Action

2025-01-07 Thread Thomas Cooper (Jira)
Thomas Cooper created FLINK-37026:
-

 Summary: Enable Stale PR Github Action
 Key: FLINK-37026
 URL: https://issues.apache.org/jira/browse/FLINK-37026
 Project: Flink
  Issue Type: Improvement
Reporter: Thomas Cooper


As part of the Community Health Initiative (CHI) we made a 
[proposal|https://cwiki.apache.org/confluence/display/FLINK/Stale+PR+Cleanup] 
to clean up old PRs which have seen no activity in a long time (stale PRs).

After 
[discussion|https://lists.apache.org/thread/6yoclzmvymxors8vlpt4nn9r7t3stcsz] 
on the mailing list the proposal was 
[voted|https://lists.apache.org/thread/kc90254wvo5q934doh8o4sbj1fgwvy76] 
through.

We now need to enable the Stale PR GitHub Action with a stale period of 6 
months and a further 3 months to refresh (interact in any way with) the PR.
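
A sketch of the corresponding workflow (the input names come from 
actions/stale; the schedule, label, and exact thresholds are assumptions, not 
the final config):

{code}
name: Stale PRs
on:
  schedule:
    - cron: "0 0 * * *"  # daily
permissions:
  pull-requests: write
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          days-before-pr-stale: 180    # ~6 months without activity
          days-before-pr-close: 90     # ~3 further months to refresh before close
          days-before-issue-stale: -1  # PRs only; leave issues alone
          days-before-issue-close: -1
          stale-pr-label: stale
{code}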



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-1.19][FLINK-36740] [WebFrontend] Update frontend dependencies to address vulnerabilities [flink]

2025-01-07 Thread via GitHub


flinkbot commented on PR #25914:
URL: https://github.com/apache/flink/pull/25914#issuecomment-2575673473

   
   ## CI report:
   
   * e0cbfe6ff83e452be18527edbbaf3812686f0174 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-37024][task] Make cancel watchdog cover tasks stuck in DEPLOYING state [flink]

2025-01-07 Thread via GitHub


flinkbot commented on PR #25915:
URL: https://github.com/apache/flink/pull/25915#issuecomment-2575864735

   
   ## CI report:
   
   * 787e6aa43e7dd3f591169b979b4df181613e30d9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-37018) Adaptive scheduler triggers multiple internal restarts for a single rescale event

2025-01-07 Thread Sai Sharath Dandi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-37018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910724#comment-17910724
 ] 

Sai Sharath Dandi commented on FLINK-37018:
---

Hi [~heigebupahei], thanks a lot for the quick response, FLIP-472 does seem to 
solve this problem. Do you know how much effort would be needed to apply the 
solution to the 1.18/1.19 versions? Meanwhile, I will try to study the FLIP and 
the related PRs in more detail. Thanks!

> Adaptive scheduler triggers multiple internal restarts for a single rescale 
> event
> -
>
> Key: FLINK-37018
> URL: https://issues.apache.org/jira/browse/FLINK-37018
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.18.0
>Reporter: Sai Sharath Dandi
>Priority: Major
> Attachments: jobmanager.log
>
>
> We observe that a single rescale event from autoscaler triggers multiple 
> internal restarts by the adaptive scheduler despite the job not having any 
> other reason/exception for internal restarts. There can be 2-3 restarts over 
> a very short period (1-2 mins) before the job stabilizes.
> In the attached job manager logs, we can see there are
>  # Can change the parallelism of job. Restarting job. (17 times)
>  # Received resource requirements from job (7 times).
>  
> The job was internally restarted 17 times despite receiving only 7 rescaling 
> requests from the autoscaler.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905955530


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ToTimestampLtzTypeStrategy.java:
##
@@ -20,22 +20,79 @@
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.types.DataType;
 import org.apache.flink.table.types.inference.CallContext;
 import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
 
+import java.util.List;
 import java.util.Optional;
 
 /** Type strategy of {@code TO_TIMESTAMP_LTZ}. */
 @Internal
 public class ToTimestampLtzTypeStrategy implements TypeStrategy {
 
+private static final int DEFAULT_PRECISION = 3;
+
 @Override
 public Optional<DataType> inferType(CallContext callContext) {
-if (callContext.isArgumentLiteral(1)) {
-final int precision = callContext.getArgumentValue(1, 
Integer.class).get();
-return Optional.of(DataTypes.TIMESTAMP_LTZ(precision));
+List<DataType> argumentTypes = callContext.getArgumentDataTypes();
+int argCount = argumentTypes.size();
+
+if (argCount < 1 || argCount > 3) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ requires 1 to 3 arguments, but 
"
++ argCount
++ " were provided.");
+}
+
+LogicalType firstType = argumentTypes.get(0).getLogicalType();
+LogicalTypeRoot firstTypeRoot = firstType.getTypeRoot();
+
+if (argCount == 1) {
+if (!isCharacterType(firstTypeRoot) && 
!firstType.is(LogicalTypeFamily.NUMERIC)) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 1 argument, TO_TIMESTAMP_LTZ 
accepts <NUMERIC> or <CHARACTER_STRING>.");
+}
+} else if (argCount == 2) {
+LogicalType secondType = argumentTypes.get(1).getLogicalType();
+LogicalTypeRoot secondTypeRoot = secondType.getTypeRoot();
+if (firstType.is(LogicalTypeFamily.NUMERIC)) {
+if (secondTypeRoot != LogicalTypeRoot.INTEGER) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ(<NUMERIC>, <INTEGER>) 
requires the second argument to be <INTEGER>.");
+}
+} else if (isCharacterType(firstTypeRoot)) {
+if (!isCharacterType(secondTypeRoot)) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "TO_TIMESTAMP_LTZ(<CHARACTER_STRING>, 
<CHARACTER_STRING>) requires the second argument to be <CHARACTER_STRING>.");
+}
+} else {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 2 arguments, TO_TIMESTAMP_LTZ 
requires the first argument to be <NUMERIC> or <CHARACTER_STRING>.");
+}
+} else if (argCount == 3) {
+if (!isCharacterType(firstTypeRoot)
+|| 
!isCharacterType(argumentTypes.get(1).getLogicalType().getTypeRoot())
+|| 
!isCharacterType(argumentTypes.get(2).getLogicalType().getTypeRoot())) {
+throw new ValidationException(
+"Unsupported argument type. "
++ "When taking 3 arguments, TO_TIMESTAMP_LTZ 
requires all three arguments to be of type <CHARACTER_STRING>.");

Review Comment:
   Will modify. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905956028


##
docs/data/sql_functions_zh.yml:
##
@@ -805,11 +805,12 @@ temporal:
   - sql: TO_DATE(string1[, string2])
 table: toDate(STRING1[, STRING2])
 description: 将格式为 string2(默认为 'yyyy-MM-dd')的字符串 string1 转换为日期。
-  - sql: TO_TIMESTAMP_LTZ(numeric, precision)
-table: toTimestampLtz(numeric, PRECISION)
-description: |
-  将纪元秒或纪元毫秒转换为 TIMESTAMP_LTZ,有效精度为 0 或 3,0 代表 
`TO_TIMESTAMP_LTZ(epochSeconds, 0)`,
-  3 代表` TO_TIMESTAMP_LTZ(epochMilliseconds, 3)`。
+  - sql: TO_TIMESTAMP_LTZ(numeric[, precision])
+table: toTimestampLtz(NUMERIC, PRECISION)
+description: Converts an epoch seconds or epoch milliseconds to a 
TIMESTAMP_LTZ, the valid precision is 0 or 3, the 0 represents 
TO_TIMESTAMP_LTZ(epochSeconds, 0), the 3 represents 
TO_TIMESTAMP_LTZ(epochMilliseconds, 3). If no precision is provided, the 
default precision is 3. If any input is null, the function will return null.
+  - sql: TO_TIMESTAMP_LTZ(string1[, string2[, string3]])
+table: toTimestampLtz(STRING1[, STRING2[, STRING3]])
+description: Converts a timestamp string string1 with format string2 (by 
default 'yyyy-MM-dd HH:mm:ss.SSS') in time zone string3 (by default 'UTC') to a 
TIMESTAMP_LTZ. If any input is null, the function will return null.

Review Comment:
   Hi @davidradl, thanks for the comment. Will create a Jira for people to pick 
it up. 
   
   
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36181] Use Java 17 by default [flink]

2025-01-07 Thread via GitHub


MartijnVisser commented on code in PR #25898:
URL: https://github.com/apache/flink/pull/25898#discussion_r1906000962


##
pom.xml:
##
@@ -124,7 +124,7 @@ under the License.

2.15.3

2.7.0
true
-   11
+   17

Review Comment:
   I've added `

Re: [PR] [BP-1.20][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2025-01-07 Thread via GitHub


afedulov merged PR #25794:
URL: https://github.com/apache/flink/pull/25794


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905915205


##
flink-python/pyflink/table/tests/test_expression.py:
##
@@ -285,7 +285,14 @@ def test_expressions(self):
 self.assertEqual("toDate('2018-03-18')", str(to_date('2018-03-18')))
 self.assertEqual("toDate('2018-03-18', 'yyyy-MM-dd')",
  str(to_date('2018-03-18', 'yyyy-MM-dd')))
-self.assertEqual('toTimestampLtz(123, 0)', str(to_timestamp_ltz(123, 
0)))

Review Comment:
   Hi @davidradl , I checked those negative cases in the Java tests. To reduce 
the Python overhead in Flink, we usually don't duplicate tests in the Python 
tests. 
   
   Also, these Flink tests are very light-weight, and only check function 
**signatures**, not the functions themselves. In other words, even a negative 
test won't return an error here. If we add one here, it might lead to some 
confusion. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905915205


##
flink-python/pyflink/table/tests/test_expression.py:
##
@@ -285,7 +285,14 @@ def test_expressions(self):
 self.assertEqual("toDate('2018-03-18')", str(to_date('2018-03-18')))
 self.assertEqual("toDate('2018-03-18', 'yyyy-MM-dd')",
  str(to_date('2018-03-18', 'yyyy-MM-dd')))
-self.assertEqual('toTimestampLtz(123, 0)', str(to_timestamp_ltz(123, 
0)))

Review Comment:
   Hi @davidradl , I checked those negative cases in the Java tests. To reduce 
the Python overhead in Flink, we usually don't duplicate tests in the Python 
tests. 
   
   Also, these Flink tests are very light-weight, and only check function 
**signatures**, not the functions themselves. In other words, even a negative 
test won't return an error here, as long as the inputs are of the correct type. 
If we add one here, it might lead to some confusion. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-3154) Update Kryo version from 2.24.0 to latest Kryo LTS version

2025-01-07 Thread Kurt Ostfeld (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910793#comment-17910793
 ] 

Kurt Ostfeld commented on FLINK-3154:
-

The PR is ready: [https://github.com/apache/flink/pull/25896]

This is based on the Flink 2.0 master branch as of a few days ago. This passes 
all CI tests. There are no merge conflicts as of this moment.

This is a simpler PR, as it doesn't need compatibility with existing saved 
Flink 1.x state as Flink 2.0 isn't providing that.

If there is review feedback or requested changes, I can address those. 
Otherwise, this should be ready to merge.

> Update Kryo version from 2.24.0 to latest Kryo LTS version
> --
>
> Key: FLINK-3154
> URL: https://issues.apache.org/jira/browse/FLINK-3154
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Affects Versions: 1.0.0
>Reporter: Maximilian Michels
>Priority: Not a Priority
>  Labels: pull-request-available
>
> Flink's Kryo version is outdated and could be updated to a newer version, 
> e.g. kryo-3.0.3.
> From ML: we cannot bump the Kryo version easily - the serialization format 
> changed (that's why they have a new major version), which would render all 
> Flink savepoints and checkpoints incompatible.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36227] Restore compatibility with Logback 1.2 [flink]

2025-01-07 Thread via GitHub


piotrp commented on code in PR #25813:
URL: https://github.com/apache/flink/pull/25813#discussion_r1906122497


##
flink-core/src/main/java/org/apache/flink/util/MdcUtils.java:
##
@@ -45,7 +45,13 @@ public class MdcUtils {
 public static MdcCloseable withContext(Map context) {
 final Map orig = MDC.getCopyOfContextMap();
 MDC.setContextMap(context);
-return () -> MDC.setContextMap(orig);
+return () -> {

Review Comment:
   ... and this is something I already hit and forgot, thanks for the heads up 
:D
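   
   For context, a plausible completion of the truncated hunk above (an 
assumption based on Logback 1.2's MDCAdapter, whose `setContextMap` NPEs on a 
null map, while `getCopyOfContextMap()` returns null when no context is set):
   
   ```java
   import java.util.Map;
   import org.slf4j.MDC;
   
   // Sketch only: restore the previous MDC state without ever passing null
   // to setContextMap, which Logback 1.2's adapter cannot handle.
   public static MdcCloseable withContext(Map<String, String> context) {
       final Map<String, String> orig = MDC.getCopyOfContextMap(); // may be null
       MDC.setContextMap(context);
       return () -> {
           if (orig == null) {
               MDC.clear(); // the "no previous context" case
           } else {
               MDC.setContextMap(orig);
           }
       };
   }
   ```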



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36227] Restore compatibility with Logback 1.2 [flink]

2025-01-07 Thread via GitHub


piotrp commented on code in PR #25813:
URL: https://github.com/apache/flink/pull/25813#discussion_r1906123080


##
flink-core/pom.xml:
##
@@ -180,6 +180,13 @@ under the License.
test

 
+   

Review Comment:
   Thanks for checking, I fully expected this holiday delay :)
   
   Reflection works, I added the `@Isolated` annotation to ensure this won't 
interfere with other tests.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36227] Restore compatibility with Logback 1.2 [flink]

2025-01-07 Thread via GitHub


piotrp commented on code in PR #25813:
URL: https://github.com/apache/flink/pull/25813#discussion_r1906118440


##
flink-core/src/main/java/org/apache/flink/util/MdcUtils.java:
##
@@ -45,7 +45,13 @@ public class MdcUtils {
 public static MdcCloseable withContext(Map<String, String> context) {
 final Map<String, String> orig = MDC.getCopyOfContextMap();
 MDC.setContextMap(context);
-return () -> MDC.setContextMap(orig);
+return () -> {

Review Comment:
   And now a simpler test with reflection



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36227] Restore compatibility with Logback 1.2 [flink]

2025-01-07 Thread via GitHub


afedulov commented on code in PR #25813:
URL: https://github.com/apache/flink/pull/25813#discussion_r1906119278


##
flink-core/src/main/java/org/apache/flink/util/MdcUtils.java:
##
@@ -45,7 +45,13 @@ public class MdcUtils {
 public static MdcCloseable withContext(Map<String, String> context) {
 final Map<String, String> orig = MDC.getCopyOfContextMap();
 MDC.setContextMap(context);
-return () -> MDC.setContextMap(orig);
+return () -> {

Review Comment:
   CI is going to fail on the missing Apache header :)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36976] Upgrade jackson from 2.15.3 to 2.18.2 [flink]

2025-01-07 Thread via GitHub


snuyanzin merged PR #25865:
URL: https://github.com/apache/flink/pull/25865


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36181] Use Java 17 by default [flink]

2025-01-07 Thread via GitHub


snuyanzin commented on code in PR #25898:
URL: https://github.com/apache/flink/pull/25898#discussion_r1906139229


##
pom.xml:
##
@@ -124,7 +124,7 @@ under the License.

2.15.3

2.7.0
true
-   11
+   17

Review Comment:
   no it is different
   
   I tested locally with this PR branch
   1. add any java 12-17 feature, e.g. any constant like 
   ```java
   final String t =
   """
   text block
   """;
   ``` 
   this text block feature came with java 14
   2. build it with `./mvnw clean install -DskipTests -Dscala-2.12 -Pfast 
-Pskip-webui-build -U -T4`
   by default there is java 17 so everything is ok
   3. now do the same with Java 11 (also put it into JAVA_HOME), like 
   `./mvnw clean install -DskipTests -Dscala-2.12 -Pfast -Pskip-webui-build -U 
-T4 -Pjava11 -Pjava11-target`
   and it fails like
   ```
   [ERROR] COMPILATION ERROR : 
   [INFO] -
   [ERROR] 
flink/flink-formats/flink-orc/src/main/java/org/apache/flink/orc/OrcSplitReaderUtil.java:[122,19]
 unclosed string literal
   [ERROR] 
flink/flink-formats/flink-orc/src/main/java/org/apache/flink/orc/OrcSplitReaderUtil.java:[123,5]
 not a statement
   [ERROR] 
flink/flink-formats/flink-orc/src/main/java/org/apache/flink/orc/OrcSplitReaderUtil.java:[123,11]
 ';' expected
   [ERROR] 
flink/flink-formats/flink-orc/src/main/java/org/apache/flink/orc/OrcSplitReaderUtil.java:[124,7]
 unclosed string literal
   [INFO] 4 errors 
   ```
   
   since I put the constant in `OrcSplitReaderUtil`, it complains about that class



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1906146745


##
flink-python/pyflink/table/expressions.py:
##
@@ -306,19 +306,38 @@ def to_timestamp(timestamp_str: Union[str, 
Expression[str]],
 return _binary_op("toTimestamp", timestamp_str, format)
 
 
-def to_timestamp_ltz(numeric_epoch_time, precision) -> Expression:
+def to_timestamp_ltz(*args) -> Expression:
 """
-Converts a numeric type epoch time to TIMESTAMP_LTZ.
+Converts a value to a timestamp with local time zone.
 
-The supported precision is 0 or 3:
-0 means the numericEpochTime is in second.
-3 means the numericEpochTime is in millisecond.
+Supported functions:
+1. to_timestamp_ltz(Numeric) -> DataTypes.TIMESTAMP_LTZ
+2. to_timestamp_ltz(Numeric, Integer) -> DataTypes.TIMESTAMP_LTZ

Review Comment:
   Addressed. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36181] Use Java 17 by default [flink]

2025-01-07 Thread via GitHub


snuyanzin commented on code in PR #25898:
URL: https://github.com/apache/flink/pull/25898#discussion_r1906152062


##
pom.xml:
##
@@ -124,7 +124,7 @@ under the License.

2.15.3

2.7.0
true
-   11
+   17

Review Comment:
   ```suggestion
11
17
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36181] Use Java 17 by default [flink]

2025-01-07 Thread via GitHub


snuyanzin commented on code in PR #25898:
URL: https://github.com/apache/flink/pull/25898#discussion_r1906153824


##
pom.xml:
##
@@ -124,7 +124,7 @@ under the License.

2.15.3

2.7.0
true
-   11
+   17

Review Comment:
   Also played and created a branch where this issue is solved (just 3 lines)
   
   
https://github.com/snuyanzin/flink/commit/ba570320efd5edd04acdd6d8a137fc5e19c794f9



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905915205


##
flink-python/pyflink/table/tests/test_expression.py:
##
@@ -285,7 +285,14 @@ def test_expressions(self):
 self.assertEqual("toDate('2018-03-18')", str(to_date('2018-03-18')))
 self.assertEqual("toDate('2018-03-18', 'yyyy-MM-dd')",
  str(to_date('2018-03-18', 'yyyy-MM-dd')))
-self.assertEqual('toTimestampLtz(123, 0)', str(to_timestamp_ltz(123, 
0)))

Review Comment:
   Hi @davidradl , thanks for the feedback. As you mentioned, negative cases 
are covered by the Java tests. To reduce the Python overhead in Flink, we 
usually don't duplicate Java tests in the Python tests. 
   
   Also, the Flink Python tests are light-weight API tests, and only check 
function **signatures**, not the functions themselves. In other words, even a 
negative test won't return an error here, as long as the inputs are of the 
correct type. If we add a negative test here, it will lead to some confusion. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-37027) Chinese document translation for additional TO_TIMESTAMP_LTZ() functions

2025-01-07 Thread Yiyu Tian (Jira)
Yiyu Tian created FLINK-37027:
-

 Summary: Chinese document translation for additional 
TO_TIMESTAMP_LTZ() functions
 Key: FLINK-37027
 URL: https://issues.apache.org/jira/browse/FLINK-37027
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation
Reporter: Yiyu Tian


https://issues.apache.org/jira/browse/FLINK-36862



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35721) I found out that in the Flink SQL documentation it says that Double type cannot be converted to Boolean type, but in reality, it can.

2025-01-07 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov updated FLINK-35721:
--
Component/s: Documentation

> I found out that in the Flink SQL documentation it says that Double type 
> cannot be converted to Boolean type, but in reality, it can.
> -
>
> Key: FLINK-35721
> URL: https://issues.apache.org/jira/browse/FLINK-35721
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / Planner
>Affects Versions: 1.19.1
>Reporter: jinzhuguang
>Priority: Minor
> Fix For: 1.19.2, 1.20.1
>
> Attachments: image-2024-06-28-16-57-54-354.png
>
>
> I found out that in the Flink SQL documentation it says that Double type 
> cannot be converted to Boolean type, but in reality, it can.
> Related code: 
> org.apache.flink.table.planner.functions.casting.NumericToBooleanCastRule#generateExpression
> Related document URL: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/types/]
> !image-2024-06-28-16-57-54-354.png|width=378,height=342!
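
A minimal sketch to reproduce the reported behavior (pyflink; the cast is 
expected to succeed and return true, contrary to the documented conversion 
matrix):

{code:python}
from pyflink.table import EnvironmentSettings, TableEnvironment

# The docs mark DOUBLE -> BOOLEAN as unsupported, yet the planner's
# NumericToBooleanCastRule generates code for this cast (non-zero -> true).
t_env = TableEnvironment.create(EnvironmentSettings.in_batch_mode())
t_env.execute_sql("SELECT CAST(CAST(1.0 AS DOUBLE) AS BOOLEAN) AS b").print()
{code}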



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-37007][table] Add missing createView methods to TableEnvironment [flink]

2025-01-07 Thread via GitHub


snuyanzin closed pull request #25907: [FLINK-37007][table] Add missing 
createView methods to TableEnvironment 
URL: https://github.com/apache/flink/pull/25907


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-36113) Condition 'transform instanceof PhysicalTransformation' is always 'false'

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910816#comment-17910816
 ] 

Alexander Fedulov edited comment on FLINK-36113 at 1/7/25 9:48 PM:
---

[~tiancx] please provide some additional details in the ticket - what exactly 
is wrong and why? I can only see that PhysicalTransformation triggers an 
exception under very specific conditions.


was (Author: afedulov):
[~tiancx] please provide some additional details in the ticket - what exactly 
is wrong and why? I can only see that PhysicalTransformation triggers an 
exception under specific conditions.

> Condition 'transform instanceof PhysicalTransformation' is always 'false' 
> --
>
> Key: FLINK-36113
> URL: https://issues.apache.org/jira/browse/FLINK-36113
> Project: Flink
>  Issue Type: Improvement
>Reporter: tiancx
>Priority: Major
> Fix For: 1.20.1
>
> Attachments: Snipaste_2024-08-21_01-29-52.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The legacyTransform method in the StreamGraphGenerator class has a judgment: 
> the transform instance of PhysicalTransformation is always false
> {code:java}
> // code placeholder
> private Collection<Integer> legacyTransform(Transformation<?> transform) {
> Collection<Integer> transformedIds;
> if (transform instanceof FeedbackTransformation) {
> transformedIds = transformFeedback((FeedbackTransformation<?>) 
> transform);
> } else if (transform instanceof CoFeedbackTransformation) {
> transformedIds = transformCoFeedback((CoFeedbackTransformation<?>) 
> transform);
> } else if (transform instanceof SourceTransformationWrapper) {
> transformedIds = transform(((SourceTransformationWrapper<?>) 
> transform).getInput());
> } else {
> throw new IllegalStateException("Unknown transformation: " + 
> transform);
> }
> if (transform.getBufferTimeout() >= 0) {
> streamGraph.setBufferTimeout(transform.getId(), 
> transform.getBufferTimeout());
> } else {
> streamGraph.setBufferTimeout(transform.getId(), getBufferTimeout());
> }
> if (transform.getUid() != null) {
> streamGraph.setTransformationUID(transform.getId(), 
> transform.getUid());
> }
> if (transform.getUserProvidedNodeHash() != null) {
> streamGraph.setTransformationUserHash(
> transform.getId(), transform.getUserProvidedNodeHash());
> }
> if (!streamGraph.getExecutionConfig().hasAutoGeneratedUIDsEnabled()) {
> if (transform instanceof PhysicalTransformation
> && transform.getUserProvidedNodeHash() == null
> && transform.getUid() == null) {
> throw new IllegalStateException(
> "Auto generated UIDs have been disabled "
> + "but no UID or hash has been assigned to 
> operator "
> + transform.getName());
> }
> }
> if (transform.getMinResources() != null && 
> transform.getPreferredResources() != null) {
> streamGraph.setResources(
> transform.getId(),
> transform.getMinResources(),
> transform.getPreferredResources());
> }
> streamGraph.setManagedMemoryUseCaseWeights(
> transform.getId(),
> transform.getManagedMemoryOperatorScopeUseCaseWeights(),
> transform.getManagedMemorySlotScopeUseCases());
> return transformedIds;
> } {code}
> If this needs optimization, I am willing to do so
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36113) Condition 'transform instanceof PhysicalTransformation' is always 'false'

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910816#comment-17910816
 ] 

Alexander Fedulov commented on FLINK-36113:
---

[~tiancx] please provide some additional details in the ticket - what exactly 
is wrong and why? I can only see that PhysicalTransformation triggers an 
exception under specific conditions.

> Condition 'transform instanceof PhysicalTransformation' is always 'false' 
> --
>
> Key: FLINK-36113
> URL: https://issues.apache.org/jira/browse/FLINK-36113
> Project: Flink
>  Issue Type: Improvement
>Reporter: tiancx
>Priority: Major
> Fix For: 1.20.1
>
> Attachments: Snipaste_2024-08-21_01-29-52.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The legacyTransform method in the StreamGraphGenerator class has a judgment: 
> the transform instance of PhysicalTransformation is always false
> {code:java}
> // code placeholder
> private Collection<Integer> legacyTransform(Transformation<?> transform) {
> Collection<Integer> transformedIds;
> if (transform instanceof FeedbackTransformation) {
> transformedIds = transformFeedback((FeedbackTransformation<?>) 
> transform);
> } else if (transform instanceof CoFeedbackTransformation) {
> transformedIds = transformCoFeedback((CoFeedbackTransformation<?>) 
> transform);
> } else if (transform instanceof SourceTransformationWrapper) {
> transformedIds = transform(((SourceTransformationWrapper<?>) 
> transform).getInput());
> } else {
> throw new IllegalStateException("Unknown transformation: " + 
> transform);
> }
> if (transform.getBufferTimeout() >= 0) {
> streamGraph.setBufferTimeout(transform.getId(), 
> transform.getBufferTimeout());
> } else {
> streamGraph.setBufferTimeout(transform.getId(), getBufferTimeout());
> }
> if (transform.getUid() != null) {
> streamGraph.setTransformationUID(transform.getId(), 
> transform.getUid());
> }
> if (transform.getUserProvidedNodeHash() != null) {
> streamGraph.setTransformationUserHash(
> transform.getId(), transform.getUserProvidedNodeHash());
> }
> if (!streamGraph.getExecutionConfig().hasAutoGeneratedUIDsEnabled()) {
> if (transform instanceof PhysicalTransformation
> && transform.getUserProvidedNodeHash() == null
> && transform.getUid() == null) {
> throw new IllegalStateException(
> "Auto generated UIDs have been disabled "
> + "but no UID or hash has been assigned to 
> operator "
> + transform.getName());
> }
> }
> if (transform.getMinResources() != null && 
> transform.getPreferredResources() != null) {
> streamGraph.setResources(
> transform.getId(),
> transform.getMinResources(),
> transform.getPreferredResources());
> }
> streamGraph.setManagedMemoryUseCaseWeights(
> transform.getId(),
> transform.getManagedMemoryOperatorScopeUseCaseWeights(),
> transform.getManagedMemorySlotScopeUseCases());
> return transformedIds;
> } {code}
> If this needs optimization, I am willing to do so
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905971151


##
docs/data/sql_functions_zh.yml:
##
@@ -805,11 +805,12 @@ temporal:
   - sql: TO_DATE(string1[, string2])
 table: toDate(STRING1[, STRING2])
 description: 将格式为 string2(默认为 '-MM-dd')的字符串 string1 转换为日期。
-  - sql: TO_TIMESTAMP_LTZ(numeric, precision)
-table: toTimestampLtz(numeric, PRECISION)
-description: |
-  将纪元秒或纪元毫秒转换为 TIMESTAMP_LTZ,有效精度为 0 或 3,0 代表 
`TO_TIMESTAMP_LTZ(epochSeconds, 0)`,
-  3 代表` TO_TIMESTAMP_LTZ(epochMilliseconds, 3)`。
+  - sql: TO_TIMESTAMP_LTZ(numeric[, precision])
+table: toTimestampLtz(NUMERIC, PRECISION)
+description: Converts an epoch seconds or epoch milliseconds to a 
TIMESTAMP_LTZ, the valid precision is 0 or 3, the 0 represents 
TO_TIMESTAMP_LTZ(epochSeconds, 0), the 3 represents 
TO_TIMESTAMP_LTZ(epochMilliseconds, 3). If no precision is provided, the 
default precision is 3. If any input is null, the function will return null.
+  - sql: TO_TIMESTAMP_LTZ(string1[, string2[, string3]])
+table: toTimestampLtz(STRING1[, STRING2[, STRING3]])
+description: Converts a timestamp string string1 with format string2 (by 
default '-MM-dd HH:mm:ss.SSS') in time zone string3 (by default 'UTC') to a 
TIMESTAMP_LTZ. If any input is null, the function will return null.

Review Comment:
   Created Jira: https://issues.apache.org/jira/browse/FLINK-37027
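   
   For reference, a quick usage sketch of the two overload families described 
in the diff above (pyflink expressions; literal values are for illustration 
only):
   
   ```python
from pyflink.table.expressions import to_timestamp_ltz

# Numeric overloads: precision 0 = epoch seconds, 3 = epoch milliseconds;
# when omitted, precision defaults to 3.
ts_secs = to_timestamp_ltz(1704067200, 0)
ts_millis = to_timestamp_ltz(1704067200123)

# String overloads: optional format (default 'yyyy-MM-dd HH:mm:ss.SSS')
# and optional time zone (default 'UTC').
ts_str = to_timestamp_ltz('2024-01-01 00:00:00.000')
ts_fmt = to_timestamp_ltz('2024-01-01 00:00:00', 'yyyy-MM-dd HH:mm:ss')
ts_tz = to_timestamp_ltz('2024-01-01 00:00:00', 'yyyy-MM-dd HH:mm:ss', 'UTC')
   ```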



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36862][table] Implement additional TO_TIMESTAMP_LTZ() functions [flink]

2025-01-07 Thread via GitHub


yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1905956028


##
docs/data/sql_functions_zh.yml:
##
@@ -805,11 +805,12 @@ temporal:
   - sql: TO_DATE(string1[, string2])
 table: toDate(STRING1[, STRING2])
 description: 将格式为 string2(默认为 '-MM-dd')的字符串 string1 转换为日期。
-  - sql: TO_TIMESTAMP_LTZ(numeric, precision)
-table: toTimestampLtz(numeric, PRECISION)
-description: |
-  将纪元秒或纪元毫秒转换为 TIMESTAMP_LTZ,有效精度为 0 或 3,0 代表 
`TO_TIMESTAMP_LTZ(epochSeconds, 0)`,
-  3 代表` TO_TIMESTAMP_LTZ(epochMilliseconds, 3)`。
+  - sql: TO_TIMESTAMP_LTZ(numeric[, precision])
+table: toTimestampLtz(NUMERIC, PRECISION)
+description: Converts an epoch seconds or epoch milliseconds to a 
TIMESTAMP_LTZ, the valid precision is 0 or 3, the 0 represents 
TO_TIMESTAMP_LTZ(epochSeconds, 0), the 3 represents 
TO_TIMESTAMP_LTZ(epochMilliseconds, 3). If no precision is provided, the 
default precision is 3. If any input is null, the function will return null.
+  - sql: TO_TIMESTAMP_LTZ(string1[, string2[, string3]])
+table: toTimestampLtz(STRING1[, STRING2[, STRING3]])
+description: Converts a timestamp string string1 with format string2 (by 
default '-MM-dd HH:mm:ss.SSS') in time zone string3 (by default 'UTC') to a 
TIMESTAMP_LTZ. If any input is null, the function will return null.

Review Comment:
   Hi @davidradl, thanks for the comment. I will create a Jira so that others 
in the community can pick it up. 
   
   
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35453) StreamReader Charset fix with UTF8 in core files

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910818#comment-17910818
 ] 

Alexander Fedulov commented on FLINK-35453:
---

[~xuzifu] is this still relevant? I see that the PR got closed without 
addressing the comments. Did you run into a specific bug/issue? The changes in 
the PR seem somewhat unrelated (Flink config, CSV utils, etc.).

> StreamReader Charset fix with UTF8 in core files
> 
>
> Key: FLINK-35453
> URL: https://issues.apache.org/jira/browse/FLINK-35453
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.19.0, 1.18.1
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.2
>
>
> StreamReader Charset fix with UTF8 in core files



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-36835][flink-s3-fs-hadoop] Fix memory lease in flink-s3-fs-hadoop [flink]

2025-01-07 Thread via GitHub


afedulov commented on PR #25725:
URL: https://github.com/apache/flink/pull/25725#issuecomment-2576308940

   This seems like something that needs to be fixed in upstream Hadoop. Closing 
unless further details are provided as to why Flink has to contain this class.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-36835][flink-s3-fs-hadoop] Fix memory lease in flink-s3-fs-hadoop [flink]

2025-01-07 Thread via GitHub


afedulov closed pull request #25725: [FLINK-36835][flink-s3-fs-hadoop] Fix 
memory lease in flink-s3-fs-hadoop
URL: https://github.com/apache/flink/pull/25725


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-36835) Memory lease in flink-s3-fs module due to hadoop aws version 3.x

2025-01-07 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov closed FLINK-36835.
-
Fix Version/s: (was: 1.20.1)
   Resolution: Won't Fix

> Memory lease in flink-s3-fs module due to hadoop aws version 3.x
> 
>
> Key: FLINK-36835
> URL: https://issues.apache.org/jira/browse/FLINK-36835
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.16.3, 1.17.2, 1.18.1, 1.20.0, 1.19.1
>Reporter: JinxinTang
>Priority: Major
>  Labels: pull-request-available
>
> https://issues.apache.org/jira/browse/HADOOP-13028 introduced 
> org.apache.hadoop.fs.s3a.S3AInstrumentation. An instance is created on every 
> call to org.apache.hadoop.fs.s3a.S3AFileSystem#initialize and registered with 
> the static metrics system via 
> org.apache.hadoop.fs.s3a.S3AInstrumentation#registerAsMetricsSource, but 
> nothing calls metricsSystem.unregisterSource(metricsSourceName) at runtime, 
> so it causes a memory leak when the filesystem plugin is used to write to S3 
> storage.
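
To make the lifecycle issue concrete, a small stand-in sketch of the pattern 
described above (Python, hypothetical names; the real code is Hadoop's Java 
S3AFileSystem/S3AInstrumentation):

{code:python}
# Stand-in for Hadoop's static MetricsSystem: a process-wide registry.
METRICS_SOURCES = {}

class FileSystemStandIn:
    _counter = 0

    def initialize(self):
        # Every initialize() registers a fresh instrumentation object
        # under a unique source name ...
        FileSystemStandIn._counter += 1
        METRICS_SOURCES["S3AMetrics%d" % FileSystemStandIn._counter] = object()

    def close(self):
        # ... but there is no matching unregisterSource() call on close(),
        # so entries accumulate for the lifetime of the process.
        pass

for _ in range(1000):
    fs = FileSystemStandIn()
    fs.initialize()
    fs.close()

print(len(METRICS_SOURCES))  # 1000 -> unbounded growth across initializations
{code}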



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36835) Memory lease in flink-s3-fs module due to hadoop aws version 3.x

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910825#comment-17910825
 ] 

Alexander Fedulov commented on FLINK-36835:
---

This looks like something that needs to be fixed in upstream Hadoop. Closing 
unless further details are provided as to why Flink is the appropriate place to 
fix this.

> Memory lease in flink-s3-fs module due to hadoop aws version 3.x
> 
>
> Key: FLINK-36835
> URL: https://issues.apache.org/jira/browse/FLINK-36835
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.16.3, 1.17.2, 1.18.1, 1.20.0, 1.19.1
>Reporter: JinxinTang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.1
>
>
> https://issues.apache.org/jira/browse/HADOOP-13028 introduced 
> org.apache.hadoop.fs.s3a.S3AInstrumentation. An instance is created on every 
> call to org.apache.hadoop.fs.s3a.S3AFileSystem#initialize and registered with 
> the static metrics system via 
> org.apache.hadoop.fs.s3a.S3AInstrumentation#registerAsMetricsSource, but 
> nothing calls metricsSystem.unregisterSource(metricsSourceName) at runtime, 
> so it causes a memory leak when the filesystem plugin is used to write to S3 
> storage.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36735) SupportsRowLevelUpdate gives sink only the values of required columns when specifying RowLevelUpdateInfo#requiredColumns

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910827#comment-17910827
 ] 

Alexander Fedulov commented on FLINK-36735:
---

[~jark] [~luoyuxia] since there are no open PRs for the fix and we are 
finalizing 1.20.1, I am moving this to the next patch release.
Please let me know if you still want to target 1.20.1 for the fix.

> SupportsRowLevelUpdate gives sink only the values of required columns when 
> specifying RowLevelUpdateInfo#requiredColumns
> 
>
> Key: FLINK-36735
> URL: https://issues.apache.org/jira/browse/FLINK-36735
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: luoyuxia
>Priority: Critical
> Fix For: 1.20.1
>
>
> When {{RowLevelUpdateInfo#requiredColumns}} is specified to some columns 
> (e.g., primary keys), sink function expects to receive the columns values 
> including both the updated columns and primary key column. But sink function 
> only receives the values of primary key columns, which makes update 
> impossible without the updated columns. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-1.20][FLINK-36689][Runtime/Web Frontend] Update ng-zorro-antd to v18 [flink]

2025-01-07 Thread via GitHub


afedulov commented on PR #25829:
URL: https://github.com/apache/flink/pull/25829#issuecomment-2576206371

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.20][FLINK-36740] [WebFrontend] Update frontend dependencies to address vulnerabilities [flink]

2025-01-07 Thread via GitHub


afedulov commented on PR #25830:
URL: https://github.com/apache/flink/pull/25830#issuecomment-2576210755

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-36739) Update NodeJS to v22 (LTS)

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910799#comment-17910799
 ] 

Alexander Fedulov edited comment on FLINK-36739 at 1/7/25 8:45 PM:
---

*Backported* to release-1.20: bcc44d2d3b8c6de1de074cf0b3ca21b2c38ff1ac


was (Author: afedulov):
*Backported* {*}to release-1.20{*}: bcc44d2d3b8c6de1de074cf0b3ca21b2c38ff1ac

> Update NodeJS to v22 (LTS)
> --
>
> Key: FLINK-36739
> URL: https://issues.apache.org/jira/browse/FLINK-36739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Mehdi
>Assignee: Mehdi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-36739) Update NodeJS to v22 (LTS)

2025-01-07 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov updated FLINK-36739:
--
Fix Version/s: 1.20.1

> Update NodeJS to v22 (LTS)
> --
>
> Key: FLINK-36739
> URL: https://issues.apache.org/jira/browse/FLINK-36739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Mehdi
>Assignee: Mehdi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0, 1.20.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36739) Update NodeJS to v22 (LTS)

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910799#comment-17910799
 ] 

Alexander Fedulov commented on FLINK-36739:
---

*Backported* {*}to release-1.20{*}: bcc44d2d3b8c6de1de074cf0b3ca21b2c38ff1ac

> Update NodeJS to v22 (LTS)
> --
>
> Key: FLINK-36739
> URL: https://issues.apache.org/jira/browse/FLINK-36739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Mehdi
>Assignee: Mehdi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36901) [Backport] Update the NodeJS to v22.11.0 (LTS)

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910801#comment-17910801
 ] 

Alexander Fedulov commented on FLINK-36901:
---

[~mehdidb1993] FYI, we usually do not create explicit backport tickets unless 
there is a specific reason for doing so (see, for example, 
https://issues.apache.org/jira/browse/FLINK-22324)

> [Backport] Update the NodeJS to v22.11.0 (LTS)
> --
>
> Key: FLINK-36901
> URL: https://issues.apache.org/jira/browse/FLINK-36901
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.20.1
>Reporter: Mehdi
>Assignee: Mehdi
>Priority: Major
> Fix For: 1.20.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-36739) Update NodeJS to v22 (LTS)

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910799#comment-17910799
 ] 

Alexander Fedulov edited comment on FLINK-36739 at 1/7/25 8:48 PM:
---

*Backported* {*}to release-1.20{*}: bcc44d2d3b8c6de1de074cf0b3ca21b2c38ff1ac


was (Author: afedulov):
*Backported* to release-1.20: bcc44d2d3b8c6de1de074cf0b3ca21b2c38ff1ac

> Update NodeJS to v22 (LTS)
> --
>
> Key: FLINK-36739
> URL: https://issues.apache.org/jira/browse/FLINK-36739
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Reporter: Mehdi
>Assignee: Mehdi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0, 1.20.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-36901) [Backport] Update the NodeJS to v22.11.0 (LTS)

2025-01-07 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-36901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov closed FLINK-36901.
-
  Assignee: Mehdi
Resolution: Fixed

> [Backport] Update the NodeJS to v22.11.0 (LTS)
> --
>
> Key: FLINK-36901
> URL: https://issues.apache.org/jira/browse/FLINK-36901
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.20.1
>Reporter: Mehdi
>Assignee: Mehdi
>Priority: Major
> Fix For: 1.20.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-36901) [Backport] Update the NodeJS to v22.11.0 (LTS)

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910800#comment-17910800
 ] 

Alexander Fedulov commented on FLINK-36901:
---

Backport tracked in https://issues.apache.org/jira/browse/FLINK-36901

> [Backport] Update the NodeJS to v22.11.0 (LTS)
> --
>
> Key: FLINK-36901
> URL: https://issues.apache.org/jira/browse/FLINK-36901
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.20.1
>Reporter: Mehdi
>Priority: Major
> Fix For: 1.20.1
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-1.19][FLINK-36739] Update the NodeJS to v22.11.0 (LTS) [flink]

2025-01-07 Thread via GitHub


davidradl commented on PR #25912:
URL: https://github.com/apache/flink/pull/25912#issuecomment-2576238743

   @mehdid93 CI is failing in the web UI; I assume the pipeline change is not 
there for 1.19.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.20][FLINK-36740] [WebFrontend] Update frontend dependencies to address vulnerabilities [flink]

2025-01-07 Thread via GitHub


davidradl commented on PR #25830:
URL: https://github.com/apache/flink/pull/25830#issuecomment-2576241527

   Waiting for a clean CI; I will approve once it passes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-36835) Memory lease in flink-s3-fs module due to hadoop aws version 3.x

2025-01-07 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-36835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17910825#comment-17910825
 ] 

Alexander Fedulov edited comment on FLINK-36835 at 1/7/25 10:11 PM:


This looks like something that needs to be fixed in upstream Hadoop. Closing 
until further details are provided as to why Flink is the appropriate place to 
fix this.


was (Author: afedulov):
This looks like something that needs to be fixed in upstream Hadoop. Closing 
unless further details are provided as to why Flink is the appropriate place to 
fix this.

> Memory lease in flink-s3-fs module due to hadoop aws version 3.x
> 
>
> Key: FLINK-36835
> URL: https://issues.apache.org/jira/browse/FLINK-36835
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.16.3, 1.17.2, 1.18.1, 1.20.0, 1.19.1
>Reporter: JinxinTang
>Priority: Major
>  Labels: pull-request-available
>
> https://issues.apache.org/jira/browse/HADOOP-13028 introduced 
> org.apache.hadoop.fs.s3a.S3AInstrumentation. An instance is created on every 
> call to org.apache.hadoop.fs.s3a.S3AFileSystem#initialize and registered with 
> the static metrics system via 
> org.apache.hadoop.fs.s3a.S3AInstrumentation#registerAsMetricsSource, but 
> nothing calls metricsSystem.unregisterSource(metricsSourceName) at runtime, 
> so it causes a memory leak when the filesystem plugin is used to write to S3 
> storage.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

