[jira] [Created] (FLINK-35588) flink sql redis connector

2024-06-13 Thread cris niu (Jira)
cris niu created FLINK-35588:


 Summary: flink sql redis connector
 Key: FLINK-35588
 URL: https://issues.apache.org/jira/browse/FLINK-35588
 Project: Flink
  Issue Type: New Feature
Reporter: cris niu


Flink SQL does not have a Redis connector. I think we should develop a SQL Redis 
connector to make development easier.

I have written some initial code for a SQL Redis connector, and my thoughts are as 
follows (a factory sketch follows the steps):

 

source:

            1. write a factory class that implements DynamicTableSourceFactory

            2. write a class that implements ScanTableSource and define how its 
schema is produced

            3. write a class that extends RichSourceFunction and implements ResultTypeQueryable

            4. register the factory class and related code via the Java SPI mechanism

Finally, the Redis sink follows the same steps as the source, but it extends the source factory.
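To make the steps concrete, here is a minimal sketch of step 1. The class 
RedisDynamicTableSource and the option names ('hostname', 'port') are illustrative 
assumptions, not an agreed design:

{code:java}
import java.util.HashSet;
import java.util.Set;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.ReadableConfig;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.factories.DynamicTableSourceFactory;
import org.apache.flink.table.factories.FactoryUtil;

public class RedisDynamicTableSourceFactory implements DynamicTableSourceFactory {

    // Illustrative options; the real connector would need to agree on these.
    private static final ConfigOption<String> HOSTNAME =
            ConfigOptions.key("hostname").stringType().noDefaultValue();
    private static final ConfigOption<Integer> PORT =
            ConfigOptions.key("port").intType().defaultValue(6379);

    @Override
    public String factoryIdentifier() {
        return "redis"; // matched against 'connector' = 'redis' in the DDL
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        Set<ConfigOption<?>> options = new HashSet<>();
        options.add(HOSTNAME);
        return options;
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        Set<ConfigOption<?>> options = new HashSet<>();
        options.add(PORT);
        return options;
    }

    @Override
    public DynamicTableSource createDynamicTableSource(Context context) {
        FactoryUtil.TableFactoryHelper helper =
                FactoryUtil.createTableFactoryHelper(this, context);
        helper.validate(); // validates the declared options
        ReadableConfig options = helper.getOptions();
        // RedisDynamicTableSource (step 2) is hypothetical here.
        return new RedisDynamicTableSource(options.get(HOSTNAME), options.get(PORT));
    }
}
{code}

For step 4, the factory is registered by listing its fully qualified class name in 
META-INF/services/org.apache.flink.table.factories.Factory.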



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35164][table] Support `ALTER CATALOG RESET` syntax [flink]

2024-06-13 Thread via GitHub


liyubin117 commented on code in PR #24763:
URL: https://github.com/apache/flink/pull/24763#discussion_r1637666706


##
flink-table/flink-sql-client/src/test/resources/sql/catalog_database.q:
##
@@ -769,3 +769,28 @@ desc catalog extended cat2;
 +-+---+
 4 rows in set
 !ok
+

Review Comment:
   done :)



##
flink-table/flink-sql-gateway/src/test/resources/sql/catalog_database.q:
##
@@ -911,3 +911,35 @@ desc catalog extended cat2;
 +-+---+
 4 rows in set
 !ok
+
+alter catalog cat2 reset ('default-database', 'k1');

Review Comment:
   done :)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35164][table] Support `ALTER CATALOG RESET` syntax [flink]

2024-06-13 Thread via GitHub


liyubin117 commented on code in PR #24763:
URL: https://github.com/apache/flink/pull/24763#discussion_r1637667029


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/operations/SqlDdlToOperationConverterTest.java:
##
@@ -124,6 +127,26 @@ public void testAlterCatalog() {
 "cat2",
 "ALTER CATALOG cat2\n  SET 'K1' = 'V1',\n  SET 'k2' = 
'v2_new'",
 expectedOptions);
+
+// test alter catalog reset
+final Set<String> expectedResetKeys = new HashSet<>();
+expectedResetKeys.add("K1");

Review Comment:
   done :)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Update pulsar-1.19.0 [flink-connector-pulsar]

2024-06-13 Thread via GitHub


thinker0 opened a new pull request, #94:
URL: https://github.com/apache/flink-connector-pulsar/pull/94

   
   
   ## Purpose of the change
   
   *For example: Add dynamic sink topic support for Pulsar connector.*
   
   ## Brief change log
   
   - *Change the internal design of `ProducerRegister`.*
   - *Expose topic metadata query in `PulsarSinkContext`.*
   - *Change the internal metadata cache in `MetadataListener`.*
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow the 
conventions defined in our code quality
   guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
   
   - *Added unit tests*
   - *Added integration tests for end-to-end deployment*
   - *Manually verified by running the Pulsar connector on a local Flink 
cluster.*
   
   ## Significant changes
   
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for
   convenience.)*
   
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
   - If yes, how is this documented? (not applicable / docs / JavaDocs / 
not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Update pulsar-1.19.0 [flink-connector-pulsar]

2024-06-13 Thread via GitHub


boring-cyborg[bot] commented on PR #94:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/94#issuecomment-2164765271

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-35569:
--
Fix Version/s: 1.20.0

> SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
>  failed
> --
>
> Key: FLINK-35569
> URL: https://issues.apache.org/jira/browse/FLINK-35569
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Build System / CI
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Zakelly Lan
>Priority: Major
> Fix For: 1.20.0
>
>
> [https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]
> The parameterized test fails when RestoreMode is "CLAIM" and 
> fileMergingAcrossBoundary is false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35237) Allow Sink to Choose HashFunction in PrePartitionOperator

2024-06-13 Thread LvYanquan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854655#comment-17854655
 ] 

LvYanquan commented on FLINK-35237:
---

I've run into the same requirement in the Paimon sink, and I agree that this is a 
reasonable requirement; this change is acceptable to me.

> Allow Sink to Choose HashFunction in PrePartitionOperator
> -
>
> Key: FLINK-35237
> URL: https://issues.apache.org/jira/browse/FLINK-35237
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: zhangdingxin
>Priority: Major
> Fix For: cdc-3.2.0
>
>
> The {{PrePartitionOperator}} in its current implementation only supports a 
> fixed {{HashFunction}} 
> ({{{}org.apache.flink.cdc.runtime.partitioning.PrePartitionOperator.HashFunction{}}}).
>  This limits the ability of Sink implementations to customize the 
> partitioning logic for {{{}DataChangeEvent{}}}s. For example, in the case of 
> partitioned tables, it would be advantageous to allow hashing based on 
> partition keys, hashing according to table names, or using the database 
> engine's internal primary key hash functions (such as with MaxCompute 
> DataSink).
> When users require such custom partitioning logic, they are compelled to 
> implement their PartitionOperator, which undermines the utility of 
> {{{}PrePartitionOperator{}}}.
> To address this limitation, it would be highly desirable to enable the 
> {{PrePartitionOperator}} to support user-specified custom 
> {{HashFunction}}s (Function<DataChangeEvent, Integer>). A possible 
> solution could involve a mechanism analogous to the {{DataSink}} interface, 
> allowing the specification of a {{HashFunctionProvider}} class path in the 
> configuration file. This enhancement would greatly facilitate users in 
> tailoring partition strategies to meet their specific application needs.
> In this case, I want to create new interfaces {{HashFunctionProvider}} and 
> {{HashFunction}}:
> {code:java}
> public interface HashFunctionProvider {
>     HashFunction getHashFunction(Schema schema);
> }
>
> public interface HashFunction extends Function<DataChangeEvent, Integer> {
>     Integer apply(DataChangeEvent event);
> } {code}
> and add a {{getHashFunctionProvider}} method to {{DataSink}}:
>  
> {code:java}
> public interface DataSink {
>
>     /** Get the {@link EventSinkProvider} for writing changed data to external systems. */
>     EventSinkProvider getEventSinkProvider();
>
>     /** Get the {@link MetadataApplier} for applying metadata changes to external systems. */
>     MetadataApplier getMetadataApplier();
>
>     default HashFunctionProvider getHashFunctionProvider() {
>         return new DefaultHashFunctionProvider();
>     }
> } {code}
> and re-implement the {{PrePartitionOperator}} {{recreateHashFunction}} method:
> {code:java}
> private HashFunction recreateHashFunction(TableId tableId) {
>     return hashFunctionProvider.getHashFunction(loadLatestSchemaFromRegistry(tableId));
> } {code}
>  
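As a concrete illustration of the proposal (hypothetical code; it assumes 
{{DataChangeEvent}} exposes {{tableId()}} as in flink-cdc-common), a provider that 
hashes by table name could look like:

{code:java}
// Hypothetical sketch: a provider whose hash function routes DataChangeEvents
// by the hash of their table name, one of the strategies mentioned above for
// partitioned tables.
public class TableNameHashFunctionProvider implements HashFunctionProvider {
    @Override
    public HashFunction getHashFunction(Schema schema) {
        return event -> event.tableId().toString().hashCode();
    }
}
{code}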



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35371][security] Add configuration for SSL keystore and truststore type [flink]

2024-06-13 Thread via GitHub


gaborgsomogyi commented on PR #24919:
URL: https://github.com/apache/flink/pull/24919#issuecomment-2164843469

   I'm fine with merging it on 14th of June EOB. Let's wait on other voices.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-20398][e2e] Migrate test_batch_sql.sh to Java e2e tests framework [flink]

2024-06-13 Thread via GitHub


JingGe commented on code in PR #24471:
URL: https://github.com/apache/flink/pull/24471#discussion_r1637750136


##
flink-end-to-end-tests/flink-batch-sql-test/src/test/java/org/apache/flink/sql/tests/Generator.java:
##
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.tests;
+
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.types.Row;
+
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.time.ZoneOffset;
+import java.util.ArrayList;
+import java.util.List;
+
+class Generator implements FromElementsSource.ElementsSupplier<RowData> {
+private static final long serialVersionUID = -8455653458083514261L;
+private final List<RowData> elements;
+
+static Generator create(
+int numKeys, float rowsPerKeyAndSecond, int durationSeconds, int 
offsetSeconds) {
+final int stepMs = (int) (1000 / rowsPerKeyAndSecond);
+final long durationMs = durationSeconds * 1000L;
+final long offsetMs = offsetSeconds * 2000L;
+final List<RowData> elements = new ArrayList<>();
+int keyIndex = 0;
+long ms = 0;
+while (ms < durationMs) {
+elements.add(createRow(keyIndex++, ms, offsetMs));

Review Comment:
   I still think we should keep the on-the-fly implementation. But we can move 
forward for now and revisit it if we run into performance issues.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35071][cdc-connector][cdc-base] Shade guava31 to avoid dependency conflict with flink below 1.18 [flink-cdc]

2024-06-13 Thread via GitHub


yuxiqian commented on PR #3083:
URL: https://github.com/apache/flink-cdc/pull/3083#issuecomment-2164936973

   Seems there are some conflicts with the `master` branch; could @loserwang1024 
please rebase it when you're available?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35570) Consider PlaceholderStreamStateHandle in checkpoint file merging

2024-06-13 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan updated FLINK-35570:

Fix Version/s: 1.20.0

> Consider PlaceholderStreamStateHandle in checkpoint file merging
> 
>
> Key: FLINK-35570
> URL: https://issues.apache.org/jira/browse/FLINK-35570
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> In checkpoint file merging, we should take {{PlaceholderStreamStateHandle}} 
> into account during the lifecycle, since it can be a file-merged one.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34918) Introduce comment for CatalogStore

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34918:
-
Summary: Introduce comment for CatalogStore  (was: Introduce comment for 
Catalog)

> Introduce comment for CatalogStore
> --
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`, for 
> the following reasons.
> 1. For the sake of design consistency: following the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component, `CatalogDescriptor` includes 
> names and attributes, both of which are used to describe the catalog, so 
> `comment` can be added smoothly.
> 2. Extending the existing class rather than adding a new method to the 
> existing interface. In particular, the `Catalog` interface, as a core 
> interface, is used by a series of important components such as 
> `CatalogFactory`, `CatalogManager` and `FactoryUtil`, and is implemented by a 
> large number of connectors such as JDBC, Paimon, and Hive. Adding methods to 
> it would greatly increase the implementation complexity and, more 
> importantly, the cost of iteration, maintenance, and verification.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34918][table] Introduce comment for CatalogStore [flink]

2024-06-13 Thread via GitHub


liyubin117 opened a new pull request, #24932:
URL: https://github.com/apache/flink/pull/24932

   ## What is the purpose of the change
   
   Provide the ability to set a comment for the catalog.
   
   ## Brief change log
   
   * add a comment field to `CatalogDescriptor` and expose setter/getter 
methods (see the sketch below).
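   
   A minimal sketch of the intended shape (names assumed from this PR's 
description; the merged code may differ):
   
   ```java
   import java.util.Optional;
   import javax.annotation.Nullable;
   import org.apache.flink.configuration.Configuration;
   
   // Sketch only: assumed shape of the change, not the merged code.
   public class CatalogDescriptor {
       private final String catalogName;
       private final Configuration configuration;
       @Nullable private final String comment; // newly introduced
   
       private CatalogDescriptor(
               String catalogName, Configuration configuration, @Nullable String comment) {
           this.catalogName = catalogName;
           this.configuration = configuration;
           this.comment = comment;
       }
   
       public static CatalogDescriptor of(String catalogName, Configuration configuration) {
           return new CatalogDescriptor(catalogName, configuration, null);
       }
   
       public Optional<String> getComment() {
           return Optional.ofNullable(comment);
       }
   
       public CatalogDescriptor setComment(String comment) {
           return new CatalogDescriptor(catalogName, configuration, comment);
       }
   }
   ```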
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogManagerTest#testCatalogStore
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? no


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-35378) [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-06-13 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-35378.
--
Fix Version/s: 1.20.0
   Resolution: Fixed

Fixed in apache/flink:master f0b01277dd23dd0edc2a65c2634370936f95c136

> [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction
> ---
>
> Key: FLINK-35378
> URL: https://issues.apache.org/jira/browse/FLINK-35378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> https://cwiki.apache.org/confluence/pages/resumedraft.action?draftId=303794871&draftShareId=af4ace88-98b7-4a53-aece-cd67d2f91a15&;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34918) Introduce comment for CatalogStore

2024-06-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34918:
---
Labels: pull-request-available  (was: )

> Introduce comment for CatalogStore
> --
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`, for 
> the following reasons.
> 1. For the sake of design consistency: following the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component, `CatalogDescriptor` includes 
> names and attributes, both of which are used to describe the catalog, so 
> `comment` can be added smoothly.
> 2. Extending the existing class rather than adding a new method to the 
> existing interface. In particular, the `Catalog` interface, as a core 
> interface, is used by a series of important components such as 
> `CatalogFactory`, `CatalogManager` and `FactoryUtil`, and is implemented by a 
> large number of connectors such as JDBC, Paimon, and Hive. Adding methods to 
> it would greatly increase the implementation complexity and, more 
> importantly, the cost of iteration, maintenance, and verification.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35590) Cleanup deprecated options usage in docs about state and checkpoint

2024-06-13 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-35590:
---

 Summary: Cleanup deprecated options usage in docs about state and 
checkpoint 
 Key: FLINK-35590
 URL: https://issues.apache.org/jira/browse/FLINK-35590
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.20.0
Reporter: Zakelly Lan
Assignee: Zakelly Lan


Currently, some deprecated options are still used in the docs, such as 
'state.backend'; these should be replaced.
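For example, 'state.backend: rocksdb' would become 'state.backend.type: rocksdb' 
(assuming 'state.backend.type' is the intended replacement key).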



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-13 Thread via GitHub


MartijnVisser merged PR #24805:
URL: https://github.com/apache/flink/pull/24805


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore [flink]

2024-06-13 Thread via GitHub


flinkbot commented on PR #24932:
URL: https://github.com/apache/flink/pull/24932#issuecomment-2165051805

   
   ## CI report:
   
   * 9e10cf9d38d624b4982a20ab863098df408358cf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35589) Support MemorySize type in FlinkCDC ConfigOptions

2024-06-13 Thread LvYanquan (Jira)
LvYanquan created FLINK-35589:
-

 Summary: Support MemorySize type in FlinkCDC ConfigOptions 
 Key: FLINK-35589
 URL: https://issues.apache.org/jira/browse/FLINK-35589
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Affects Versions: cdc-3.2.0
Reporter: LvYanquan
 Fix For: cdc-3.2.0


This allows users to set MemorySize-typed config options, as in Flink.
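
For reference, this is how such an option is declared with Flink core's 
`ConfigOptions` builder, which the CDC `ConfigOptions` could mirror (a sketch; the 
option name is illustrative and the CDC builder API may differ):

{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.MemorySize;

public class MemorySizeOptionExample {
    // Illustrative option name, not an agreed-upon key.
    public static final ConfigOption<MemorySize> WRITE_BUFFER_SIZE =
            ConfigOptions.key("sink.write-buffer.size")
                    .memoryType()
                    .defaultValue(MemorySize.parse("64mb"));
}
{code}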

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34914) FLIP-436: Introduce Catalog-related Syntax

2024-06-13 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854686#comment-17854686
 ] 

Weijie Guo commented on FLINK-34914:


I will mark this as `won't make it` for the 1.20 release, as the RMs were unable to 
contact the contributors.

> FLIP-436: Introduce Catalog-related Syntax
> --
>
> Key: FLINK-34914
> URL: https://issues.apache.org/jira/browse/FLINK-34914
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>
> Umbrella issue for: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35164) Support `ALTER CATALOG RESET` syntax

2024-06-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-35164:
-

Assignee: Yubin Li

> Support `ALTER CATALOG RESET` syntax
> 
>
> Key: FLINK-35164
> URL: https://issues.apache.org/jira/browse/FLINK-35164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-04-18-23-26-59-854.png
>
>
> h3. ALTER CATALOG catalog_name RESET (key1, key2, ...)
> Reset one or more properties to their default values in the specified catalog.
> !image-2024-04-18-23-26-59-854.png|width=781,height=527!
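
For example, as exercised in the PR's SQL test scripts:

{code:sql}
ALTER CATALOG cat2 RESET ('default-database', 'k1');
{code}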



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35591) Azure Pipelines not running for master since c9def981

2024-06-13 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35591:
---
Description: 
No Azure CI builds are triggered for master since 
4a852fee28f2d87529dc05f5ba2e79202a0e00b6.

The PR CI workflows appear to be not affected. I suspect some problem with the 
repo-sync process.

> Azure Pipelines not running for master since c9def981
> -
>
> Key: FLINK-35591
> URL: https://issues.apache.org/jira/browse/FLINK-35591
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> No Azure CI builds are triggered for master since 
> 4a852fee28f2d87529dc05f5ba2e79202a0e00b6.
> The PR CI workflows appear to be not affected. I suspect some problem with 
> the repo-sync process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35591) Azure Pipelines not running for master since c9def981

2024-06-13 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35591:
---
Priority: Blocker  (was: Major)

> Azure Pipelines not running for master since c9def981
> -
>
> Key: FLINK-35591
> URL: https://issues.apache.org/jira/browse/FLINK-35591
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
>
> No Azure CI builds are triggered for master since 
> [c9def981|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary].
>  The PR CI workflows appear to be not affected. 
> Might be the same reason as FLINK-34026.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34918) Support `ALTER CATALOG COMMENT` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34918:
-
Summary: Support `ALTER CATALOG COMMENT` syntax  (was: Introduce comment 
for CatalogStore)

> Support `ALTER CATALOG COMMENT` syntax
> --
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`, for 
> the following reasons.
> 1. For the sake of design consistency: following the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component, `CatalogDescriptor` includes 
> names and attributes, both of which are used to describe the catalog, so 
> `comment` can be added smoothly.
> 2. Extending the existing class rather than adding a new method to the 
> existing interface. In particular, the `Catalog` interface, as a core 
> interface, is used by a series of important components such as 
> `CatalogFactory`, `CatalogManager` and `FactoryUtil`, and is implemented by a 
> large number of connectors such as JDBC, Paimon, and Hive. Adding methods to 
> it would greatly increase the implementation complexity and, more 
> importantly, the cost of iteration, maintenance, and verification.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35592) MysqlDebeziumTimeConverter miss timezone convert to timestamp

2024-06-13 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-35592:
-

Assignee: ZhengYu Chen

> MysqlDebeziumTimeConverter miss timezone convert to timestamp
> -
>
> Key: FLINK-35592
> URL: https://issues.apache.org/jira/browse/FLINK-35592
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: ZhengYu Chen
>Assignee: ZhengYu Chen
>Priority: Major
> Fix For: cdc-3.1.1
>
>
> MysqlDebeziumTimeConverter misses the timezone conversion for timestamps. If a 
> timestamp is converted to the mmddhhmmss format, the timezone conversion is lost.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35591) Azure Pipelines not running for master since c9def981

2024-06-13 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35591:
--

 Summary: Azure Pipelines not running for master since c9def981
 Key: FLINK-35591
 URL: https://issues.apache.org/jira/browse/FLINK-35591
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34918) Introduce comment for CatalogStore & Support `ALTER CATALOG COMMENT` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34918:
-
Summary: Introduce comment for CatalogStore & Support `ALTER CATALOG 
COMMENT` syntax  (was: Support `ALTER CATALOG COMMENT` syntax)

> Introduce comment for CatalogStore & Support `ALTER CATALOG COMMENT` syntax
> ---
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`, for 
> the following reasons.
> 1. For the sake of design consistency: following the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component, `CatalogDescriptor` includes 
> names and attributes, both of which are used to describe the catalog, so 
> `comment` can be added smoothly.
> 2. Extending the existing class rather than adding a new method to the 
> existing interface. In particular, the `Catalog` interface, as a core 
> interface, is used by a series of important components such as 
> `CatalogFactory`, `CatalogManager` and `FactoryUtil`, and is implemented by a 
> large number of connectors such as JDBC, Paimon, and Hive. Adding methods to 
> it would greatly increase the implementation complexity and, more 
> importantly, the cost of iteration, maintenance, and verification.
>  
> !image-2024-05-26-02-11-30-070.png|width=575,height=415!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35592) MysqlDebeziumTimeConverter miss timezone convert to timestamp

2024-06-13 Thread ZhengYu Chen (Jira)
ZhengYu Chen created FLINK-35592:


 Summary: MysqlDebeziumTimeConverter miss timezone convert to 
timestamp
 Key: FLINK-35592
 URL: https://issues.apache.org/jira/browse/FLINK-35592
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: ZhengYu Chen
 Fix For: cdc-3.1.1


MysqlDebeziumTimeConverter misses the timezone conversion for timestamps. If a 
timestamp is converted to the mmddhhmmss format, the timezone conversion is lost.
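
For illustration (assumed, not taken from the actual patch): when formatting an 
epoch timestamp, the server timezone must be applied explicitly, otherwise the JVM 
default zone silently leaks into the result:

{code:java}
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public final class TimestampFormatExample {
    // Formats an epoch timestamp in the given server timezone, e.g. "Asia/Shanghai".
    public static String format(long epochMillis, String serverTimeZone) {
        return DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
                .withZone(ZoneId.of(serverTimeZone))
                .format(Instant.ofEpochMilli(epochMillis));
    }
}
{code}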



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35591) Azure Pipelines not running for master since c9def981

2024-06-13 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35591:
---
Description: 
No Azure CI builds are triggered for master since 
[c9def981|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary].
 The PR CI workflows appear to be not affected. 

Might be the same reason as FLINK-34026.

  was:
No Azure CI builds are triggered for master since c9def981. The PR CI workflows 
appear to be not affected. 

Might be the same reason as FLINK-34026.


> Azure Pipelines not running for master since c9def981
> -
>
> Key: FLINK-35591
> URL: https://issues.apache.org/jira/browse/FLINK-35591
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> No Azure CI builds are triggered for master since 
> [c9def981|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary].
>  The PR CI workflows appear to be not affected. 
> Might be the same reason as FLINK-34026.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35121][pipeline-connector][cdc-base] CDC pipeline connector provide ability to verify requiredOptions and optionalOptions [flink-cdc]

2024-06-13 Thread via GitHub


loserwang1024 commented on PR #3412:
URL: https://github.com/apache/flink-cdc/pull/3412#issuecomment-2165162601

   >  @loserwang1024 mind if I cherry pick your commit?
   
   Just do it.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35591) Azure Pipelines not running for master since c9def981

2024-06-13 Thread Lorenzo Affetti (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854705#comment-17854705
 ] 

Lorenzo Affetti commented on FLINK-35591:
-

Back to normality

!image-2024-06-13-12-31-18-076.png|width=988,height=263!

> Azure Pipelines not running for master since c9def981
> -
>
> Key: FLINK-35591
> URL: https://issues.apache.org/jira/browse/FLINK-35591
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
> Attachments: image-2024-06-13-12-31-18-076.png
>
>
> No Azure CI builds are triggered for master since 
> [c9def981|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary].
>  The PR CI workflows appear to be not affected. 
> Might be the same reason as FLINK-34026.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35591) Azure Pipelines not running for master since c9def981

2024-06-13 Thread Lorenzo Affetti (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lorenzo Affetti updated FLINK-35591:

Attachment: image-2024-06-13-12-31-18-076.png

> Azure Pipelines not running for master since c9def981
> -
>
> Key: FLINK-35591
> URL: https://issues.apache.org/jira/browse/FLINK-35591
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
> Attachments: image-2024-06-13-12-31-18-076.png
>
>
> No Azure CI builds are triggered for master since 
> [c9def981|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary].
>  The PR CI workflows appear to be not affected. 
> Might be the same reason as FLINK-34026.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35574) Setup base branch for FrocksDB-8.10

2024-06-13 Thread Yue Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yue Ma updated FLINK-35574:
---
Description: 
As the first part of FLINK-35573, we need to prepare a base branch for 
FRocksDB-8.10.0 first. Mainly, it needs to be checked out from version 8.10.0 
of the Rocksdb community. Then check pick the commit which used by Flink from 
FRocksDB-6.20.3 to 8.10.0

*Details:*
|*JIRA*|*FrocksDB-6.20.3*|*Commit ID in FrocksDB-8.10.0*|*Plan*|
|[[FLINK-10471] Add Apache Flink specific compaction filter to evict expired 
state which has 
time-to-live|https://github.com/ververica/frocksdb/commit/3da8249d50c8a3a6ea229f43890d37e098372786]|3da8249d50c8a3a6ea229f43890d37e098372786|d606c9450bef7d2a22c794f406d7940d9d2f29a4|Already
 in *FrocksDB-8.10.0*|
|+[[FLINK-19710] Revert implementation of PerfContext back to __thread to avoid 
performance 
regression|https://github.com/ververica/frocksdb/commit/d6f50f33064f1d24480dfb3c586a7bd7a7dbac01]+|d6f50f33064f1d24480dfb3c586a7bd7a7dbac01|
 |Fix in FLINK-35575|
|[FRocksDB release guide and helping 
scripts|https://github.com/ververica/frocksdb/commit/2673de8e5460af8d23c0c7e1fb0c3258ea283419]|2673de8e5460af8d23c0c7e1fb0c3258ea283419|b58ba05a380d9bf0c223bc707f14897ce392ce1b|Already
 in *FrocksDB-8.10.0*|
|+[Add content related to ARM building in the FROCKSDB-RELEASE 
documentation|https://github.com/ververica/frocksdb/commit/ec27ca01db5ff579dd7db1f70cf3a4677b63d589]+|ec27ca01db5ff579dd7db1f70cf3a4677b63d589|6cae002662a45131a0cd90dd84f5d3d3cb958713|Already
 in *FrocksDB-8.10.0*|
|[[FLINK-23756] Update FrocksDB release document with more 
info|https://github.com/ververica/frocksdb/commit/f75e983045f4b64958dc0e93e8b94a7cfd7663be]|f75e983045f4b64958dc0e93e8b94a7cfd7663be|bac6aeb6e012e19d9d5e3a5ee22b84c1e4a1559c|Already
 in *FrocksDB-8.10.0*|
|[Add support for Apple Silicon to RocksJava 
(#9254)|https://github.com/ververica/frocksdb/commit/dac2c60bc31b596f445d769929abed292878cac1]|dac2c60bc31b596f445d769929abed292878cac1|#9254|Already
 in *FrocksDB-8.10.0*|
|[Fix RocksJava releases for macOS 
(#9662)|https://github.com/ververica/frocksdb/commit/22637e11968a627a06a3ac8aa78126e3ae6d1368]|22637e11968a627a06a3ac8aa78126e3ae6d1368|#9662|Already
 in *FrocksDB-8.10.0*|
|+[Fix clang13 build error 
(#9374)|https://github.com/ververica/frocksdb/commit/a20fb9fa96af7b18015754cf44463e22fc123222]+|a20fb9fa96af7b18015754cf44463e22fc123222|#9374|Already
 in *FrocksDB-8.10.0*|
|+[[hotfix] Resolve brken make 
format|https://github.com/ververica/frocksdb/commit/cf0acdc08fb1b8397ef29f3b7dc7e0400107555e]+|7a87e0bf4d59cc48f40ce69cf7b82237c5e8170c|
 |Already in *FrocksDB-8.10.0*|
|+[Update circleci xcode version 
(#9405)|https://github.com/ververica/frocksdb/commit/f24393bdc8d44b79a9be7a58044e5fd01cf50df7]+|cf0acdc08fb1b8397ef29f3b7dc7e0400107555e|#9405|Already
 in *FrocksDB-8.10.0*|
|+[Upgrade to Ubuntu 20.04 in our CircleCI 
config|https://github.com/ververica/frocksdb/commit/1fecfda040745fc508a0ea0bcbb98c970f89ee3e]+|1fecfda040745fc508a0ea0bcbb98c970f89ee3e|
 |Fix in 
[FLINK-35577|https://github.com/facebook/rocksdb/pull/9481/files#diff-78a8a19706dbd2a4425dd72bdab0502ed7a2cef16365ab7030a5a0588927bf47]
fixed in 
https://github.com/facebook/rocksdb/pull/9481/files#diff-78a8a19706dbd2a4425dd72bdab0502ed7a2cef16365ab7030a5a0588927bf47|
|[Disable useless broken tests due to ci-image 
upgraded|https://github.com/ververica/frocksdb/commit/9fef987e988c53a33b7807b85a56305bd9dede81]|9fef987e988c53a33b7807b85a56305bd9dede81|
 |Fix in FLINK-35577|
|[[hotfix] Use zlib's fossils page to replace 
web.archive|https://github.com/ververica/frocksdb/commit/cbc35db93f312f54b49804177ca11dea44b4d98e]|cbc35db93f312f54b49804177ca11dea44b4d98e|8fff7bb9947f9036021f99e3463c9657e80b71ae|Already
 in *FrocksDB-8.10.0*|
|+[[hotfix] Change the resource request when running 
CI|https://github.com/ververica/frocksdb/commit/2ec1019fd0433cb8ea5365b58faa2262ea0014e9]+|2ec1019fd0433cb8ea5365b58faa2262ea0014e9|174639cf1e6080a8f8f37aec132b3a500428f913|Already
 in *FrocksDB-8.10.0*|
|{+}[[FLINK-30321] Upgrade ZLIB of FRocksDB to 1.2.13 
(|https://github.com/ververica/frocksdb/commit/3eac409606fcd9ce44a4bf7686db29c06c205039]{+}[#56|https://github.com/ververica/frocksdb/pull/56]
 
[)|https://github.com/ververica/frocksdb/commit/3eac409606fcd9ce44a4bf7686db29c06c205039]|3eac409606fcd9ce44a4bf7686db29c06c205039|
 |*FrocksDB-8.10.0 has upgrade to 1.3*|
|[fix(CompactionFilter): avoid expensive ToString call when not in 
Debug`|https://github.com/ververica/frocksdb/commit/698c9ca2c419c72145a2e6f5282a7860225b27a0]|698c9ca2c419c72145a2e6f5282a7860225b27a0|927b17e10d2112270ac30c4566238950baba4b7b|Already
 in *FrocksDB-8.10.0*|
|[[FLINK-30457] Add periodic_compaction_seconds option to 
RocksJava|https://github.com/ververica/frocksdb/commit/ebed4b1326ca4c5c684b46813bdcb1164a669da1]|ebed4b1326ca4c5c684b46813bdcb1164a669da1|#8579|Already
 in *FrocksDB-8.10.0*|
|[

[PR] [FLINK-32086][checkpointing] Cleanup useless file-merging managed directory on exit of TM [flink]

2024-06-13 Thread via GitHub


zoltar9264 opened a new pull request, #24933:
URL: https://github.com/apache/flink/pull/24933

   ## What is the purpose of the change
   
   Clean up useless file-merging managed directories (which are never included in 
any checkpoint) on exit of the TM.
   
   ## Brief change log
   
   1. Normalize the file-merging sub-dirs:
 - format the file-merging subtask dir as 
`job_{jobId}_op_{operatorId}_{subtaskIndex}_{parallelism}`
 - format the file-merging exclusive dir as `job_{jobId}_tm_{tmResourceId}`
   2. Track managed directory references and clean up useless ones on exit.
   
   ## Verifying this change
   
   The sub-dir normalization can be verified by 
*FileMergingSnapshotManagerTestBase#testCreateFileMergingSnapshotManager*.
   
   And the cleanup of useless file-merging managed dirs can be verified by tests 
added in this change: *FileMergingSnapshotManagerTestBase#testManagedDirCleanup*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   
   Please help review this change @Zakelly , thanks.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32086) Cleanup non-reported managed directory on exit of TM

2024-06-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32086:
---
Labels: pull-request-available  (was: )

> Cleanup non-reported managed directory on exit of TM
> 
>
> Key: FLINK-32086
> URL: https://issues.apache.org/jira/browse/FLINK-32086
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Zakelly Lan
>Assignee: Feifan Wang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32086][checkpointing] Cleanup useless file-merging managed directory on exit of TM [flink]

2024-06-13 Thread via GitHub


flinkbot commented on PR #24933:
URL: https://github.com/apache/flink/pull/24933#issuecomment-2165343946

   
   ## CI report:
   
   * 4e54756da848b4b6febc23f70175a563c8a95795 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35164][table] Support `ALTER CATALOG RESET` syntax [flink]

2024-06-13 Thread via GitHub


LadyForest merged PR #24763:
URL: https://github.com/apache/flink/pull/24763


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-35164) Support `ALTER CATALOG RESET` syntax

2024-06-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-35164.
-

> Support `ALTER CATALOG RESET` syntax
> 
>
> Key: FLINK-35164
> URL: https://issues.apache.org/jira/browse/FLINK-35164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-04-18-23-26-59-854.png
>
>
> h3. ALTER CATALOG catalog_name RESET (key1, key2, ...)
> Reset one or more properties to their default values in the specified catalog.
> !image-2024-04-18-23-26-59-854.png|width=781,height=527!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35164) Support `ALTER CATALOG RESET` syntax

2024-06-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-35164.
---
Fix Version/s: 1.20.0
   Resolution: Fixed

Fixed in master 9d1690387849303b27050bb0cefaa1bad6e3fb98

> Support `ALTER CATALOG RESET` syntax
> 
>
> Key: FLINK-35164
> URL: https://issues.apache.org/jira/browse/FLINK-35164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-04-18-23-26-59-854.png
>
>
> h3. ALTER CATALOG catalog_name RESET (key1, key2, ...)
> Reset one or more properties to their default values in the specified catalog.
> !image-2024-04-18-23-26-59-854.png|width=781,height=527!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34914) FLIP-436: Introduce Catalog-related Syntax

2024-06-13 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854723#comment-17854723
 ] 

Yubin Li commented on FLINK-34914:
--

[~Weijie Guo] Hi, sorry for the late reply. we are pushing the FLIP forward as 
soon as possible, we have completed most part of works, so we could make it 
done in 1.20 :)

> FLIP-436: Introduce Catalog-related Syntax
> --
>
> Key: FLINK-34914
> URL: https://issues.apache.org/jira/browse/FLINK-34914
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>
> Umbrella issue for: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34914) FLIP-436: Introduce Catalog-related Syntax

2024-06-13 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854723#comment-17854723
 ] 

Yubin Li edited comment on FLINK-34914 at 6/13/24 11:43 AM:


[~Weijie Guo] Hi, sorry for the late reply. we are pushing the FLIP forward as 
soon as possible, and most part of works have been finished, so we could make 
it done in 1.20 :)


was (Author: liyubin117):
[~Weijie Guo] Hi, sorry for the late reply. we are pushing the FLIP forward as 
soon as possible, we have completed most part of works, so we could make it 
done in 1.20 :)

> FLIP-436: Introduce Catalog-related Syntax
> --
>
> Key: FLINK-34914
> URL: https://issues.apache.org/jira/browse/FLINK-34914
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>
> Umbrella issue for: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34914) FLIP-436: Introduce Catalog-related Syntax

2024-06-13 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854723#comment-17854723
 ] 

Yubin Li edited comment on FLINK-34914 at 6/13/24 11:49 AM:


[~Weijie Guo] Hi, sorry for the late reply. We are pushing the FLIP forward as 
soon as possible, and most of the work has been finished, so we can get it done 
in 1.20 :)


was (Author: liyubin117):
[~Weijie Guo] Hi, sorry for the late reply. we are pushing the FLIP forward as 
soon as possible, and most part of works have been finished, so we could make 
it done in 1.20 :)

> FLIP-436: Introduce Catalog-related Syntax
> --
>
> Key: FLINK-34914
> URL: https://issues.apache.org/jira/browse/FLINK-34914
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>
> Umbrella issue for: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35543][HIVE] Upgrade Hive 2.3 connector to version 2.3.10 [flink]

2024-06-13 Thread via GitHub


pan3793 commented on PR #24905:
URL: https://github.com/apache/flink/pull/24905#issuecomment-2165447683

   ping @1996fanrui can we include this in 1.20?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35591) Azure Pipelines not running for master since c9def981

2024-06-13 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35591:
---
Description: 
No Azure CI builds are triggered for master since c9def981. The PR CI workflows 
appear to be not affected. 

Might be the same reason as FLINK-34026.

  was:
No Azure CI builds are triggered for master since 
4a852fee28f2d87529dc05f5ba2e79202a0e00b6.

The PR CI workflows appear to be not affected. I suspect some problem with the 
repo-sync process.


> Azure Pipelines not running for master since c9def981
> -
>
> Key: FLINK-35591
> URL: https://issues.apache.org/jira/browse/FLINK-35591
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> No Azure CI builds are triggered for master since c9def981. The PR CI 
> workflows appear to be not affected. 
> Might be the same reason as FLINK-34026.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34918) Support `ALTER CATALOG COMMENT` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34918:
-
Description: 
We propose to introduce a `getComment()` method in `CatalogDescriptor`, for the 
following reasons.

1. For the sake of design consistency: following the design of FLIP-295 [1], which 
introduced the `CatalogStore` component, `CatalogDescriptor` includes names and 
attributes, both of which are used to describe the catalog, so `comment` can be 
added smoothly.

2. Extending the existing class rather than adding a new method to the existing 
interface. In particular, the `Catalog` interface, as a core interface, is used by 
a series of important components such as `CatalogFactory`, `CatalogManager` and 
`FactoryUtil`, and is implemented by a large number of connectors such as JDBC, 
Paimon, and Hive. Adding methods to it would greatly increase the implementation 
complexity and, more importantly, the cost of iteration, maintenance, and 
verification.

 

!image-2024-05-26-02-11-30-070.png|width=575,height=415!

  was:
We propose to introduce a `getComment()` method in `CatalogDescriptor`, for the 
following reasons.

1. For the sake of design consistency: following the design of FLIP-295 [1], which 
introduced the `CatalogStore` component, `CatalogDescriptor` includes names and 
attributes, both of which are used to describe the catalog, so `comment` can be 
added smoothly.

2. Extending the existing class rather than adding a new method to the existing 
interface. In particular, the `Catalog` interface, as a core interface, is used by 
a series of important components such as `CatalogFactory`, `CatalogManager` and 
`FactoryUtil`, and is implemented by a large number of connectors such as JDBC, 
Paimon, and Hive. Adding methods to it would greatly increase the implementation 
complexity and, more importantly, the cost of iteration, maintenance, and 
verification.


> Support `ALTER CATALOG COMMENT` syntax
> --
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`, for 
> the following reasons.
> 1. For the sake of design consistency: following the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component, `CatalogDescriptor` includes 
> names and attributes, both of which are used to describe the catalog, so 
> `comment` can be added smoothly.
> 2. Extending the existing class rather than adding a new method to the 
> existing interface. In particular, the `Catalog` interface, as a core 
> interface, is used by a series of important components such as 
> `CatalogFactory`, `CatalogManager` and `FactoryUtil`, and is implemented by a 
> large number of connectors such as JDBC, Paimon, and Hive. Adding methods to 
> it would greatly increase the implementation complexity and, more 
> importantly, the cost of iteration, maintenance, and verification.
>  
> !image-2024-05-26-02-11-30-070.png|width=575,height=415!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34918) Introduce comment for CatalogStore & Support `ALTER CATALOG COMMENT` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34918:
-
 Attachment: image-2024-06-13-18-01-34-910.png
Description: 
We propose to introduce a `getComment()` method in `CatalogDescriptor`, for the 
following reasons.

1. For the sake of design consistency: following the design of FLIP-295 [1], which 
introduced the `CatalogStore` component, `CatalogDescriptor` includes names and 
attributes, both of which are used to describe the catalog, so `comment` can be 
added smoothly.

2. Extending the existing class rather than adding a new method to the existing 
interface. In particular, the `Catalog` interface, as a core interface, is used by 
a series of important components such as `CatalogFactory`, `CatalogManager` and 
`FactoryUtil`, and is implemented by a large number of connectors such as JDBC, 
Paimon, and Hive. Adding methods to it would greatly increase the implementation 
complexity and, more importantly, the cost of iteration, maintenance, and 
verification.

 

Set the comment for the specified catalog. If a comment is already set on the 
catalog, the old value is overridden with the new one.

!image-2024-06-13-18-01-34-910.png|width=715,height=523!

  was:
We propose to introduce a `getComment()` method in `CatalogDescriptor`, for the 
following reasons.

1. For the sake of design consistency: following the design of FLIP-295 [1], which 
introduced the `CatalogStore` component, `CatalogDescriptor` includes names and 
attributes, both of which are used to describe the catalog, so `comment` can be 
added smoothly.

2. Extending the existing class rather than adding a new method to the existing 
interface. In particular, the `Catalog` interface, as a core interface, is used by 
a series of important components such as `CatalogFactory`, `CatalogManager` and 
`FactoryUtil`, and is implemented by a large number of connectors such as JDBC, 
Paimon, and Hive. Adding methods to it would greatly increase the implementation 
complexity and, more importantly, the cost of iteration, maintenance, and 
verification.

 

!image-2024-05-26-02-11-30-070.png|width=575,height=415!


> Introduce comment for CatalogStore & Support `ALTER CATALOG COMMENT` syntax
> ---
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-06-13-18-01-34-910.png
>
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`; the 
> reasons are as follows.
> 1. For the sake of design consistency, we follow the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component. `CatalogDescriptor` includes 
> the name and attributes, both of which are used to describe the catalog, so 
> `comment` can be added smoothly.
> 2. We extend an existing class rather than add a new method to an existing 
> interface. In particular, the `Catalog` interface, as a core interface, is 
> used by a series of important components such as `CatalogFactory`, 
> `CatalogManager`, and `FactoryUtil`, and is implemented by a large number of 
> connectors such as JDBC, Paimon, and Hive. Adding methods to it would 
> greatly increase the implementation complexity and, more importantly, the 
> cost of iteration, maintenance, and verification.
>  
> Set comment in the specified catalog. If the comment is already set in the 
> catalog, override the old value with the new one.
> !image-2024-06-13-18-01-34-910.png|width=715,height=523!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [DRAFT][FLINK-34440][formats][protobuf-confluent] add support for protobuf-confluent [flink]

2024-06-13 Thread via GitHub


anupamaggarwal commented on PR #24482:
URL: https://github.com/apache/flink/pull/24482#issuecomment-2165510538

   Hi @klam-shop, apologies, I missed your comment earlier. I am sorry for 
leaving this PR hanging in the middle; I had to context-switch to focus on some 
other priorities. I might be able to look into this again in a couple of months.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-20398][e2e] Migrate test_batch_sql.sh to Java e2e tests framework [flink]

2024-06-13 Thread via GitHub


affo commented on PR #24471:
URL: https://github.com/apache/flink/pull/24471#issuecomment-2165511898

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35451) Support `ALTER CATALOG COMMENT` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-35451:
-
Parent: (was: FLINK-34914)
Issue Type: New Feature  (was: Sub-task)

> Support `ALTER CATALOG COMMENT` syntax
> --
>
> Key: FLINK-35451
> URL: https://issues.apache.org/jira/browse/FLINK-35451
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Yubin Li
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: image-2024-05-26-02-11-30-070.png
>
>
> Set comment in the specified catalog. If the comment is already set in the 
> catalog, override the old value with the new one.
> !image-2024-05-26-02-11-30-070.png|width=575,height=415!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35451) Support `ALTER CATALOG COMMENT` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li closed FLINK-35451.

Resolution: Duplicate

> Support `ALTER CATALOG COMMENT` syntax
> --
>
> Key: FLINK-35451
> URL: https://issues.apache.org/jira/browse/FLINK-35451
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Yubin Li
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: image-2024-05-26-02-11-30-070.png
>
>
> Set comment in the specified catalog. If the comment is already set in the 
> catalog, override the old value with the new one.
> !image-2024-05-26-02-11-30-070.png|width=575,height=415!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35593) Apache Kubernetes Operator Docker image does not contain Apache LICENSE

2024-06-13 Thread Anupam Aggarwal (Jira)
Anupam Aggarwal created FLINK-35593:
---

 Summary: Apache Kubernetes Operator Docker image does not contain 
Apache LICENSE
 Key: FLINK-35593
 URL: https://issues.apache.org/jira/browse/FLINK-35593
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Affects Versions: 1.8.0
Reporter: Anupam Aggarwal


The Apache 
[LICENSE|https://github.com/apache/flink-kubernetes-operator/blob/main/LICENSE] 
is not bundled along with the Apache Flink Kubernetes Operator docker image.


{code:java}
❯ docker run -it  apache/flink-kubernetes-operator:1.8.0 bash
flink@cc372b31d067:/flink-kubernetes-operator$ ls -latr
total 104732
-rw-r--r-- 1 flink flink     40962 Mar 14 15:19 
flink-kubernetes-standalone-1.8.0.jar
-rw-r--r-- 1 flink flink 107055161 Mar 14 15:21 
flink-kubernetes-operator-1.8.0-shaded.jar
-rw-r--r-- 1 flink flink     62402 Mar 14 15:21 
flink-kubernetes-webhook-1.8.0-shaded.jar
-rw-r--r-- 1 flink flink     63740 Mar 14 15:21 NOTICE
drwxr-xr-x 2 flink flink      4096 Mar 14 15:21 licenses
drwxr-xr-x 1 root  root       4096 Mar 14 15:21 .
drwxr-xr-x 1 root  root       4096 Jun 13 12:49 .. {code}

The Apache Flink docker image by contrast bundles the license (LICENSE)
{code:java}
❯ docker run -it apache/flink:latest bash
sed: can't read /config.yaml: No such file or directory
lflink@24c2dff32a45:~$ ls -latr
total 224
-rw-r--r--  1 flink flink   1309 Mar  4 15:34 README.txt
drwxrwxr-x  2 flink flink   4096 Mar  4 15:34 log
-rw-r--r--  1 flink flink  11357 Mar  4 15:34 LICENSE
drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 lib
drwxrwxr-x  6 flink flink   4096 Mar  7 05:49 examples
drwxrwxr-x  1 flink flink   4096 Mar  7 05:49 conf
drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 bin
drwxrwxr-x 10 flink flink   4096 Mar  7 05:49 plugins
drwxrwxr-x  3 flink flink   4096 Mar  7 05:49 opt
-rw-rw-r--  1 flink flink 156327 Mar  7 05:49 NOTICE
drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 licenses
drwxr-xr-x  1 root  root    4096 Mar 19 05:01 ..
drwxr-xr-x  1 flink flink   4096 Mar 19 05:02 .
flink@24c2dff32a45:~$ {code}




 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34918) Support `ALTER CATALOG COMMENT` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34918:
-
Summary: Support `ALTER CATALOG COMMENT` syntax  (was: Introduce comment 
for CatalogStore & Support `ALTER CATALOG COMMENT` syntax)

> Support `ALTER CATALOG COMMENT` syntax
> --
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-06-13-18-01-34-910.png
>
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`; the 
> reasons are as follows.
> 1. For the sake of design consistency, we follow the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component. `CatalogDescriptor` includes 
> the name and attributes, both of which are used to describe the catalog, so 
> `comment` can be added smoothly.
> 2. We extend an existing class rather than add a new method to an existing 
> interface. In particular, the `Catalog` interface, as a core interface, is 
> used by a series of important components such as `CatalogFactory`, 
> `CatalogManager`, and `FactoryUtil`, and is implemented by a large number of 
> connectors such as JDBC, Paimon, and Hive. Adding methods to it would 
> greatly increase the implementation complexity and, more importantly, the 
> cost of iteration, maintenance, and verification.
>  
> Set comment in the specified catalog. If the comment is already set in the 
> catalog, override the old value with the new one.
> !image-2024-06-13-18-01-34-910.png|width=715,height=523!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34917) Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34917:
-
Summary: Introduce comment for CatalogStore & Support enhanced `CREATE 
CATALOG` syntax  (was: Support enhanced `CREATE CATALOG` syntax)

> Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax
> -
>
> Key: FLINK-34917
> URL: https://issues.apache.org/jira/browse/FLINK-34917
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
> Attachments: image-2024-03-22-18-31-59-632.png
>
>
> {{IF NOT EXISTS}}  clause: If the catalog already exists, nothing happens.
> {{COMMENT}} clause: An optional string literal. The description for the 
> catalog.
> NOTICE: we just need to introduce the '[IF NOT EXISTS]' and '[COMMENT]' 
> clauses to the 'create catalog' statement.
> !image-2024-03-22-18-31-59-632.png|width=795,height=87!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34917) Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34917:
-
Description: 
We propose to introduce a `getComment()` method in `CatalogDescriptor`; the 
reasons are as follows.

1. For the sake of design consistency, we follow the design of FLIP-295 [1], 
which introduced the `CatalogStore` component. `CatalogDescriptor` includes the 
name and attributes, both of which are used to describe the catalog, so 
`comment` can be added smoothly.

2. We extend an existing class rather than add a new method to an existing 
interface. In particular, the `Catalog` interface, as a core interface, is used 
by a series of important components such as `CatalogFactory`, `CatalogManager`, 
and `FactoryUtil`, and is implemented by a large number of connectors such as 
JDBC, Paimon, and Hive. Adding methods to it would greatly increase the 
implementation complexity and, more importantly, the cost of iteration, 
maintenance, and verification.

 

{{IF NOT EXISTS}}  clause: If the catalog already exists, nothing happens.

{{COMMENT}} clause: An optional string literal. The description for the catalog.

NOTICE: we just need to introduce the '[IF NOT EXISTS]' and '[COMMENT]' clauses 
to the 'create catalog' statement.

!image-2024-03-22-18-31-59-632.png|width=795,height=87!
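
For illustration, a sketch of the enhanced statement (the clause order follows 
the proposal; `generic_in_memory` is just a stand-in catalog type):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CreateCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // IF NOT EXISTS makes the statement a no-op if cat1 already exists;
        // COMMENT attaches a description to the catalog.
        tEnv.executeSql(
                "CREATE CATALOG IF NOT EXISTS cat1 "
                        + "COMMENT 'catalog for the ods layer' "
                        + "WITH ('type' = 'generic_in_memory')");
    }
}
{code}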

  was:
{{IF NOT EXISTS}}  clause: If the catalog already exists, nothing happens.

{{COMMENT}} clause: An optional string literal. The description for the catalog.

NOTICE: we just need to introduce the '[IF NOT EXISTS]' and '[COMMENT]' clauses 
to the 'create catalog' statement.

!image-2024-03-22-18-31-59-632.png|width=795,height=87!


> Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax
> -
>
> Key: FLINK-34917
> URL: https://issues.apache.org/jira/browse/FLINK-34917
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
> Attachments: image-2024-03-22-18-31-59-632.png
>
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`; the 
> reasons are as follows.
> 1. For the sake of design consistency, we follow the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component. `CatalogDescriptor` includes 
> the name and attributes, both of which are used to describe the catalog, so 
> `comment` can be added smoothly.
> 2. We extend an existing class rather than add a new method to an existing 
> interface. In particular, the `Catalog` interface, as a core interface, is 
> used by a series of important components such as `CatalogFactory`, 
> `CatalogManager`, and `FactoryUtil`, and is implemented by a large number of 
> connectors such as JDBC, Paimon, and Hive. Adding methods to it would 
> greatly increase the implementation complexity and, more importantly, the 
> cost of iteration, maintenance, and verification.
>  
> {{IF NOT EXISTS}}  clause: If the catalog already exists, nothing happens.
> {{COMMENT}} clause: An optional string literal. The description for the 
> catalog.
> NOTICE: we just need to introduce the '[IF NOT EXISTS]' and '[COMMENT]' 
> clauses to the 'create catalog' statement.
> !image-2024-03-22-18-31-59-632.png|width=795,height=87!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34918) Support `ALTER CATALOG COMMENT` syntax

2024-06-13 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-34918:
-
Description: 
Set comment in the specified catalog. If the comment is already set in the 
catalog, override the old value with the new one.

!image-2024-06-13-18-01-34-910.png|width=715,height=523!

  was:
We propose to introduce a `getComment()` method in `CatalogDescriptor`; the 
reasons are as follows.

1. For the sake of design consistency, we follow the design of FLIP-295 [1], 
which introduced the `CatalogStore` component. `CatalogDescriptor` includes the 
name and attributes, both of which are used to describe the catalog, so 
`comment` can be added smoothly.

2. We extend an existing class rather than add a new method to an existing 
interface. In particular, the `Catalog` interface, as a core interface, is used 
by a series of important components such as `CatalogFactory`, `CatalogManager`, 
and `FactoryUtil`, and is implemented by a large number of connectors such as 
JDBC, Paimon, and Hive. Adding methods to it would greatly increase the 
implementation complexity and, more importantly, the cost of iteration, 
maintenance, and verification.

 

Set comment in the specified catalog. If the comment is already set in the 
catalog, override the old value with the new one.

!image-2024-06-13-18-01-34-910.png|width=715,height=523!


> Support `ALTER CATALOG COMMENT` syntax
> --
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-06-13-18-01-34-910.png
>
>
> Set comment in the specified catalog. If the comment is already set in the 
> catalog, override the old value with the new one.
> !image-2024-06-13-18-01-34-910.png|width=715,height=523!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35594) Downscaling doesn't release TaskManagers.

2024-06-13 Thread Aviv Dozorets (Jira)
Aviv Dozorets created FLINK-35594:
-

 Summary: Downscaling doesn't release TaskManagers.
 Key: FLINK-35594
 URL: https://issues.apache.org/jira/browse/FLINK-35594
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: 1.18.1
 Environment: * Flink 1.18.1 (Java 11, Temurin).
 * Kubernetes Operator 1.8
 * Kubernetes version v1.28.9-eks-036c24b (AWS EKS).

 

Autoscaling configuration:
{code:java}
jobmanager.scheduler: adaptive
job.autoscaler.enabled: "true"
job.autoscaler.metrics.window: 15m
job.autoscaler.stabilization.interval: 15m
job.autoscaler.scaling.effectiveness.threshold: 0.2
job.autoscaler.target.utilization: "0.75"
job.autoscaler.target.utilization.boundary: "0.25"
job.autoscaler.metrics.busy-time.aggregator: "AVG"
job.autoscaler.restart.time-tracking.enabled: "true"{code}
Reporter: Aviv Dozorets
 Attachments: Screenshot 2024-06-10 at 12.50.37 PM.png

(Follow-up of Slack conversation on #troubleshooting channel).

Recently I've observed a behavior that should be improved:

A Flink DataStream job that runs with the autoscaler (backed by the Kubernetes 
operator) and the Adaptive Scheduler doesn't release a node (TaskManager) when 
scaling down. In my example, the job started with an initial parallelism of 64 
while having 4 TMs with 16 cores each (1:1 core:slot), and scaled down to 16.

My expectation: 1 TaskManager should be up and running.

Reality: all 4 initial TaskManagers are running, with multiple and unequal 
amounts of available slots.

 

I didn't find an existing configuration to change this behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35593) Apache Kubernetes Operator Docker image does not contain Apache LICENSE

2024-06-13 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-35593:
--

Assignee: Anupam Aggarwal

> Apache Kubernetes Operator Docker image does not contain Apache LICENSE
> ---
>
> Key: FLINK-35593
> URL: https://issues.apache.org/jira/browse/FLINK-35593
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: 1.8.0
>Reporter: Anupam Aggarwal
>Assignee: Anupam Aggarwal
>Priority: Minor
>
> The Apache 
> [LICENSE|https://github.com/apache/flink-kubernetes-operator/blob/main/LICENSE]
>  is not bundled along with the Apache Flink Kubernetes Operator docker image.
> {code:java}
> ❯ docker run -it  apache/flink-kubernetes-operator:1.8.0 bash
> flink@cc372b31d067:/flink-kubernetes-operator$ ls -latr
> total 104732
> -rw-r--r-- 1 flink flink     40962 Mar 14 15:19 
> flink-kubernetes-standalone-1.8.0.jar
> -rw-r--r-- 1 flink flink 107055161 Mar 14 15:21 
> flink-kubernetes-operator-1.8.0-shaded.jar
> -rw-r--r-- 1 flink flink     62402 Mar 14 15:21 
> flink-kubernetes-webhook-1.8.0-shaded.jar
> -rw-r--r-- 1 flink flink     63740 Mar 14 15:21 NOTICE
> drwxr-xr-x 2 flink flink      4096 Mar 14 15:21 licenses
> drwxr-xr-x 1 root  root       4096 Mar 14 15:21 .
> drwxr-xr-x 1 root  root       4096 Jun 13 12:49 .. {code}
> The Apache Flink docker image by contrast bundles the license (LICENSE)
> {code:java}
> ❯ docker run -it apache/flink:latest bash
> sed: can't read /config.yaml: No such file or directory
> lflink@24c2dff32a45:~$ ls -latr
> total 224
> -rw-r--r--  1 flink flink   1309 Mar  4 15:34 README.txt
> drwxrwxr-x  2 flink flink   4096 Mar  4 15:34 log
> -rw-r--r--  1 flink flink  11357 Mar  4 15:34 LICENSE
> drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 lib
> drwxrwxr-x  6 flink flink   4096 Mar  7 05:49 examples
> drwxrwxr-x  1 flink flink   4096 Mar  7 05:49 conf
> drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 bin
> drwxrwxr-x 10 flink flink   4096 Mar  7 05:49 plugins
> drwxrwxr-x  3 flink flink   4096 Mar  7 05:49 opt
> -rw-rw-r--  1 flink flink 156327 Mar  7 05:49 NOTICE
> drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 licenses
> drwxr-xr-x  1 root  root    4096 Mar 19 05:01 ..
> drwxr-xr-x  1 flink flink   4096 Mar 19 05:02 .
> flink@24c2dff32a45:~$ {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35593) Apache Kubernetes Operator Docker image does not contain Apache LICENSE

2024-06-13 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854751#comment-17854751
 ] 

Robert Metzger commented on FLINK-35593:


+1 to fix this. I'll assign you to the ticket.

> Apache Kubernetes Operator Docker image does not contain Apache LICENSE
> ---
>
> Key: FLINK-35593
> URL: https://issues.apache.org/jira/browse/FLINK-35593
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: 1.8.0
>Reporter: Anupam Aggarwal
>Priority: Minor
>
> The Apache 
> [LICENSE|https://github.com/apache/flink-kubernetes-operator/blob/main/LICENSE]
>  is not bundled along with the Apache Flink Kubernetes Operator docker image.
> {code:java}
> ❯ docker run -it  apache/flink-kubernetes-operator:1.8.0 bash
> flink@cc372b31d067:/flink-kubernetes-operator$ ls -latr
> total 104732
> -rw-r--r-- 1 flink flink     40962 Mar 14 15:19 
> flink-kubernetes-standalone-1.8.0.jar
> -rw-r--r-- 1 flink flink 107055161 Mar 14 15:21 
> flink-kubernetes-operator-1.8.0-shaded.jar
> -rw-r--r-- 1 flink flink     62402 Mar 14 15:21 
> flink-kubernetes-webhook-1.8.0-shaded.jar
> -rw-r--r-- 1 flink flink     63740 Mar 14 15:21 NOTICE
> drwxr-xr-x 2 flink flink      4096 Mar 14 15:21 licenses
> drwxr-xr-x 1 root  root       4096 Mar 14 15:21 .
> drwxr-xr-x 1 root  root       4096 Jun 13 12:49 .. {code}
> The Apache Flink docker image by contrast bundles the license (LICENSE)
> {code:java}
> ❯ docker run -it apache/flink:latest bash
> sed: can't read /config.yaml: No such file or directory
> lflink@24c2dff32a45:~$ ls -latr
> total 224
> -rw-r--r--  1 flink flink   1309 Mar  4 15:34 README.txt
> drwxrwxr-x  2 flink flink   4096 Mar  4 15:34 log
> -rw-r--r--  1 flink flink  11357 Mar  4 15:34 LICENSE
> drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 lib
> drwxrwxr-x  6 flink flink   4096 Mar  7 05:49 examples
> drwxrwxr-x  1 flink flink   4096 Mar  7 05:49 conf
> drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 bin
> drwxrwxr-x 10 flink flink   4096 Mar  7 05:49 plugins
> drwxrwxr-x  3 flink flink   4096 Mar  7 05:49 opt
> -rw-rw-r--  1 flink flink 156327 Mar  7 05:49 NOTICE
> drwxrwxr-x  2 flink flink   4096 Mar  7 05:49 licenses
> drwxr-xr-x  1 root  root    4096 Mar 19 05:01 ..
> drwxr-xr-x  1 flink flink   4096 Mar 19 05:02 .
> flink@24c2dff32a45:~$ {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support `ALTER CATALOG COMMENT` syntax [flink]

2024-06-13 Thread via GitHub


LadyForest commented on code in PR #24932:
URL: https://github.com/apache/flink/pull/24932#discussion_r1638176317


##
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlAlterCatalogComment.java:
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser.ddl;
+
+import org.apache.calcite.sql.SqlCharStringLiteral;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.parser.SqlParserPos;
+import org.apache.calcite.util.ImmutableNullableList;
+
+import java.util.List;
+
+import static java.util.Objects.requireNonNull;
+
+/** ALTER CATALOG catalog_name COMMENT 'comment'. */
+public class SqlAlterCatalogComment extends SqlAlterCatalog {
+
+private final SqlCharStringLiteral comment;
+
+public SqlAlterCatalogComment(
+SqlParserPos position, SqlIdentifier catalogName, 
SqlCharStringLiteral comment) {
+super(position, catalogName);
+this.comment = requireNonNull(comment, "comment cannot be null");
+}
+
+@Override
+public List<SqlNode> getOperandList() {
+return ImmutableNullableList.of(catalogName, comment);
+}
+
+public SqlCharStringLiteral getComment() {

Review Comment:
   I didn't see any reference to this method; remove it?



##
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl:
##
@@ -176,6 +176,15 @@ SqlAlterCatalog SqlAlterCatalog() :
catalogName,
propertyList);
 }
+|
+ <COMMENT> <QUOTED_STRING>
+{
+String p = SqlParserUtil.parseString(token.image);
+comment = SqlLiteral.createCharString(p, getPos());

Review Comment:
   Use `StringLiteral()` instead



##
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlAlterCatalogComment.java:
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser.ddl;
+
+import org.apache.calcite.sql.SqlCharStringLiteral;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.parser.SqlParserPos;
+import org.apache.calcite.util.ImmutableNullableList;
+
+import java.util.List;
+
+import static java.util.Objects.requireNonNull;
+
+/** ALTER CATALOG catalog_name COMMENT 'comment'. */
+public class SqlAlterCatalogComment extends SqlAlterCatalog {
+
+private final SqlCharStringLiteral comment;
+
+public SqlAlterCatalogComment(
+SqlParserPos position, SqlIdentifier catalogName, 
SqlCharStringLiteral comment) {
+super(position, catalogName);
+this.comment = requireNonNull(comment, "comment cannot be null");
+}
+
+@Override
+public List<SqlNode> getOperandList() {
+return ImmutableNullableList.of(catalogName, comment);
+}
+
+public SqlCharStringLiteral getComment() {
+return comment;
+}
+
+public String getCommentAsString() {
+return comment.getValueAs(String.class);
+}

Review Comment:
   ```suggestion
   public String getComment() {
   return comment.getValueAs(NlsString.class).getValue();
   }
   ```



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogDescriptor.ja

[jira] [Updated] (FLINK-35594) Downscaling doesn't release TaskManagers.

2024-06-13 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-35594:

Component/s: Runtime / Coordination
 (was: Kubernetes Operator)

> Downscaling doesn't release TaskManagers.
> -
>
> Key: FLINK-35594
> URL: https://issues.apache.org/jira/browse/FLINK-35594
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.1
> Environment: * Flink 1.18.1 (Java 11, Temurin).
>  * Kubernetes Operator 1.8
>  * Kubernetes version v1.28.9-eks-036c24b (AWS EKS).
>  
> Autoscaling configuration:
> {code:java}
> jobmanager.scheduler: adaptive
> job.autoscaler.enabled: "true"
> job.autoscaler.metrics.window: 15m
> job.autoscaler.stabilization.interval: 15m
> job.autoscaler.scaling.effectiveness.threshold: 0.2
> job.autoscaler.target.utilization: "0.75"
> job.autoscaler.target.utilization.boundary: "0.25"
> job.autoscaler.metrics.busy-time.aggregator: "AVG"
> job.autoscaler.restart.time-tracking.enabled: "true"{code}
>Reporter: Aviv Dozorets
>Priority: Major
> Attachments: Screenshot 2024-06-10 at 12.50.37 PM.png
>
>
> (Follow-up of Slack conversation on #troubleshooting channel).
> Recently I've observed a behavior that should be improved:
> A Flink DataStream job that runs with the autoscaler (backed by the 
> Kubernetes operator) and the Adaptive Scheduler doesn't release a node 
> (TaskManager) when scaling down. In my example, the job started with an 
> initial parallelism of 64 while having 4 TMs with 16 cores each (1:1 
> core:slot), and scaled down to 16.
> My expectation: 1 TaskManager should be up and running.
> Reality: all 4 initial TaskManagers are running, with multiple and unequal 
> amounts of available slots.
>  
> I didn't find an existing configuration to change this behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35594) Downscaling doesn't release TaskManagers.

2024-06-13 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854757#comment-17854757
 ] 

Rui Fan commented on FLINK-35594:
-

This Jira may be duplicated with 
https://issues.apache.org/jira/browse/FLINK-33977

> Downscaling doesn't release TaskManagers.
> -
>
> Key: FLINK-35594
> URL: https://issues.apache.org/jira/browse/FLINK-35594
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.1
> Environment: * Flink 1.18.1 (Java 11, Temurin).
>  * Kubernetes Operator 1.8
>  * Kubernetes version v1.28.9-eks-036c24b (AWS EKS).
>  
> Autoscaling configuration:
> {code:java}
> jobmanager.scheduler: adaptive
> job.autoscaler.enabled: "true"
> job.autoscaler.metrics.window: 15m
> job.autoscaler.stabilization.interval: 15m
> job.autoscaler.scaling.effectiveness.threshold: 0.2
> job.autoscaler.target.utilization: "0.75"
> job.autoscaler.target.utilization.boundary: "0.25"
> job.autoscaler.metrics.busy-time.aggregator: "AVG"
> job.autoscaler.restart.time-tracking.enabled: "true"{code}
>Reporter: Aviv Dozorets
>Priority: Major
> Attachments: Screenshot 2024-06-10 at 12.50.37 PM.png
>
>
> (Follow-up of Slack conversation on #troubleshooting channel).
> Recently I've observed a behavior that should be improved:
> A Flink DataStream job that runs with the autoscaler (backed by the 
> Kubernetes operator) and the Adaptive Scheduler doesn't release a node 
> (TaskManager) when scaling down. In my example, the job started with an 
> initial parallelism of 64 while having 4 TMs with 16 cores each (1:1 
> core:slot), and scaled down to 16.
> My expectation: 1 TaskManager should be up and running.
> Reality: all 4 initial TaskManagers are running, with multiple and unequal 
> amounts of available slots.
>  
> I didn't find an existing configuration to change this behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33130]reuse source and sink operator io metrics for task [flink]

2024-06-13 Thread via GitHub


xbthink commented on PR #23454:
URL: https://github.com/apache/flink/pull/23454#issuecomment-2165733120

   @littleeleventhwolf Can you post your code? I'll use your code to check it 
again


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-3.1][FLINK-35592] Fix MysqlDebeziumTimeConverter miss timezone convert to timestamp [flink-cdc]

2024-06-13 Thread via GitHub


PatrickRen merged PR #3380:
URL: https://github.com/apache/flink-cdc/pull/3380


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34172] Add support for altering a distribution via ALTER TABLE [flink]

2024-06-13 Thread via GitHub


twalthr merged PR #24886:
URL: https://github.com/apache/flink/pull/24886


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35585] Add documentation for distribution [flink]

2024-06-13 Thread via GitHub


twalthr commented on code in PR #24929:
URL: https://github.com/apache/flink/pull/24929#discussion_r1638286027


##
docs/content/docs/dev/table/sql/create.md:
##
@@ -181,10 +182,16 @@ CREATE TABLE [IF NOT EXISTS] 
[catalog_name.][db_name.]table_name
 
 <like_options>:
 {
-   { INCLUDING | EXCLUDING } { ALL | CONSTRAINTS | PARTITIONS }
+   { INCLUDING | EXCLUDING } { ALL | CONSTRAINTS | DISTRIBUTION | PARTITIONS }
  | { INCLUDING | EXCLUDING | OVERWRITING } { GENERATED | OPTIONS | WATERMARKS 
} 
 }[, ...]
 
+<distribution_definition>:

Review Comment:
   nit: rename to `<distribution>`, everything in this snippet is a definition



##
docs/content/docs/dev/table/sql/create.md:
##
@@ -465,6 +480,7 @@ You can control the merging behavior of:
 * GENERATED - computed columns
 * METADATA - metadata columns
 * OPTIONS - connector options that describe connector and format properties
+* DISTRIBUTION - distribution definition

Review Comment:
   Can you update the ALTER docs as well?



##
docs/content/docs/dev/table/sql/create.md:
##
@@ -406,6 +413,14 @@ Flink will assume correctness of the primary key by 
assuming that the columns nu
 
 Partition the created table by the specified columns. A directory is created 
for each partition if this table is used as a filesystem sink.
 
+### `DISTRIBUTED BY / DISTRIBUTED INTO`

Review Comment:
   Copy the text from `SupportsBucketing` including examples.
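
   For reference, an example of the kind that section usually carries 
(illustrative only: `my_bucketing_connector` is a placeholder, and the sink 
behind it must implement `SupportsBucketing` for this DDL to be accepted):

   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;

   public class DistributedBySketch {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           tEnv.executeSql(
                   "CREATE TABLE Orders (order_id BIGINT, product STRING) "
                           + "DISTRIBUTED BY HASH(order_id) INTO 4 BUCKETS "
                           + "WITH ('connector' = 'my_bucketing_connector')");
       }
   }
   ```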



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34172) Add support for altering a distribution via ALTER TABLE

2024-06-13 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-34172.

Resolution: Fixed

Fixed in master: d0f9bb40a614c3c52c7abc9e608391e4bca9a3ca

> Add support for altering a distribution via ALTER TABLE 
> 
>
> Key: FLINK-34172
> URL: https://issues.apache.org/jira/browse/FLINK-34172
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-35318) incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during predicate pushdown

2024-06-13 Thread linshangquan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854773#comment-17854773
 ] 

linshangquan edited comment on FLINK-35318 at 6/13/24 2:55 PM:
---

Thanks,  [~qingyue] , do you have time to help review this PR ?


was (Author: linshangquan):
Thanks,  [~qingyue] , Do you have time to help review this PR ?

> incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during 
> predicate pushdown
> -
>
> Key: FLINK-35318
> URL: https://issues.apache.org/jira/browse/FLINK-35318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.1
> Environment: flink version 1.18.1
> iceberg version 1.15.1
>Reporter: linshangquan
>Assignee: linshangquan
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-05-09-14-06-58-007.png, 
> image-2024-05-09-14-09-38-453.png, image-2024-05-09-14-11-38-476.png, 
> image-2024-05-09-14-22-14-417.png, image-2024-05-09-14-22-59-370.png, 
> image-2024-05-09-18-52-03-741.png, image-2024-05-09-18-52-28-584.png
>
>
> In our scenario, we have an Iceberg table that contains a column named 'time' 
> of the {{timestamptz}} data type. The table has 10 rows of data where the 
> 'time' value is {{'2024-04-30 07:00:00'}} expressed in the "Asia/Shanghai" 
> timezone.
> !image-2024-05-09-14-06-58-007.png!
>  
> We encountered a strange phenomenon when accessing the table using 
> Iceberg-flink.
> When the {{WHERE}} clause includes the {{time}} column, the results are 
> incorrect.
> ZoneId.{_}systemDefault{_}() = "Asia/Shanghai" 
> !image-2024-05-09-18-52-03-741.png!
> When there is no {{WHERE}} clause, the results are correct.
> !image-2024-05-09-18-52-28-584.png!
> During debugging, we found that when a {{WHERE}} clause is present, a 
> {{FilterPushDownSpec}} is generated, and this {{FilterPushDownSpec}} utilizes 
> {{RexNodeToExpressionConverter}} for translation.
> !image-2024-05-09-14-11-38-476.png!
> !image-2024-05-09-14-22-59-370.png!
> When {{RexNodeToExpressionConverter#visitLiteral}} encounters a 
> {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} type, it uses the specified timezone 
> "Asia/Shanghai" to convert the {{TimestampString}} value to an {{Instant}}. 
> However, the upstream {{TimestampString}} data has already been normalized to 
> the UTC timezone, so applying the local timezone here shifts the instant and 
> causes a mismatch.
> The question is whether the handling of {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} 
> data in {{RexNodeToExpressionConverter#visitLiteral}} is a bug, and whether 
> it should process the data in the UTC timezone instead.
>  
> Please help confirm if this is the issue, and if so, we can submit a patch to 
> fix it.
>  
>  
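
A self-contained sketch of the suspected mismatch (timestamp values assumed 
for illustration):

{code:java}
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TimezonePushdownSketch {
    public static void main(String[] args) {
        // The upstream value, already normalized to UTC:
        // 2024-04-29 23:00:00 UTC == 2024-04-30 07:00:00 Asia/Shanghai.
        LocalDateTime utcValue =
                LocalDateTime.parse(
                        "2024-04-29 23:00:00",
                        DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));

        Instant correct = utcValue.atZone(ZoneId.of("UTC")).toInstant();
        Instant wrong = utcValue.atZone(ZoneId.of("Asia/Shanghai")).toInstant();

        // Prints two instants 8 hours apart; a predicate pushed down with
        // `wrong` no longer matches rows written relative to `correct`.
        System.out.println(correct + " vs " + wrong);
    }
}
{code}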



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-26951) Add HASH supported in SQL & Table API

2024-06-13 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-26951.
---
Resolution: Won't Do

> Add HASH supported in SQL & Table API
> -
>
> Key: FLINK-26951
> URL: https://issues.apache.org/jira/browse/FLINK-26951
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.20.0
>
>
> Returns a hash value of the arguments.
> Syntax:
> {code:java}
> hash(expr1, ...) {code}
> Arguments:
>  * {{{}exprN{}}}: An expression of any type.
> Returns:
> An INTEGER.
> Examples:
> {code:java}
> > SELECT hash('Flink', array(123), 2);
>  -1321691492 {code}
> See more:
>  * [Spark|https://spark.apache.org/docs/latest/api/sql/index.html#hash]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-06-13 Thread via GitHub


caicancai commented on code in PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1638395305


##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -93,16 +90,18 @@ spec:
 ```
 
 {{< hint info >}}
-When using the operator with Flink native Kubernetes integration, please refer 
to [pod template field precedence](
-https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink).
+当使用具有 Flink 原生 Kubernetes 集成的 operator 时,请参考 [pod template 字段优先级](
+https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink)。
 {{< /hint >}}
 
+
 ## Array Merging Behaviour
 
-When layering pod templates (defining both a top level and jobmanager specific 
podtemplate for example) the corresponding yamls are merged together.
+
+
+当分层 pod templates(例如同时定义顶级和任务管理器特定的 pod 模板)时,相应的 yaml 会合并在一起。
 
-The default behaviour of the pod template mechanism is to merge array arrays 
by merging the objects in the respective array positions.
-This requires that containers in the podTemplates are defined in the same 
order otherwise the results may be undefined.
+Pod 模板机制的默认行为是通过合并相应数组位置的对象合并 json 类型的数组。
 
 Default behaviour (merge by position):

Review Comment:
   fix



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35022][Connector/DynamoDB] Add TypeInformed DDB Element Converter as default element converter [flink-connector-aws]

2024-06-13 Thread via GitHub


vahmed-hamdy commented on code in PR #136:
URL: 
https://github.com/apache/flink-connector-aws/pull/136#discussion_r1638374818


##
flink-connector-aws/flink-connector-dynamodb/src/main/java/org/apache/flink/connector/dynamodb/sink/DynamoDbTypeInformedElementConverter.java:
##
@@ -0,0 +1,380 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.dynamodb.sink;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.typeinfo.BasicArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.NumericTypeInfo;
+import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.CompositeType;
+import org.apache.flink.api.connector.sink2.SinkWriter;
+import org.apache.flink.api.java.tuple.Tuple;
+import org.apache.flink.api.java.typeutils.ObjectArrayTypeInfo;
+import org.apache.flink.api.java.typeutils.PojoTypeInfo;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.api.java.typeutils.TupleTypeInfo;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import 
org.apache.flink.connector.dynamodb.table.converter.ArrayAttributeConverterProvider;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.FlinkRuntimeException;
+import org.apache.flink.util.Preconditions;
+
+import software.amazon.awssdk.core.SdkBytes;
+import software.amazon.awssdk.enhanced.dynamodb.AttributeConverter;
+import software.amazon.awssdk.enhanced.dynamodb.AttributeConverterProvider;
+import software.amazon.awssdk.enhanced.dynamodb.AttributeValueType;
+import software.amazon.awssdk.enhanced.dynamodb.EnhancedType;
+import software.amazon.awssdk.enhanced.dynamodb.TableSchema;
+import 
software.amazon.awssdk.enhanced.dynamodb.internal.mapper.BeanAttributeGetter;
+import software.amazon.awssdk.enhanced.dynamodb.mapper.StaticTableSchema;
+import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
+
+import java.beans.BeanInfo;
+import java.beans.IntrospectionException;
+import java.beans.Introspector;
+import java.beans.PropertyDescriptor;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.function.Function;
+
+/**
+ * A {@link ElementConverter} that converts an element to a {@link 
DynamoDbWriteRequest} using
+ * TypeInformation provided.
+ */
+@PublicEvolving
+public class DynamoDbTypeInformedElementConverter<T>
+implements ElementConverter<T, DynamoDbWriteRequest> {
+
+private final CompositeType<T> typeInfo;
+private final boolean ignoreNulls;
+private final TableSchema<T> tableSchema;
+
+/**
+ * Creates a {@link DynamoDbTypeInformedElementConverter} that converts an 
element to a {@link
+ * DynamoDbWriteRequest} using the provided {@link CompositeType}. Usage: 
{@code new
+ * 
DynamoDbTypeInformedElementConverter<>(TypeInformation.of(MyPojoClass.class))}
+ *
+ * @param typeInfo The {@link CompositeType} that provides the type 
information for the element.
+ */
+public DynamoDbTypeInformedElementConverter(CompositeType<T> typeInfo) {
+this(typeInfo, true);
+}
+
+public DynamoDbTypeInformedElementConverter(CompositeType<T> typeInfo, 
boolean ignoreNulls) {
+
+try {
+this.typeInfo = typeInfo;
+this.ignoreNulls = ignoreNulls;
+this.tableSchema = createTableSchema(typeInfo);
+} catch (IntrospectionException | IllegalStateException | 
IllegalArgumentException e) {
+throw new FlinkRuntimeException("Failed to extract DynamoDb table 
schema", e);
+}
+}
+
+@Override
+public DynamoDbWriteRequest apply(T input, SinkWriter.Context context) {
+Preconditions.checkNotNull(tableSchema, "TableSchema is not 
initialized");
+try {
+return DynamoDbWriteRequest.builder()
+.setType(DynamoDbWriteRequestType.PUT)
+.setItem(tableSchema.itemToMap(input, ignoreNulls))
+.build();
+} catch (ClassCastEx

[jira] [Commented] (FLINK-26951) Add HASH supported in SQL & Table API

2024-06-13 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854769#comment-17854769
 ] 

lincoln lee commented on FLINK-26951:
-

[~kartikeypant] Thank you again for your willingness to contribute to the 
community!

I'll close this Jira based on the above conclusion.

> Add HASH supported in SQL & Table API
> -
>
> Key: FLINK-26951
> URL: https://issues.apache.org/jira/browse/FLINK-26951
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.20.0
>
>
> Returns a hash value of the arguments.
> Syntax:
> {code:java}
> hash(expr1, ...) {code}
> Arguments:
>  * {{{}exprN{}}}: An expression of any type.
> Returns:
> An INTEGER.
> Examples:
> {code:java}
> > SELECT hash('Flink', array(123), 2);
>  -1321691492 {code}
> See more:
>  * [Spark|https://spark.apache.org/docs/latest/api/sql/index.html#hash]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35022][Connector/DynamoDB] Add TypeInformed DDB Element Converter as default element converter [flink-connector-aws]

2024-06-13 Thread via GitHub


vahmed-hamdy commented on PR #136:
URL: 
https://github.com/apache/flink-connector-aws/pull/136#issuecomment-2165938144

   @hlteoh37 Thanks for the feedback. The reason we took this approach is that 
we are trying to couple it as closely as possible with Flink's TypeInformation 
class; using `AttributeConverterProvider` is closer to DDB's `EnhancedType` 
than to Flink's.
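
   A minimal usage sketch based on the constructor shown in the PR (`MyPojo` 
is a made-up type; `PojoTypeInfo` is a `CompositeType`, hence the cast):

   ```java
   import org.apache.flink.api.common.typeinfo.TypeInformation;
   import org.apache.flink.api.common.typeutils.CompositeType;
   import org.apache.flink.connector.dynamodb.sink.DynamoDbTypeInformedElementConverter;

   public class ConverterUsageSketch {
       /** Made-up POJO: public fields + no-arg constructor => analyzed as PojoTypeInfo. */
       public static class MyPojo {
           public String id;
           public long amount;
       }

       public static void main(String[] args) {
           @SuppressWarnings("unchecked")
           CompositeType<MyPojo> typeInfo =
                   (CompositeType<MyPojo>) TypeInformation.of(MyPojo.class);
           DynamoDbTypeInformedElementConverter<MyPojo> converter =
                   new DynamoDbTypeInformedElementConverter<>(typeInfo);
       }
   }
   ```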


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35042) Streaming File Sink s3 end-to-end test failed as TM lost

2024-06-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854789#comment-17854789
 ] 

Matthias Pohl commented on FLINK-35042:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60237&view=logs&j=ef799394-2d67-5ff4-b2e5-410b80c9c0af&t=9e5768bc-daae-5f5f-1861-e58617922c7a&l=9817

> Streaming File Sink s3 end-to-end test failed as TM lost
> 
>
> Key: FLINK-35042
> URL: https://issues.apache.org/jira/browse/FLINK-35042
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58782&view=logs&j=fb37c667-81b7-5c22-dd91-846535e99a97&t=011e961e-597c-5c96-04fe-7941c8b83f23&l=14344
> FAIL 'Streaming File Sink s3 end-to-end test' failed after 15 minutes and 20 
> seconds! Test exited with exit code 1
> I have checked the JM log, it seems that a taskmanager is no longer reachable:
> {code:java}
> 2024-04-08T01:12:04.3922210Z Apr 08 01:12:04 2024-04-08 00:58:15,517 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Sink: 
> Unnamed (4/4) 
> (14b44f534745ffb2f1ef03fca34f7f0d_0a448493b4782967b150582570326227_3_0) 
> switched from RUNNING to FAILED on localhost:44987-47f5af @ localhost 
> (dataPort=34489).
> 2024-04-08T01:12:04.3924522Z Apr 08 01:12:04 
> org.apache.flink.runtime.jobmaster.JobMasterException: TaskManager with id 
> localhost:44987-47f5af is no longer reachable.
> 2024-04-08T01:12:04.3925421Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyTargetUnreachable(JobMaster.java:1511)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3926185Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.DefaultHeartbeatMonitor.reportHeartbeatRpcFailure(DefaultHeartbeatMonitor.java:126)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3926925Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.runIfHeartbeatMonitorExists(HeartbeatManagerImpl.java:275)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3929898Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.reportHeartbeatTargetUnreachable(HeartbeatManagerImpl.java:267)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3930692Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.handleHeartbeatRpcFailure(HeartbeatManagerImpl.java:262)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3931442Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.lambda$handleHeartbeatRpc$0(HeartbeatManagerImpl.java:248)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3931917Z Apr 08 01:12:04  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>  ~[?:1.8.0_402]
> 2024-04-08T01:12:04.3934759Z Apr 08 01:12:04  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>  ~[?:1.8.0_402]
> 2024-04-08T01:12:04.3935252Z Apr 08 01:12:04  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
>  ~[?:1.8.0_402]
> 2024-04-08T01:12:04.3935989Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRunAsync$4(PekkoRpcActor.java:460)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3936731Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3938103Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRunAsync(PekkoRpcActor.java:460)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3942549Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:225)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3945371Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:88)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3946244Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3946960Z Apr 08 01:12:04  at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) 
> [flin

[jira] [Updated] (FLINK-26951) Add HASH supported in SQL & Table API

2024-06-13 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-26951:

Fix Version/s: (was: 1.20.0)

> Add HASH supported in SQL & Table API
> -
>
> Key: FLINK-26951
> URL: https://issues.apache.org/jira/browse/FLINK-26951
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>
> Returns a hash value of the arguments.
> Syntax:
> {code:java}
> hash(expr1, ...) {code}
> Arguments:
>  * {{{}exprN{}}}: An expression of any type.
> Returns:
> An INTEGER.
> Examples:
> {code:java}
> > SELECT hash('Flink', array(123), 2);
>  -1321691492 {code}
> See more:
>  * [Spark|https://spark.apache.org/docs/latest/api/sql/index.html#hash]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35167][cdc-connector] Introduce MaxCompute pipeline DataSink [flink-cdc]

2024-06-13 Thread via GitHub


lvyanquan commented on code in PR #3254:
URL: https://github.com/apache/flink-cdc/pull/3254#discussion_r1635846252


##
docs/content.zh/docs/connectors/maxcompute.md:
##
@@ -0,0 +1,342 @@
+---
+title: "MaxCompute"
+weight: 7
+type: docs
+aliases:
+  - /connectors/maxcompute
+---
+
+
+
+# MaxCompute Connector
+
+MaxCompute Pipeline 连接器可以用作 Pipeline 的 *Data 
Sink*,将数据写入[MaxCompute](https://www.aliyun.com/product/odps)。
+本文档介绍如何设置 MaxCompute Pipeline 连接器。
+
+## 连接器的功能
+
+* 自动建表
+* 表结构变更同步
+* 数据实时同步
+
+## 示例
+
+从 MySQL 读取数据同步到 MaxCompute 的 Pipeline 可以定义如下:
+
+```yaml
+source:
+  type: mysql
+  name: MySQL Source
+  hostname: 127.0.0.1
+  port: 3306
+  username: admin
+  password: pass
+  tables: adb.\.*, bdb.user_table_[0-9]+, [app|web].order_\.*
+  server-id: 5401-5404
+
+sink:
+  type: maxcompute
+  name: MaxCompute Sink
+  accessId: ak
+  accessKey: sk
+  endpoint: endpoint
+  project: flink_cdc
+  bucketSize: 8
+
+pipeline:
+  name: MySQL to MaxCompute Pipeline
+  parallelism: 2
+```
+
+## 连接器配置项
+
+<table>
+    <thead>
+      <tr>
+        <th>Option</th>
+        <th>Required</th>
+        <th>Default</th>
+        <th>Type</th>
+        <th>Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>type</td>
+      <td>required</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>Specifies the connector to use; this must be set to 'maxcompute'.</td>
+    </tr>
+    <tr>
+      <td>name</td>
+      <td>optional</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>The name of the sink.</td>
+    </tr>
+    <tr>
+      <td>accessId</td>
+      <td>required</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>The AccessKey ID of an Alibaba Cloud account or RAM user. You can obtain it on the <a href="https://ram.console.aliyun.com/manage/ak">AccessKey management page</a>.</td>
+    </tr>
+    <tr>
+      <td>accessKey</td>
+      <td>required</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>The AccessKey Secret corresponding to the AccessKey ID. You can obtain it on the <a href="https://ram.console.aliyun.com/manage/ak">AccessKey management page</a>.</td>
+    </tr>
+    <tr>
+      <td>endpoint</td>
+      <td>required</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>The connection endpoint of the MaxCompute service. The endpoint depends on the region chosen when the MaxCompute project was created and on the network connection type; for the endpoint of each region and network, see <a href="https://help.aliyun.com/zh/maxcompute/user-guide/endpoints">Endpoints</a>.</td>
+    </tr>
+    <tr>
+      <td>project</td>
+      <td>required</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>The name of the MaxCompute project. You can log in to the <a href="https://maxcompute.console.aliyun.com/">MaxCompute console</a> and find it on the Workspace &gt; Project Management page.</td>
+    </tr>
+    <tr>
+      <td>tunnelEndpoint</td>
+      <td>optional</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>The connection endpoint of the MaxCompute Tunnel service. This is usually routed automatically based on the region of the specified project; set it only in special network environments, such as behind a proxy.</td>
+    </tr>
+    <tr>
+      <td>quotaName</td>
+      <td>optional</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>The name of the exclusive resource group used for MaxCompute data transfer. If not specified, the shared resource group is used. For details, see <a href="https://help.aliyun.com/zh/maxcompute/user-guide/purchase-and-use-exclusive-resource-groups-for-dts">Use exclusive resource groups for MaxCompute</a>.</td>
+    </tr>
+    <tr>
+      <td>stsToken</td>
+      <td>optional</td>
+      <td>(none)</td>
+      <td>String</td>
+      <td>Required when authenticating with a short-lived STS token issued for a RAM role.</td>
+    </tr>
+    <tr>
+      <td>bucketsNum</td>
+      <td>optional</td>
+      <td>16</td>
+      <td>Integer</td>
+      <td>The number of buckets used when automatically creating MaxCompute transaction tables. For usage, refer to

Review Comment:
   Invalid link.



##
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-maxcompute/src/main/java/org/apache/flink/cdc/connectors/maxcompute/sink/MaxComputeEventSink.java:
##
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.connectors.maxcompute.sink;
+
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.sink2.SinkWriter;
+import org.apache.flink.cdc.common.event.Event;
+import org.apache.flink.cdc.connectors.maxcompute.common.Constant;
+import 
org.apache.flink.cdc.connectors.maxcompute.coordinator.SessionManageCoordinatedOperatorFactory;
+import 
org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeExecutionOptions;
+import org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeOptions;
+import 
org.apache.flink.cdc.connectors.maxcompute.options.MaxComputeWriteOptions;
+import org.apache.flink.cdc.runtime.typeutils.EventTypeInfo;
+import org.apache.flink.streaming.api.connector.sink2.WithPreWriteTopology;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+
+import java.io.IOException;
+
+/** A {@link Sink} of {@link Event} to MaxCompute. */
+public class MaxComputeEventSink impleme

[jira] [Comment Edited] (FLINK-35042) Streaming File Sink s3 end-to-end test failed as TM lost

2024-06-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854789#comment-17854789
 ] 

Matthias Pohl edited comment on FLINK-35042 at 6/13/24 4:16 PM:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60237&view=logs&j=ef799394-2d67-5ff4-b2e5-410b80c9c0af&t=9e5768bc-daae-5f5f-1861-e58617922c7a&l=9817]

For that one, it looks like the test never reached the expected number of 
processed values:
{code:java}
Jun 13 13:04:25 Waiting for Dispatcher REST endpoint to come up...
Jun 13 13:04:26 Dispatcher REST endpoint is up.
Jun 13 13:04:28 [INFO] 1 instance(s) of taskexecutor are already running on 
fv-az209-180.
Jun 13 13:04:28 Starting taskexecutor daemon on host fv-az209-180.
Jun 13 13:04:32 [INFO] 2 instance(s) of taskexecutor are already running on 
fv-az209-180.
Jun 13 13:04:32 Starting taskexecutor daemon on host fv-az209-180.
Jun 13 13:04:37 [INFO] 3 instance(s) of taskexecutor are already running on 
fv-az209-180.
Jun 13 13:04:37 Starting taskexecutor daemon on host fv-az209-180.
Jun 13 13:04:37 Submitting job.
Jun 13 13:04:57 Job (be9bc06a08a4c0fc3bf2c9e1c92219d4) is running.
Jun 13 13:04:57 Waiting for job (be9bc06a08a4c0fc3bf2c9e1c92219d4) to have at 
least 3 completed checkpoints ...
Jun 13 13:05:06 Killing TM
Jun 13 13:05:06 TaskManager 122377 killed.
Jun 13 13:05:06 Starting TM
Jun 13 13:05:08 [INFO] 3 instance(s) of taskexecutor are already running on 
fv-az209-180.
Jun 13 13:05:08 Starting taskexecutor daemon on host fv-az209-180.
Jun 13 13:05:08 Waiting for restart to happen
Jun 13 13:05:08 Still waiting for restarts. Expected: 1 Current: 0
Jun 13 13:05:13 Still waiting for restarts. Expected: 1 Current: 0
Jun 13 13:05:18 Still waiting for restarts. Expected: 1 Current: 0
Jun 13 13:05:23 Killing 2 TMs
Jun 13 13:05:24 TaskManager 121771 killed.
Jun 13 13:05:24 TaskManager 122908 killed.
Jun 13 13:05:24 Starting 2 TMs
Jun 13 13:05:26 [INFO] 2 instance(s) of taskexecutor are already running on 
fv-az209-180.
Jun 13 13:05:26 Starting taskexecutor daemon on host fv-az209-180.
Jun 13 13:05:31 [INFO] 3 instance(s) of taskexecutor are already running on 
fv-az209-180.
Jun 13 13:05:31 Starting taskexecutor daemon on host fv-az209-180.
Jun 13 13:05:31 Waiting for restart to happen
Jun 13 13:05:31 Still waiting for restarts. Expected: 2 Current: 1
Jun 13 13:05:36 Still waiting for restarts. Expected: 2 Current: 1
Jun 13 13:05:41 Waiting until all values have been produced
Jun 13 13:05:43 Number of produced values 0/6
[...] {code}


was (Author: mapohl):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60237&view=logs&j=ef799394-2d67-5ff4-b2e5-410b80c9c0af&t=9e5768bc-daae-5f5f-1861-e58617922c7a&l=9817

> Streaming File Sink s3 end-to-end test failed as TM lost
> 
>
> Key: FLINK-35042
> URL: https://issues.apache.org/jira/browse/FLINK-35042
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58782&view=logs&j=fb37c667-81b7-5c22-dd91-846535e99a97&t=011e961e-597c-5c96-04fe-7941c8b83f23&l=14344
> FAIL 'Streaming File Sink s3 end-to-end test' failed after 15 minutes and 20 
> seconds! Test exited with exit code 1
> I have checked the JM log; it seems that a TaskManager is no longer reachable:
> {code:java}
> 2024-04-08T01:12:04.3922210Z Apr 08 01:12:04 2024-04-08 00:58:15,517 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Sink: 
> Unnamed (4/4) 
> (14b44f534745ffb2f1ef03fca34f7f0d_0a448493b4782967b150582570326227_3_0) 
> switched from RUNNING to FAILED on localhost:44987-47f5af @ localhost 
> (dataPort=34489).
> 2024-04-08T01:12:04.3924522Z Apr 08 01:12:04 
> org.apache.flink.runtime.jobmaster.JobMasterException: TaskManager with id 
> localhost:44987-47f5af is no longer reachable.
> 2024-04-08T01:12:04.3925421Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyTargetUnreachable(JobMaster.java:1511)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3926185Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.DefaultHeartbeatMonitor.reportHeartbeatRpcFailure(DefaultHeartbeatMonitor.java:126)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3926925Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.runIfHeartbeatMonitorExists(HeartbeatManagerImpl.java:275)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3929898Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.reportHeartbeatTargetUnreachable(HeartbeatManagerImpl.java:267

[jira] [Commented] (FLINK-35042) Streaming File Sink s3 end-to-end test failed as TM lost

2024-06-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854801#comment-17854801
 ] 

Matthias Pohl commented on FLINK-35042:
---

I'm linking FLINK-34150 because we refactored the test to rely on Minio rather 
than the AWS S3 backend.

> Streaming File Sink s3 end-to-end test failed as TM lost
> 
>
> Key: FLINK-35042
> URL: https://issues.apache.org/jira/browse/FLINK-35042
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58782&view=logs&j=fb37c667-81b7-5c22-dd91-846535e99a97&t=011e961e-597c-5c96-04fe-7941c8b83f23&l=14344
> FAIL 'Streaming File Sink s3 end-to-end test' failed after 15 minutes and 20 
> seconds! Test exited with exit code 1
> I have checked the JM log; it seems that a TaskManager is no longer reachable:
> {code:java}
> 2024-04-08T01:12:04.3922210Z Apr 08 01:12:04 2024-04-08 00:58:15,517 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Sink: 
> Unnamed (4/4) 
> (14b44f534745ffb2f1ef03fca34f7f0d_0a448493b4782967b150582570326227_3_0) 
> switched from RUNNING to FAILED on localhost:44987-47f5af @ localhost 
> (dataPort=34489).
> 2024-04-08T01:12:04.3924522Z Apr 08 01:12:04 
> org.apache.flink.runtime.jobmaster.JobMasterException: TaskManager with id 
> localhost:44987-47f5af is no longer reachable.
> 2024-04-08T01:12:04.3925421Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyTargetUnreachable(JobMaster.java:1511)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3926185Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.DefaultHeartbeatMonitor.reportHeartbeatRpcFailure(DefaultHeartbeatMonitor.java:126)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3926925Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.runIfHeartbeatMonitorExists(HeartbeatManagerImpl.java:275)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3929898Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.reportHeartbeatTargetUnreachable(HeartbeatManagerImpl.java:267)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3930692Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.handleHeartbeatRpcFailure(HeartbeatManagerImpl.java:262)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3931442Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.lambda$handleHeartbeatRpc$0(HeartbeatManagerImpl.java:248)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3931917Z Apr 08 01:12:04  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>  ~[?:1.8.0_402]
> 2024-04-08T01:12:04.3934759Z Apr 08 01:12:04  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>  ~[?:1.8.0_402]
> 2024-04-08T01:12:04.3935252Z Apr 08 01:12:04  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
>  ~[?:1.8.0_402]
> 2024-04-08T01:12:04.3935989Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRunAsync$4(PekkoRpcActor.java:460)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3936731Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3938103Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRunAsync(PekkoRpcActor.java:460)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3942549Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:225)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3945371Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:88)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3946244Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
>  ~[flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3946960Z Apr 08 01:12:04  at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) 
> [flink-rpc-akka9681a48a-ca1a-45b0-bb71-4bdb5d2aed93.jar:1.20-SNAPSHOT]
> 202

[jira] [Commented] (FLINK-35318) incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during predicate pushdown

2024-06-13 Thread linshangquan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854773#comment-17854773
 ] 

linshangquan commented on FLINK-35318:
--

Thanks, [~qingyue]. Do you have time to help review this PR?

> incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during 
> predicate pushdown
> -
>
> Key: FLINK-35318
> URL: https://issues.apache.org/jira/browse/FLINK-35318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.1
> Environment: flink version 1.18.1
> iceberg version 1.15.1
>Reporter: linshangquan
>Assignee: linshangquan
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-05-09-14-06-58-007.png, 
> image-2024-05-09-14-09-38-453.png, image-2024-05-09-14-11-38-476.png, 
> image-2024-05-09-14-22-14-417.png, image-2024-05-09-14-22-59-370.png, 
> image-2024-05-09-18-52-03-741.png, image-2024-05-09-18-52-28-584.png
>
>
> In our scenario, we have an Iceberg table that contains a column named 'time' 
> of the {{timestamptz}} data type. This column has 10 rows of data where the 
> 'time' value is {{'2024-04-30 07:00:00'}} expressed in the "Asia/Shanghai" 
> timezone.
> !image-2024-05-09-14-06-58-007.png!
>  
> We encountered a strange phenomenon when accessing the table using 
> Iceberg-flink.
> When the {{WHERE}} clause includes the {{time}} column, the results are 
> incorrect.
> ZoneId.{_}systemDefault{_}() = "Asia/Shanghai" 
> !image-2024-05-09-18-52-03-741.png!
> When there is no {{WHERE}} clause, the results are correct.
> !image-2024-05-09-18-52-28-584.png!
> During debugging, we found that when a {{WHERE}} clause is present, a 
> {{FilterPushDownSpec}} is generated, and this {{FilterPushDownSpec}} utilizes 
> {{RexNodeToExpressionConverter}} for translation.
> !image-2024-05-09-14-11-38-476.png!
> !image-2024-05-09-14-22-59-370.png!
> When {{RexNodeToExpressionConverter#visitLiteral}} encounters a 
> {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} type, it uses the specified timezone 
> "Asia/Shanghai" to convert the {{TimestampString}} type to an {{Instant}} 
> type. However, the upstream {{TimestampString}} data has already been 
> processed in UTC timezone. By applying the local timezone processing here, an 
> error occurs due to the mismatch in timezones.
> We wonder whether the handling of {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} data in 
> {{RexNodeToExpressionConverter#visitLiteral}} is a bug and whether it should 
> process the data in the UTC timezone.
>  
> Please help confirm if this is the issue, and if so, we can submit a patch to 
> fix it.
>  
>  
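A self-contained sketch of the mismatch described above (illustrative only, not 
the actual Flink code; the names and the fix direction are assumptions pending 
review):
{code:java}
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimestampLiteralSketch {

    private static final DateTimeFormatter F =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Current (buggy) behavior: the literal's wall-clock text, which is
    // already UTC-based upstream, is shifted again by the session zone.
    static Instant convertWithSessionZone(String literal, ZoneId sessionZone) {
        return LocalDateTime.parse(literal, F).atZone(sessionZone).toInstant();
    }

    // Proposed behavior: interpret the literal in UTC, matching how the
    // TimestampString was produced.
    static Instant convertWithUtc(String literal) {
        return LocalDateTime.parse(literal, F).toInstant(ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        String literal = "2024-04-30 07:00:00";
        // Prints 2024-04-29T23:00:00Z -- 8 hours off, so the pushed-down
        // predicate no longer matches the stored rows:
        System.out.println(convertWithSessionZone(literal, ZoneId.of("Asia/Shanghai")));
        // Prints 2024-04-30T07:00:00Z -- consistent with the UTC-based data:
        System.out.println(convertWithUtc(literal));
    }
}
{code}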



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35042) Streaming File Sink s3 end-to-end test failed as TM lost

2024-06-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854800#comment-17854800
 ] 

Matthias Pohl commented on FLINK-35042:
---

This is different from what [~Weijie Guo] observed in his build, where the job 
never observes the expected 2 TM restarts:
{code:java}
Apr 08 00:57:27 Submitting job.
Apr 08 00:57:39 Job (d0bec02a7136e671f764bba2938933db) is not yet running.
Apr 08 00:57:49 Job (d0bec02a7136e671f764bba2938933db) is running.
Apr 08 00:57:49 Waiting for job (d0bec02a7136e671f764bba2938933db) to have at 
least 3 completed checkpoints ...
Apr 08 00:58:03 Killing TM
Apr 08 00:58:04 TaskManager 138601 killed.
Apr 08 00:58:04 Starting TM
Apr 08 00:58:06 [INFO] 3 instance(s) of taskexecutor are already running on 
fv-az68-869.
Apr 08 00:58:06 Starting taskexecutor daemon on host fv-az68-869.
Apr 08 00:58:06 Waiting for restart to happen
Apr 08 00:58:06 Still waiting for restarts. Expected: 1 Current: 0
Apr 08 00:58:11 Still waiting for restarts. Expected: 1 Current: 0
Apr 08 00:58:16 Still waiting for restarts. Expected: 1 Current: 0
Apr 08 00:58:21 Killing 2 TMs
Apr 08 00:58:21 TaskManager 141400 killed.
Apr 08 00:58:21 TaskManager 139144 killed.
Apr 08 00:58:21 Starting 2 TMs
Apr 08 00:58:24 [INFO] 2 instance(s) of taskexecutor are already running on 
fv-az68-869.
Apr 08 00:58:24 Starting taskexecutor daemon on host fv-az68-869.
Apr 08 00:58:29 [INFO] 3 instance(s) of taskexecutor are already running on 
fv-az68-869.
Apr 08 00:58:29 Starting taskexecutor daemon on host fv-az68-869.
Apr 08 00:58:29 Waiting for restart to happen
Apr 08 00:58:29 Still waiting for restarts. Expected: 2 Current: 1
Apr 08 00:58:34 Still waiting for restarts. Expected: 2 Current: 1
Apr 08 00:58:39 Still waiting for restarts. Expected: 2 Current: 1
[...]
Apr 08 01:11:56 Still waiting for restarts. Expected: 2 Current: 1
Apr 08 01:12:01 Still waiting for restarts. Expected: 2 Current: 1
Apr 08 01:12:04 Test (pid: 136749) did not finish after 900 seconds.{code}

> Streaming File Sink s3 end-to-end test failed as TM lost
> 
>
> Key: FLINK-35042
> URL: https://issues.apache.org/jira/browse/FLINK-35042
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58782&view=logs&j=fb37c667-81b7-5c22-dd91-846535e99a97&t=011e961e-597c-5c96-04fe-7941c8b83f23&l=14344
> FAIL 'Streaming File Sink s3 end-to-end test' failed after 15 minutes and 20 
> seconds! Test exited with exit code 1
> I have checked the JM log; it seems that a TaskManager is no longer reachable:
> {code:java}
> 2024-04-08T01:12:04.3922210Z Apr 08 01:12:04 2024-04-08 00:58:15,517 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Sink: 
> Unnamed (4/4) 
> (14b44f534745ffb2f1ef03fca34f7f0d_0a448493b4782967b150582570326227_3_0) 
> switched from RUNNING to FAILED on localhost:44987-47f5af @ localhost 
> (dataPort=34489).
> 2024-04-08T01:12:04.3924522Z Apr 08 01:12:04 
> org.apache.flink.runtime.jobmaster.JobMasterException: TaskManager with id 
> localhost:44987-47f5af is no longer reachable.
> 2024-04-08T01:12:04.3925421Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyTargetUnreachable(JobMaster.java:1511)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3926185Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.DefaultHeartbeatMonitor.reportHeartbeatRpcFailure(DefaultHeartbeatMonitor.java:126)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3926925Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.runIfHeartbeatMonitorExists(HeartbeatManagerImpl.java:275)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3929898Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.reportHeartbeatTargetUnreachable(HeartbeatManagerImpl.java:267)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3930692Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.handleHeartbeatRpcFailure(HeartbeatManagerImpl.java:262)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3931442Z Apr 08 01:12:04  at 
> org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl.lambda$handleHeartbeatRpc$0(HeartbeatManagerImpl.java:248)
>  ~[flink-dist-1.20-SNAPSHOT.jar:1.20-SNAPSHOT]
> 2024-04-08T01:12:04.3931917Z Apr 08 01:12:04  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>  ~[?:1.8.0_402]
> 2024-04-08T01:12:04.3934759Z Apr 08 01:12:04  at 
> java.util.concurrent.CompletableFuture$UniWhenCom

Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]

2024-06-13 Thread via GitHub


flinkbot commented on PR #24934:
URL: https://github.com/apache/flink/pull/24934#issuecomment-2166393201

   
   ## CI report:
   
   * d31c5b45069430fcdd04727df1aebde7dd111d97 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support `ALTER CATALOG COMMENT` syntax [flink]

2024-06-13 Thread via GitHub


LadyForest commented on code in PR #24932:
URL: https://github.com/apache/flink/pull/24932#discussion_r1638477886


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java:
##
@@ -327,11 +328,15 @@ public void createCatalog(String catalogName, 
CatalogDescriptor catalogDescripto
  *
  * @param catalogName the given catalog name under which to alter the 
given catalog
  * @param catalogUpdater catalog configuration updater to alter catalog
+ * @param catalogCommentUpdater catalog comment updater to alter catalog
  * @throws CatalogException If the catalog neither exists in the catalog 
store nor in the
  * initialized catalogs, or if an error occurs while creating the 
catalog or storing the
  * {@link CatalogDescriptor}
  */
-public void alterCatalog(String catalogName, Consumer 
catalogUpdater)
+public void alterCatalog(
+String catalogName,
+Consumer catalogUpdater,
+Function catalogCommentUpdater)

Review Comment:
   I've created a [PR](https://github.com/liyubin117/flink/pull/2) to refactor 
this method.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35585] Add documentation for distribution [flink]

2024-06-13 Thread via GitHub


jnh5y commented on code in PR #24929:
URL: https://github.com/apache/flink/pull/24929#discussion_r1638444015


##
docs/content/docs/dev/table/sql/create.md:
##
@@ -465,6 +480,7 @@ You can control the merging behavior of:
 * GENERATED - computed columns
 * METADATA - metadata columns
 * OPTIONS - connector options that describe connector and format properties
+* DISTRIBUTION - distribution definition

Review Comment:
   Yes!  Added some content there.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35585] Add documentation for distribution [flink]

2024-06-13 Thread via GitHub


jnh5y commented on code in PR #24929:
URL: https://github.com/apache/flink/pull/24929#discussion_r1638447342


##
docs/content/docs/dev/table/sql/create.md:
##
@@ -406,6 +413,14 @@ Flink will assume correctness of the primary key by 
assuming that the columns nu
 
 Partition the created table by the specified columns. A directory is created 
for each partition if this table is used as a filesystem sink.
 
+### `DISTRIBUTED BY / DISTRIBUTED INTO`

Review Comment:
   Ok, I copied in the text here: 
https://github.com/apache/flink/pull/24929/files#diff-d0ac52822e134b21138761d84268f080defc8323a120650f956afb697a1bf5f6R422-R456
   
   How should the links to JavaDocs/specifics be handled? (I updated some of 
the text; I am not sure how to line up other details.)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-34918][table] Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax [flink]

2024-06-13 Thread via GitHub


liyubin117 opened a new pull request, #24934:
URL: https://github.com/apache/flink/pull/24934

   ## What is the purpose of the change
   
   Introduce comment for CatalogStore and Support enhanced `CREATE CATALOG` 
syntax
   
   ## Brief change log
   
   * add comment instance in `CatalogDescriptor` and expose setter/getter 
function
   * CREATE CATALOG [IF NOT EXISTS] catalog_name [COMMENT 'comment_value'] 
[WITH (property_name=property_value, ...)]
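   
   A quick sketch of the proposed syntax in use (hypothetical catalog name and 
comment; `generic_in_memory` is an existing built-in catalog type):
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class CreateCatalogSketch {
       public static void main(String[] args) {
           TableEnvironment env =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           // New optional COMMENT clause between the catalog name and WITH:
           env.executeSql(
                   "CREATE CATALOG IF NOT EXISTS my_catalog "
                           + "COMMENT 'catalog for the sales warehouse' "
                           + "WITH ('type' = 'generic_in_memory')");
       }
   }
   ```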
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   flink-table/flink-sql-client/src/test/resources/sql/catalog_database.q
   flink-table/flink-sql-gateway/src/test/resources/sql/catalog_database.q
   
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogManagerTest#testCatalogStore
   
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest#testAlterCatalog
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? yes


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35165][runtime/coordination] AdaptiveBatch Scheduler should not restrict the default source parall… [flink]

2024-06-13 Thread via GitHub


venkata91 commented on PR #24736:
URL: https://github.com/apache/flink/pull/24736#issuecomment-2166456519

   > > > Thanks, @venkata91, for your contribution! After reviewing this PR, 
I'm concerned that it entirely removes limit that source parallelism should 
lower than source jobVertex's max parallelism. And I think the goal of this pr 
is ensure source parallelism isn't limited by config option 
execution.batch.adaptive.auto-parallelism.max-parallelism, but still respects 
the max parallelism of source jobVertex.
   > > > WDYT?
   > > 
   > > 
   > > I think that makes sense. Basically what you're saying is if `source's 
max parallelism` is determined by the `source` itself which is < 
`default-source-parallelism` config, we should cap it by the `source computed 
max parallelism` correct? If so, I agree with that.
   > 
   > Yes, that's correct.
   
   @JunRuiLee Sorry for the late reply. I looked at the code again, and it does 
look to be doing what we expected. Can you please point me to the 
corresponding code reference?
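   
   A one-line sketch of the capping rule agreed on above (hypothetical helper 
name, not the actual scheduler code):
   
   ```java
   public class SourceParallelismSketch {
       /**
        * Effective source parallelism: the configured default, capped by the max
        * parallelism the source itself reports -- so the option
        * execution.batch.adaptive.auto-parallelism.max-parallelism no longer
        * limits sources, but the source's own limit still does.
        */
       static int effectiveSourceParallelism(
               int defaultSourceParallelism, int sourceMaxParallelism) {
           return Math.min(defaultSourceParallelism, sourceMaxParallelism);
       }
   
       public static void main(String[] args) {
           // default-source-parallelism = 128, source reports max 32 -> use 32
           System.out.println(effectiveSourceParallelism(128, 32));
       }
   }
   ```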


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33977) Adaptive scheduler may not minimize the number of TMs during downscaling

2024-06-13 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-33977:
---
Affects Version/s: 1.19.0
   1.20.0

> Adaptive scheduler may not minimize the number of TMs during downscaling
> 
>
> Key: FLINK-33977
> URL: https://issues.apache.org/jira/browse/FLINK-33977
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler, Runtime / Coordination
>Affects Versions: 1.18.0, 1.19.0, 1.20.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> Adaptive Scheduler uses SlotAssigner to assign free slots to slot sharing 
> groups. Currently, there're two implementations of SlotAssigner available: 
> the 
> DefaultSlotAssigner that treats all slots and slot sharing groups equally and 
> the {color:#172b4d}StateLocalitySlotAssigner{color} that assigns slots based 
> on the number of local key groups to utilize local state recovery. The 
> scheduler will use the DefaultSlotAssigner when no key group assignment info 
> is available and use the StateLocalitySlotAssigner otherwise.
>  
> However, none of the SlotAssigners targets minimizing the number of TMs, 
> which may produce suboptimal slot assignment under the Application Mode. For 
> example, when a job with 8 slot sharing groups and 2 TMs (each 4 slots) is 
> downscaled through the FLIP-291 API to have 4 slot sharing groups instead, 
> the cluster may still have 2 TMs, one with 1 free slot, and the other with 3 
> free slots. For end-users, this implies an ineffective downscaling as the 
> total cluster resources are not reduced.
>  
> We should take minimizing the number of TMs into consideration as well. A 
> possible approach is to enhance the {color:#172b4d}StateLocalitySlotAssigner: 
> when the number of free slots exceeds need, sort all the TMs by a score 
> summing from the allocation scores of all slots on it, remove slots from the 
> excessive TMs with the lowest score and proceed the remaining slot 
> assignment.{color}
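A rough sketch of that idea (hypothetical types and names; the real change 
would live inside StateLocalitySlotAssigner):
{code:java}
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class TmPruningSketch {

    record Slot(String tmId, long allocationScore) {}

    /** Keep whole TMs with the highest summed slot scores until enough slots are covered. */
    static List<Slot> pruneExcessTms(List<Slot> freeSlots, int slotsNeeded) {
        // Score each TM by the sum of the allocation scores of its slots.
        Map<String, Long> scorePerTm = freeSlots.stream()
                .collect(Collectors.groupingBy(
                        Slot::tmId, Collectors.summingLong(Slot::allocationScore)));

        // Visit TMs from highest to lowest score, taking whole TMs until the
        // required slot count is covered; the remaining TMs can be released.
        List<String> tmsByScoreDesc = scorePerTm.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        Set<String> keptTms = new HashSet<>();
        int covered = 0;
        for (String tm : tmsByScoreDesc) {
            if (covered >= slotsNeeded) {
                break;
            }
            keptTms.add(tm);
            covered += (int) freeSlots.stream().filter(s -> s.tmId().equals(tm)).count();
        }

        // Proceed with the normal slot assignment using only the kept slots.
        return freeSlots.stream()
                .filter(s -> keptTms.contains(s.tmId()))
                .collect(Collectors.toList());
    }
}
{code}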



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35594) Downscaling doesn't release TaskManagers.

2024-06-13 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-35594.
--
Resolution: Duplicate

> Downscaling doesn't release TaskManagers.
> -
>
> Key: FLINK-35594
> URL: https://issues.apache.org/jira/browse/FLINK-35594
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.1
> Environment: * Flink 1.18.1 (Java 11, Temurin).
>  * Kubernetes Operator 1.8
>  * Kubernetes version v1.28.9-eks-036c24b (AWS EKS).
>  
> Autoscaling configuration:
> {code:java}
> jobmanager.scheduler: adaptive
> job.autoscaler.enabled: "true"
> job.autoscaler.metrics.window: 15m
> job.autoscaler.stabilization.interval: 15m
> job.autoscaler.scaling.effectiveness.threshold: 0.2
> job.autoscaler.target.utilization: "0.75"
> job.autoscaler.target.utilization.boundary: "0.25"
> job.autoscaler.metrics.busy-time.aggregator: "AVG"
> job.autoscaler.restart.time-tracking.enabled: "true"{code}
>Reporter: Aviv Dozorets
>Priority: Major
> Attachments: Screenshot 2024-06-10 at 12.50.37 PM.png
>
>
> (Follow-up of Slack conversation on #troubleshooting channel).
> Recently I've observed a behavior that should be improved:
> A Flink DataStream job that runs with the autoscaler (backed by the Kubernetes 
> operator) and the Adaptive scheduler doesn't release a node (TaskManager) when 
> scaling down. In my example, the job started with an initial parallelism of 64 
> on 4 TMs with 16 cores each (1:1 core:slot) and scaled down to 16.
> My expectation: 1 TaskManager should be up and running.
> Reality: all 4 initial TaskManagers are running, with varying and unequal 
> numbers of available slots.
>  
> Didn't find an existing configuration to change the behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35165][runtime/coordination] AdaptiveBatch Scheduler should not restrict the default source parall… [flink]

2024-06-13 Thread via GitHub


venkata91 commented on code in PR #24736:
URL: https://github.com/apache/flink/pull/24736#discussion_r1638672737


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptivebatch/AdaptiveBatchSchedulerTest.java:
##


Review Comment:
   Addressed it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35595) MailboxProcessor#processMailsWhenDefaultActionUnavailable is allocation intensive

2024-06-13 Thread David Schlosnagle (Jira)
David Schlosnagle created FLINK-35595:
-

 Summary: MailboxProcessor#processMailsWhenDefaultActionUnavailable 
is allocation intensive
 Key: FLINK-35595
 URL: https://issues.apache.org/jira/browse/FLINK-35595
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Reporter: David Schlosnagle


While investigating allocation stalls and GC pressure of a Flink streaming 
pipeline, I noticed significant allocations of {{Optional}} in JFRs from 
{{org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor#processMailsWhenDefaultActionUnavailable()}}.
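An illustration of the general pattern behind such hot-path {{Optional}} 
allocation, with made-up types (not the actual MailboxProcessor code):
{code:java}
import java.util.ArrayDeque;
import java.util.Optional;
import java.util.Queue;

public class HotPathPollingSketch {

    private final Queue<Runnable> mails = new ArrayDeque<>();

    // Allocation-heavy on a hot path: every non-empty result is wrapped in a
    // fresh Optional instance on each call.
    Optional<Runnable> tryTakeOptional() {
        return Optional.ofNullable(mails.poll());
    }

    // Allocation-free alternative: return null directly and let the caller
    // check, trading a little API elegance for zero per-call garbage.
    Runnable tryTakeNullable() {
        return mails.poll();
    }

    void processMails() {
        Runnable mail;
        while ((mail = tryTakeNullable()) != null) {
            mail.run();
        }
    }
}
{code}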



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35485) JobMaster failed with "the job xx has not been finished"

2024-06-13 Thread Xingcan Cui (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854891#comment-17854891
 ] 

Xingcan Cui commented on FLINK-35485:
-

Hi [~mapohl], I hit the exception again today and it caused the jobmanager to 
restart. Still no WARN+ logs. I checked the corresponding job, and it actually 
finished successfully.

> JobMaster failed with "the job xx has not been finished"
> 
>
> Key: FLINK-35485
> URL: https://issues.apache.org/jira/browse/FLINK-35485
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.1
>Reporter: Xingcan Cui
>Priority: Major
>
> We ran a session cluster on K8s and used Flink SQL gateway to submit queries. 
> Hit the following rare exception once which caused the job manager to restart.
> {code:java}
> org.apache.flink.util.FlinkException: JobMaster for job 
> 50d681ae1e8170f77b4341dda6aba9bc failed.
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.jobMasterFailed(Dispatcher.java:1454)
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.jobManagerRunnerFailed(Dispatcher.java:776)
>   at 
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$runJob$6(Dispatcher.java:698)
>   at java.base/java.util.concurrent.CompletableFuture.uniHandle(Unknown 
> Source)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(Unknown 
> Source)
>   at java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
> Source)
>   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRunAsync$4(PekkoRpcActor.java:451)
>   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRunAsync(PekkoRpcActor.java:451)
>   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:218)
>   at 
> org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:85)
>   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:168)
>   at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33)
>   at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29)
>   at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
>   at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
>   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
>   at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547)
>   at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545)
>   at 
> org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229)
>   at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590)
>   at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557)
>   at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280)
>   at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241)
>   at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253)
>   at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
>   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown 
> Source)
>   at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
>   at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
>   at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source)
> Caused by: org.apache.flink.runtime.jobmaster.JobNotFinishedException: The 
> job (50d681ae1e8170f77b4341dda6aba9bc) has not been finished.
>   at 
> org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.closeAsync(DefaultJobMasterServiceProcess.java:157)
>   at 
> org.apache.flink.runtime.jobmaster.JobMasterServiceLeadershipRunner.stopJobMasterServiceProcess(JobMasterServiceLeadershipRunner.java:431)
>   at 
> org.apache.flink.runtime.jobmaster.JobMasterServiceLeadershipRunner.callIfRunning(JobMasterServiceLeadershipRunner.java:476)
>   at 
> org.apache.flink.runtime.jobmaster.JobMasterServiceLeadershipRunner.lambda$stopJobMasterServiceProcessAsync$12(JobMasterServiceLeadershipRunner.java:407)
>   at java.base/java.util.concurrent.CompletableFuture.uniComposeStage(Unknown 
> Source)
>   at java.base/java.util.concurrent.CompletableFuture.thenCompose(Unknown 
> Source)
>   at 
> org.apache.flink.runtime.jobmaster.JobMasterServiceLeadershipRunner.stopJobMasterServiceProcessAsync(JobMasterServiceLeadershipRunner.java:405)
>   at 
> org.apache.flink.runtime.jobm

[PR] [hotfix] [docs] reference.md: Add missing FlinkSessionJob CRD [flink-kubernetes-operator]

2024-06-13 Thread via GitHub


mattayes opened a new pull request, #838:
URL: https://github.com/apache/flink-kubernetes-operator/pull/838

   
   
   ## What is the purpose of the change
   
   Add missing documentation for `FlinkSessionJob` CRD.
   
   
   ## Brief change log
   
   Add docs for `FlinkSessionJob` CRD.
   
   ## Verifying this change
   
   
   This change is a trivial rework / code cleanup without any test coverage, 
other than verifying locally.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
no
 - Core observer or reconciler logic that is regularly executed: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] [docs] reference.md: Add missing FlinkSessionJob CRD [flink-kubernetes-operator]

2024-06-13 Thread via GitHub


mattayes commented on PR #838:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/838#issuecomment-2166971750

   @gyfora Here's a rework of #837.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35596) Flink application fails with The implementation of the BlockElement is not serializable

2024-06-13 Thread Venkata krishnan Sowrirajan (Jira)
Venkata krishnan Sowrirajan created FLINK-35596:
---

 Summary: Flink application fails with The implementation of the 
BlockElement is not serializable
 Key: FLINK-35596
 URL: https://issues.apache.org/jira/browse/FLINK-35596
 Project: Flink
  Issue Type: Bug
Reporter: Venkata krishnan Sowrirajan


 
Flink application fails with 
_org.apache.flink.api.common.InvalidProgramException: The implementation of the 
BlockElement is not serializable. The object probably contains or references 
non serializable fields._

It looks like the new _AvroEncoding_ enum introduced in [[FLINK-33058][formats] 
Add encoding option to Avro 
format|https://github.com/apache/flink/pull/23395/files#top] also uses 
TextElement to format its description for Javadocs.

The enum is used internally in _AvroRowDataSerializationSchema_ and 
_AvroRowDataDeserializationSchema_, which need to be serialized, while 
_BlockElement_ is not serializable.
{code:java}
org.apache.flink.api.common.InvalidProgramException: The implementation of the 
BlockElement is not serializable. The object probably contains or references 
non serializable fields. at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:164) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at 
org.apache.flink.connector.kafka.sink.KafkaSinkBuilder.setRecordSerializer(KafkaSinkBuilder.java:152)
 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.getSinkRuntimeProvider(KafkaDynamicSink.java:207)
 at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:150)
 at 
org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:176)
 at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:159)
 at 
org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85)
 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) 
at scala.collection.Iterator.foreach(Iterator.scala:937) at 
scala.collection.Iterator.foreach$(Iterator.scala:937) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1425) at 
scala.collection.IterableLike.foreach(IterableLike.scala:70) at 
scala.collection.IterableLike.foreach$(IterableLike.scala:69) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:84)
 at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1733)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:825)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:918)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)
 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testKafkaSourceSink(KafkaTableITCase.java:140)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.

[jira] [Updated] (FLINK-35596) Flink application fails with The implementation of the BlockElement is not serializable

2024-06-13 Thread Venkata krishnan Sowrirajan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata krishnan Sowrirajan updated FLINK-35596:

Affects Version/s: 1.19.0

> Flink application fails with The implementation of the BlockElement is not 
> serializable
> ---
>
> Key: FLINK-35596
> URL: https://issues.apache.org/jira/browse/FLINK-35596
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>
>  
> Flink application fails with 
> _org.apache.flink.api.common.InvalidProgramException: The implementation of 
> the BlockElement is not serializable. The object probably contains or 
> references non serializable fields._
> It looks like the new _AvroEncoding_ enum introduced in [[FLINK-33058][formats] 
> Add encoding option to Avro 
> format|https://github.com/apache/flink/pull/23395/files#top] also uses 
> TextElement to format its description for Javadocs.
> The enum is used internally in _AvroRowDataSerializationSchema_ and 
> _AvroRowDataDeserializationSchema_, which need to be serialized, while 
> _BlockElement_ is not serializable.
> {code:java}
> org.apache.flink.api.common.InvalidProgramException: The implementation of 
> the BlockElement is not serializable. The object probably contains or 
> references non serializable fields. at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:164) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at 
> org.apache.flink.connector.kafka.sink.KafkaSinkBuilder.setRecordSerializer(KafkaSinkBuilder.java:152)
>  at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.getSinkRuntimeProvider(KafkaDynamicSink.java:207)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:150)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:176)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:159)
>  at 
> org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85)
>  at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at 
> scala.collection.Iterator.foreach(Iterator.scala:937) at 
> scala.collection.Iterator.foreach$(Iterator.scala:937) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1425) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:70) at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:69) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:84)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1733)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:825)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:918)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)
>  at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testKafkaSourceSink(KafkaTableITCase.java:140)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.i

[jira] [Commented] (FLINK-35596) Flink application fails with The implementation of the BlockElement is not serializable

2024-06-13 Thread Venkata krishnan Sowrirajan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17854896#comment-17854896
 ] 

Venkata krishnan Sowrirajan commented on FLINK-35596:
-

Most likely the flink-kafka-connector has not been upgraded to 1.19 and is not 
picking up the change above, which is why the `KafkaTableITCase` tests are not 
failing.

> Flink application fails with The implementation of the BlockElement is not 
> serializable
> ---
>
> Key: FLINK-35596
> URL: https://issues.apache.org/jira/browse/FLINK-35596
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>
>  
> Flink application fails with 
> _org.apache.flink.api.common.InvalidProgramException: The implementation of 
> the BlockElement is not serializable. The object probably contains or 
> references non serializable fields._
> It looks like the new _AvroEncoding_ enum introduced in [[FLINK-33058][formats] 
> Add encoding option to Avro 
> format|https://github.com/apache/flink/pull/23395/files#top] also uses 
> TextElement to format its description for Javadocs.
> The enum is used internally in _AvroRowDataSerializationSchema_ and 
> _AvroRowDataDeserializationSchema_, which need to be serialized, while 
> _BlockElement_ is not serializable.
> {code:java}
> org.apache.flink.api.common.InvalidProgramException: The implementation of 
> the BlockElement is not serializable. The object probably contains or 
> references non serializable fields. at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:164) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at 
> org.apache.flink.connector.kafka.sink.KafkaSinkBuilder.setRecordSerializer(KafkaSinkBuilder.java:152)
>  at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.getSinkRuntimeProvider(KafkaDynamicSink.java:207)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:150)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:176)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:159)
>  at 
> org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85)
>  at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at 
> scala.collection.Iterator.foreach(Iterator.scala:937) at 
> scala.collection.Iterator.foreach$(Iterator.scala:937) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1425) at 
> scala.collection.IterableLike.foreach(IterableLike.scala:70) at 
> scala.collection.IterableLike.foreach$(IterableLike.scala:69) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:84)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1733)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:825)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:918)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)
>  at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testKafkaSourceSink(KafkaTableITCase.java:140)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.in

[PR] FLINK-35596: Make DescriptionElement serializable [flink]

2024-06-13 Thread via GitHub


venkata91 opened a new pull request, #24935:
URL: https://github.com/apache/flink/pull/24935

   
   
   ## What is the purpose of the change
   
   This change makes `DescriptionElement` serializable so that `AvroEncoding`, 
which is set in `AvroRowDataSerializationSchema` and 
`AvroRowDataDeserializationSchema`, can be serialized.
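   
   A minimal sketch of why recursive closure cleaning rejects the enum today, and 
the direction of the fix, with simplified stand-in types (not the actual Flink 
classes; ClosureCleaner's real check is more involved):
   
   ```java
   import java.io.Serializable;
   import java.lang.reflect.Field;
   
   public class ClosureCheckSketch {
   
       interface BlockElement {}                         // today: not Serializable
       // interface BlockElement extends Serializable {} // direction of this PR
   
       static final class TextElement implements BlockElement {
           final String text;
           TextElement(String text) { this.text = text; }
       }
   
       enum Encoding {
           JSON(new TextElement("json"));
           final BlockElement description;
           Encoding(BlockElement d) { this.description = d; }
       }
   
       // Very rough stand-in for the recursive field walk in ClosureCleaner:
       // every non-primitive field value must be an instance of Serializable.
       static void ensureFieldsSerializable(Object obj) throws IllegalAccessException {
           for (Field f : obj.getClass().getDeclaredFields()) {
               if (f.isSynthetic() || f.getType().isPrimitive()) {
                   continue;
               }
               f.setAccessible(true);
               Object value = f.get(obj);
               if (value != null && !(value instanceof Serializable)) {
                   throw new IllegalStateException(
                           "The implementation of the " + f.getName() + " is not serializable.");
               }
           }
       }
   
       public static void main(String[] args) throws Exception {
           // Throws until BlockElement (and its implementations) are Serializable:
           ensureFieldsSerializable(Encoding.JSON);
       }
   }
   ```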
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   This change is already covered by existing tests, such as KafkaTableITCase 
in flink-connector-kafka; if needed, we can also add an ITCase in Flink itself 
to cover this scenario.
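
   If we do add such a test, a minimal sketch could look like the following. It 
assumes `TextElement` is `Serializable` after this change and that 
`InstantiationUtil` is on the classpath; the test name and assertion style are 
illustrative and not part of this PR:

   ```java
   import static org.assertj.core.api.Assertions.assertThat;

   import org.apache.flink.configuration.description.TextElement;
   import org.apache.flink.util.InstantiationUtil;
   import org.junit.jupiter.api.Test;

   class DescriptionElementSerializationTest {

       @Test
       void textElementSurvivesJavaSerialization() throws Exception {
           TextElement element = TextElement.text("Some option description");

           // Round-trips the element through Java serialization; before this
           // change the round-trip could not succeed, because BlockElement did
           // not extend Serializable.
           TextElement copy = InstantiationUtil.clone(element);

           assertThat(copy).isNotNull();
       }
   }
   ```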
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**yes** / no)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35596) Flink application fails with The implementation of the BlockElement is not serializable

2024-06-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35596:
---
Labels: pull-request-available  (was: )

> Flink application fails with The implementation of the BlockElement is not 
> serializable
> ---
>
> Key: FLINK-35596
> URL: https://issues.apache.org/jira/browse/FLINK-35596
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>  Labels: pull-request-available
>
>  
> Flink application fails with 
> _org.apache.flink.api.common.InvalidProgramException: The implementation of 
> the BlockElement is not serializable. The object probably contains or 
> references non serializable fields._
> It looks like, as part of [[FLINK-33058][formats] Add encoding option to Avro 
> format|https://github.com/apache/flink/pull/23395/files#top], a new 
> _AvroEncoding_ enum was introduced that also uses _TextElement_ to format 
> the description for Javadocs.
> The enum is used internally by _AvroRowDataSerializationSchema_ and 
> _AvroRowDataDeserializationSchema_, which need to be serialized, while 
> _BlockElement_ is not serializable.
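> For illustration, here is a minimal Table API sketch of the code path that 
> triggers the check. The table name, topic and connector options are 
> illustrative and not taken from the failing job:
> {code:java}
> import org.apache.flink.table.api.EnvironmentSettings;
> import org.apache.flink.table.api.TableEnvironment;
>
> public class AvroKafkaSinkRepro {
>     public static void main(String[] args) {
>         TableEnvironment tEnv =
>                 TableEnvironment.create(EnvironmentSettings.inStreamingMode());
>
>         // A Kafka sink with 'format' = 'avro' holds an
>         // AvroRowDataSerializationSchema, which in turn holds the
>         // AvroEncoding enum and, through it, a TextElement.
>         tEnv.executeSql(
>                 "CREATE TABLE kafka_sink (id BIGINT, name STRING) WITH ("
>                         + " 'connector' = 'kafka',"
>                         + " 'topic' = 'out',"
>                         + " 'properties.bootstrap.servers' = 'localhost:9092',"
>                         + " 'format' = 'avro')");
>
>         // Planning the INSERT builds the sink and runs ClosureCleaner over
>         // the record serializer, which throws the exception below.
>         tEnv.executeSql("INSERT INTO kafka_sink VALUES (1, 'a')");
>     }
> }
> {code}
> The resulting trace: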
> {code:java}
> org.apache.flink.api.common.InvalidProgramException: The implementation of the BlockElement is not serializable. The object probably contains or references non serializable fields.
>   at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:164)
>   at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132)
>   at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132)
>   at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132)
>   at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132)
>   at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69)
>   at org.apache.flink.connector.kafka.sink.KafkaSinkBuilder.setRecordSerializer(KafkaSinkBuilder.java:152)
>   at org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink.getSinkRuntimeProvider(KafkaDynamicSink.java:207)
>   at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:150)
>   at org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:176)
>   at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:159)
>   at org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85)
>   at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>   at scala.collection.Iterator.foreach(Iterator.scala:937)
>   at scala.collection.Iterator.foreach$(Iterator.scala:937)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>   at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>   at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:84)
>   at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197)
>   at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1733)
>   at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:825)
>   at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:918)
>   at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)
>   at org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testKafkaSourceSink(KafkaTableITCase.java:140)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBef

Re: [PR] FLINK-35596: Make DescriptionElement serializable [flink]

2024-06-13 Thread via GitHub


venkata91 commented on PR #24935:
URL: https://github.com/apache/flink/pull/24935#issuecomment-2167003430

   cc @becketqin 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] FLINK-35596: Make DescriptionElement serializable [flink]

2024-06-13 Thread via GitHub


flinkbot commented on PR #24935:
URL: https://github.com/apache/flink/pull/24935#issuecomment-2167007574

   
   ## CI report:
   
   * 8c396c54eeda5bc8f8f656847a372b3421c25a7f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35157][runtime] Sources with watermark alignment get stuck once some subtasks finish [flink]

2024-06-13 Thread via GitHub


1996fanrui merged PR #24757:
URL: https://github.com/apache/flink/pull/24757


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


