[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411336028

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+excerpt: This post discusses the recent changes to the memory model of the task managers and configuration options for your Flink applications in Flink 1.10.

Review comment:
```suggestion
excerpt: This post discusses the recent changes to the memory model of the Task Managers and configuration options for your Flink applications in Flink 1.10.
```
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411336241

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+Apache Flink 1.10 comes with significant changes to the memory model of the task managers and configuration options for your Flink applications. These recently-introduced changes make Flink more adaptable to all kinds of deployment environments (e.g. Kubernetes, Yarn, Mesos), providing strict control over its memory consumption. In this post, we describe Flink’s memory model, as it stands in Flink 1.10, how to set up and manage memory consumption of your Flink applications and the recent changes the community implemented in the latest Apache Flink release.

Review comment:
```suggestion
Apache Flink 1.10 comes with significant changes to the memory model of the Task Managers and configuration options for your Flink applications. These recently-introduced changes make Flink more adaptable to all kinds of deployment environments (e.g. Kubernetes, Yarn, Mesos), providing strict control over its memory consumption. In this post, we describe Flink’s memory model, as it stands in Flink 1.10, how to set up and manage memory consumption of your Flink applications and the recent changes the community implemented in the latest Apache Flink release.
```
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411336465

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+The task manager process is a JVM process. On a high level, its memory consists of the *JVM Heap* and *Off-Heap* memory. These types of memory are consumed by Flink directly or by JVM for its specific purposes (i.e. metaspace etc). There are two major memory consumers within Flink: the user code of job operator tasks and the framework itself consuming memory for internal data structures, network buffers etc.

Review comment:
```suggestion
The Task Manager process is a JVM process. On a high level, its memory consists of the *JVM Heap* and *Off-Heap* memory. These types of memory are consumed by Flink directly or by JVM for its specific purposes (i.e. metaspace etc). There are two major memory consumers within Flink: the user code of job operator tasks and the framework itself consuming memory for internal data structures, network buffers etc.
```
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411336895

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+With the latest release of Flink 1.10 and in order to provide better user experience, the framework comes with both high-level and fine-grained tuning of memory components. There are essentially three alternatives to setting up memory in task managers.

Review comment:
```suggestion
With the latest release of Flink 1.10 and in order to provide better user experience, the framework comes with both high-level and fine-grained tuning of memory components. There are essentially three alternatives to setting up memory in Task Managers.
```
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411337034

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+The first two — and simplest — alternatives are configuring one of the two following options for total memory available for the task manager:

Review comment:
```suggestion
The first two — and simplest — alternatives are configuring one of the two following options for total memory available for the Task Manager:
```
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411337742

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+The remaining memory components are automatically adjusted either based on their default values or additionally configured parameters. Flink also checks the overall consistency. You can find more information about the different memory components in the corresponding [documentation](https://ci.apache.org/proj
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411337034

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+The first two — and simplest — alternatives are configuring one of the two following options for total memory available for the task manager:

Review comment:
```suggestion
The first two — and simplest — alternatives are configuring one of the two following options for total memory available for the JVM process of the Task Manager:
```
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411339534

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+- *Total Flink Memory*: only memory consumed by the Flink application

Review comment:
```suggestion
- *Total Flink Memory*: only memory consumed only by the Flink Java application, including user code but excluding memory allocated by JVM to run it
```
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r409549023

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+- *Total Process Memory*: memory consumed by the Flink application and by the JVM to run the process.

Review comment:
```suggestion
- *Total Process Memory*: total memory consumed by the Flink Java application (including user code) and by the JVM to run the whole process.
```
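The two threads above pin down the distinction between the two high-level options. For readers following along, a minimal flink-conf.yaml sketch of the two alternatives might look like the following; the option keys are Flink 1.10's documented `taskmanager.memory.*` settings, the sizes are purely illustrative, and in practice you would set only one of the two:

```yaml
# Sketch only — sizes are made up for illustration.

# Alternative 1: Total Process Memory — the whole Task Manager JVM process,
# i.e. Total Flink Memory plus JVM Metaspace and JVM Overhead.
taskmanager.memory.process.size: 4096m

# Alternative 2: Total Flink Memory — only what Flink itself consumes,
# excluding the JVM's own allocations.
# taskmanager.memory.flink.size: 3072m
```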
[GitHub] [flink-web] MarkSfik commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
MarkSfik commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411340267

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+It is advisable to configure the *Total Flink Memory* for standalone deployments where explicitly declaring how much memory is given to Flink is a common practice, while the outer *JVM overhead* is of little interest. For the cases of deploying Flink in containerized environments (such as [Kubernetes](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/kubernetes.html), [Yarn](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/yarn_setup.html) or [Mesos](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/mesos.html)), the *Total Process Memory* option is recommended instead, because it becomes the size for the total memory of the requested container.

Review comment:
```suggestion
It is advisable to configure the *Total Flink Memory* for standalone deployments where explicitly declaring how much memory is given to Flink is a common practice, while the outer *JVM overhead* is of little interest. For the cases of deploying Flink in containerized environments (such as [Kubernetes](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/kubernetes.html), [Yarn](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/yarn_setup.html) or [Mesos](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/mesos.html)), the *Total Process Memory* option is recommended instead, because it becomes the size for the total memory of the requested container. Containerized environments usually strictly enforce this memory limit.
```
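To make the container advice concrete: because the container manager enforces the limit, the declared Total Process Memory should match the container's memory budget. A hedged sketch, with made-up sizes and a hypothetical Kubernetes pod spec excerpt:

```yaml
# flink-conf.yaml (illustrative)
taskmanager.memory.process.size: 4096m
```

```yaml
# Kubernetes pod spec excerpt (hypothetical deployment):
# the container limit should equal taskmanager.memory.process.size,
# so Flink and the container runtime agree on the budget.
resources:
  limits:
    memory: 4096Mi
```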
[GitHub] [flink-shaded] zentol opened a new pull request #84: [FLINK-17287][github] Disable merge commit button
zentol opened a new pull request #84:
URL: https://github.com/apache/flink-shaded/pull/84
[GitHub] [flink-web] azagrebin commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
azagrebin commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411984361

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+The task manager process is a JVM process. On a high level, its memory consists of the *JVM Heap* and *Off-Heap* memory. These types of memory are consumed by Flink directly or by JVM for its specific purposes (i.e. metaspace etc.). There are two major memory consumers within Flink: the user code of job operator tasks and the framework itself consuming memory for internal data structures, network buffers, etc.

Review comment:
```suggestion
The Task Manager process is a JVM process. On a high level, its memory consists of the *JVM Heap* and *Off-Heap* memory. These types of memory are consumed by Flink directly or by JVM for its specific purposes (i.e. metaspace etc.). There are two major memory consumers within Flink: the user code of job operator tasks and the framework itself consuming memory for internal data structures, network buffers, etc.
```
[GitHub] [flink-web] azagrebin commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
azagrebin commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411989670

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+The remaining memory components are automatically adjusted either based on their d
[GitHub] [flink-web] azagrebin commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
azagrebin commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411994775

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+If you want more fine-grained control over the size of *JVM Heap* and *Managed Off-Heap*, there is also a second alternative to configure both *[Task Heap](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html#task-operator-heap-memory)* and *[Managed Memory](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html#managed-memory)*. This alternative gives a clear separation between the heap memory and any other memory types.

Review comment:
```suggestion
If you want more fine-grained control over the size of *JVM Heap* and *Managed Memory* (Off-Heap), there is also a second alternative to configure both *[Task Heap](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html#task-operator-heap-memory)* and *[Managed Memory](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html#managed-memory)*. This alternative gives a clear separation between the heap memory and any other memory types.
```
[GitHub] [flink-web] azagrebin commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
azagrebin commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411994181

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+- *Total Flink Memory*: only memory consumed only by the Flink Java application, including user code but excluding memory allocated by JVM to run it

Review comment:
```suggestion
- *Total Flink Memory*: only memory consumed by the Flink Java application, including user code but excluding memory allocated by JVM to run it
```
[GitHub] [flink-web] azagrebin commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
azagrebin commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411995447

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+- *Managed Off-Heap Memory*

Review comment:
```suggestion
- *Managed Memory* (Off-Heap)
```
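For reference, both Flink-controlled off-heap pools named in the quoted bullet list are configurable. A hedged sketch using what the Flink 1.10 documentation lists as the defaults (a fraction of Total Flink Memory for Managed Memory, and a clamped fraction for the network buffers):

```yaml
# Managed Memory (Off-Heap): defaults to a fraction of Total Flink Memory.
taskmanager.memory.managed.fraction: 0.4

# Network Buffers (part of JVM Direct Memory): a fraction of Total Flink
# Memory, clamped between the configured min and max.
taskmanager.memory.network.fraction: 0.1
taskmanager.memory.network.min: 64mb
taskmanager.memory.network.max: 1gb
```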
[GitHub] [flink-web] azagrebin commented on a change in pull request #328: Add blog post: "Memory Management improvements with Apache Flink 1.10"
azagrebin commented on a change in pull request #328:
URL: https://github.com/apache/flink-web/pull/328#discussion_r411995827

## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md

@@ -0,0 +1,87 @@
[...]
+In line with the community’s efforts to [unify batch and stream processing](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html), this model works universally for both scenarios. It allows sharing the *JVM Heap* memory between the user code of operator tasks in any workload and the heap state backend in stream processing scenarios. In a similar way, the *Managed Off-Heap Memory* can be used for batch spilling and for the RocksDB state backend in streaming.

Review comment:
```suggestion
In line with the community’s efforts to
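This thread touches on Managed Memory being shared between batch spilling and the RocksDB state backend in streaming. As a hedged sketch — the `state.backend.rocksdb.memory.managed` switch is a Flink 1.10 option and, to my understanding, already on by default, so treat this as illustrative rather than required configuration:

```yaml
state.backend: rocksdb
# Let RocksDB size its block cache and write buffers out of the
# Task Manager's Managed Memory budget instead of growing unbounded.
state.backend.rocksdb.memory.managed: true
```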
[GitHub] [flink-web] dianfu commented on issue #329: Add Apache Flink release 1.9.3
dianfu commented on issue #329: URL: https://github.com/apache/flink-web/pull/329#issuecomment-618330787 @carp84 @uce Thanks a lot for the review. I will update the post date and release date before merging this PR. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] carp84 opened a new pull request #330: Add Apache Flink release 1.10.1
carp84 opened a new pull request #330: URL: https://github.com/apache/flink-web/pull/330 Adds the release announcement and download link for 1.10.1. Dates on the announcements are still TBD and should be updated accordingly once the release happens. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] dianfu commented on pull request #329: Add Apache Flink release 1.9.3
dianfu commented on pull request #329: URL: https://github.com/apache/flink-web/pull/329#issuecomment-619327260 Merged via af2d8de132b73f0dae6fd88912fe1c92202f3efe This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-elasticsearch] JingGe opened a new pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo
JingGe opened a new pull request #2: URL: https://github.com/apache/flink-connector-elasticsearch/pull/2 ## What is the purpose of the change **Attention**: this PR is still under construction with the following tasks: 1. merge the change of #18634 2. remove all code and tests related to the legacy SourceFunction/SinkFunction. With this PR, Elasticsearch connectors will be moved to the external repo. ## Brief change log - create a new maven project and migrate most of the Flink maven pom. - migrate `flink-connector-elasticsearch-base`, `flink-connector-elasticsearch6`, `flink-connector-elasticsearch7` - migrate the uber jars for SQL: `flink-sql-connector-elasticsearch6`, `flink-sql-connector-elasticsearch7` - fixed some dependency bugs to make compile and test pass ## Verifying this change It can be verified by checking that "mvn clean package" runs successfully. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo
MartijnVisser commented on pull request #2: URL: https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1083152212 @JingGe Thanks for this! Should we also be able to run `mvn clean install`? I've tried that, but I'm getting some errors: ``` ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-elasticsearch] MartijnVisser edited a comment on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo
MartijnVisser edited a comment on pull request #2: URL: https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1083152212 @JingGe Thanks for this! Should we also be able to run `mvn clean install`? I've tried that, but I'm getting some errors: ``` [INFO] Running org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase [INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.075 s - in org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase [INFO] Running org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase [ERROR] Tests run: 4, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 14.291 s <<< FAILURE! - in org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase [ERROR] org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase.testWritingDocumentsNoPrimaryKey Time elapsed: 2.69 s <<< ERROR! java.lang.reflect.InaccessibleObjectException: Unable to make field private static final int java.lang.Class.ANNOTATION accessible: module java.base does not "opens java.lang" to unnamed module @1a27aae3 at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354) at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297) at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178) at java.base/java.lang.reflect.Field.setAccessible(Field.java:172) at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:106) at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:132) at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:69) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:2194) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1871) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1854) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createInput(StreamExecutionEnvironment.java:1753) at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createInput(StreamExecutionEnvironment.java:1743) at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:73) at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148) at org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:249) at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:94) at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148) at org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:249) at org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:136) at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148) at org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:79) at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at scala.collection.Iterator.foreach(Iterator.scala:937) at 
scala.collection.Iterator.foreach$(Iterator.scala:937) at scala.collection.AbstractIterator.foreach(Iterator.scala:1425) at scala.collection.IterableLike.foreach(IterableLike.scala:70) at scala.collection.IterableLike.foreach$(IterableLike.scala:69) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at scala.collection.TraversableLike.map(TraversableLike.scala:233) at scala.collection.TraversableLike.map$(TraversableLike.scala:226) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:78) at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:181) at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1656) at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:782) at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:861) at org.apache.flink.table.api.internal.TablePipelineImpl.exe
[GitHub] [flink-connector-elasticsearch] JingGe commented on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo
JingGe commented on pull request #2: URL: https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1083166099 > @JingGe Thanks for this! Should we also be able to run `mvn clean install`? I've tried that, but I'm getting some errors: Looks like a Java version issue with the `InaccessibleObjectException`. Which Java version did you use while running the script? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on pull request #2: [Flink-26884][draft] move elasticsearch connectors to the external repo
MartijnVisser commented on pull request #2: URL: https://github.com/apache/flink-connector-elasticsearch/pull/2#issuecomment-1083167678 > > @JingGe Thanks for this! Should we also be able to run `mvn clean install`? I've tried that, but I'm getting some errors: > > Looks like a Java version issue with the `InaccessibleObjectException`. Which Java version did you use while running the script? ``` java -version openjdk version "1.8.0_322" OpenJDK Runtime Environment (Temurin)(build 1.8.0_322-b06) OpenJDK 64-Bit Server VM (Temurin)(build 25.322-b06, mixed mode) ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
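Worth noting: `InaccessibleObjectException` only exists on JDK 9+, so if `java -version` reports 1.8, the failing tests were most likely executed by a newer JVM than the one on the `PATH`. Assuming the forked test JVMs do run on JDK 16 or later, a common workaround (a sketch only, not the agreed fix for this PR) is to open the offending module via Surefire's `argLine` property, provided the POM does not already set it:

```bash
# Hypothetical workaround, only relevant if the forked test JVMs run on JDK 16+
mvn clean install -DargLine="--add-opens java.base/java.lang=ALL-UNNAMED"
```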
[GitHub] [flink-connector-elasticsearch] MartijnVisser opened a new pull request #3: [hotfix] Change email/repository notifications to match with Flink Core
MartijnVisser opened a new pull request #3: URL: https://github.com/apache/flink-connector-elasticsearch/pull/3 Matching https://github.com/apache/flink/blob/master/.asf.yaml -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-elasticsearch] AHeise merged pull request #3: [hotfix] Change email/repository notifications to match with Flink Core
AHeise merged pull request #3: URL: https://github.com/apache/flink-connector-elasticsearch/pull/3 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-kubernetes-operator] Aitozi commented on a diff in pull request #153: [FLINK-27000] Support to set JVM args for operator
Aitozi commented on code in PR #153: URL: https://github.com/apache/flink-kubernetes-operator/pull/153#discussion_r841297026 ## docker-entrypoint.sh: ## @@ -27,12 +27,12 @@ if [ "$1" = "help" ]; then elif [ "$1" = "operator" ]; then echo "Starting Operator" -exec java -cp /$FLINK_KUBERNETES_SHADED_JAR:/$OPERATOR_JAR $LOG_CONFIG org.apache.flink.kubernetes.operator.FlinkOperator +exec java $JVM_ARGS -cp /$FLINK_KUBERNETES_SHADED_JAR:/$OPERATOR_JAR $LOG_CONFIG org.apache.flink.kubernetes.operator.FlinkOperator Review Comment: Good point, fixed -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-kubernetes-operator] Aitozi commented on a diff in pull request #153: [FLINK-27000] Support to set JVM args for operator
Aitozi commented on code in PR #153: URL: https://github.com/apache/flink-kubernetes-operator/pull/153#discussion_r841297739 ## helm/flink-kubernetes-operator/values.yaml: ## @@ -78,3 +78,8 @@ metrics: imagePullSecrets: [] nameOverride: "" fullnameOverride: "" + +# Set the jvm start up options for webhook and operator +jvmArgs: + webhook: "" + operator: "" Review Comment: I also feel the helm option is a little mess now 😄, do you have some suggestion for this ? Or let it be improved in your ticket? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #153: [FLINK-27000] Support to set JVM args for operator
gyfora commented on code in PR #153: URL: https://github.com/apache/flink-kubernetes-operator/pull/153#discussion_r841366071 ## helm/flink-kubernetes-operator/values.yaml: ## @@ -78,3 +78,8 @@ metrics: imagePullSecrets: [] nameOverride: "" fullnameOverride: "" + +# Set the jvm start up options for webhook and operator +jvmArgs: + webhook: "" + operator: "" Review Comment: I will try to work on this in another ticket, we need to clean up many things . It’s ok as it is -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #72: [FLINK-26899] Introduce write/query table document for table store
JingsongLi commented on code in PR #72: URL: https://github.com/apache/flink-table-store/pull/72#discussion_r841400831 ## docs/content/docs/development/query-table.md: ## @@ -0,0 +1,87 @@ +--- +title: "Query Table" +weight: 4 +type: docs +aliases: +- /development/query-table.html +--- + + +# Query Table + +The Table Store is streaming batch unified, you can read full +and incremental data depending on the runtime execution mode: + +```sql +-- Batch mode, read latest snapshot +SET 'execution.runtime-mode' = 'batch'; +SELECT * FROM MyTable; + +-- Streaming mode, read incremental snapshot, read the snapshot first, then read the increment +SET 'execution.runtime-mode' = 'streaming'; +SELECT * FROM MyTable; + +-- Streaming mode, read latest incremental +SET 'execution.runtime-mode' = 'streaming'; +SELECT * FROM MyTable /*+ OPTIONS ('log.scan'='latest') */; +``` + +## Query Optimization + +It is highly recommended taking partition and primary key filters +in the query, which will speed up the data skipping of the query. + +Supported filter functions are: +- `=` +- `<>` +- `<` +- `<=` +- `>` +- `>=` +- `in` +- starts with `like` + +## Streaming Real-time + +By default, data is only visible after the checkpoint, which means +that the streaming reading has transactional consistency. + +If you want the data to be immediately visible, you need to: +- 'log.system' = 'kafka', you can't use the FileStore's continuous consumption + capability because the FileStore only provides checkpoint-based visibility. +- 'log.consistency' = 'eventual', this means that writes are visible without + using LogSystem's transaction mechanism. +- All tables need to have primary key defined, because only then can the + data be de-duplicated by normalize node of downstream job. + +## Streaming Low Cost + +By default, for the table with primary key, the records in the table store only +contains INSERT, UPDATE_AFTER, DELETE. No UPDATE_BEFORE. A normalized node is +generated in downstream consuming job, the node will store all key-value for +producing UPDATE_BEFORE message. Review Comment: No, users don't need to `handle the deduplication manually`, this mode is to generate exactly-once without deduplication. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-statefun] FilKarnicki commented on a diff in pull request #309: [FLINK-26570][statefun] Remote module configuration interpolation
FilKarnicki commented on code in PR #309: URL: https://github.com/apache/flink-statefun/pull/309#discussion_r841456053 ## docs/content/docs/modules/overview.md: ## @@ -61,3 +61,36 @@ spec: A module YAML file can contain multiple YAML documents, separated by `---`, each representing a component to be included in the application. Each component is defined by a kind typename string and a spec object containing the component's properties. + +# Configuration string interpolation +You can use `${placeholders}` inside `spec` elements. These will be replaced by entries from a configuration map, consisting of: +1. System properties +2. Environment variables +3. flink-conf.yaml entries with prefix 'statefun.module.global-config.' +4. Command line args + +where (4) override (3) which override (2) which override (1). + +Example: +```yaml +kind: io.statefun.endpoints.v2/http +spec: + functions: com.example/* + urlPathTemplate: ${FUNC_PROTOCOL}://${FUNC_DNS}/{function.name} +--- +kind: io.statefun.kafka.v1/ingress +spec: + id: com.example/my-ingress + address: ${KAFKA_ADDRESS}:${KAFKA_PORT} + consumerGroupId: my-consumer-group + topics: +- topic: ${KAFKA_INGRESS_TOPIC} + (...) + properties: +- ssl.truststore.location: ${SSL_TRUSTSTORE_LOCATION} +- ssl.truststore.password: ${SSL_TRUSTSTORE_PASSWORD} +(...) Review Comment: I like the idea of logging the effective yaml somewhere. We should probably make it an opt-in kind of a deal, since we don't want to be automatically logging secrets. I'll hold off on coding this until we hear from @igalshilman -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] XComp commented on a diff in pull request #19275: [FLINK-24491][runtime] Make the job termination wait until the archiving of ExecutionGraphInfo finishes
XComp commented on code in PR #19275: URL: https://github.com/apache/flink/pull/19275#discussion_r841387465 ## flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java: ## @@ -1060,40 +1066,24 @@ protected CleanupJobState jobReachedTerminalState(ExecutionGraphInfo executionGr terminalJobStatus); } -archiveExecutionGraph(executionGraphInfo); +storeExecutionGraphInfo(executionGraphInfo); if (terminalJobStatus.isGloballyTerminalState()) { -final JobID jobId = executionGraphInfo.getJobId(); -try { -if (jobResultStore.hasCleanJobResultEntry(jobId)) { -log.warn( -"Job {} is already marked as clean but clean up was triggered again.", -jobId); -} else if (!jobResultStore.hasDirtyJobResultEntry(jobId)) { -jobResultStore.createDirtyResult( -new JobResultEntry( -JobResult.createFrom( - executionGraphInfo.getArchivedExecutionGraph(; -log.info( -"Job {} has been registered for cleanup in the JobResultStore after reaching a terminal state.", -jobId); -} -} catch (IOException e) { -fatalErrorHandler.onFatalError( -new FlinkException( -String.format( -"The job %s couldn't be marked as pre-cleanup finished in JobResultStore.", -jobId), -e)); -} -} -return terminalJobStatus.isGloballyTerminalState() -? CleanupJobState.GLOBAL -: CleanupJobState.LOCAL; +// do not create an archive for suspended jobs, as this would eventually lead to +// multiple archive attempts which we currently do not support +CompletableFuture archiveFuture = +archiveExecutionGraph(executionGraphInfo); + +registerCleanupInJobResultStore(executionGraphInfo); + +return archiveFuture.thenApplyAsync(ignored -> CleanupJobState.GLOBAL); +} else { +return CompletableFuture.completedFuture(CleanupJobState.LOCAL); +} } -private void archiveExecutionGraph(ExecutionGraphInfo executionGraphInfo) { +private void storeExecutionGraphInfo(ExecutionGraphInfo executionGraphInfo) { Review Comment: `storeExecutionGraphInfo` and `archiveExecutionGraph` are too generic in my opinion. What about something like `writeToExecutionGraphInfoStore` and `writeToHistoryServer`? That would help distinguishing these two methods. ## flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java: ## @@ -1103,23 +1093,56 @@ private void archiveExecutionGraph(ExecutionGraphInfo executionGraphInfo) { executionGraphInfo.getArchivedExecutionGraph().getJobID(), e); } +} + +private CompletableFuture archiveExecutionGraph( Review Comment: Same as mentioned above, already: `storeExecutionGraphInfo` and `archiveExecutionGraph` are too generic in my opinion. 
## flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/DispatcherResourceCleanupTest.java: ## @@ -651,6 +661,104 @@ public void testFailingJobManagerRunnerCleanup() throws Exception { awaitStatus(dispatcherGateway, jobId, JobStatus.RUNNING); } +@Test(timeout = 5000L) +public void testArchiveSuccessfullyWithinTimeout() throws Exception { + +final Configuration configuration = new Configuration(); + configuration.setLong(ClusterOptions.CLUSTER_SERVICES_SHUTDOWN_TIMEOUT, 1000L); + +final ExecutorService ioExecutor = Executors.newSingleThreadExecutor(); + +try { +final TestingHistoryServerArchivist archivist = +new TestingHistoryServerArchivist(ioExecutor, 50L); +final TestingJobMasterServiceLeadershipRunnerFactory testingJobManagerRunnerFactory = +new TestingJobMasterServiceLeadershipRunnerFactory(0); +final TestingDispatcher.Builder testingDispatcherBuilder = +createTestingDispatcherBuilder() +.setHistoryServerArchivist(archivist) +.setConfiguration(configuration); +startDispatcher(testingDispatcherBuilder, testingJobManagerRunnerFactory); + +submitJobAndWait(); +final TestingJobManagerRunner testingJobManagerRunner = + testingJobManagerRunnerFactory.takeCreatedJobManagerRunner(); +finishJob(testingJobManagerR
[GitHub] [flink-connector-redis] MartijnVisser merged pull request #1: [FLINK-27472][Connector][Redis] Setup Redis connector repository
MartijnVisser merged PR #1: URL: https://github.com/apache/flink-connector-redis/pull/1 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] pscls opened a new pull request, #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
pscls opened a new pull request, #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1 ## What is the purpose of the change This pull request ports the RabbitMQ connector implementation to the new Connector APIs described in [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) and [FLIP-143](https://cwiki.apache.org/confluence/display/FLINK/FLIP-143%3A+Unified+Sink+API). It includes both source and sink with at-most-once, at-least-once, and exactly-once behavior. This pull request closes the following issues (separated RabbitMQ connector Source and Sink tickets): [FLINK-20628](https://issues.apache.org/jira/browse/FLINK-20628) and [FLINK-21373](https://issues.apache.org/jira/browse/FLINK-21373) ## Brief change log - Source and Sink use RabbitMQ's Java Client API to interact with RabbitMQ - The RabbitMQ Source reads messages from a queue - At-least-once - Messages are acknowledged on checkpoint completion - Exactly-once - Messages are acknowledged in a transaction - The user has to set correlation ids for deduplication - The RabbitMQ Sink publishes messages to a queue - At-least-once - Unacknowledged messages are resent on checkpoints - Exactly-once - Messages between two checkpoints are published in a transaction ## Verifying this change This change added tests and can be verified as follows: All changes are within the flink-connectors/flink-connector-rabbitmq2/ module. The added integration tests can be found under the org.apache.flink.connector.rabbitmq2.source and org.apache.flink.connector.rabbitmq2.sink packages in the respective test directories. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (don't know) - The runtime per-record code paths (performance sensitive): (don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (don't know) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (yes) - If yes, how is the feature documented? (JavaDocs) Co-authored-by: Yannik Schroeder Co-authored-by: Jan Westphal -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
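The exactly-once source mode above relies on user-supplied correlation ids. As a minimal sketch of what that means on the producing side, here is how a message can be published with a correlation id using the plain RabbitMQ Java client (not the connector itself); host, queue name, and payload are illustrative:

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class CorrelatedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // illustrative broker host
        try (Connection connection = factory.newConnection();
                Channel channel = connection.createChannel()) {
            channel.queueDeclare("my-queue", true, false, false, null);
            // The correlation id uniquely identifies the message, so a consumer
            // can de-duplicate deliveries that are replayed after a failure.
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(UUID.randomUUID().toString())
                    .build();
            channel.basicPublish("", "my-queue", props,
                    "payload".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```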
[GitHub] [flink-connector-rabbitmq] pscls commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
pscls commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1135011871 This is a copy from the original PR (https://github.com/apache/flink/pull/15140) against the Flink repository. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] pscls commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
pscls commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1135015425 @MartijnVisser We are not exactly sure what has to be part of the root-pom. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] MartijnVisser commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
MartijnVisser commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1135867243 @pscls I think you've done a good job already with the root-pom; it looks like the one we currently have for Elasticsearch. I've just approved the run, so we can also see how the build behaves. When I tried it locally, it complained about https://github.com/pscls/flink-connector-rabbitmq/blob/new-api-connector/flink-connector-rabbitmq/src/main/java/org/apache/flink/connector/rabbitmq/sink/RabbitMQSink.java#L124 having a whitespace, but now the tests are running for me. I'll work on finding someone who can help with the review for this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] chayim commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
chayim commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1136864263 @MartijnVisser Pinging here - as this supersedes https://github.com/apache/flink/pull/15487 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] MartijnVisser commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
MartijnVisser commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1137146071 @chayim Thanks for that! I'm off for a little bit but I'll see if we can find anyone who can help with a review of this! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] sazzad16 commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
sazzad16 commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1137164171 @MartijnVisser Thank you :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] chayim commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
chayim commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1148455033 @MartijnVisser any luck finding someone to help? Hope you're enjoying your time off! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] MartijnVisser commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
MartijnVisser commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1148571757 @chayim Not yet, I'm bringing this PR up in our release meet-up which is scheduled for next week on Tuesday! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] MartijnVisser commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
MartijnVisser commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1148589486 @sazzad16 One request from my end: can you rebase on the current `main` branch to get the correct ASF repository configuration in? It will avoid sending a lot of emails to the Dev mailing list -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] sazzad16 commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
sazzad16 commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1148599822 @MartijnVisser It is already rebased. Just rechecked. Is there anything missing? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-ml] lindong28 opened a new pull request #1: [Flink-21976] Move Flink ML pipeline API and library code from apache/flink to apache/flink-ml
lindong28 opened a new pull request #1: URL: https://github.com/apache/flink-ml/pull/1 ## What is the purpose of the change Move Flink ML pipeline API and library code from apache/flink to apache/flink-ml ## Brief change log - Move files under flink/flink-ml-parent to flink-ml repo - Add CODE_OF_CONDUCT.md, LICENSE and .gitignore - Add files needed for checkstyle under tools/maven - Update pom.xml to include plugins from apache/flink/pom.xml that are needed to build and release this flink-ml repo. ## Verifying this change 1) This PR can run and pass all unit tests. 2) Run `mvn install` in both `apache/flink-ml` and `apache/flink/flink-ml-parent` and verify that they generate the same set of files (e.g. *.pom files and *.jar files) at the same path under `~/.m2/repository/org/apache/flink/` 3) `mvn install` generates `flink-ml-api-1.13-SNAPSHOT.jar`, `flink-ml-lib_2.11-1.13-SNAPSHOT.jar` and `flink-ml-uber_2.11-1.13-SNAPSHOT.jar`. I used IntelliJ to compare the jar files with those jar files generated by `apache/flink/flink-ml-parent` and verified that they contain the same class files. The only difference in the jar files is that the jar file from this repo has a NOTICE that says "Copyright 2019-2020 The Apache Software Foundation", whereas the jar file from apache/flink has a NOTICE that says "Copyright 2014-2021 The Apache Software Foundation". It is not clear to me where that difference comes from. I believe this is not a blocking issue. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-ml] lindong28 commented on pull request #1: [Flink-21976] Move Flink ML pipeline API and library code from apache/flink to apache/flink-ml
lindong28 commented on pull request #1: URL: https://github.com/apache/flink-ml/pull/1#issuecomment-809144123 @becketqin Could you review this PR? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-ml] becketqin merged pull request #1: [Flink-21976] Move Flink ML pipeline API and library code from apache/flink to apache/flink-ml
becketqin merged pull request #1: URL: https://github.com/apache/flink-ml/pull/1 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-ml] becketqin commented on pull request #1: [Flink-21976] Move Flink ML pipeline API and library code from apache/flink to apache/flink-ml
becketqin commented on pull request #1: URL: https://github.com/apache/flink-ml/pull/1#issuecomment-809247359 @lindong28 Thanks for the patch. Merged to master. 08d058046f34b711128e0646ffbdc7e384c22064 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-ml] lindong28 opened a new pull request #2: [FLINK-22013] Add Github Actions to flink-ml for every push and pull request
lindong28 opened a new pull request #2: URL: https://github.com/apache/flink-ml/pull/2 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-ml] lindong28 commented on pull request #2: [FLINK-22013] Add Github Actions to flink-ml for every push and pull request
lindong28 commented on pull request #2: URL: https://github.com/apache/flink-ml/pull/2#issuecomment-810706284 @becketqin Could you review this PR? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connectors] AHeise closed pull request #1: Fix tests
AHeise closed pull request #1: URL: https://github.com/apache/flink-connectors/pull/1 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] MartijnVisser commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
MartijnVisser commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1160080270 @pscls Can you have a look at the failing build? It's a checkstyle error. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] wanglijie95 commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
wanglijie95 commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1160089843 @MartijnVisser @pscls I noticed that the GitBox of flink-connector-rabbitmq sent emails to dev@flink.apache.org. Is that expected? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] MartijnVisser commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
MartijnVisser commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1160102027 @wanglijie95 No. Most likely this is because the PR was created before, and is not yet using, the ASF config as defined in https://github.com/apache/flink-connector-rabbitmq/blob/main/.asf.yaml -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] knaufk commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
knaufk commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1190766247 Thanks @sazzad16 for your contribution and patience. I think the community would really benefit from a Redis Connector and I'll try to help you get this in. The main challenge with the current PR is that it uses the old - about to be deprecated - source and sink APIs of Apache Flink (SourceFunction, SinkFunction). Could you try migrating your implementation to these new APIs? For the Source, https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/dev/datastream/sources/ contains a description of the interfaces and there are already a few examples like [Kafka](https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSource.java) or [Pulsar](https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/PulsarSource.java). For the sink, there is [ElasticSearch](https://github.com/apache/flink-connector-elasticsearch/blob/main/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchSink.java) that uses the new Sink API as well as the [FileSink](https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/sink/FileSink.java). However, I would recommend you have a look whether you can leverage the [Async Sink](https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink) like e.g. the DynamoDB Sink is doing, which is also under development right now. As a prerequisite, you would need to be able to asynchronously write to Redis and tell based on the resulting Future whether the request was successful or not (see Public Interfaces in the Async Sink FLIP). From what I know about Redis this should be possible and would greatly simplify the implementation of a Sink. Lastly, I propose we split this contribution up into at least four separate PRs to get this moving more quickly. * the Source * the Sink * the TableSource * the TableSink Just start with what seems most relevant to you. I am sorry about these requests to port the implementation to the new APIs, but building on the old APIs will not be sustainable and with the new APIs we immediately get all the support for both batch and stream execution in the DataStream and Table API as well as better chances this connector ties in perfectly with checkpointing and watermarking without much work from your side. Thanks again and looking forward to hearing from you. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
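To illustrate the asynchronous write-and-report pattern the Async Sink needs, here is a hedged sketch using the Lettuce client (explicitly not part of the current PR, which uses Jedis); the endpoint and keys are illustrative:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class AsyncRedisWriteSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisAsyncCommands<String, String> commands = connection.async();
            // SET returns a future immediately instead of blocking the writer thread.
            RedisFuture<String> future = commands.set("user:42", "online");
            // The future reports success or failure, which is exactly the signal
            // an AsyncSinkWriter needs to decide whether to retry a request entry.
            future.whenComplete((reply, error) -> {
                if (error != null) {
                    System.err.println("Write failed, would re-queue entry: " + error);
                } else {
                    System.out.println("Write acknowledged: " + reply);
                }
            });
        } finally {
            client.shutdown();
        }
    }
}
```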
[GitHub] [flink-connector-redis] sazzad16 commented on pull request #2: [FLINK-15571][connector][WIP] Redis Stream connector for Flink
sazzad16 commented on PR #2: URL: https://github.com/apache/flink-connector-redis/pull/2#issuecomment-1193897653 Hi @knaufk, Thanks for the pointers and the feedback! @MartijnVisser has also sent me more read material including https://cwiki.apache.org/confluence/display/FLINK/FLIP+Connector+Template, https://cwiki.apache.org/confluence/display/FLINK/FLIP-252%3A+Amazon+DynamoDB+Sink+Connector, https://cwiki.apache.org/confluence/display/FLINK/FLIP-243%3A+Dedicated+Opensearch+connectors. I'll read through the links, and figure out how to best change my code. This may lead to a single PR or multiple PRs, as I get down that path. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] eskabetxe opened a new pull request, #3: [FLINK-15571] Add Redis sink
eskabetxe opened a new pull request, #3: URL: https://github.com/apache/flink-connector-redis/pull/3 Add basic RedisSink based on AsyncSinkWriter. As Jedis doesn't have an async call, one was implemented. Maybe a better choice would have been to use the Lettuce or Redisson libs. A lot of commands still need to be added; it's kept basic to allow discussion of the implementation. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-redis] MartijnVisser merged pull request #4: [hotfix] Add CI and label configurator
MartijnVisser merged PR #4: URL: https://github.com/apache/flink-connector-redis/pull/4 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] MartijnVisser commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
MartijnVisser commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1203925901 @pscls The CI fails due to spotless; can you fix that? (By running `mvn spotless:apply`) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] pscls commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
pscls commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1209062942 > @pscls The CI fails due to spotless; can you fix that? (By running `mvn spotless:apply`) @MartijnVisser I've nothing to commit when running `mvn spotless:apply`: https://user-images.githubusercontent.com/9250254/183599768-253a8b36-3851-4c91-98a2-f04080f2446f.png -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] MartijnVisser commented on pull request #1: [FLINK-20628] RabbitMQ Connector using FLIP-27 Source API
MartijnVisser commented on PR #1: URL: https://github.com/apache/flink-connector-rabbitmq/pull/1#issuecomment-1232836643 @pscls Weird. Could you push once more, since the logs are no longer available? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] MartijnVisser commented on pull request #2: [FLINK-29467][CI] Update CI workflow
MartijnVisser commented on PR #2: URL: https://github.com/apache/flink-connector-rabbitmq/pull/2#issuecomment-1262073428 @zentol Ideally this will be available so that https://github.com/apache/flink-connector-rabbitmq/pull/1 benefits from it when rebasing -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-rabbitmq] MartijnVisser merged pull request #2: [FLINK-29467][CI] Update CI workflow
MartijnVisser merged PR #2: URL: https://github.com/apache/flink-connector-rabbitmq/pull/2 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] zentol opened a new pull request, #1: Add first version of release scripts
zentol opened a new pull request, #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1 This PR adds the first version of shared release utilities for connectors. The expectation is that the `release_utils` branch will be mounted as a git submodule into each connector repo. See the README for the purpose of each script. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] zentol commented on pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#issuecomment-1297059496 Note that we could just do a review round, and then try them out for the ES connector release before merging them. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] MartijnVisser commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
MartijnVisser commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1010743355 ## README.md: ## @@ -1 +1,79 @@ -This repository contains utilities for [Apache Flink](https://flink.apache.org/) connectors. \ No newline at end of file +This is a collection of release utils for [Apache Flink](https://flink.apache.org/) connectors. + +# Integration + +The scripts assume that they are integrated into a connector repo as a submodule into the connector repo +under `tools/releasing/`. + +# Usage + +Some scripts rely on environment variables to be set. +These are checked at the start of each script. +Any instance of `${some_variable}` in this document refers to an environment variable that is used by the respective +script. + +## check_environment.sh + +Runs some pre-release checks for the current environment, for example that all required programs are available. +This should be run once at the start of the release process. + +## publish_snapshot_branch.sh + +Creates (and pushes!) a new snapshot branch for the current commit. +The branch name is automatically determined from the version in the pom. +This script should be called when work on a new major/minor version has started. Review Comment: ```suggestion This script should be called when work on a new major/minor version of the connector has started. ``` ## _init.sh: ## @@ -0,0 +1,45 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# all scripts should contain this line + source ${SCRIPT_DIR}/_init.sh +SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) + +set -o errexit +set -o nounset +set -o pipefail + +export SHELLOPTS + +### + +MVN=${MVN:-mvn} + +if [ "$(uname)" == "Darwin" ]; then Review Comment: While I have Darwin, I can also both run `sha512sum` and `shasum -a 512`. Don't think that's an issue for this script's purpose though ## check_environment.sh: ## @@ -0,0 +1,67 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd) + +source "${SCRIPT_DIR}/_init.sh" + +EXIT_CODE=0 + +function check_program_available { + if program=$(command -v ${1}); then +printf "\t%-10s%s\n" "${1}" "using ${program}" + else +printf "\t%-10s%s\n" "${1}" "is not available." +EXIT_CODE=1 + fi +} + +echo "Checking program availability:" +check_program_available git +check_program_available tar +check_program_available rsync +check_program_available gpg +check_program_available perl +check_program_available sed +check_program_available svn +check_program_available ${MVN} Review Comment: Just to double check, any version of Maven for connectors should suffice, right? Doesn't need to be 3.2.5 like for Flink itself ## README.md: ## @@ -1 +1,79 @@ -This repository contains utilities for [Apache Flink](https://flink.apache.org/) connectors. \ No newline at end of file +This is a collection of release utils for [Apache Flink](https://flink.apache.org/) connectors. + +# Integration + +The scripts assume that they are integrated into a connector repo as a submodule +under `tools/releasing/`. + +# Usage + +Some scripts rely on environment variables to be set. +These are checked at the start of each script. +Any instance of `${some_va
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011296837 ## check_environment.sh: ## @@ -0,0 +1,67 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd) + +source "${SCRIPT_DIR}/_init.sh" + +EXIT_CODE=0 + +function check_program_available { + if program=$(command -v ${1}); then +printf "\t%-10s%s\n" "${1}" "using ${program}" + else +printf "\t%-10s%s\n" "${1}" "is not available." +EXIT_CODE=1 + fi +} + +echo "Checking program availability:" +check_program_available git +check_program_available tar +check_program_available rsync +check_program_available gpg +check_program_available perl +check_program_available sed +check_program_available svn +check_program_available ${MVN} Review Comment: This script neither enforces a maven version nor uses fancy features that would implicitly require a newer maven version. This is all left to the connector. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011297983 ## _init.sh: ## @@ -0,0 +1,45 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# all scripts should contain this line + source ${SCRIPT_DIR}/_init.sh +SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) + +set -o errexit +set -o nounset +set -o pipefail + +export SHELLOPTS + +### + +MVN=${MVN:-mvn} + +if [ "$(uname)" == "Darwin" ]; then Review Comment: Doesn't hurt to have it I guess :shrug: -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011301075 ## README.md: ## @@ -1 +1,79 @@ -This repository contains utilities for [Apache Flink](https://flink.apache.org/) connectors. \ No newline at end of file +This is a collection of release utils for [Apache Flink](https://flink.apache.org/) connectors. + +# Integration + +The scripts assume that they are integrated into a connector repo as a submodule +under `tools/releasing/`. + +# Usage + +Some scripts rely on environment variables to be set. +These are checked at the start of each script. +Any instance of `${some_variable}` in this document refers to an environment variable that is used by the respective +script. + +## check_environment.sh + +Runs some pre-release checks for the current environment, for example that all required programs are available. +This should be run once at the start of the release process. + +## publish_snapshot_branch.sh + +Creates (and pushes!) a new snapshot branch for the current commit. +The branch name is automatically determined from the version in the pom. +This script should be called when work on a new major/minor version has started. + +## update_branch_version.sh + +Updates the version in the poms of the current branch to `${NEW_VERSION}`. + +## stage_source_release.sh + +Creates a source release from the current branch and pushes it via `svn` +to [dist.apache.org](https://dist.apache.org/repos/dist/dev/flink). +The version is automatically determined from the version in the pom. +The created `svn` directory will contain a `-rc${RC_NUM}` suffix. + +## stage_jars.sh + +Creates the jars from the current branch and deploys them to [repository.apache.org](https://repository.apache.org). +The version will be suffixed with `-${FLINK_MINOR_VERSION}` to indicate the supported Flink version. +If a particular version of a connector supports multiple Flink versions then this script should be called multiple +times. + +## publish_git_tag.sh + +Creates a release tag for the current branch and pushes it to GitHub. +The tag will be suffixed with `-rc${RC_NUM}`, if `${RC_NUM}` was set. +This script should only be used _after_ the `-SNAPSHOT` version suffix was removed via `update_branch_version.sh`. + +## update_japicmp_configuration.sh + +Sets the japicmp reference version in the pom of the current branch to `${NEW_VERSION}`, enables compatibility checks +for `@PublicEvolving` when used on snapshot branches and clears the list of exclusions. +This should be called after a release on the associated snapshot branch. If it was a minor release it should +additionally be called on the `main` branch. + +# Common workflow + +1. run `publish_snapshot_branch.sh` +2. do some development work on the created snapshot branch +3. checkout a specific commit to create a release from +4. run `check_environment.sh` Review Comment: Basically the issue is that creating a snapshot branch will sometimes not be done by a release manager. For example, after the ES 3.0 release we have a v3.0 branch and a main branch on 4.0-SNAPSHOT. v3.1 would be created by _someone_ when we want to make some change that requires a new minor version. So all the other things the check does just aren't required. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
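For illustration, the branch-name derivation that `publish_snapshot_branch.sh` performs could look roughly like this (a sketch based on the README's description; the real script may differ):

```bash
# Sketch: derive the snapshot branch name from the pom version.
# E.g. a pom version of 3.1.0-SNAPSHOT (or 3.1-SNAPSHOT) yields v3.1.
version=$(${MVN:-mvn} help:evaluate -Dexpression=project.version -q -DforceStdout)
branch="v$(echo "${version}" | sed -E 's/^([0-9]+\.[0-9]+).*/\1/')"

# create the branch for the current commit and push it
git checkout -b "${branch}"
git push origin "${branch}"
```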
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011301544 ## publish_git_tag.sh: ## @@ -0,0 +1,46 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd) + +source ${SCRIPT_DIR}/_init.sh +source ${SCRIPT_DIR}/_utils.sh + +### + +RC_NUM=${RC_NUM:-none} + +### + +function create_release_tag { + cd "${SOURCE_DIR}" + + version=$(get_pom_version) + + tag=v${version} + if [ "$RC_NUM" != "none" ]; then +tag=${tag}-rc${RC_NUM} + fi Review Comment: We could do that, yes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
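Assuming the tagging logic lands as quoted, usage during a release could look like this (version numbers are illustrative, and the submodule path follows the README):

```bash
# Tag the first release candidate: with a pom version of 3.0.0 this
# creates and pushes the tag v3.0.0-rc1.
RC_NUM=1 ./tools/releasing/shared/publish_git_tag.sh

# After the vote passes, omit RC_NUM (it defaults to "none") to create
# and push the final tag v3.0.0.
./tools/releasing/shared/publish_git_tag.sh
```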
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011297983 ## _init.sh: ## @@ -0,0 +1,45 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +# all scripts should contain this line + source ${SCRIPT_DIR}/_init.sh +SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) + +set -o errexit +set -o nounset +set -o pipefail + +export SHELLOPTS + +### + +MVN=${MVN:-mvn} + +if [ "$(uname)" == "Darwin" ]; then Review Comment: Doesn't hurt to have it I guess :shrug: I suppose we just can't be sure this is the case in general. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011368848 ## _utils.sh: ## @@ -0,0 +1,59 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +function check_variable_set { + variable=$1 + + if [ -z "${!variable:-}" ]; then + echo "${variable} was not set." + exit 1 + fi +} + +function create_pristine_source { + source_dir=$1 + release_dir=$2 + + clone_dir="${release_dir}/tmp-clone" + clean_dir="${release_dir}/tmp-clean-clone" + # create a temporary git clone to ensure that we have a pristine source release + git clone "${source_dir}" "${clone_dir}" + + rsync -a \ +--exclude ".git" --exclude ".gitignore" --exclude ".gitattributes" --exclude ".gitmodules" --exclude ".github" \ +--exclude ".idea" --exclude "*.iml" \ +--exclude ".DS_Store" \ +--exclude ".asf.yaml" \ +--exclude "target" --exclude "tools/releasing/shared" \ Review Comment: ideally the second exclusion would be more dynamic and figure out on its own what the `shared` directory is actually called. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
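One way to make that exclusion dynamic, as suggested, is to read the submodule path from `.gitmodules` instead of hard-coding it (a sketch, assuming a single submodule):

```bash
# Sketch: discover what the shared tooling directory is actually called.
# `git config --file .gitmodules --get-regexp '\.path$'` prints lines like
# "submodule.tools/releasing/shared.path tools/releasing/shared".
shared_dir=$(git config --file "${source_dir}/.gitmodules" --get-regexp '\.path$' | awk '{print $2}')

rsync -a \
    --exclude ".git*" --exclude ".idea" --exclude "*.iml" --exclude ".DS_Store" \
    --exclude ".asf.yaml" --exclude "target" \
    --exclude "${shared_dir}" \
    "${clone_dir}/" "${clean_dir}/"
```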
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011369044 ## README.md: ## @@ -1 +1,79 @@ -This repository contains utilities for [Apache Flink](https://flink.apache.org/) connectors. \ No newline at end of file +This is a collection of release utils for [Apache Flink](https://flink.apache.org/) connectors. + +# Integration + +The scripts assume that they are integrated into a connector repo as a submodule +under `tools/releasing/`. Review Comment: ```suggestion under `tools/releasing/shared`. ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
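For reference, the integration that this README section describes would be set up roughly as follows (the URL is this PR's repository; the exact branch layout is an assumption):

```bash
# In a connector repository: pull the shared release scripts in as a
# submodule at the path the README expects.
git submodule add https://github.com/apache/flink-connector-shared-utils tools/releasing/shared
git commit -m "Add shared release tooling submodule"

# Contributors cloning the connector repo then fetch it with:
git clone --recurse-submodules <connector-repo-url>
```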
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011501540 ## stage_jars.sh: ## @@ -0,0 +1,55 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd) + +source "${SCRIPT_DIR}/_init.sh" +source "${SCRIPT_DIR}/_utils.sh" + +### + +check_variable_set FLINK_MINOR_VERSION + +### + +function deploy_staging_jars { + cd "${SOURCE_DIR}" + mkdir -p "${RELEASE_DIR}" + + project_version=$(get_pom_version) + if [[ ${project_version} =~ -SNAPSHOT$ ]]; then +echo "Jars should not be created for SNAPSHOT versions. Use 'update_branch_version.sh' first." +exit 1 + fi + version=${project_version}-${FLINK_MINOR_VERSION} + + echo "Deploying jars v${version} to repository.apache.org" + echo "To revert this step, login to 'https://repository.apache.org' -> 'Staging repositories' -> Select repository -> 'Drop'" + + clone_dir=$(create_pristine_source "${SOURCE_DIR}" "${RELEASE_DIR}") + cd "${clone_dir}" + set_pom_version "${version}" + + options="-Prelease,docs-and-source -DskipTests -DretryFailedDeploymentCount=10" + ${MVN} clean deploy ${options} Review Comment: This should set the `flink.version` property. May require the user to provide a full Flink version from which we extract the minor version. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1011501540 ## stage_jars.sh: ## @@ -0,0 +1,55 @@ +#!/usr/bin/env bash + +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" &>/dev/null && pwd) + +source "${SCRIPT_DIR}/_init.sh" +source "${SCRIPT_DIR}/_utils.sh" + +### + +check_variable_set FLINK_MINOR_VERSION + +### + +function deploy_staging_jars { + cd "${SOURCE_DIR}" + mkdir -p "${RELEASE_DIR}" + + project_version=$(get_pom_version) + if [[ ${project_version} =~ -SNAPSHOT$ ]]; then +echo "Jars should not be created for SNAPSHOT versions. Use 'update_branch_version.sh' first." +exit 1 + fi + version=${project_version}-${FLINK_MINOR_VERSION} + + echo "Deploying jars v${version} to repository.apache.org" + echo "To revert this step, login to 'https://repository.apache.org' -> 'Staging repositories' -> Select repository -> 'Drop'" + + clone_dir=$(create_pristine_source "${SOURCE_DIR}" "${RELEASE_DIR}") + cd "${clone_dir}" + set_pom_version "${version}" + + options="-Prelease,docs-and-source -DskipTests -DretryFailedDeploymentCount=10" + ${MVN} clean deploy ${options} Review Comment: This should set the `flink.version` property so we actually compile against the correct Flink version. May require the user to provide a full Flink version from which we extract the minor version. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
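What zentol proposes might look like this inside `deploy_staging_jars` (a sketch; treating `FLINK_VERSION` as the user-supplied variable is an assumption):

```bash
# Hypothetical: let the user pass a full Flink version and derive the
# minor version used as the artifact suffix from it.
check_variable_set FLINK_VERSION             # e.g. 1.16.0

FLINK_MINOR_VERSION=${FLINK_VERSION%.*}      # 1.16.0 -> 1.16
version=${project_version}-${FLINK_MINOR_VERSION}

# Compile and deploy against the requested Flink version.
options="-Prelease,docs-and-source -DskipTests -DretryFailedDeploymentCount=10 -Dflink.version=${FLINK_VERSION}"
${MVN} clean deploy ${options}
```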
[GitHub] [flink-connector-shared-utils] leonardBang commented on pull request #1: [FLINK-29472] Add first version of release scripts
leonardBang commented on PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#issuecomment-1299982431 Thanks @zentol and @MartijnVisser for driving this, but I saw the `dev` mailing list received all PR update notifications from this repo. Is some setting incorrect? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] MartijnVisser commented on pull request #1: [FLINK-29472] Add first version of release scripts
MartijnVisser commented on PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#issuecomment-1299988788 > Thanks @zentol and @MartijnVisser for driving this, but I saw the `dev` mailing list received all PR update notifications from this repo. Is some setting incorrect? Yes, we need to add an `.asf.yaml` file to this repo. Will do in a moment :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] leonardBang commented on pull request #1: [FLINK-29472] Add first version of release scripts
leonardBang commented on PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#issuecomment-1303154579 Happy to see you working, @zentol. Could you take a look at my comments here: https://github.com/apache/flink/pull/21227? Sorry to reply under this issue, but I tried pinging you on Slack and got no response. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-connector-shared-utils] zentol commented on a diff in pull request #1: [FLINK-29472] Add first version of release scripts
zentol commented on code in PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1#discussion_r1016492375 ## README.md: ## @@ -1 +1,79 @@ -This repository contains utilities for [Apache Flink](https://flink.apache.org/) connectors. \ No newline at end of file +This is a collection of release utils for [Apache Flink](https://flink.apache.org/) connectors. + +# Integration + +The scripts assume that they are integrated into a connector repo as a submodule +under `tools/releasing/shared`. + +# Usage + +Some scripts rely on environment variables to be set. +These are checked at the start of each script. +Any instance of `${some_variable}` in this document refers to an environment variable that is used by the respective +script. + +## check_environment.sh + +Runs some pre-release checks for the current environment, for example that all required programs are available. +This should be run once at the start of the release process. + +## publish_snapshot_branch.sh + +Creates (and pushes!) a new snapshot branch for the current commit. +The branch name is automatically determined from the version in the pom. +This script should be called when work on a new major/minor version of the connector has started. + +## update_branch_version.sh + +Updates the version in the poms of the current branch to `${NEW_VERSION}`. + +## stage_source_release.sh + +Creates a source release from the current branch and pushes it via `svn` +to [dist.apache.org](https://dist.apache.org/repos/dist/dev/flink). +The version is automatically determined from the version in the pom. +The created `svn` directory will contain a `-rc${RC_NUM}` suffix. + +## stage_jars.sh + +Creates the jars from the current branch and deploys them to [repository.apache.org](https://repository.apache.org). +The version will be suffixed with `-${FLINK_MINOR_VERSION}` to indicate the supported Flink version. +If a particular version of a connector supports multiple Flink versions then this script should be called multiple +times. + +## publish_git_tag.sh + +Creates a release tag for the current branch and pushes it to GitHub. +The tag will be suffixed with `-rc${RC_NUM}`, if `${RC_NUM}` was set. +This script should only be used _after_ the `-SNAPSHOT` version suffix was removed via `update_branch_version.sh`. + +## update_japicmp_configuration.sh + +Sets the japicmp reference version in the pom of the current branch to `${NEW_VERSION}`, enables compatibility checks +for `@PublicEvolving` when used on snapshot branches and clears the list of exclusions. +This should be called after a release on the associated snapshot branch. If it was a minor release it should +additionally be called on the `main` branch. + +# Common workflow + +1. run `publish_snapshot_branch.sh` +2. do some development work on the created snapshot branch +3. checkout a specific commit to create a release from +4. run `check_environment.sh` +5. run `update_branch_version.sh` +6. run `stage_source_release.sh` +7. run `stage_jars.sh` (once for each supported Flink version) +8. run `publish_git_tag.sh` (with `RC_NUM`) +9. vote on release +10. finalize release or cancel and go back to step 2 +11. run `publish_git_tag.sh` (without `RC_NUM`) +12. run `update_japicmp_configuration.sh` (on snapshot branch, and maybe `main`) + +# Script naming conventions + +| Prefix | Meaning | +|---------|---------| +| check | Verifies conditions without making any changes. | +| update | Applies modifications locally to the current branch. 
| +| stage | Publishes an artifact to an intermediate location for voting purposes. | +| publish | Releases an artifact to a user-facing location. | Review Comment: ```suggestion | release | Publishes an artifact to a user-facing location. | ``` It's a bit of a mess with "publishing" having 2 different meanings. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
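Taken together, the README's common workflow might translate into a command sequence like the following (version and RC numbers are illustrative):

```bash
# 1. start work on a new minor version (run from the main branch)
./tools/releasing/shared/publish_snapshot_branch.sh              # creates e.g. v3.1

# 3.-5. prepare a release candidate from a chosen commit
git checkout <release-commit>
./tools/releasing/shared/check_environment.sh
NEW_VERSION=3.1.0 ./tools/releasing/shared/update_branch_version.sh

# 6.-8. stage the artifacts and tag the candidate
RC_NUM=1 ./tools/releasing/shared/stage_source_release.sh
FLINK_MINOR_VERSION=1.16 ./tools/releasing/shared/stage_jars.sh
FLINK_MINOR_VERSION=1.17 ./tools/releasing/shared/stage_jars.sh  # once per supported Flink version
RC_NUM=1 ./tools/releasing/shared/publish_git_tag.sh

# 11.-12. after a successful vote
./tools/releasing/shared/publish_git_tag.sh
NEW_VERSION=3.1.0 ./tools/releasing/shared/update_japicmp_configuration.sh
```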
[GitHub] [flink-connector-shared-utils] zentol merged pull request #1: [FLINK-29472] Add first version of release scripts
zentol merged PR #1: URL: https://github.com/apache/flink-connector-shared-utils/pull/1 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] klion26 commented on a change in pull request #235: [FLINK-13344][docs-zh] Translate "How to Contribute" page into Chinese
klion26 commented on a change in pull request #235: URL: https://github.com/apache/flink-web/pull/235#discussion_r417031566 ## File path: contributing/how-to-contribute.zh.md ## @@ -4,136 +4,138 @@ title: "如何参与贡献" -Apache Flink is developed by an open and friendly community. Everybody is cordially welcome to join the community and contribute to Apache Flink. There are several ways to interact with the community and to contribute to Flink including asking questions, filing bug reports, proposing new features, joining discussions on the mailing lists, contributing code or documentation, improving the website, or testing release candidates. +Apache Flink 是由一个开放友好的社区开发的。我们诚挚地欢迎每个人加入社区并为 Apache Flink 做出贡献。与社区交流和为 Flink 做贡献的方式包括:提问题、填写 bug 报告、提议新特性、参与邮件列表的讨论、贡献代码或文档、改进网站和测试候选发布版本。 Review comment: `填写 bug 报告` 是否可以改成 `报告 bug` 呢 ## File path: contributing/how-to-contribute.zh.md ## @@ -4,136 +4,138 @@ title: "如何参与贡献" -Apache Flink is developed by an open and friendly community. Everybody is cordially welcome to join the community and contribute to Apache Flink. There are several ways to interact with the community and to contribute to Flink including asking questions, filing bug reports, proposing new features, joining discussions on the mailing lists, contributing code or documentation, improving the website, or testing release candidates. +Apache Flink 是由一个开放友好的社区开发的。我们诚挚地欢迎每个人加入社区并为 Apache Flink 做出贡献。与社区交流和为 Flink 做贡献的方式包括:提问题、填写 bug 报告、提议新特性、参与邮件列表的讨论、贡献代码或文档、改进网站和测试候选发布版本。 -What do you want to do? -Contributing to Apache Flink goes beyond writing code for the project. Below, we list different opportunities to help the project: +你想做什么? +为 Apache Flink 做贡献不仅仅是为项目编写代码。以下,我们提供了不同的途径可以为项目提供帮助: - Area - Further information + 可以贡献的领域 + 详细说明 - Report a Bug - To report a problem with Flink, open http://issues.apache.org/jira/browse/FLINK";>Flink’s Jira, log in if necessary, and click on the red Create button at the top. - Please give detailed information about the problem you encountered and, if possible, add a description that helps to reproduce the problem. + 报告 Bug + 要报告 Flink 的问题,请登录 http://issues.apache.org/jira/browse/FLINK";>Flink’s Jira,然后点击顶部红色的 Create 按钮。 + 请提供你遇到的问题的详细信息,如果可以,请附上能够帮助我们复现问题的描述。 - Contribute Code - Read the Code Contribution Guide + 贡献代码 + 请阅读 代码贡献指南 - Help With Code Reviews - Read the Code Review Guide + 帮助做代码审核 + 请阅读 代码审核指南 - Help Preparing a Release + 帮助准备版本发布 -Releasing a new version consists of the following steps: +发布新版本包括以下步骤: - Building a new release candidate and starting a vote (usually for 72 hours) on the dev@flink.apache.org list - Testing the release candidate and voting (+1 if no issues were found, -1 if the release candidate has issues). - Going back to step 1 if the release candidate had issues. Otherwise we publish the release. + 建立新的候选版本并且在 dev@flink.apache.org 邮件列表发起投票(投票通常持续72小时)。 + 测试候选版本并投票 (如果没发现问题就+1,如果候选版本有问题就-1)。 + 如果候选版本有问题就退回到第一步。否则我们发布该版本。 -Read the https://cwiki.apache.org/confluence/display/FLINK/Releasing";>test procedure for a release. 
+请阅读 https://cwiki.apache.org/confluence/display/FLINK/Releasing";>版本测试流程。 - Contribute Documentation - Read the Documentation Contribution Guide + 贡献文档 + 请阅读 文档贡献指南 - Support Flink Users + 支持 Flink 用户 - Reply to questions on the https://flink.apache.org/community.html#mailing-lists";>user mailing list - Reply to Flink related questions on https://stackoverflow.com/questions/tagged/apache-flink";>Stack Overflow with the https://stackoverflow.com/questions/tagged/apache-flink";>apache-flink, https://stackoverflow.com/questions/tagged/flink-streaming";>flink-streaming or https://stackoverflow.com/questions/tagged/flink-sql";>flink-sql tag - Check the latest issues in https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20created%20DESC%2C%20priority%20DESC%2C%20updated%20DESC";>Jira for tickets which are actually user questions + 回答 https://flink.apache.org/community.html#mailing-lists";>用户邮件列表 中的问题 + 回答 https://stackoverflow.com/questions/tagged/apache-flink";>Stack Overflow 上带有 https://stackoverflow.com/questions/tagged/apache-flink";>apache-flink、 https://stackoverflow.com/questions/tagged/flink-streaming";>flink-streaming 或 https://stackoverflow.com/questions/tagged/flink-sql";>flink-sql 标签的 Flink 相关问题 + 检查 https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20created%20DESC%2C%20pri
[GitHub] [flink-web] klion26 commented on pull request #242: [FLINK-13684][docs-zh] Translate "Code Style - Formatting Guide" page into Chinese
klion26 commented on pull request #242: URL: https://github.com/apache/flink-web/pull/242#issuecomment-620981849 @shining-huang thanks for your contribution. Could you please resolve the conflict by rebasing onto the latest master? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] klion26 edited a comment on pull request #242: [FLINK-13684][docs-zh] Translate "Code Style - Formatting Guide" page into Chinese
klion26 edited a comment on pull request #242: URL: https://github.com/apache/flink-web/pull/242#issuecomment-620981849 @shining-huang thanks for your contribution. Could you please resolve the conflict by rebasing onto the latest master? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-shaded] piyushnarang opened a new pull request #85: [FLINK-16955] Bump Zookeeper 3.4.X to 3.4.14
piyushnarang opened a new pull request #85: URL: https://github.com/apache/flink-shaded/pull/85 Follow-up from https://github.com/apache/flink/pull/11938 to commit in the right project. Picking up the updated ZooKeeper dependency allows us to get around an issue in the `StaticHostProvider` of the current ZooKeeper version: if 1 of n configured ZooKeeper hosts is unreachable, the Flink JobManager is not able to start up. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-shaded] piyushnarang commented on pull request #85: [FLINK-16955] Bump Zookeeper 3.4.X to 3.4.14
piyushnarang commented on pull request #85: URL: https://github.com/apache/flink-shaded/pull/85#issuecomment-621507875 cc @zentol - I tried using this version with Flink and hit the issues captured in https://issues.apache.org/jira/browse/FLINK-11259. We need to make a minor tweak to `SecureTestEnvironment.prepare(..)` to get around this: https://github.com/piyushnarang/flink/commit/fd6cc0311d0ef20606ba4566a54315a186889304 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] klion26 commented on a change in pull request #245: [FLINK-13678] Translate "Code Style - Preamble" page into Chinese
klion26 commented on a change in pull request #245: URL: https://github.com/apache/flink-web/pull/245#discussion_r417746431 ## File path: contributing/code-style-and-quality-preamble.zh.md ## @@ -1,25 +1,25 @@ --- -title: "Apache Flink Code Style and Quality Guide — Preamble" +title: "Apache Flink 代码样式与质量指南 — 序言" --- {% include code-style-navbar.zh.md %} -This is an attempt to capture the code and quality standard that we want to maintain. +本文旨在确立我们要维护的代码样式与质量标准。 -A code contribution (or any piece of code) can be evaluated in various ways: One set of properties is whether the code is correct and efficient. This requires solving the _logical or algorithmic problem_ correctly and well. +评估代码贡献(或任何代码片段)有多种方式:一组指标是代码是否正确和高效。这需要正确地解决逻辑或算法问题。 -Another set of properties is whether the code follows an intuitive design and architecture, whether it is well structured with right separation of concerns, and whether the code is easily understandable and makes its assumptions explicit. That set of properties requires solving the _software engineering problem_ well. A good solution implies that the code is easily testable, maintainable also by other people than the original authors (because it is harder to accidentally break), and efficient to evolve. +另一组指标是代码的设计和架构是否直观、结构是否良好、关注点是否正确、代码是否易于理解以及假设是否明确。这需要很好地解决软件工程问题。好的解决方案意味着代码容易测试,可以由原作者之外的其他人维护(代码不容易被意外破坏),并且可持续优化。 -While the first set of properties has rather objective approval criteria, the second set of properties is much harder to assess, but is of high importance for an open source project like Apache Flink. To make the code base inviting to many contributors, to make contributions easy to understand for developers that did not write the original code, and to make the code robust in the face of many contributions, well engineered code is crucial.[^1] For well engineered code, it is easier to keep it correct and fast over time. +第一组指标具有比较客观的评价标准,第二组指标较难于评估,然而对于 Apache Flink 这样的开源项目,第二组指标更加重要。为了能够邀请更多的贡献者,为了使非原始开发人员容易上手参与贡献,为了使大量贡献者协作开发的代码保持健壮,对代码进行精心地设计至关重要。[^1] 随着时间的推移,精心设计的代码更容易保持正确和高效。 -This is of course not a full guide on how to write well engineered code. There is a world of big books that try to capture that. This guide is meant as a checklist of best practices, patterns, anti-patterns, and common mistakes that we observed in the context of developing Flink. +本文当然不是代码设计的完全指南。有海量的书籍研究和讨论相关课题。本指南旨在作为一份清单,列举出我们在开发 Flink 过程中所观察到的最佳实践、模式、反模式和常见错误。 -A big part of high-quality open source contributions is about helping the reviewer to understand the contribution and double-check the implications, so an important part of this guide is about how to structure a pull request for review. +高质量开源贡献的很大一部分是帮助审阅者理解贡献的内容进而对内容进行细致地检查,因此本指南的一个重要部分是如何构建便于代码审查的拉取请求。 Review comment: “pull request” 翻译成 “拉取” 感觉怪怪的,这个地方有其他更好的翻译吗? ## File path: contributing/code-style-and-quality-preamble.zh.md ## @@ -1,25 +1,25 @@ --- -title: "Apache Flink Code Style and Quality Guide — Preamble" +title: "Apache Flink 代码样式与质量指南 — 序言" --- {% include code-style-navbar.zh.md %} -This is an attempt to capture the code and quality standard that we want to maintain. +本文旨在确立我们要维护的代码样式与质量标准。 -A code contribution (or any piece of code) can be evaluated in various ways: One set of properties is whether the code is correct and efficient. This requires solving the _logical or algorithmic problem_ correctly and well. 
+评估代码贡献(或任何代码片段)有多种方式:一组指标是代码是否正确和高效。这需要正确地解决逻辑或算法问题。 -Another set of properties is whether the code follows an intuitive design and architecture, whether it is well structured with right separation of concerns, and whether the code is easily understandable and makes its assumptions explicit. That set of properties requires solving the _software engineering problem_ well. A good solution implies that the code is easily testable, maintainable also by other people than the original authors (because it is harder to accidentally break), and efficient to evolve. +另一组指标是代码的设计和架构是否直观、结构是否良好、关注点是否正确、代码是否易于理解以及假设是否明确。这需要很好地解决软件工程问题。好的解决方案意味着代码容易测试,可以由原作者之外的其他人维护(代码不容易被意外破坏),并且可持续优化。 Review comment: `关注点是否正确` 这个感觉有点奇怪 ## File path: contributing/code-style-and-quality-preamble.zh.md ## @@ -1,25 +1,25 @@ --- -title: "Apache Flink Code Style and Quality Guide — Preamble" +title: "Apache Flink 代码样式与质量指南 — 序言" --- {% include code-style-navbar.zh.md %} -This is an attempt to capture the code and quality standard that we want to maintain. +本文旨在确立我们要维护的代码样式与质量标准。 -A code contribution (or any piece of code) can be evaluated in various ways: One set of properties is whether the code is correct and efficient. This requires solving the _logical or algorithmic problem_ correctly and well. +评估代码贡献(或任何代码片段)有多种方式:一组指标是代码是否正确和高效。这需要正确地解决逻辑或算法问题。 -Another set of properties is whether the code follows an intuitive design and architecture, whether it is well structured w
[GitHub] [flink-web] klion26 commented on pull request #245: [FLINK-13678] Translate "Code Style - Preamble" page into Chinese
klion26 commented on pull request #245: URL: https://github.com/apache/flink-web/pull/245#issuecomment-621605899 Seems the original author's account has been deleted. Maybe someone else can take this over? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] XBaith commented on a change in pull request #245: [FLINK-13678] Translate "Code Style - Preamble" page into Chinese
XBaith commented on a change in pull request #245: URL: https://github.com/apache/flink-web/pull/245#discussion_r417749335 ## File path: contributing/code-style-and-quality-preamble.zh.md ## @@ -1,25 +1,25 @@ --- -title: "Apache Flink Code Style and Quality Guide — Preamble" +title: "Apache Flink 代码样式与质量指南 — 序言" --- {% include code-style-navbar.zh.md %} -This is an attempt to capture the code and quality standard that we want to maintain. +本文旨在确立我们要维护的代码样式与质量标准。 -A code contribution (or any piece of code) can be evaluated in various ways: One set of properties is whether the code is correct and efficient. This requires solving the _logical or algorithmic problem_ correctly and well. +评估代码贡献(或任何代码片段)有多种方式:一组指标是代码是否正确和高效。这需要正确地解决逻辑或算法问题。 -Another set of properties is whether the code follows an intuitive design and architecture, whether it is well structured with right separation of concerns, and whether the code is easily understandable and makes its assumptions explicit. That set of properties requires solving the _software engineering problem_ well. A good solution implies that the code is easily testable, maintainable also by other people than the original authors (because it is harder to accidentally break), and efficient to evolve. +另一组指标是代码的设计和架构是否直观、结构是否良好、关注点是否正确、代码是否易于理解以及假设是否明确。这需要很好地解决软件工程问题。好的解决方案意味着代码容易测试,可以由原作者之外的其他人维护(代码不容易被意外破坏),并且可持续优化。 -While the first set of properties has rather objective approval criteria, the second set of properties is much harder to assess, but is of high importance for an open source project like Apache Flink. To make the code base inviting to many contributors, to make contributions easy to understand for developers that did not write the original code, and to make the code robust in the face of many contributions, well engineered code is crucial.[^1] For well engineered code, it is easier to keep it correct and fast over time. +第一组指标具有比较客观的评价标准,第二组指标较难于评估,然而对于 Apache Flink 这样的开源项目,第二组指标更加重要。为了能够邀请更多的贡献者,为了使非原始开发人员容易上手参与贡献,为了使大量贡献者协作开发的代码保持健壮,对代码进行精心地设计至关重要。[^1] 随着时间的推移,精心设计的代码更容易保持正确和高效。 -This is of course not a full guide on how to write well engineered code. There is a world of big books that try to capture that. This guide is meant as a checklist of best practices, patterns, anti-patterns, and common mistakes that we observed in the context of developing Flink. +本文当然不是代码设计的完全指南。有海量的书籍研究和讨论相关课题。本指南旨在作为一份清单,列举出我们在开发 Flink 过程中所观察到的最佳实践、模式、反模式和常见错误。 -A big part of high-quality open source contributions is about helping the reviewer to understand the contribution and double-check the implications, so an important part of this guide is about how to structure a pull request for review. +高质量开源贡献的很大一部分是帮助审阅者理解贡献的内容进而对内容进行细致地检查,因此本指南的一个重要部分是如何构建便于代码审查的拉取请求。 Review comment: 我觉得可以不用翻译,这应该算是github专门的词 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] klion26 commented on pull request #247: [FLINK-13683] Translate "Code Style - Component Guide" page into Chinese
klion26 commented on pull request #247: URL: https://github.com/apache/flink-web/pull/247#issuecomment-621676247 @chaojianok thanks for your contribution. Could you please get rid of the `git merge` commit in the history? You can use `git rebase` or another git command to achieve it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
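A typical way to do that, assuming the fork tracks the apache repository as `upstream` and the PR branch is checked out:

```bash
# Rebase the PR branch onto the latest master; replaying the commits
# linearizes the history and drops the `git merge` commit.
git fetch upstream
git rebase upstream/master

# After resolving any conflicts, update the pull request.
git push --force-with-lease origin HEAD
```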
[GitHub] [flink-shaded] zentol commented on a change in pull request #85: [FLINK-16955] Bump Zookeeper 3.4.X to 3.4.14
zentol commented on a change in pull request #85: URL: https://github.com/apache/flink-shaded/pull/85#discussion_r417859654 ## File path: flink-shaded-zookeeper-parent/flink-shaded-zookeeper-34/pom.xml ## @@ -128,4 +128,4 @@ under the License. - Review comment: Could you revert this tiny change? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] klion26 commented on a change in pull request #245: [FLINK-13678] Translate "Code Style - Preamble" page into Chinese
klion26 commented on a change in pull request #245: URL: https://github.com/apache/flink-web/pull/245#discussion_r417948740 ## File path: contributing/code-style-and-quality-preamble.zh.md ## @@ -1,25 +1,25 @@ --- -title: "Apache Flink Code Style and Quality Guide — Preamble" +title: "Apache Flink 代码样式与质量指南 — 序言" --- {% include code-style-navbar.zh.md %} -This is an attempt to capture the code and quality standard that we want to maintain. +本文旨在确立我们要维护的代码样式与质量标准。 -A code contribution (or any piece of code) can be evaluated in various ways: One set of properties is whether the code is correct and efficient. This requires solving the _logical or algorithmic problem_ correctly and well. +评估代码贡献(或任何代码片段)有多种方式:一组指标是代码是否正确和高效。这需要正确地解决逻辑或算法问题。 -Another set of properties is whether the code follows an intuitive design and architecture, whether it is well structured with right separation of concerns, and whether the code is easily understandable and makes its assumptions explicit. That set of properties requires solving the _software engineering problem_ well. A good solution implies that the code is easily testable, maintainable also by other people than the original authors (because it is harder to accidentally break), and efficient to evolve. +另一组指标是代码的设计和架构是否直观、结构是否良好、关注点是否正确、代码是否易于理解以及假设是否明确。这需要很好地解决软件工程问题。好的解决方案意味着代码容易测试,可以由原作者之外的其他人维护(代码不容易被意外破坏),并且可持续优化。 -While the first set of properties has rather objective approval criteria, the second set of properties is much harder to assess, but is of high importance for an open source project like Apache Flink. To make the code base inviting to many contributors, to make contributions easy to understand for developers that did not write the original code, and to make the code robust in the face of many contributions, well engineered code is crucial.[^1] For well engineered code, it is easier to keep it correct and fast over time. +第一组指标具有比较客观的评价标准,第二组指标较难于评估,然而对于 Apache Flink 这样的开源项目,第二组指标更加重要。为了能够邀请更多的贡献者,为了使非原始开发人员容易上手参与贡献,为了使大量贡献者协作开发的代码保持健壮,对代码进行精心地设计至关重要。[^1] 随着时间的推移,精心设计的代码更容易保持正确和高效。 -This is of course not a full guide on how to write well engineered code. There is a world of big books that try to capture that. This guide is meant as a checklist of best practices, patterns, anti-patterns, and common mistakes that we observed in the context of developing Flink. +本文当然不是代码设计的完全指南。有海量的书籍研究和讨论相关课题。本指南旨在作为一份清单,列举出我们在开发 Flink 过程中所观察到的最佳实践、模式、反模式和常见错误。 -A big part of high-quality open source contributions is about helping the reviewer to understand the contribution and double-check the implications, so an important part of this guide is about how to structure a pull request for review. +高质量开源贡献的很大一部分是帮助审阅者理解贡献的内容进而对内容进行细致地检查,因此本指南的一个重要部分是如何构建便于代码审查的拉取请求。 Review comment: 恩,我觉得不翻译也是一种选择,这更像一个专有名词 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] morsapaes opened a new pull request #332: [blog] Flink's application to Google Season of Docs.
morsapaes opened a new pull request #332: URL: https://github.com/apache/flink-web/pull/332 Adding a blog post to announce Flink's application to [Google Season of Docs](https://developers.google.com/season-of-docs). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-shaded] piyushnarang commented on pull request #85: [FLINK-16955] Bump Zookeeper 3.4.X to 3.4.14
piyushnarang commented on pull request #85: URL: https://github.com/apache/flink-shaded/pull/85#issuecomment-621874740 Yeah, let me double check if things continue to work after the excludes. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-shaded] piyushnarang commented on a change in pull request #85: [FLINK-16955] Bump Zookeeper 3.4.X to 3.4.14
piyushnarang commented on a change in pull request #85: URL: https://github.com/apache/flink-shaded/pull/85#discussion_r418035841 ## File path: flink-shaded-zookeeper-parent/flink-shaded-zookeeper-34/pom.xml ## @@ -128,4 +128,4 @@ under the License. - Review comment: Yes, will do. Missed this when I put it up. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-shaded] piyushnarang commented on pull request #85: [FLINK-16955] Bump Zookeeper 3.4.X to 3.4.14
piyushnarang commented on pull request #85: URL: https://github.com/apache/flink-shaded/pull/85#issuecomment-622144494 @zentol added the updates you requested. I did some basic sanity checking/testing after excluding the spotbugs and jsr305 dependencies, and it seems to work OK. Do you know if there's a way to trigger the full Flink CI run for flink-shaded changes? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] alpinegizmo opened a new pull request #333: [FLINK-17490] Add training page
alpinegizmo opened a new pull request #333: URL: https://github.com/apache/flink-web/pull/333 Now that the documentation has a training section, it would be good to help folks find it by promoting it from the project website. This adds training.md and training.zh.md, and adds a Training entry to the site navigation.  This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] alpinegizmo commented on pull request #333: [FLINK-17490] Add training page
alpinegizmo commented on pull request #333: URL: https://github.com/apache/flink-web/pull/333#issuecomment-622351679 I've created https://issues.apache.org/jira/browse/FLINK-17491 for the Chinese translation. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] morsapaes commented on pull request #333: [FLINK-17490] Add training page
morsapaes commented on pull request #333: URL: https://github.com/apache/flink-web/pull/333#issuecomment-622432180 Hey, David! Thanks a lot for doing the whole integration of the training — it's a super valuable resource that indeed deserves more attention. What about linking this from the "Getting Started" dropdown, instead? I'm afraid we're starting to accumulate quite a lot of disparate links in the navigation bar. Once the discussion to revamp the landing page kicks off, we can make sure that the self-paced training is properly highlighted there. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-web] XBaith commented on pull request #333: [FLINK-17490] Add training page
XBaith commented on pull request #333: URL: https://github.com/apache/flink-web/pull/333#issuecomment-622439043 > Hey, David! Thanks a lot for doing the whole integration of the training — it's a super valuable resource that indeed deserves more attention. > > What about linking this from the "Getting Started" dropdown, instead? I'm afraid we're starting to accumulate quite a lot of disparate links in the navigation bar. Once the discussion to revamp the landing page kicks off, we can make sure that the self-paced training is properly highlighted there. Good idea. I also think that putting "Training" under "Getting Started" will help users learn Flink quickly and systematically. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org