Re: [DISCUSS][2.0] FLIP-340: Remove rescale REST endpoint
+1

On Wed, 19 Jul 2023, 04:25 ConradJam wrote:
> +1

On Wed, Jul 19, 2023 at 10:53 AM Zhu Zhu wrote:
> +1
>
> Thanks,
> Zhu

On Tue, Jul 18, 2023 at 7:09 PM Jing Ge wrote:
> +1

On Tue, Jul 18, 2023 at 1:05 PM Maximilian Michels wrote:
> +1

On Tue, Jul 18, 2023 at 12:29 PM Gyula Fóra wrote:
> +1

On Tue, 18 Jul 2023 at 12:12, Xintong Song wrote:
> +1
>
> Best,
>
> Xintong

On Tue, Jul 18, 2023 at 4:25 PM Chesnay Schepler <ches...@apache.org> wrote:
> The endpoint hasn't been working for years and was only kept to inform
> users about it. Let's finally remove it.
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-340%3A+Remove+rescale+REST+endpoint
[jira] [Created] (FLINK-32629) Add support for dynamic CEP
张一帆 created FLINK-32629:
---
Summary: Add support for dynamic CEP
Key: FLINK-32629
URL: https://issues.apache.org/jira/browse/FLINK-32629
Project: Flink
Issue Type: New Feature
Components: Library / CEP
Affects Versions: 1.18.0
Reporter: 张一帆
Fix For: 1.18.0

When using CEP as a complex event processing engine, any change to the pattern logic or a threshold currently requires stopping the entire program, modifying the code, repackaging it, and resubmitting it to the cluster. Neither dynamic logic modification nor external dynamic injection is possible. This issue implements dynamic injection of CEP logic: logic modification is message-driven, so specific control messages can be injected manually at the source end to achieve fine-grained control over how logic injection is perceived.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
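The message-driven mechanism described above — replacing pattern thresholds via injected control messages instead of a stop/repackage/resubmit cycle — can be illustrated with a minimal, Flink-free sketch. All names below are hypothetical and not part of any proposed API:

```java
// Hypothetical sketch of message-driven rule updates: the matching logic
// reads a threshold that a control message can replace at runtime, so no
// job restart is needed to adjust it.
class DynamicThresholdRule {
    private volatile double threshold; // volatile: updated from a control path

    DynamicThresholdRule(double initialThreshold) {
        this.threshold = initialThreshold;
    }

    // Called when a control message is injected at the source end.
    void onControlMessage(double newThreshold) {
        this.threshold = newThreshold;
    }

    // The CEP-style condition evaluated against each incoming event.
    boolean matches(double eventValue) {
        return eventValue > threshold;
    }
}
```

In a real Flink job the control messages would typically arrive on a separate (e.g. broadcast) stream; the sketch only shows the update-without-restart idea.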
Re: [DISCUSS] Deprecate multiple APIs in 1.18
Hi Xintong,

> IIUC, you are suggesting to mark the classes with a red background as
> `@Deprecated` in 1.18?

Exactly. They are not marked at the code level but are supposed to be deprecated.

> - For the ManagedTable related classes, is there any FLIP that explicitly
> decides to deprecate them? If not, I think it would be nice to have one,
> to formally decide the deprecation with a vote. I'd expect such a FLIP
> can be quite lightweight.

Currently, there's no formal FLIP yet. I'd like to prepare one to initiate the deprecation process.

> - For `SourceFunctionProvider`, please beware there are recently some
> objections against deprecating `SourceFunction`, and the code changes
> marking it as `@Deprecated` might be reverted. See [1][2] for more
> details. So my question is, if `SourceFunction` will not be deprecated
> until all sub-tasks in FLINK-28045 are resolved, do you still think we
> should deprecate `SourceFunctionProvider` now?

Thanks for the reminder. If `SourceFunction` is no longer being deprecated, then `SourceFunctionProvider` also needs to be retained. I have updated the sheet and removed `SourceFunctionProvider`.

> I'm in general +1 to add the missing API annotations. However, I don't
> have the expertise to comment on the classes and suggested API levels
> being listed.

I've updated the sheet and added comments on all the APIs that are suggested to be marked as PublicEvolving, explaining the reasons. The remaining APIs are either util classes or implementations and are hence suggested to be Internal. I can start a discussion about the suggested API levels to find developers who can help review them.

Best,
Jane

On Wed, Jul 19, 2023 at 12:22 PM Xintong Song wrote:
> Thanks for the beautiful sheets, Jane.
>
> > 1. This sheet <
> > https://docs.google.com/spreadsheets/d/1dZBNHLuAHYJt3pFU8ZtfUzrYyf2ZFQ6wybDXGS1bHno/edit?usp=sharing
> > > summarizes the user-facing classes and methods that need to be
> > deprecated under the flink-table module, some of which are marked with
> > a red background and belong to the APIs that need to be deprecated but
> > are not explicitly marked in the code. This mainly includes legacy
> > table source/sink, legacy table schema, legacy SQL function, and some
> > internal APIs designed for Paimon that are now obsolete.
>
> IIUC, you are suggesting to mark the classes with a red background as
> `@Deprecated` in 1.18?
>
> - +1 for deprecating `StreamRecordTimestamp` & `ExistingField` in 1.18.
>   Based on your description, it seems these were not marked by mistake.
>   Let's fix them.
> - For the ManagedTable related classes, is there any FLIP that explicitly
>   decides to deprecate them? If not, I think it would be nice to have
>   one, to formally decide the deprecation with a vote. I'd expect such a
>   FLIP can be quite lightweight.
> - For `SourceFunctionProvider`, please beware there are recently some
>   objections against deprecating `SourceFunction`, and the code changes
>   marking it as `@Deprecated` might be reverted. See [1][2] for more
>   details. So my question is, if `SourceFunction` will not be deprecated
>   until all sub-tasks in FLINK-28045 are resolved, do you still think we
>   should deprecate `SourceFunctionProvider` now?
>
> > 2. In addition, during the process of organizing, it was found that
> > some APIs under the flink-table-api-java and flink-table-common modules
> > do not have an explicit API annotation (you can find detailed
> > information in this sheet <
> > https://docs.google.com/spreadsheets/d/1e8M0tUtKkZXEd8rCZtZ0C6Ty9QkNaPySsrCgz0vEID4/edit?usp=sharing
> > >). I suggest explicitly marking the level for these APIs.
>
> I'm in general +1 to add the missing API annotations. However, I don't
> have the expertise to comment on the classes and suggested API levels
> being listed.
>
> > 3. As there are still some internal and test code dependencies on these
> > APIs, can we first gradually migrate these dependencies to alternative
> > APIs to make the deprecation process relatively easy?
>
> That makes sense to me. I think the purpose of trying to mark the APIs as
> deprecated in 1.18 is to send users the signal early that these APIs will
> be removed. As for the internal and test code dependencies, I don't see
> any problem in gradually migrating them.
>
> Best,
>
> Xintong
>
> [1] https://lists.apache.org/thread/734zhkvs59w2o4d1rsnozr1bfqlr6rgm
> [2] https://issues.apache.org/jira/browse/FLINK-28046
>
> On Wed, Jul 19, 2023 at 11:41 AM Jane Chan wrote:
>
> > Hi Xintong,
> >
> > Thanks for driving this topic. Regarding the Table API deprecation, I
> > can provide some details to help with the process.
> >
> > 1. This sheet <
> > https://docs.google.com/spreadsheets/d/1dZBNHLuAHYJt3pFU8ZtfUzrYyf2ZFQ6wybDXGS1bHno/edit?usp=sharing
> > > summarizes the user-facing classes and methods that need to be
> > deprecated under the flink-table module, some of which are marked with
> > a red background
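As a side note on the mechanics being discussed, a deprecation of this kind usually takes the following shape. The class and method names below are invented for illustration only, and Flink additionally uses its own stability annotations such as @PublicEvolving, which are not shown here:

```java
// Illustrative-only sketch of the deprecation pattern under discussion:
// mark the old API @Deprecated in 1.x, point users at the replacement in
// the Javadoc, and keep the old entry point delegating until removal in 2.0.
class LegacySchemaApi {
    /**
     * @deprecated Use {@link #describeSchema()} instead; planned for
     *             removal in 2.0.
     */
    @Deprecated
    static String describeTableSchema() {
        return describeSchema(); // old entry point delegates to the new one
    }

    static String describeSchema() {
        return "id BIGINT, name STRING";
    }
}
```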
Re: [DISCUSS] Deprecate multiple APIs in 1.18
Thank you, Jane. What you said (preparing a FLIP for the ManagedTable related classes, not deprecating `SourceFunctionProvider`, and starting a dedicated discussion for the missing annotations) sounds good to me.

In addition, if there are no objections from the community on marking `StreamRecordTimestamp` & `ExistingField` as deprecated, you may consider creating a Jira ticket for it as a sub-task of FLINK-32557.

Best,

Xintong

On Wed, Jul 19, 2023 at 3:48 PM Jane Chan wrote:
> Hi Xintong,
>
> Currently, there's no formal FLIP yet. I'd like to prepare one to
> initiate the deprecation process.
>
> Thanks for the reminder. If `SourceFunction` is no longer being
> deprecated, then `SourceFunctionProvider` also needs to be retained. I
> have updated the sheet and removed `SourceFunctionProvider`.
>
> I've updated the sheet and added comments on all the APIs that are
> suggested to be marked as PublicEvolving, explaining the reasons. The
> remaining APIs are either util classes or implementations and are hence
> suggested to be Internal. I can start a discussion about the suggested
> API levels to find developers who can help review them.
>
> Best,
> Jane
Re: Re: [DISCUSS] Release 2.0 Work Items
First off, good discussion on these topics. +1 on Xintong's latest proposal in this thread.

On Wed, Jul 19, 2023 at 5:16 AM Xintong Song wrote:
> I went through the remaining Jira tickets that have fix-version 2.0.0
> and are not included in FLINK-3975.
>
> I skipped the 3 umbrella tickets below and their subtasks, which are
> newly created for the 2.0 work items.
>
> - FLINK-32377 Breaking REST API changes
> - FLINK-32378 Breaking Metrics system changes
> - FLINK-32383 2.0 Breaking configuration changes
>
> I'd suggest going ahead with the following tickets.
>
> - Need action in 1.18
>   - FLINK-29739: Already listed in the release 2.0 wiki. Needs to mark
>     all Scala APIs as deprecated.
> - Need no action in 1.18
>   - FLINK-23620: Already listed in the release 2.0 wiki.
>   - FLINK-15470/30246/32437: Behavior changes, no API to be deprecated.
>
> I'd suggest not doing the following tickets.
>
> - FLINK-11409: Subsumed by "Convert user-facing concrete classes into
>   interfaces" in the release 2.0 wiki.
>
> I'd suggest leaving the following tickets as TBD, and would be slightly
> in favor of not doing them unless someone volunteers to look more into
> them.
>
> - FLINK-10113 Drop support for pre 1.6 shared buffer state
> - FLINK-10374 [Map State] Let user value serializer handle null values
> - FLINK-13928 Make windows api more extendable
> - FLINK-17539 Migrate the configuration options which do not follow the
>   xyz.max/min pattern
>
> Best,
>
> Xintong
>
> On Tue, Jul 18, 2023 at 5:20 PM Wencong Liu wrote:
>
> > Hi Chesnay,
> > Thanks for the reply. I think it is reasonable to remove the
> > configuration argument in AbstractUdfStreamOperator#open if it is
> > consistently empty. I'll propose a discussion about the specific
> > actions in FLINK-6912 at a later time.
> > Best,
> > Wencong Liu
> >
> > At 2023-07-18 16:38:59, "Chesnay Schepler" wrote:
> > > On 18/07/2023 10:33, Wencong Liu wrote:
> > > > For FLINK-6912:
> > > >
> > > > There are three implementations of RichFunction that actually use
> > > > the Configuration parameter in RichFunction#open:
> > > > 1. ContinuousFileMonitoringFunction#open: It uses the configuration
> > > > to configure the FileInputFormat. [1]
> > > > 2. OutputFormatSinkFunction#open: It uses the configuration
> > > > to configure the OutputFormat. [2]
> > > > 3. InputFormatSourceFunction#open: It uses the configuration
> > > > to configure the InputFormat. [3]
> > >
> > > And none of them should have any effect since the configuration is
> > > empty.
> > >
> > > See
> > > org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator#open.
Re: [DISCUSS] FLIP 333 - Redesign Apache Flink website
+0. I think it has to grow on me. A couple of things from my end:

- Have we evaluated whether these new designs are an improvement in terms of W3C's Accessibility, Usability & Inclusion guidance [1]? It is something that the ASF rightfully emphasises.
- "there is general consensus in the community that the Flink documentation is very well-organized and easily searchable." -> That's actually not the case; there are numerous FLIPs on this topic [2][3] which haven't been concluded/implemented.
- I don't think we should put the links to the GitHub repo and the blog posts in the footer: these are some of the most read/visited links.

Thanks,

Martijn

[1] https://www.w3.org/WAI/fundamentals/accessibility-usability-inclusion/
[2] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=127405685
[3] https://cwiki.apache.org/confluence/display/FLINK/FLIP-42%3A+Rework+Flink+Documentation

On Mon, Jul 17, 2023 at 2:31 PM Maximilian Michels wrote:
> +1
>
> On Mon, Jul 17, 2023 at 10:45 AM Chesnay Schepler wrote:
> >
> > +1
> >
> > On 16/07/2023 08:10, Mohan, Deepthi wrote:
> > > @Chesnay
> > >
> > > Thank you for your feedback.
> > >
> > > An important takeaway from the previous discussion [1] and your
> > > feedback was to keep the design and text/diagram changes separate, as
> > > each change for text and diagrams likely requires deeper discussion.
> > > Therefore, as a first step I am proposing only UX changes with
> > > minimal text changes for the pages mentioned in the FLIP.
> > >
> > > The feedback we received from customers covers both aesthetic and
> > > functional aspects of the website. Note that most feedback is focused
> > > only on the main Flink website [2].
> > >
> > > 1) New customers who are considering Flink have said about the
> > > website "there is a lot going on", "looks too complicated", "I am not
> > > sure *why* I should use this" and similar feedback.
> > > The proposed redesign in this FLIP helps partially address this
> > > category of feedback, but we may need to make the use cases and value
> > > proposition "pop" more than we have currently proposed in the
> > > redesign. I'd like to get the community's thoughts on this.
> > >
> > > 2) On the look and feel of the website, I've already shared feedback
> > > prior that I am repeating here: "like a wiki page thrown together by
> > > developers." Customers also point out other related Apache project
> > > websites, [3] and [4], as having "modern" user design. The proposed
> > > redesign in this FLIP will help address this feedback. Modernizing
> > > the look and feel of the website will appeal to customers who are
> > > used to what they encounter on other contemporary websites.
> > >
> > > 3) New and existing Flink developers have said "I am not sure what
> > > the diagram is supposed to depict" - referencing the main diagram on
> > > [2] - and have said that the website lacks useful graphics and
> > > colors. Apart from removing the diagram on the main page [2], the
> > > current FLIP does not propose major changes to diagrams in the rest
> > > of the website, and we can discuss them separately as they become
> > > available. I'd like to keep the FLIP focused only on the website
> > > redesign.
> > >
> > > Ultimately, to Chesnay's point in the earlier discussion in [1], I do
> > > not want to boil the ocean with all the changes at once. In this
> > > FLIP, my proposal is to first work on the UX design, as that gives us
> > > a good starting point. We can use it as a framework to make iterative
> > > changes and enhancements to diagrams and the actual website content
> > > incrementally.
> > >
> > > I've added a few more screenshots of additional pages to the FLIP
> > > that will give you a clearer picture of the proposed changes for the
> > > main page and the What is Flink [Architecture, Applications, and
> > > Operations] pages.
> > >
> > > And finally, I am not proposing any tooling changes.
> > >
> > > [1] https://lists.apache.org/thread/c3pt00cf77lrtgt242p26lgp9l2z5yc8
> > > [2] https://flink.apache.org/
> > > [3] https://spark.apache.org/
> > > [4] https://kafka.apache.org/
> > >
> > > On 7/13/23, 6:25 AM, "Chesnay Schepler" <ches...@apache.org> wrote:
> > >
> > > On 13/07/2023 08:07, Mohan, Deepthi wrote:
> > > > However, even these developers, when explicitly asked in our
> > > > conversations, often comment that the website could do with a
> > > > redesign
> > >
> > > Can you go into more detail as to their specific concerns? Are there
> > > functional problems with the page, or is this just a matter of "I
> > > don't like the way it looks"?
> > >
> > > What did they have trouble with? Which information was
> > > missing/unnecessary/too hard to find?
> > >
> > > The FLIP states that "we want to modernize the website so that new
> > > and existing users can easily find information to understand what
> > > Flink is, the primary use cases where Flink is useful, and clearly
> > > understand its
Re: [DISCUSS][2.0] FLIP-336: Remove "now" timestamp field from REST responses
+1

On Mon, Jul 17, 2023 at 5:29 AM Xintong Song wrote:
> +1
>
> Best,
>
> Xintong
>
> On Thu, Jul 13, 2023 at 9:05 PM Chesnay Schepler wrote:
>
> > Hello,
> >
> > Several REST responses contain a timestamp field with the current time.
> >
> > There is no known use-case for said timestamp, it makes caching of
> > responses technically sketchy (since the response differs on each call)
> > and it complicates testing since the timestamp field can't be easily
> > predicted.
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263424789
> >
> > Regards,
> >
> > Chesnay
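The caching and testing argument above can be made concrete with a small sketch. The names are hypothetical (this is not the actual Flink handler code), and a deterministic counter stands in for the wall clock so the behaviour is reproducible:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: a response that embeds a "now" timestamp is never
// equal across two calls, which defeats response caching and forces tests
// to mask the field before comparing payloads.
class StatusResponse {
    // Stand-in for a clock; a real handler would read the system time.
    private static final AtomicLong CLOCK = new AtomicLong();

    static Map<String, Object> build(boolean includeNow) {
        Map<String, Object> response = new HashMap<>();
        response.put("jobs-running", 3); // the stable, cacheable payload
        if (includeNow) {
            response.put("now", CLOCK.incrementAndGet()); // differs per call
        }
        return response;
    }
}
```

Dropping the field makes two successive responses compare equal, which is exactly what caching layers and assertion-based tests want.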
Re: [DISCUSS] FLIP 333 - Redesign Apache Flink website
+1. Thanks for proposing this FLIP, Deepthi. The designs on FLIP-333 [1] look fresh and modern, and I feel they achieve the goal in general. A couple of suggestions from my side:

[a] Assuming that no changes are implemented to the Flink documentation, I would like to see a visual with a 'white background' instead of the 'dark mode'. This is primarily for two reasons: firstly, it provides a more consistent experience for the website visitor going from the home page to the documentation (instead of switching from dark to white mode on the website), and secondly, from the accessibility and inclusivity perspective that was mentioned earlier, we should either give the option to switch between dark and white mode or have something that is universally easy to read and consume (not everyone is comfortable reading white text on a dark background).

[b] Regarding the structure of the home page, right now the Flink website has use cases blending with what seem to be Flink's 'technical characteristics' (i.e. the sections that talk about 'Guaranteed correctness', 'Layered APIs', 'Operational Focus', etc.). As someone new to Flink and considering using the technology, I would like to understand the use cases first and then dive into the characteristics that make Flink stand out. I would suggest having one 'Use Cases' section above the 'technical characteristics' to separate the two and make it easy to navigate to the Flink use cases pages [2] directly from this section.

Thank you,

Markos

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-333%3A+Redesign+Apache+Flink+website
[2] https://flink.apache.org/use-cases/

On Wed, Jul 19, 2023 at 10:46 AM Martijn Visser wrote:
> +0. I think it has to grow on me. A couple of things from my end:
>
> - Have we evaluated if these new designs are an improvement on W3C's
>   Accessibility, Usability & Inclusion [1]? It is something that the ASF
>   rightfully emphasises.
[jira] [Created] (FLINK-32630) The log level should change from info to warn/error if job failed
Matt Wang created FLINK-32630:
---
Summary: The log level should change from info to warn/error if job failed
Key: FLINK-32630
URL: https://issues.apache.org/jira/browse/FLINK-32630
Project: Flink
Issue Type: Improvement
Components: Client / Job Submission, Runtime / Coordination
Affects Versions: 1.17.1
Reporter: Matt Wang

When a job fails to submit or run, the level of logs like the following should be changed to WARN or ERROR; logging them at INFO will confuse users.

{code:java}
2023-07-14 20:05:26,863 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Job flink_test_job (08eefd50) switched from state FAILING to FAILED.
org.apache.flink.runtime.JobException: Recovery is suppressed by FailureRateRestartBackoffTimeStrategy(FailureRateRestartBackoffTimeStrategy(failuresIntervalMS=240,backoffTimeMS=2,maxFailuresPerInterval=100){code}

-- This message was sent by Atlassian Jira (v8.20.10#820010)
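The proposal amounts to deriving the log level from the target state of the transition instead of logging every transition at INFO. A minimal sketch using JDK logging levels (the class and method names are invented for illustration; the actual Flink logging code is not shown here):

```java
import java.util.logging.Level;

// Hypothetical sketch: map a job-state transition's target state to a log
// level, so failures surface as WARN/ERROR instead of blending into INFO.
class StateTransitionLogging {
    static Level levelFor(String targetState) {
        switch (targetState) {
            case "FAILED":
                return Level.SEVERE;  // JDK's closest analogue to ERROR
            case "FAILING":
            case "RESTARTING":
                return Level.WARNING;
            default:
                return Level.INFO;    // routine transitions stay at INFO
        }
    }
}
```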
Flink Job restarting frequently.
Flink is restarting daily once.
Flink version: 1.10.0

2023-07-19 12:33:52
org.apache.flink.util.FlinkRuntimeException: Exceeded checkpoint tolerable failure threshold.
    at org.apache.flink.runtime.checkpoint.CheckpointFailureManager.handleTaskLevelCheckpointException(CheckpointFailureManager.java:87)
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.failPendingCheckpointDueToTaskFailure(CheckpointCoordinator.java:1467)
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.discardCheckpoint(CheckpointCoordinator.java:1377)
    at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveDeclineMessage(CheckpointCoordinator.java:719)
    at org.apache.flink.runtime.scheduler.SchedulerBase.lambda$declineCheckpoint$5(SchedulerBase.java:807)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Please help me fix this issue. The job recovers, but I don't want my job to restart, because the in-progress files are not marked as done.

Regards,
Nagireddy Y.
Re: Flink Job restarting frequently.
On Wed, Jul 19, 2023 at 5:55 PM Y SREEKARA BHARGAVA REDDY <ynagiredd...@gmail.com> wrote:
> Flink is restarting daily once.
> Flink version: 1.10.0
>
> 2023-07-19 12:33:52
> org.apache.flink.util.FlinkRuntimeException: Exceeded checkpoint tolerable failure threshold.
>     at org.apache.flink.runtime.checkpoint.CheckpointFailureManager.handleTaskLevelCheckpointException(CheckpointFailureManager.java:87)
>     at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.failPendingCheckpointDueToTaskFailure(CheckpointCoordinator.java:1467)
>     at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.discardCheckpoint(CheckpointCoordinator.java:1377)
>     at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveDeclineMessage(CheckpointCoordinator.java:719)
>     at org.apache.flink.runtime.scheduler.SchedulerBase.lambda$declineCheckpoint$5(SchedulerBase.java:807)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
>
> Please help me fix this issue. The job recovers, but I don't want my job to restart, because the in-progress files are not marked as done.
>
> Regards,
> Nagireddy Y.
Support reading ROWKIND metadata via a ROW_KIND() function
CDC formats like debezium-json and canal-json support reading ROWKIND metadata.

1. Our first scenario is syncing data from operational tables into our streaming warehouse. Operational data in MySQL should NOT be physically deleted, so we use an "is_deleted" column to do logical deletes, and there should NOT be any delete operations happening on our streaming warehouse. But as data grows quickly, we need to delete old data (e.g., older than half a year) from the operational tables to keep table sizes manageable and to keep query performance from degrading. These records, deleted for maintenance purposes, should not be synced into our streaming warehouse, so we have to filter them out in our Flink SQL jobs. But currently it is not convenient to do ROWKIND filtering. That is why I am asking Flink to support reading ROWKIND metadata via a ROW_KIND() function. Then we could use the following Flink SQL to do the filtering. For example:

create table customer_source (
    id BIGINT PRIMARY KEY NOT ENFORCED,
    name STRING,
    region STRING
) with (
    'connector' = 'kafka',
    'format' = 'canal-json',
    ...
);

create table customer_sink (
    id BIGINT PRIMARY KEY NOT ENFORCED,
    name STRING,
    region STRING
) with (
    'connector' = 'paimon'
    ...
);

INSERT INTO customer_sink
SELECT * FROM customer_source WHERE ROW_KIND() <> '-D';

2. Our second scenario is that we need to sink aggregation results into an MQ which does NOT support retract data. Flink provides the upsert-kafka connector, but unfortunately our sink system is NOT Kafka, so we would have to write a customized connector like upsert-kafka again. If Flink SQL supported filtering data by ROWKIND, we would not need to write any more upsert-xxx connectors. For example:

create table customer_source (
    id BIGINT PRIMARY KEY NOT ENFORCED,
    name STRING,
    region STRING
) with (
    'connector' = 'kafka',
    'format' = 'canal-json',
    ...
);

create table customer_agg_sink (
    region STRING,
    cust_count BIGINT
) with (
    'connector' = 'MQ',
    'format' = 'json',
    ...
);

INSERT INTO customer_agg_sink
SELECT * FROM (
    SELECT region, count(1) AS cust_count FROM customer_source GROUP BY region
) t
WHERE ROW_KIND() <> '-U' AND ROW_KIND() <> '-D';

What do you think? Looking forward to your feedback, thanks!
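The two filters the SQL examples express can be modeled outside Flink. Below is a minimal plain-Java sketch; the `ChangeRecord` type is invented for illustration, while the shortnames +I/-U/+U/-D mirror the row-kind values used in the proposal:

```java
import java.util.List;
import java.util.stream.Collectors;

public class RowKindFilterSketch {
    // Hypothetical changelog record: a row key plus its change-kind shortname.
    record ChangeRecord(String key, String kind) {}

    // Keep everything except deletes, mirroring "WHERE ROW_KIND() <> '-D'".
    static List<ChangeRecord> dropDeletes(List<ChangeRecord> changelog) {
        return changelog.stream()
                .filter(r -> !"-D".equals(r.kind()))
                .collect(Collectors.toList());
    }

    // Keep only insert/upsert rows, mirroring
    // "WHERE ROW_KIND() <> '-U' AND ROW_KIND() <> '-D'".
    static List<ChangeRecord> upsertOnly(List<ChangeRecord> changelog) {
        return changelog.stream()
                .filter(r -> !"-U".equals(r.kind()) && !"-D".equals(r.kind()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ChangeRecord> changelog = List.of(
                new ChangeRecord("c1", "+I"),
                new ChangeRecord("c1", "-U"),
                new ChangeRecord("c1", "+U"),
                new ChangeRecord("c2", "+I"),
                new ChangeRecord("c2", "-D"));
        System.out.println(dropDeletes(changelog).size()); // 4
        System.out.println(upsertOnly(changelog).size());  // 3
    }
}
```

The second filter is exactly what an upsert-style sink needs: only +I/+U rows survive, so no retract handling is required downstream.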
Re: [DISCUSS] FLIP-335: Removing Flink's Time classes as part of Flink 2.0
The overall Scala-related plan for this FLIP is to ignore the Scala API because of FLIP-265. The CEP Java/Scala version parity (through the PatternScalaAPICompletenessTest) requires us to touch the Scala API, though, because we want to offer an alternative to the deprecated API in Flink 1.x. I wanted to point that out in that paragraph. The alternative would have been to add an exclusion for the newly added method. That sounded like a worse option. Deprecating the Scala API should be independent of the parity of the Java and Scala APIs in Flink 1.x. I rewrote this paragraph in the FLIP. I hope it helps.

Matthias

On Mon, Jul 17, 2023 at 11:23 AM Chesnay Schepler wrote:
> I don't understand this bit:
>
> "One minor Scala change is necessary, though: We need to touch the Scala
> implementation of the Pattern class (in flink-cep). Pattern requires a
> new method which needs to be implemented in the Scala Pattern class as
> well to comply with PatternScalaAPICompletenessTest."
>
> FLIP-265 states that *all* Scala APIs will be removed, which should
> also cover CEP.
>
> On 13/07/2023 12:08, Matthias Pohl wrote:
> > The 2.0 feature list includes the removal of Flink's Time classes in favor
> > of the JDK's java.time.Duration class. There was already a discussion about
> > it in [1] and FLINK-14068 [2] was created as a consequence of this
> > discussion.
> >
> > I started working on marking the APIs as deprecated in FLINK-32570 [3],
> > where Chesnay raised a fair point that there isn't a FLIP yet to
> > formalize this public API change. Therefore, I went ahead and created
> > FLIP-335 [4] to have this change properly documented.
> >
> > I'm not 100% sure whether there are better ways of checking whether we're
> > covering everything Public API-related. There are even classes which I
> > think might be user-facing but are not labeled accordingly (e.g.
> > flink-cep). But I don't have the proper knowledge in these parts of the
> > code. Therefore, I would propose marking these methods as deprecated
> > anyway, to be on the safe side.
> >
> > I'm open to any suggestions on improving the Test Plan of this change.
> >
> > I'm looking forward to feedback on this FLIP.
> >
> > Best,
> > Matthias
> >
> > [1] https://lists.apache.org/thread/76yywnwf3lk8qn4dby0vz7yoqx7f7pkj
> > [2] https://issues.apache.org/jira/browse/FLINK-14068
> > [3] https://issues.apache.org/jira/browse/FLINK-32570
> > [4] https://cwiki.apache.org/confluence/display/FLINK/FLIP-335%3A+Removing+Flink%27s+Time+classes
Kubernetes Operator 1.6.0 release planning
Hi Devs! Based on our release schedule, it is about time for the next Flink K8s Operator minor release. There are still some minor work items to be completed this week, but I suggest aiming for next Wednesday (July 26th) as the 1.6.0 release-cut - RC1 date. I am volunteering as the release manager but if someone else wants to do it, I would also be happy to simply give assistance :) Please let me know if you agree or disagree with the suggested timeline. Cheers, Gyula
Re: FLIP-342: Remove brackets around keys returned by MetricGroup#getAllVariables
> > We don't have a well-defined process for breaking behavioral changes. We
> > could consider adding a new method with a different name.

Introducing a new API to make the behavioral change visible was also the suggestion in the deprecation ML thread [1]. getEnvironmentVariables (or even getEnvironment) might be a reasonable change.

[1] https://lists.apache.org/thread/vmhzv8fcw2b33pqxp43486owrxbkd5x9

On Tue, Jul 18, 2023 at 1:10 PM Jing Ge wrote:
> +1
>
> On Tue, Jul 18, 2023 at 12:24 PM Xintong Song wrote:
> > +1
> >
> > Best,
> >
> > Xintong
> >
> > On Tue, Jul 18, 2023 at 5:02 PM Chesnay Schepler wrote:
> > > The FLIP number was changed to 342.
> > >
> > > On 18/07/2023 10:56, Chesnay Schepler wrote:
> > > > MetricGroup#getAllVariables returns all variables associated with the
> > > > metric, for example:
> > > >
> > > >   = abcde
> > > >   = 0
> > > >
> > > > The keys are surrounded by brackets for no particular reason.
> > > >
> > > > In virtually every use-case for this method the user is stripping the
> > > > brackets from keys, as done in:
> > > >
> > > > * our Datadog reporter:
> > > > https://github.com/apache/flink/blob/9c3c8afbd9325b5df8291bd831da2d9f8785b30a/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java#L244
> > > > * our Prometheus reporter (implicitly via a character filter):
> > > > https://github.com/apache/flink/blob/9c3c8afbd9325b5df8291bd831da2d9f8785b30a/flink-metrics/flink-metrics-prometheus/src/main/java/org/apache/flink/metrics/prometheus/AbstractPrometheusReporter.java#L236
> > > > * our JMX reporter:
> > > > https://github.com/apache/flink/blob/9c3c8afbd9325b5df8291bd831da2d9f8785b30a/flink-metrics/flink-metrics-jmx/src/main/java/org/apache/flink/metrics/jmx/JMXReporter.java#L223
> > > >
> > > > I propose to change the method spec and implementation to remove the
> > > > brackets around keys.
> > > >
> > > > For migration purposes it may make sense to add a new method with the
> > > > new behavior (getVariables()) and deprecate the old method.
> > > >
> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263425202
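For reference, the stripping that the reporters linked above perform amounts to removing one leading '<' and one trailing '>' from each key. A self-contained sketch of that transformation; the sample variable map below is an assumption for illustration, not Flink's actual output:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VariableKeySketch {
    // Strip one leading '<' and one trailing '>' from a key, as reporters
    // currently do before exporting variable names.
    static String stripBrackets(String key) {
        if (key.length() >= 2 && key.charAt(0) == '<' && key.charAt(key.length() - 1) == '>') {
            return key.substring(1, key.length() - 1);
        }
        return key;
    }

    public static void main(String[] args) {
        // Assumed shape of MetricGroup#getAllVariables today: keys in brackets.
        Map<String, String> variables = new LinkedHashMap<>();
        variables.put("<host>", "abcde");
        variables.put("<subtask_index>", "0");

        variables.forEach((k, v) -> System.out.println(stripBrackets(k) + " = " + v));
        // host = abcde
        // subtask_index = 0
    }
}
```

After the proposed change, this stripping step (and the equivalent character filters) could simply be deleted from each reporter.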
Re: [DISCUSS] FLIP-327: Support stream-batch unified operator to improve job throughput when processing backlog data
Hi Dong,

I have a couple of follow-up questions about switching back and forth between streaming and batch mode, especially around the shuffle/watermark strategy and the keyed state backend. First of all, it might not always be beneficial to switch into batch mode:

- Shuffle strategy
  - Is sorting going to be purely in-memory? If not, spilling to disk might obviously cause larger overheads compared to not sorting the records.
  - If it will be at least partially in-memory, does Flink have some mechanism to reserve optional memory that can be revoked if a new operator starts up? Can this memory be redistributed? Ideally we should use as much of the available memory as possible to avoid spilling costs, while also being able to revoke that memory.
  - Sometimes sorting, even if we have the memory to do it, might be an unnecessary overhead.
- Watermarks
  - Is holding back watermarks always good? Suppose we have tons of data buffered/sorted and waiting to be processed, with multiple windows per key and many different keys. When we switch back to `isBacklog=false`, we first process all of that data before processing watermarks; for operators that are not using sorted input, the state size can explode significantly, causing lots of problems. Even for those that can use sorting, switching to sorting or BatchExecutionKeyedStateBackend is not always a good idea, but keeping RocksDB can also be risky.
- Keyed state backend
  - I think you haven't described what happens during switching from streaming to backlog processing.
  - The switch can be an unnecessary overhead. At the same time, in your current proposal, for `execution.checkpointing.interval-during-backlog > 0` we won't switch to "batch" mode at all. That's a bit of a shame; I don't understand why those two things should be coupled together.

All in all, shouldn't we aim for some more clever process of switching back and forth between streaming/batch modes for the watermark strategy/state backend/sorting, based on some metrics?
Trying to either predict if switching might help, or trying to estimate if the last switch was beneficial? Maybe something along these lines:

- sort only in memory, and during sorting count the number of distinct keys (NDK)
- maybe allow spilling if so far in memory we have NDK * 5 >= #records
- do not allow buffering records above a certain threshold, as otherwise checkpointing can explode
- switch to `BatchExecutionKeyedStateBackend` only if NDK * 2 >= #records
- do not sort if the last NDK (or an EMA of NDK?) * 1.5 <= #records

Or maybe for starters something even simpler, and then test out something more fancy as a follow-up? At the same time, `execution.checkpointing.interval-during-backlog=0` seems a weird setting to me that I would not feel safe recommending to anyone. If processing of a backlog takes a long time, a job might stop making any progress due to some random failures. This is especially dangerous if a job switches from streaming mode back to backlog processing for some reason, as that could happen months after someone started the job with this strange setting. So should we even have it? I would simply disallow it. I could see a power setting like `execution.backlog.use-full-batch-mode-on-start` (default false) that would override any heuristic and switch to backlog mode if someone submits a new job that starts with `isBacklog=true`. Or we could limit the scope of this FLIP to only support starting in batch mode and switching once to streaming, and design the switching back and forth as a follow-up?

I'm looking forward to hearing your thoughts.

Best,
Piotrek

On Wed, Jul 12, 2023 at 12:38, Jing Ge wrote:
> Hi Dong,
>
> Thanks for your reply!
>
> Best regards,
> Jing
>
> On Wed, Jul 12, 2023 at 3:25 AM Dong Lin wrote:
> > Hi Jing,
> >
> > Thanks for the comments. Please see my reply inline.
> >
> > On Wed, Jul 12, 2023 at 5:04 AM Jing Ge wrote:
> > > Hi Dong,
> > >
> > > Thanks for the clarification. Now it is clear for me.
> > > I got additional noob questions wrt the internal sorter.
> > >
> > > 1. when to call setter to set the internalSorterSupported to be true?
> >
> > The developer of the operator class (i.e. those classes which implement
> > `StreamOperator`) should override the `#getOperatorAttributes()` API to set
> > internalSorterSupported to true, if he/she decides to sort records
> > internally in the operator.
> >
> > > 2. "For those operators whose throughput can be considerably improved with an
> > > internal sorter, update it to take advantage of the internal sorter when
> > > its input has isBacklog=true. Typically, operators that involve aggregation
> > > operation (e.g. join, cogroup, aggregate) on keyed inputs can benefit from
> > > using an internal sorter."
> > >
> > > "The operator that performs CoGroup operation will instantiate two
> > > internal sorter to sorts recor
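The NDK-based thresholds Piotr sketches earlier in this thread can be written down as a small decision helper. This is only one possible reading of his suggestion; the constants, method names, and decision structure are illustrative, not a Flink API:

```java
public class BacklogSwitchHeuristic {

    // Allow spilling the in-memory sort buffer only while the data still has
    // enough distinct keys for sorting to pay off (NDK * 5 >= #records).
    static boolean allowSpilling(long distinctKeys, long records) {
        return distinctKeys * 5 >= records;
    }

    // Switch to a batch-style keyed state backend only when many records
    // carry distinct keys (NDK * 2 >= #records).
    static boolean useBatchStateBackend(long distinctKeys, long records) {
        return distinctKeys * 2 >= records;
    }

    // Skip sorting when recent batches had few distinct keys relative to the
    // record count (EMA(NDK) * 1.5 <= #records).
    static boolean skipSorting(double emaDistinctKeys, long records) {
        return emaDistinctKeys * 1.5 <= records;
    }

    public static void main(String[] args) {
        // 100 records over 50 distinct keys: the batch backend looks worthwhile.
        System.out.println(useBatchStateBackend(50, 100)); // true
        // 100 records over 10 distinct keys: keep the streaming backend.
        System.out.println(useBatchStateBackend(10, 100)); // false
    }
}
```

A real implementation would feed these predicates from runtime metrics gathered during sorting, which is exactly the "estimate if the last switch was beneficial" part of the question.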
[jira] [Created] (FLINK-32631) FlinkSessionJob stuck in Created/Reconciling state because of No Job found error in JobManager
Bhupendra Yadav created FLINK-32631: --- Summary: FlinkSessionJob stuck in Created/Reconciling state because of No Job found error in JobManager Key: FLINK-32631 URL: https://issues.apache.org/jira/browse/FLINK-32631 Project: Flink Issue Type: Bug Components: Kubernetes Operator Affects Versions: 1.16.0 Environment: Local Reporter: Bhupendra Yadav

{*}Background{*}: We are using FlinkSessionJob for submitting jobs to a session cluster.

{*}Bug{*}: We frequently encounter a problem where the job gets stuck in CREATED/RECONCILING state. On checking Flink operator logs we see the error {_}Job could not be found{_}. Full trace [here|https://ideone.com/NuAyEK].
# When a Flink session job is submitted, the Flink operator submits the job to the Flink cluster.
# If the Flink JobManager (JM) restarts for some reason, the job may no longer exist in the JM.
# Upon reconciliation, the Flink operator queries the JM's REST API for the job using its jobID, but it receives a 404 error, indicating that the job is not found.
# The operator then encounters an error and logs it, leading to the job getting stuck in an indefinite state.
# Attempting to restart or suspend the job using the operator's provided mechanisms also fails, because the operator keeps calling the REST API and receiving the same 404 error.

{*}Expected Behavior{*}: Ideally, when the Flink operator reconciles a job and finds that it no longer exists in the Flink cluster, it should handle the situation gracefully. Instead of getting stuck and logging errors indefinitely, the operator should mark the job as failed or deleted, or set an appropriate status for it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
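The expected behavior described in this issue can be sketched as a reconciliation step that treats a 404 as a terminal observation instead of retrying forever. The `JobGateway` interface, the status names, and the null-on-404 convention below are all invented for illustration; they are not the operator's actual API:

```java
public class ReconcileSketch {
    // Hypothetical status values for a session job resource.
    enum JobStatus { CREATED, RUNNING, MISSING }

    // Hypothetical gateway; a real implementation would query the
    // JobManager REST API for the job by its jobID.
    interface JobGateway {
        /** Returns the observed status, or null if the REST API answered 404. */
        JobStatus fetchStatus(String jobId);
    }

    // Instead of logging the 404 and retrying indefinitely, record that the
    // job is gone so the operator can fail, redeploy, or clean it up.
    static JobStatus reconcile(JobGateway gateway, String jobId) {
        JobStatus observed = gateway.fetchStatus(jobId);
        return observed == null ? JobStatus.MISSING : observed;
    }

    public static void main(String[] args) {
        // Simulates "Job could not be found" after a JobManager restart.
        JobGateway jmThatLostTheJob = jobId -> null;
        System.out.println(reconcile(jmThatLostTheJob, "abc")); // MISSING
    }
}
```

The key point is that the 404 becomes an explicit state transition rather than a retried error, which is what lets higher-level restart/suspend mechanisms make progress.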
[jira] [Created] (FLINK-32632) Run Kubernetes test is unstable on AZP
Sergey Nuyanzin created FLINK-32632: --- Summary: Run Kubernetes test is unstable on AZP Key: FLINK-32632 URL: https://issues.apache.org/jira/browse/FLINK-32632 Project: Flink Issue Type: Bug Affects Versions: 1.18.0 Reporter: Sergey Nuyanzin This test https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51447&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=43ba8ce7-ebbf-57cd-9163-444305d74117&l=6213 fails with {noformat} 2023-07-19T17:14:49.8144730Z Jul 19 17:14:49 deployment.apps/flink-task-manager created 2023-07-19T17:15:03.7983703Z Jul 19 17:15:03 job.batch/flink-job-cluster condition met 2023-07-19T17:15:04.0937620Z error: Internal error occurred: error executing command in container: http: invalid Host header 2023-07-19T17:15:04.0988752Z sort: cannot read: '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-11919909188/out/kubernetes_wc_out*': No such file or directory 2023-07-19T17:15:04.1017388Z Jul 19 17:15:04 FAIL WordCount: Output hash mismatch. Got d41d8cd98f00b204e9800998ecf8427e, expected e682ec6622b5e83f2eb614617d5ab2cf. {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [DISCUSS][2.0] FLIP-338: Remove terminationMode query parameter from job cancellation REST endpoint
+1 On Mon, Jul 17, 2023 at 5:30 AM Xintong Song wrote: > +1 > > Best, > > Xintong > > > > On Thu, Jul 13, 2023 at 9:41 PM Chesnay Schepler > wrote: > > > Hello, > > > > The job cancellation REST endpoint has a terminationMode query > > parameter, which in the past could be set to either CANCEL or STOP, but > > nowadays the job stop endpoint has subsumed the STOP functionality. > > > > Since then the cancel endpoint rejected requests that specified STOP. > > > > I propose to finally remove this parameter, as it currently serves no > > function. > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-338%3A+Remove+terminationMode+query+parameter+from+job+cancellation+REST+endpoint > > > > > > > > Regards, > > > > Chesnay > > >
Re: [DISCUSS][2.0] FLIP-338: Remove terminationMode query parameter from job cancellation REST endpoint
+1 On Wed, Jul 19, 2023 at 5:14 PM Jing Ge wrote: > +1 > > On Mon, Jul 17, 2023 at 5:30 AM Xintong Song > wrote: > > > +1 > > > > Best, > > > > Xintong > > > > > > > > On Thu, Jul 13, 2023 at 9:41 PM Chesnay Schepler > > wrote: > > > > > Hello, > > > > > > The job cancellation REST endpoint has a terminationMode query > > > parameter, which in the past could be set to either CANCEL or STOP, but > > > nowadays the job stop endpoint has subsumed the STOP functionality. > > > > > > Since then the cancel endpoint rejected requests that specified STOP. > > > > > > I propose to finally remove this parameter, as it currently serves no > > > function. > > > > > > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-338%3A+Remove+terminationMode+query+parameter+from+job+cancellation+REST+endpoint > > > > > > > > > > > > Regards, > > > > > > Chesnay > > > > > >
Re: [DISCUSS][2.0] FLIP-338: Remove terminationMode query parameter from job cancellation REST endpoint
It doesn't need to be part of the Flink 2.0 release per se, but I'm starting to wonder if we'd get more bang for our buck if we started fresh with a v2 REST API vs. one-off cleanups of the current v1 API. @Chesnay Schepler -- wdyt?

The v1 REST API seems to have grown naturally from its original use case of supporting the Web UI, iiuc, but now another of the core use cases is operational (e.g., supporting the K8s Operator). For the operational use case, it is clear that this wasn't the original design goal (e.g., cases exist that require parsing the included Java stack trace to determine what to do).

Maybe @Gyula Fóra also has some experience/suggestions to share on whether this would be valuable.

(also happy to start a new thread, sorry for co-opting this one)

Austin
[jira] [Created] (FLINK-32633) Kubernetes e2e test is not stable
Fang Yong created FLINK-32633: - Summary: Kubernetes e2e test is not stable Key: FLINK-32633 URL: https://issues.apache.org/jira/browse/FLINK-32633 Project: Flink Issue Type: Technical Debt Components: Deployment / Kubernetes, Kubernetes Operator Affects Versions: 1.18.0 Reporter: Fang Yong The output file is: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51444&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=43ba8ce7-ebbf-57cd-9163-444305d74117 Jul 19 17:06:02 Stopping minikube ... Jul 19 17:06:02 * Stopping node "minikube" ... Jul 19 17:06:13 * 1 node stopped. Jul 19 17:06:13 [FAIL] Test script contains errors. Jul 19 17:06:13 Checking for errors... Jul 19 17:06:13 No errors in log files. Jul 19 17:06:13 Checking for exceptions... Jul 19 17:06:13 No exceptions in log files. Jul 19 17:06:13 Checking for non-empty .out files... grep: /home/vsts/work/_temp/debug_files/flink-logs/*.out: No such file or directory Jul 19 17:06:13 No non-empty .out files. Jul 19 17:06:13 Jul 19 17:06:13 [FAIL] 'Run Kubernetes test' failed after 4 minutes and 28 seconds! Test exited with exit code 1 Jul 19 17:06:13 17:06:13 ##[group]Environment Information Jul 19 17:06:13 Jps -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [VOTE] FLIP-309: Support using larger checkpointing interval when source is processing backlog
+1(binding) Best regards, Yuxia - 原始邮件 - 发件人: "Guowei Ma" 收件人: "dev" 发送时间: 星期三, 2023年 7 月 19日 下午 1:54:52 主题: Re: [VOTE] FLIP-309: Support using larger checkpointing interval when source is processing backlog +1(binding) Best, Guowei On Wed, Jul 19, 2023 at 11:18 AM Hang Ruan wrote: > +1 (non-binding) > > Thanks for driving. > > Best, > Hang > > Leonard Xu 于2023年7月19日周三 10:42写道: > > > Thanks Dong for the continuous work. > > > > +1(binding) > > > > Best, > > Leonard > > > > > On Jul 18, 2023, at 10:16 PM, Jingsong Li > > wrote: > > > > > > +1 binding > > > > > > Thanks Dong for continuous driving. > > > > > > Best, > > > Jingsong > > > > > > On Tue, Jul 18, 2023 at 10:04 PM Jark Wu wrote: > > >> > > >> +1 (binding) > > >> > > >> Best, > > >> Jark > > >> > > >> On Tue, 18 Jul 2023 at 20:30, Piotr Nowojski > > wrote: > > >> > > >>> +1 (binding) > > >>> > > >>> Piotrek > > >>> > > >>> wt., 18 lip 2023 o 08:51 Jing Ge > > napisał(a): > > >>> > > +1(binding) > > > > Best regards, > > Jing > > > > On Tue, Jul 18, 2023 at 8:31 AM Rui Fan <1996fan...@gmail.com> > wrote: > > > > > +1(binding) > > > > > > Best, > > > Rui Fan > > > > > > > > > On Tue, Jul 18, 2023 at 12:04 PM Dong Lin > > wrote: > > > > > >> Hi all, > > >> > > >> We would like to start the vote for FLIP-309: Support using larger > > >> checkpointing interval when source is processing backlog [1]. This > > >>> FLIP > > > was > > >> discussed in this thread [2]. > > >> > > >> The vote will be open until at least July 21st (at least 72 > hours), > > >> following > > >> the consensus voting process. > > >> > > >> Cheers, > > >> Yunfeng and Dong > > >> > > >> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-309 > > >> > > >> > > > > > > > >>> > > > %3A+Support+using+larger+checkpointing+interval+when+source+is+processing+backlog > > >> [2] > > https://lists.apache.org/thread/l1l7f30h7zldjp6ow97y70dcthx7tl37 > > >> > > > > > > > >>> > > > > >
Re: Kubernetes Operator 1.6.0 release planning
Thanks Gyula for driving this release. +1 for the timeline Best, Rui Fan On Wed, Jul 19, 2023 at 11:03 PM Gyula Fóra wrote: > Hi Devs! > > Based on our release schedule, it is about time for the next Flink K8s > Operator minor release. > > There are still some minor work items to be completed this week, but I > suggest aiming for next Wednesday (July 26th) as the 1.6.0 release-cut - > RC1 date. > > I am volunteering as the release manager but if someone else wants to do > it, I would also be happy to simply give assistance :) > > Please let me know if you agree or disagree with the suggested timeline. > > Cheers, > Gyula >
Re: Kubernetes Operator 1.6.0 release planning
thank you gyula , for driving it. +1(non binding) Bests, Samrat On Thu, 20 Jul 2023 at 8:02 AM, Rui Fan <1996fan...@gmail.com> wrote: > Thanks Gyula for driving this release. > > +1 for the timeline > > Best, > Rui Fan > > On Wed, Jul 19, 2023 at 11:03 PM Gyula Fóra wrote: > > > Hi Devs! > > > > Based on our release schedule, it is about time for the next Flink K8s > > Operator minor release. > > > > There are still some minor work items to be completed this week, but I > > suggest aiming for next Wednesday (July 26th) as the 1.6.0 release-cut - > > RC1 date. > > > > I am volunteering as the release manager but if someone else wants to do > > it, I would also be happy to simply give assistance :) > > > > Please let me know if you agree or disagree with the suggested timeline. > > > > Cheers, > > Gyula > > >
Re: [VOTE] FLIP-309: Support using larger checkpointing interval when source is processing backlog
+1 (binding) Thanks, Zhu yuxia 于2023年7月20日周四 09:23写道: > > +1(binding) > > Best regards, > Yuxia > > - 原始邮件 - > 发件人: "Guowei Ma" > 收件人: "dev" > 发送时间: 星期三, 2023年 7 月 19日 下午 1:54:52 > 主题: Re: [VOTE] FLIP-309: Support using larger checkpointing interval when > source is processing backlog > > +1(binding) > Best, > Guowei > > > On Wed, Jul 19, 2023 at 11:18 AM Hang Ruan wrote: > > > +1 (non-binding) > > > > Thanks for driving. > > > > Best, > > Hang > > > > Leonard Xu 于2023年7月19日周三 10:42写道: > > > > > Thanks Dong for the continuous work. > > > > > > +1(binding) > > > > > > Best, > > > Leonard > > > > > > > On Jul 18, 2023, at 10:16 PM, Jingsong Li > > > wrote: > > > > > > > > +1 binding > > > > > > > > Thanks Dong for continuous driving. > > > > > > > > Best, > > > > Jingsong > > > > > > > > On Tue, Jul 18, 2023 at 10:04 PM Jark Wu wrote: > > > >> > > > >> +1 (binding) > > > >> > > > >> Best, > > > >> Jark > > > >> > > > >> On Tue, 18 Jul 2023 at 20:30, Piotr Nowojski > > > wrote: > > > >> > > > >>> +1 (binding) > > > >>> > > > >>> Piotrek > > > >>> > > > >>> wt., 18 lip 2023 o 08:51 Jing Ge > > > napisał(a): > > > >>> > > > +1(binding) > > > > > > Best regards, > > > Jing > > > > > > On Tue, Jul 18, 2023 at 8:31 AM Rui Fan <1996fan...@gmail.com> > > wrote: > > > > > > > +1(binding) > > > > > > > > Best, > > > > Rui Fan > > > > > > > > > > > > On Tue, Jul 18, 2023 at 12:04 PM Dong Lin > > > wrote: > > > > > > > >> Hi all, > > > >> > > > >> We would like to start the vote for FLIP-309: Support using larger > > > >> checkpointing interval when source is processing backlog [1]. This > > > >>> FLIP > > > > was > > > >> discussed in this thread [2]. > > > >> > > > >> The vote will be open until at least July 21st (at least 72 > > hours), > > > >> following > > > >> the consensus voting process. 
> > > >> > > > >> Cheers, > > > >> Yunfeng and Dong > > > >> > > > >> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-309 > > > >> > > > >> > > > > > > > > > > >>> > > > > > %3A+Support+using+larger+checkpointing+interval+when+source+is+processing+backlog > > > >> [2] > > > https://lists.apache.org/thread/l1l7f30h7zldjp6ow97y70dcthx7tl37 > > > >> > > > > > > > > > > >>> > > > > > > > >
Re: [DISCUSS][2.0] FLIP-341: Remove MetricGroup methods accepting an int as a name
+1. There are methods that accept a String parameter, which allows users to migrate.

Best,
Leonard

> On Jul 18, 2023, at 9:26 PM, Jing Ge wrote:
>
> +1
>
> On Tue, Jul 18, 2023 at 1:11 PM Chesnay Schepler wrote:
>
>> Good catch; I've fixed the list.
>>
>> On 18/07/2023 12:20, Xintong Song wrote:
>>> +1 in general.
>>>
>>> I think the list of affected public interfaces in the FLIP is not accurate.
>>>
>>> - `#counter(int, Counter)` is missed
>>> - `#meter(int)` should be `#meter(int, Meter)`
>>> - `#group(int)` should be `#addGroup(int)`
>>>
>>> Best,
>>>
>>> Xintong
>>>
>>> On Tue, Jul 18, 2023 at 4:39 PM Chesnay Schepler wrote:
>>>> The MetricGroup interface contains methods to create groups and metrics
>>>> using an int as a name. The original intention was to allow patterns like
>>>> group.addGroup("subtaskIndex").addGroup(0), but this didn't really work
>>>> out, with addGroup(String, String) serving this use case much better.
>>>> Metric methods accept an int mostly for consistency, but there's no good
>>>> use-case for it. These methods also offer hardly any convenience, since
>>>> all they do is save potential users from calling String.valueOf on one
>>>> argument. That doesn't seem valuable enough for something that doubles
>>>> the size of the interface. I propose to remove said methods.
>>>>
>>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-341%3A+Remove+MetricGroup+methods+accepting+an+int+as+a+name
[DISCUSS] FLIP-346: Deprecate ManagedTable related APIs
Hi, devs, I would like to start a discussion on FLIP-346: Deprecate ManagedTable related APIs[1]. These APIs were initially designed for Flink Table Store, which has joined the Apache Incubator as a separate project called Apache Paimon(incubating). Since they are obsolete and not used by Paimon anymore, I propose to deprecate them in v1.18 and further remove them before v2.0. Looking forward to your feedback. [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-346%3A+Deprecate+ManagedTable+related+APIs Best regards, Jane
Re: [DISCUSS] FLIP-346: Deprecate ManagedTable related APIs
+1 On Thu, Jul 20, 2023 at 12:31 PM Jane Chan wrote: > > Hi, devs, > > I would like to start a discussion on FLIP-346: Deprecate ManagedTable > related APIs[1]. > > These APIs were initially designed for Flink Table Store, which has > joined the Apache Incubator as a separate project called Apache > Paimon(incubating). > > Since they are obsolete and not used by Paimon anymore, I propose to > deprecate them in v1.18 and further remove them before v2.0. > > Looking forward to your feedback. > > [1] > https://cwiki.apache.org/confluence/display/FLINK/FLIP-346%3A+Deprecate+ManagedTable+related+APIs > > Best regards, > Jane
Re: [DISCUSS] FLIP-346: Deprecate ManagedTable related APIs
+1 Best, Xintong On Thu, Jul 20, 2023 at 1:25 PM Jingsong Li wrote: > +1 > > On Thu, Jul 20, 2023 at 12:31 PM Jane Chan wrote: > > > > Hi, devs, > > > > I would like to start a discussion on FLIP-346: Deprecate ManagedTable > > related APIs[1]. > > > > These APIs were initially designed for Flink Table Store, which has > > joined the Apache Incubator as a separate project called Apache > > Paimon(incubating). > > > > Since they are obsolete and not used by Paimon anymore, I propose to > > deprecate them in v1.18 and further remove them before v2.0. > > > > Looking forward to your feedback. > > > > [1] > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-346%3A+Deprecate+ManagedTable+related+APIs > > > > Best regards, > > Jane >
[jira] [Created] (FLINK-32634) Deprecate StreamRecordTimestamp and ExistingField
Jane Chan created FLINK-32634: - Summary: Deprecate StreamRecordTimestamp and ExistingField Key: FLINK-32634 URL: https://issues.apache.org/jira/browse/FLINK-32634 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.18.0 Reporter: Jane Chan Fix For: 1.18.0 -- This message was sent by Atlassian Jira (v8.20.10#820010)