[jira] [Created] (FLINK-6072) TestCase CheckpointStateRestoreTest::testSetState should fail
Guowei Ma created FLINK-6072: Summary: TestCase CheckpointStateRestoreTest::testSetState should fail Key: FLINK-6072 URL: https://issues.apache.org/jira/browse/FLINK-6072 Project: Flink Issue Type: Bug Components: State Backends, Checkpointing Reporter: Guowei Ma Assignee: Guowei Ma Priority: Minor This case should fail because the 'CheckpointCoordinator' receives three KeyGroupStateHandles that share the same KeyGroupRange [0,0]. 1. In this case these 'statefulExec's should use three different KeyGroupRanges, such as [0,0], [1,1], [2,2]. 2. Add another test case that verifies the 'CheckpointCoordinator' throws an exception when it receives overlapping KeyGroupRanges. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
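As a rough illustration of the check that point 2 asks for, the sketch below shows one way a coordinator-style component could reject overlapping key-group ranges. `SimpleKeyGroupRange` and `checkNonOverlapping` are hypothetical names for this example only; this is not Flink's actual `KeyGroupRange` or `CheckpointCoordinator` code.
{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Hypothetical stand-in for a key-group range: an inclusive [start, end] interval. */
final class SimpleKeyGroupRange {
    final int start;
    final int end;

    SimpleKeyGroupRange(int start, int end) {
        this.start = start;
        this.end = end;
    }
}

final class KeyGroupRangeChecks {

    /** Throws if any two ranges overlap, e.g. three handles all reporting [0, 0]. */
    static void checkNonOverlapping(List<SimpleKeyGroupRange> ranges) {
        List<SimpleKeyGroupRange> sorted = new ArrayList<>(ranges);
        sorted.sort(Comparator.comparingInt(r -> r.start));
        for (int i = 1; i < sorted.size(); i++) {
            SimpleKeyGroupRange prev = sorted.get(i - 1);
            SimpleKeyGroupRange cur = sorted.get(i);
            // Sorted by start, so an overlap exists exactly when the next range
            // begins before the previous one has ended.
            if (cur.start <= prev.end) {
                throw new IllegalStateException(
                        "Overlapping key group ranges: [" + prev.start + ", " + prev.end
                                + "] and [" + cur.start + ", " + cur.end + "]");
            }
        }
    }
}
{code}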
[jira] [Commented] (FLINK-5892) Recover job state at the granularity of operator
[ https://issues.apache.org/jira/browse/FLINK-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944327#comment-15944327 ] Guowei Ma commented on FLINK-5892: -- Hi [~Zentol], I am discussing the proposal with [~srichter]. I think I will do it after the discussion finishes. > Recover job state at the granularity of operator > > > Key: FLINK-5892 > URL: https://issues.apache.org/jira/browse/FLINK-5892 > Project: Flink > Issue Type: New Feature > Components: State Backends, Checkpointing >Reporter: Guowei Ma >Assignee: Guowei Ma > > JobGraph has no `Operator` info so `ExecutionGraph` can only recovery at the > granularity of task. > This leads to the result that the operator of the job may not recover the > state from a save point even if the save point has the state of operator. > > https://docs.google.com/document/d/19suTRF0nh7pRgeMnIEIdJ2Fq-CcNVt5_hR3cAoICf7Q/edit#. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (FLINK-10469) FileChannel may not write the whole buffer in a single call to FileChannel.write(Buffer buffer)
[ https://issues.apache.org/jira/browse/FLINK-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16633622#comment-16633622 ] Guowei Ma commented on FLINK-10469: --- +1 > FileChannel may not write the whole buffer in a single call to > FileChannel.write(Buffer buffer) > --- > > Key: FLINK-10469 > URL: https://issues.apache.org/jira/browse/FLINK-10469 > Project: Flink > Issue Type: Bug > Components: Core >Affects Versions: 1.4.1, 1.4.2, 1.5.3, 1.6.0, 1.6.1, 1.7.0, 1.5.4, 1.6.2 >Reporter: Yun Gao >Priority: Major > > Currently all the calls to _FileChannel.write(ByteBuffer src)_ assumes that > this method will not return before the whole buffer is written, like the one > in _AsynchronousFileIOChannel.write()._ > > However, this assumption may not be right for all the environments. We have > encountered the case that only part of a buffer was written on a cluster with > a high IO load, and the target file got messy. > > To fix this issue, I think we should add a utility method in the > org.apache.flink.util.IOUtils to ensure the whole buffer is written with a > loop,and replace all the calls to _FileChannel.write(ByteBuffer)_ with this > new method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
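The fix described in the ticket above boils down to a short loop. The method below is only a sketch of that proposal; `writeCompletely` is a hypothetical name, not an existing method in org.apache.flink.util.IOUtils.
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

final class WriteFully {

    /**
     * Keeps calling FileChannel.write until the buffer has no remaining bytes,
     * because a single write call is allowed to write only part of the buffer,
     * e.g. under heavy I/O load.
     */
    static void writeCompletely(FileChannel channel, ByteBuffer src) throws IOException {
        while (src.hasRemaining()) {
            channel.write(src);
        }
    }
}
{code}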
[jira] [Commented] (FLINK-13840) Let StandaloneJobClusterEntrypoint use user code class loader
[ https://issues.apache.org/jira/browse/FLINK-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920265#comment-16920265 ] Guowei Ma commented on FLINK-13840: --- As Till said, an easy way is to place the user code in a directory other than lib and add that directory as an additional classpath of the PackagedProgram. I wrote a draft doc about how to deal with this. Any comments are welcome! [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit#] > Let StandaloneJobClusterEntrypoint use user code class loader > - > > Key: FLINK-13840 > URL: https://issues.apache.org/jira/browse/FLINK-13840 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Major > Fix For: 1.10.0 > > > In order to resolve class loading issues when using the > {{StandaloneJobClusterEntryPoint}}, it would be better to run the user code > in the user code class loader which supports child first class loading. At > the moment, the user code jar is part of the system class path and, hence, > part of the system class loader. > An easy way to solve this problem would be to place the user code in a > different directory than {{lib}} and then specify this path as an additional > classpath when creating the {{PackagedProgram}}. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
Guowei Ma created FLINK-13993: - Summary: Using FlinkUserCodeClassLoaders to load the user class in the perjob mode Key: FLINK-13993 URL: https://issues.apache.org/jira/browse/FLINK-13993 Project: Flink Issue Type: Improvement Components: Runtime / Coordination Affects Versions: 1.9.0, 1.10.0 Reporter: Guowei Ma Currently, Flink has the FlinkUserCodeClassLoader, which is used to load the user's classes. However, in the per-job mode the user classes and the system classes are all loaded by the system classloader. This introduces some conflicts. This document[1] gives a proposal that makes the FlinkUserCodeClassLoader load the user classes in the per-job mode. (Discussed with Till[2].) [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] [2] [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.2#803003)
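As a rough sketch of the separation the ticket aims at, the user jar is resolved through a dedicated class loader instead of being placed on the system class path. The class names below are made up for this illustration; Flink's real FlinkUserCodeClassLoaders also offers a child-first variant, which is not shown here.
{code:java}
import java.net.URL;
import java.net.URLClassLoader;

final class UserCodeLoading {

    /**
     * Parent-first loading of the user jar(s), kept off the system class path.
     * The returned loader is where the user classes and their dependencies live.
     */
    static ClassLoader createUserCodeClassLoader(URL[] userJarUrls) {
        return new URLClassLoader(userJarUrls, UserCodeLoading.class.getClassLoader());
    }

    /** Resolves a user class through the dedicated loader instead of the system class loader. */
    static Class<?> loadUserClass(ClassLoader userLoader, String className)
            throws ClassNotFoundException {
        return Class.forName(className, false, userLoader);
    }
}
{code}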
[jira] [Updated] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-13993: -- Fix Version/s: 1.10.0 > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Priority: Major > Fix For: 1.10.0 > > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Closed] (FLINK-13840) Let StandaloneJobClusterEntrypoint use user code class loader
[ https://issues.apache.org/jira/browse/FLINK-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma closed FLINK-13840. - Resolution: Duplicate > Let StandaloneJobClusterEntrypoint use user code class loader > - > > Key: FLINK-13840 > URL: https://issues.apache.org/jira/browse/FLINK-13840 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Major > Fix For: 1.10.0 > > > In order to resolve class loading issues when using the > {{StandaloneJobClusterEntryPoint}}, it would be better to run the user code > in the user code class loader which supports child first class loading. At > the moment, the user code jar is part of the system class path and, hence, > part of the system class loader. > An easy way to solve this problem would be to place the user code in a > different directory than {{lib}} and then specify this path as an additional > classpath when creating the {{PackagedProgram}}. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16927484#comment-16927484 ] Guowei Ma commented on FLINK-13993: --- Thanks for reminding me. I'll close it. [~till.rohrmann] > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Priority: Major > Fix For: 1.10.0 > > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930613#comment-16930613 ] Guowei Ma commented on FLINK-13993: --- I will do it. thanks. [~till.rohrmann] > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Fix For: 1.10.0 > > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-13993: -- Remaining Estimate: 168h Original Estimate: 168h > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Fix For: 1.10.0 > > Original Estimate: 168h > Remaining Estimate: 168h > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-13993: -- Remaining Estimate: 144h (was: 168h) Original Estimate: 144h (was: 168h) > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Fix For: 1.10.0 > > Original Estimate: 144h > Remaining Estimate: 144h > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-13993: -- Remaining Estimate: 48h (was: 144h) Original Estimate: 48h (was: 144h) > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Fix For: 1.10.0 > > Original Estimate: 48h > Remaining Estimate: 48h > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-13993: -- Labels: pull-request-available (was: ) > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Original Estimate: 48h > Remaining Estimate: 48h > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-13993: -- Remaining Estimate: 30h (was: 48h) Original Estimate: 30h (was: 48h) > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Original Estimate: 30h > Remaining Estimate: 30h > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933201#comment-16933201 ] Guowei Ma commented on FLINK-13993: --- Sorry, I changed the time estimate. Actually, what I meant is one man-week. > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Original Estimate: 30h > Remaining Estimate: 30h > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-13993) Using FlinkUserCodeClassLoaders to load the user class in the perjob mode
[ https://issues.apache.org/jira/browse/FLINK-13993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933201#comment-16933201 ] Guowei Ma edited comment on FLINK-13993 at 9/19/19 9:25 AM: Sorry [~trohrmann], I changed the time estimate. Actually, what I meant is one man-week. was (Author: maguowei): Sorry, I change the time. Actually, what I mean is one man week. > Using FlinkUserCodeClassLoaders to load the user class in the perjob mode > - > > Key: FLINK-13993 > URL: https://issues.apache.org/jira/browse/FLINK-13993 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0, 1.10.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0 > > Original Estimate: 30h > Remaining Estimate: 30h > > Currently, Flink has the FlinkUserCodeClassLoader, which is using to load > user’s class. However, the user class and the system class are all loaded by > the system classloader in the perjob mode. This introduces some conflicts. > This document[1] gives a proposal that makes the FlinkUserClassLoader load > the user class in perjob mode. (disscuss with Till[2]) > > [1][https://docs.google.com/document/d/1fH2Cwrrmps5RxxvVuUdeprruvDNabEaIHPyYps28WM8/edit#heading=h.815t5dodlxh7] > [2] > [https://docs.google.com/document/d/1SUhFt1BmsGMLUYVa72SWLbNrrWzunvcjAlEm8iusvq0/edit] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20297) Make `SerializerTestBase::getTestData` return List
[ https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243025#comment-17243025 ] Guowei Ma commented on FLINK-20297: --- Sorry for replying so late, [~dwysakowicz]. There are some cases where we cannot extend `SerializerTestBase`. For example, when we write a `UnitSerializerTest extends SerializerTestBase`, the compiler reports the following error: {code:java} Error:(18, 26) overriding method getTestData in class SerializerTestBase of type ()Array[Unit]; method getTestData has incompatible type override protected def getTestData: Array[Unit] = null {code} I am not sure of the specific reason, but maybe a non-object type (any subtype of `AnyVal`) cannot be used as T[], which causes the problem. > Make `SerializerTestBase::getTestData` return List > - > > Key: FLINK-20297 > URL: https://issues.apache.org/jira/browse/FLINK-20297 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Minor > Labels: pull-request-available > > Currently `SerializerTestBase::getTestData` return T[], which can not be > override by the Scala. It means that developer could not add scala serializer > test based on `SerializerTestBase` > So I would propose to change the `SerializerTestBase::getTestData` to return > List -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-20297) Make `SerializerTestBase::getTestData` return List
[ https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243025#comment-17243025 ] Guowei Ma edited comment on FLINK-20297 at 12/3/20, 8:57 AM: - Sorry for replying so late, [~dwysakowicz]. There are some cases where we cannot extend `SerializerTestBase`. For example, when we write a `UnitSerializerTest extends SerializerTestBase`, the compiler reports the following error: {code:java} Error:(18, 26) overriding method getTestData in class SerializerTestBase of type ()Array[Unit]; method getTestData has incompatible type override protected def getTestData: Array[Unit] = null {code} I am not sure of the specific reason, but maybe a non-object type (any subtype of `AnyVal`) cannot be used as T[], which causes the problem. was (Author: maguowei): Sorry for replying so late. [~dwysakowicz] There are some cases that we could not extend `SerializerTestBase`. For example when we could write a `UnitSerializerTest extends SerializerTestBase` the compiler would report following error: {code:java} Error:(18, 26) overriding method getTestData in class SerializerTestBase of type ()Array[Unit]; method getTestData has incompatible type override protected def getTestData: Array[Unit] = null {code} I am not sure the specific reason. But maybe the not object type(any subtype of `AnyValue`) could not use as T[], which causes the problem. > Make `SerializerTestBase::getTestData` return List > - > > Key: FLINK-20297 > URL: https://issues.apache.org/jira/browse/FLINK-20297 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Minor > Labels: pull-request-available > > Currently `SerializerTestBase::getTestData` return T[], which can not be > override by the Scala. It means that developer could not add scala serializer > test based on `SerializerTestBase` > So I would propose to change the `SerializerTestBase::getTestData` to return > List -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20297) Make `SerializerTestBase::getTestData` return List
[ https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243055#comment-17243055 ] Guowei Ma commented on FLINK-20297: --- I didn't find any other case. Probably you are right. I think we could add the `UnitSerializerTest` to the whitelist in the `TypeSerializerTestConverageTest`. > Make `SerializerTestBase::getTestData` return List > - > > Key: FLINK-20297 > URL: https://issues.apache.org/jira/browse/FLINK-20297 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Minor > Labels: pull-request-available > > Currently `SerializerTestBase::getTestData` return T[], which can not be > override by the Scala. It means that developer could not add scala serializer > test based on `SerializerTestBase` > So I would propose to change the `SerializerTestBase::getTestData` to return > List -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-20297) Make `SerializerTestBase::getTestData` return List
[ https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma closed FLINK-20297. - Resolution: Won't Fix > Make `SerializerTestBase::getTestData` return List > - > > Key: FLINK-20297 > URL: https://issues.apache.org/jira/browse/FLINK-20297 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Minor > Labels: pull-request-available > > Currently `SerializerTestBase::getTestData` return T[], which can not be > override by the Scala. It means that developer could not add scala serializer > test based on `SerializerTestBase` > So I would propose to change the `SerializerTestBase::getTestData` to return > List -- This message was sent by Atlassian Jira (v8.3.4#803005)
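For reference, the signature change discussed above (and ultimately closed as Won't Fix) can be illustrated as follows. `ArrayBasedTestBase` and `ListBasedTestBase` are made-up, trimmed-down classes; the real SerializerTestBase has many more hooks.
{code:java}
import java.util.Arrays;
import java.util.List;

/** Made-up base class with the array-returning hook the ticket complains about. */
abstract class ArrayBasedTestBase<T> {
    // Hard to override from Scala when T is a value-like type such as Unit.
    protected abstract T[] getTestData();
}

/** Made-up base class with the proposed List-returning hook. */
abstract class ListBasedTestBase<T> {
    // Returning List<T> avoids having to materialize a T[] in the subclass.
    protected abstract List<T> getTestData();
}

/** Example subclass showing how the proposed signature would be implemented. */
class StringSerializerTestData extends ListBasedTestBase<String> {
    @Override
    protected List<String> getTestData() {
        return Arrays.asList("a", "b", "c");
    }
}
{code}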
[jira] [Created] (FLINK-20652) Improve the documentation so that users can write a DataStream job that can be executed in the batch execution mode.
Guowei Ma created FLINK-20652: - Summary: Improve the documentation so that users can write a DataStream job that can be executed in the batch execution mode. Key: FLINK-20652 URL: https://issues.apache.org/jira/browse/FLINK-20652 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.12.0 Reporter: Guowei Ma Some users, coming from the `DataSet` API or new to Flink, want to write a `DataStream` job that is executed in the batch mode, but they cannot find a source that really supports it. Some people have asked me offline: "How do I change the `HadoopInputFormat` so that it can run in the batch execution mode?" So I would propose adding a description of which sources support the batch execution mode to the "DataStream/Connector" section or the "DataStream/execution" section. Of course, there might be other ways to achieve this purpose. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20652) Improve the documentation so that users can write a DataStream job that can be executed in the batch execution mode.
[ https://issues.apache.org/jira/browse/FLINK-20652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20652: -- Description: There are some users, that are from the `DataSet` or new Flink user, want to write a `DataStream` job executed in the batch mode. But they could not find a really source could do it. Some guys ask me offline "How to change the `HadoopInputFormat` so that it could run in the batch execution mode?" So I would propose to add some description about which source are support the batch execution mode in the "DataStream/Connector" section or "DataStream/execution mode" section. Of course there might be some other way that could get this purpose. was: There are some users, that are from the `DataSet` or new Flink user, want to write a `DataStream` job executed in the batch mode. But they could not find a really source could do it. Some guys ask me offline "How to change the `HadoopInputFormat` so that it could run in the batch execution mode?" So I would propose to add some description about which source are support the batch execution mode in the "DataStream/Connector" section or "DataStream/execution" section. Of course there might be some other way that could get this purpose. > Improve the document for making the user could write a DataStream job that > could be execute in the batch execution mode. > > > Key: FLINK-20652 > URL: https://issues.apache.org/jira/browse/FLINK-20652 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Major > > There are some users, that are from the `DataSet` or new Flink user, want to > write a `DataStream` job executed in the batch mode. > But they could not find a really source could do it. Some guys ask me > offline "How to change the `HadoopInputFormat` so that it could run in the > batch execution mode?" > > So I would propose to add some description about which source are support the > batch execution mode in the "DataStream/Connector" section or > "DataStream/execution mode" section. > > Of course there might be some other way that could get this purpose. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
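To make the intent concrete, here is a minimal DataStream job that runs in the batch execution mode. It uses a bounded in-memory source purely for illustration; which connectors and sources actually support batch execution is exactly what the ticket asks the documentation to spell out.
{code:java}
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BatchModeDataStreamJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Batch execution requires every source in the job to be bounded.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        env.fromElements("a", "b", "b", "c")
                .map(String::toUpperCase)
                .returns(String.class)
                .print();

        env.execute("DataStream job in batch execution mode");
    }
}
{code}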
[jira] [Commented] (FLINK-19833) Rename Sink API Writer interface to SinkWriter
[ https://issues.apache.org/jira/browse/FLINK-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17224659#comment-17224659 ] Guowei Ma commented on FLINK-19833: --- thanks [~kkl0u] no problem. > Rename Sink API Writer interface to SinkWriter > -- > > Key: FLINK-19833 > URL: https://issues.apache.org/jira/browse/FLINK-19833 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Aljoscha Krettek >Assignee: Guowei Ma >Priority: Major > Fix For: 1.12.0 > > > This makes it more consistent with {{SourceReader}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-19936) Make SinkITCase more stable
Guowei Ma created FLINK-19936: - Summary: Make SinkITCase more stable Key: FLINK-19936 URL: https://issues.apache.org/jira/browse/FLINK-19936 Project: Flink Issue Type: Sub-task Components: API / DataStream Reporter: Guowei Ma In the streaming execution mode there are two uncertain things: # The order in which the checkpoint-complete notification is received (the `Committer` or the `GlobalCommitter` may receive it first) # Whether the operator can accept the last checkpoint These make the `SinkITCase` unstable. This PR resolves the `SinkITCase` instability caused by these two uncertainties. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19936) Make SinkITCase stable
[ https://issues.apache.org/jira/browse/FLINK-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19936: -- Summary: Make SinkITCase stable (was: Make SinkITCase more stable) > Make SinkITCase stable > -- > > Key: FLINK-19936 > URL: https://issues.apache.org/jira/browse/FLINK-19936 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the streaming execution mode there are two uncertain things: > # The order of receiving the checkpoint complete.(`Committer` receive first > or `GlobalCommitter` receive first) > # Whether the operator can accept the last checkpoint > These lead to the `SinkITCase` unstable. > This pr resolves the `SinkITCase`s unstable by these two uncertain stuff. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19936) Make SinkITCase stable
[ https://issues.apache.org/jira/browse/FLINK-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19936: -- Description: In the streaming execution mode there are two uncertain things: # The order of receiving the checkpoint complete.(`Committer` receive first or `GlobalCommitter` receive first) # Whether the operator can accept the last checkpoint These lead to the `SinkITCase` unstable. This pr makes the `SinkITCase`s unstable by “resolving” these two uncertain stuff. was: In the streaming execution mode there are two uncertain things: # The order of receiving the checkpoint complete.(`Committer` receive first or `GlobalCommitter` receive first) # Whether the operator can accept the last checkpoint These lead to the `SinkITCase` unstable. This pr resolves the `SinkITCase`s unstable by these two uncertain stuff. > Make SinkITCase stable > -- > > Key: FLINK-19936 > URL: https://issues.apache.org/jira/browse/FLINK-19936 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the streaming execution mode there are two uncertain things: > # The order of receiving the checkpoint complete.(`Committer` receive first > or `GlobalCommitter` receive first) > # Whether the operator can accept the last checkpoint > These lead to the `SinkITCase` unstable. > This pr makes the `SinkITCase`s unstable by “resolving” these two uncertain > stuff. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19936) Make SinkITCase stable
[ https://issues.apache.org/jira/browse/FLINK-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19936: -- Description: In the streaming execution mode there are two uncertain things: # The order of receiving the checkpoint complete.(`Committer` receive first or `GlobalCommitter` receive first) # Whether the operator can accept the last checkpoint This patch does following two things to resolve `SinkITCase`'s unstable owning to these two uncertainty # Changing how to verifying the output of `GlobalCommitter` for the case 1 # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter received all the elements. was: In the streaming execution mode there are two uncertain things: # The order of receiving the checkpoint complete.(`Committer` receive first or `GlobalCommitter` receive first) # Whether the operator can accept the last checkpoint These lead to the `SinkITCase` unstable. This pr makes the `SinkITCase`s unstable by “resolving” these two uncertain stuff. > Make SinkITCase stable > -- > > Key: FLINK-19936 > URL: https://issues.apache.org/jira/browse/FLINK-19936 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the streaming execution mode there are two uncertain things: > # The order of receiving the checkpoint complete.(`Committer` receive first > or `GlobalCommitter` receive first) > # Whether the operator can accept the last checkpoint > This patch does following two things to resolve `SinkITCase`'s unstable > owning to these two uncertainty > # Changing how to verifying the output of `GlobalCommitter` for the case 1 > # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter > received all the elements. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19936) Make SinkITCase stable
[ https://issues.apache.org/jira/browse/FLINK-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19936: -- Description: In the streaming execution mode there are two uncertain things: # The order of receiving the checkpoint complete.(`Committer` receive first or `GlobalCommitter` receive first) # Whether the operator can accept the last checkpoint This patch does following two things to resolve `SinkITCase`'s unstable owning to these two uncertainty # Changing how to verifying the output of `GlobalCommitter` for the case 1 # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter received all the elements. https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8815&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0 was: In the streaming execution mode there are two uncertain things: # The order of receiving the checkpoint complete.(`Committer` receive first or `GlobalCommitter` receive first) # Whether the operator can accept the last checkpoint This patch does following two things to resolve `SinkITCase`'s unstable owning to these two uncertainty # Changing how to verifying the output of `GlobalCommitter` for the case 1 # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter received all the elements. > Make SinkITCase stable > -- > > Key: FLINK-19936 > URL: https://issues.apache.org/jira/browse/FLINK-19936 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the streaming execution mode there are two uncertain things: > # The order of receiving the checkpoint complete.(`Committer` receive first > or `GlobalCommitter` receive first) > # Whether the operator can accept the last checkpoint > This patch does following two things to resolve `SinkITCase`'s unstable > owning to these two uncertainty > # Changing how to verifying the output of `GlobalCommitter` for the case 1 > # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter > received all the elements. > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8815&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0 > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19936) Make SinkITCase stable
[ https://issues.apache.org/jira/browse/FLINK-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19936: -- Description: In the streaming execution mode there are two uncertain things: # The order in which the checkpoint-complete notification is received (the `Committer` or the `GlobalCommitter` may receive it first) # Whether the operator can receive the last checkpoint This patch does the following two things to resolve the `SinkITCase` instability caused by these two uncertainties: # Change how the output of the `GlobalCommitter` is verified, for case 1 # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter has received all the elements. [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8815&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0] was: In the streaming execution mode there are two uncertain things: # The order of receiving the checkpoint complete.(`Committer` receive first or `GlobalCommitter` receive first) # Whether the operator can accept the last checkpoint This patch does following two things to resolve `SinkITCase`'s unstable owning to these two uncertainty # Changing how to verifying the output of `GlobalCommitter` for the case 1 # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter received all the elements. https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8815&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0 > Make SinkITCase stable > -- > > Key: FLINK-19936 > URL: https://issues.apache.org/jira/browse/FLINK-19936 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the streaming execution mode there are two uncertain things: > # The order of receiving the checkpoint complete.(`Committer` receive first > or `GlobalCommitter` receive first) > # Whether the operator can receive the last checkpoint > This patch does following two things to resolve `SinkITCase`'s unstable > owning to these two uncertainty > # Changing how to verifying the output of `GlobalCommitter` for the case 1 > # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter > received all the elements. > > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8815&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
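The second fix, letting the source finish only after the committers have seen everything, boils down to a simple handshake. The toy program below only illustrates that latch-style coordination; it is not the actual FiniteTestSource or SinkITCase code.
{code:java}
import java.util.concurrent.CountDownLatch;

public class SourceCommitterHandshake {

    public static void main(String[] args) throws InterruptedException {
        final int elementCount = 3;
        final CountDownLatch received = new CountDownLatch(elementCount);

        // Stand-in for the committer side: count down once per committed element.
        Thread committer = new Thread(() -> {
            for (int i = 0; i < elementCount; i++) {
                received.countDown();
            }
        });
        committer.start();

        // Stand-in for the source: after emitting its elements (omitted), it blocks
        // until every element has been acknowledged, so the job cannot finish early.
        received.await();
        System.out.println("All elements committed; the source may now exit.");
    }
}
{code}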
[jira] [Commented] (FLINK-19936) Make SinkITCase stable
[ https://issues.apache.org/jira/browse/FLINK-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225802#comment-17225802 ] Guowei Ma commented on FLINK-19936: --- Thanks [~sewen]. I think it might be the same reason. Would you like to share the Maven log URL? I want to double-check the log to verify it. > Make SinkITCase stable > -- > > Key: FLINK-19936 > URL: https://issues.apache.org/jira/browse/FLINK-19936 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Labels: pull-request-available > > In the streaming execution mode there are two uncertain things: > # The order of receiving the checkpoint complete.(`Committer` receive first > or `GlobalCommitter` receive first) > # Whether the operator can receive the last checkpoint > This patch does following two things to resolve `SinkITCase`'s unstable > owning to these two uncertainty > # Changing how to verifying the output of `GlobalCommitter` for the case 1 > # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter > received all the elements. > > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8815&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-19958) Unified exception signature in Sink API
Guowei Ma created FLINK-19958: - Summary: Unified exception signature in Sink API Key: FLINK-19958 URL: https://issues.apache.org/jira/browse/FLINK-19958 Project: Flink Issue Type: Sub-task Components: API / DataStream Reporter: Guowei Ma In the current Sink API, some methods do not throw any exception even though they intuitively should, for example `SinkWriter::write`. Some methods throw the plain `Exception`, which might be too general. So in this PR we want to change all the methods that need to throw an exception to throw IOException. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19958) Unified exception signature in Sink API
[ https://issues.apache.org/jira/browse/FLINK-19958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19958: -- Description: In the current Sink API some method does not throw any exception, which should throw intuitively for example `SinkWriter::write`. Some method throw the normal `Exception`, which might be too general. So in this pr we want to add a IOException to all the I/O related methods signature. was: In the current Sink API some method does not throw any exception, which should throw intuitive for example `SinkWriter::write`. Some method throw the normal `Exception`, which might be too general. So in this pr we want to change all the methods that needed throw exception with IOException. > Unified exception signature in Sink API > --- > > Key: FLINK-19958 > URL: https://issues.apache.org/jira/browse/FLINK-19958 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the current Sink API some method does not throw any exception, which > should throw intuitively for example `SinkWriter::write`. Some method throw > the normal `Exception`, which might be too general. > > So in this pr we want to add a IOException to all the I/O related methods > signature. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19958) Unified methods signature's exception in Sink API
[ https://issues.apache.org/jira/browse/FLINK-19958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19958: -- Summary: Unified methods signature's exception in Sink API (was: Unified exception signature in Sink API) > Unified methods signature's exception in Sink API > - > > Key: FLINK-19958 > URL: https://issues.apache.org/jira/browse/FLINK-19958 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the current Sink API some method does not throw any exception, which > should throw intuitively for example `SinkWriter::write`. Some method throw > the normal `Exception`, which might be too general. > > So in this pr we want to add a IOException to all the I/O related methods > signature. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19958) Adds the IOException to all the I/O related methods' signature.
[ https://issues.apache.org/jira/browse/FLINK-19958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19958: -- Summary: Adds the IOException to all the I/O related methods' signature. (was: Unified methods signature's exception in Sink API) > Adds the IOException to all the I/O related methods' signature. > --- > > Key: FLINK-19958 > URL: https://issues.apache.org/jira/browse/FLINK-19958 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the current Sink API some method does not throw any exception, which > should throw intuitively for example `SinkWriter::write`. Some method throw > the normal `Exception`, which might be too general. > > So in this pr we want to add a IOException to all the I/O related methods > signature. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19958) Add the IOException to all the I/O related methods' signature.
[ https://issues.apache.org/jira/browse/FLINK-19958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19958: -- Summary: Add the IOException to all the I/O related methods' signature. (was: Adds the IOException to all the I/O related methods' signature.) > Add the IOException to all the I/O related methods' signature. > -- > > Key: FLINK-19958 > URL: https://issues.apache.org/jira/browse/FLINK-19958 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the current Sink API some method does not throw any exception, which > should throw intuitively for example `SinkWriter::write`. Some method throw > the normal `Exception`, which might be too general. > > So in this pr we want to add a IOException to all the I/O related methods > signature. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19958) Add the IOException to all the I/O related methods' signature in Sink API.
[ https://issues.apache.org/jira/browse/FLINK-19958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19958: -- Summary: Add the IOException to all the I/O related methods' signature in Sink API. (was: Add the IOException to all the I/O related methods' signature.) > Add the IOException to all the I/O related methods' signature in Sink API. > -- > > Key: FLINK-19958 > URL: https://issues.apache.org/jira/browse/FLINK-19958 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > > In the current Sink API some method does not throw any exception, which > should throw intuitively for example `SinkWriter::write`. Some method throw > the normal `Exception`, which might be too general. > > So in this pr we want to add a IOException to all the I/O related methods > signature. -- This message was sent by Atlassian Jira (v8.3.4#803005)
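A trimmed-down sketch of what the final summary above describes: I/O-facing methods declaring IOException rather than a blanket Exception or nothing at all. The interface and method names below are illustrative only; this is not the real Sink API.
{code:java}
import java.io.IOException;
import java.util.List;

/** Illustrative writer interface whose I/O-related methods declare IOException. */
interface SketchSinkWriter<InputT, CommT> extends AutoCloseable {

    /** Writing an element may hit the external system, hence the IOException. */
    void write(InputT element) throws IOException;

    /** Producing committables usually flushes pending data, hence the IOException. */
    List<CommT> prepareCommit(boolean flush) throws IOException;

    /** Narrows AutoCloseable's throws clause from Exception to IOException. */
    @Override
    void close() throws IOException;
}
{code}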
[jira] [Created] (FLINK-19963) Let the `SinkWriter` support using the `TimerService`
Guowei Ma created FLINK-19963: - Summary: Let the `SinkWriter` support using the `TimerService` Key: FLINK-19963 URL: https://issues.apache.org/jira/browse/FLINK-19963 Project: Flink Issue Type: Sub-task Components: API / DataStream Reporter: Guowei Ma Some `Sink`s need to register timers with the `TimerService`, for example the `StreamingFileWriter`. So this PR exposes the `TimerService` to the `SinkWriter`. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19936) Make SinkITCase stable
[ https://issues.apache.org/jira/browse/FLINK-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19936: -- Attachment: image-2020-11-04-19-01-34-744.png > Make SinkITCase stable > -- > > Key: FLINK-19936 > URL: https://issues.apache.org/jira/browse/FLINK-19936 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Labels: pull-request-available > Attachments: image-2020-11-04-19-01-34-744.png > > > In the streaming execution mode there are two uncertain things: > # The order of receiving the checkpoint complete.(`Committer` receive first > or `GlobalCommitter` receive first) > # Whether the operator can receive the last checkpoint > This patch does following two things to resolve `SinkITCase`'s unstable > owning to these two uncertainty > # Changing how to verifying the output of `GlobalCommitter` for the case 1 > # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter > received all the elements. > > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8815&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-19936) Make SinkITCase stable
[ https://issues.apache.org/jira/browse/FLINK-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17225991#comment-17225991 ] Guowei Ma commented on FLINK-19936: --- !image-2020-11-04-19-01-34-744.png! Thanks [~sewen], it is the same reason: the `GlobalCommitter` receives the checkpoint-complete notification before the other three `Committer`s. > Make SinkITCase stable > -- > > Key: FLINK-19936 > URL: https://issues.apache.org/jira/browse/FLINK-19936 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Major > Labels: pull-request-available > Attachments: image-2020-11-04-19-01-34-744.png > > > In the streaming execution mode there are two uncertain things: > # The order of receiving the checkpoint complete.(`Committer` receive first > or `GlobalCommitter` receive first) > # Whether the operator can receive the last checkpoint > This patch does following two things to resolve `SinkITCase`'s unstable > owning to these two uncertainty > # Changing how to verifying the output of `GlobalCommitter` for the case 1 > # Let the `FiniteTestSource` exit only when the Committer/GlobalCommitter > received all the elements. > > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8815&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20007) SinkTransformationTranslator fails to handle the PartitionTransformation
Guowei Ma created FLINK-20007: - Summary: SinkTransformationTranslator fails to handle the PartitionTransformation Key: FLINK-20007 URL: https://issues.apache.org/jira/browse/FLINK-20007 Project: Flink Issue Type: Sub-task Components: API / DataStream Reporter: Guowei Ma In the current version, the `SinkTransformationTranslator` connects the `SinkWriter` to the `PartitionTransformation` if the input transformation of the `SinkTransformation` is a `PartitionTransformation`. This leads to a `NullPointerException`. Actually, the `SinkTransformationTranslator` should connect the `Writer` to the real upstream node if the input of the `SinkTransformation` is a `PartitionTransformation`. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20031) Keep the uid of SinkWriter same as the SinkTransformation
Guowei Ma created FLINK-20031: - Summary: Keep the uid of SinkWriter same as the SinkTransformation Key: FLINK-20031 URL: https://issues.apache.org/jira/browse/FLINK-20031 Project: Flink Issue Type: Sub-task Components: API / DataStream Reporter: Guowei Ma In the case that we want to migrate the StreamingFileSink to the new sink API, we might need to let the user set the SinkWriter's uid to be the same as the StreamingFileSink's, so that the SinkWriter operator has the opportunity to reuse the old state. (This is just an option.) For this we need to make the SinkWriter operator's uid the same as the SinkTransformation's. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19697) Make the committer retry-able
[ https://issues.apache.org/jira/browse/FLINK-19697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19697: -- Summary: Make the committer retry-able (was: Make the streaming committer retry-able) > Make the committer retry-able > - > > Key: FLINK-19697 > URL: https://issues.apache.org/jira/browse/FLINK-19697 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20113) Test K8s High Availability Service
[ https://issues.apache.org/jira/browse/FLINK-20113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17231080#comment-17231080 ] Guowei Ma commented on FLINK-20113: --- Could you assign this task to me? I can work on this. > Test K8s High Availability Service > -- > > Key: FLINK-20113 > URL: https://issues.apache.org/jira/browse/FLINK-20113 > Project: Flink > Issue Type: Sub-task > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Robert Metzger >Priority: Critical > Fix For: 1.12.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-20113) Test K8s High Availability Service
[ https://issues.apache.org/jira/browse/FLINK-20113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17231080#comment-17231080 ] Guowei Ma edited comment on FLINK-20113 at 11/13/20, 2:16 AM: -- Hi [~rmetzger], could you assign this task to me? I would like to do it. was (Author: maguowei): Could assign this task to me. I can work for this. > Test K8s High Availability Service > -- > > Key: FLINK-20113 > URL: https://issues.apache.org/jira/browse/FLINK-20113 > Project: Flink > Issue Type: Sub-task > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Robert Metzger >Priority: Critical > Fix For: 1.12.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20203) Could not find any document about how to build a Flink image from a local build.
Guowei Ma created FLINK-20203: - Summary: Could not find any document about how to build a Flink image from a local build. Key: FLINK-20203 URL: https://issues.apache.org/jira/browse/FLINK-20203 Project: Flink Issue Type: Improvement Affects Versions: 1.12.0 Reporter: Guowei Ma If a user wants to use or try some feature that is not included in the "official" Flink image, the user might need to build a Docker image based on a local build. But there is no such document ([https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/docker.html]). So I would like to propose that we introduce some documentation about how to build the image from a local build. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20113) Test K8s High Availability Service
[ https://issues.apache.org/jira/browse/FLINK-20113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234284#comment-17234284 ] Guowei Ma commented on FLINK-20113: --- [~fly_in_gis] ok. > Test K8s High Availability Service > -- > > Key: FLINK-20113 > URL: https://issues.apache.org/jira/browse/FLINK-20113 > Project: Flink > Issue Type: Sub-task > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Robert Metzger >Assignee: Guowei Ma >Priority: Critical > Fix For: 1.12.0 > > > Added in https://issues.apache.org/jira/browse/FLINK-12884 > > [General Information about the Flink 1.12 release > testing|https://cwiki.apache.org/confluence/display/FLINK/1.12+Release+-+Community+Testing] > When testing a feature, consider the following aspects: > - Is the documentation easy to understand > - Are the error messages, log messages, APIs etc. easy to understand > - Is the feature working as expected under normal conditions > - Is the feature working / failing as expected with invalid input, induced > errors etc. > If you find a problem during testing, please file a ticket > (Priority=Critical; Fix Version = 1.12.0), and link it in this testing ticket. > During the testing, and once you are finished, please write a short summary > of all things you have tested. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20206) Failed to start the session, but there is no clear prompt.
Guowei Ma created FLINK-20206: - Summary: Failed to start the session, but there is no clear prompt. Key: FLINK-20206 URL: https://issues.apache.org/jira/browse/FLINK-20206 Project: Flink Issue Type: Improvement Components: Deployment / Kubernetes Affects Versions: 1.12.0 Reporter: Guowei Ma Attachments: image-2020-11-18-15-12-13-530.png Use ./bin/kubernetes-session.sh to start a k8s session cluster. The log shows that the session cluster started successfully, but it did not. Personally I prefer the yarn-session way, which gives me a clear expectation of the result. So I would like to propose that Flink give more detailed information about whether the session cluster was created successfully or not. !image-2020-11-18-15-12-13-530.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-18121) Support creating Docker image from local Flink distribution
[ https://issues.apache.org/jira/browse/FLINK-18121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234342#comment-17234342 ] Guowei Ma commented on FLINK-18121: --- +1 for building from the local file system. When testing https://issues.apache.org/jira/browse/FLINK-20206 I found I had to connect to Docker Hub or build an HTTP server myself. For me this is a little inconvenient; for example, sometimes the connection is lost or times out. > Support creating Docker image from local Flink distribution > --- > > Key: FLINK-18121 > URL: https://issues.apache.org/jira/browse/FLINK-18121 > Project: Flink > Issue Type: Improvement > Components: flink-docker >Affects Versions: docker-1.11.0.0 >Reporter: Till Rohrmann >Priority: Major > > Currently, > https://github.com/apache/flink-docker/blob/dev-master/Dockerfile-debian.template > only supports to create a Docker image from a Flink distribution which is > hosted on a web server. I think it would be helpful if we could also create a > Docker image from a Flink distribution which is stored on one's local file > system. That way, one would not have to upload the file or start a web server > for serving it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20211) Can not get the JobManager web ip according to the document
Guowei Ma created FLINK-20211: - Summary: Can not get the JobManager web ip according to the document Key: FLINK-20211 URL: https://issues.apache.org/jira/browse/FLINK-20211 Project: Flink Issue Type: Improvement Components: Deployment / Kubernetes Affects Versions: 1.12.0 Reporter: Guowei Ma According to the LoadBalancer section of [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui] I used the following command: kubectl get services/cluster-id But I could not get the EXTERNAL-IP; it always shows none. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20211) Can not get the JobManager web ip according to the document
[ https://issues.apache.org/jira/browse/FLINK-20211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20211: -- Description: According to the LoadBalancer section of [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui] I used the following command: kubectl get services/cluster-id But I could not get the EXTERNAL-IP; it is always none. was: According to the LoadBalancer section of [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui] I used the following command: kubectl get services/cluster-id But I could not get the EXTERNAL-IP; it always shows none. > Can not get the JobManager web ip according to the document > --- > > Key: FLINK-20211 > URL: https://issues.apache.org/jira/browse/FLINK-20211 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > > According to the LoadBalancer section of > [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui] > I used the following command: > kubectl get services/cluster-id > But I could not get the EXTERNAL-IP; it is always none. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (FLINK-18121) Support creating Docker image from local Flink distribution
[ https://issues.apache.org/jira/browse/FLINK-18121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234342#comment-17234342 ] Guowei Ma edited comment on FLINK-18121 at 11/18/20, 9:13 AM: -- +1 for building from the local file system. When testing https://issues.apache.org/jira/browse/FLINK-20113 I found I had to connect to Docker Hub or build an HTTP server myself. For me this is a little inconvenient; for example, sometimes the connection is lost or times out. was (Author: maguowei): +1 for building from the local file system. When testing https://issues.apache.org/jira/browse/FLINK-20206 I found I had to connect to Docker Hub or build an HTTP server myself. For me this is a little inconvenient; for example, sometimes the connection is lost or times out. > Support creating Docker image from local Flink distribution > --- > > Key: FLINK-18121 > URL: https://issues.apache.org/jira/browse/FLINK-18121 > Project: Flink > Issue Type: Improvement > Components: flink-docker >Affects Versions: docker-1.11.0.0 >Reporter: Till Rohrmann >Priority: Major > > Currently, > https://github.com/apache/flink-docker/blob/dev-master/Dockerfile-debian.template > only supports to create a Docker image from a Flink distribution which is > hosted on a web server. I think it would be helpful if we could also create a > Docker image from a Flink distribution which is stored on one's local file > system. That way, one would not have to upload the file or start a web server > for serving it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20214) Unnecessary warning log when starting a k8s session cluster
Guowei Ma created FLINK-20214: - Summary: Unnecessary warning log when starting a k8s session cluster Key: FLINK-20214 URL: https://issues.apache.org/jira/browse/FLINK-20214 Project: Flink Issue Type: Improvement Components: Deployment / Kubernetes Affects Versions: 1.12.0 Reporter: Guowei Ma 2020-11-18 17:46:36,727 WARN org.apache.flink.kubernetes.kubeclient.decorators.HadoopConfMountDecorator [] - Found 0 files in directory null/etc/hadoop, skip to mount the Hadoop Configuration ConfigMap. 2020-11-18 17:46:36,727 WARN org.apache.flink.kubernetes.kubeclient.decorators.HadoopConfMountDecorator [] - Found 0 files in directory null/etc/hadoop, skip to create the Hadoop Configuration ConfigMap. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20203) Could not find any document about how to build a Flink image from local build.
[ https://issues.apache.org/jira/browse/FLINK-20203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20203: -- Component/s: Deployment / Kubernetes > Could not find any document about how to build a Flink image from local build. > -- > > Key: FLINK-20203 > URL: https://issues.apache.org/jira/browse/FLINK-20203 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > > If user wants to use or try some feature that does not include in the > "official" Flink image the user might need to build a docker image based on > his local build. But there is such > document([https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/docker.html)] > So I would like to propose that we might need to introduce some documentation > about how to build the image from local build. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20203) Could not find any document about how to build a Flink image from local build.
[ https://issues.apache.org/jira/browse/FLINK-20203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20203: -- Description: If a user wants to use or try a feature that is not included in the "official" Flink image, the user might need to build a Docker image based on their local build. But there is no such document in [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/docker.html]. So I would like to propose introducing some documentation about how to build an image from a local build. was: If a user wants to use or try a feature that is not included in the "official" Flink image, the user might need to build a Docker image based on their local build. But there is no such document ([https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/docker.html]). So I would like to propose introducing some documentation about how to build an image from a local build. > Could not find any document about how to build a Flink image from local build. > -- > > Key: FLINK-20203 > URL: https://issues.apache.org/jira/browse/FLINK-20203 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > > If a user wants to use or try a feature that is not included in the > "official" Flink image, the user might need to build a Docker image based on > their local build. But there is no such document in > [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/docker.html]. > So I would like to propose introducing some documentation > about how to build an image from a local build. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20215) Keep the "Access the Flink UI " document same in “Kubernetes Setup” and "Native Kubernetes Setup Beta"
Guowei Ma created FLINK-20215: - Summary: Keep the "Access the Flink UI " document same in “Kubernetes Setup” and "Native Kubernetes Setup Beta" Key: FLINK-20215 URL: https://issues.apache.org/jira/browse/FLINK-20215 Project: Flink Issue Type: Improvement Components: Deployment / Kubernetes Affects Versions: 1.12.0 Reporter: Guowei Ma Both "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html" and "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui" have a section about "how to access the Flink UI". But the two descriptions are a little different from each other; I think maybe we could make them the same. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20215) Keep the "Access the Flink UI " document same in “Kubernetes Setup” and "Native Kubernetes Setup Beta"
[ https://issues.apache.org/jira/browse/FLINK-20215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20215: -- Priority: Critical (was: Major) > Keep the "Access the Flink UI " document same in “Kubernetes Setup” and > "Native Kubernetes Setup Beta" > -- > > Key: FLINK-20215 > URL: https://issues.apache.org/jira/browse/FLINK-20215 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > > Both the > "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html"; > and > "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui"; > have section about "how to access the Flink UI". > But the description is a little different from each other, I think maybe we > could make them same. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20215) Keep the "Access the Flink UI " document same in “Kubernetes Setup” and "Native Kubernetes Setup Beta"
[ https://issues.apache.org/jira/browse/FLINK-20215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20215: -- Description: Both "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html" and "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui" have a section about "how to access the Flink UI". But the two descriptions differ slightly from each other; I think we could make them the same. was: Both "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html" and "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui" have a section about "how to access the Flink UI". But the two descriptions are a little different from each other; I think maybe we could make them the same. > Keep the "Access the Flink UI " document same in “Kubernetes Setup” and > "Native Kubernetes Setup Beta" > -- > > Key: FLINK-20215 > URL: https://issues.apache.org/jira/browse/FLINK-20215 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > > Both > "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html" > and > "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui" > have a section about "how to access the Flink UI". > But the two descriptions differ slightly from each other; I think we could > make them the same. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20203) Could not find any document about how to build a Flink image from local build.
[ https://issues.apache.org/jira/browse/FLINK-20203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20203: -- Component/s: (was: Deployment / Kubernetes) Documentation > Could not find any document about how to build a Flink image from local build. > -- > > Key: FLINK-20203 > URL: https://issues.apache.org/jira/browse/FLINK-20203 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > > If user wants to use or try some feature that does not include in the > "official" Flink image the user might need to build a docker image based on > his local build. But there is no such document in > [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/docker.html|https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/docker.html)] > So I would like to propose that we might need to introduce some documentation > about how to build the image from local build. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20225) Broken link in the document
Guowei Ma created FLINK-20225: - Summary: Broken link in the document Key: FLINK-20225 URL: https://issues.apache.org/jira/browse/FLINK-20225 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.12.0 Reporter: Guowei Ma In the [doc|https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/kubernetes.html#start-a-job-cluster] there is a broken link: "A _Flink Job cluster_ is a dedicated cluster which runs a single job. You can find more details [here|https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/kubernetes.html#start-a-job-cluster]." -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20215) Keep the "Access the Flink UI " document same in “Kubernetes Setup” and "Native Kubernetes Setup Beta"
[ https://issues.apache.org/jira/browse/FLINK-20215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20215: -- Component/s: (was: Deployment / Kubernetes) Documentation > Keep the "Access the Flink UI " document same in “Kubernetes Setup” and > "Native Kubernetes Setup Beta" > -- > > Key: FLINK-20215 > URL: https://issues.apache.org/jira/browse/FLINK-20215 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > > Both the > "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html"; > and > "https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui"; > have a section about "how to access the Flink UI". > But the description is a little different from each other, I think maybe we > could make them same. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20211) Can not get the JobManager web ip according to the document
[ https://issues.apache.org/jira/browse/FLINK-20211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20211: -- Component/s: (was: Deployment / Kubernetes) Documentation > Can not get the JobManager web ip according to the document > --- > > Key: FLINK-20211 > URL: https://issues.apache.org/jira/browse/FLINK-20211 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > > According to > [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#accessing-job-manager-ui] > 's LoadBalancer section I use the following cmd > kubectl get services/cluster-id > But I could not get the EXTERNAL-IP it is always non > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20113) Test K8s High Availability Service
[ https://issues.apache.org/jira/browse/FLINK-20113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234734#comment-17234734 ] Guowei Ma commented on FLINK-20113: --- I tested four scenarios: # Kubernetes ## Session Cluster ### Deploy a session cluster to k8s ### Access the JobManager web ### Check that the master has the KubernetesLeaderElector log ### Submit a StateMachineExample.jar job ### Verify that there are some complete checkpoints ### Kill the JobMaster pod ### Verify that the job recovers from the previous checkpoint ## Per-job Cluster ### Build a per-job image registry.cn-beijing.aliyuncs.com/streamcompute/flink:k8s-ha-per-job ### Deploy the per-job cluster ### Access the JobManager web ### Check that the master has the KubernetesLeaderElector log ### Verify that there are some complete checkpoints ### Kill the pod ### Verify that the job recovers from the previous checkpoint # Native Kubernetes ## Session Cluster ### Start a native k8s session ### Access the JobManager web ### Check the KubernetesLeaderElector log ### Submit a StateMachineExample.jar job ### Verify that there are some complete checkpoints ### Kill the pod ### Verify that the job recovers from the previous checkpoint ## Start Application ### Start a Flink application ### Access the JobManager web ### Check the KubernetesLeaderElector log ### Kill the pod ### Verify that the job recovers from the previous checkpoint - In general, the new HA service works. Most of the problems I found are about logging and documentation. > Test K8s High Availability Service > -- > > Key: FLINK-20113 > URL: https://issues.apache.org/jira/browse/FLINK-20113 > Project: Flink > Issue Type: Sub-task > Components: Deployment / Kubernetes >Affects Versions: 1.12.0 >Reporter: Robert Metzger >Assignee: Guowei Ma >Priority: Critical > Fix For: 1.12.0 > > > Added in https://issues.apache.org/jira/browse/FLINK-12884 > > [General Information about the Flink 1.12 release > testing|https://cwiki.apache.org/confluence/display/FLINK/1.12+Release+-+Community+Testing] > When testing a feature, consider the following aspects: > - Is the documentation easy to understand > - Are the error messages, log messages, APIs etc. easy to understand > - Is the feature working as expected under normal conditions > - Is the feature working / failing as expected with invalid input, induced > errors etc. > If you find a problem during testing, please file a ticket > (Priority=Critical; Fix Version = 1.12.0), and link it in this testing ticket. > During the testing, and once you are finished, please write a short summary > of all things you have tested. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20083) OrcFsStreamingSinkITCase times out
[ https://issues.apache.org/jira/browse/FLINK-20083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20083: -- Attachment: image-2020-11-20-08-03-43-073.png > OrcFsStreamingSinkITCase times out > -- > > Key: FLINK-20083 > URL: https://issues.apache.org/jira/browse/FLINK-20083 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem, Formats (JSON, Avro, Parquet, > ORC, SequenceFile) >Affects Versions: 1.12.0 >Reporter: Robert Metzger >Assignee: Guowei Ma >Priority: Critical > Labels: test-stability > Fix For: 1.12.0 > > Attachments: image-2020-11-20-08-03-43-073.png > > > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=8661&view=logs&j=66592496-52df-56bb-d03e-37509e1d9d0f&t=ae0269db-6796-5583-2e5f-d84757d711aa > {code} > [ERROR] > OrcFsStreamingSinkITCase>FsStreamingSinkITCaseBase.testPart:84->FsStreamingSinkITCaseBase.test:120->FsStreamingSinkITCaseBase.check:133 > » TestTimedOut > [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 26.871 s <<< FAILURE! - in org.apache.flink.orc.OrcFsStreamingSinkITCase > [ERROR] testPart(org.apache.flink.orc.OrcFsStreamingSinkITCase) Time > elapsed: 20.052 s <<< ERROR! > org.junit.runners.model.TestTimedOutException: test timed out after 20 seconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:231) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:119) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:103) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77) > at > org.apache.flink.table.planner.sinks.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:115) > at > org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:355) > at java.util.Iterator.forEachRemaining(Iterator.java:115) > at > org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:114) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.check(FsStreamingSinkITCaseBase.scala:133) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.test(FsStreamingSinkITCaseBase.scala:120) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.testPart(FsStreamingSinkITCaseBase.scala:84) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20083) OrcFsStreamingSinkITCase times out
[ https://issues.apache.org/jira/browse/FLINK-20083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235798#comment-17235798 ] Guowei Ma commented on FLINK-20083: --- I found that `SplitFetcher` exits too early (line 387), which leads to some splits not being handled (lines 388 and 389). !image-2020-11-20-08-03-43-073.png! > OrcFsStreamingSinkITCase times out > -- > > Key: FLINK-20083 > URL: https://issues.apache.org/jira/browse/FLINK-20083 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem, Formats (JSON, Avro, Parquet, > ORC, SequenceFile) >Affects Versions: 1.12.0 >Reporter: Robert Metzger >Assignee: Guowei Ma >Priority: Critical > Labels: test-stability > Fix For: 1.12.0 > > Attachments: image-2020-11-20-08-03-43-073.png > > > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=8661&view=logs&j=66592496-52df-56bb-d03e-37509e1d9d0f&t=ae0269db-6796-5583-2e5f-d84757d711aa > {code} > [ERROR] > OrcFsStreamingSinkITCase>FsStreamingSinkITCaseBase.testPart:84->FsStreamingSinkITCaseBase.test:120->FsStreamingSinkITCaseBase.check:133 > » TestTimedOut > [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 26.871 s <<< FAILURE! - in org.apache.flink.orc.OrcFsStreamingSinkITCase > [ERROR] testPart(org.apache.flink.orc.OrcFsStreamingSinkITCase) Time > elapsed: 20.052 s <<< ERROR! > org.junit.runners.model.TestTimedOutException: test timed out after 20 seconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:231) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:119) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:103) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77) > at > org.apache.flink.table.planner.sinks.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:115) > at > org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:355) > at java.util.Iterator.forEachRemaining(Iterator.java:115) > at > org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:114) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.check(FsStreamingSinkITCaseBase.scala:133) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.test(FsStreamingSinkITCaseBase.scala:120) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.testPart(FsStreamingSinkITCaseBase.scala:84) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
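To illustrate the failure mode described in the comment above, here is a conceptual sketch with hypothetical names, not the actual `SplitFetcher` code, of the invariant the fetcher loop needs: it must not shut down while assigned splits or queued tasks remain.
{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical, simplified fetcher loop; names and structure are illustrative only.
class FetcherLoopSketch<SplitT> {

    private final Queue<Runnable> taskQueue = new ArrayDeque<>();
    private final Queue<SplitT> assignedSplits = new ArrayDeque<>();

    void run() {
        // Exit only when there is really nothing left to do. Deciding to exit
        // based on an external "idle" check alone can drop splits that are
        // still assigned to this fetcher.
        while (!(taskQueue.isEmpty() && assignedSplits.isEmpty())) {
            Runnable task = taskQueue.poll();
            if (task != null) {
                task.run();
            } else {
                // Take the next assigned split and read it to completion.
                SplitT split = assignedSplits.poll();
                readSplit(split);
            }
        }
    }

    private void readSplit(SplitT split) {
        // ... emit the records of the split downstream ...
    }
}
{code}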
[jira] [Closed] (FLINK-20083) OrcFsStreamingSinkITCase times out
[ https://issues.apache.org/jira/browse/FLINK-20083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma closed FLINK-20083. - Resolution: Fixed > OrcFsStreamingSinkITCase times out > -- > > Key: FLINK-20083 > URL: https://issues.apache.org/jira/browse/FLINK-20083 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem, Formats (JSON, Avro, Parquet, > ORC, SequenceFile) >Affects Versions: 1.12.0 >Reporter: Robert Metzger >Assignee: Guowei Ma >Priority: Critical > Labels: test-stability > Fix For: 1.12.0 > > Attachments: image-2020-11-20-08-03-43-073.png > > > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=8661&view=logs&j=66592496-52df-56bb-d03e-37509e1d9d0f&t=ae0269db-6796-5583-2e5f-d84757d711aa > {code} > [ERROR] > OrcFsStreamingSinkITCase>FsStreamingSinkITCaseBase.testPart:84->FsStreamingSinkITCaseBase.test:120->FsStreamingSinkITCaseBase.check:133 > » TestTimedOut > [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 26.871 s <<< FAILURE! - in org.apache.flink.orc.OrcFsStreamingSinkITCase > [ERROR] testPart(org.apache.flink.orc.OrcFsStreamingSinkITCase) Time > elapsed: 20.052 s <<< ERROR! > org.junit.runners.model.TestTimedOutException: test timed out after 20 seconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:231) > at > org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:119) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:103) > at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77) > at > org.apache.flink.table.planner.sinks.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:115) > at > org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:355) > at java.util.Iterator.forEachRemaining(Iterator.java:115) > at > org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:114) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.check(FsStreamingSinkITCaseBase.scala:133) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.test(FsStreamingSinkITCaseBase.scala:120) > at > org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.testPart(FsStreamingSinkITCaseBase.scala:84) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20297) Make `SerializerTestBase::getTestData` return List
[ https://issues.apache.org/jira/browse/FLINK-20297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20297: -- Priority: Minor (was: Major) > Make `SerializerTestBase::getTestData` return List > - > > Key: FLINK-20297 > URL: https://issues.apache.org/jira/browse/FLINK-20297 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Guowei Ma >Priority: Minor > > Currently `SerializerTestBase::getTestData` return T[], which can not be > override by the Scala. It means that developer could not add scala serializer > test based on `SerializerTestBase` > So I would propose to change the `SerializerTestBase::getTestData` to return > List -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20297) Make `SerializerTestBase::getTestData` return List
Guowei Ma created FLINK-20297: - Summary: Make `SerializerTestBase::getTestData` return List Key: FLINK-20297 URL: https://issues.apache.org/jira/browse/FLINK-20297 Project: Flink Issue Type: Improvement Components: API / Type Serialization System Reporter: Guowei Ma Currently `SerializerTestBase::getTestData` returns T[], which cannot easily be overridden from Scala. It means that developers cannot add Scala serializer tests based on `SerializerTestBase`. So I would propose changing `SerializerTestBase::getTestData` to return List. -- This message was sent by Atlassian Jira (v8.3.4#803005)
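To make the proposal concrete, here is a minimal sketch using a hypothetical stand-in class rather than the real `SerializerTestBase`, showing the signature change and why returning a List<T> is friendlier to subclasses than returning a generic array.
{code:java}
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for SerializerTestBase, only to illustrate the proposed signature.
abstract class SerializerTestSketch<T> {

    // Current signature (roughly): protected abstract T[] getTestData();
    // Creating an Array[T] for a generic T is awkward from Scala.

    // Proposed signature: return a List<T> instead of T[].
    protected abstract List<T> getTestData();

    void runRoundTrip() {
        for (T element : getTestData()) {
            // ... serialize / deserialize `element` and compare the result ...
        }
    }
}

// A subclass can then supply test data without constructing a generic array.
class StringSerializerTestSketch extends SerializerTestSketch<String> {
    @Override
    protected List<String> getTestData() {
        return Arrays.asList("a", "b", "c");
    }
}
{code}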
[jira] [Created] (FLINK-20337) Make migrate `StreamingFileSink` to `FileSink` possible
Guowei Ma created FLINK-20337: - Summary: Make migrate `StreamingFileSink` to `FileSink` possible Key: FLINK-20337 URL: https://issues.apache.org/jira/browse/FLINK-20337 Project: Flink Issue Type: Improvement Components: API / DataStream Affects Versions: 1.12.0 Reporter: Guowei Ma Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee exactly-once semantics in both the streaming and batch execution modes. We need to figure out how to migrate from `StreamingFileSink` to `FileSink` for users who currently use the `StreamingFileSink`. The PR aims to provide a way to make that possible. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20338) Make the SinkWriter load previous sink's state.
Guowei Ma created FLINK-20338: - Summary: Make the SinkWriter load previous sink's state. Key: FLINK-20338 URL: https://issues.apache.org/jira/browse/FLINK-20338 Project: Flink Issue Type: Sub-task Components: API / DataStream Affects Versions: 1.12.0 Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20337) Make migrate `StreamingFileSink` to `FileSink` possible
[ https://issues.apache.org/jira/browse/FLINK-20337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20337: -- Description: Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee exactly-once semantics in both the streaming and batch execution modes. We need to figure out how to migrate from `StreamingFileSink` to `FileSink` for users who currently use the `StreamingFileSink`. For this purpose we propose to let the new sink writer operator load the previous sink's state so that the `SinkWriter` has the chance to handle the old state. was: Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee exactly-once semantics in both the streaming and batch execution modes. We need to figure out how to migrate from `StreamingFileSink` to `FileSink` for users who currently use the `StreamingFileSink`. The PR aims to provide a way to make that possible. > Make migrate `StreamingFileSink` to `FileSink` possible > --- > > Key: FLINK-20337 > URL: https://issues.apache.org/jira/browse/FLINK-20337 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Blocker > > Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee > exactly-once semantics in both the streaming and batch execution modes. > We need to figure out how to migrate from `StreamingFileSink` to `FileSink` > for users who currently use the `StreamingFileSink`. > For this purpose we propose to let the new sink writer operator load > the previous sink's state so that the `SinkWriter` has the chance to > handle the old state. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20338) Make the StatefulSinkWriterOperator load previous sink's state.
[ https://issues.apache.org/jira/browse/FLINK-20338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20338: -- Summary: Make the StatefulSinkWriterOperator load previous sink's state. (was: Make the SinkWriter load previous sink's state.) > Make the StatefulSinkWriterOperator load previous sink's state. > --- > > Key: FLINK-20338 > URL: https://issues.apache.org/jira/browse/FLINK-20338 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Blocker > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-20339) `FileWriter` support to load StreamingFileSink's state.
Guowei Ma created FLINK-20339: - Summary: `FileWriter` support to load StreamingFileSink's state. Key: FLINK-20339 URL: https://issues.apache.org/jira/browse/FLINK-20339 Project: Flink Issue Type: Sub-task Components: API / DataStream Affects Versions: 1.12.0 Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20337) Make migrate `StreamingFileSink` to `FileSink` possible
[ https://issues.apache.org/jira/browse/FLINK-20337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20337: -- Description: Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee exactly-once semantics in both the streaming and batch execution modes. We need to figure out how to migrate from `StreamingFileSink` to `FileSink` for users who currently use the `StreamingFileSink`. For this purpose we propose to let the new sink writer operator load the previous StreamingFileSink's state so that the `SinkWriter` has the opportunity to handle the old state. was: Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee exactly-once semantics in both the streaming and batch execution modes. We need to figure out how to migrate from `StreamingFileSink` to `FileSink` for users who currently use the `StreamingFileSink`. For this purpose we propose to let the new sink writer operator load the previous sink's state so that the `SinkWriter` has the chance to handle the old state. > Make migrate `StreamingFileSink` to `FileSink` possible > --- > > Key: FLINK-20337 > URL: https://issues.apache.org/jira/browse/FLINK-20337 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Blocker > > Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee > exactly-once semantics in both the streaming and batch execution modes. > We need to figure out how to migrate from `StreamingFileSink` to `FileSink` > for users who currently use the `StreamingFileSink`. > For this purpose we propose to let the new sink writer operator load > the previous StreamingFileSink's state so that the `SinkWriter` has > the opportunity to handle the old state. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20338) Make the `StatefulSinkWriterOperator` load previous `StreamingFileSink`'s state.
[ https://issues.apache.org/jira/browse/FLINK-20338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20338: -- Summary: Make the `StatefulSinkWriterOperator` load previous `StreamingFileSink`'s state. (was: Make the StatefulSinkWriterOperator load previous sink's state.) > Make the `StatefulSinkWriterOperator` load previous `StreamingFileSink`'s > state. > > > Key: FLINK-20338 > URL: https://issues.apache.org/jira/browse/FLINK-20338 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Blocker > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-20337) Make migrate `StreamingFileSink` to `FileSink` possible
[ https://issues.apache.org/jira/browse/FLINK-20337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20337: -- Description: Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee exactly-once semantics in both the streaming and batch execution modes. We need to provide a way for users of `StreamingFileSink` to migrate from `StreamingFileSink` to `FileSink`. For this purpose we propose to let the new sink writer operator load the previous StreamingFileSink's state so that the `SinkWriter` has the opportunity to handle the old state. was: Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee exactly-once semantics in both the streaming and batch execution modes. We need to figure out how to migrate from `StreamingFileSink` to `FileSink` for users who currently use the `StreamingFileSink`. For this purpose we propose to let the new sink writer operator load the previous StreamingFileSink's state so that the `SinkWriter` has the opportunity to handle the old state. > Make migrate `StreamingFileSink` to `FileSink` possible > --- > > Key: FLINK-20337 > URL: https://issues.apache.org/jira/browse/FLINK-20337 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Blocker > > Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee > exactly-once semantics in both the streaming and batch execution modes. We > need to provide a way for users of `StreamingFileSink` to migrate > from `StreamingFileSink` to `FileSink`. > For this purpose we propose to let the new sink writer operator load > the previous StreamingFileSink's state so that the `SinkWriter` has > the opportunity to handle the old state. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
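A rough sketch of the proposed restore path, written against hypothetical interfaces (the names below are placeholders, not the actual Flink 1.12 sink API), to show how the writer operator could hand the legacy `StreamingFileSink` state to the new `SinkWriter`.
{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical writer interface; the real SinkWriter API may differ.
interface RestorableWriter<StateT, LegacyStateT> {
    // The writer receives both its own restored state and any state that the old
    // StreamingFileSink wrote under its legacy operator state name.
    void initialize(List<StateT> ownState, List<LegacyStateT> legacyStreamingFileSinkState);
}

// Sketch of the restore logic inside the (hypothetical) sink writer operator.
class SinkWriterOperatorSketch<StateT, LegacyStateT> {

    private final RestorableWriter<StateT, LegacyStateT> writer;

    SinkWriterOperatorSketch(RestorableWriter<StateT, LegacyStateT> writer) {
        this.writer = writer;
    }

    // Called on recovery with whatever the state backend returns for the two state
    // names: the new sink's own state and the legacy StreamingFileSink state.
    void initializeState(List<StateT> ownState, List<LegacyStateT> legacyState) {
        // If there is no legacy state this degenerates to a normal restore.
        writer.initialize(
                ownState != null ? ownState : new ArrayList<>(),
                legacyState != null ? legacyState : new ArrayList<>());
    }
}
{code}
The only point of the sketch is that the operator looks up the old sink's operator state alongside its own during restore and passes both to the writer; the concrete state names and interfaces are those defined by the FileSink implementation.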
[jira] [Updated] (FLINK-20338) Make the `StatefulSinkWriterOperator` load `StreamingFileSink`'s state.
[ https://issues.apache.org/jira/browse/FLINK-20338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-20338: -- Summary: Make the `StatefulSinkWriterOperator` load `StreamingFileSink`'s state. (was: Make the `StatefulSinkWriterOperator` load previous `StreamingFileSink`'s state.) > Make the `StatefulSinkWriterOperator` load `StreamingFileSink`'s state. > --- > > Key: FLINK-20338 > URL: https://issues.apache.org/jira/browse/FLINK-20338 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Blocker > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-20337) Make migrate `StreamingFileSink` to `FileSink` possible
[ https://issues.apache.org/jira/browse/FLINK-20337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17239098#comment-17239098 ] Guowei Ma commented on FLINK-20337: --- Sorry [~kkl0u] for responding so late. I think it will be easier for others to follow what is happening if we merge the two tasks together. So I will close the two sub-tasks. > Make migrate `StreamingFileSink` to `FileSink` possible > --- > > Key: FLINK-20337 > URL: https://issues.apache.org/jira/browse/FLINK-20337 > Project: Flink > Issue Type: Improvement > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Assignee: Guowei Ma >Priority: Critical > Fix For: 1.12.0 > > > Flink-1.12 introduces the `FileSink` in FLINK-19510, which can guarantee > exactly-once semantics in both the streaming and batch execution modes. We > need to provide a way for users of `StreamingFileSink` to migrate > from `StreamingFileSink` to `FileSink`. > For this purpose we propose to let the new sink writer operator load > the previous StreamingFileSink's state so that the `SinkWriter` has > the opportunity to handle the old state. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-20338) Make the `StatefulSinkWriterOperator` load `StreamingFileSink`'s state.
[ https://issues.apache.org/jira/browse/FLINK-20338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma closed FLINK-20338. - Resolution: Won't Fix > Make the `StatefulSinkWriterOperator` load `StreamingFileSink`'s state. > --- > > Key: FLINK-20338 > URL: https://issues.apache.org/jira/browse/FLINK-20338 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > Labels: pull-request-available > Fix For: 1.12.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-20339) `FileWriter` support to load StreamingFileSink's state.
[ https://issues.apache.org/jira/browse/FLINK-20339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma closed FLINK-20339. - Resolution: Won't Fix > `FileWriter` support to load StreamingFileSink's state. > --- > > Key: FLINK-20339 > URL: https://issues.apache.org/jira/browse/FLINK-20339 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Critical > Fix For: 1.12.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-17038) Decouple resolving Type and creating TypeInformation process
Guowei Ma created FLINK-17038: - Summary: Decouple resolving Type and creating TypeInformation process Key: FLINK-17038 URL: https://issues.apache.org/jira/browse/FLINK-17038 Project: Flink Issue Type: Improvement Components: API / DataStream Affects Versions: 1.10.0 Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17038) Decouple resolving Type and creating TypeInformation process
[ https://issues.apache.org/jira/browse/FLINK-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-17038: -- Parent: FLINK-15674 Issue Type: Sub-task (was: Improvement) > Decouple resolving Type and creating TypeInformation process > > > Key: FLINK-17038 > URL: https://issues.apache.org/jira/browse/FLINK-17038 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.10.0 >Reporter: Guowei Ma >Priority: Critical > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-17039) Introduce TypeInformationExtractor interface
Guowei Ma created FLINK-17039: - Summary: Introduce TypeInformationExtractor interface Key: FLINK-17039 URL: https://issues.apache.org/jira/browse/FLINK-17039 Project: Flink Issue Type: Sub-task Components: API / DataStream Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17038) Decouple resolving Type from creating TypeInformation process
[ https://issues.apache.org/jira/browse/FLINK-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-17038: -- Summary: Decouple resolving Type from creating TypeInformation process (was: Decouple resolving Type and creating TypeInformation process) > Decouple resolving Type from creating TypeInformation process > - > > Key: FLINK-17038 > URL: https://issues.apache.org/jira/browse/FLINK-17038 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Affects Versions: 1.10.0 >Reporter: Guowei Ma >Priority: Critical > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17039) Introduce TypeInformationExtractor
[ https://issues.apache.org/jira/browse/FLINK-17039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-17039: -- Summary: Introduce TypeInformationExtractor (was: Introduce TypeInformationExtractor interface) > Introduce TypeInformationExtractor > -- > > Key: FLINK-17039 > URL: https://issues.apache.org/jira/browse/FLINK-17039 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Critical > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-17041) Migrate current TypeInformation creation to the TypeInformationExtractor framework
Guowei Ma created FLINK-17041: - Summary: Migrate current TypeInformation creation to the TypeInformationExtractor framework Key: FLINK-17041 URL: https://issues.apache.org/jira/browse/FLINK-17041 Project: Flink Issue Type: Sub-task Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17039) Introduce TypeInformationExtractor framework
[ https://issues.apache.org/jira/browse/FLINK-17039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-17039: -- Summary: Introduce TypeInformationExtractor framework (was: Introduce TypeInformationExtractor) > Introduce TypeInformationExtractor framework > --- > > Key: FLINK-17039 > URL: https://issues.apache.org/jira/browse/FLINK-17039 > Project: Flink > Issue Type: Sub-task > Components: API / DataStream >Reporter: Guowei Ma >Priority: Critical > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-19260) Update documentation based on bin/flink output
Guowei Ma created FLINK-19260: - Summary: Update documentation based on bin/flink output Key: FLINK-19260 URL: https://issues.apache.org/jira/browse/FLINK-19260 Project: Flink Issue Type: Bug Components: Command Line Client, Documentation Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19260) Update documentation based on bin/flink output
[ https://issues.apache.org/jira/browse/FLINK-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19260: -- Parent: FLINK-11779 Issue Type: Sub-task (was: Bug) > Update documentation based on bin/flink output > -- > > Key: FLINK-19260 > URL: https://issues.apache.org/jira/browse/FLINK-19260 > Project: Flink > Issue Type: Sub-task > Components: Command Line Client, Documentation >Reporter: Guowei Ma >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-19261) Update document according to RestOptions
Guowei Ma created FLINK-19261: - Summary: Update document according to RestOptions Key: FLINK-19261 URL: https://issues.apache.org/jira/browse/FLINK-19261 Project: Flink Issue Type: Sub-task Components: Documentation Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19261) Update documentation about `RestOptions`
[ https://issues.apache.org/jira/browse/FLINK-19261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19261: -- Summary: Update documentation about `RestOptions` (was: Update document according to RestOptions) > Update documentation about `RestOptions` > > > Key: FLINK-19261 > URL: https://issues.apache.org/jira/browse/FLINK-19261 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Guowei Ma >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19260) Update documentation based on bin/flink output
[ https://issues.apache.org/jira/browse/FLINK-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19260: -- Component/s: (was: Command Line Client) > Update documentation based on bin/flink output > -- > > Key: FLINK-19260 > URL: https://issues.apache.org/jira/browse/FLINK-19260 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Guowei Ma >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-19373) `jmx.server.port`'s document is missing.
Guowei Ma created FLINK-19373: - Summary: `jmx.server.port`'s document is missing. Key: FLINK-19373 URL: https://issues.apache.org/jira/browse/FLINK-19373 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 1.12.0 Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-19374) update the `table.exec.state.ttl`'s documentation
Guowei Ma created FLINK-19374: - Summary: update the `table.exec.state.ttl`'s documentation Key: FLINK-19374 URL: https://issues.apache.org/jira/browse/FLINK-19374 Project: Flink Issue Type: Task Components: Documentation Affects Versions: 1.12.0 Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-19375) The `kubernetes.secrets` and `kubernetes.env.secretKeyRef`'s document is missing
Guowei Ma created FLINK-19375: - Summary: The `kubernetes.secrets` and `kubernetes.env.secretKeyRef`'s document is missing Key: FLINK-19375 URL: https://issues.apache.org/jira/browse/FLINK-19375 Project: Flink Issue Type: Task Components: Documentation Affects Versions: 1.12.0 Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19375) The `kubernetes.secrets` and `kubernetes.env.secretKeyRef`'s document needs to be updated
[ https://issues.apache.org/jira/browse/FLINK-19375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma updated FLINK-19375: -- Summary: The `kubernetes.secrets` and `kubernetes.env.secretKeyRef`'s document needs to be updated (was: The `kubernetes.secrets` and `kubernetes.env.secretKeyRef`'s document needed update) > The `kubernetes.secrets` and `kubernetes.env.secretKeyRef`'s document needs > to be updated > > > Key: FLINK-19375 > URL: https://issues.apache.org/jira/browse/FLINK-19375 > Project: Flink > Issue Type: Task > Components: Documentation >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-19375) The `kubernetes.secrets` and `kubernetes.env.secretKeyRef`'s document needed update
[jira] [Created] (FLINK-19376) `table.generated-code.max-length` and `table.sql-dialect`'s document needs to be updated
Guowei Ma created FLINK-19376: - Summary: `table.generated-code.max-length` and `table.sql-dialect`'s document needs to be updated Key: FLINK-19376 URL: https://issues.apache.org/jira/browse/FLINK-19376 Project: Flink Issue Type: Task Components: Documentation Affects Versions: 1.12.0 Reporter: Guowei Ma -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-19261) Update documentation about `RestOptions`
[ https://issues.apache.org/jira/browse/FLINK-19261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma closed FLINK-19261. - Resolution: Invalid > Update documentation about `RestOptions` > > > Key: FLINK-19261 > URL: https://issues.apache.org/jira/browse/FLINK-19261 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Guowei Ma >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-19260) Update documentation based on bin/flink output
[ https://issues.apache.org/jira/browse/FLINK-19260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guowei Ma closed FLINK-19260. - Resolution: Invalid > Update documentation based on bin/flink output > -- > > Key: FLINK-19260 > URL: https://issues.apache.org/jira/browse/FLINK-19260 > Project: Flink > Issue Type: Sub-task > Components: Documentation >Reporter: Guowei Ma >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-19373) `jmx.server.port`'s document is missing.
[ https://issues.apache.org/jira/browse/FLINK-19373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17200726#comment-17200726 ] Guowei Ma commented on FLINK-19373: --- It makes sense. I will do it. > `jmx.server.port`'s document is missing. > > > Key: FLINK-19373 > URL: https://issues.apache.org/jira/browse/FLINK-19373 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.12.0 >Reporter: Guowei Ma >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)