[jira] [Created] (FLINK-24243) Clean up the code and avoid warning messages introduced by deprecated API
Dian Fu created FLINK-24243: --- Summary: Clean up the code and avoid warning messages introduced by deprecated API Key: FLINK-24243 URL: https://issues.apache.org/jira/browse/FLINK-24243 Project: Flink Issue Type: Improvement Components: API / Python Reporter: Dian Fu Assignee: Dian Fu Currently, there are quite a few warning messages when executing PyFlink jobs, e.g.
{code}
Process finished with exit code 0
/usr/local/Cellar/python@3.7/3.7.10_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py:883: ResourceWarning: subprocess 75115 is still running
  ResourceWarning, source=self)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/Users/dianfu/code/src/apache/flink/flink-python/pyflink/table/udf.py:326: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
  if not isinstance(input_types, collections.Iterable) \
/Users/dianfu/code/src/apache/flink/flink-python/pyflink/table/table_environment.py:537: DeprecationWarning: Deprecated in 1.10. Use create_table instead.
  warnings.warn("Deprecated in 1.10. Use create_table instead.", DeprecationWarning)
/Users/dianfu/venv/examples-37/lib/python3.7/site-packages/future/standard_library/__init__.py:65: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
2021-09-10 15:03:47,335 - apache_beam.typehints.native_type_compatibility - INFO - Using Any for unsupported type: typing.Sequence[~T]
/Users/dianfu/code/src/apache/flink/flink-python/pyflink/fn_execution/state_impl.py:677: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
  class RemovableConcatIterator(collections.Iterator):
/Users/dianfu/code/src/apache/flink/flink-python/pyflink/fn_execution/utils/operation_utils.py:19: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
  from collections import Generator
{code}
We should clean up the code and avoid these warning messages by migrating to the latest APIs (e.g. importing the ABCs from collections.abc instead of collections). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24244) Add logging about whether it's executed in loopback mode
Dian Fu created FLINK-24244: --- Summary: Add logging about whether it's executed in loopback mode Key: FLINK-24244 URL: https://issues.apache.org/jira/browse/FLINK-24244 Project: Flink Issue Type: Improvement Components: API / Python Reporter: Dian Fu Assignee: Dian Fu Fix For: 1.14.0 Currently, it's unclear whether a job is running in loopback mode or process mode; it would be great to add some logging to make this clear. This would be helpful for debugging, e.g. to tell whether a failed test was running in loopback mode or process mode. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24245) Fix the problem caused by multiple jobs sharing the loopback mode address stored in the environment variable in PyFlink
Huang Xingbo created FLINK-24245: Summary: Fix the problem caused by multiple jobs sharing the loopback mode address stored in the environment variable in PyFlink Key: FLINK-24245 URL: https://issues.apache.org/jira/browse/FLINK-24245 Project: Flink Issue Type: Bug Components: API / Python Affects Versions: 1.14.0, 1.15.0 Reporter: Huang Xingbo Assignee: Huang Xingbo Fix For: 1.14.0, 1.15.0 In loopback mode, we store the loopback address in an environment variable, which causes other jobs to also run in loopback mode. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24246) Bump Pulsar to 2.8.1
Yufan Sheng created FLINK-24246: --- Summary: Bump Pulsar to 2.8.1 Key: FLINK-24246 URL: https://issues.apache.org/jira/browse/FLINK-24246 Project: Flink Issue Type: Improvement Components: Connectors / Pulsar Affects Versions: 1.14.0, 1.15.0 Reporter: Yufan Sheng Fix For: 1.14.0 Pulsar 2.8.1 has been released. The hack for getting the TxnId from a Pulsar Transaction can be removed after bumping flink-connector-pulsar's pulsar-client-all dependency to 2.8.1. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24247) Uniformly limit the value of ABS(offset) in all windows
liwei li created FLINK-24247: Summary: Uniformly limit the value of ABS(offset) in all windows Key: FLINK-24247 URL: https://issues.apache.org/jira/browse/FLINK-24247 Project: Flink Issue Type: Improvement Components: API / DataStream Reporter: liwei li When I tested the Window TVF offset ([FLINK-23747|https://issues.apache.org/jira/browse/FLINK-23747?focusedCommentId=17410923&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17410923]), I found that the TUMBLE window limits the value of ABS(offset) to be less than the window size, but the HOP/CUMULATE windows do not have this limit. Should we uniformly add this restriction to all of them? If adding the limit is appropriate, assign me the ticket and I'll create a PR.
{code:java}
checkArgument(
    Math.abs(offset) < size,
    String.format(
        "Tumbling Window parameters must satisfy abs(offset) < size, but got size %dms and offset %dms.",
        size, offset));
{code}
[https://github.com/apache/flink/blob/c13b749782d0d72647ec6d73ce1eb22b57f2be7d/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/window/slicing/SliceAssigners.java#L154] -- This message was sent by Atlassian Jira (v8.3.4#803005)
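For context, the analogous guard for the hopping/cumulative assigners could look roughly like the sketch below. The class and constructor are illustrative only and do not mirror the real SliceAssigners internals; only the checkArgument call corresponds to the restriction discussed in the ticket.
{code:java}
import static org.apache.flink.util.Preconditions.checkArgument;

// Illustrative sketch only: shows where an abs(offset) check analogous to the
// existing tumbling-window guard could be added for a hopping assigner.
public class HopSliceAssignerSketch {

    private final long size;
    private final long slide;
    private final long offset;

    public HopSliceAssignerSketch(long size, long slide, long offset) {
        checkArgument(
                Math.abs(offset) < size,
                String.format(
                        "Hopping Window parameters must satisfy abs(offset) < size, but got size %dms and offset %dms.",
                        size, offset));
        this.size = size;
        this.slide = slide;
        this.offset = offset;
    }
}
{code}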
[jira] [Created] (FLINK-24248) flink-clients dependency missing in Gradle Example
Konstantin Knauf created FLINK-24248: Summary: flink-clients dependency missing in Gradle Example Key: FLINK-24248 URL: https://issues.apache.org/jira/browse/FLINK-24248 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 1.13.2 Reporter: Konstantin Knauf The Gradle example on the "Project Configuration" page is missing ``` compile "org.apache.flink:flink-clients_${scalaBinaryVersion}:${flinkVersion}" ``` which is needed to be able to run the program locally. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24249) Login from keytab fails when disk is damaged
YuAngZhang created FLINK-24249: -- Summary: Login from keytab fails when disk is damaged Key: FLINK-24249 URL: https://issues.apache.org/jira/browse/FLINK-24249 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing Affects Versions: 1.13.2 Reporter: YuAngZhang Flink on YARN localizes the user keytab on the local machine disk. When that disk is damaged, triggering a checkpoint fails once the JobManager calls mkdirs on HDFS, but the Flink job itself does not fail, so I can't recover from a checkpoint. The exception looks like this:
{code:java}
java.io.IOException: Failed on local exception: java.io.IOException: Login failure for joey from keytab /data01/yarn/nm/usercache/joey/appcache/application_1631093653028_0015/container_e134_1631093653028_0015_01_01/krb5.keytab; Host Details : local host is: "localhost/10.1.1.37"; destination host is: "localhost":8020;
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.ipc.Client.call(Client.java:1474) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.ipc.Client.call(Client.java:1401) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at com.sun.proxy.$Proxy41.mkdirs(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at com.sun.proxy.$Proxy42.mkdirs(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2742) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2713) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1819) ~[flink-shaded-hadoop-2-uber-2.6.5-10.0.jar:2.6.5-10.0]
	at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.mkdirs(HadoopFileSystem.java:183) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.initializeLocationForCheckpoint(FsCheckpointStorageAccess.java:129) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.initializeCheckpoint(CheckpointCoordinator.java:689) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.lambda$startTriggeringCheckpoint$2(CheckpointCoordinator.java:543) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
	at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) [?:1.8.0_181]
	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) [?:1.8.0_181]
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) [?:1.8.0_181]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_181]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_181]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
	at java.uti
[jira] [Created] (FLINK-24250) Add De/Serialization API to tear-down user code
Arvid Heise created FLINK-24250: --- Summary: Add De/Serialization API to tear-down user code Key: FLINK-24250 URL: https://issues.apache.org/jira/browse/FLINK-24250 Project: Flink Issue Type: Improvement Components: API / Type Serialization System Reporter: Arvid Heise FLINK-17306 added {{open}} to {{(De)SerializationSchema}}. We should provide a symmetric {{closeX}} method. See [ML De/Serialization API to tear-down user code|https://lists.apache.org/thread.html/r31d36e076bd192bc87ff8c896557ac0e2812229ee0f2c67a1b9e6e12%40%3Cuser.flink.apache.org%3E] for more details. -- This message was sent by Atlassian Jira (v8.3.4#803005)
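A condensed sketch of the idea (not the real interface): the hook name close() and the default-method approach below are assumptions mirroring the open() hook that FLINK-17306 added; the ticket leaves the exact name and signature open.
{code:java}
import java.io.IOException;
import java.io.Serializable;

// Condensed sketch, not the actual DeserializationSchema: illustrates a
// tear-down hook symmetric to the open() method introduced in FLINK-17306.
public interface SketchDeserializationSchema<T> extends Serializable {

    // Existing-style initialization hook (simplified; the real one takes an InitializationContext).
    default void open() throws Exception {}

    T deserialize(byte[] message) throws IOException;

    // Proposed symmetric tear-down hook so user code can release resources
    // (connections, schema-registry clients, ...) when the task shuts down.
    default void close() throws Exception {}
}
{code}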
[jira] [Created] (FLINK-24251) Make default constructor of BinaryStringData private
Caizhi Weng created FLINK-24251: --- Summary: Make default constructor of BinaryStringData private Key: FLINK-24251 URL: https://issues.apache.org/jira/browse/FLINK-24251 Project: Flink Issue Type: Improvement Components: Table SQL / Runtime Reporter: Caizhi Weng In FLINK-23289 we added a not-null check for {{BinarySection}}. After that change, the default constructor of {{BinaryStringData}} constructs a {{BinaryStringData}} with a {{null}} Java object and a {{null}} {{BinarySection}}. This differs from the previous behavior, where the default constructor constructed an empty binary string. Although {{BinaryStringData}} is an internal class, this might confuse some developers (I myself have been confused) if they build their programs around this class. So we should make the default constructor construct an empty binary string again without breaking the not-null check. -- This message was sent by Atlassian Jira (v8.3.4#803005)
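An illustrative sketch of the proposed direction, not the actual BinaryStringData class: hide the default constructor so the no-argument path always yields an empty string while the not-null check stays intact.
{code:java}
// Illustrative sketch only: the point is that the no-argument path should
// produce an empty string rather than an instance with a null section.
public final class StringDataSketch {

    private final String javaObject;

    // Default constructor made private so callers cannot obtain an
    // instance backed by a null Java object and a null binary section.
    private StringDataSketch() {
        this("");
    }

    private StringDataSketch(String javaObject) {
        this.javaObject = javaObject;
    }

    /** Factory that preserves the old "empty binary string" behavior. */
    public static StringDataSketch empty() {
        return new StringDataSketch();
    }

    public static StringDataSketch fromString(String str) {
        return new StringDataSketch(str);
    }
}
{code}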
Re: [DISCUSS] Automated architectural tests
Hi Ingo, Thanks for driving this discussion. Some use cases come to my mind, maybe those rules could be checked by the same way. 1. new introduced `StreamExecNode` is implemented json serialization/deserialization. Currently it is checked in `JsonSerdeCoverageTest`. 2. new introduced `RelNode` could be covered by all `MetadataHandler`s. Currently this rule not exists yet. Best regards, JING ZHANG Ingo Bürk 于2021年9月9日周四 下午6:49写道: > Great! I'll work on getting the PR into an actual, proper shape now, > including looking at found violations more carefully and eventually > freezing current violations (maybe removing some quick-wins). > > One more thing I just ran into is that ArchUnit doesn't explicitly support > Scala; while many things just work (since it's still byte code), > Scala-specific concepts like traits seem to cause issues. I'll have to > exclude Scala code from the checks for now, I think. > > > Ingo > > On Tue, Sep 7, 2021 at 5:03 PM Chesnay Schepler > wrote: > > > I would say that's fine time-wise. > > > > On 07/09/2021 15:29, Ingo Bürk wrote: > > > Thanks, Chesnay. I updated the PR to use a separate module now, and ran > > it > > > on a few modules (some Table API modules and a couple connectors). The > CI > > > seemed to take ~2.5min for executing the tests; that's certainly not > > > negligible. On the other hand, even the few tests implemented already > > found > > > several violations ("several" is an understatement, but I manually > > verified > > > some of them, not all of them). > > > > > > On Mon, Sep 6, 2021 at 3:44 PM Chesnay Schepler > > wrote: > > > > > >> While flink-tests is currently the best choice in that it has the > > >> biggest classpath, it is also the module already requiring the most > time > > >> on CI. > > >> > > >> Furthermore, given that we ideally cover all APIs (including > connectors > > >> & formats), having that mess of dependencies in flink-tests may > > >> interfere with existing / future tests. > > >> > > >> As such I would prefer a separate module, as annoying as that may be. > > >> > > >> On 06/09/2021 15:26, Ingo Bürk wrote: > > >>> I just quickly chatted with the author/maintainer of ArchUnit, and a > > >> module > > >>> which depends on every module that should be tested seems to be the > > best > > >>> solution. How do you feel about using flink-tests for this vs. > having a > > >>> separate module for this purpose? > > >>> > > >>> > > >>> Ingo > > >>> > > >>> On Mon, Sep 6, 2021 at 3:04 PM Ingo Bürk wrote: > > >>> > > Hi Chesnay, > > > > Those are all great questions, and I want to tackle those as well. > For > > >> the > > moment I went per-module, but runtime-wise that isn't ideal the more > > modules we'd activate this in. ArchUnit does cache classes between > > >> tests, > > but if we run them individually per module, we'd still add up quite > a > > >> bit > > of execution time (a single module in my IDE is around 10s with the > > >> tests I > > currently have implemented, but I suspect the bottleneck here is the > > importing of classes, not the number of tests). Ideally we'd just > run > > >> them > > once in a module with a big enough classpath to cover everything. If > > we > > have such a place, that would probably be our best shot. I'll also > > keep > > investigating here, of course. > > > > For now I just pushed a solution to avoid the overlap when executing > > it > > per-module by matching on the URI. 
It's not the prettiest solution, > > but > > does work; but that's more to not fail the tests in unrelated > modules > > >> and > > doesn't help much with execution time. > > > > > > Ingo > > > > On Mon, Sep 6, 2021 at 1:57 PM Chesnay Schepler > > > wrote: > > > > > Do you have an estimate for long these tests would run for? > > > > > > For project-wide tests, what are the options for setting that up? > > > If we let the tests run per-module then I guess they'd overlap > > > considerably (because other Flink modules are being put on the > > > classpath), which isn't ideal. > > > > > > On 06/09/2021 13:51, David Morávek wrote: > > >> Hi Ingo, > > >> > > >> +1 for this effort. This could automate lot of "written rules" > that > > >> are > > >> easy to forget about / not to be aware of (such as that each test > > >> should > > >> extend the TestLogger as Till has already mentioned). > > >> > > >> I went trough your examples and ArchUnit looks really powerful and > > >> expressive while still being easy to read. > > >> > > >> Best, > > >> D. > > >> > > >> On Mon, Sep 6, 2021 at 1:00 PM Ingo Bürk > > wrote: > > >> > > >>> Thanks for your input Chesnay! > > >>> > > >>> The limitations of ArchUnit probably mostly stem from the fact > that > > >> it > > >>> operates on byte code and thus can't access anything not > acces
Re: [DISCUSS] Automated architectural tests
Thanks, JING ZHANG. The first one is definitely doable, I will add it to my list. The second one I'll have to take a look at. On Fri, Sep 10, 2021 at 2:27 PM JING ZHANG wrote: > Hi Ingo, > Thanks for driving this discussion. > Some use cases come to my mind, maybe those rules could be checked by the > same way. > 1. new introduced `StreamExecNode` is implemented json > serialization/deserialization. Currently it is checked in > `JsonSerdeCoverageTest`. > 2. new introduced `RelNode` could be covered by all `MetadataHandler`s. > Currently this rule not exists yet. > > Best regards, > JING ZHANG > > > Ingo Bürk 于2021年9月9日周四 下午6:49写道: > > > Great! I'll work on getting the PR into an actual, proper shape now, > > including looking at found violations more carefully and eventually > > freezing current violations (maybe removing some quick-wins). > > > > One more thing I just ran into is that ArchUnit doesn't explicitly > support > > Scala; while many things just work (since it's still byte code), > > Scala-specific concepts like traits seem to cause issues. I'll have to > > exclude Scala code from the checks for now, I think. > > > > > > Ingo > > > > On Tue, Sep 7, 2021 at 5:03 PM Chesnay Schepler > > wrote: > > > > > I would say that's fine time-wise. > > > > > > On 07/09/2021 15:29, Ingo Bürk wrote: > > > > Thanks, Chesnay. I updated the PR to use a separate module now, and > ran > > > it > > > > on a few modules (some Table API modules and a couple connectors). > The > > CI > > > > seemed to take ~2.5min for executing the tests; that's certainly not > > > > negligible. On the other hand, even the few tests implemented already > > > found > > > > several violations ("several" is an understatement, but I manually > > > verified > > > > some of them, not all of them). > > > > > > > > On Mon, Sep 6, 2021 at 3:44 PM Chesnay Schepler > > > wrote: > > > > > > > >> While flink-tests is currently the best choice in that it has the > > > >> biggest classpath, it is also the module already requiring the most > > time > > > >> on CI. > > > >> > > > >> Furthermore, given that we ideally cover all APIs (including > > connectors > > > >> & formats), having that mess of dependencies in flink-tests may > > > >> interfere with existing / future tests. > > > >> > > > >> As such I would prefer a separate module, as annoying as that may > be. > > > >> > > > >> On 06/09/2021 15:26, Ingo Bürk wrote: > > > >>> I just quickly chatted with the author/maintainer of ArchUnit, and > a > > > >> module > > > >>> which depends on every module that should be tested seems to be the > > > best > > > >>> solution. How do you feel about using flink-tests for this vs. > > having a > > > >>> separate module for this purpose? > > > >>> > > > >>> > > > >>> Ingo > > > >>> > > > >>> On Mon, Sep 6, 2021 at 3:04 PM Ingo Bürk > wrote: > > > >>> > > > Hi Chesnay, > > > > > > Those are all great questions, and I want to tackle those as well. > > For > > > >> the > > > moment I went per-module, but runtime-wise that isn't ideal the > more > > > modules we'd activate this in. ArchUnit does cache classes between > > > >> tests, > > > but if we run them individually per module, we'd still add up > quite > > a > > > >> bit > > > of execution time (a single module in my IDE is around 10s with > the > > > >> tests I > > > currently have implemented, but I suspect the bottleneck here is > the > > > importing of classes, not the number of tests). 
Ideally we'd just > > run > > > >> them > > > once in a module with a big enough classpath to cover everything. > If > > > we > > > have such a place, that would probably be our best shot. I'll also > > > keep > > > investigating here, of course. > > > > > > For now I just pushed a solution to avoid the overlap when > executing > > > it > > > per-module by matching on the URI. It's not the prettiest > solution, > > > but > > > does work; but that's more to not fail the tests in unrelated > > modules > > > >> and > > > doesn't help much with execution time. > > > > > > > > > Ingo > > > > > > On Mon, Sep 6, 2021 at 1:57 PM Chesnay Schepler < > ches...@apache.org > > > > > > wrote: > > > > > > > Do you have an estimate for long these tests would run for? > > > > > > > > For project-wide tests, what are the options for setting that up? > > > > If we let the tests run per-module then I guess they'd overlap > > > > considerably (because other Flink modules are being put on the > > > > classpath), which isn't ideal. > > > > > > > > On 06/09/2021 13:51, David Morávek wrote: > > > >> Hi Ingo, > > > >> > > > >> +1 for this effort. This could automate lot of "written rules" > > that > > > >> are > > > >> easy to forget about / not to be aware of (such as that each > test > > > >> should > > > >> extend the TestLogger as Till has already mentioned). > > > >>>
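For readers unfamiliar with ArchUnit, a minimal rule of the kind discussed in this thread (the "every test should extend TestLogger" convention mentioned above) could look like the sketch below; the imported package and the exact rule wording are illustrative.
{code:java}
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;

public class TestLoggerRuleSketch {

    public static void main(String[] args) {
        // Import the classes to check; importing a broad package is what makes
        // a shared module with a big classpath attractive, as discussed above.
        JavaClasses imported = new ClassFileImporter().importPackages("org.apache.flink");

        // Every class whose simple name ends with "Test" should extend TestLogger.
        ArchRule rule =
                classes()
                        .that().haveSimpleNameEndingWith("Test")
                        .should().beAssignableTo("org.apache.flink.util.TestLogger");

        rule.check(imported);
    }
}
{code}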
[jira] [Created] (FLINK-24252) HybridSource should return TypeInformation
Arvid Heise created FLINK-24252: --- Summary: HybridSource should return TypeInformation Key: FLINK-24252 URL: https://issues.apache.org/jira/browse/FLINK-24252 Project: Flink Issue Type: Improvement Reporter: Arvid Heise Because {{HybridSource}} never binds the actual type, it would be a good addition to implement {{ResultTypeQueryable}} to improve usability. The type should be fetched or inferred from the wrapped sources. -- This message was sent by Atlassian Jira (v8.3.4#803005)
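A rough sketch of the suggested improvement; how the first wrapped source is obtained is an assumption here, since HybridSource's internals are not shown in the ticket.
{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.ResultTypeQueryable;

// Sketch only: assumes the first wrapped source can report its produced type;
// the real HybridSource would fetch or infer the type from its child sources.
public class TypedHybridSourceSketch<T> implements ResultTypeQueryable<T> {

    private final ResultTypeQueryable<T> firstUnderlyingSource;

    public TypedHybridSourceSketch(ResultTypeQueryable<T> firstUnderlyingSource) {
        this.firstUnderlyingSource = firstUnderlyingSource;
    }

    @Override
    public TypeInformation<T> getProducedType() {
        // Delegate to the wrapped source, as suggested in the ticket.
        return firstUnderlyingSource.getProducedType();
    }
}
{code}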
[jira] [Created] (FLINK-24253) Load JdbcDialects via Service Loaders
Seth Wiesman created FLINK-24253: Summary: Load JdbcDialects via Service Loaders Key: FLINK-24253 URL: https://issues.apache.org/jira/browse/FLINK-24253 Project: Flink Issue Type: Improvement Components: Connectors / JDBC Reporter: Seth Wiesman Assignee: Seth Wiesman The JDBC connector currently supports a hardcoded set of JDBC dialects. To support other JDBC datastores without adding additional feature complexity into Flink, we should allow plugging in additional dialects. -- This message was sent by Atlassian Jira (v8.3.4#803005)
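A rough sketch of the service-loader approach; the JdbcDialectFactory SPI name and its methods are hypothetical, used only to illustrate discovery via META-INF/services instead of a hardcoded dialect list.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.ServiceLoader;

// Sketch of the plugin mechanism: dialects are discovered from the classpath
// instead of a hardcoded set. "JdbcDialectFactory" is a hypothetical SPI name.
public final class JdbcDialectLoaderSketch {

    public interface JdbcDialectFactory {
        /** Whether this factory handles the given JDBC URL, e.g. "jdbc:postgresql://...". */
        boolean acceptsUrl(String url);

        String dialectName();
    }

    public static Optional<JdbcDialectFactory> load(String url, ClassLoader classLoader) {
        List<JdbcDialectFactory> factories = new ArrayList<>();
        ServiceLoader.load(JdbcDialectFactory.class, classLoader).forEach(factories::add);
        return factories.stream().filter(f -> f.acceptsUrl(url)).findFirst();
    }
}
{code}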
[jira] [Created] (FLINK-24254) Support hints configuration hints
Timo Walther created FLINK-24254: Summary: Support hints configuration hints Key: FLINK-24254 URL: https://issues.apache.org/jira/browse/FLINK-24254 Project: Flink Issue Type: Bug Reporter: Timo Walther -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-24255) Test Environment / Mini Cluster do not forward configuration.
Stephan Ewen created FLINK-24255: Summary: Test Environment / Mini Cluster do not forward configuration. Key: FLINK-24255 URL: https://issues.apache.org/jira/browse/FLINK-24255 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.13.2 Reporter: Stephan Ewen Assignee: Stephan Ewen Fix For: 1.14.0 When using {{StreamExecutionEnvironment#getExecutionEnvironment(Configuration)}}, the config should determine the characteristics of the execution. The config is, for example, passed to the local environment in the local execution case, and used during the instantiation of the MiniCluster. But when using the {{TestStreamEnvironment}} and the {{MiniClusterWithClientRule}}, the config is ignored. The issue is that the {{StreamExecutionEnvironmentFactory}} in {{TestStreamEnvironment}} ignores the config that is passed to it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
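To make the expectation concrete, a small usage sketch of the pattern described above; the configuration key and the job are examples only, not taken from the ticket.
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConfigForwardingSketch {

    public static void main(String[] args) throws Exception {
        // Users expect options set here to take effect in the (mini) cluster
        // backing the environment; per the ticket, TestStreamEnvironment /
        // MiniClusterWithClientRule currently drop this configuration.
        Configuration conf = new Configuration();
        conf.setString("restart-strategy", "fixed-delay"); // example option only

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);

        env.fromElements(1, 2, 3).print();
        env.execute("config-forwarding-sketch");
    }
}
{code}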
[jira] [Created] (FLINK-24256) Add JavaScript SDK for Stateful Functions
Igal Shilman created FLINK-24256: Summary: Add JavaScript SDK for Stateful Functions Key: FLINK-24256 URL: https://issues.apache.org/jira/browse/FLINK-24256 Project: Flink Issue Type: Improvement Components: Stateful Functions Reporter: Igal Shilman -- This message was sent by Atlassian Jira (v8.3.4#803005)