godfrey he created FLINK-20436:
--
Summary: Simplify type parameter of ExecNode
Key: FLINK-20436
URL: https://issues.apache.org/jira/browse/FLINK-20436
Project: Flink
Issue Type: Sub-task
godfrey he created FLINK-20437:
--
Summary: Port ExecNode to Java
Key: FLINK-20437
URL: https://issues.apache.org/jira/browse/FLINK-20437
Project: Flink
Issue Type: Sub-task
Reporter:
Matthias created FLINK-20438:
Summary:
org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest fails
due to missing output
Key: FLINK-20438
URL: https://issues.apache.org/jira/browse/FLINK-20438
+1 for the migration
(I agree with Dawid; for me the most important benefit is better support for
parameterized tests).
Regards,
Roman
On Mon, Nov 30, 2020 at 9:42 PM Arvid Heise wrote:
> Hi Till,
>
> the immediate benefit would be mostly nested tests for a better test structure
> and new paramete
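For context, a minimal sketch of the two JUnit 5 features referred to above, nested test classes and parameterized tests; the class, method, and value names are illustrative and not taken from the Flink code base:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class MigrationSketchTest {

    @Nested
    class WhenEmpty {
        @Test
        void startsEmpty() {
            // Nested classes group related cases and can share the outer class's setup.
            assertTrue(new java.util.ArrayList<Integer>().isEmpty());
        }
    }

    @Nested
    class WithElements {
        // @ParameterizedTest replaces the JUnit 4 @RunWith(Parameterized.class)
        // boilerplate and can be applied per test method instead of per class.
        @ParameterizedTest
        @ValueSource(ints = {1, 16, 1024})
        void acceptsVariousSizes(int size) {
            assertTrue(size > 0);
        }
    }
}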
Zhu Zhu created FLINK-20439:
---
Summary: Consider simplifying or removing mechanism to
scheduleOrUpdateConsumers
Key: FLINK-20439
URL: https://issues.apache.org/jira/browse/FLINK-20439
Project: Flink
zouyunhe created FLINK-20440:
Summary: `LAST_VALUE` aggregate function cannot be used in HOP
window
Key: FLINK-20440
URL: https://issues.apache.org/jira/browse/FLINK-20440
Project: Flink
Issue
Till Rohrmann created FLINK-20441:
-
Summary: Deprecate CheckpointConfig.setPreferCheckpointForRecovery
Key: FLINK-20441
URL: https://issues.apache.org/jira/browse/FLINK-20441
Project: Flink
I
-1
- The flink-python jar contains 2 license files in the root directory and
another 2 in the META-INF directory. This should be reduced to 1
under META-INF. I'm inclined to block the release on this because the
root license is BSD.
- The flink-python jar appears to bundle lz4 (native libr
Thanks a lot for checking the release candidate so quickly.
I agree that the BSD License file in the root of the jar is a red flag.
I filed a ticket for addressing the issues Chesnay found:
https://issues.apache.org/jira/browse/FLINK-20442
I'm hereby officially cancelling this release candidate.
Robert Metzger created FLINK-20442:
--
Summary: Fix license documentation mistakes in flink-python.jar
Key: FLINK-20442
URL: https://issues.apache.org/jira/browse/FLINK-20442
Project: Flink
Is
Gee created FLINK-20443:
---
Summary: ContinuousProcessingTimeTrigger loses data in the last
interval per window
Key: FLINK-20443
URL: https://issues.apache.org/jira/browse/FLINK-20443
Project: Flink
Issue
Arvid Heise created FLINK-20444:
---
Summary: Chain AsyncWaitOperator to new sources
Key: FLINK-20444
URL: https://issues.apache.org/jira/browse/FLINK-20444
Project: Flink
Issue Type: Improvement
Hey,
Is it currently possible to obtain the state that was created by a SQL query
via the State Processor API? I am able to load the checkpoint via the State
Processor API, but I haven't been able to figure out a way to access the
internal state of my JOIN query.
Best Regards,
Dom.
Ke Li created FLINK-20445:
-
Summary: NoMatchingTableFactoryException
Key: FLINK-20445
URL: https://issues.apache.org/jira/browse/FLINK-20445
Project: Flink
Issue Type: Bug
Components: Table
Ke Li created FLINK-20446:
-
Summary: NoMatchingTableFactoryException
Key: FLINK-20446
URL: https://issues.apache.org/jira/browse/FLINK-20446
Project: Flink
Issue Type: Bug
Components: Table
When using Flink 1.9.3 with the Blink planner, I tried to union some tables with
Chinese constants and got an OutOfMemoryError (Java heap space).
Here are the code and the error message.
I switched to the old planner and it works.
Then I upgraded Flink to 1.11.2 and it also works.
Also, it does work wh
*Code in Flink 1.9.3:*
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.St
Zhenwei Feng created FLINK-20447:
Summary: Querying group by PK does not work
Key: FLINK-20447
URL: https://issues.apache.org/jira/browse/FLINK-20447
Project: Flink
Issue Type: Improvement
Hi Dom,
+ user mailing list
Once you know the state descriptor, I think you could query the join
state. The state name is easy to get via [1]; it should be "left-records" and
"right-records", and you could check what kind of join it is and whether it has a
unique key to decide what kind of state (val
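A hypothetical sketch of reading one side of that join state with the State Processor API; the savepoint path, the operator uid, the key/value types, and the use of a ValueState named "left-records" are all assumptions that would have to be matched to the concrete job:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.types.Row;
import org.apache.flink.util.Collector;

public class ReadJoinStateSketch {

    // Emits the stored record for each key on one side of the join.
    // The state name and the Row key/value types are assumptions.
    static class LeftRecordsReader extends KeyedStateReaderFunction<Row, Row> {
        private transient ValueState<Row> leftRecords;

        @Override
        public void open(Configuration parameters) {
            leftRecords = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("left-records", Row.class));
        }

        @Override
        public void readKey(Row key, Context ctx, Collector<Row> out) throws Exception {
            Row value = leftRecords.value();
            if (value != null) {
                out.collect(value);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Path and operator uid are placeholders; SQL operators get generated IDs,
        // so the uid of the join operator has to be looked up for the concrete job.
        ExistingSavepoint savepoint =
                Savepoint.load(env, "file:///tmp/savepoint-dir", new MemoryStateBackend());
        DataSet<Row> leftSide =
                savepoint.readKeyedState("join-operator-uid", new LeftRecordsReader());
        leftSide.print();
    }
}

Depending on the join variant and whether the join key is unique, the state may instead be a map state of records to counts, in which case the ValueStateDescriptor above would have to be replaced with a MapStateDescriptor of the matching types.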
Rui Li created FLINK-20448:
--
Summary: Obsolete generated avro classes
Key: FLINK-20448
URL: https://issues.apache.org/jira/browse/FLINK-20448
Project: Flink
Issue Type: Test
Components: Fo
Robert Metzger created FLINK-20449:
--
Summary: UnalignedCheckpointITCase times out
Key: FLINK-20449
URL: https://issues.apache.org/jira/browse/FLINK-20449
Project: Flink
Issue Type: Bug