Tzu-Li (Gordon) Tai created FLINK-16706:
---
Summary: Update Stateful Functions master branch version to
2.0-SNAPSHOT
Key: FLINK-16706
URL: https://issues.apache.org/jira/browse/FLINK-16706
Project: Flink
jiangbo created FLINK-16707:
---
Summary: Fix potential memory leak of rest server when using
session/standalone cluster for version 1.8
Key: FLINK-16707
URL: https://issues.apache.org/jira/browse/FLINK-16707
tangshangwen created FLINK-16708:
Summary: When a JDBC connection has been closed, the retry policy
of the JDBCUpsertOutputFormat cannot take effect and may result in data loss
Key: FLINK-16708
URL: https://issues.apache.org/jira/browse/FLINK-16708
Jun Zhang created FLINK-16709:
-
Summary: add a set command to set job name when submit job on sql
client
Key: FLINK-16709
URL: https://issues.apache.org/jira/browse/FLINK-16709
Project: Flink
Is
Hi, after looking at your code, I think I might have found the root cause.
The reason is the additional `AssignerWithPeriodicWatermarks` [1] you added
in the long version.
Since the temporal table join could only get the joined results of [3000, 6500,
8500], the watermark this operator would generate will
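For illustration, a minimal sketch of what such a periodic watermark assigner
might look like, assuming the Long field itself is used as the event timestamp
(the class name and logic are illustrative; the assigner from the original code
is not shown in this excerpt):

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

// Hypothetical assigner: treats the Long value as the event-time timestamp.
public class LongFieldWatermarkAssigner implements AssignerWithPeriodicWatermarks<Long> {

    private long currentMaxTimestamp = Long.MIN_VALUE;

    @Override
    public long extractTimestamp(Long element, long previousElementTimestamp) {
        // Track the largest timestamp seen so far.
        currentMaxTimestamp = Math.max(currentMaxTimestamp, element);
        return element;
    }

    @Override
    public Watermark getCurrentWatermark() {
        // The periodic watermark trails the maximum observed timestamp,
        // so it only advances as far as the joined results allow.
        return currentMaxTimestamp == Long.MIN_VALUE
                ? null
                : new Watermark(currentMaxTimestamp);
    }
}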
Gary Yao created FLINK-16710:
Summary: Log Upload blocks Main Thread in TaskExecutor
Key: FLINK-16710
URL: https://issues.apache.org/jira/browse/FLINK-16710
Project: Flink
Issue Type: Bug
Thanks for summarising the discussion points, Till.
# Configuration
## Env variables
Agree, this looks like an independent effort.
## Dynamic program arguments
Indeed, jobmanager.sh needs a small extension. It can be addressed
independently, but I think it has a chance to be addressed in this release
Hi all!
The main point I wanted to throw into the discussion is the following:
- With more and more use cases, more and more tools go into Flink
- If everything becomes a "core feature", it will make the project hard
to develop in the future. Thinking "library" / "plugin" / "extension" style
w
Hey,
generally, that's what I thought, more or less. I think I understand the
behavior itself, thanks for explaining it to me.
But what actually concerns me is the fact that this
*assignTimestampsAndWatermarks* is required if you select this Long
field, which basically means that the type of s
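As a minimal sketch of the call in question, assuming a stream of the Long
event-time values mentioned above (3000, 6500, 8500); the actual pipeline
from the thread is not shown in this excerpt:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;

public class AssignTimestampsSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the stream obtained by selecting the Long field.
        DataStream<Long> events = env.fromElements(3000L, 6500L, 8500L);

        // Selecting the Long field as event time means timestamps and
        // watermarks have to be assigned explicitly, e.g. with an
        // ascending-timestamp extractor.
        DataStream<Long> withTimestamps = events.assignTimestampsAndWatermarks(
                new AscendingTimestampExtractor<Long>() {
                    @Override
                    public long extractAscendingTimestamp(Long element) {
                        return element;
                    }
                });

        withTimestamps.print();
        env.execute("assign-timestamps-sketch");
    }
}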
Dear community,
happy to share this week's community digest featuring "Flink Forward
Virtual Conference 2020", a small update on Flink 1.10.1, a better
Filesystem connector for the Table API & SQL, new source/sink interfaces
for the Table API and a bit more.
Flink Development
==
* [r
Thanks for the comment, Stephan.
> - If everything becomes a "core feature", it will make the project hard
> to develop in the future. Thinking "library" / "plugin" / "extension" style
> where possible helps.
Completely agree. It is much more important to design a mechanism than to
focus on a sp
Jingsong Lee created FLINK-16711:
Summary: ParquetColumnarRowSplitReader read footer from wrong end
Key: FLINK-16711
URL: https://issues.apache.org/jira/browse/FLINK-16711
Project: Flink
Issu
Zhijiang created FLINK-16712:
Summary: Refactor StreamTask to construct final fields
Key: FLINK-16712
URL: https://issues.apache.org/jira/browse/FLINK-16712
Project: Flink
Issue Type: Task
Thanks Jingsong for bringing up this discussion. +1 to this proposal; Bowen's
proposal makes much sense to me.
This is also a painful problem for PyFlink users. Currently there is no
built-in, easy-to-use table source/sink, and it requires users to write a lot
of code just to try out PyFlink
Thanks Bowen, Jark and Dian for your feedback and suggestions.
I have reorganized it based on your suggestions and will try to expose the
following DDLs:
1. datagen source:
- easy startup/test for streaming jobs
- performance testing
DDL:
CREATE TABLE user (
  id BIGINT,
  age INT,
  description STRING
) WITH (
  'conne
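For reference, a minimal Java sketch of how such a datagen table might be
registered and queried through a TableEnvironment. The 'connector' = 'datagen'
option, the table name "users" and the use of the unified executeSql API are
assumptions, since the DDL above is truncated:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DatagenSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Assumed connector option; "users" is used instead of "user" to
        // avoid clashing with the reserved SQL keyword.
        tEnv.executeSql(
                "CREATE TABLE users (" +
                "  id BIGINT," +
                "  age INT," +
                "  description STRING" +
                ") WITH (" +
                "  'connector' = 'datagen'" +
                ")");

        // Prints generated rows until the job is cancelled.
        tEnv.executeSql("SELECT * FROM users").print();
    }
}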
Thanks for the comments, Stephan & Becket.
@Stephan
I see your concern, and I completely agree with you that we should first
think about the "library" / "plugin" / "extension" style if possible.
> If GPUs are sliced and assigned during scheduling, there may be reason,
> although it looks that it w