Shimin Yang created FLINK-11084:
---
Summary: Incorrect output after two successive split and select
Key: FLINK-11084
URL: https://issues.apache.org/jira/browse/FLINK-11084
Project: Flink
Issue Typ
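FLINK-11084 concerns wrong results when split() and select() are chained twice. As a hedged illustration of the intended semantics (a toy Python model, not Flink code — the helper names `split`/`select` mirror the DataStream API but are defined here): split() tags each element with output names, select() keeps the elements carrying a given tag, and a second split/select should filter the already-selected substream again.

```python
# Toy model of DataStream.split()/select() semantics (not Flink code).

def split(stream, selector):
    """Pair each element with the list of output names chosen by selector."""
    return [(value, selector(value)) for value in stream]

def select(tagged, name):
    """Keep elements whose tag list contains the requested output name."""
    return [value for value, names in tagged if name in names]

events = [1, 2, 3, 4, 5, 6]
# First split/select: keep the even elements.
first = select(split(events, lambda v: ["even"] if v % 2 == 0 else ["odd"]), "even")
# Second split/select on the substream: keep elements greater than 3.
second = select(split(first, lambda v: ["big"] if v > 3 else ["small"]), "big")
print(second)  # [4, 6]
```

The issue report suggests the real operator chain did not behave like this composition of filters.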
boshu Zheng created FLINK-11083:
---
Summary: CRowSerializerConfigSnapshot is not instantiable
Key: FLINK-11083
URL: https://issues.apache.org/jira/browse/FLINK-11083
Project: Flink
Issue Type: Bu
Hi Timo,
Thank you for the valuable feedback.
First of all, I think we don't need to align the SQL functionality to the
Descriptor. Since SQL is a more standard API, we should be as cautious as
possible when extending the SQL syntax. If something can be done in a standard
way, we shouldn't introduce so
Hi Timo/Shuyi/Lin,
Thanks for the discussions. It seems that we are converging to something
meaningful. Here are some of my thoughts:
1. +1 on MVP DDL
3. Markers for source or sink seem more about permissions on tables that belong
to a security component. Unless the table is created differently
zhijiang created FLINK-11082:
Summary: Increase backlog only if it is available for consumption
Key: FLINK-11082
URL: https://issues.apache.org/jira/browse/FLINK-11082
Project: Flink
Issue Type:
Hi Timo and Shuyi,
thanks for your feedback.
1. Scope
Agree with you that we should focus on the MVP DDL first.
2. Constraints
yes, this can be a follow-up issue.
3. Sources/Sinks
If a TABLE has both read/write access requirements, should we declare it
using
`CREATE [SOURCE_SINK|BOTH] TABLE tableNa
Hi all,
Various discussions in the mailing list & JIRA tickets [2] have been brought
up in the past regarding windowing operation performance. As we
experimented internally with some of our extreme use cases, we found that
using a slice-based implementation can optimize Flink's windowing mecha
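The slice-based idea referenced above can be sketched roughly as follows (a minimal Python illustration under my own assumptions, not the proposed Flink implementation): events are pre-aggregated into non-overlapping slices of `slide` width, and each sliding window then combines the few slices it covers, so an element is aggregated once rather than once per overlapping window.

```python
# Hedged sketch of slice-based sliding-window aggregation (illustration only).
from collections import defaultdict

def slice_aggregate(events, slide):
    """events: (timestamp, value) pairs -> per-slice partial sums."""
    slices = defaultdict(int)
    for ts, value in events:
        slices[ts // slide] += value  # each event touches exactly one slice
    return slices

def window_result(slices, window_end, size, slide):
    """Combine the pre-aggregated slices covered by the window ending at window_end."""
    first = (window_end - size) // slide
    last = window_end // slide  # exclusive
    return sum(slices[i] for i in range(first, last))

events = [(1, 10), (3, 20), (6, 30), (9, 40)]
slices = slice_aggregate(events, slide=5)  # slice 0 -> 30, slice 1 -> 70
print(window_result(slices, window_end=10, size=10, slide=5))  # 100
```

With overlapping windows (e.g. size 10, slide 5), each window reuses the shared slice sums instead of re-scanning its elements.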
Hi all,
the community recently updated Flink's LICENSE and NOTICE files with
FLINK-10987 [1]. Since this is a quite tricky topic, we added a licensing
guide [2] which tries to sum up the most important points you should pay
attention to when adding or changing Flink's dependencies. I highly
encour
Till Rohrmann created FLINK-11081:
-
Summary: Support binding port range for REST server
Key: FLINK-11081
URL: https://issues.apache.org/jira/browse/FLINK-11081
Project: Flink
Issue Type: Impr
Hi Jark and Shuyi,
thanks for pushing the DDL efforts forward. I agree that we should aim
to combine both Shuyi's design and your design.
Here are a couple of concerns that I think we should address in the design:
1. Scope: Let's focus on an MVP DDL for CREATE TABLE statements first.
I thin
Chesnay Schepler created FLINK-11080:
Summary: Define flink-connector-elasticsearch6 uber-jar
dependencies via artifactSet
Key: FLINK-11080
URL: https://issues.apache.org/jira/browse/FLINK-11080
P
Chesnay Schepler created FLINK-11079:
Summary: Update LICENSE and NOTICE files for flink-storm-examples
Key: FLINK-11079
URL: https://issues.apache.org/jira/browse/FLINK-11079
Project: Flink
TisonKun created FLINK-11078:
Summary: Capability to define the numerical range for running
TaskExecutors
Key: FLINK-11078
URL: https://issues.apache.org/jira/browse/FLINK-11078
Project: Flink
I
Hi Shuyi,
It's exciting to see we can make such great progress here.
Regarding the watermark:
Watermarks can be defined on any columns (including computed-column) in
table schema.
The computed column can be computed from existing columns using builtin
functions and *UserDefinedFunctions* (S
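The "watermark on a computed column" idea above can be modeled in a few lines (a toy Python sketch of the semantics under my own assumptions, not Flink code): the event-time column is derived from a raw field by a function, and the watermark trails the maximum observed event time by a fixed delay.

```python
# Toy sketch: watermark derived from a computed event-time column.

def watermarks(rows, compute_ts, delay):
    """Yield (row, watermark) where watermark = max event time seen - delay."""
    max_ts = float("-inf")
    for row in rows:
        max_ts = max(max_ts, compute_ts(row))  # compute_ts models the computed column
        yield row, max_ts - delay

rows = [{"ts_millis": 1000}, {"ts_millis": 3000}, {"ts_millis": 2500}]
out = list(watermarks(rows, lambda r: r["ts_millis"], delay=500))
print([wm for _, wm in out])  # [500, 2500, 2500]
```

Note the watermark stays at 2500 for the late third row, which is the monotonicity the strategy is meant to guarantee.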
Hi Till,
It is true that after the first job submission, there will be no ambiguity
in terms of whether a cached table is used or not. That is the same for the
cache() without returning a CachedTable.
Conceptually one could think of cache() as introducing a caching operator
from which you need
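The two cache() designs being debated can be contrasted with a toy model (plain Python under my own naming, not the Flink Table API): a cache() that returns an explicit CachedTable handle makes it unambiguous which reads hit the cached data, whereas in-place caching leaves that implicit on the original table.

```python
# Toy contrast of cache() returning a CachedTable handle (illustration only).

class Table:
    def __init__(self, compute):
        self._compute = compute  # recomputed on every read

    def read(self):
        return self._compute()

    def cache(self):
        """Materialize once and return an explicit CachedTable handle."""
        return CachedTable(self._compute())

class CachedTable(Table):
    def __init__(self, materialized):
        super().__init__(lambda: materialized)

calls = []
table = Table(lambda: calls.append(1) or [1, 2, 3])
cached = table.cache()  # the computation runs exactly once, here
print(cached.read(), cached.read(), len(calls))  # [1, 2, 3] [1, 2, 3] 1
```

Reads through `cached` never recompute, while reads through `table` still would; with in-place cache() both paths share one name and the distinction disappears.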
aitozi created FLINK-11077:
--
Summary: Make subtask aware of the timeout of checkpoint and abort
the current ongoing async checkpoint
Key: FLINK-11077
URL: https://issues.apache.org/jira/browse/FLINK-11077
Pro
Hi Jark and Shaoxuan,
Thanks a lot for the summary. I think we are making great progress here.
Below are my thoughts.
*(1) watermark definition
IMO, it's better to keep it consistent with the rowtime extractors and
watermark strategies defined in
https://ci.apache.org/projects/flink/flink-docs-st
Maciej Bryński created FLINK-11076:
--
Summary: Processing for Avro generated classes is very slow
Key: FLINK-11076
URL: https://issues.apache.org/jira/browse/FLINK-11076
Project: Flink
Issue
Hi everyone,
thanks for starting the discussion. In general, I like the idea of
making Flink SQL queries more concise.
However, I don't like to diverge from standard SQL. So far, we managed
to add a lot of operators and functionality while being standard
compliant. Personally, I don't see a
Tzu-Li (Gordon) Tai created FLINK-11075:
---
Summary: Remove redundant code path in CompatibilityUtil
Key: FLINK-11075
URL: https://issues.apache.org/jira/browse/FLINK-11075
Project: Flink
Hi XueFu, Jark,
Thanks for your feedback. That's really helpful.
Since Flink has already supported some complex types like MAP and ARRAY,
it would be possible to add some higher-order functions to deal with MAP and
ARRAY, like Presto[1,2] and Spark have done.
As for "syntax for the lambda funct
Hi all,
Thanks Kurt, you see more benefits of the unification than I do.
I quite agree with Kurt's views: DataStream, DataSet and Table remain
independent for now, and DataSet will be subsumed into DataStream in the
future. The collection execution mode is replaced by the mini cluster. The
high-level semantic
Dian Fu created FLINK-11074:
---
Summary: Improve the harness test to make it possible test with
state backend
Key: FLINK-11074
URL: https://issues.apache.org/jira/browse/FLINK-11074
Project: Flink
I