Hi Tison,
Thanks for bringing this up.
I think it's fine to break the backward compatibility of the client API, given
that ClusterClient is not well designed for public usage.
But from my perspective, we should postpone any modification to existing
interfaces until we reach an agreement on the new client API. Ot
Fabian Hueske created FLINK-14115:
-
Summary: Translate DataStream Code Walkthrough to Chinese
Key: FLINK-14115
URL: https://issues.apache.org/jira/browse/FLINK-14115
Project: Flink
Issue Type
Fabian Hueske created FLINK-14116:
-
Summary: Translate changes on Getting Started Overview to Chinese
Key: FLINK-14116
URL: https://issues.apache.org/jira/browse/FLINK-14116
Project: Flink
Is
Fabian Hueske created FLINK-14117:
-
Summary: Translate changes on documentation index page to Chinese
Key: FLINK-14117
URL: https://issues.apache.org/jira/browse/FLINK-14117
Project: Flink
Is
Yingjie Cao created FLINK-14118:
---
Summary: Reduce the unnecessary flushing when there is no data
available for flush
Key: FLINK-14118
URL: https://issues.apache.org/jira/browse/FLINK-14118
Project: Flin
After some development and thinking, I have a general understanding.
+1 that registering a source/sink does not fit into the SQL world.
I am OK with having a deprecated registerTemporarySource/Sink to stay compatible
with the old ways.
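(Not from the original mail: a minimal sketch of the "SQL world" way of defining a
table via DDL instead of registerTableSource, using Flink 1.9 APIs; the table name
and the connector/format properties are illustrative placeholders only.)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DdlInsteadOfRegisterSource {
    public static void main(String[] args) {
        // Blink planner, streaming mode (Flink 1.9-style settings).
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner().inStreamingMode().build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // The table is described declaratively and ends up as a CatalogTable,
        // instead of being registered as a TableSource object.
        // Connector/format keys below are placeholders for whatever connector is used.
        tEnv.sqlUpdate(
            "CREATE TABLE orders (" +
            "  order_id BIGINT," +
            "  amount DOUBLE" +
            ") WITH (" +
            "  'connector.type' = 'filesystem'," +
            "  'connector.path' = '/tmp/orders.csv'," +
            "  'format.type' = 'csv'" +
            ")");
    }
}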
Best,
Jingsong Lee
---
Hi,
+1 to striving for consensus on the remaining topics. We are close to a
conclusion, and it would waste a lot of time if we had to resume the topic later.
+1 to “1-part/override”, and I’m also fine with Timo’s “cat.db.fun” way to
override a catalog function.
I’m not sure about “system.sy
Hi all,
Sorry to join this party late. Big +1 to this FLIP, especially for the part about
dropping "registerTableSink & registerTableSource". These are indeed legacy
and we should try to unify them through CatalogTable after we introduce
the concept of Catalog.
From my understanding, what we can regis
Hi Xiaogang,
Thanks for your reply.
According to the feature discussion thread [1], client API enhancement is a
planned feature for 1.10, and thus I think this thread is valid if we can reach
a consensus and introduce the new client API in this development cycle.
Best,
tison.
[1]
https://lists.apa
Hi,
Job1 is a simple ETL job and doesn’t keep much state (only Kafka
offsets), so it should work well.
Job2 is an unbounded join, which puts the data of both input streams into
state in the join operator.
As the input stream is unbounded and 100 GB per day as you described, if you are
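(The reply above is cut off; not part of the original mail, but one common way to
keep the state of such an unbounded SQL join from growing without bound is idle
state retention. A minimal sketch, assuming the Table/SQL API and Flink 1.9-era
method names; the retention window is an arbitrary example value.)

import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class BoundedJoinState {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner().inStreamingMode().build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // Drop join state that has not been accessed for 12-24 hours, so the
        // unbounded join does not accumulate state indefinitely. The window has to
        // be derived from how late a matching record can still arrive.
        tEnv.getConfig().setIdleStateRetentionTime(Time.hours(12), Time.hours(24));
    }
}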
I agree that NewClusterClient and ClusterClient can be merged now that there is
no pre-FLIP-6 code base anymore.
Side note: there are a lot of methods in ClusterClient that should not really
be there, in my opinion:
- all the getOptimizedPlan*() methods
- the run() methods. In the end, only sub
Jark Wu created FLINK-14119:
---
Summary: Clean idle state for RetractableTopNFunction
Key: FLINK-14119
URL: https://issues.apache.org/jira/browse/FLINK-14119
Project: Flink
Issue Type: Bug
Hi,
I think this discussion and the one for FLIP-64 are very connected. To resolve
the differences, I think we have to go back to the basic principles and find
consensus there. The basic questions I see are:
- Do we want to support overriding built-in functions?
- Do we want to support overridi
No reason to keep the separation. The NewClusterClient interface was only
introduced to add new methods without having to implement them for the
other ClusterClient implementations.
Cheers,
Till
On Wed, Sep 18, 2019 at 3:17 PM Aljoscha Krettek
wrote:
> I agree that NewClusterClient and ClusterC
Till Rohrmann created FLINK-14120:
-
Summary: SystemProcessingTimeServiceTest.testImmediateShutdown
failed on Travis
Key: FLINK-14120
URL: https://issues.apache.org/jira/browse/FLINK-14120
Project: Fli
That makes sense. I suggest we add a note to the FLIP to avoid confusion.
On Wed, Sep 18, 2019 at 9:51 AM Xintong Song wrote:
> @tao
>
> I think we cannot limit the CPU usage of a slot, nor isolate the usage
> between slots. We do have CPU limits for the task executor in some
> scenarios, such
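(For context only, not part of the quoted mail: one scenario where a
per-task-executor CPU budget already exists is YARN, via the
yarn.containers.vcores option. A minimal sketch using the plain string key;
whether this becomes a hard limit depends on the YARN scheduler/cgroups setup,
and slots inside the container still share it.)

import org.apache.flink.configuration.Configuration;

public class TaskExecutorVcores {
    public static void main(String[] args) {
        // Request 2 vcores per TaskManager container on YARN.
        Configuration conf = new Configuration();
        conf.setInteger("yarn.containers.vcores", 2);
        System.out.println(conf.getInteger("yarn.containers.vcores", -1));
    }
}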
Hi Yijie,
Thanks for sharing the Pulsar FLIP.
Would you mind enabling comments/suggestions on the google doc link? This
way the contributors from the community can comment on the doc.
Best,
Rong
On Mon, Sep 16, 2019 at 5:43 AM Yijie Shen
wrote:
> Hello everyone,
>
> I've drafted a FLIP that de
John Lonergan created FLINK-14121:
-
Summary: upgrade to commons-compress:1.19 due to CVE
Key: FLINK-14121
URL: https://issues.apache.org/jira/browse/FLINK-14121
Project: Flink
Issue Type: Impr
Hi Aljoscha,
Thanks for the summary; these are great questions to be answered. The
answer to your first question is clear: there is general agreement that
temporary functions may override built-in functions.
However, your second and third questions are sort of related, as a function
reference can
Hi,
I think it makes sense to start voting at this point.
Option 1: Only 1-part identifiers
PROS:
- allows shadowing built-in functions
CONS:
- inconsistent with all the other objects, both permanent & temporary
- does not allow shadowing catalog functions
Option 2: Special keyword for built-in fu
One last additional comment on Option 2. The reason why I prefer option 3 is
that in option 3 all objects are internally identified with 3 parts. This
makes it easier to handle in different places, e.g. while persisting
views, as all objects have a uniform representation.
On Thu, 19 Sep 2019, 07:31 Da
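(Not from the original mail: a small sketch of what "all objects identified with
3 parts" could look like, assuming Flink's ObjectIdentifier class in
org.apache.flink.table.catalog; the catalog/database/function names are
placeholders, and "system.system" is only the reserved namespace proposed in
option 3.)

import org.apache.flink.table.catalog.ObjectIdentifier;

public class UniformFunctionIdentifiers {
    public static void main(String[] args) {
        // Under option 3 both a built-in function and a catalog function are
        // addressed by a full 3-part identifier internally; only the reserved
        // "system.system" namespace marks the built-in one.
        ObjectIdentifier builtin = ObjectIdentifier.of("system", "system", "concat");
        ObjectIdentifier catalogFn = ObjectIdentifier.of("mycat", "mydb", "myfunc");
        System.out.println(builtin);
        System.out.println(catalogFn);
    }
}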
Hi,
For #2, as Xuefu and I discussed offline, the key point is to introduce a
keyword in the SQL DDL to distinguish temp functions that override built-in
functions vs. temp functions that override catalog functions. It can be
something other than "GLOBAL", like "BUILTIN" (e.g. "CREATE BUILTIN TEMP
FUNC
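(A sketch of the two DDL forms under discussion. This syntax is only a proposal
in this thread and is not implemented in any released Flink version; the function
names and classes are placeholders.)

public class ProposedTempFunctionDdl {
    public static void main(String[] args) {
        // Proposed: the keyword marks a temporary function that shadows a built-in one.
        String overrideBuiltin =
            "CREATE BUILTIN TEMPORARY FUNCTION concat AS 'com.example.MyConcat'";
        // Without the keyword, the temporary function lives in (and may shadow a
        // function of) the current catalog/database.
        String overrideCatalogFn =
            "CREATE TEMPORARY FUNCTION myfunc AS 'com.example.MyFunc'";
        System.out.println(overrideBuiltin);
        System.out.println(overrideCatalogFn);
    }
}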
Hi Dawid,
"GLOBAL" is a temporary keyword that was given to the approach. It can be
changed to something else for better.
The difference between this and the #3 approach is that we only need the
keyword for this create DDL. For other places (such as function
referencing), no keyword or special na
@Bowen I am not suggesting introducing an additional catalog. I think we need
to get rid of the current built-in catalog.
@Xuefu in option #3 we also don't need to reference the special
catalog anywhere else besides the CREATE statement. The resolution
behaviour is exactly the same in bo
Re: The reason why I prefer option 3 is that in option 3 all objects
internally are identified with 3 parts.
True, but the problem we have is not about how to differentiate each type of
object internally. Rather, it's about how a user references an
object unambiguously and consistently.
Tha
@Dawid, Re: we also don't need to reference the special catalog
anywhere else.
True. But once we allow such a reference, users can do so in any possible
place where a function name is expected, which we then have to handle.
That's a big difference, I think.
Thanks,
Xuefu
On Wed, Sep 18, 201
Hi Stephan,
Sorry for the belated reply. You are right that the functionality proposed
in this FLIP can be implemented outside the Flink core as an ecosystem
project.
The main motivation of this FLIP is twofold:
1. Improve the performance of intermediate result sharing in the same
session.
Usin
Seth Wiesman created FLINK-14122:
Summary: Extend State Processor API to read ListCheckpointed
operator state
Key: FLINK-14122
URL: https://issues.apache.org/jira/browse/FLINK-14122
Project: Flink
Hi Yijie,
Could you please follow the FLIP process to start a new FLIP [DISCUSSION]
thread in the mailing list?
https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals#FlinkImprovementProposals-Process
I see two FLIP-69 discussions on the mailing list now. So there is a FLIP
Hi Dawid,
Can temporary tables achieve the same capabilities as catalog tables?
Like statistics: CatalogTableStatistics, CatalogColumnStatistics,
PartitionStatistics.
Like partition support: we have added some catalog-equivalent interfaces on
TableSource/TableSink: getPartitions, getPartitionFieldN
Hi all,
As communicated in an email thread, I'm proposing a Flink SQL DDL enhancement. I
have a draft design doc that I'd like to convert to a FLIP. Thus, it would
be great if someone could grant me write access to Confluence. My
Confluence ID is zjuwangg.
It would be nice if any of you
Hi everyone,
Thanks all for joining the discussion in the doc[1].
It seems that the discussion has converged and there is consensus on the
current FLIP document.
If there is no objection, I would like to convert it into a cwiki FLIP page
and start the voting process.
For more details, please refer to
+1 to start vote process.
Best,
Kurt
On Thu, Sep 19, 2019 at 10:54 AM Jark Wu wrote:
> Hi everyone,
>
> Thanks all for joining the discussion in the doc[1].
> It seems that the discussion is converged and there is a consensus on the
> current FLIP document.
> If there is no objection, I would
Hi JingsongLee,
From my understanding they can. Underneath they will be CatalogTables. The
difference is the lifetime of the tables. Also, some of the user-facing
interfaces cannot be persisted, e.g. a DataStream. Therefore we must have
separate methods for that. In the end the temporary tables are
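(Not from the original mail: a minimal sketch of the DataStream case mentioned
above, using the Flink 1.9 method registerDataStream; createTemporaryView is the
naming proposed by FLIP-64. Such a table only exists for the lifetime of the
session and cannot be persisted into a catalog.)

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class TemporaryViewFromDataStream {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        DataStream<String> words = env.fromElements("a", "b", "c");
        // Backed by a live DataStream, so it cannot be written to a persistent
        // catalog; it is visible only within this session.
        tEnv.registerDataStream("words", words);    // Flink 1.9 API
        // tEnv.createTemporaryView("words", words); // naming proposed in FLIP-64
    }
}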
Hi all,
I would like to start the vote for FLIP-66 [1], which was discussed and
reached consensus in the discussion thread [2].
The vote will be open for at least 72 hours. I'll try to close it after
Sep. 24, 08:00 UTC, unless there is an objection or not enough votes.
Thanks,
Jark
[1]:
https://
I agree with Xuefu that handling functions inconsistently with all the other
objects is not a big problem.
Regarding option #3, the special "system.system" namespace may confuse
users: they need to know the set of built-in function names to know when to use
the "system.system" namespace.
What will happen if us
liupengcheng created FLINK-14123:
Summary: Change taskmanager.memory.fraction default value to 0.6
Key: FLINK-14123
URL: https://issues.apache.org/jira/browse/FLINK-14123
Project: Flink
Issue
Hey all,
Recently, I have been doing secondary development based on Flink. When compiling
Flink master (up to date) with Java 1.8.0_77, I got errors as follows:
compile (default-compile) on project flink-table-api-java: Compilation
failure
/home/*/zzsmdfj/sflink/flink-table/flink-table-api-java/src/main/java/org/