Zhu Zhu created FLINK-15582:
---
Summary: Enable batch scheduling tests in
LegacySchedulerBatchSchedulingTest for DefaultScheduler as well
Key: FLINK-15582
URL: https://issues.apache.org/jira/browse/FLINK-15582
I guess one of the most important results of this experiment is to have a
good tuning guide available for users who are past the initial try-out
phase because the default settings will be kind of a compromise. I assume
that this is part of the outstanding FLIP-49 documentation task.
If we limit Ro
+1 for the JVM metaspace and overhead changes.
On Tue, Jan 14, 2020 at 11:19 AM Till Rohrmann wrote:
> I guess one of the most important results of this experiment is to have a
> good tuning guide available for users who are past the initial try-out
> phase because the default settings will be k
Arvid Heise created FLINK-15583:
---
Summary: Scala walkthrough archetype does not compile on Java 11
Key: FLINK-15583
URL: https://issues.apache.org/jira/browse/FLINK-15583
Project: Flink
Issue T
Hi Mehmet,
You can subscribe to the dev mailing list by sending an email to
dev-subscr...@flink.apache.org, not dev@flink.apache.org.
Hope this helps.
Mehmet Ozan Güven wrote on Tue, Jan 14, 2020 at 7:25 PM:
>
>
--
Benchao Li
School of Electronics Engineering and Computer Science, Peking University
Tel:+86-1
forward to Mehmet.
Best,
tison.
Benchao Li wrote on Tue, Jan 14, 2020 at 7:28 PM:
> Hi Mehmet,
>
> You can subscribe to the dev mailing list by sending an email to
> dev-subscr...@flink.apache.org, not dev@flink.apache.org.
> Hope this helps.
>
> Mehmet Ozan Güven wrote on Tue, Jan 14, 2020 at 7:25 PM:
>
> >
> >
>
> --
>
+1
Thanks a lot for driving this. @ForwardXu
Best,
Hequn
On Mon, Jan 13, 2020 at 10:07 AM Kurt Young wrote:
> +1
>
> Best,
> Kurt
>
>
> On Tue, Jan 7, 2020 at 2:59 PM Jingsong Li wrote:
>
> > +1 non-binding. Thanks Forward for driving this.
> >
> > Considering that it is made up of independent
Benoît Paris created FLINK-15584:
Summary: Give nested data type of ROWs in ValidationException
Key: FLINK-15584
URL: https://issues.apache.org/jira/browse/FLINK-15584
Project: Flink
Issue Ty
Jark Wu created FLINK-15585:
---
Summary: Improve function identifier string in plan digest
Key: FLINK-15585
URL: https://issues.apache.org/jira/browse/FLINK-15585
Project: Flink
Issue Type: Improveme
Hi all,
The vote on FLIP-90 has been supported by 3 committers. Thanks everyone for
voting. Related discussions are in [1], [2] & [3]; the results of this vote
are in [4] & [5].
Best,
Forward
[1]
https://docs.google.com/document/d/1JfaFYIFOAY8P2pFhOYNCQ9RTzwF4l85_bnTvImOLKMk/edit#heading=h.76mb88c
Clearing the `flink.size` option and setting `process.size` could indeed be
a solution. The thing I'm wondering is what would happen if the user has
configured `task.heap.size` and `managed.size` instead of `flink.size`.
Would we also ignore these settings? If not, then we risk running into the
situ
Piotr Nowojski created FLINK-15586:
--
Summary: BucketingSink is ignoring plugins when trying to
re-instantiate the HadoopFileSystem
Key: FLINK-15586
URL: https://issues.apache.org/jira/browse/FLINK-15586
Hi all!
Great that we have already tried out the new FLIP-49 setup with bigger jobs.
I am also +1 for the JVM metaspace and overhead changes.
Regarding 0.3 vs 0.4 for managed memory, +1 for having more managed memory
for the RocksDB-limited case.
In general, this looks mostly to be about memory distribu
I think that problem exists anyway and is independent of the "-tm" option.
You can have a combination of `task.heap.size`, `managed.size` and
`framework.heap.size` that conflicts with `flink.size`. In that case, we
get an exception during startup (incompatible memory configuration)?
That i
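To make the conflict concrete, here is a hypothetical `flink-conf.yaml` fragment (option keys per FLIP-49; the values are made up for illustration) where the fine-grained options cannot fit into the configured `flink.size`, so startup should fail fast:

```yaml
taskmanager.memory.flink.size: 1024m
taskmanager.memory.framework.heap.size: 128m
taskmanager.memory.task.heap.size: 768m
taskmanager.memory.managed.size: 512m
# 128m + 768m + 512m = 1408m already exceeds flink.size (1024m) before
# network memory is even accounted for, so the TaskManager should fail
# fast at startup with an incompatible-memory-configuration error.
```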
I like the idea of having a larger default "flink.size" in the config.yaml.
Maybe we don't need to double it, but something like 1280m would be okay?
On Tue, Jan 14, 2020 at 3:47 PM Andrey Zagrebin
wrote:
> Hi all!
>
> Great that we have already tried out new FLIP-49 with the bigger jobs.
>
> I
LCID Fire created FLINK-15587:
-
Summary: Flink image does not run on OpenShift
Key: FLINK-15587
URL: https://issues.apache.org/jira/browse/FLINK-15587
Project: Flink
Issue Type: Bug
R
Bowen Li created FLINK-15588:
Summary: check registered udf via catalog API cannot be a scala
inner class
Key: FLINK-15588
URL: https://issues.apache.org/jira/browse/FLINK-15588
Project: Flink
I
Hi Cam,
could you share a few more details about your job (e.g. which sources you
are using, what your settings are, etc.)? Ideally you can provide a minimal
example so that we can better understand the program.
From a high-level perspective, there might be different problems: First of
all, Flink d
Hi Till,
Thanks for your response.
Our sources are S3 and Kinesis. We have run several tests, and we are able
to take a savepoint/checkpoint, but only when S3 completes reading. At
that point, our pipeline has watermarks for the other operators, but not the
source operator. We are not running `PROCE
Hi all,
Stephan, Till and I had another offline discussion today. Here is the
outcome of our brainstorm.
We agreed to have process.size in the default settings with the explanation
of flink.size alternative in the comment.
This way we keep -tm as a shortcut to process.size only and any
inconsist
Hi all,
Stephan, Till and I had another offline discussion today. Here is the
outcome of our brainstorm.
*managed fraction 0.4*
just confirmed what we already discussed here.
*process.size = 1536 MB (1.5 GB)*
We agreed to have process.size in the default settings with the explanation
of flink.siz
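Assuming the conclusions above, the shipped `flink-conf.yaml` defaults would look roughly like this (a sketch only; the exact keys and comment wording are whatever FLIP-49 finally ships):

```yaml
# Total memory of the TaskManager process, including JVM metaspace and
# overhead. Users past the try-out phase can unset this and configure
# taskmanager.memory.flink.size instead.
taskmanager.memory.process.size: 1536m

# Fraction of Flink memory reserved as managed memory (used e.g. by the
# RocksDB state backend and batch operators).
taskmanager.memory.managed.fraction: 0.4
```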
Hi devs,
I've updated the wiki according to the feedback. Please take another look.
Thanks!
On Fri, Jan 10, 2020 at 2:24 PM Bowen Li wrote:
> Thanks everyone for the prompt feedback. Please see my response below.
>
> > In Postgres, the TIME/TIMESTAMP WITH TIME ZONE has the
> java.time.Instant s
Bowen Li created FLINK-15589:
Summary: remove beta tag from catalog and hive doc
Key: FLINK-15589
URL: https://issues.apache.org/jira/browse/FLINK-15589
Project: Flink
Issue Type: Task
Bowen Li created FLINK-15590:
Summary: add section for current catalog and current database
Key: FLINK-15590
URL: https://issues.apache.org/jira/browse/FLINK-15590
Project: Flink
Issue Type: Task
Hi, Cam,
I think you might want to know why the web page does not show the watermark
of the source.
Currently, the web UI only shows the "input" watermark. Since the source
only outputs watermarks, the web UI shows "No Watermark" for it.
Actually Flink has "output" watermark metrics. I think F
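As a rough mental model of the input/output distinction (a simplified sketch in Python, not Flink code; real Flink tracks these per channel and exposes them as watermark metrics):

```python
# Simplified model (not Flink code): how an operator's watermarks relate.

def input_watermark(channel_watermarks):
    """An operator's input watermark is the minimum watermark
    received across all of its input channels."""
    return min(channel_watermarks)

def output_watermark(input_wm, holdback=0):
    """The output watermark is what the operator emits downstream;
    it may trail the input watermark if the operator holds data back."""
    return input_wm - holdback

# A source has no input channels, so a UI that only displays the "input"
# watermark shows "No Watermark" for it, even though the source emits an
# output watermark that becomes the next operator's input watermark.
downstream_input = input_watermark([1_000, 900])
print(downstream_input)  # 900
```

This also answers the "two watermarks" question: the output watermark of one operator is indeed the input watermark of the next, so showing only one of the two per operator loses information at the source.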
Bowen Li created FLINK-15591:
Summary: support CREATE TEMPORARY TABLE/VIEW in DDL
Key: FLINK-15591
URL: https://issues.apache.org/jira/browse/FLINK-15591
Project: Flink
Issue Type: Task
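For context, a syntax sketch of what such DDL could look like (illustrative only; the table names and connector options here are hypothetical, and the actual grammar will be defined by the ticket):

```sql
-- Hypothetical temporary-object DDL: resolved against the current
-- catalog/database and dropped when the session ends.
CREATE TEMPORARY TABLE tmp_orders (
  order_id BIGINT,
  amount   DOUBLE
) WITH (
  'connector.type' = 'filesystem'  -- hypothetical option
);

CREATE TEMPORARY VIEW big_orders AS
SELECT order_id, amount FROM tmp_orders WHERE amount > 100;
```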
+1 (non-binding)
Best,
Danny Chan
On Dec 31, 2019 at 5:09 PM +0800, Forward Xu wrote:
> Hi all,
>
> I'd like to start the vote of FLIP-90 [1] since that we have reached an
> agreement on the design in the discussion thread [2].
>
> This vote will be open for at least 72 hours. Unless there is an objection,
Hi dev,
I'd like to kick off a discussion on the improvement of TableSourceFactory
and TableSinkFactory.
Motivation:
The main needs and problems now are:
1. Connectors can't get the TableConfig [1], and some behaviors really need
to be controlled by the user's table configuration. In the era of catalog
Thanks all for the discussion.
We agreed to have process.size in the default settings with the explanation
> of flink.size alternative in the comment.
> This way we keep -tm as a shortcut to process.size only and any
> inconsistencies fail fast as if configured in yaml.
>
The conclusions sound g
Jeff Zhang created FLINK-15592:
--
Summary: Streaming sql throw hive related sql when it doesn't use
any hive table
Key: FLINK-15592
URL: https://issues.apache.org/jira/browse/FLINK-15592
Project: Flink
+1 (non-binding)
*Best Regards,*
*Zhenghua Gao*
On Wed, Jan 15, 2020 at 10:11 AM Danny Chan wrote:
> +1 (non-binding)
>
> Best,
> Danny Chan
> On Dec 31, 2019 at 5:09 PM +0800, Forward Xu wrote:
> > Hi all,
> >
> > I'd like to start the vote of FLIP-90 [1] since that we have reached an
> > agreement on
Thanks for the discussion, Stephan, Till and Andrey.
+1 for the managed fraction (0.4) and process.size (1.5G).
*JVM overhead min 196 -> 192Mb (128 + 64)*
> small correction for better power 2 alignment of sizes
>
Sorry, this was a typo (and the same for the jira comment which is
copy-pasted). It
Bowen Li created FLINK-15593:
Summary: add doc to remind users not using Hive aggregate
functions in streaming mode
Key: FLINK-15593
URL: https://issues.apache.org/jira/browse/FLINK-15593
Project: Flink
PengFei Li created FLINK-15594:
--
Summary: Streaming SQL end-to-end test (Blink planner) failed with
output hash mismatch
Key: FLINK-15594
URL: https://issues.apache.org/jira/browse/FLINK-15594
Project: F
Jingsong Lee created FLINK-15595:
Summary: Resolution Order is chaotic not FLIP-68 defined
Key: FLINK-15595
URL: https://issues.apache.org/jira/browse/FLINK-15595
Project: Flink
Issue Type:
hehuiyuan created FLINK-15596:
-
Summary: Support key-value messages for kafka producer for flink
SQL Table
Key: FLINK-15596
URL: https://issues.apache.org/jira/browse/FLINK-15596
Project: Flink
Hi Guowei,
Thanks for your response.
From what I understand, one operator has two watermarks? If so, one
operator's output watermark would be the input watermark of the next
operator? Doesn't that sound redundant?
Or do you mean the Web UI only shows the input watermarks of every operator,
but sin