Zhu Zhu created FLINK-15169:
---
Summary: Errors that happen in the scheduling of DefaultScheduler are
not shown in WebUI
Key: FLINK-15169
URL: https://issues.apache.org/jira/browse/FLINK-15169
Project: Flink
Huang Xingbo created FLINK-15168:
Summary: Exception is thrown when using kafka source connector
with flink planner
Key: FLINK-15168
URL: https://issues.apache.org/jira/browse/FLINK-15168
Project: Fli
Hi all,
I think the current design is good.
My understanding is:
The execution modes, bounded and continuous, are totally
different. I don't think we have the ability to integrate the two models at
present. They differ in scheduling, memory, algorithms, state, etc.; we
shouldn't confuse them.
Rui Li created FLINK-15167:
--
Summary: SQL CLI library option doesn't work for Hive connector
Key: FLINK-15167
URL: https://issues.apache.org/jira/browse/FLINK-15167
Project: Flink
Issue Type: Bug
Thank you, lucas.
lucas.wu wrote on Tue, Dec 10, 2019 at 2:12 PM:
> You can use ` ` to surround the field
>
>
> Original message
> From: lakeshenshenleifight...@gmail.com
> To: dev...@flink.apache.org; useru...@flink.apache.org
> Sent: Tue, Dec 10, 2019, 14:05
> Subject: Flink SQL Kafka topic DDL, the kafka' json field conflict with fli
You can use ` ` to surround the field
Original message
From: lakeshenshenleifight...@gmail.com
To: dev...@flink.apache.org; useru...@flink.apache.org
Sent: Tue, Dec 10, 2019, 14:05
Subject: Flink SQL Kafka topic DDL, the kafka' json field conflict with flinkSQL
Keywords
Hi community, when I use Flink SQL DDL, the kaf
Hi Piotr,
Sorry for the misunderstanding; chaining does work with multiple outputs
right now. I mean, it's also a very important feature, and it should work
with N-ary selectable input operators.
We all think that providing N-ary selectable input operator is a very
important thing, it makes TwoInpu
Hi community, when I use Flink SQL DDL, a Kafka JSON field conflicts with a
Flink SQL keyword. My thought is to use a UDTF to work around it. Is there a
more graceful way to solve this problem?
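The backtick escaping suggested in the reply above can be sketched with a hypothetical DDL. The table name, columns, and connector options here are made up for illustration; the point is the backticks around the reserved word `timestamp`:

```sql
-- Hypothetical sketch: the JSON field `timestamp` clashes with a SQL keyword,
-- so the column name is escaped with backticks in the DDL.
CREATE TABLE kafka_source (
  `timestamp` BIGINT,
  user_name VARCHAR
) WITH (
  'connector.type' = 'kafka',
  'format.type' = 'json'
);
```

With the identifier escaped, no UDTF workaround is needed just to reference the conflicting field.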
Hi Hanan,
I created a fix for the problem. Would you please try it from your side?
https://github.com/apache/flink/pull/10371
Best Regards
Peter Huang
On Tue, Nov 26, 2019 at 8:07 AM Peter Huang
wrote:
> Hi Hanan,
>
> After investigating the issue by using the test case you provided, I think
Hi Tison,
Yes, you are right. I think I made the wrong argument in the doc.
Basically, the packaging jar problem is only for platform users. In our
internal deploy service,
we further optimized the deployment latency by letting users package
flink-runtime together with the uber jar, so that w
Hi Yang,
Thanks for the suggestion. Actually I forgot to share the original
google doc with you. Feel free to comment directly on it, so that I may
revise it based on people's feedback before syncing to confluence.
https://docs.google.com/document/d/1aAwVjdZByA-0CHbgv16Me-vjaaDMCfhX7TzVVTuifYM/edi
> 3. What do you mean about the package? Do users need to compile their jars
including flink-clients, flink-optimizer, flink-table code?
The answer should be no because they exist in system classpath.
Best,
tison.
Yang Wang wrote on Tue, Dec 10, 2019 at 12:18 PM:
> Hi Peter,
>
> Thanks a lot for starting
Hi Peter,
Thanks a lot for starting this discussion. I think this is a very useful
feature.
Not only for Yarn: I am focused on the Flink on Kubernetes integration and came
across the same
problem. I do not want the job graph generated on the client side. Instead, the
user jars are built in
a user-defined
Yingjie Cao created FLINK-15166:
---
Summary: Shuffle data compression wrongly decreases the buffer
reference count.
Key: FLINK-15166
URL: https://issues.apache.org/jira/browse/FLINK-15166
Project: Flink
Hi all,
The voting time for "Improve the Pyflink command line options (Adjustment to
FLIP-78)" has passed. I'm closing the vote now.
There were 5 +1 votes, 4 of which are binding:
- Jark (binding)
- Jincheng (binding)
- Hequn (binding)
- Aljoscha (binding)
- Dian (non-binding)
There were no dis
Thanks everyone for the votes!
I’ll summarize the voting result in a separate email.
Best,
Wei
> On Dec 5, 2019, at 18:00, Aljoscha Krettek wrote:
>
> +1 (binding)
>
>> On 5. Dec 2019, at 10:58, Hequn Cheng wrote:
>>
>> +1 (binding)
>>
>> Best,
>> Hequn
>>
>> On Thu, Dec 5, 2019 at 5:43 PM jincheng
Hi Yu & Gary,
Thanks a lot for your work and looking forward to the 1.10 release. :)
Best, Hequn
On Tue, Dec 10, 2019 at 1:29 AM Gary Yao wrote:
> Hi all,
>
> We have just created the release-1.10 branch. Please remember to merge bug
> fixes to both release-1.10 and master branches from now on
I agree with Dawid's point that the boundedness information should come
from the source itself (e.g. the end timestamp), not through
env.boundedSource()/continuousSource().
I think if we want to support something like `env.source()` that derive the
execution mode from source, `supportsBoundedness(Bo
Hi Hequn,
+1 (non-binding)
- verified checksums and hashes
- built locally with macOS 10.14 and JDK 8
- did some checks in the SQL CLI
- ran some tests in the IDE
Best,
Danny Chan
On Dec 5, 2019 at 9:39 PM +0800, Hequn Cheng wrote:
> Hi everyone,
>
> Please review and vote on the release candidate #3 for the
Zhenqiu Huang created FLINK-15165:
-
Summary: Flink Hive Upsert Support
Key: FLINK-15165
URL: https://issues.apache.org/jira/browse/FLINK-15165
Project: Flink
Issue Type: Wish
Repo
Dear All,
Recently, the Flink community started to improve the Yarn cluster descriptor
to make the job jar and config files configurable from the CLI. This improves the
flexibility of Flink deployment in Yarn per-job mode. For platform users who
manage tens to hundreds of streaming pipelines for the whole org
Hi All!
A while back, Marta opened a PR to create a documentation style guide as
part of FLIP-42[1][2]. Unfortunately, the review stalled out as everyone
involved got busy with Flink Forward Europe. Since we are approaching the
final stretch for Flink 1.10 and I expect to see an influx in document
Hi all,
We have just created the release-1.10 branch. Please remember to merge bug
fixes to both release-1.10 and master branches from now on if you want the
fix
to be included in the Flink 1.10 release.
Best,
Yu & Gary
On Tue, Nov 19, 2019 at 4:44 PM Yu Li wrote:
> Hi devs,
>
> Per the featur
Zhenqiu Huang created FLINK-15164:
-
Summary: Introduce ParquetColumnarRowSplitReader to parquet format
Key: FLINK-15164
URL: https://issues.apache.org/jira/browse/FLINK-15164
Project: Flink
I
Gary Yao created FLINK-15163:
Summary: japicmp should use 1.9 as the old version
Key: FLINK-15163
URL: https://issues.apache.org/jira/browse/FLINK-15163
Project: Flink
Issue Type: Bug
C
Timo Walther created FLINK-15162:
Summary: Merge Java and Scala ClosureCleaner
Key: FLINK-15162
URL: https://issues.apache.org/jira/browse/FLINK-15162
Project: Flink
Issue Type: Improvement
Jark Wu created FLINK-15161:
---
Summary: Introduce TypeTransformation interface and basic
transformations
Key: FLINK-15161
URL: https://issues.apache.org/jira/browse/FLINK-15161
Project: Flink
Issue
Hi Hequn,
+1 (non-binding)
- verified checksums and hashes
- built from sources (Scala 2.11, Scala 2.12)
- built a custom Docker image and ran several test jobs on Kubernetes
Best,
Fabian Paul
--
Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
One more thing. In the current proposal, with the
supportsBoundedness(Boundedness) method and the boundedness coming from
either continuousSource or boundedSource I could not find how this
information is fed back to the SplitEnumerator.
Best,
Dawid
On 09/12/2019 13:52, Becket Qin wrote:
> Hi Daw
Hi Becket,
I am not sure if I understood the last paragraph correctly, but let me
clarify my thoughts.
I would not add any bounded/batch specific methods to the DataStream.
IMO, all the user-facing bounded/batch-specific methods should be exposed
through the new BoundedDataStream interface.
1. U
hehuiyuan created FLINK-15159:
-
Summary: the string of json is mapped to VARCHAR or STRING?
Key: FLINK-15159
URL: https://issues.apache.org/jira/browse/FLINK-15159
Project: Flink
Issue Type: Wish
Dawid Wysakowicz created FLINK-15160:
Summary: Clean up is not applied if there are no incoming events
for a key.
Key: FLINK-15160
URL: https://issues.apache.org/jira/browse/FLINK-15160
Project: F
+1 for the migration.
*10 parallel builds with 300-minute timeouts* is very useful for tasks that
take a long time, like e2e tests.
And in Travis, it looks like we compile the entire project for every cron task even if
they use the same profile, e.g.:
`name: e2e - misc - hadoop 2.8
name: e2e
hehuiyuan created FLINK-15158:
-
Summary: Why convert integer to bigdecimal for format-json when
kafka is used
Key: FLINK-15158
URL: https://issues.apache.org/jira/browse/FLINK-15158
Project: Flink
Aljoscha Krettek created FLINK-15157:
Summary: Make ScalaShell ensureYarnConfig() and
fetchConnectionInfo() public
Key: FLINK-15157
URL: https://issues.apache.org/jira/browse/FLINK-15157
Project:
Robert Metzger created FLINK-15156:
--
Summary: Warn user if System.exit() is called in user code
Key: FLINK-15156
URL: https://issues.apache.org/jira/browse/FLINK-15156
Project: Flink
Issue T
Hi Dawid,
Thanks for the comments. This actually brings another relevant question
about what does a "bounded source" imply. I actually had the same
impression when I look at the Source API. Here is what I understand after
some discussion with Stephan. The bounded source has the following impacts.
Rockey Cui created FLINK-15155:
--
Summary: Join with a LookupableTableSource: the defined order of
lookup keys is inconsistent
Key: FLINK-15155
URL: https://issues.apache.org/jira/browse/FLINK-15155
Projec
Andrea Cardaci created FLINK-15154:
--
Summary: Change Flink binding addresses in local mode
Key: FLINK-15154
URL: https://issues.apache.org/jira/browse/FLINK-15154
Project: Flink
Issue Type:
+1 for not actively developing the connectors for Kafka 0.8 / 0.9 and marking
them as deprecated.
Thanks,
Jiangjie (Becket) Qin
On Mon, Dec 9, 2019 at 7:40 PM Yu Li wrote:
> +1 for dropping the official support (no longer actively developing them).
>
> Thanks for bringing this up, Chesnay!
>
> Best Regards,
+1 for dropping the official support (no longer actively developing them).
Thanks for bringing this up, Chesnay!
Best Regards,
Yu
On Mon, 9 Dec 2019 at 11:57, Jingsong Li wrote:
> Thanks Chesnay,
>
> +1 to make it official that we no longer actively develop them but users can
> still use them.
>
> Best,
>
Yang Wang created FLINK-15153:
-
Summary: Service selector needs to contain jobmanager component
label
Key: FLINK-15153
URL: https://issues.apache.org/jira/browse/FLINK-15153
Project: Flink
Issue
Feng Jiajie created FLINK-15152:
---
Summary: Job running without periodic checkpoint for stop failed
at the beginning
Key: FLINK-15152
URL: https://issues.apache.org/jira/browse/FLINK-15152
Project: Flink
Sounds like something we could do in 1.11 then, as part of simplification /
cleanup
On Mon, Dec 9, 2019 at 11:18 AM Yu Li wrote:
> +1 from my side.
>
> FWIW, shall we also include @user ML into this discussion?
>
> Best Regards,
> Yu
>
>
> On Mon, 9 Dec 2019 at 15:11, Congxian Qiu wrote:
>
> >
Zhenghua Gao created FLINK-15151:
Summary: Use new type system in
TableSourceUtil.computeIndexMapping of blink planner
Key: FLINK-15151
URL: https://issues.apache.org/jira/browse/FLINK-15151
Project:
+1 from my side.
FWIW, shall we also include @user ML into this discussion?
Best Regards,
Yu
On Mon, 9 Dec 2019 at 15:11, Congxian Qiu wrote:
> +1 from my side.
>
> Best,
> Congxian
>
>
> Yun Tang wrote on Fri, Dec 6, 2019 at 12:30 PM:
>
> > +1 from my side for I did not see any real benefits if using syn
Congxian Qiu(klion26) created FLINK-15150:
-
Summary:
ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange failed
on Travis
Key: FLINK-15150
URL: https://issues.apache.org/jira/browse/FL
Hello,
anyone who has opened a PR recently probably noticed that Travis is
taking a weee bit longer to run the builds.
This is due to Travis (rightfully) downgrading our machines (we had
beefy machines that we shouldn't have had), without any prior notice to
speak of.
Combined with the feature
Timo Walther created FLINK-15149:
Summary: Merge InputTypeStrategy and InputTypeValidator
Key: FLINK-15149
URL: https://issues.apache.org/jira/browse/FLINK-15149
Project: Flink
Issue Type: Su
Terry Wang created FLINK-15148:
--
Summary: Add doc for create/drop/alter database ddl
Key: FLINK-15148
URL: https://issues.apache.org/jira/browse/FLINK-15148
Project: Flink
Issue Type: Sub-task
Terry Wang created FLINK-15147:
--
Summary: Add doc for alter table set properties and rename table
Key: FLINK-15147
URL: https://issues.apache.org/jira/browse/FLINK-15147
Project: Flink
Issue Typ
hehuiyuan created FLINK-15146:
-
Summary: The value of `cleanupSize` should be greater than 0 for
`IncrementalCleanupStrategy`
Key: FLINK-15146
URL: https://issues.apache.org/jira/browse/FLINK-15146
Project
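The check FLINK-15146 asks for amounts to a simple argument guard on the cleanup size. Here is a hypothetical, self-contained sketch; the class and method names are made up for illustration and this is not the actual Flink code:

```java
// Hypothetical sketch of the argument check FLINK-15146 requests:
// an incremental cleanup strategy must visit at least one state entry
// per cleanup step, so cleanupSize must be greater than 0.
public class IncrementalCleanupCheck {

    static int checkCleanupSize(int cleanupSize) {
        if (cleanupSize <= 0) {
            throw new IllegalArgumentException(
                    "cleanupSize should be greater than 0");
        }
        return cleanupSize;
    }

    public static void main(String[] args) {
        // A positive size passes through unchanged.
        System.out.println(checkCleanupSize(10));
        // Zero (or a negative value) is rejected up front.
        try {
            checkCleanupSize(0);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing fast like this surfaces a misconfiguration at job construction time instead of silently running a cleanup strategy that never cleans anything.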
Xintong Song created FLINK-15145:
Summary: Tune default values for FLIP-49 TM memory configurations
with real production jobs.
Key: FLINK-15145
URL: https://issues.apache.org/jira/browse/FLINK-15145
P
Piotr Nowojski created FLINK-15144:
--
Summary: Document performance related changes caused by FLIP-49
Key: FLINK-15144
URL: https://issues.apache.org/jira/browse/FLINK-15144
Project: Flink
Is
Xintong Song created FLINK-15143:
Summary: Create document for FLIP-49 TM memory model and
configuration guide
Key: FLINK-15143
URL: https://issues.apache.org/jira/browse/FLINK-15143
Project: Flink
Guy Korland created FLINK-15142:
---
Summary: Extend Redis Sink Connector to Redis Streams
Key: FLINK-15142
URL: https://issues.apache.org/jira/browse/FLINK-15142
Project: Flink
Issue Type: Wish