Tony Xintong Song created FLINK-12812:
-
Summary: Set resource profiles for task slots
Key: FLINK-12812
URL: https://issues.apache.org/jira/browse/FLINK-12812
Project: Flink
Issue Type: Su
vinoyang created FLINK-12811:
Summary: flink-table-planner-blink compile error
Key: FLINK-12811
URL: https://issues.apache.org/jira/browse/FLINK-12811
Project: Flink
Issue Type: Bug
Com
Hi Aljoscha,
I am happy to create a FLIP and have a voting process for this feature. I
have already sent a mail to apply for the wiki permissions.
Once I get the permission, I will start the next step. When it is ready, I
will ping you again.
Best,
Vino
On Tue, Jun 11, 2019 at 10:57 PM Aljoscha Krettek wrote:
Hi,
I am going to create a new FLIP for the proposal of supporting local
aggregation in Flink.
The discussion thread in the Flink dev mailing list is here.[1]
Could you please give me create and edit permissions for this page [2]?
My ID is: yanghua
Best,
Vino
[1]:
http://mail-archives.apache.or
Jing Zhang created FLINK-12810:
--
Summary: Support running a TableAPI query like 'table.select('a, 'b, 'c)'
Key: FLINK-12810
URL: https://issues.apache.org/jira/browse/FLINK-12810
Project: Flink
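For reference, here is a minimal, hedged sketch of the expression-based query mentioned in FLINK-12810 above, written against the 1.9-era Scala Table API; the object name, example rows and the column names 'a, 'b, 'c are illustrative assumptions.

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.scala._

object TableSelectExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Factory method as of the 1.9-era API; older versions used
    // TableEnvironment.getTableEnvironment(env) instead.
    val tEnv = StreamTableEnvironment.create(env)

    // Turn an in-memory stream into a table with columns 'a, 'b, 'c.
    val input = env.fromElements((1, "x", 2.0), (2, "y", 3.0))
    val orders = input.toTable(tEnv, 'a, 'b, 'c)

    // The expression-based select from the issue summary.
    val result = orders.select('a, 'b, 'c)
    result.toAppendStream[(Int, String, Double)].print()

    env.execute("table-select-example")
  }
}
```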
+1 on the proposal!
Maintaining only one Python API is helpful for users and contributors.
Best, Hequn
On Wed, Jun 12, 2019 at 9:41 AM Jark Wu wrote:
> +1 and looking forward to the new Python API world.
>
> Best,
> Jark
>
> On Wed, 12 Jun 2019 at 09:22, Becket Qin wrote:
>
>> +1 on deprecatin
+1 and looking forward to the new Python API world.
Best,
Jark
On Wed, 12 Jun 2019 at 09:22, Becket Qin wrote:
> +1 on deprecating the old Python API in the 1.9 release.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Wed, Jun 12, 2019 at 9:07 AM Dian Fu wrote:
>
>> +1 for this proposal.
>>
>> Regard
+1 on deprecating the old Python API in the 1.9 release.
Thanks,
Jiangjie (Becket) Qin
On Wed, Jun 12, 2019 at 9:07 AM Dian Fu wrote:
> +1 for this proposal.
>
> Regards,
> Dian
>
> On Jun 12, 2019, at 8:24 AM, jincheng sun wrote:
>
> big +1 for the proposal.
>
> We will soon complete all the Python API func
+1 for this proposal.
Regards,
Dian
> On Jun 12, 2019, at 8:24 AM, jincheng sun wrote:
>
> big +1 for the proposal.
>
> We will soon complete all the Python API functional development for the 1.9
> release; then the development of UDFs will be carried out. After the support of
> UDFs is completed, it will be
big +1 for the proposal.
We will soon complete all the Python API functional development for the 1.9
release; then the development of UDFs will be carried out. After the support of
UDFs is completed, it will be very natural to support the DataStream API.
If all of us agree with this proposal, I believe tha
Hi all!
I want to discuss making the Table API jars part of the "flink uber jar" in
"/lib" by default.
So far, the Table API was an optional dependency in "/opt".
With the current effort to make it a first-class API in Flink, it would
improve the user experience if the Table API were available by default.
>>> Any API we expose should not have dependencies on the runtime
(flink-runtime) package or other implementation details. To me, this means
that the current ClusterClient cannot be exposed to users because it uses
quite a few classes from the optimiser and runtime packages.
We should change Clu
Some points to consider:
* Any API we expose should not have dependencies on the runtime (flink-runtime)
package or other implementation details. To me, this means that the current
ClusterClient cannot be exposed to users because it uses quite a few classes
from the optimiser and runtime packa
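To illustrate the point above about not exposing runtime classes, here is a purely hypothetical sketch (the names and signatures are my own, not Flink's actual API) of a client-facing interface that refers only to API-level types, so the runtime implementation can stay hidden behind it:

```scala
import java.util.concurrent.CompletableFuture

// Hypothetical API-level stand-ins; in a real split these would live in a
// client-facing module with no dependency on flink-runtime.
final case class JobID(value: String)
final case class JobSubmissionResult(jobId: JobID)

// The user-facing contract: no optimizer or runtime types appear here.
trait JobClient {
  def submitJob(serializedJobGraph: Array[Byte]): CompletableFuture[JobSubmissionResult]
  def cancelJob(jobId: JobID): CompletableFuture[Void]
  def close(): Unit
}
```

A concrete client could then implement this trait inside the runtime module without leaking those dependencies to callers.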
Hi,
I think this proposed change is big enough to warrant a FLIP [1], which should
have a voting process as described in that link before the FLIP is accepted.
I’m writing this because such a big change has the possibility of
languishing for a long time due to lack of PMC/committer bandwidth
+1
Best,
tison.
On Tue, Jun 11, 2019 at 10:52 PM zhijiang wrote:
> It is reasonable, as Stephan explained. +1 from my side!
>
> --
> From:Jeff Zhang
> Send Time: Jun 11, 2019 (Tuesday) 22:11
> To:Stephan Ewen
> Cc:user ; dev
> Subject:Re: [DISCUSS] De
It is reasonable, as Stephan explained. +1 from my side!
--
From:Jeff Zhang
Send Time: Jun 11, 2019 (Tuesday) 22:11
To:Stephan Ewen
Cc:user ; dev
Subject:Re: [DISCUSS] Deprecate previous Python APIs
+1
On Tue, Jun 11, 2019 at 9:30 PM Stephan Ewen wrote
+1
On Tue, Jun 11, 2019 at 9:30 PM Stephan Ewen wrote:
> Hi all!
>
> I would suggest deprecating the existing Python APIs for the DataSet and
> DataStream API with the 1.9 release.
>
> Background is that there is a new Python API under development.
> The new Python API initially targets the Table API. Flink
Jark Wu created FLINK-12809:
---
Summary: Introduce PartitionableTableSink to support writing data
into partitions
Key: FLINK-12809
URL: https://issues.apache.org/jira/browse/FLINK-12809
Project: Flink
Jark Wu created FLINK-12808:
---
Summary: Introduce OverwritableTableSink for supporting insert
overwrite
Key: FLINK-12808
URL: https://issues.apache.org/jira/browse/FLINK-12808
Project: Flink
Issue
Hi all!
I would suggest deprecating the existing Python APIs for the DataSet and
DataStream API with the 1.9 release.
Background is that there is a new Python API under development.
The new Python API initially targets the Table API. Flink 1.9 will
support Table API programs without UDFs, 1.10
zjuwangg created FLINK-12807:
Summary: Support Hive table columnstats related operations in
HiveCatalog
Key: FLINK-12807
URL: https://issues.apache.org/jira/browse/FLINK-12807
Project: Flink
Iss
Hi All,
It definitely requires a massive effort to allow at-most-once delivery in
Flink. But as the feature is urgently demanded by many Flink users, I think
every effort we make is worthwhile. Actually, the inability to support
at-most-once delivery has become a major obstacle for Storm users to turn
Piotr Nowojski created FLINK-12806:
--
Summary: Remove beta feature remark from the Universal Kafka
connector
Key: FLINK-12806
URL: https://issues.apache.org/jira/browse/FLINK-12806
Project: Flink
Jark Wu created FLINK-12805:
---
Summary: Introduce PartitionableTableSource for partition pruning
Key: FLINK-12805
URL: https://issues.apache.org/jira/browse/FLINK-12805
Project: Flink
Issue Type: Ne
Stefan Richter created FLINK-12804:
--
Summary: Introduce mailbox-based ExecutorService
Key: FLINK-12804
URL: https://issues.apache.org/jira/browse/FLINK-12804
Project: Flink
Issue Type: Sub-t
sunjincheng created FLINK-12803:
---
Summary: Correct the package name for python API
Key: FLINK-12803
URL: https://issues.apache.org/jira/browse/FLINK-12803
Project: Flink
Issue Type: Sub-task
Thanks for launching this topic, Xiaogang!
I have also heard of this requirement from users before, and I agree it could
bring benefits for some scenarios.
As we know, fault tolerance is one of the biggest challenges in stream
architecture, because it is difficult to change if the initial system d
+1 from my side to support this feature in Flink.
Best,
Vino
On Tue, Jun 11, 2019 at 6:14 PM Biao Liu wrote:
> Hi Piotrek,
> I agree with you that the community's resources to support such a feature
> are strained. I was planning to start a similar discussion after 1.9 is
> released. Anyway, we don't have enoug
Hi Piotrek,
I agree with you that the community's resources to support such a feature
are strained. I was planning to start a similar discussion after 1.9 is
released. Anyway, we don't have enough time to support this feature now, but
I think a discussion is fine.
It's very interesting of your checkp
Hi,
I want to contribute to Apache Flink.
Would you please give me the contributor permission?
My JIRA ID is julien1987.
Jingsong Lee created FLINK-12802:
Summary: Optimizing DataFormat Code to Blink
Key: FLINK-12802
URL: https://issues.apache.org/jira/browse/FLINK-12802
Project: Flink
Issue Type: Bug
Thanks, Xiaogang, for initiating the discussion. I think it is a very good
proposal.
We have also received this requirement for Flink from Alibaba's internal and
external customers.
In these cases, users are less concerned about data consistency, but have
higher demands for low latency.
Here are a couple
Hi Xiaogang, it's an interesting discussion.
I have heard some similar feature requests before. Some users need a
lighter failover strategy since, as you mentioned, correctness is not so
critical for their scenario. Moreover, some jobs may not enable
checkpointing at all, a global/region
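As a concrete illustration of the jobs mentioned above, here is a minimal, hedged Scala sketch of a DataStream job that never enables checkpointing and relies only on a fixed-delay restart strategy; the object name, job logic and all values are illustrative assumptions.

```scala
import java.util.concurrent.TimeUnit
import org.apache.flink.api.common.restartstrategy.RestartStrategies
import org.apache.flink.api.common.time.Time
import org.apache.flink.streaming.api.scala._

object NoCheckpointJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Checkpointing is never enabled, so there is no delivery guarantee;
    // on failure, tasks simply restart and continue from wherever the source is.
    env.setRestartStrategy(
      RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)))

    env.fromElements(1, 2, 3).map(_ * 2).print()
    env.execute("restart-without-checkpoints")
  }
}
```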
Great to hear, Dyana. Thanks for the update.
Cheers,
Till
On Fri, Jun 7, 2019 at 2:48 PM dyana.rose wrote:
> Just wanted to give an update on this.
>
> Our ops team and I independently came to the same conclusion that our
> ZooKeeper quorum was having syncing issues.
>
> After a bit more re
Hi Xiaogang,
It sounds interesting and definitely a useful feature; however, the questions
for me would be: how useful is it, how much effort would it require, and is it
worth it? We simply cannot do all things at once, and currently the people who
could review/drive/mentor this effort are pretty much str
XuPingyong created FLINK-12801:
--
Summary: Set parallelism for batch SQL
Key: FLINK-12801
URL: https://issues.apache.org/jira/browse/FLINK-12801
Project: Flink
Issue Type: Task
Componen
Hi Xiaogang,
It is an interesting topic.
Note that there is some effort to build a mature ML library for Flink these
days; it could also be possible for some ML cases to trade off correctness for
timeliness or throughput. Exactly-once delivery is exactly what makes Flink
stand out, but an at-most-once option wou
leesf created FLINK-12800:
-
Summary: Harden Tests when availableProcessors is 1
Key: FLINK-12800
URL: https://issues.apache.org/jira/browse/FLINK-12800
Project: Flink
Issue Type: Bug
Compon
Flink offers a fault-tolerance mechanism to guarantee at-least-once and
exactly-once message delivery in case of failures. The mechanism works well
in practice and makes Flink stand out among stream processing systems.
But the guarantee on at-least-once and exactly-once delivery does not come
with
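For reference, here is a minimal, hedged sketch (in the Scala DataStream API) of how the fault-tolerance mechanism described above is switched on and how the delivery guarantee is chosen; the checkpoint interval and the job logic are illustrative assumptions.

```scala
import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala._

object CheckpointedJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Take a checkpoint every 10 seconds with exactly-once semantics.
    env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE)

    // Or relax the guarantee to at-least-once for lower latency:
    // env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE)

    env.fromElements("a", "b", "c").map(_.toUpperCase).print()
    env.execute("checkpointed-job")
  }
}
```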
Dawid Wysakowicz created FLINK-12799:
Summary: Improve expression based TableSchema extraction from
DataStream/DataSet
Key: FLINK-12799
URL: https://issues.apache.org/jira/browse/FLINK-12799
Proje
Dawid Wysakowicz created FLINK-12798:
Summary: Port TableEnvironment to flink-api modules
Key: FLINK-12798
URL: https://issues.apache.org/jira/browse/FLINK-12798
Project: Flink
Issue Type