Shimin Yang created FLINK-14003:
---
Summary: Add access to customized state other than built in state
types
Key: FLINK-14003
URL: https://issues.apache.org/jira/browse/FLINK-14003
Project: Flink
Congratulations Kostas!
Best
Yun Tang
--
Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
Dear Community,
happy to share this "week's" community update, back after a three-week
summer break. It's been a very busy time in the Flink community, as a lot of
FLIP discussions and votes for Apache Flink 1.10 are underway. I will
try to cover a good part of it in this update along with bugs
zhijiang created FLINK-14004:
Summary: Define SourceReader interface to verify the integration
with StreamOneInputProcessor
Key: FLINK-14004
URL: https://issues.apache.org/jira/browse/FLINK-14004
Project:
Hi Shimin,
Thanks for bringing this discussion up.
First of all, I'd like to confirm/clarify that this discussion is mainly
about managed state with a customized state descriptor rather than raw state,
right? Asking because raw state was the very first thing that came to my mind
when seeing the title.
And
Hi all
First of all, I agree with Yu that we should support making the state type
pluggable.
If we take a look at the current Flink implementation, users can implement their
own pluggable state backend to satisfy their own needs now. However, even though
users can define their own state descriptor, they
Okay, thanks for clarifying. I have a follow-up question here. If we
consider Kafka offset commits, this basically means that
the offsets committed during the checkpoint are not necessarily the
offsets that were really processed by the pipeline and written to the sink? I
mean, if there is a window i
Hi all,
This VOTE looks like everyone agrees with the current FLIP.
Hi Timo & Aljoscha, do you have any other comments after the ML discussion?
[1]
Hi Dian, could you announce the VOTE result and create a JIRA for the FLIP
later today, if there is no other feedback?
Cheers,
Jincheng
[1]
http://ap
Congrats, Kostas!
On Sun, Sep 8, 2019 at 11:48 PM myasuka wrote:
> Congratulations Kostas!
>
> Best
> Yun Tang
>
>
>
> --
> Sent from: http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
>
Hi Dom,
There is a sync phase and an async phase in checkpointing. When an operator receives
a barrier, it performs a snapshot, a.k.a. the sync phase. And when the barriers pass
through all the operators, including sinks, the operators will get a
notification, after which they do the async part, like commi
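To make the snapshot/notification flow described above concrete, here is a minimal Java sketch (not the actual Kafka connector code) of a sink-style function that stages offsets during the synchronous snapshot and only commits them once the checkpoint-complete notification arrives. The class name, the offset bookkeeping, and the commitOffset helper are made up for illustration; the Flink interfaces used (CheckpointedFunction, CheckpointListener) are the ones this flow relies on, with Flink 1.9-era package names.

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

import org.apache.flink.runtime.state.CheckpointListener;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

// Illustrative sink: the "offset commit" happens only in notifyCheckpointComplete,
// i.e. after the checkpoint covering those records has fully completed.
public class OffsetCommittingSink extends RichSinkFunction<String>
        implements CheckpointedFunction, CheckpointListener {

    // Offset of the latest record handled by this subtask (hypothetical bookkeeping).
    private long currentOffset;

    // Offsets staged per checkpoint id during the synchronous snapshot phase.
    private final NavigableMap<Long, Long> pendingCommits = new TreeMap<>();

    @Override
    public void invoke(String value, Context context) {
        currentOffset++;
        // write the record to the external system here
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) {
        // Sync phase: remember which offset belongs to this checkpoint.
        pendingCommits.put(context.getCheckpointId(), currentOffset);
    }

    @Override
    public void notifyCheckpointComplete(long checkpointId) {
        // Called once the checkpoint has completed on all operators:
        // only now is it safe to commit everything up to that checkpoint.
        Map<Long, Long> completed = pendingCommits.headMap(checkpointId, true);
        for (long offset : completed.values()) {
            commitOffset(offset); // hypothetical external commit call
        }
        completed.clear();
    }

    @Override
    public void initializeState(FunctionInitializationContext context) {
        // State restore is omitted in this sketch.
    }

    private void commitOffset(long offset) {
        // placeholder for committing the offset to the external system
    }
}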
Congratulations, Kostas!
Best,
Yun
--
From: Becket Qin
Send Time: 2019 Sep. 9 (Mon.) 10:47
To: dev
Subject: Re: [ANNOUNCE] Kostas Kloudas joins the Flink PMC
Congrats, Kostas!
On Sun, Sep 8, 2019 at 11:48 PM mya
Xuefu Zhang created FLINK-14005:
---
Summary: Support Hive version 2.2.0
Key: FLINK-14005
URL: https://issues.apache.org/jira/browse/FLINK-14005
Project: Flink
Issue Type: Improvement
Co
Looking at the feature list, I don't see an item for completing the data type
support. Specifically, high-precision timestamp is needed for Hive
integration, as it's so common. Missing it would damage the completeness of
our Hive effort.
Thanks,
Xuefu
On Sat, Sep 7, 2019 at 7:06 PM Xintong Song wrote:
sunjincheng created FLINK-14006:
---
Summary: Add doc for how to using Java UDFs in Python API
Key: FLINK-14006
URL: https://issues.apache.org/jira/browse/FLINK-14006
Project: Flink
Issue Type: Im
sunjincheng created FLINK-14007:
---
Summary: Add doc for how to using Java user-defined source/sink in
Python API
Key: FLINK-14007
URL: https://issues.apache.org/jira/browse/FLINK-14007
Project: Flink
Hi,
W.r.t. temp functions, I feel both options have their benefits and can
theoretically achieve similar functionality one way or another. In the
end, it's more about use cases, users' habits, and trade-offs.
Re> Not always users are in full control of the catalog functions. There is
also the cas
Thanks Jark and Dian:
1. Jark's approach: do the work in task-0. A simple way (see the sketch after this message).
2. Dian's approach: use StreamingRuntimeContext#getGlobalAggregateManager. It can do
more operations. But these accumulators are not fault-tolerant?
Best,
Jingsong Lee
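For reference, a rough sketch of the "task-0" approach mentioned above: the class name and the doGlobalWork helper are hypothetical, and only RuntimeContext#getIndexOfThisSubtask is the actual Flink API being relied on.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Sketch of the "task-0" approach: every parallel subtask sees its own records,
// but only the subtask with index 0 performs the one-off global work.
public class TaskZeroWork extends RichMapFunction<String, String> {

    private boolean isTaskZero;

    @Override
    public void open(Configuration parameters) {
        isTaskZero = getRuntimeContext().getIndexOfThisSubtask() == 0;
    }

    @Override
    public String map(String value) {
        if (isTaskZero) {
            doGlobalWork(value); // hypothetical side effect that must not run on every subtask
        }
        return value;
    }

    private void doGlobalWork(String value) {
        // placeholder for the work that should only happen on one subtask
    }
}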
-
Hi Xuefu,
If I understand it correctly, the data type support work should be included
in the "Table API improvements -> Finish type system" part; please check it
and let us know if anything is missing there. Thanks.
Best Regards,
Yu
On Mon, 9 Sep 2019 at 11:14, Xuefu Z wrote:
> Looking at feature
Hi Dawid,
It is difficult to describe specific examples.
Sometimes users will generate some Java converters through some
Java code, or generate some Java classes through third-party
libraries. Of course, these can best be done through properties.
But this requires additional work from users. My
+1 (binding)
- checked signatures [SUCCESS]
- built from source without tests [SUCCESS]
- ran some tests in IDE [SUCCESS]
- started local cluster and submitted word count example [SUCCESS]
- announcement PR for website looks good! (I have left a few comments)
Best,
Jincheng
Jark Wu wrote on Fri, Sep 6, 2019 at 8:
Hi Yu,
For the first question, I would say yes. I was talking about managed
state, to be more specific, managed keyed state. And the reason why
we need the framework to manage the life cycle is that we need checkpointing to
guarantee exactly-once semantics in our customized keyed state backend.
For
Hi Tang,
Actually, in my case we implement a totally different KeyedStateBackend and
its factory, based on a data store other than Heap or RocksDB.
Also, for the state factory of heap and RocksDB, you've made quite a good point
and I agree with your opinion.
Best,
Shimin
shimin yang wrote on Mon, Sep 9, 2019 at 2:31 PM
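As background for the managed keyed state discussed in this thread, below is a small sketch of how a built-in state descriptor is registered and checkpointed by the framework today; a customized state type would need an equivalent descriptor/handle pair plus the checkpoint integration Shimin mentions. The class name and counting logic are illustrative only.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Counts events per key using built-in managed keyed state. The state backend
// (heap, RocksDB, or a pluggable one) takes care of checkpointing this state.
public class CountPerKey extends KeyedProcessFunction<String, String, Long> {

    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Types.LONG));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Long> out) throws Exception {
        Long current = count.value();
        long updated = (current == null) ? 1L : current + 1L;
        count.update(updated);
        out.collect(updated);
    }
}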