xuqianjin created FLINK-10994:
-
Summary: The bug of timestampadd handles time
Key: FLINK-10994
URL: https://issues.apache.org/jira/browse/FLINK-10994
Project: Flink
Issue Type: Bug
Comp
vinoyang created FLINK-10993:
Summary: Bring bloomfilter as a public API
Key: FLINK-10993
URL: https://issues.apache.org/jira/browse/FLINK-10993
Project: Flink
Issue Type: New Feature
C
Thanks for the suggestion, Jincheng.
Yes, I think it makes sense to have a persist() with lifecycle/defined
scope. I just added a section in the future work for this.
Thanks,
Jiangjie (Becket) Qin
On Fri, Nov 23, 2018 at 1:55 PM jincheng sun
wrote:
> Hi Jiangjie,
>
> Thank you for the explan
Hi Xiaowei,
Thanks for the comment. That is a valid point.
The callback is not only associated with a particular temp table. It is a
clean up logic provided by the user. The temp table to session ID mapping
is tracked internally. We also need to associate the callback with the
session lifecycle a
Hi Jiangjie,
Thank you for the explanation about the name of `cache()`; I understand why
you designed it this way!
Another idea: could we specify a lifecycle for data persistence?
For example, persist(LifeCycle.SESSION), so that the user is not worried
about data loss, and will clearly spec
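The lifecycle idea above can be sketched as a small enum-based API. This is purely illustrative: `LifeCycle` and `survivesSessionClose` are hypothetical names for the proposal being discussed, not an actual Flink interface.

```java
// Hypothetical sketch of the proposed persist(LifeCycle.SESSION) idea.
// None of these names exist in Flink; they only illustrate the API shape
// discussed in the thread: the user states explicitly how long data lives.
public class PersistSketch {

    public enum LifeCycle { SESSION, JOB, PERMANENT }

    // With an explicit scope, it is unambiguous whether persisted data
    // survives the end of the session.
    public static boolean survivesSessionClose(LifeCycle scope) {
        return scope != LifeCycle.SESSION;
    }

    public static void main(String[] args) {
        System.out.println(survivesSessionClose(LifeCycle.SESSION));   // false
        System.out.println(survivesSessionClose(LifeCycle.PERMANENT)); // true
    }
}
```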
Re: Jincheng,
Thanks for the feedback. Regarding cache() vs. persist(), personally I
find that cache() describes the behavior more accurately, i.e. the Table
is cached for the session, but will be deleted after the session is closed.
persist() seems a little misleading as people might think the
Hi Piotrek,
Regarding the split assignment. My hunch is that Flink might not have
enough information to assign the splits to the readers in the best way.
Even if a SplitReader says it COULD take another split, it does not mean it
is the best reader to take the split. For example, it is possible th
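The point that a reader which *could* take a split is not necessarily the *best* reader for it can be illustrated with a small locality-aware assignment sketch. This is plain Java with hypothetical names, not Flink's actual source interface: a local reader is preferred over a less-loaded remote one.

```java
import java.util.Map;

// Illustrative sketch (not Flink's API): capacity alone does not identify
// the best reader for a split. Here we prefer a reader whose host matches
// the split's location, falling back to the globally least-loaded reader.
public class SplitAssigner {

    public static String assign(Map<String, Integer> loadByReader,
                                Map<String, String> hostByReader,
                                String splitHost) {
        String best = null;
        // First pass: least-loaded reader on the split's host, if any.
        for (Map.Entry<String, String> e : hostByReader.entrySet()) {
            if (e.getValue().equals(splitHost)
                    && (best == null
                        || loadByReader.get(e.getKey()) < loadByReader.get(best))) {
                best = e.getKey();
            }
        }
        // Fallback: no local reader, pick the globally least-loaded one.
        if (best == null) {
            for (String r : loadByReader.keySet()) {
                if (best == null || loadByReader.get(r) < loadByReader.get(best)) {
                    best = r;
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Integer> load = Map.of("reader-1", 3, "reader-2", 1);
        Map<String, String> host = Map.of("reader-1", "hostA", "reader-2", "hostB");
        // The local reader wins despite carrying more load.
        System.out.println(assign(load, host, "hostA"));
    }
}
```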
Hi Timo,
Thanks for the effort and the Google writeup. During our external catalog
rework, we found much confusion between Java and Scala, and this Scala-free
roadmap should greatly mitigate that.
I'm wondering whether we can have a rule in the interim, while Java and Scala
coexist, that depen
Hi Timo,
Thanks for initiating this great discussion.
Currently, using SQL/TableAPI requires including many dependencies. In
particular, it is not necessary to introduce the specific implementation
dependencies that users do not care about. So I am glad to see your
proposal, and hope when we consid
Thanks Fabian,
Thanks a lot for your feedback, and for the very important and necessary
design reminders!
Yes, you are right! Spark displayed the specified grouping columns
before 1.3, but the grouping columns are implicitly passed in Spark 1.4 and
later. The reason for changing this behavior is that
Hi all,
@Shaoxuan, I think the lifecycle and access domain are both orthogonal to the
cache problem. Essentially, this may be the first time we plan to introduce
another storage mechanism other than the state. Maybe it’s better to first draw
a big picture and then concentrate on a specific part?
Gary Yao created FLINK-10992:
Summary: Jepsen: Do not use /tmp as HDFS Data Directory
Key: FLINK-10992
URL: https://issues.apache.org/jira/browse/FLINK-10992
Project: Flink
Issue Type: Bug
Konstantin Knauf created FLINK-10991:
Summary: Dockerfile in flink-container does not work with
RocksDBStatebackend
Key: FLINK-10991
URL: https://issues.apache.org/jira/browse/FLINK-10991
Project:
aitozi created FLINK-10990:
--
Summary: Pre-check timespan in meterview to avoid NAN
Key: FLINK-10990
URL: https://issues.apache.org/jira/browse/FLINK-10990
Project: Flink
Issue Type: Bug
Co
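FLINK-10990 above asks for a pre-check of the timespan so a meter's rate never becomes NaN. A minimal sketch of such a guard (illustrative names, not Flink's actual MeterView code):

```java
// Minimal sketch of the guard FLINK-10990 asks for. Names are illustrative,
// not Flink's MeterView: dividing an event count by a zero timespan yields
// NaN (0/0) or Infinity (n/0), so the timespan is checked first.
public class RateGuard {

    public static double ratePerSecond(long eventCount, long timeSpanMillis) {
        if (timeSpanMillis <= 0) {
            return 0.0; // pre-check avoids 0/0 -> NaN and n/0 -> Infinity
        }
        return eventCount * 1000.0 / timeSpanMillis;
    }

    public static void main(String[] args) {
        System.out.println(ratePerSecond(10, 2000)); // 5.0
        System.out.println(ratePerSecond(10, 0));    // 0.0, not Infinity
    }
}
```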
Till Rohrmann created FLINK-10989:
-
Summary: OrcRowInputFormat uses two different file systems
Key: FLINK-10989
URL: https://issues.apache.org/jira/browse/FLINK-10989
Project: Flink
Issue Typ
Scott Sue created FLINK-10988:
-
Summary: Improve debugging / visibility of job state
Key: FLINK-10988
URL: https://issues.apache.org/jira/browse/FLINK-10988
Project: Flink
Issue Type: Improvement
Relying on a callback to clean up the temp table is not very reliable.
There is no guarantee that it will be executed successfully. We may risk
leaks when that happens. I think that it's safer to have an association
between temp table and session id. So we can always clean up temp tables
which
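The association between temp tables and session IDs described above can be sketched as a small registry. This is a hypothetical plain-Java illustration, not Flink's implementation: when a session closes, every table registered under its ID is returned for dropping, without relying on a user callback having run.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the idea in the thread: track temp tables per
// session ID so that all of a session's tables can be cleaned up even if
// a user-provided callback never executes. Not Flink's actual code.
public class TempTableRegistry {

    private final Map<String, Set<String>> tablesBySession = new HashMap<>();

    public void register(String sessionId, String tableName) {
        tablesBySession.computeIfAbsent(sessionId, k -> new HashSet<>())
                       .add(tableName);
    }

    // Called when a session closes: returns every table to drop and
    // forgets the session, so a second close is a harmless no-op.
    public Set<String> closeSession(String sessionId) {
        Set<String> toDrop = tablesBySession.remove(sessionId);
        return toDrop == null ? Collections.emptySet() : toDrop;
    }

    public static void main(String[] args) {
        TempTableRegistry registry = new TempTableRegistry();
        registry.register("session-1", "tmp_a");
        registry.register("session-1", "tmp_b");
        System.out.println(registry.closeSession("session-1")); // both tables
    }
}
```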
Hi Timo, thanks for driving this! I think that this is a nice thing to do.
While we are doing this, can we also keep in mind that we want to
eventually have a TableAPI interface only module which users can take
dependency on, but without including any implementation details?
Xiaowei
On Thu, Nov 2
Hi Becket,
I think the problem is not with the split re-assignment, but with dynamic split
discovery. We do not always know beforehand the number of splits (for
example Kafka partition/topic discovery, but this can also happen in batch),
while the source parallelism is fixed/known before h
Hi all,
First of all, it is correct that the flatMap(Expression*) and
flatAggregate(Expression*) methods would mix scalar and table values.
This would be a new concept that is not present in the current API.
From my point of view, the semantics are quite clear, but I understand that
others are mo
Hi Jincheng,
#1) ok, got it.
#3)
> From my point of view, we can use
> `Expression`, and after the discussion decides to use Expression*, then
> improve it. In any case, we can use Expression, and there is an opportunity
> to move to Expression* later (for compatibility). If we use Expression* directly,
I think the oshi-core test dependency in flink-tests is ok because we don't
include the oshi-core binary in any of our published binaries (neither
the flink-tests_2.11-1.8-SNAPSHOT.jar nor the convenience tarballs contain
oshi-core classes). Linking against EPL 1.0 binaries should be ok.
Concernin
Till Rohrmann created FLINK-10987:
-
Summary: LICENSE and NOTICE files are not correct
Key: FLINK-10987
URL: https://issues.apache.org/jira/browse/FLINK-10987
Project: Flink
Issue Type: Bug
Gary Yao created FLINK-10986:
Summary: Jepsen: Deploy Kafka Broker
Key: FLINK-10986
URL: https://issues.apache.org/jira/browse/FLINK-10986
Project: Flink
Issue Type: New Feature
Compone
Gary Yao created FLINK-10985:
Summary: Enable multiple Job Submission in distributed Tests
Key: FLINK-10985
URL: https://issues.apache.org/jira/browse/FLINK-10985
Project: Flink
Issue Type: Bug
Hi Timo,
Thanks for writing up this document.
I like the new structure and agree to prioritize the porting of the
flink-table-common classes.
Since flink-table-runtime is (or should be) independent of the API and
planner modules, we could start porting these classes once the code is
split into the
I see, thanks Fabian!
Fabian Hueske wrote on Thu, Nov 22, 2018 at 6:10 PM:
> Yes, I think so.
> Currently, only PMC members are in the Jira Admin group.
>
> Best, Fabian
>
>
> On Thu, Nov 22, 2018 at 10:56 AM jincheng sun <
> sunjincheng...@gmail.com> wrote:
>
> > Hi Fabian,
> >
> > I have a question th
Yes, I think so.
Currently, only PMC members are in the Jira Admin group.
Best, Fabian
On Thu, Nov 22, 2018 at 10:56 AM jincheng sun <
sunjincheng...@gmail.com> wrote:
> Hi Fabian,
>
> I have a question: can only PMC members manage Contributor permissions?
>
> Thanks,
> Jincheng
>
>
> Fabian Hu
Chesnay Schepler created FLINK-10984:
Summary: Move flink-shaded-hadoop to flink-shaded
Key: FLINK-10984
URL: https://issues.apache.org/jira/browse/FLINK-10984
Project: Flink
Issue Type:
Hi Fabian,
Yes, timers are not only a difference between Table and DataStream, but
also a difference between DataStream and DataSet. We need to unify
batch and stream in Table, so the difference regarding timers needs to be
considered in depth. :)
Thanks, Jincheng
Fabian Hueske on Nov 15, 2018
Hi Fabian,
I have a question: can only PMC members manage Contributor permissions?
Thanks,
Jincheng
Fabian Hueske wrote on Thu, Nov 22, 2018 at 5:52 PM:
> Hi,
>
> I gave you contributor permissions.
> Looking forward to your contributions.
>
> Best, Fabian
>
> On Thu, Nov 22, 2018 at 4:26 AM Wei Zho
Hi everyone,
I would like to continue this discussion thread and convert the outcome
into a FLIP such that users and contributors know what to expect in the
upcoming releases.
I created a design document [1] that clarifies our motivation why we
want to do this, how a Maven module structure c
Hi,
I gave you contributor permissions.
Looking forward to your contributions.
Best, Fabian
On Thu, Nov 22, 2018 at 4:26 AM Wei Zhong wrote:
> Hi guys:
>
> Could somebody give me contributor permissions? My Jira username is:
> zhongwei.
>
> Thanks.
>
>
Hi Moiz,
What do you mean by exit pattern? Do you mean that you want an event to
belong to a single match? If so, I think you can achieve that with the
AfterMatchSkip.SKIP_PAST_LAST strategy [1].
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/libs/cep.html#after-match-skip-strategy
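For illustration, the effect of SKIP_PAST_LAST can be simulated in plain Java. This is not the Flink CEP API; a "match" here is simply any two consecutive events. The key behavior is that after each match, matching resumes past the match's last event, so no event belongs to two matches.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative simulation (not Flink CEP) of the SKIP_PAST_LAST semantics
// mentioned above: once a match completes, matching resumes after the
// match's last event, so events are never shared between matches.
// A "match" here is any two consecutive events.
public class SkipPastLastDemo {

    public static List<List<Integer>> matches(List<Integer> events) {
        List<List<Integer>> result = new ArrayList<>();
        int i = 0;
        while (i + 1 < events.size()) {
            result.add(Arrays.asList(events.get(i), events.get(i + 1)));
            i += 2; // SKIP_PAST_LAST: jump past the last matched event
        }
        return result;
    }

    public static void main(String[] args) {
        // Without skipping, event 2 would appear in both [1,2] and [2,3];
        // with SKIP_PAST_LAST each event is consumed at most once.
        System.out.println(matches(Arrays.asList(1, 2, 3, 4, 5)));
    }
}
```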
Hi,
I'd recommend posting questions about how to use Flink to the
u...@flink.apache.org mailing list.
The dev mailing list is focused on the development of Flink.
Thank you,
Fabian
On Wed, Nov 21, 2018 at 5:30 PM Durga Durga wrote:
> Folks,
>
>We've been having a tough time building
Hi Hequn,
Thanks for checking the test. +1, this is not a blocker for the release, and I'll
merge your PR.
Best,
Jincheng
Hequn Cheng wrote on Thu, Nov 22, 2018 at 4:35 PM:
> Hi,
>
> I'm trying to check if the source release is building properly, and found
> that NonHAQueryableStateFsBackendITCase failed when run `
Hi,
I'm trying to check if the source release is building properly, and found
that NonHAQueryableStateFsBackendITCase failed when running `mvn install`. This
should not be a release blocker; however, I created a PR [1] to make it more
stable.
Best,
Hequn
[1] https://issues.apache.org/jira/browse/FLINK